- Poster presentation
- Open Access
More flexibility for code generation with GeNN v2.1
© Nowotny et al. 2015
- Published: 18 December 2015
- Graphics Processing Unit
- Neuronal Network
- Application Programming Interface
- Code Snippet
- Neural Network Simulation
GeNN (GPU enhanced Neuronal Networks) [1, 2] is a software framework designed to facilitate the use of GPUs (Graphics Processing Units) for the simulation of spiking neuronal networks. It is built on top of the CUDA (Compute Unified Device Architecture) application programming interface provided by NVIDIA Corporation [3] and is based entirely on code generation: users provide a compact description of a spiking neuronal network model, and GeNN generates the CUDA and C++ code to simulate it, taking into account the specifics of the GPU hardware detected at compile time.
In this contribution we describe novel work on GeNN that has transformed it into a yet more flexible tool for GPU-accelerated simulation. The main innovation is the replacement of the previous fixed templates for synapse dynamics and learning models with user-definable code snippets, allowing virtually every dynamic element of a neural network simulation to be redefined. This transition has also enabled the completion of the Brian2-to-GeNN and SpineML-to-GeNN interfaces [4].
Summary of code slots available in GeNN for user-defined models

| Code slot | Deployment and function |
| --- | --- |
| `simCode` (neuron model) | Main time step update of the neuron dynamics |
| `thresholdConditionCode` | A Boolean expression defining when spikes occur, checked every time step |
| `resetCode` | The code that defines the change in neuron variables applied when a spike occurs |
| `simCode` (weight update model) | Code that describes the synapse update after a pre-synaptic spike |
| `simCodeEvnt` | Code that describes the synapse update after a pre-synaptic spike event |
| `simLearnPost` | Code for the synaptic update triggered by a post-synaptic spike |
| `evntThreshold` | A Boolean expression that defines synaptic events |
| `synapseDynamics` | Update code for internal synapse dynamics, applied every time step independently of spiking |
| `postSynToCurrent` | Code that describes the transformation of synaptic variables into a post-synaptic current |
| `postSynDecay` | Code that describes the shared dynamics of the summed synaptic activation, typically decay |
Other improvements in GeNN 2.1 include a more accurate CUDA block size estimation algorithm, access to pre- and post-synaptic neuron variables in synaptic models, and a number of bug fixes.
GeNN has reached a level of stability at which it should be of increasing use to the wider computational neuroscience community, in particular now that its interfaces to other simulators are complete.
This work was supported by the EPSRC (Green Brain Project, grant number EP/J019690/1) and a Royal Academy of Engineering/Leverhulme Trust Fellowship.
- Nowotny T: Flexible neuronal network simulation framework using code generation for NVidia® CUDA™. BMC Neuroscience. 2011, 12 (Suppl 1): P239.
- Yavuz E, Turner J, Nowotny T: Simulating spiking neural networks on massively parallel graphical processing units using a code generation approach with GeNN. BMC Neuroscience. 2014, 15 (Suppl 1): O1.
- CUDA. [http://www.nvidia.com/object/cuda_home_new.html], accessed 2015-02-25.
- Nowotny T, Cope AJ, Yavuz E, Stimberg M, Goodman DFM, Marshall J, Gurney K: SpineML and Brian 2.0 interfaces for using GPU enhanced Neuronal Networks (GeNN). BMC Neuroscience. 2014, 15 (Suppl 1): P148.
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.