
Parallelizing large networks using NEURON-Python

Research on brain organization has become increasingly dependent on large multiscale neuronal network simulations. Here we describe current usage of the NEURON simulator with MPI (message passing interface) in Python for simulation of networks in the high performance computing (HPC) parallel computing environment [1-4]. The mixed ordinary differential equation (ODE) and event-driven nature of neuronal simulations offers advantages for parallelization by allowing each node to work independently for a period equivalent to the minimal synaptic delay before exchanging queue information with other nodes, obviating the need to exchange information at every time step.

NEURON's ParallelContext provides access to several of the important general collective MPI calls, as well as calls adapted from prior usage of the LINDA package, now reimplemented under MPI. From Python, after from neuron import h (where h provides access to NEURON simulation objects), a ParallelContext is created using pc = h.ParallelContext(); it permits the periodic transfer of spike information via queue exchanges.
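As a sketch of the technique described above (assuming a NEURON build with MPI support, launched under mpiexec; cell creation and connection are elided), a minimal parallel run might look like:

```python
# Minimal parallel-run skeleton for NEURON + MPI (a sketch, not the
# abstract's own code; requires NEURON built with MPI support).
from neuron import h

h.nrnmpi_init()            # initialize MPI (available in recent NEURON versions)
pc = h.ParallelContext()
rank, nhost = int(pc.id()), int(pc.nhost())

# ... create this rank's cells, register them with pc.set_gid2node(gid, rank),
# ... and wire the network with pc.gid_connect(src_gid, target_synapse) ...

pc.set_maxstep(10)         # actual step bound is min(10 ms, minimal NetCon delay)
h.stdinit()
pc.psolve(100)             # integrate to tstop = 100 ms; spike queues are
                           # exchanged only once per minimal-delay interval
pc.barrier()               # wait for all ranks before exiting
h.quit()
```

Because pc.psolve exchanges spike queues only at minimal-delay boundaries, communication cost grows with the number of delay intervals rather than the number of time steps.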

Pseudo-random streams are needed to set connectivity, delays, and weights that are not fully defined by experimental studies. These streams are kept consistent regardless of the number of nodes being used, so that simulations are identical across node counts. To achieve this, randomizers are established for particular purposes using NEURON's h.Random().Random123() [5]. The key to reproducibility is to define each randomizer according to: 1. a particular usage; 2. a particular cell (based on a global identifier, or gid); 3. a particular run, based on a run identifier runid. For example, a delay randomizer is seeded with randomdel.Random123(id32('randomdel'), self.gid, 0), where id32() provides a 32-bit hash for a name: def id32(obj): return hash(obj)&0xffffffff. The run is then selected with for r in randomizers: r.Random123_globalindex(runid).
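One caveat worth noting: in Python 3 the built-in hash() of strings is randomized per process unless PYTHONHASHSEED is fixed, which would defeat run-to-run reproducibility. A stable variant can be sketched with hashlib (the function name follows the abstract; this particular implementation is ours), keeping the same 32-bit contract:

```python
import hashlib

def id32(name):
    """Stable 32-bit hash of a name, for seeding a Random123 stream.

    Python 3 randomizes hash() of strings per process, so the abstract's
    hash(obj) & 0xffffffff is only reproducible if PYTHONHASHSEED is fixed;
    hashing the bytes explicitly gives the same value in every run.
    """
    digest = hashlib.md5(name.encode()).digest()
    return int.from_bytes(digest[:4], "little")

# Intended NEURON usage (sketch, per the scheme described above):
#   randomdel = h.Random()
#   randomdel.Random123(id32('randomdel'), self.gid, 0)
#   for r in randomizers:
#       r.Random123_globalindex(runid)
```

Because the seed triple (usage, gid, stream index) is independent of how cells are distributed across nodes, the same gid draws the same values on 1 node or 100.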

Data saving must initially be managed at the node of origin and then combined across nodes to be accessible for analysis or viewing. Given that file saving may occur incrementally during simulation from different nodes on different local filesystems, file management becomes important. There are several ways to handle data saving, whose benefits and applicability will be presented. Spike recordings are created as vectors on a per-node basis and later consolidated. Other state variables may also be saved at the same time, or may be recreated later by re-running individual cells with identical stimulation using NEURON's PatternStim.
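The consolidation step can be sketched in plain Python. Here each rank is assumed to have recorded its own (time, gid) spike pairs (e.g. with pc.spike_record(-1, tvec, idvec)); the function and variable names below are illustrative, not from the original abstract:

```python
def merge_spikes(per_rank_spikes):
    """Merge per-rank spike records into one time-sorted list.

    per_rank_spikes: one list of (time, gid) pairs per rank, e.g. the
    contents of the tvec/idvec vectors filled by pc.spike_record(-1, ...).
    """
    merged = [pair for rank_spikes in per_rank_spikes for pair in rank_spikes]
    merged.sort()  # ascending by time; ties broken by gid
    return merged

# Example: two ranks' recordings combined into one raster-ready record.
raster = merge_spikes([[(1.0, 3), (2.5, 3)], [(0.5, 7)]])
# raster == [(0.5, 7), (1.0, 3), (2.5, 3)]
```

The same merge works whether the per-rank records arrive via MPI gather calls at the end of the run or via per-node files read back after the simulation.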

We note that ParallelContext in NEURON permits the development of hybrid networks using various cell types: event-driven cells, integrate-and-fire cells, and multicompartment cells, as well as complex cells with calculation of the internal chemical milieu. Load balancing in this hybrid setting is a crucial issue, particularly when some cells are computationally expensive due to the inclusion of reaction-diffusion mechanisms used to build multiscale models from molecule to network.
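As an illustration of the load-balancing problem (a generic greedy longest-processing-time heuristic, not NEURON's own LoadBalance machinery; names and cost units are hypothetical), cells with per-cell cost estimates can be distributed so that ranks finish at roughly the same time:

```python
import heapq

def balance(costs, nhost):
    """Greedy longest-processing-time assignment of cells to hosts.

    costs: estimated computational cost per cell, indexed by gid.
    Returns a list giving the host index chosen for each gid.
    Places the most expensive cells first, each on the currently
    least-loaded host, so that a few heavy reaction-diffusion cells
    do not pile up on one rank.
    """
    heap = [(0.0, host) for host in range(nhost)]  # (current load, host)
    heapq.heapify(heap)
    assignment = [None] * len(costs)
    for gid in sorted(range(len(costs)), key=lambda g: -costs[g]):
        load, host = heapq.heappop(heap)
        assignment[gid] = host
        heapq.heappush(heap, (load + costs[gid], host))
    return assignment

# One heavy cell (cost 5) is isolated; five light cells share the other host.
hosts = balance([5, 1, 1, 1, 1, 1], 2)
# hosts == [0, 1, 1, 1, 1, 1]
```

In a hybrid network the cost estimates themselves might come from per-cell complexity measures (e.g. compartment counts or timed trial runs), which is the harder part of the problem in practice.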


  1. Carnevale NT, Hines ML: The NEURON Book. 2006, Cambridge University Press, New York


  2. Hines ML, Carnevale NT: NEURON: a tool for neuroscientists. Neuroscientist. 2001, 7: 123-135.


  3. Hines ML, Carnevale NT: Translating network models to parallel hardware in NEURON. J Neurosci Methods. 2008, 169: 425-455.


  4. Migliore M, Cannia C, Lytton WW, Hines ML: Parallel network simulations with NEURON. J Comput Neurosci. 2006, 21: 119-129.


  5. Salmon JK, Moraes MA, Dror RO, Shaw DE: Parallel random numbers: as easy as 1,2,3. Proceedings of 2011 International Conference for High Performance Computing, Networking, Storage and Analysis, November 12-18, 2011, Seattle, Washington. 2011


Corresponding author

Correspondence to Alexandra H Seidenstein.

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. The Creative Commons Public Domain Dedication waiver applies to the data made available in this article, unless otherwise stated.


About this article

Cite this article

Seidenstein, A.H., McDougal, R.A., Hines, M.L. et al. Parallelizing large networks using NEURON-Python. BMC Neurosci 16 (Suppl 1), P151 (2015).


Keywords

  • Message Passing Interface
  • Synaptic Delay
  • File Management
  • Parallel Computing Environment
  • Global Identifier