  • Poster presentation
  • Open access

An efficient and accurate solver for large, sparse neural networks

The mammalian brain has about 10^11 neurons and 10^14 synapses, with each neuron exhibiting complex intracellular dynamics. The huge number of structures and interactions underlying nervous system function thus makes modeling its behavior an extraordinary computational challenge. One strategy to reduce computation time in network simulations is to replace computationally expensive, stiff models for individual cells (such as the Hodgkin-Huxley equations and other conductance-based models) with integrate-and-fire models. Such models save time by not numerically resolving neural behavior during the action potential; instead, they simply detect the occurrence of an action potential and propagate its effects to postsynaptic targets appropriately. Thus, a complicated system of continuous ordinary differential equations is replaced with a simpler, but discontinuous, differential equation.
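To make this concrete, the sketch below shows the event-detection logic of a generic leaky integrate-and-fire neuron; the dynamics, parameter values, and function name are our illustrative placeholders, not the specific model used in this work.

```python
# A minimal leaky integrate-and-fire step (our illustrative parameter
# values, not the specific model used in this work): the action potential
# is never resolved numerically, only detected and handed to the network.
def lif_step(v, dt, i_syn, tau=20.0, v_rest=-65.0, v_th=-50.0, v_reset=-65.0):
    """Advance the membrane potential v (mV) by one Euler step of dt (ms)."""
    v = v + dt * (-(v - v_rest) + i_syn) / tau   # smooth subthreshold dynamics
    if v >= v_th:                                # threshold crossing = spike
        return v_reset, True   # reset; the caller propagates the spike
    return v, False

v, spiked = -60.0, False
while not spiked:              # drive the cell until it fires once
    v, spiked = lif_step(v, dt=0.1, i_syn=20.0)
```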

However, accurate existing methods for integrating discontinuous ordinary differential equations (ODEs) scale poorly with problem size, requiring O(N^2) time steps for a system with N variables. The underlying challenge is that discontinuities introduce O(dt) errors to conventional time integration schemes, thus requiring very small time steps in the vicinity of a discontinuity [1].
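A toy calculation, our own illustration rather than anything from this work, makes the O(dt) issue visible: when a constant-drive neuron crosses threshold inside a step, locating the spike only at the step boundary loses accuracy that within-step interpolation, in the spirit of [1], recovers.

```python
# Our own toy illustration (constant drive a, threshold v_th; not a model
# from this work). Taking the spike time to be the end of the step that
# crossed threshold leaves an error that shrinks only linearly with dt;
# interpolating the crossing within the step, in the spirit of [1], is
# exact here because the dynamics are linear, and gains one order of
# accuracy for smooth nonlinear dynamics.
a, v_th = 1.1, 1.0
t_exact = v_th / a                    # true spike time starting from v = 0

for dt in (0.1, 0.05, 0.025):
    v, t = 0.0, 0.0
    while v < v_th:
        v += dt * a                   # forward Euler (exact for constant drive)
        t += dt
    err_naive = abs(t - t_exact)      # O(dt) spike-time error
    t_interp = t - (v - v_th) / a     # back up along the linear trajectory
    print(f"dt={dt:.3f}  naive={err_naive:.4f}  "
          f"interpolated={abs(t_interp - t_exact):.1e}")
```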

In this work, we propose a method to reduce this computational load by embedding local network "repairs" within a global time-stepping scheme. In addition, high-order accuracy can be achieved without requiring the global time step to be bounded above by the minimum communication delay, as is currently required in the hybrid time-driven/event-driven scheme used by NEST [2]; this allows more powerful exploitation of exact subthreshold [3, 4] and quadrature-based [5] integration schemes. If the underlying network is sufficiently sparse, the algorithm, Adaptive Localized Replay (ALR), attains O(N) time complexity (Figure 1A). We apply our method to a network of integrate-and-fire neurons that simulates the dynamics of a small patch of primary visual cortex (Figure 1B) [5, 6].
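The abstract does not spell out ALR's inner loop; the sketch below is only one plausible reading of the "local repair" idea, in which every neuron advances one global step and a spike triggers re-integration of just the spiker's postsynaptic targets. All names, dynamics, and simplifications here (constant drive, instantaneous excitatory pulses, at most one spike per neuron per step) are our own illustrative assumptions, not the authors' implementation.

```python
import heapq
import numpy as np

rng = np.random.default_rng(0)
N, dt, w, v_th = 200, 1.0, 0.3, 1.0
drive = rng.uniform(0.2, 1.4, N)               # per-neuron constant drive (ours)
targets = [rng.choice(N, 2, replace=False).tolist() for _ in range(N)]  # sparse

def first_crossing(v0, a, pulses, t0, t1):
    """First time v(t) = v0 + a*(t - t0) + (pulses arriving before t) reaches
    v_th on [t0, t1], or None. The trajectory is piecewise linear, so each
    segment between pulse arrivals can be solved exactly."""
    t, v = t0, v0
    for tp, wp in sorted(pulses) + [(t1, 0.0)]:
        if v + a * (min(tp, t1) - t) >= v_th:  # crossing inside this segment
            return t + (v_th - v) / a
        if tp >= t1:
            break
        v += a * (tp - t) + wp                 # jump at the pulse arrival
        t = tp
        if v >= v_th:                          # the pulse itself crosses
            return t
    return None

def global_step(v, t):
    """Advance all N neurons over [t, t + dt]. A spike triggers re-integration
    ("replay") of only its postsynaptic targets, rather than forcing a
    smaller global step on the whole network."""
    pulses = [[] for _ in range(N)]
    fired, heap = set(), []
    for i in range(N):                         # tentative spike predictions
        ts = first_crossing(v[i], drive[i], [], t, t + dt)
        if ts is not None:
            heapq.heappush(heap, (ts, i))
    while heap:
        ts, i = heapq.heappop(heap)
        if i in fired:
            continue                           # duplicate entry for i
        if first_crossing(v[i], drive[i], pulses[i], t, t + dt) != ts:
            continue                           # stale, superseded prediction
        fired.add(i)
        for j in targets[i]:                   # local repair: O(fan-out) work
            if j not in fired:
                pulses[j].append((ts, w))
                tj = first_crossing(v[j], drive[j], pulses[j], t, t + dt)
                if tj is not None:
                    heapq.heappush(heap, (tj, j))
    for i in range(N):                         # commit end-of-step state
        if i in fired:
            v[i] = 0.0                         # reset after spiking
        else:
            v[i] = v[i] + drive[i] * dt + sum(wp for _, wp in pulses[i])
    return len(fired)

v = rng.uniform(0.0, 1.0, N)
print("spikes in one global step:", global_step(v, 0.0))
```

In this toy, each committed spike touches only its fan-out, so per-step work scales with the spike count times the average out-degree; for sparse graphs that is O(N) per step, consistent with the scaling claimed in Figure 1A.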

Figure 1

(A) Comparison of runtime for the fully event-driven ("Full Replay") and ALR methods, for integrate-and-fire networks of various system sizes N. (B) Raster plot of a 32 × 32 grid of V1 model neurons responding to a drifting grating stimulus. Inset: schematic of a subset of the network, with selected synapses identified and shaded by strength. Red: AMPA; orange: NMDA; blue: fast GABA.

References

  1. Shelley MJ, Tao L: Efficient and accurate time-stepping schemes for integrate-and-fire neuronal networks. J Comp Neurosci. 2001, 11 (2): 111-119.

  2. Gewaltig MO, Diesmann M: NEST (NEural Simulation Tool). Scholarpedia. 2007, 2 (4): 1430.

  3. Brette R: Exact simulation of integrate-and-fire models with synaptic conductances. Neural Computation. 2006, 18 (8): 2004-2027.

  4. Morrison A, Straube S, Plesser HE, Diesmann M: Exact subthreshold integration with continuous spike times in discrete-time neural network simulations. Neural Computation. 2007, 19 (1): 47-79.

  5. Rangan AV, Cai D: Fast numerical methods for simulating large-scale integrate-and-fire neuronal networks. J Comp Neurosci. 2007, 22 (1): 81-100.

  6. Cai D, Rangan AV, McLaughlin DW: Architectural and synaptic mechanisms underlying coherent spiking activity in V1. Proceedings of the National Academy of Sciences. 2005, 102 (16): 5868-5873.

Acknowledgements

This work was supported by the SMU Hamilton Undergraduate Research Scholars Program (RS).

Author information

Corresponding author

Correspondence to Andrea K Barreiro.

Rights and permissions

Open Access  This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.

The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

To view a copy of this licence, visit https://creativecommons.org/licenses/by/4.0/.

The Creative Commons Public Domain Dedication waiver (https://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Stolyarov, R.M., Barreiro, A.K. & Norris, S. An efficient and accurate solver for large, sparse neural networks. BMC Neurosci 16 (Suppl 1), P179 (2015). https://doi.org/10.1186/1471-2202-16-S1-P179
