- Poster presentation
- Open Access
Back-engineering of spiking neural networks parameters
BMC Neuroscience volume 10, Article number: P289 (2009)
We consider the deterministic evolution of a time-discretized spiking network of neurons with delayed connection weights, based on a generalized integrate-and-fire (gIF) neuron model with synapses [1]. The purpose is to study a class of algorithmic methods able to calculate the proper parameters (weights and delayed weights) allowing the reproduction of a spike train produced by an unknown neural network.
The problem is known to be NP-hard when delays are to be calculated. We propose here a reformulation, now expressed as a Linear-Programming (LP) problem, thus allowing an efficient resolution. Clearly, this does not change the maximal complexity of the problem, but the practical complexity is dramatically reduced at the implementation level. More precisely, we make explicit the fact that the back-engineering of a spike train (i.e., finding a set of parameters, given a set of initial conditions) is a Linear (L) problem if the membrane potentials are observed, and an LP problem if only spike times are observed, for a gIF model. Numerical robustness is discussed. We also explain how it is the use of a generalized IF neuron model, instead of a leaky IF model, that allows this algorithm to be derived. Furthermore, we point out that the L or LP adjustment mechanism is distributed and has the same architecture as a "Hebbian" rule. A step further, this paradigm generalizes easily to the design of input-output spike train transformations.
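The LP formulation can be illustrated on a toy example. The sketch below is ours, not the authors' implementation (which relies on GLPK): it omits the gIF leak term, uses SciPy's `linprog` as the solver, and the helper names (`delayed`, `recover_row`) are hypothetical. Each observed spike of neuron k at time t yields the linear inequality w·x ≥ θ on that neuron's delayed weights, and each silence yields w·x ≤ θ − ε, so any feasible point reproduces the train exactly.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, D, T = 3, 2, 40           # neurons, maximal delay, time steps
theta, eps = 1.0, 1e-6       # firing threshold, strict-inequality margin

def delayed(S, t):
    """Presynaptic activity at delays 1..D, flattened neuron by neuron."""
    return S[:, t - D:t][:, ::-1].ravel()

def simulate(W, init):
    """Deterministic discrete-time evolution: spike iff potential >= theta."""
    S = np.zeros((N, T), dtype=int)
    S[:, :D] = init
    for t in range(D, T):
        S[:, t] = (W @ delayed(S, t) >= theta).astype(int)
    return S

# A hidden network produces the target spike train Z (toy data).
W_true = rng.normal(0.0, 0.7, size=(N, N * D))
init = rng.integers(0, 2, size=(N, D))
Z = simulate(W_true, init)

def recover_row(k):
    """Back-engineer neuron k's delayed weights from spike times only (LP)."""
    A_ub, b_ub = [], []
    for t in range(D, T):
        x = delayed(Z, t)
        if Z[k, t]:
            A_ub.append(-x); b_ub.append(-theta)        # -w.x <= -theta   (spike)
        else:
            A_ub.append(x);  b_ub.append(theta - eps)   #  w.x <= theta-eps (silence)
    res = linprog(np.zeros(N * D), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(None, None)] * (N * D))
    assert res.status == 0, "LP infeasible (singular case)"
    return res.x

W_hat = np.vstack([recover_row(k) for k in range(N)])
print((simulate(W_hat, init) == Z).all())   # the train is reproduced exactly
```

Note that the objective is zero: this is a pure feasibility problem, and any feasible weight vector works, which hints at the degeneracy of the parameter space discussed below.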
Numerical implementations are proposed in order to verify that it is always possible to simulate an expected spike train. The results obtained show that this is true, except for singular cases. In a first experiment, we consider the linear problem and use the singular value decomposition (SVD) to obtain a solution, allowing a better understanding of the geometry of the problem. When the aim is to find the proper parameters from the observation of spikes only, we consider the related LP problem, and the numerical solutions are derived thanks to the well-established improved simplex method, as implemented in the GLPK library. Several variants and generalizations are carefully discussed, showing the versatility of the method.
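For the linear (L) case, where membrane potentials are observed, each potential is a linear function of the delayed weights, so the weights solve an ordinary linear system. A minimal sketch of the SVD-based resolution on synthetic data (ours, not the authors' code; a leak-free toy model):

```python
import numpy as np

rng = np.random.default_rng(1)
N, D, T = 3, 2, 60
w_true = rng.normal(size=N * D)          # delayed weights of one target neuron

# Rows of X hold the delayed presynaptic activity; V are observed potentials.
X = rng.integers(0, 2, size=(T, N * D)).astype(float)
V = X @ w_true

# Pseudo-inverse solution via SVD; thresholding small singular values exposes
# the degenerate directions (the "singular cases") of the problem's geometry.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
tol = s.max() * 1e-10
s_inv = np.where(s > tol, 1.0 / s, 0.0)
w_hat = Vt.T @ (s_inv * (U.T @ V))

print(np.allclose(X @ w_hat, V))
```

Since the observed potentials lie in the column space of X, the recovered weights reproduce them exactly; null-space directions of X parameterize the set of equivalent solutions.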
Learning the parameters of a neural network model is a complex issue. In a biological context, this learning mechanism is mainly related to synaptic weight plasticity and, as far as spiking neural networks are concerned, to STDP [2]. In the present study, the point of view is quite different, since we consider supervised learning in order to implement the previous capabilities. To what extent we can "back-engineer" the neural network parameters in order to constrain the neural network activity is the key question addressed here.
1. Cessac B, Viéville T: On dynamics of integrate-and-fire neural networks with adaptive conductances. Front Comput Neurosci. 2008, 2:2. doi:10.3389/neuro.10.002.2008.
2. Bohte SM, Mozer MC: Reducing the variability of neural responses: a computational theory of spike-timing-dependent plasticity. Neural Computation. 2007, 19:371-403. doi:10.1162/neco.2007.19.2.371.
3. Baudot P: Nature is the code: high temporal precision and low noise in V1. PhD thesis. 2007.
Partially supported by the ANR MAPS & the MACCAC ARC projects.
Cite this article
Rostro-Gonzalez, H., Cessac, B., Vasquez, J.C. et al. Back-engineering of spiking neural networks parameters. BMC Neurosci 10, P289 (2009). https://doi.org/10.1186/1471-2202-10-S1-P289
- Neural Network
- Singular Value Decomposition
- Spike Train
- Neuron Model
- Synaptic Weight