- Poster presentation
- Open access
Slow points and adiabatic fixed points in recurrent neural networks
BMC Neuroscience volume 16, Article number: P88 (2015)
The time scales of cognitive information processing in the brain range, at least, from milliseconds (the time scale of the action potential) to many seconds (the time scale of short-term memory), spanning several orders of magnitude. In this context, the slow dynamical components can be regarded as background processes that adiabatically modulate the parameters governing the fast neural activity. For recurrent neural networks, the slow processes then adiabatically reshape the attractor landscape, including the adiabatic fixed points, possibly inducing both second-order bifurcations of the steady-state neural activity and first-order catastrophes.
In this contribution we investigate the slow adaption of the attractor landscape of exemplary small recurrent neural networks consisting of continuous-time point neurons [1]. The state of each of these integrate-and-fire neurons is fully determined by its membrane potential and by two adapting internal parameters, the gain and the threshold [2], with the adaption time scale 1/ε being substantially larger than the time scale of the primary neural activity. We point out that the neural dynamics is shaped not only by the adiabatic fixed points of the network, but also by the points in phase space where the flow slows down considerably, called slow points or attractor ruins [3].
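As a rough illustration of such a fast-slow setup, the following Python sketch integrates three rate neurons whose gains and thresholds adapt on a much slower time scale 1/ε. The rate equations, the simplified homeostatic adaptation rule, and all parameter values are assumptions chosen for illustration (loosely modeled on [1, 2]), not the authors' actual model.

```python
import numpy as np
from scipy.integrate import solve_ivp

N = 3                      # three-neuron network, as in the abstract
EPS = 1e-3                 # slow adaption rate (1/EPS >> fast time scale)
GAMMA = 1.0                # relaxation rate of the fast membrane potentials
R_TARGET = 0.3             # target mean activity for the (simplified) adaption
W = np.array([[ 0.0,  1.2, -0.8],
              [-0.8,  0.0,  1.2],
              [ 1.2, -0.8,  0.0]])   # arbitrary recurrent weights

def sigma(x, a, b):
    """Sigmoidal transfer function with gain a and threshold b."""
    return 1.0 / (1.0 + np.exp(-a * (x - b)))

def rhs(t, state):
    """Fast membrane potentials x plus slowly adapting gains a and thresholds b."""
    x, a, b = state[:N], state[N:2 * N], state[2 * N:]
    y = sigma(x, a, b)
    dx = -GAMMA * x + W @ y                    # fast recurrent dynamics
    # Simplified homeostatic stand-in for the intrinsic-plasticity rule of [2]:
    # gains and thresholds drift slowly, pushing the output rate toward R_TARGET.
    db = EPS * (y - R_TARGET)
    da = EPS * (R_TARGET - y) * (x - b)
    return np.concatenate([dx, da, db])

state0 = np.concatenate([0.1 * np.random.randn(N),   # x(0), membrane potentials
                         np.full(N, 4.0),             # a(0), gains
                         np.zeros(N)])                 # b(0), thresholds
sol = solve_ivp(rhs, (0.0, 2000.0), state0, max_step=0.5)
print("final gains     :", sol.y[N:2 * N, -1])
print("final thresholds:", sol.y[2 * N:, -1])
```

Because ε is small, the gains and thresholds can be treated as frozen parameters on the fast time scale; this separation is what makes the adiabatic picture of the attractor landscape meaningful.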
We rigorously examine the metadynamics of the attractor landscape for a three-neuron system, observing five different phases (see Fig. 1) for different values of the adaption rate, four of which can be distinguished by the number and stability of the adiabatic fixed points. Three of the observed transitions are of second order. The remaining transition, to the phase with the lowest adaption rates, is of first order and shows hysteresis. Remarkably, this transition occurs at very low adaption rates of ε ~ 10⁻⁵ and is characterized by a higher-order catastrophe.
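In such a fast-slow setting, adiabatic fixed points and slow points can be located numerically by freezing the slow variables and inspecting the fast flow F(x): fixed points are zeros of F, slow points are non-zero local minima of |F|², and stability follows from the eigenvalues of the Jacobian. The sketch below, reusing the illustrative rate model from above, is one generic way of doing this; it is not the analysis pipeline used by the authors.

```python
import numpy as np
from scipy.optimize import minimize

def fast_flow(x, a, b, W, gamma=1.0):
    """Fast flow F(x) with the slow gains a and thresholds b held frozen."""
    y = 1.0 / (1.0 + np.exp(-a * (x - b)))
    return -gamma * x + W @ y

def speed_sq(x, a, b, W):
    """Squared speed |F(x)|^2; its zeros are fixed points, its other minima slow points."""
    f = fast_flow(x, a, b, W)
    return float(f @ f)

def jacobian(x, a, b, W, h=1e-6):
    """Numerical Jacobian of the fast flow, for classifying fixed-point stability."""
    n = len(x)
    J = np.zeros((n, n))
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = h
        J[:, j] = (fast_flow(x + dx, a, b, W) - fast_flow(x - dx, a, b, W)) / (2 * h)
    return J

# Frozen slow parameters and weights (illustrative values, as in the sketch above).
W = np.array([[0.0, 1.2, -0.8], [-0.8, 0.0, 1.2], [1.2, -0.8, 0.0]])
a = np.full(3, 4.0)
b = np.zeros(3)

# Search from several starting points; near-zero speed -> adiabatic fixed point,
# a nonzero local minimum of the speed -> slow point (attractor ruin).
for x0 in np.random.uniform(-1.0, 1.0, size=(10, 3)):
    res = minimize(speed_sq, x0, args=(a, b, W), method="Nelder-Mead")
    if res.fun < 1e-12:
        eigs = np.linalg.eigvals(jacobian(res.x, a, b, W))
        kind = "stable" if np.all(eigs.real < 0) else "unstable"
        print(f"fixed point at {np.round(res.x, 3)} ({kind})")
    elif res.fun < 1e-2:
        print(f"slow point candidate at {np.round(res.x, 3)}, speed^2 = {res.fun:.2e}")
```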
We conclude that even relatively simple recurrent networks may show highly non-trivial adapting attractor landscapes, and that the study of attractor metadynamics in the brain may be important for a further understanding of decision processes and dynamical memory recall.
References
Linkerhand M, Gros C: Generating functionals for autonomous latching dynamics in attractor relict networks. Sci Rep. 2013, 3.
Triesch J: A gradient rule for the plasticity of a neuron's intrinsic excitability. Proceedings of ICANN. 2005, Springer, 65-70.
Gros C, Linkerhand M, Walther V: Attractor metadynamics in adapting neural networks. arXiv preprint. 2014, arXiv:1404.5417.
Acknowledgements
This work benefited from discussions with Bulcsú Sándor. The research was supported by funds of the DFG and the Studienstiftung des deutschen Volkes.
About this article
Cite this article
Wernecke, H., Gros, C. Slow points and adiabatic fixed points in recurrent neural networks. BMC Neurosci 16 (Suppl 1), P88 (2015). https://doi.org/10.1186/1471-2202-16-S1-P88