- Poster presentation
- Open Access
Slow points and adiabatic fixed points in recurrent neural networks
© Wernecke and Gros 2015
- Published: 18 December 2015
Keywords: Neural Activity, Recurrent Neural Network, Internal Parameter, Background Process, Memory Recall
The time scales for cognitive information processing in the brain range, at least, from milliseconds (the time scale of the action potential) to many seconds (the time scale of short-term memory), spanning several orders of magnitude. In this context, the slow dynamical components can be regarded as background processes that adiabatically modulate the parameters governing the fast neural activity. For recurrent neural networks, the slow processes then adiabatically reshape the attractor landscape, including the adiabatic fixed points, possibly inducing both second-order bifurcations in the steady-state neural activity and first-order catastrophes.
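The time-scale separation described above can be written generically as a fast-slow dynamical system; the notation below is a standard sketch, not taken from the abstract itself:

```latex
\begin{aligned}
\dot{x}      &= f(x,\theta),                  &&\text{fast neural activity}\\
\dot{\theta} &= \varepsilon\, g(x,\theta),    &&\varepsilon \ll 1 \quad\text{(slow background process)}
\end{aligned}
```

For fixed $\theta$ the adiabatic fixed points are the solutions $x^{*}(\theta)$ of $f(x^{*},\theta)=0$; as $\theta$ drifts slowly, $x^{*}(\theta)$ traces out the changing attractor landscape, and bifurcations occur where solutions appear, merge, or lose stability.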
In this contribution we investigate the slow adaptation of the attractor landscape of exemplary small recurrent neural networks consisting of continuous-time point neurons. The state of one of these integrate-and-fire neurons is fully determined by its membrane potential and two adapting internal parameters, the gain and the threshold, with the time scale of adaptation 1/ε being substantially larger than the time scale of the primary neural activity. We point out that not only the adiabatic fixed points of the network are important for shaping the neural dynamics, but also the points in phase space where the flow slows down considerably (called slow points or attractor ruins).
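A minimal numerical sketch of such a network is given below. The abstract does not specify the neuron model or the adaptation rules, so the sigmoidal rate dynamics, the simple homeostatic threshold drift, and all parameter values here are illustrative assumptions; the point is only to show the two time scales (fast membrane dynamics, slow parameter adaptation with rate ε) and how slow points can be located by monitoring the flow speed |dx/dt|.

```python
import numpy as np

def simulate(w, eps=0.01, mu=0.3, steps=5000, dt=0.1, seed=0):
    """Euler integration of a small continuous-time rate network with
    slowly adapting thresholds (adaptation time scale 1/eps >> 1).

    All model choices (sigmoid transfer, homeostatic threshold rule,
    fixed gain) are stand-ins, not the authors' equations.
    """
    rng = np.random.default_rng(seed)
    n = w.shape[0]
    x = rng.standard_normal(n)   # membrane potentials (fast variables)
    b = np.zeros(n)              # thresholds (slow internal parameters)
    a = 4.0                      # gain, kept fixed for simplicity
    speeds = []
    for _ in range(steps):
        y = 1.0 / (1.0 + np.exp(-a * (x - b)))  # firing rates
        dx = -x + w @ y                          # fast neural dynamics
        db = eps * (y - mu)                      # slow homeostatic drift
        x += dt * dx
        b += dt * db
        speeds.append(np.linalg.norm(dx))        # flow speed |dx/dt|
    return x, b, np.array(speeds)

# two neurons with self-excitation and mutual inhibition (illustrative)
w = np.array([[ 4.0, -6.0],
              [-6.0,  4.0]])
x, b, speeds = simulate(w)
# stretches where `speeds` stays small but nonzero mark slow points
# (attractor ruins); near-zero stretches mark adiabatic fixed points
```

Because the thresholds keep drifting while the activity sits near an adiabatic fixed point, the fixed point itself slowly moves or disappears, which is exactly the attractor metadynamics discussed in the text.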
We conclude that even relatively simple recurrent networks may show highly non-trivial adapting attractor landscapes, and that the study of attractor metadynamics in the brain may be important for a further understanding of decision processes and dynamical memory recall.
This work benefited from discussions with Bulcsú Sándor. The research was supported by funds from the DFG and the Studienstiftung d. dt. Volkes.
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.