- Poster presentation
- Open access
Local structure supports learning of deterministic behavior in recurrent neural networks
BMC Neuroscience volume 16, Article number: P195 (2015)
Many aspects of behavior, such as language, navigation, or logical reasoning, require strongly deterministic and sequential processing of sensory and internal signals. This type of computation can be modeled conveniently in the framework of finite automata.
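As a concrete illustration of this framework, a deterministic finite automaton can be written as a transition table that maps each (state, input symbol) pair to exactly one successor state. The toy automaton below is a minimal, hypothetical example for illustration only; it is not one of the automata studied here.

```python
# A minimal, hypothetical deterministic finite automaton: each (state, input)
# pair maps to exactly one successor state. States and inputs are illustrative.
transitions = {
    ("A", 0): "A", ("A", 1): "B",
    ("B", 0): "C", ("B", 1): "A",
    ("C", 0): "C", ("C", 1): "C",
}

def run_automaton(inputs, start="A"):
    """Process an input sequence strictly deterministically."""
    state = start
    for symbol in inputs:
        state = transitions[(state, symbol)]
    return state

print(run_automaton([1, 0, 1]))  # -> 'C'
```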
In this study, we present a recurrent neural network, built from biologically plausible circuit motifs, that is able to learn such deterministic behavior from sensory input and reinforcement signals. We find that simple, biologically plausible structural constraints lead to solutions closer to the optimum and significantly improve the training process.
Previous work [1, 2] has shown how arbitrary finite automata can be hand-crafted in simple networks of neural populations by interconnecting multiple Winner-Take-All (WTA) units, small circuit motifs that match the properties of cortical canonical microcircuits [3, 4]. Figure 1 illustrates this transformation from an automaton to a neural network, with populations of neurons encoding either the states or the potential state transitions. We extend that work by introducing a reinforcement learning mechanism whose weight updates take the form of a reward-modulated Hebbian rule. This mechanism reconfigures the network connectivity such that the desired behavior is learned from sequences of inputs and reward signals.
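The sketch below shows the generic shape of such a mechanism, assuming a hard winner-take-all readout and a weight update proportional to reward, presynaptic activity, and postsynaptic activity. The population sizes, learning rate, and hard (rather than recurrently coupled soft) WTA selection are simplifying assumptions, not the exact formulation used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

n_pre, n_post = 8, 8                       # population sizes, chosen arbitrarily
W = rng.uniform(0.0, 0.1, (n_post, n_pre)) # initial feed-forward weights
eta = 0.05                                 # learning rate (assumed value)

def wta(activation):
    """Hard winner-take-all readout: only the most active unit stays active.
    The circuits in the study are recurrently coupled WTA populations;
    this hard selection is a simplified stand-in."""
    out = np.zeros_like(activation)
    out[np.argmax(activation)] = 1.0
    return out

def reward_modulated_hebbian_step(W, x, reward):
    """One update of the generic form dW = eta * reward * post * pre^T.
    The exact rule in the study (e.g. normalization, eligibility traces)
    may differ from this sketch."""
    y = wta(W @ x)
    W = W + eta * reward * np.outer(y, x)
    return W, y

# Toy usage: present a one-hot input and reinforce the transition taken.
x = np.zeros(n_pre)
x[2] = 1.0
W, y = reward_modulated_hebbian_step(W, x, reward=+1.0)
```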
As a key result of our study, we find that simple constraints on the network topology, favoring local connectivity patterns, lead to dramatic improvements both in training time and in the quality of the solution found, where the optimum is defined as the automaton that implements the given behavior with the minimum number of states. These structural constraints correspond well to biological neural systems, where short-range connections far outnumber long-range ones.
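One simple way to impose such a locality constraint in a model is to multiply the weight matrix by a distance-dependent mask so that long-range synapses can never form. The 1-D lattice layout and cutoff radius in the sketch below are assumptions made for illustration; the abstract does not specify the exact form of the constraint.

```python
import numpy as np

n = 20       # number of units, placed on a 1-D lattice for illustration
radius = 3   # maximum allowed connection distance (assumed value)

# Boolean mask that admits only short-range connections; multiplying the
# weight matrix by it prevents long-range synapses from ever forming.
idx = np.arange(n)
local_mask = np.abs(idx[:, None] - idx[None, :]) <= radius

W = np.random.default_rng(1).uniform(0.0, 0.1, (n, n)) * local_mask
```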
References
Rutishauser U, Douglas RJ: State-dependent computation using coupled recurrent networks. Neural Comput. 2009, 21 (2): 478-509.
Neftci E, Binas J, Rutishauser U, Chicca E, Indiveri G, Douglas RJ: Synthesizing cognition in neuromorphic electronic systems. Proc Natl Acad Sci U S A. 2013, 110 (37): 3468-3476.
Douglas RJ, Martin KAC: Neuronal circuits of the neocortex. Annu Rev Neurosci. 2004, 27 (1): 419-451.
Douglas RJ, Martin KAC: Recurrent neuronal circuits in the neocortex. Curr Biol. 2007, 17 (13): 496-500.
Cite this article
Binas, J., Indiveri, G. & Pfeiffer, M. Local structure supports learning of deterministic behavior in recurrent neural networks. BMC Neurosci 16 (Suppl 1), P195 (2015). https://doi.org/10.1186/1471-2202-16-S1-P195
DOI: https://doi.org/10.1186/1471-2202-16-S1-P195