Scalability properties of multimodular networks with dynamic gating
© Martí et al; licensee BioMed Central Ltd. 2013
Published: 8 July 2013
Brain processes arise from the interaction of a vast number of neurons. While the number of participating elements is enormous, interactions are generally limited by physical and metabolic constraints: a neuron is connected to thousands of other neurons, far fewer than the hundred billion neurons in the brain. Unfortunately, it is the number of connections per neuron, and not the total number of neurons, that often determines the performance of large neural networks (measured, e.g., as memory capacity), a fact that hinders the scalability of these systems.
We hypothesize that the scalability problem can be circumvented by using multimodular architectures, in which individual modules composed of local, densely connected recurrent networks interact with one another through sparse connections. We propose a general model of multimodular attractor neural networks in which each module's state changes only upon an external event, and the change depends on the states of a few other modules. To implement this scheme, every module has to disregard the state of any module not involved in a particular interaction. Because a module can potentially interact with several others, ignoring the states of non-relevant modules would require learning an exponentially large number of conditions.
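The event-conditioned transition scheme can be sketched in a few lines. The following is a minimal illustrative model, not the authors' implementation: each module holds one of `K` discrete attractor states, and on an external event its next state depends on the event and on the state of one other ("context") module. The transition rule, the ring-shaped context assignment, and all parameter values are assumptions for illustration.

```python
# Hypothetical sketch of event-driven transitions in a multimodular
# attractor network. Each module occupies one of K discrete attractor
# states; on an event, module i updates conditioned on module i+1.
K = 3          # attractor states per module (illustrative)
N_MODULES = 4  # number of interacting modules (illustrative)

def transition(state, event, context_state):
    """Next attractor state, conditioned on the event and on the
    state of one context module (an arbitrary illustrative rule)."""
    return (state + event * context_state) % K

def step(states, event):
    """Update all modules synchronously; module i reads module i+1."""
    return [transition(states[i], event, states[(i + 1) % N_MODULES])
            for i in range(N_MODULES)]

states = step([0, 1, 2, 0], event=1)
print(states)  # → [1, 0, 2, 0]
```

Note the difficulty the abstract points to: if a module had to learn such a rule directly over m context modules, it would need on the order of K^m conditions per event, which is exponential in the number of interacting modules.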
We solve this problem by adding a group of neurons that dynamically gate the interactions between modules. These neurons receive inputs from the modules and event signals through random sparse connections, and respond to specific combinations of events and module states. This information is then fed back to the modules. Because they implement conjunctive representations, the number of necessary gating neurons grows only polynomially with the number of modules. We hypothesize that gating neurons reside in cortical layer 2/3, and that they mediate the interactions between modules in layers 5/6. The laminar organization of the neocortex could thus be a crucial architectural solution to the scalability problem.
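The conjunctive gating idea can be illustrated with a toy population: each gating neuron samples one event and the states of a few randomly chosen modules, and fires only when that exact event-state conjunction occurs. The conjunctive ("mixed selectivity") coding is the abstract's proposal; the specific wiring, thresholds, and parameter values below are illustrative assumptions, not the authors' model.

```python
import random

random.seed(0)  # for reproducibility of the random wiring

K, N_MODULES, N_EVENTS = 3, 4, 2
N_GATING = 50  # population size; grows polynomially with N_MODULES

def make_gating_neuron():
    """A gating neuron with random sparse inputs: it reads 2 random
    modules and one event, and responds only to that conjunction."""
    mods = random.sample(range(N_MODULES), 2)
    pref = {m: random.randrange(K) for m in mods}  # preferred states
    ev = random.randrange(N_EVENTS)                # preferred event
    def respond(states, event):
        return event == ev and all(states[m] == pref[m] for m in mods)
    return respond

gating = [make_gating_neuron() for _ in range(N_GATING)]

states, event = [0, 1, 2, 0], 1
active = [i for i, g in enumerate(gating) if g(states, event)]
print(f"{len(active)} of {N_GATING} gating neurons active")
```

Why the count stays polynomial: with each gating neuron reading a fixed small number m of modules, there are only C(N, m) · K^m · E distinct conjunctions over N modules, K states, and E events, which is polynomial in N for fixed m, in contrast to the exponential cost of learning the conditions within each module.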
Funding: DARPA SyNAPSE, Gatsby, Kavli, Swiss National Science Foundation.
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.