Recent computational models based on reservoir computing (RC) are gaining attention as plausible theories of cortical information processing. In these models, the activity of a recurrently connected population of neurons is sent to one or more read-out units through a linear transformation. These models can operate in a chaotic regime, which has been proposed as a possible mechanism underlying the sustained irregular activity observed in cortical areas [1, 2]. Furthermore, models based on RC replicate the neural dynamics involved in decision making, interval timing, and motor control. However, one biological constraint that has been overlooked in these models is robustness to small connectivity perturbations, such as failures in synaptic transmission, a phenomenon that occurs frequently in healthy circuits without causing any drastic functional changes. Here, we show that different implementations of RC display very little resistance to small synaptic disruptions, and we discuss the implications of such fragility for RC mechanisms that may be present in neural coding.

With the FORCE procedure, networks lost their ability to replicate a jagged sinusoidal signal after a single neuron was removed from the reservoir (Figure 1A). Networks with innate training showed a similar effect on a timing task (Figure 1B). Both the lag in the timing and the noise in the output increased monotonically as further neurons were removed (Figure 1C,D); networks reached chance performance after ~1.5% of neurons were eliminated. After the suppression of a single neuron, the spectrum of the weight matrix was strongly perturbed, and repeated trials displayed unreliable trajectories, as assessed with principal component analysis. When individual synapses were removed instead of neurons, networks reached chance performance after ~0.5% of reservoir synapses were eliminated.
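The ablation experiment described above can be reproduced in miniature with an echo-state-style sketch. The read-out here is fit offline by ridge regression as a simplified stand-in for the online FORCE (recursive least squares) rule, and all parameter values (reservoir size, gain, target signal) are illustrative assumptions rather than those used in the study:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 200, 1000

# Recurrent weights scaled to spectral radius g > 1 (the chaotic regime
# discussed in the abstract).
g = 1.2
W = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))
W *= g / np.max(np.abs(np.linalg.eigvals(W)))
w_in = rng.normal(0.0, 1.0, N)

# Illustrative periodic target, standing in for the jagged sinusoid.
t = np.arange(T + 1)
target = np.sin(0.05 * t) + 0.3 * np.sin(0.23 * t)

def collect_states(W_mat, w_in_vec):
    """Drive the reservoir with the signal (teacher forcing), record states."""
    x = np.zeros(N)
    X = np.empty((T, N))
    for i in range(T):
        x = np.tanh(W_mat @ x + w_in_vec * target[i])
        X[i] = x
    return X

# Linear read-out trained by ridge regression to predict the next sample.
X = collect_states(W, w_in)
y = target[1:T + 1]
w_out = np.linalg.solve(X.T @ X + 1e-4 * np.eye(N), X.T @ y)
err_before = np.mean((X @ w_out - y) ** 2)

# "Remove" one neuron: zero its incoming/outgoing weights and its input,
# then re-run the reservoir with the read-out weights left untouched.
W_abl, w_in_abl = W.copy(), w_in.copy()
W_abl[0, :] = 0.0
W_abl[:, 0] = 0.0
w_in_abl[0] = 0.0
X_abl = collect_states(W_abl, w_in_abl)
err_after = np.mean((X_abl @ w_out - y) ** 2)
```

In this sketch the read-out error after ablating a single neuron exceeds the trained error, echoing the fragility reported above; comparing `np.linalg.eigvals(W)` against `np.linalg.eigvals(W_abl)` likewise shows the shift in the weight-matrix spectrum that accompanies the removal.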
While living neuronal circuits can withstand small synaptic disruptions without compromising task performance, our results suggest that such disruptions have a catastrophic impact on the behavior of RC models. Retraining the read-out unit offers little remedy: it yields a completely new solution rather than a fine restructuring of the original one. These results cast doubt on the validity of a large class of models that claim to capture the neuronal mechanisms of cognitive and behavioral tasks.
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.