

  • Poster Presentation
  • Open Access

Recovery of computation capability for neural networks from damaged states using self-organized criticality

BMC Neuroscience 2010, 11(Suppl 1): P30


Keywords: Neural Network, Root Mean Square Error, Critical State, Classification Performance, Initial Random State

It is critical for biological neural networks to recover their function when some of their neurons are damaged by lesion or aging. Self-organized criticality, which induces neuronal avalanches whose size distribution follows a power law [1], provides computational optimality to neural networks [2]. We show that the criticality of neural networks, an intrinsic property of complex networks, also makes their computational capability robust to damage.

We constructed a neural network of 300 integrate-and-fire neurons with random connections. For the network to exhibit self-organized criticality, we used a dynamic synapse model with activity-dependent depressive synapses, following the Liquid State Machine paradigm [3]. We used EEG recordings obtained during a two-class classification task from the Wadsworth BCI Dataset (IIb) [4]. EEG signals from only 4 channels were chosen as inputs to the network, and two readout units connected to the network determined the class of each input signal. We estimated classification performance for 4 states of the neural network: the initial random state (A state), the critical state (B state), the damaged state (C state, i.e., 10% of neurons removed), and the critical state of the damaged network (D state).
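The network described above can be sketched in a few lines. The following is a minimal, illustrative simulation of integrate-and-fire neurons coupled by activity-dependent depressive synapses; the parameter values (connection probability, threshold, depression fraction `u`, recovery constant `tau_rec`, external drive) are assumptions for illustration, not the values used in the study, and the depression rule is a simplified Tsodyks-Markram-style sketch rather than the exact model of [1].

```python
import numpy as np

rng = np.random.default_rng(0)

N = 300          # number of integrate-and-fire neurons (as in the abstract)
p_conn = 0.1     # random connection probability (assumed)
theta = 1.0      # firing threshold (assumed)
u = 0.2          # fraction of synaptic resources used per spike (assumed)
tau_rec = 0.5    # resource recovery time constant, arbitrary units (assumed)

# random sparse weight matrix, no self-connections
W = (rng.random((N, N)) < p_conn) * rng.random((N, N)) * 0.5
np.fill_diagonal(W, 0.0)

v = rng.random(N)      # membrane potentials
x = np.ones((N, N))    # available synaptic resources (depressive synapses)

def step(v, x, dt=0.01, i_ext=0.02):
    """One update: detect spikes, integrate input, reset, depress, recover."""
    spikes = v >= theta
    # synaptic input is scaled by the resources still available
    v = v + dt * i_ext + (W * x)[:, spikes].sum(axis=1) * u
    v[spikes] = 0.0                  # reset neurons that fired
    x[:, spikes] *= (1.0 - u)        # depress synapses of fired neurons
    x += dt * (1.0 - x) / tau_rec    # resources recover toward 1
    return v, x, spikes

for _ in range(1000):
    v, x, spikes = step(v, x)
```

Because each spike transiently weakens the outgoing synapses of the firing neuron, strong activity is self-limiting, which is the mechanism that drives such networks toward a critical state in [1].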

To confirm that the neural network can transition into critical states, we followed the procedure of Levina et al. [1]. We found that the signature of self-organized criticality, a power-law distribution of avalanche sizes, could be maintained in the damaged neural networks by the dynamic synapses, as shown in Figure 1. Classification performance recovered from the damaged state when the network entered the critical state (Table 1). The root mean square error (RMSE) increased in the C state, but the criticality of the network reduced errors to the level of an undamaged network. These results demonstrate that criticality may contribute to the recovery of computational capability in damaged neural networks.
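Testing for a power-law avalanche size distribution can be done with a maximum-likelihood fit of the exponent. The sketch below uses synthetic avalanche sizes drawn from a known power law in place of sizes measured from the network; the exponent value 1.5 and the sample size are illustrative assumptions, and the estimator is the standard continuous power-law MLE, not necessarily the exact procedure of [1].

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic avalanche sizes drawn from P(s) ~ s^(-alpha), s >= s_min,
# standing in for avalanche sizes measured from the simulated network
alpha_true = 1.5   # illustrative exponent (assumed)
s_min = 1.0
n = 50_000
# inverse-CDF sampling for a continuous power law
sizes = s_min * rng.random(n) ** (-1.0 / (alpha_true - 1.0))

# maximum-likelihood estimate of the power-law exponent
alpha_hat = 1.0 + n / np.sum(np.log(sizes / s_min))

print(round(alpha_hat, 2))  # close to alpha_true for a genuine power law
```

A distribution that stays close to a power law after removing neurons, as in Figure 1, would keep the fitted exponent stable, whereas a subcritical or supercritical network would show clear curvature away from the fit.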
Figure 1. The distribution of avalanche sizes in the neural network.

Table 1. Root mean square errors (RMSE) of each state in the neural network.

| Neural network state | A state | B state | C state | D state |

Authors’ Affiliations

Department of Bio and Brain Engineering, KAIST, Daejeon, South Korea


  1. Levina A, Herrmann JM, Geisel T: Dynamical synapses causing self-organized criticality in neural networks. Nature Physics. 2007, 3: 857-860. doi:10.1038/nphys758.
  2. Legenstein R, Maass W: Edge of chaos and prediction of computational performance for neural circuit models. Neural Networks. 2007, 20: 323-334. doi:10.1016/j.neunet.2007.04.017.
  3. Maass W, Natschläger T, Markram H: Real-Time Computing Without Stable States: A New Framework for Neural Computation Based on Perturbations. Neural Computation. 2002, 14: 2531-2560. doi:10.1162/089976602760407955.
  4. Blankertz B: BCI competition 2003.


© Jeong and Kim; licensee BioMed Central Ltd. 2010

This article is published under license to BioMed Central Ltd.