  • Oral presentation
  • Open access

Efficient supervised learning in networks with binary synapses

Recent experiments [1, 2] have suggested that single synapses could behave like noisy binary switches. Binary synapses have the advantage of robustness to noise and could therefore preserve memory over longer time scales than analog systems. Learning in systems with discrete synapses is known to be a computationally hard problem. We developed and studied a neurobiologically plausible on-line learning algorithm derived from Belief Propagation algorithms. The algorithm performs remarkably well for a model neuron with N binary synapses, each with a discrete number of 'hidden' states, that has to learn a random classification problem. Such a system is able to learn a number of associations close to the information-theoretic limit, in a time that is sub-linear in the system size, corresponding to very few presentations of each pattern. Furthermore, performance is optimal for a finite number of hidden states, which scales as N^(1/2) for dense coding but is much lower (~10) for sparse coding (see Figure 1). To our knowledge, this is the first on-line algorithm able to efficiently achieve a finite capacity (number of patterns learned per synapse) with binary synapses.

Figure 1

Learning capacity and learning time. (Left) Achieved capacity vs. the number of synapses N, for different numbers of hidden states, in the sparse coding case: the algorithm achieves up to 70% of the maximal theoretical capacity at N ~ 10,000 with 10 hidden states. (Right) Average learning time (number of presentations per pattern) vs. the number of patterns to be learned, for N = 64,000: fewer than 100 presentations are required up to the critical point where learning fails.
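To make the setup concrete, the model and task described above can be sketched as follows. This is an illustrative reconstruction rather than the authors' code: the ±1 coding of patterns and synaptic efficacies, the parameter names (N, K, alpha), and the perceptron-style readout are all assumptions.

    import numpy as np

    # Illustrative sketch (not the authors' code): a single model neuron with
    # N binary synapses whose efficacies are read out from bounded integer
    # 'hidden' states, facing a random classification task of P patterns.
    rng = np.random.default_rng(0)
    N = 1000                    # number of synapses
    K = 10                      # hidden-state range per synapse (assumed)
    alpha = 0.3                 # storage load, patterns per synapse (assumed)
    P = int(alpha * N)

    patterns = rng.choice([-1, 1], size=(P, N))   # dense +/-1 coding (assumption)
    labels = rng.choice([-1, 1], size=P)          # random desired outputs

    h = rng.integers(-K, K + 1, size=N)           # hidden (meta-plastic) states

    def weights(h):
        """Binary synaptic efficacies: the sign of the hidden state."""
        return np.where(h >= 0, 1, -1)

    def classify(h, xi):
        """Perceptron-style readout: sign of the synaptically weighted input."""
        return 1 if np.dot(weights(h), xi) >= 0 else -1

The capacity plotted in Figure 1 (left) corresponds to the largest load alpha = P/N at which all P associations can still be learned.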

The algorithm is similar to the standard 'perceptron' learning algorithm, but includes an additional rule for synaptic transitions that applies only when the currently presented pattern is 'barely correct' (that is, when a single synaptic flip would have caused an error). In this case, the synaptic changes are purely meta-plastic (they affect the hidden states but not the actual synaptic state) and act to stabilize the synapse in its current state. This rule is crucial to the algorithm's performance, and we suggest that it is sufficiently simple to be easily implemented by neurobiological systems.
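A minimal sketch of this two-part update, continuing the code above, is given below. The margin threshold used for the 'barely correct' test, the unit step size, and the clipping of hidden states at ±K are assumptions; only the overall structure (a perceptron-like correction on errors, and a purely meta-plastic stabilization when the pattern is barely correct) follows the description.

    import numpy as np

    def present(h, xi, target, K=10):
        """One presentation of pattern xi (+/-1 vector) with desired output target (+/-1).

        Sketch of the rule structure described in the text; thresholds and
        step sizes are illustrative assumptions, not the published parameters.
        """
        w = np.where(h >= 0, 1, -1)          # binary weights, as in the sketch above
        margin = target * np.dot(w, xi)
        if margin <= 0:
            # Error: perceptron-like step on the hidden states.  A binary
            # weight flips only when its hidden state crosses zero, so most
            # of these changes are also purely meta-plastic.
            h = np.clip(h + target * xi, -K, K)
        elif margin <= 2:
            # 'Barely correct': with +/-1 weights and inputs a single synaptic
            # flip changes the weighted sum by 2, so a margin of at most 2
            # means one flip could have caused an error.  Meta-plastic step
            # only: push the synapses that currently vote for the correct
            # output deeper into their present state; no weights change.
            helping = (w * xi * target) > 0
            h[helping] = np.clip(h[helping] + w[helping], -K, K)
        return h

Training would then consist of repeatedly presenting the P patterns in random order until all of them are classified correctly; the number of presentations per pattern needed to reach that point is the learning time shown in Figure 1 (right).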

References

  1. Petersen CC, Malenka RC, Nicoll RA, Hopfield JJ: All-or-none potentiation at CA3-CA1 synapses. Proc Natl Acad Sci USA. 1998, 95: 4732-4737. 10.1073/pnas.95.8.4732.


  2. O'Connor DH, Wittenberg GM, Wang SSH: Graded bidirectional synaptic plasticity is composed of switch-like unitary events. Proc Natl Acad Sci USA. 2005, 102: 9679-9684. 10.1073/pnas.0502332102.



Author information

Corresponding author

Correspondence to Carlo Baldassi.

Rights and permissions

Open Access This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution 2.0 License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Baldassi, C., Braunstein, A., Brunel, N. et al. Efficient supervised learning in networks with binary synapses. BMC Neurosci 8 (Suppl 2), S13 (2007). https://doi.org/10.1186/1471-2202-8-S2-S13
