- Oral presentation
Efficient supervised learning in networks with binary synapses
© Baldassi et al; licensee BioMed Central Ltd. 2007
- Published: 6 July 2007
- Model Neuron
- Long Time Scale
- Hidden State
- Sparse Code
- Dense Code
The algorithm is similar to the standard 'perceptron' learning algorithm, but with an additional rule for synaptic transitions that applies only when a currently presented pattern is 'barely correct' (that is, a single synaptic flip would have caused an error). In this case, the changes are meta-plastic only (they affect the hidden states, not the actual synaptic states) and act to stabilize each synapse in its current state. This rule is crucial to the algorithm's performance, and we suggest that it is simple enough to be implemented by neurobiological systems.
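As a rough illustration of such a scheme (a minimal sketch, not the authors' actual algorithm), the following Python code assumes ±1 inputs and weights, a bounded odd-integer hidden state per synapse whose sign gives the binary weight, and an assumed stabilization probability `p_s`; the function name, thresholds, and parameter values are all hypothetical. With ±1 inputs, flipping a single synapse changes the total input by 2, so 'barely correct' here corresponds to a margin of at most `theta = 2`.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_binary_synapses(patterns, labels, K=11, theta=2, p_s=0.3, max_epochs=500):
    """Perceptron-like learning with binary synapses and hidden states.

    Each synapse i carries a bounded odd-integer hidden state h_i in
    [-K, K]; the actual binary weight is w_i = sign(h_i).  K, theta and
    p_s are illustrative values, not taken from the original abstract.
    """
    n_patterns, n = patterns.shape
    h = rng.choice([-1, 1], size=n)          # hidden states (odd, never zero)

    for epoch in range(max_epochs):
        errors = 0
        for xi, sigma in zip(patterns, labels):
            w = np.sign(h)                   # binary synaptic weights
            delta = sigma * np.dot(w, xi)    # margin on this pattern
            if delta <= 0:
                # Error: standard perceptron-style push on the hidden states
                # (a weight flips only when its hidden state changes sign).
                h = np.clip(h + 2 * sigma * xi, -K, K)
                errors += 1
            elif delta <= theta and rng.random() < p_s:
                # 'Barely correct': one synaptic flip would have caused an
                # error.  Meta-plastic update only: push the hidden state of
                # each correctly contributing synapse away from zero,
                # stabilizing it in its current state (no weight can flip).
                agree = (sigma * xi * w) > 0
                h[agree] = np.clip(h[agree] + 2 * w[agree], -K, K)
        if errors == 0:                      # all patterns classified correctly
            return np.sign(h), epoch + 1
    return np.sign(h), max_epochs

# Usage: random +/-1 patterns with random desired outputs.
N, P = 101, 40
patterns = rng.choice([-1, 1], size=(P, N))
labels = rng.choice([-1, 1], size=P)
w, epochs = train_binary_synapses(patterns, labels)
```

Keeping the hidden-state bound `K` odd ensures the states stay odd under ±2 updates and clipping, so a weight `sign(h_i)` is always ±1 and never undefined.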