- Poster presentation
- Open Access
A spiking temporal-difference learning model based on dopamine-modulated plasticity
© Potjans et al; licensee BioMed Central Ltd. 2009
- Published: 13 July 2009
[Figure panels: Neural Network Model; Realistic Firing; Conditioning Protocol; Dopaminergic Signal]
Making predictions about future rewards and adapting behavior accordingly is crucial for any higher organism. Temporal-difference (TD) learning is a theory specialized for such prediction problems. Experimental findings suggest that TD learning is implemented in the mammalian brain. In particular, the resemblance of dopaminergic activity to the TD error signal, and the modulation of corticostriatal plasticity by dopamine, lend support to this hypothesis. We recently proposed the first spiking neural network model to implement actor-critic TD learning, enabling it to solve a complex task with sparse rewards. However, that model calculates an approximation of the TD error signal in each synapse rather than utilizing a neuromodulatory system.
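As a point of reference for the TD error signal mentioned above, here is a minimal sketch of how it is conventionally defined; the function name and parameter values are illustrative, not taken from the model described in this abstract.

```python
# Minimal sketch of the temporal-difference (TD) error underlying TD learning.
# delta = r + gamma * V(s') - V(s): positive when an outcome is better than
# predicted, negative when it is worse, zero when predictions are accurate.

def td_error(reward, value_current, value_next, gamma=0.9):
    """Return the TD error for one transition (names are illustrative)."""
    return reward + gamma * value_next - value_current

# An unexpected reward in a state with no learned value yields a positive error:
print(td_error(reward=1.0, value_current=0.0, value_next=0.0))  # 1.0
```

Dopaminergic firing is hypothesized to resemble this quantity: phasic bursts for unpredicted rewards, and dips when a predicted reward is omitted.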
Here, we propose a spiking neural network model which dynamically generates a dopamine signal based on the actor-critic architecture proposed by Houk. This signal modulates, as a third factor, the plasticity of the synapses encoding the value function and the policy. The proposed model simultaneously accounts for multiple experimental results, such as the generation of a TD-like dopaminergic signal with realistic firing rates in conditioning protocols, and the role of presynaptic activity, postsynaptic activity, and dopamine in the plasticity of corticostriatal synapses. The excellent agreement between the predictions of our synaptic plasticity rules and the experimental findings is particularly noteworthy, as the update rules were postulated using a purely top-down approach.
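The idea of dopamine acting as a third factor can be schematized as follows. This is a generic three-factor update, not the paper's actual plasticity rule; the function, learning rate, and eligibility term are assumptions for illustration.

```python
# Hypothetical three-factor plasticity update: a global dopamine signal gates
# a Hebbian eligibility term formed by coincident pre- and postsynaptic
# activity. Schematic only -- not the model's actual rule.

def three_factor_update(w, pre, post, dopamine, eta=0.01):
    eligibility = pre * post            # Hebbian coincidence of pre and post
    return w + eta * dopamine * eligibility

w = 0.5
w = three_factor_update(w, pre=1.0, post=1.0, dopamine=2.0)  # potentiation
print(w)  # 0.52
```

The key property is that correlated pre- and postsynaptic activity alone leaves the weight unchanged; only when accompanied by a dopamine signal does the eligibility translate into a lasting weight change.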
We performed simulations in NEST to test the learning behavior of the model in a two-dimensional grid-world task with a single rewarded state. The network learns to evaluate states with respect to their proximity to the reward and adapts its policy accordingly. Learning speed and equilibrium performance are comparable to those of a discrete-time algorithmic TD learning implementation.
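For comparison, a discrete-time algorithmic actor-critic baseline of the kind referred to above can be sketched as follows. Grid size, learning rates, exploration rate, and episode counts are illustrative choices, not the paper's parameters.

```python
import random

# Tabular actor-critic TD learning on a small grid world with one rewarded
# state -- the style of algorithmic baseline the spiking model is compared to.

SIZE, GOAL, GAMMA = 5, (4, 4), 0.9
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]                 # up, down, left, right
V = {(x, y): 0.0 for x in range(SIZE) for y in range(SIZE)}  # critic: state values
P = {s: [0.0] * 4 for s in V}                                # actor: action preferences

def step(s, a):
    """Move within grid bounds; reward 1 only on reaching the goal state."""
    nx = min(max(s[0] + a[0], 0), SIZE - 1)
    ny = min(max(s[1] + a[1], 0), SIZE - 1)
    return (nx, ny), (1.0 if (nx, ny) == GOAL else 0.0)

def choose(s, eps=0.2):
    """Epsilon-greedy over actor preferences, with random tie-breaking."""
    if random.random() < eps:
        return random.randrange(4)
    m = max(P[s])
    return random.choice([i for i, p in enumerate(P[s]) if p == m])

random.seed(0)
for episode in range(300):
    s = (0, 0)
    for t in range(100):
        a = choose(s)
        s2, r = step(s, ACTIONS[a])
        # TD error; the goal is terminal, so its continuation value is zero.
        delta = r + GAMMA * V[s2] * (s2 != GOAL) - V[s]
        V[s] += 0.1 * delta        # critic update
        P[s][a] += 0.1 * delta     # actor update
        s = s2
        if s == GOAL:
            break

print(round(V[(0, 0)], 2))  # start-state value: higher means reward learned
```

After training, state values increase with proximity to the rewarded state and the greedy policy points toward it, which is the behavior the spiking network reproduces.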
The proposed model paves the way for investigations of the role of the dynamics of the dopaminergic system in reward-based learning. For example, lesion studies can be used to analyze the effects of dopamine treatment in Parkinson's patients. Finally, the experimentally constrained model can serve as the centerpiece of closed-loop functional models.
Partially funded by EU Grant 15879 (FACETS), BMBF Grant 01GQ0420 to BCCN Freiburg, Next-Generation Supercomputer Project of MEXT, Japan, and the Helmholtz Alliance on Systems Biology.
- Schultz W, Dayan P, Montague PR: A neural substrate of prediction and reward. Science. 1997, 275: 1593-1599. doi:10.1126/science.275.5306.1593.
- Reynolds JN, Hyland BI, Wickens JR: A cellular mechanism of reward-related learning. Nature. 2001, 413: 67-70. doi:10.1038/35092560.
- Potjans W, Morrison A, Diesmann M: A spiking neural network model of an actor-critic learning agent. Neural Computation. 2009, 21: 301-339. doi:10.1162/neco.2008.08-07-593.
- Houk JC, Adams JL, Barto AG: A model of how the basal ganglia generate and use neural signals that predict reinforcement. 1995, MIT Press, Cambridge, MA.
- Reynolds JN, Hyland BI, Wickens JR: Dopamine-dependent plasticity of corticostriatal synapses. Neural Networks. 2002, 15: 507-521. doi:10.1016/S0893-6080(02)00045-X.
- Gewaltig M-O, Diesmann M: NEST (neural simulation tool). Scholarpedia. 2007, 2: 1430.