Volume 8 Supplement 2

Sixteenth Annual Computational Neuroscience Meeting: CNS*2007


Spike timing dependent plasticity implements reinforcement learning

  • Roberto A Santiago2,
  • Patrick D Roberts1 and
  • Gerardo Lafferriere2
BMC Neuroscience 2007, 8(Suppl 2):S16

DOI: 10.1186/1471-2202-8-S2-S16

Published: 6 July 2007

An explanatory model is developed to show how synaptic learning mechanisms modeled through spike-timing dependent plasticity (STDP) can result in longer-term adaptations consistent with reinforcement learning models. In particular, the reinforcement learning model known as temporal difference (TD) learning has been used to model neuronal behavior in the orbitofrontal cortex (OFC) and ventral tegmental area (VTA) of macaque monkeys during reinforcement learning. While some research has empirically observed a connection between STDP and TD, there is as yet no explanatory model directly connecting the two. Through analysis of the STDP rule, the connection between STDP and TD is explained. We further show that an STDP learning rule drives the spike probability of reward-predicting neurons to a stable equilibrium. The equilibrium solution has an increasing slope whose steepness predicts the probability of the reward. This connection begins to shed light on more recent data gathered from VTA and OFC that are not well modeled by TD. We suggest that STDP provides the underlying mechanism for explaining reinforcement learning and other higher-level perceptual and cognitive functions.
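For readers unfamiliar with the two learning rules being connected, the following minimal sketch illustrates a standard TD(0) value update and a standard pairwise STDP weight-change function. The abstract itself gives no equations, so the learning rates, amplitudes, and time constant below are illustrative assumptions, not the authors' model.

```python
import math

def td_update(V, s, s_next, r, alpha=0.1, gamma=0.9):
    """One TD(0) value update for state s; returns the TD error.
    alpha (learning rate) and gamma (discount) are assumed values."""
    delta = r + gamma * V[s_next] - V[s]  # TD error: prediction mismatch
    V[s] += alpha * delta
    return delta

def stdp_dw(dt, A_plus=0.01, A_minus=0.012, tau=20.0):
    """Pairwise STDP weight change for spike-time difference
    dt = t_post - t_pre (ms); amplitudes and tau are assumed values."""
    if dt > 0:      # presynaptic spike precedes postsynaptic: potentiation
        return A_plus * math.exp(-dt / tau)
    elif dt < 0:    # postsynaptic spike precedes presynaptic: depression
        return -A_minus * math.exp(dt / tau)
    return 0.0
```

The abstract's claim is that repeated application of a rule like `stdp_dw` can, over time, drive spike probabilities toward an equilibrium whose shape encodes the same reward prediction that `td_update` computes explicitly.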

Authors’ Affiliations

(1)
Neurological Sciences Institute, Oregon Health & Science University
(2)
Department of Mathematics and Statistics, Portland State University

Copyright

© Santiago et al; licensee BioMed Central Ltd. 2007

This article is published under license to BioMed Central Ltd.
