Volume 9 Supplement 1

Seventeenth Annual Computational Neuroscience Meeting: CNS*2008

Open Access

Spike-based reinforcement learning of navigation

  • Eleni Vasilaki1,
  • Robert Urbanczik2,
  • Walter Senn2 and
  • Wulfram Gerstner1
BMC Neuroscience 2008, 9(Suppl 1):P72

DOI: 10.1186/1471-2202-9-S1-P72

Published: 11 July 2008

Introduction

We have studied a spiking reinforcement learning model derived from reward maximization [1, 2], in which causal relations between pre- and postsynaptic activity set a synaptic eligibility trace [2, 3]. Neurons are modeled as integrate-and-fire units with escape noise. Synapses are binary, and learning modulates their release probability: the release probability is updated when a global reward signal (such as dopamine) is received.
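
A minimal sketch of this plasticity scheme is given below (variable names, parameter values and the exact trace dynamics are illustrative assumptions, not the published model): a leaky integrate-and-fire neuron with escape noise accumulates an eligibility trace at synapses whose stochastic release immediately preceded a postsynaptic spike, and a later global reward signal converts that trace into an update of the release probabilities.

    import numpy as np

    # Illustrative sketch only: a leaky integrate-and-fire neuron with escape noise,
    # binary synapses with stochastic release, an eligibility trace recording causal
    # pre/post coincidences, and a global reward signal that updates release probabilities.
    # All parameter values below are assumptions chosen for readability.

    rng = np.random.default_rng(0)

    N_PRE = 100          # number of presynaptic inputs
    DT = 1.0             # time step (ms)
    TAU_M = 20.0         # membrane time constant (ms)
    TAU_E = 500.0        # eligibility-trace time constant (ms), assumed value
    U_THETA = 1.0        # soft firing threshold for escape noise
    DELTA_U = 0.2        # escape-noise sharpness
    ETA = 0.05           # learning rate for release probabilities

    w = np.full(N_PRE, 0.05)       # fixed amplitude of the binary synapses
    p_rel = np.full(N_PRE, 0.5)    # release probabilities (the learned quantities)
    elig = np.zeros(N_PRE)         # per-synapse eligibility trace
    u = 0.0                        # membrane potential

    def step(pre_spikes, dt=DT):
        """One time step: stochastic release, membrane update, escape-noise spiking."""
        global u, elig
        released = pre_spikes & (rng.random(N_PRE) < p_rel)   # binary stochastic release
        u += dt / TAU_M * (-u) + np.sum(w[released])           # leaky integration of inputs
        rho = np.exp((u - U_THETA) / DELTA_U) / TAU_M          # instantaneous escape rate
        post_spike = rng.random() < rho * dt
        if post_spike:
            u = 0.0                                            # reset after the spike
            elig += released.astype(float)                     # credit causal synapses
        elig *= np.exp(-dt / TAU_E)                            # trace decays between rewards
        return post_spike

    def apply_reward(R):
        """A global reward signal (e.g. dopamine) converts eligibility into learning."""
        global p_rel
        p_rel = np.clip(p_rel + ETA * R * elig, 0.0, 1.0)

Keeping the synaptic amplitude fixed and learning only the release probability keeps the synapse binary while still allowing graded, reward-gated changes in its expected efficacy.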

We have applied the learning algorithm to a model of the Morris water maze task. The simulated rat initially explores the environment by random search. After only a few trials, the rat has learned to approach the goal from arbitrary start positions, see Figure 1. The model features automatic generalization in state and action space due to coding by overlapping tuning profiles of place cells and action cells [4]; a sketch of this coding scheme is given after Figure 1.
Figure 1

Escape latency versus number of trials. Escape latency measures the time it takes the simulated rat to reach a hidden platform starting from arbitrary initial positions. Learning is achieved in fewer than 20 trials. Error bars indicate the 25th and 75th percentiles.
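
The state and action coding can be sketched as follows (cell counts, place-field widths and the soft-max population-vector readout are illustrative assumptions, not the published model): overlapping Gaussian place fields encode the rat's position, action cells with overlapping directional preferences are read out as a population vector that sets the next heading, and updating one place-action association therefore generalizes automatically to nearby positions and directions.

    import numpy as np

    # Illustrative sketch only: overlapping place-cell and action-cell coding for the
    # water-maze simulation. Field widths, cell counts and the readout are assumptions.

    rng = np.random.default_rng(1)

    N_PLACE = 100                  # place cells tiling a unit-radius circular pool
    N_ACTION = 8                   # action cells with preferred movement directions
    SIGMA = 0.15                   # place-field width, assumed value

    # Place-field centres scattered over the pool, preferred directions on a circle.
    angles = rng.uniform(0, 2 * np.pi, N_PLACE)
    radii = np.sqrt(rng.uniform(0, 1, N_PLACE))
    centres = np.stack([radii * np.cos(angles), radii * np.sin(angles)], axis=1)
    pref_dirs = np.linspace(0, 2 * np.pi, N_ACTION, endpoint=False)

    W = rng.normal(0, 0.1, (N_ACTION, N_PLACE))   # plastic place-to-action weights

    def place_activity(pos):
        """Overlapping Gaussian place fields give a smooth population code for position."""
        d2 = np.sum((centres - pos) ** 2, axis=1)
        return np.exp(-d2 / (2 * SIGMA ** 2))

    def choose_direction(pos, beta=2.0):
        """Soft-max over action cells, decoded as a population vector for the next heading."""
        a = W @ place_activity(pos)
        p = np.exp(beta * a) / np.sum(np.exp(beta * a))
        # Population-vector readout: preferred directions weighted by action probabilities.
        vec = np.sum(p[:, None] * np.stack([np.cos(pref_dirs), np.sin(pref_dirs)], axis=1), axis=0)
        return np.arctan2(vec[1], vec[0])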

Authors’ Affiliations

(1)
Laboratory of Computational Neuroscience, School of Computer and Communications Sciences and Brain Mind Institute, Ecole Polytechnique Fédérale de Lausanne (EPFL)
(2)
Institute of Physiology, University of Bern

References

  1. Pfister JP, Toyoizumi T, Barber D, Gerstner W: Optimal Spike-Timing Dependent Plasticity for Precise Action Potential Firing in Supervised Learning. Neural Computation 2006, 18(6):1309-1339. doi:10.1162/neco.2006.18.6.1318.
  2. Florian RV: Reinforcement learning through modulation of spike-timing-dependent synaptic plasticity. Neural Computation 2007, 19(6):1468-1502. doi:10.1162/neco.2007.19.6.1468.
  3. Izhikevich EM: Solving the Distal Reward Problem through Linkage of STDP and Dopamine Signaling. Cerebral Cortex 2007, 17:2443-2452. doi:10.1093/cercor/bhl152.
  4. Strösslin T, Sheynikhovich D, Chavarriaga R, Gerstner W: Robust self-localisation and navigation based on hippocampal place cells. Neural Networks 2005, 18(9):1125-1140. doi:10.1016/j.neunet.2005.08.012.

Copyright

© Vasilaki et al; licensee BioMed Central Ltd. 2008

This article is published under license to BioMed Central Ltd.
