- Poster presentation
A model of cell specialization using a Hebbian policy-gradient approach with "slow" noise
BMC Neuroscience volume 10, Article number: P136 (2009)
We study a model of neuronal specialization using a policy-gradient reinforcement learning approach. (1) The neurons fire stochastically according to their synaptic input plus a noise term; (2) the environment is a closed-loop system composed of a rotating eye and a point-like visual target; (3) the network is composed of a foveated retina directly connected to a motoneuron layer; (4) the reward depends on the distance between the subjective target position and the fovea; and (5) the weight update depends on the Hebb-like product r(t)·Z_ij(t), where r(t) is the reward and Z_ij(t) is a Hebbian trace updated according to the product [S_i(t) − F_i(t)]·e_j(t), where S_i(t) is the post-synaptic spike, F_i(t) is the firing probability and e_j(t) is the pre-synaptic activity [1, 2].
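A minimal sketch of this update rule, assuming discrete time steps, a sigmoidal firing probability and a decaying eligibility trace (the network sizes, learning rate and trace decay factor are illustrative assumptions, not taken from the original):

```python
import numpy as np

rng = np.random.default_rng(0)

n_pre, n_post = 50, 8          # visual cells and motoneurons (illustrative sizes)
eta, gamma = 0.01, 0.9         # learning rate and trace decay (assumptions)
W = 0.1 * rng.standard_normal((n_post, n_pre))
Z = np.zeros_like(W)

def learning_step(e, noise, r, W, Z):
    """One step of the Hebbian policy-gradient update r(t) * Z_ij(t)."""
    # Firing probability: sigmoid of synaptic input plus the slow noise term
    F = 1.0 / (1.0 + np.exp(-(W @ e + noise)))
    S = (rng.random(n_post) < F).astype(float)     # stochastic spikes
    # Hebbian trace driven by the product [S_i(t) - F_i(t)] * e_j(t)
    Z = gamma * Z + np.outer(S - F, e)
    # Reward-modulated weight update
    W = W + eta * r * Z
    return W, Z, S
```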
Several temporal scales must be considered when modeling such neuromimetic controller systems. First, the typical integration time of the neurons is on the order of a few milliseconds. Second, the motor commands have a duration on the order of 100 ms. This temporal mismatch must be taken into account in the design of an adaptive controller.
To this end, we consider that the firing probability is modulated by a "pink noise" term whose autocorrelation time is on the order of 100 ms, so that the firing probability is overestimated (or underestimated) for periods of about 100 ms. The rewards arriving in the meantime assess the "quality" of those elementary shifts and modify the firing probability accordingly.
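As a rough illustration, such a slow noise term can be generated as low-pass-filtered white noise (an Ornstein-Uhlenbeck process) with a 100 ms correlation time; this is a stand-in sketch, since the abstract does not specify how the "pink noise" is produced:

```python
import numpy as np

def slow_noise(n_steps, n_units, tau=100.0, dt=1.0, sigma=1.0, seed=0):
    """Low-pass-filtered Gaussian noise with autocorrelation time tau (ms),
    sampled every dt (ms); a stand-in for the slow "pink noise" term."""
    rng = np.random.default_rng(seed)
    x = np.zeros((n_steps, n_units))
    kick = sigma * np.sqrt(2.0 * dt / tau)   # keeps the stationary std near sigma
    for t in range(1, n_steps):
        x[t] = (1.0 - dt / tau) * x[t - 1] + kick * rng.standard_normal(n_units)
    return x
```

With tau = 100 ms, a unit's noise keeps the same sign for roughly 100 ms, so its firing probability is biased upward or downward over the whole duration of an elementary movement, which is what allows the reward to credit that movement as a unit.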
Since every motoneuron is associated with a particular angular direction, we test the preferred output of the visual cells at the end of the learning process. Consistently with the observed final behavior, we find that the visual cells preferentially excite the motoneurons heading in the opposite angular direction (see Figures 1 and 2).
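One way to read out the preferred direction of each visual cell is a population-vector average over its outgoing weights, assuming the motoneuron directions are evenly spaced on the circle (our assumption; the abstract does not detail the test). Continuing from the sketch above:

```python
# Preferred output direction of each visual cell from its outgoing weights W
# (population-vector readout over n_post evenly spaced motoneuron directions)
angles = 2.0 * np.pi * np.arange(n_post) / n_post
vx = W.T @ np.cos(angles)          # one component per visual cell
vy = W.T @ np.sin(angles)
preferred = np.arctan2(vy, vx)     # preferred angular direction, per visual cell
```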
References
Bartlett P, Baxter J: Synaptic modifications in spiking neurons that learn. Technical report, Australian National University; 1999.
Florian R: A reinforcement learning algorithm for spiking neural networks. Proc of the Seventh International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC'05); 2005: 299-306.
Acknowledgements
The author thanks INRIA Lille - Nord Europe for a one-year delegation in the SEQUEL team.
This work is supported by the French ANR project MAPS (ANR-07-BLAN-0335-02).
Cite this article
Daucé, E. A model of cell specialization using a Hebbian policy-gradient approach with "slow" noise. BMC Neurosci 10 (Suppl 1), P136 (2009). https://doi.org/10.1186/1471-2202-10-S1-P136