Volume 13 Supplement 1

Twenty First Annual Computational Neuroscience Meeting: CNS*2012

Open Access

Convergence analysis of efficient online learning in Bayesian spiking neurons

  • Andre Van Schaik1,
  • Levin Kuhlmann2,
  • Michael Hauser-Raspe1,
  • Jonathan Manton2,
  • Jonathan Tapson1 and
  • David B Grayden2
BMC Neuroscience 2012, 13(Suppl 1):P129

DOI: 10.1186/1471-2202-13-S1-P129

Published: 16 July 2012

Bayesian spiking neurons (BSNs) provide a probabilistic and intuitive interpretation of how spiking neurons could work, and have been shown to be equivalent to leaky integrate-and-fire neurons under certain conditions [1]. The study of BSNs has been restricted mainly to small networks because online learning, which currently relies on a maximum-likelihood expectation-maximisation (ML-EM) approach [2, 3], is quite slow. Here a new approach to estimating the parameters of BSNs, referred to as fast learning (FL), is presented and compared to online ML-EM learning.

Learning in a BSN is local to the neuron and involves estimating the transition rate and observation rate parameters of an implicit underlying hidden Markov model (HMM), whose hidden state the BSN output encodes [1]. Rather than estimating the parameters by maximising the log-likelihood of the hidden states and the synaptic observations given the parameters, as is done in ML-EM [2, 3], the FL algorithm directly calculates the statistics upon which the parameters depend. This is achieved by exploiting the relationship between the log-odds ratio of the hidden state computed by the BSN and the probability that the hidden state is 'on' given the past synaptic observations, P(x_t = 1 | s_0:t).
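The relationship exploited here is the standard logistic map between a log-odds ratio and a posterior probability; a minimal sketch (function names are illustrative, not from the original implementation):

```python
import math

def logodds_to_prob(L):
    """Map a log-odds ratio L = log(P(on)/P(off)) to the posterior
    probability that the hidden state is 'on'."""
    return 1.0 / (1.0 + math.exp(-L))

def prob_to_logodds(p):
    """Inverse map: recover the log-odds from a probability in (0, 1)."""
    return math.log(p / (1.0 - p))
```

Because the BSN already computes the log-odds ratio, the 'on' probability needed for the FL statistics is available at essentially no extra cost.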

Online learning in a two-neuron BSN hierarchy is explored, in which the first neuron receives N = 20 synapses driven by Poisson processes and the second neuron receives input only from the first neuron. Simulations were performed for a fixed set of transition and observation rates under 10 different perturbation conditions for the initial transition and observation rate estimates: ±0-20%, ±20-40%, ..., ±180-200%. Initial rates were not allowed to fall below 10^-6. Each perturbation condition was simulated 100 times with randomly selected initial parameter values.
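The perturbation scheme above can be sketched as follows (the helper and its names are illustrative assumptions, not the authors' simulation code):

```python
import random

MIN_RATE = 1e-6  # floor applied to the initial rate estimates

def perturb_rate(true_rate, band_lo, band_hi, rng=random):
    """Perturb a true rate by a factor drawn uniformly from
    [band_lo, band_hi] (e.g. 0.2-0.4 for the ±20-40% condition),
    with a random sign, clipped from below at MIN_RATE."""
    magnitude = rng.uniform(band_lo, band_hi)
    sign = rng.choice([-1.0, 1.0])
    return max(MIN_RATE, true_rate * (1.0 + sign * magnitude))
```

Each of the 100 runs per condition would draw fresh perturbed initial values for every transition and observation rate in this way.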

Although the FL algorithm is not as accurate as ML-EM at estimating the true parameter values for small perturbations of the initial rate estimates relative to the true rates, it reliably estimates the parameters for initial perturbations of up to 200%, whereas the ML-EM estimates begin to deviate for perturbations beyond about 40-60%. Moreover, the simplicity of the FL algorithm means that it runs on the order of 25 times faster than the ML-EM implementation considered. These results hold for both the first and second neurons in the two-neuron BSN hierarchy. For the first neuron in the hierarchy, the RMS difference between the time series of the probability P(x_t = 1 | s_0:t) calculated with the estimated and the true parameter values follows a similar pattern to the parameter estimates when the FL and ML-EM algorithms are compared, with average RMS errors of 0.2% obtained for the FL algorithm across the range of perturbations studied.
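The RMS comparison between the probability time series under estimated versus true parameters amounts to the following (a hypothetical helper, not the authors' code):

```python
import math

def rms_difference(p_est, p_true):
    """Root-mean-square difference between two equal-length probability
    time series, e.g. P(x_t = 1 | s_0:t) computed with estimated
    parameters versus with the true parameters."""
    if len(p_est) != len(p_true):
        raise ValueError("series must have equal length")
    return math.sqrt(
        sum((a - b) ** 2 for a, b in zip(p_est, p_true)) / len(p_est)
    )
```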

Although we do not have a formal proof of convergence of the FL algorithm, we conclude that the FL algorithm can stably estimate the parameters over a large range of initial perturbations and it can do this very quickly. Thus the FL algorithm makes online learning in networks of BSNs much more tractable.



This work was supported by ARC Discovery Project grant DP1096699 and the University of Western Sydney.

Authors’ Affiliations

1. MARCS Institute, University of Western Sydney
2. NeuroEngineering Laboratory, Department of Electrical and Electronic Engineering, The University of Melbourne


  1. Deneve S: Bayesian spiking neurons I: inference. Neural Comput. 2008, 20: 91-117. doi:10.1162/neco.2008.20.1.91.
  2. Deneve S: Bayesian spiking neurons II: learning. Neural Comput. 2008, 20: 118-145. doi:10.1162/neco.2008.20.1.118.
  3. Mongillo G, Deneve S: Online learning with hidden Markov models. Neural Comput. 2008, 20: 1706-1716. doi:10.1162/neco.2008.10-06-351.


© Van Schaik et al; licensee BioMed Central Ltd. 2012

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.