Volume 16 Supplement 1

24th Annual Computational Neuroscience Meeting: CNS*2015

Open Access

Approximate nonlinear filtering with a recurrent neural network

  • Anna Kutschireiter1,
  • Simone C Surace1, 2,
  • Henning Sprekeler3 and
  • Jean-Pascal Pfister1
BMC Neuroscience 2015, 16(Suppl 1):P196

https://doi.org/10.1186/1471-2202-16-S1-P196

Published: 18 December 2015

One of the most fascinating properties of the brain is its ability to continuously extract relevant features from a changing environment. Because sensory inputs are not perfectly reliable, this task becomes even more challenging. The problem can be formalized as a filtering problem, where the aim is to infer the state of a dynamically changing hidden variable from noisy observations. A well-known solution is the Kalman filter for linear hidden dynamics, or the extended Kalman filter for nonlinear dynamics. Particle filters, on the other hand, offer a sampling-based approximation of the posterior distribution. However, it remains unclear how these filtering algorithms could be implemented in neural tissue. Here, we propose a neuronal dynamics that approximates nonlinear filtering.
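For concreteness, the sketch below (not part of the original abstract) sets up a one-dimensional linear-Gaussian state-space model and runs the standard Kalman recursion on its noisy observations; all parameter names and values (a, c, q, r) are illustrative assumptions.

```python
# Minimal sketch (illustrative): a 1-D linear-Gaussian state-space model
# x_t with noisy observations y_t, filtered with the standard Kalman recursion.
# Parameters a, c, q, r are assumptions chosen for illustration.
import numpy as np

rng = np.random.default_rng(0)
T = 200
a, c = 0.95, 1.0      # state-transition and observation gains
q, r = 0.1, 0.5       # process and observation noise variances

# Simulate the hidden state and its noisy observations
x = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + np.sqrt(q) * rng.standard_normal()
    y[t] = c * x[t] + np.sqrt(r) * rng.standard_normal()

# Kalman filter: posterior mean m_t and variance P_t
m, P = np.zeros(T), np.ones(T)
for t in range(1, T):
    m_pred = a * m[t - 1]                     # predict
    P_pred = a**2 * P[t - 1] + q
    K = P_pred * c / (c**2 * P_pred + r)      # Kalman gain
    m[t] = m_pred + K * (y[t] - c * m_pred)   # update with observation
    P[t] = (1 - K * c) * P_pred

print("RMSE of Kalman estimate:", np.sqrt(np.mean((m - x) ** 2)))
```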

Starting from the formal mathematical solution of the nonlinear filtering problem, the Kushner equation [1], and assuming linear, noisy observations, we derive a stochastic rate-based network whose activity samples from the posterior dynamics. We find that samples drawn from these stochastic posterior dynamics solve the inference task with a performance comparable to that of standard particle filtering or (extended) Kalman filtering. Indeed, for linear hidden dynamics we exactly recover the Kalman filter equations from our neural filter. In Figure 1 we show the error of the filtered estimate as a function of the observation noise for two different parameter choices in our filter equations.
Figure 1

Left: A sample trajectory of the true hidden state and its filtered estimate, illustrating the ability of the neural filter to infer the hidden variable. Right: For nonlinear hidden dynamics, the proposed neuronal filter achieves an estimation error comparable to that of a particle filter or an extended Kalman filter (EKF). The worst-performing neuronal filter corresponds to our filter with a suboptimal parameter choice.
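As a point of comparison (again an illustrative sketch, not the authors' neural filter), the snippet below implements the kind of bootstrap particle filter used as a baseline in Figure 1, applied to an assumed nonlinear hidden dynamics with linear, noisy observations; the drift f(x) and all noise parameters are assumptions.

```python
# Minimal sketch (illustrative): a bootstrap particle filter for a nonlinear
# hidden dynamics with linear, noisy observations -- the kind of baseline the
# abstract compares against. The drift f(x) and noise levels are assumptions.
import numpy as np

rng = np.random.default_rng(1)
T, N = 200, 500              # time steps and number of particles
dt, q, r = 0.01, 0.5, 0.1    # time step, process and observation noise

def f(x):
    # Assumed nonlinear drift (double-well); the abstract does not specify one.
    return 4.0 * x * (1.0 - x**2)

# Simulate the hidden state and its noisy observations
x_true = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x_true[t] = x_true[t - 1] + f(x_true[t - 1]) * dt \
                + np.sqrt(q * dt) * rng.standard_normal()
    y[t] = x_true[t] + np.sqrt(r) * rng.standard_normal()

# Bootstrap particle filter: propagate, weight by likelihood, resample
particles = rng.standard_normal(N)
estimate = np.zeros(T)
for t in range(1, T):
    particles = particles + f(particles) * dt \
                + np.sqrt(q * dt) * rng.standard_normal(N)
    logw = -0.5 * (y[t] - particles) ** 2 / r       # Gaussian log-likelihood
    w = np.exp(logw - logw.max())
    w /= w.sum()
    estimate[t] = np.sum(w * particles)             # posterior-mean estimate
    particles = rng.choice(particles, size=N, p=w)  # multinomial resampling

print("RMSE of particle-filter estimate:",
      np.sqrt(np.mean((estimate - x_true) ** 2)))
```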

Thus, the neuronal filter we propose provides an efficient way to infer the state of temporally changing hidden variables. In addition, owing to the locality of the underlying mathematical model, the filter is biologically plausible from a neural-sampling perspective, and thus provides a possible framework for the neural sampling hypothesis [2].

Authors’ Affiliations

(1)
Institute of Neuroinformatics, University of Zurich and ETH Zurich
(2)
Department of Physiology, University of Bern
(3)
Bernstein Center for Computational Neuroscience, Technical University Berlin

References

  1. Kushner H: On the differential equations satisfied by conditional probability densities of Markov processes, with applications. Journal of the Society for Industrial and Applied Mathematics, Series A: Control. 1962, 2 (1).
  2. Fiser J, Berkes P, Orbán G, Lengyel M: Statistically optimal perception and learning: from behavior to neural representations. Trends in Cognitive Sciences. 2010, 14 (3): 119-130.

Copyright

© Kutschireiter et al. 2015

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.