- Poster presentation
- Open access
Sound localization with spiking neural networks
BMC Neuroscience volume 10, Article number: P313 (2009)
Introduction
The ability of various species to localize sounds – estimating with reasonable accuracy the direction from which a given sound source is coming – is thought to rely on several cues [1]. Differences in the arrival time of the sound at the two ears (interaural time difference, ITD) and differences in intensity (interaural level difference, ILD) are thought to be the main cues for estimating the azimuth of a sound source. Estimating elevation, and whether a sound comes from the front or the back, is usually thought to involve monaural spectral cues arising from the anisotropic filtering properties of the head and outer ears.
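To make the ITD cue concrete, here is a minimal sketch (not from the original work; all signal parameters are illustrative) that estimates the ITD of a broadband noise source by cross-correlating the two ear signals and reading off the lag of the correlation peak:

```python
import numpy as np

fs = 44100                      # sample rate (Hz)
itd_true = 0.0004               # true interaural time difference: 0.4 ms
t = np.arange(0, 0.05, 1 / fs)  # 50 ms of signal

rng = np.random.default_rng(0)
source = rng.standard_normal(t.size)   # broadband noise source
shift = int(round(itd_true * fs))      # ITD expressed in whole samples
left = source
right = np.roll(source, shift)         # right ear receives a delayed copy

# Cross-correlate the two ear signals; the lag of the peak estimates the ITD.
lags = np.arange(-shift * 4, shift * 4 + 1)
corr = [np.dot(left, np.roll(right, -lag)) for lag in lags]
itd_est = lags[int(np.argmax(corr))] / fs
print(f"estimated ITD: {itd_est * 1000:.2f} ms")
```

The peak lag recovers the imposed delay to within one sample; physiologically plausible ITDs for humans are under about 0.7 ms, which is why neural ITD detection requires sub-millisecond temporal precision.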
Spiking neural network framework
We present a framework for analyzing these cues using spiking neural networks, extending the Jeffress model of ITD sensitivity [2]. Coincidence detector neurons perform a similarity operation on their inputs. Using this similarity mechanism, networks can be designed in which neurons are sensitive to sounds coming from particular locations. Mechanisms underlying ITD, ILD and spectral filtering sensitivity can all be addressed in this framework. In particular, we demonstrate a very simple neural network that exhibits spatial sensitivity. The network consists of a matrix of coincidence detectors, each receiving one input from the left ear and one from the right, each passed through a different cochlear filter. The binaural neurons in the network are sensitive to both ITD and ILD cues. Maximum likelihood estimation based on the output of these neurons can localize sounds with very high accuracy, suggesting that a distributed code based on these very simple neurons provides sufficient information to estimate sound location.
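A Jeffress-style delay-line readout can be sketched as follows. This is a toy illustration rather than the model itself: spike trains, jitter, delay range and coincidence window are all assumed values. Each "detector" counts near-coincident arrivals after applying its own internal delay, and the best-responding detector indicates the ITD:

```python
import numpy as np

rng = np.random.default_rng(1)
itd = 0.0003                               # sound arrives 0.3 ms earlier at the left ear

# Phase-locked spike trains from the two ears (jittered copies, illustrative).
left_spikes = np.sort(rng.uniform(0, 0.1, 200))
right_spikes = left_spikes + itd + rng.normal(0, 5e-5, left_spikes.size)

delays = np.linspace(-0.001, 0.001, 41)    # internal delay line, -1 ms .. +1 ms
window = 1e-4                              # coincidence window (100 us)

# Each coincidence detector counts near-simultaneous arrivals after its delay.
counts = np.array([
    np.sum(np.min(np.abs((left_spikes + d)[:, None] - right_spikes[None, :]),
                  axis=1) < window)
    for d in delays
])

best = delays[int(np.argmax(counts))]      # readout: pick the best-tuned detector
print(f"best internal delay: {best * 1000:.2f} ms")
```

Picking the single best-tuned detector is only the crudest readout; the maximum likelihood estimation referred to above would instead combine the full population response, which is what makes the distributed code informative about both ITD and ILD.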
Learning
The spiking neural network model we have developed uses synchrony and distributed codes, making it an ideal candidate for studying learning with spike-timing-dependent plasticity (STDP) [3, 4], which has been observed at several sites in the auditory system [5]. Experiments have shown that humans can relearn to localize sounds when head or ear shape changes (and with it the filtering properties) [6]. We investigate how plasticity might explain these findings in our model.
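For readers unfamiliar with STDP, the canonical exponential kernel of [3, 4] can be written in a few lines (parameter values here are illustrative, not the model's actual learning rule):

```python
import numpy as np

# Exponential STDP window: potentiation when the presynaptic spike precedes
# the postsynaptic one, depression otherwise (parameter values illustrative).
A_plus, A_minus = 0.01, 0.012
tau_plus, tau_minus = 0.020, 0.020   # time constants (s)

def stdp(dt):
    """Weight change for a spike-time difference dt = t_post - t_pre."""
    if dt >= 0:
        return A_plus * np.exp(-dt / tau_plus)     # pre before post: potentiate
    return -A_minus * np.exp(dt / tau_minus)       # post before pre: depress

print(stdp(0.005), stdp(-0.005))
```

Under such a rule, synapses that reliably fire just before the postsynaptic neuron are strengthened, so coincidence detectors tuned to the newly correct delays and filters can win the competition after the acoustic cues change.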
References
1. Middlebrooks JC, Green DM: Sound localization by human listeners. Annu Rev Psychol. 1991, 42: 135-159. 10.1146/annurev.ps.42.020191.001031.
2. Jeffress LA: A place theory of sound localization. J Comp Physiol Psychol. 1948, 41: 35-39. 10.1037/h0061495.
3. Gerstner W, Kempter R, van Hemmen JL, Wagner H: A neuronal learning rule for sub-millisecond temporal coding. Nature. 1996, 383: 76-78. 10.1038/383076a0.
4. Song S, Miller KD, Abbott LF: Competitive Hebbian learning through spike-timing-dependent synaptic plasticity. Nat Neurosci. 2000, 3: 919-926. 10.1038/78829.
5. Tzounopoulos T, Kim Y, Oertel D, Trussell LO: Cell-specific, spike timing-dependent plasticities in the dorsal cochlear nucleus. Nat Neurosci. 2004, 7: 719-725. 10.1038/nn1272.
6. Hofman PM, Van Riswick JGA, Van Opstal AJ: Relearning sound localization with new ears. Nat Neurosci. 1998, 1: 417-421. 10.1038/1633.
Acknowledgements
Work partially supported by ANR-NEURO-22-01.
Rights and permissions
Open Access This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution 2.0 License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
About this article
Cite this article
Goodman, D., Pressnitzer, D. & Brette, R. Sound localization with spiking neural networks. BMC Neurosci 10 (Suppl 1), P313 (2009). https://doi.org/10.1186/1471-2202-10-S1-P313