- Poster presentation
- Open Access
Sound localization with spiking neural networks
© Goodman et al; licensee BioMed Central Ltd. 2009
- Published: 13 July 2009
- Sound Source
- Sound Localization
- Interaural Time Difference
- Coincidence Detector
- Interaural Level Difference
The ability of various species to localize sounds – estimating with reasonable accuracy the direction from which a given sound source is coming – is thought to make use of several cues. Differences in arrival times of the sound between the two ears (interaural time difference, ITD) and differences in intensity (interaural level difference, ILD) are thought to be the main cues for estimating the azimuth of a sound source. Estimating the elevation, and whether the sound comes from the front or the back, is usually thought to involve monaural spectral cues arising from the anisotropic filtering properties of the head and outer ears.
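As a concrete illustration of the ITD cue (not part of the original model), the classical Woodworth spherical-head approximation relates azimuth to interaural delay. The parameter values below (head radius, speed of sound) are typical illustrative figures, not values taken from this study:

```python
import math

def woodworth_itd(azimuth_deg, head_radius=0.0875, speed_of_sound=343.0):
    """Approximate ITD (in seconds) for a rigid spherical head.

    Woodworth's formula: ITD = (a / c) * (theta + sin(theta)),
    where a is the head radius and theta the source azimuth.
    """
    theta = math.radians(azimuth_deg)
    return (head_radius / speed_of_sound) * (theta + math.sin(theta))
```

For a source directly to one side (90 degrees azimuth) this gives an ITD of roughly 0.65 ms, consistent with the sub-millisecond range over which binaural coincidence detectors must operate.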
We present a framework for analyzing these cues using spiking neural networks, extending the Jeffress model of ITD sensitivity. Coincidence detector neurons perform a similarity operation on their inputs. Using this similarity mechanism, networks can be designed whose neurons are sensitive to sounds coming from particular locations. Mechanisms underlying ITD, ILD and spectral filtering sensitivity can all be addressed in this framework. In particular, we demonstrate a very simple neural network that exhibits spatial sensitivity. The network consists of a matrix of coincidence detectors, each receiving one input from the left ear and one from the right, each passed through different cochlear filters. The binaural neurons in the network are sensitive to both ITD and ILD cues. Maximum likelihood estimation based on the output of these neurons can localize sounds to a very high degree of accuracy, suggesting that a distributed code based on these very simple neurons provides sufficient information to estimate sound location.
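The Jeffress-style place code underlying this network can be sketched in a few lines: each candidate internal delay feeds a coincidence detector, and the detector that fires most indicates the best-matching ITD. This is a deliberately minimal, rate-counting caricature of the spiking model described above; all names, spike times and parameters are illustrative:

```python
def coincidence_counts(left_spikes, right_spikes, candidate_delays, window=0.05e-3):
    """Count near-coincident spike pairs for each candidate internal delay.

    The internal delay is applied to the left pathway; the detector whose
    delay compensates the external ITD receives the most coincidences.
    """
    counts = []
    for d in candidate_delays:
        shifted = [t + d for t in left_spikes]
        n = sum(1 for tl in shifted
                if any(abs(tl - tr) < window for tr in right_spikes))
        counts.append(n)
    return counts

# Toy input: the right-ear spikes lag the left ear by 0.3 ms
left = [0.001, 0.005, 0.009]
right = [t + 0.3e-3 for t in left]
delays = [0.0, 0.1e-3, 0.2e-3, 0.3e-3, 0.4e-3]
counts = coincidence_counts(left, right, delays)
best = delays[counts.index(max(counts))]   # place code: argmax over detectors
```

In the full model, reading out the whole matrix of detectors (across cochlear filters as well as delays) with maximum likelihood plays the role of this simple argmax.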
The spiking neural network model we have developed uses synchrony and distributed codes. This makes it an ideal candidate for studying learning using spike-timing-dependent plasticity (STDP) [3, 4], which has been observed in several places in the auditory system. Experiments have shown that humans are able to relearn to localize sounds when head or ear shape changes (thus changing the filtering properties). We investigate how plasticity might explain these findings in our model.
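For reference, the standard pair-based STDP window used in studies such as [3, 4] can be written as a simple function of the pre/post spike-time difference. The amplitudes and time constant below are generic textbook values, not parameters from this model:

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20e-3):
    """Pair-based STDP weight change for dt = t_post - t_pre (seconds).

    Pre-before-post (dt > 0) potentiates; post-before-pre depresses,
    each with an exponential dependence on the timing difference.
    """
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)
```

Because synchrony carries the localization information in this framework, such a timing-sensitive rule can in principle selectively strengthen the binaural inputs that arrive in coincidence for a given source location.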
Work partially supported by ANR-NEURO-22-01.
- Middlebrooks JC, Green DM: Sound localization by human listeners. Annu Rev Psychol. 1991, 42: 135-159. 10.1146/annurev.ps.42.020191.001031.
- Jeffress LA: A place theory of sound localization. J Comp Physiol Psychol. 1948, 41: 35-39. 10.1037/h0061495.
- Gerstner W, Kempter R, van Hemmen JL, Wagner H: A neuronal learning rule for sub-millisecond temporal coding. Nature. 1996, 383: 76-78. 10.1038/383076a0.
- Song S, Miller KD, Abbott LF: Competitive Hebbian learning through spike-timing-dependent synaptic plasticity. Nat Neurosci. 2000, 3: 919-926. 10.1038/78829.
- Tzounopoulos T, Kim Y, Oertel D, Trussell LO: Cell-specific, spike timing-dependent plasticities in the dorsal cochlear nucleus. Nat Neurosci. 2004, 7: 719-725. 10.1038/nn1272.
- Hofman PM, Van Riswick JGA, Van Opstal AJ: Relearning sound localization with new ears. Nat Neurosci. 1998, 1: 417-421. 10.1038/1633.