Volume 10 Supplement 1

Eighteenth Annual Computational Neuroscience Meeting: CNS*2009

Open Access

Sound localization with spiking neural networks

BMC Neuroscience 2009, 10(Suppl 1):P313

DOI: 10.1186/1471-2202-10-S1-P313

Published: 13 July 2009


The ability of various species to localize sounds, that is, to estimate with reasonable accuracy the direction from which a given sound arrives, is thought to rely on several cues [1]. Differences in the arrival time of the sound at the two ears (interaural time difference, ITD) and differences in intensity (interaural level difference, ILD) are thought to be the main cues for estimating the azimuth of a sound source. Estimating the elevation, and whether the sound comes from the front or the back, is usually thought to involve monaural spectral cues arising from the anisotropic filtering properties of the head and outer ears.
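To make the ITD cue concrete, the sketch below uses Woodworth's classical spherical-head approximation, which is not part of this abstract; the head radius and speed of sound are assumed, typical values.

```python
import numpy as np

def itd_woodworth(azimuth_deg, a=0.0875, c=343.0):
    """Approximate interaural time difference (s) for a spherical head.

    Woodworth's formula: ITD ~ (a / c) * (theta + sin(theta)), where a
    is the head radius (m, assumed value), c the speed of sound (m/s)
    and theta the source azimuth. Illustrative only, not the model
    described in this abstract.
    """
    theta = np.radians(azimuth_deg)
    return (a / c) * (theta + np.sin(theta))

# A source 45 degrees off the midline reaches the far ear ~0.38 ms late.
print(itd_woodworth(45.0))  # ~3.8e-4 s
```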

Spiking neural network framework

We present a framework for analyzing these cues using spiking neural networks, extending the Jeffress model of ITD sensitivity [2]. Coincidence detector neurons perform a similarity operation on their inputs, and with this similarity mechanism networks can be designed in which neurons are sensitive to sounds coming from particular locations. Mechanisms underlying ITD, ILD and spectral filtering sensitivity can all be addressed in this framework. In particular, we demonstrate a very simple neural network that exhibits spatial sensitivity. The network consists of a matrix of coincidence detectors, each receiving one input from the left ear and one from the right, each passed through a different cochlear filter. The binaural neurons in this network are sensitive to both ITD and ILD cues. Maximum likelihood estimation based on their output localizes sounds with high accuracy, suggesting that a distributed code over these very simple neurons carries sufficient information to estimate sound location.
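As a deliberately simplified, non-spiking caricature of the coincidence idea (our own sketch, not the authors' implementation), each detector below is idealized as a cross-correlation of the two ear signals at one internal delay; the two inputs are assumed to have already been cochlear-filtered. The detector whose internal delay compensates the imposed ITD responds most strongly.

```python
import numpy as np

def coincidence_map(left, right, fs, delays):
    """Jeffress-style array: one idealized coincidence detector per
    internal delay, reduced here to a windowed cross-correlation of
    the left and right ear signals (assumed already filtered)."""
    out = np.empty(len(delays))
    for i, d in enumerate(delays):
        s = int(round(d * fs))
        if s >= 0:
            out[i] = np.dot(left[s:], right[:len(right) - s])
        else:
            out[i] = np.dot(left[:s], right[-s:])
    return out

# Toy usage: a 500 Hz tone reaching the right ear 0.3 ms after the left.
fs = 44100.0
t = np.arange(0, 0.05, 1 / fs)
sig = np.sin(2 * np.pi * 500 * t)
lag = int(round(3e-4 * fs))
left, right = sig[lag:], sig[:len(sig) - lag]
delays = np.linspace(-8e-4, 8e-4, 81)
best = delays[np.argmax(coincidence_map(left, right, fs, delays))]
print(best)  # ~ -3e-4 s: the best internal delay compensates the lag
```

A full model replaces the correlation with spiking coincidence detectors and, as in the abstract, pairs different cochlear filters across the two ears so that the same matrix also picks up ILD and spectral cues.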


The spiking neural network model we have developed relies on synchrony and distributed codes, which makes it an ideal candidate for studying learning with spike-timing-dependent plasticity (STDP) [3, 4], a mechanism that has been observed in several parts of the auditory system [5]. Experiments have shown that humans can relearn to localize sounds when the shape of the head or ears changes (and with it the filtering properties) [6]. We investigate how plasticity might explain these findings in our model.
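For reference, the pair-based STDP rule of [3, 4] can be written as an exponential window over the pre/post spike-time difference; the amplitudes and time constant below are illustrative textbook-style values, not parameters taken from this abstract.

```python
import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=0.020):
    """Pair-based STDP window: weight change as a function of
    dt = t_post - t_pre (s). Pre-before-post (dt > 0) potentiates,
    the reverse order depresses; amplitudes and the 20 ms time
    constant are assumed values."""
    dt = np.asarray(dt, dtype=float)
    return np.where(dt > 0,
                    a_plus * np.exp(-dt / tau),
                    -a_minus * np.exp(dt / tau))

print(stdp_dw([0.005, -0.005]))  # ~[+0.0078, -0.0093]
```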



Work partially supported by ANR-NEURO-22-01.

Authors’ Affiliations

Équipe Audition (CNRS, ENS, Université Paris Descartes), Département d'Etudes Cognitives, École Normale Supérieure


  1. Middlebrooks JC, Green DM: Sound localization by human listeners. Annu Rev Psychol. 1991, 42: 135-159. doi:10.1146/annurev.ps.42.020191.001031.
  2. Jeffress LA: A place theory of sound localization. J Comp Physiol Psychol. 1948, 41: 35-39. doi:10.1037/h0061495.
  3. Gerstner W, Kempter R, van Hemmen JL, Wagner H: A neuronal learning rule for sub-millisecond temporal coding. Nature. 1996, 383: 76-78. doi:10.1038/383076a0.
  4. Song S, Miller KD, Abbott LF: Competitive Hebbian learning through spike-timing-dependent synaptic plasticity. Nat Neurosci. 2000, 3: 919-926. doi:10.1038/78829.
  5. Tzounopoulos T, Kim Y, Oertel D, Trussell LO: Cell-specific, spike timing-dependent plasticities in the dorsal cochlear nucleus. Nat Neurosci. 2004, 7: 719-725. doi:10.1038/nn1272.
  6. Hofman PM, Van Riswick JGA, Van Opstal AJ: Relearning sound localization with new ears. Nat Neurosci. 1998, 1: 417-421. doi:10.1038/1633.


© Goodman et al; licensee BioMed Central Ltd. 2009

This article is published under license to BioMed Central Ltd.