Volume 12 Supplement 1

Twentieth Annual Computational Neuroscience Meeting: CNS*2011

Open Access

A functional spiking model of the ITD processing pathway of the barn owl

BMC Neuroscience 2011, 12(Suppl 1):P20

DOI: 10.1186/1471-2202-12-S1-P20

Published: 18 July 2011

Sound localization in the barn owl relies on the early processing of two binaural cues, interaural time differences (ITDs) and interaural level differences (ILDs), which takes place in two parallel pathways in the auditory brainstem. Previous modeling studies fell into one of two classes: either they applied biophysically plausible neuron models to simplified binaural stimuli with artificially induced ITDs, or abstract cross-correlation models to realistic acoustical inputs. Here we present a functional spiking neural model of the ITD processing pathway of the barn owl, driven by a realistic virtual acoustic environment.

The acoustical environment was reproduced by filtering natural sounds through head-related transfer functions (HRTFs) measured in the barn owl (Tyto alba). Basilar membrane filtering was modeled with a bank of linear gammachirp filters followed by compression. Monaural neurons in the Nucleus Magnocellularis (NM) receive inputs from auditory nerve fibers and project axons with various conduction delays to binaural neurons in the Nucleus Laminaris (NL), so that best delays depend on preferred frequency in accordance with measured distributions [1].
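The delay-line arrangement described above can be illustrated with a minimal Jeffress-style sketch (a toy example, not the authors' spiking implementation): a binaural coincidence detector responds most strongly when its internal conduction delay compensates the stimulus ITD. Here coincidence is approximated by correlating the delayed left signal with the right signal; the sample rate, tone frequency, and imposed ITD are illustrative assumptions.

```python
import math

FS = 44100            # sample rate (Hz), illustrative
FREQ = 500.0          # tone frequency (Hz), illustrative
ITD_SAMPLES = 10      # imposed interaural time difference, in samples

n = 2048
left = [math.sin(2 * math.pi * FREQ * t / FS) for t in range(n)]
# the right ear receives the same tone delayed by the ITD
right = [0.0] * ITD_SAMPLES + left[:n - ITD_SAMPLES]

def coincidence(delay):
    """Correlate the left signal, delayed by `delay` samples, with the right signal."""
    shifted = [0.0] * delay + left[:n - delay]
    return sum(a * b for a, b in zip(shifted, right))

# the detector whose internal delay cancels the ITD responds most
best = max(range(30), key=coincidence)
print(best)  # 10
```

In the model, this compensation arises from axonal conduction delays rather than an explicit correlation, but the readout logic is the same: the best internal delay identifies the stimulus ITD.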

Empirical findings show that the ITD tuning of NL neurons is invariant to changes in level and ILD [3], which implies that the spike timing of their monaural inputs should also be level-invariant. This is a challenging problem, since integration to a fixed threshold inevitably produces earlier spikes when the input level is increased. We address this issue with a spiking model with a dynamic threshold [2, 4]. We derive analytical conditions under which this level invariance can be achieved, and show through numerical simulations that the responses of our model to realistic inputs are indeed robust to changes in level and ILD.
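The effect of a dynamic threshold on first-spike latency can be sketched with a toy simulation (again our own illustration, not the authors' model): a unit driven by a half-wave rectified tone at two input levels. With a fixed threshold, the higher level crosses threshold much earlier; with a threshold that tracks the membrane potential, the latency shift is far smaller. All parameters below are illustrative assumptions.

```python
import math

DT = 1e-6          # time step (s)
FREQ = 500.0       # stimulus frequency (Hz)
TAU_TH = 2e-4      # threshold adaptation time constant (s), assumed
K = 1.5            # coupling of threshold to membrane potential, assumed
TH0 = 0.05         # resting threshold, assumed
TH_FIXED = 0.5     # fixed threshold for comparison, assumed

def stimulus(t, level):
    """Half-wave rectified tone, scaled by input level."""
    return level * max(0.0, math.sin(2 * math.pi * FREQ * t))

def first_crossing(level, dynamic):
    """Time of the first threshold crossing (Euler integration over 5 ms)."""
    theta = TH0 if dynamic else TH_FIXED
    t = 0.0
    for _ in range(5000):
        v = stimulus(t, level)
        if v >= theta:
            return t
        if dynamic:
            # the threshold relaxes toward TH0 + K * v, so sustained
            # depolarization raises it and only fast transients cross it
            theta += DT * (TH0 + K * v - theta) / TAU_TH
        t += DT
    return None

shift_fixed = first_crossing(1.0, False) - first_crossing(4.0, False)
shift_dynamic = first_crossing(1.0, True) - first_crossing(4.0, True)
print(shift_fixed, shift_dynamic)  # the dynamic-threshold shift is much smaller
```

Because the adaptive threshold scales with the input, the crossing condition becomes approximately level-independent, which is the intuition behind the analytical conditions mentioned above.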

The azimuth is estimated from the activation pattern of NL neurons. We found that the model could accurately estimate the azimuth of a large set of natural sounds presented at various levels and with significant background noise. Furthermore, we studied the robustness of the model's ITD estimation in reflective environments. We believe this study is a first step towards the development of neural models of sound localization in ecological environments.
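One simple readout of such an activation pattern (the abstract does not specify the decoder, so this winner-take-all scheme is an assumption) is to report the preferred azimuth of the most active unit:

```python
# Hypothetical population readout: each NL-like unit has a preferred azimuth
# (assumed 10-degree grid); the estimate is the preferred azimuth of the most
# active unit in the pattern.

preferred_azimuths = [-90 + 10 * i for i in range(19)]   # degrees

def decode_azimuth(activity):
    """Return the preferred azimuth of the most active unit."""
    best = max(range(len(activity)), key=lambda i: activity[i])
    return preferred_azimuths[best]

# toy activation pattern peaking at the unit tuned to +20 degrees
activity = [max(0.0, 1.0 - abs(az - 20) / 40.0) for az in preferred_azimuths]
print(decode_azimuth(activity))  # 20
```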



This work was supported by the European Research Council (ERC StG 240132).

Authors’ Affiliations

Equipe Audition, Département d'Etudes Cognitives, Ecole Normale Supérieure
Laboratoire Psychologie de la Perception, CNRS and Université Paris Descartes


  1. Wagner H, Asadollahi A, Bremen P, Endler F, Vonderschen K, von Campenhausen M: Distribution of interaural time difference in the barn owl's inferior colliculus in the low- and high-frequency ranges. J Neurosci 2007, 27:4191-4200. doi:10.1523/JNEUROSCI.5250-06.2007.
  2. Platkiewicz J, Brette R: Threshold equation for action potential initiation. PLoS Comput Biol 2010, 6:e1000850. doi:10.1371/journal.pcbi.1000850.
  3. Konishi M: Coding of auditory space. Annu Rev Neurosci 2003, 26:31-55. doi:10.1146/annurev.neuro.26.041002.131123.
  4. Howard MKA, Rubel EW: Dynamic spike thresholds during synaptic integration preserve and enhance temporal response properties in the avian cochlear nucleus. J Neurosci 2010, 30:12063-12074. doi:10.1523/JNEUROSCI.1840-10.2010.


© Benichoux and Brette; licensee BioMed Central Ltd. 2011

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.