Volume 10 Supplement 1

Eighteenth Annual Computational Neuroscience Meeting: CNS*2009

Open Access

Multilinear models for the auditory brainstem

  • Bernhard Englitz (1, 2),
  • Misha Ahrens (3),
  • Sandra Tolnai (2, 4),
  • Rudolf Rübsamen (2),
  • Maneesh Sahani (3) and
  • Jürgen Jost (1)
BMC Neuroscience 2009, 10(Suppl 1):P312

DOI: 10.1186/1471-2202-10-S1-P312

Published: 13 July 2009

The representation of acoustic stimuli at the level of the brainstem forms the basis for further auditory processing. While some simple characteristics of this representation are widely accepted, it remains a challenge to predict the firing rate at high temporal resolution in response to arbitrary stimuli. Such predictive models would be helpful tools for further investigations, in particular of sound localization. Devising a model involves several choices: the stimulus representation, the modeling framework, and the performance measure. In this study we explore these choices for single-cell responses from the medial nucleus of the trapezoid body (MNTB), a well-identifiable and homogeneous neuronal population. Detailed models of MNTB responses have not been studied before. We estimate a recently introduced family of models, the multilinear models ([1], Figure 1), which encompass the classical spectrotemporal receptive field (STRF) and allow arbitrary input nonlinearities and certain multiplicative time-frequency interactions. To reliably quantify the explained variance for noisy responses, we use the predictive power [2] as the performance measure. We find that nonlinear models and a cochlear-like (gamma-tone) stimulus representation lead to significant improvements in predictive power. On average, 75% of the explainable variance can be predicted. Since the models deliver faithful predictions, a meaningful interpretation of the estimated model structures becomes possible. Including multiplicative interactions strongly reduces the inhibitory fields in the linear kernels; together with their spectrotemporal location, this suggests cochlear suppression as their source. Similar improvements in predictive power are obtained for input and output nonlinearities, with the best performance for the combination of both. In conclusion, the context model provides a rich and still interpretable extension of other nonparametric models for modeling responses in the MNTB.
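As a concrete illustration of the performance measure, the sketch below (Python/NumPy) shows how explainable signal power can be estimated from repeated presentations of the same stimulus and used to normalize a model's explained variance. It is a minimal stand-in written under assumed conventions, not the authors' code, and the exact estimator of [2] differs in its details; all names and shapes are illustrative.

```python
import numpy as np

def signal_and_noise_power(trials):
    """Split response power into explainable (signal) and noise parts from
    repeated trials, in the spirit of the estimator in [2].
    trials: array of shape (n_trials, n_timebins), e.g. binned spike counts."""
    n = trials.shape[0]
    mean_resp = trials.mean(axis=0)
    p_mean = mean_resp.var()                # power of the trial-averaged response
    p_trial = trials.var(axis=1).mean()     # mean power of single-trial responses
    p_signal = (n * p_mean - p_trial) / (n - 1)
    p_noise = p_trial - p_signal
    return p_signal, p_noise

def normalized_predictive_power(trials, prediction):
    """Fraction of the explainable power captured by a model prediction.
    A simplified stand-in for the exact predictive-power measure of [2]."""
    p_signal, _ = signal_and_noise_power(trials)
    mean_resp = trials.mean(axis=0)
    explained = mean_resp.var() - (mean_resp - prediction).var()
    return explained / p_signal
```

Under this kind of noise-corrected normalization, a value of 1 would mean that all of the trial-to-trial reproducible response structure is predicted; the 75% figure above is to be read in this sense.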
Figure 1

Schematic overview of the estimated models. An acoustic stimulus is created from broadband amplitude modulations. Three spectrotemporal representations of the sound are used as input for the following models: first, a multilinear model (dimensions: time, frequency, level) is estimated, e.g. an STRF, an input-nonlinearity model (IN+STRF) or a context model. Second, an estimated output nonlinearity rescales the multilinear prediction to the final firing rate prediction. We compare the performance contributed by the individual parts.
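To make this pipeline concrete, the following is a minimal sketch of an input-nonlinearity STRF prediction followed by a pointwise output nonlinearity, written in Python/NumPy under assumed parameterizations (lookup-table input nonlinearity, piecewise-linear output nonlinearity); all parameters, shapes and names are placeholders for illustration and are not taken from the authors' implementation.

```python
import numpy as np

def multilinear_predict(spec, level_edges, level_weights, strf):
    """IN+STRF-style prediction: each time-frequency bin of a (gamma-tone
    style) spectrogram is remapped by a learned level nonlinearity, then
    filtered by a spectrotemporal kernel.
    spec: (T, F) spectrogram; strf: (F, n_lags) kernel."""
    binned = np.digitize(spec, level_edges)        # discretize sound level
    g = level_weights[binned]                      # input nonlinearity g(s(t, f))
    T = g.shape[0]
    rate = np.zeros(T)
    for tau in range(strf.shape[1]):               # sum over temporal lags
        rate[tau:] += g[:T - tau] @ strf[:, tau]
    return rate

def output_nonlinearity(linear_pred, knots, values):
    """Pointwise output nonlinearity: piecewise-linear rescaling of the
    multilinear prediction to a firing rate."""
    return np.interp(linear_pred, knots, values)

# Example with random placeholders for the learned parameters
T, F, n_lags, n_levels = 1000, 32, 20, 12
spec = np.abs(np.random.randn(T, F))               # stand-in for a gamma-tone spectrogram
level_edges = np.linspace(spec.min(), spec.max(), n_levels + 1)[1:-1]
level_weights = np.linspace(0.0, 1.0, n_levels)    # placeholder input nonlinearity
strf = 0.01 * np.random.randn(F, n_lags)           # placeholder STRF
lin = multilinear_predict(spec, level_edges, level_weights, strf)
rate = output_nonlinearity(lin, np.linspace(lin.min(), lin.max(), 8),
                           np.linspace(0.0, 50.0, 8))
```

The context model of [1] additionally lets nearby time-frequency stimulus elements multiplicatively modulate each input term; this is omitted here for brevity.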

Authors’ Affiliations

  1. Max-Planck-Institute for Mathematics in the Sciences
  2. Institute for Biology II, University of Leipzig
  3. Gatsby Computational Neuroscience Unit, UCL
  4. Department of Physiology, Anatomy and Genetics, University of Oxford


  1. Ahrens MB, Linden JF, Sahani M: Nonlinearities and contextual influences in auditory cortical responses modeled with multilinear spectrotemporal methods. J Neurosci 2008, 28:1929-1942. doi:10.1523/JNEUROSCI.3377-07.2008
  2. Sahani M, Linden J: How linear are auditory cortical responses? Advances in Neural Information Processing Systems 2003, 15:301-308.


© Englitz et al; licensee BioMed Central Ltd. 2009

This article is published under license to BioMed Central Ltd.