  • Poster presentation
  • Open access

Likelihood-free Bayesian analysis of neural network models

The goal of cognitive modeling is to understand complex behaviors within a system of mathematically specified mechanisms or processes, to assess the adequacy of the model in accounting for experimental data, and to obtain estimates of the model parameters, which carry valuable information about how the model captures the observed behavior for both individuals and groups. From a theoretical perspective, it is essential that we fully understand how the parameters of a model affect the model's predictions, and how those parameters interact with one another.

Despite the importance of understanding the full range of valid parameter estimates, difficulties encountered in deriving the full likelihood function have prevented the application of fully Bayesian analyses for many cognitive models, especially those that attempt to capture neurally plausible mechanisms. Recent advances in likelihood-free techniques have allowed for new insights into simulation-based cognitive models [1–3]. Yet present likelihood-free methods have two critical sources of error that continue to prevent their widespread adoption. The first source of error arises from the use of summary statistics that are not sufficient for the parameters of interest. When a set of summary statistics is not sufficient, one cannot guarantee convergence to the correct posterior distribution. Because it is impossible to guarantee that a summary statistic is sufficient when a likelihood function is unavailable, current likelihood-free estimation techniques introduce error in the posterior distribution, and this error is not directly measurable. The second source of error results from the tolerance threshold that is used to evaluate the approximate likelihood in some algorithms. Even when sufficient statistics are known, a nonzero tolerance threshold will result in inaccurate posterior estimates [2].
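To make the two error sources concrete, the following sketch shows a minimal rejection-ABC sampler of the kind referenced in [2]. Everything here is an illustrative assumption, not material from the abstract: the shifted-lognormal toy simulator, the uniform priors, and the parameter values are all hypothetical. The mean and standard deviation used as summaries are not sufficient statistics in general (error source one), and any nonzero tolerance `eps` distorts the accepted sample (error source two).

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, n=500):
    """Hypothetical stand-in simulator: shifted-lognormal reaction times."""
    return 0.2 + rng.lognormal(mean=theta[0], sigma=theta[1], size=n)

def summary(x):
    """Summary statistics (mean and SD) that are NOT sufficient in general:
    convergence to the correct posterior cannot be guaranteed."""
    return np.array([x.mean(), x.std()])

def abc_rejection(observed, prior_draw, eps, n_keep=100):
    """Rejection ABC: keep prior draws whose simulated summaries fall
    within tolerance eps of the observed summaries."""
    s_obs = summary(observed)
    accepted = []
    while len(accepted) < n_keep:
        theta = prior_draw()
        dist = np.linalg.norm(summary(simulate(theta)) - s_obs)
        if dist < eps:  # any nonzero eps biases the approximate posterior
            accepted.append(theta)
    return np.array(accepted)

observed = simulate(np.array([-0.5, 0.4]))  # assumed "true" parameters
prior_draw = lambda: np.array([rng.uniform(-1.5, 0.5), rng.uniform(0.1, 1.0)])
posterior = abc_rejection(observed, prior_draw, eps=0.1)
```

Shrinking `eps` toward zero reduces the tolerance-induced bias but drives the acceptance rate toward zero, which is exactly the trade-off that motivates an approach that avoids summaries and tolerances altogether.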

Here we present a new, fully generalizable method, which we call the probability density approximation (PDA) method, for performing likelihood-free Bayesian parameter estimation that does not suffer from these sources of error. Our method works by generating a set of simulated data and constructing an estimate of the underlying probability density function through scaled kernel density estimation. We illustrate the importance of our method by comparing two neural network models of choice reaction time that have never been analyzed using Bayesian techniques due to their computational complexity: the Leaky Competing Accumulator (LCA) [4] model and the Feed-Forward Inhibition (FFI) [5] model. Both models embody neurologically plausible mechanisms such as "leakage", or the passive decay of evidence during a decision, and competition among alternatives through either lateral inhibition (in the LCA model) or feed-forward inhibition (in the FFI model). However, it remains unclear which dynamical system best accounts for empirical data, due to the limitations imposed by intractable likelihoods. Specifically, model comparison measures that take into account both posterior uncertainty and model complexity have yet to be applied. Our method of Bayesian analysis leads to results favoring the competitive mechanisms in the LCA over the feed-forward inhibition in the FFI, and reveals parameter trade-offs within these neurologically plausible models as well as interesting individual differences.
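The core of the PDA idea can be sketched in a few lines: simulate the model many times, fit a kernel density estimate to the simulated data, and evaluate the observed data under that density in place of the intractable likelihood. The sketch below is a simplified illustration, not the authors' implementation: it simulates a hypothetical two-unit LCA-style race with assumed parameter values, tracks only the winning-unit response times, and uses a plain Gaussian KDE rather than the scaled KDE over choice-conditioned densities that a full analysis would require.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)

def simulate_lca(drift, leak, inhibition, n_trials=1000, dt=0.01,
                 noise_sd=0.1, threshold=1.0, t_max=5.0):
    """Euler-Maruyama simulation of a two-unit LCA-style race.
    Each accumulator decays toward zero (leakage) and suppresses the
    other (lateral inhibition); activations are rectified at zero.
    Returns the time at which the first unit reaches threshold,
    censored at t_max. All parameter values are illustrative."""
    rts = np.empty(n_trials)
    for i in range(n_trials):
        x = np.zeros(2)
        t = 0.0
        while x.max() < threshold and t < t_max:
            dW = rng.normal(0.0, noise_sd, size=2) * np.sqrt(dt)
            x = np.maximum(0.0,
                           x + dt * (drift - leak * x - inhibition * x[::-1]) + dW)
            t += dt
        rts[i] = t
    return rts

def pda_log_likelihood(observed_rts, simulated_rts):
    """PDA step: estimate the model's RT density with a Gaussian KDE
    over simulated data, then evaluate the observed data under it."""
    kde = gaussian_kde(simulated_rts)
    dens = np.maximum(kde(observed_rts), 1e-10)  # floor to avoid log(0)
    return float(np.log(dens).sum())

drift = np.array([1.4, 1.0])  # assumed inputs favoring the first unit
obs = simulate_lca(drift, leak=0.4, inhibition=0.6, n_trials=200)
sim = simulate_lca(drift, leak=0.4, inhibition=0.6)
ll = pda_log_likelihood(obs, sim)
```

Because this approximate log-likelihood needs nothing beyond the ability to simulate the model, it can be dropped directly into a standard MCMC sampler, with no summary statistics and no tolerance threshold.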

References

  1. Turner BM, Dennis S, Van Zandt T: Likelihood-free Bayesian analysis of memory models. Psychological Review.

  2. Turner BM, Van Zandt T: A tutorial on approximate Bayesian computation. Journal of Mathematical Psychology. 2012, 56: 69-85. 10.1016/j.jmp.2012.02.005.


  3. Turner BM, Sederberg PB: Approximate Bayesian computation with Differential Evolution. Journal of Mathematical Psychology. 2012, 56: 375-385.

  4. Usher M, McClelland JL: On the time course of perceptual choice: The leaky competing accumulator model. Psychological Review. 2001, 108: 550-592.


  5. Shadlen MN, Newsome WT: Neural basis of a perceptual decision in the parietal cortex (area LIP) of the rhesus monkey. Journal of Neurophysiology. 2001, 86: 1916-1936.



Acknowledgements

This work was funded by NIH award number F32GM103288 to the first author.

Author information


Corresponding author

Correspondence to Brandon M Turner.

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Turner, B.M., Sederberg, P.B. & McClelland, J.L. Likelihood-free Bayesian analysis of neural network models. BMC Neurosci 14 (Suppl 1), P270 (2013). https://doi.org/10.1186/1471-2202-14-S1-P270
