
Partial information decomposition as a unified approach to the characterization and design of neural goal functions

In many neural systems, anatomical motifs are found repeatedly in different places. Despite this repetition, these motifs often seem to serve a perplexing variety of functions. A prime example is the canonical microcircuit, which is repeated across multiple cortical areas yet supports functions ranging from sensory processing and memory to executive functions and motor control. The multiplicity of functions served by a single anatomical motif suggests a common, but more abstract, information processing goal underlying all the different functions. Identifying this goal from neural recordings is a key challenge in understanding the general principles of neural information processing. The apparent diversity of functions makes it clear that this common goal cannot be described in function-specific language (e.g. "edge filters"), but calls for an abstract framework; here, information theory is the obvious candidate. Notable past approaches using information theoretic descriptions of neural goal functions proposed maximizing the mutual information between input and output [1], maximizing the coherent mutual information that all the inputs share about the output [2], or, very generally, minimizing the free energy [3]. To facilitate these efforts, and to better dissect the implications of existing neural goal functions, we suggest building on recent progress in information theory, termed partial information decomposition (PID). PID makes it possible to measure which parts of the information a set of inputs contributes uniquely, redundantly, or synergistically to the output of a (neural) processing unit [4–7], and which fraction of the output's entropy remains unexplained by the input set. We show how these measures can be used to identify an information theoretic footprint of a neural goal function. Most importantly, these measures can quantify how much of the information is modified rather than merely relayed when passing through a neural processor [8]. This shifts the focus from information transmission to more complex processing, and allows a much better understanding of the theoretical capabilities of a neuron or neural circuit. Using this approach, we show how to better understand existing neural goal functions using PID measures, and provide an information theoretic framework for the design of novel goal functions for artificial neural networks.
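To make the decomposition terms concrete, the following is a minimal Python sketch of the original Williams–Beer decomposition [4] for two discrete sources and one target, using their I_min redundancy (the expected minimum specific information). The function name `pid_two_sources` and the dictionary-based joint-distribution representation are illustrative choices, not from the abstract; note also that later proposals [5, 6] define redundancy differently and can give different numbers.

```python
from collections import defaultdict
from math import log2

def pid_two_sources(joint):
    """Williams-Beer PID for two sources X1, X2 and target Y.

    joint: dict mapping (x1, x2, y) -> probability (must sum to 1).
    Returns the four PID atoms in bits.
    """
    # Accumulate the marginals needed below.
    p_y, p_x1, p_x2 = defaultdict(float), defaultdict(float), defaultdict(float)
    p_x1y, p_x2y, p_x12 = defaultdict(float), defaultdict(float), defaultdict(float)
    for (x1, x2, y), p in joint.items():
        p_y[y] += p
        p_x1[x1] += p
        p_x2[x2] += p
        p_x1y[(x1, y)] += p
        p_x2y[(x2, y)] += p
        p_x12[(x1, x2)] += p

    def mi(pxy, px):
        # Mutual information I(X; Y) from a pairwise joint and the X marginal.
        return sum(p * log2(p / (px[x] * p_y[y]))
                   for (x, y), p in pxy.items() if p > 0)

    def spec_info(y, pxy, px):
        # Specific information I(Y = y; X): how informative X is about this outcome.
        total = 0.0
        for (x, yy), p in pxy.items():
            if yy == y and p > 0:
                total += (p / p_y[y]) * log2((p / px[x]) / p_y[y])
        return total

    # Williams-Beer redundancy: expectation over y of the minimum
    # specific information any single source provides about y.
    red = sum(p_y[y] * min(spec_info(y, p_x1y, p_x1),
                           spec_info(y, p_x2y, p_x2)) for y in p_y)
    i1, i2 = mi(p_x1y, p_x1), mi(p_x2y, p_x2)
    i12 = sum(p * log2(p / (p_x12[(x1, x2)] * p_y[y]))
              for (x1, x2, y), p in joint.items() if p > 0)
    # The remaining atoms follow from the PID lattice identities.
    unq1, unq2 = i1 - red, i2 - red
    syn = i12 - unq1 - unq2 - red
    return {"redundant": red, "unique1": unq1, "unique2": unq2, "synergistic": syn}

# XOR gate with uniform binary inputs: neither input alone carries information
# about the output, so everything is synergistic (information modification).
xor = {(x1, x2, x1 ^ x2): 0.25 for x1 in (0, 1) for x2 in (0, 1)}
print(pid_two_sources(xor))
```

For the XOR example the decomposition assigns the full 1 bit of the output to the synergistic atom, which illustrates why synergy-based measures are natural candidates for quantifying information modification as opposed to mere relay.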

References

  1. Linsker R: Self-organization in a perceptual network. Computer. 1988, 21 (3): 105-117.
  2. Kay JW, Phillips WA: Coherent Infomax as a computational goal for neural systems. Bull Math Biol. 2011, 73 (2): 344-372.
  3. Friston K, Kilner J, Harrison L: A free energy principle for the brain. J Physiol Paris. 2006, 100 (1-3): 70-87.
  4. Williams PL, Beer RD: Nonnegative Decomposition of Multivariate Information. arXiv:1004.2515 [math-ph, physics, q-bio]. 2010.
  5. Bertschinger N, Rauh J, Olbrich E, Jost J, Ay N: Quantifying Unique Information. Entropy. 2014, 16 (4): 2161-2183.
  6. Griffith V, Koch C: Quantifying Synergistic Mutual Information. In: Guided Self-Organization: Inception. Edited by: Prokopenko M. Springer Berlin Heidelberg; 2014: 159-190. [Emergence, Complexity and Computation, vol. 9]
  7. Wibral M, Lizier JT, Priesemann V: Bits from Brains for Biologically-Inspired Computing. Frontiers in Robotics and AI. 2015.
  8. Lizier JT, Flecker B, Williams PL: Towards a synergy-based approach to measuring information modification. In: Artificial Life (ALIFE), 2013 IEEE Symposium on. IEEE; 2013: S43-S51.


Author information

Correspondence to Michael Wibral.

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.


About this article


Cite this article

Wibral, M., Phillips, W.A., Lizier, J.T. et al. Partial information decomposition as a unified approach to the characterization and design of neural goal functions. BMC Neurosci 16, P199 (2015). https://doi.org/10.1186/1471-2202-16-S1-P199


Keywords

  • Artificial Neural Network
  • Executive Function
  • Mutual Information
  • Free Energy
  • Motor Control