Inferring effective computational connectivity using incrementally conditioned multivariate transfer entropy
© Lizier and Rubinov; licensee BioMed Central Ltd. 2013
Published: 8 July 2013
Effective connectivity analysis is a popular approach in computational neuroscience, whereby one seeks to infer a network of directed edges between neural variables (e.g. voxels or regions in fMRI, or reconstructed source time series in MEG data) which can explain their observed time-series dynamics. This is an important approach to understanding brain function, contrasting with functional connectivity analysis in being directed and dynamic, and with structural connectivity analysis in not requiring interventions and in being task-modulated. In particular, effective connectivity analysis seeks to find a minimal circuit model that can reconstruct the activity patterns contained in the given data. Ideally, such inference would: use model-free techniques; capture non-linear, multivariate, directional relationships; handle small amounts of data; and be statistically robust.
The information-theoretic measure transfer entropy (TE) [1] (a non-linear Granger causality) is becoming widely used for this purpose [2]. However, its use has generally focussed only on interactions between a single source and destination (inferring an edge where the TE is statistically significant), or else has attempted multivariate analysis by conditioning on all other variables in the system (practical only for small numbers of variables or for linear interactions). We aim to extend TE-based effective network inference to multivariate techniques, specifically: capturing collective interactions where target outcomes are due to multiple source variables (synergies); eliminating spurious connections for correlated sources (redundancies); and avoiding combinatorial explosions in the source groups evaluated. We aim to maximize inference of true interactions while minimizing inference of spurious interactions.
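For intuition, TE measures how much a source's past reduces uncertainty about a target's next state, beyond what the target's own past (and any extra conditioning variables) already provides. For discrete data it can be estimated by simple plug-in counts; the following is a minimal sketch with history length 1, where the function and variable names are illustrative rather than taken from the abstract:

```python
from collections import Counter
from math import log2

def transfer_entropy(source, target, cond=()):
    """Plug-in estimate of (conditional) transfer entropy, in bits, from a
    discrete source series to a discrete target series, using history
    length 1 and optional extra conditioning series in `cond`."""
    n = len(target) - 1
    # Joint counts over (target_next, target_past, source_past, cond_past)
    joint = Counter()
    for t in range(n):
        c = tuple(s[t] for s in cond)
        joint[(target[t + 1], target[t], source[t], c)] += 1
    te = 0.0
    for (yn, yp, xp, c), cnt in joint.items():
        # p(y_next | y_past, x_past, cond)
        p_num = cnt / sum(v for k, v in joint.items()
                          if k[1] == yp and k[2] == xp and k[3] == c)
        # p(y_next | y_past, cond)
        p_den = (sum(v for k, v in joint.items()
                     if k[0] == yn and k[1] == yp and k[3] == c)
                 / sum(v for k, v in joint.items()
                       if k[1] == yp and k[3] == c))
        te += (cnt / n) * log2(p_num / p_den)
    return te
```

For a binary target that simply copies a random binary source with a one-step lag, this returns close to 1 bit; for independent series it returns close to 0, up to small-sample estimator bias.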
To this end, we describe a new method [3] which addresses the above requirements by considering multivariate source interactions. For each node in the network, the method identifies the set of source nodes which provide the most statistically significant information regarding its dynamics; these are inferred as the sources from which that destination's activity is computed. This is done using incrementally conditioned TE: the set of sources for a destination is built up gradually, with each candidate's TE conditioned on the previously identified sources.
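The incremental procedure can be sketched as a greedy loop, assuming a conditional TE estimator `te_func(source, target, cond)` and a permutation-based significance test; the function names, stopping rule, and test details here are illustrative assumptions rather than the paper's exact specification:

```python
import random

def infer_sources(candidates, target, te_func, n_perm=200, alpha=0.05, rng=None):
    """Greedy incremental source selection for one destination node.
    At each step, the candidate with the highest TE conditioned on the
    already-selected sources is tested against a null distribution built
    from time-shuffled surrogates; selection stops when the best remaining
    candidate is no longer statistically significant."""
    rng = rng or random.Random(0)
    selected, remaining = [], list(candidates)
    while remaining:
        # Score each remaining candidate, conditioning on selected sources
        scores = [(te_func(src, target, tuple(selected)), src) for src in remaining]
        best_te, best = max(scores, key=lambda s: s[0])
        # Permutation test: shuffling the source destroys its temporal
        # relationship with the target while preserving its distribution
        null = []
        for _ in range(n_perm):
            surrogate = best[:]
            rng.shuffle(surrogate)
            null.append(te_func(surrogate, target, tuple(selected)))
        p_value = sum(1 for v in null if v >= best_te) / n_perm
        if p_value > alpha:
            break  # no remaining candidate adds significant information
        selected.append(best)
        remaining.remove(best)
    return selected
```

Note how the conditioning addresses the aims above: once one member of a collective interaction is selected, its partners' conditional TE rises (capturing synergies), while a source that is merely correlated with an already-selected source contributes little conditional TE and is rejected (eliminating redundancies).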
1. Schreiber T: Measuring information transfer. Phys Rev Lett. 2000, 85: 461-464. doi:10.1103/PhysRevLett.85.461
2. Vicente R, Wibral M, Lindner M, Pipa G: Transfer entropy - a model-free measure of effective connectivity for the neurosciences. J Comput Neurosci. 2011, 30 (1): 45-67. doi:10.1007/s10827-010-0262-3
3. Lizier JT, Rubinov M: Multivariate construction of effective computational networks from observational data. Max Planck Institute for Mathematics in the Sciences Preprint 25/2012. 2012
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.