Figure 3 | BMC Neuroscience


From: TRENTOOL: A Matlab open source toolbox to analyse information flow in time series data with transfer entropy


Graphical visualization of TE. (A) Coupled systems X → Y. To test for a directed interaction X → Y we predict a future value Y(t + u) (star) once from past values (circles) of Y alone, Y_est^(Y)(t + u) = F(Y(t), Y(t - τ), Y(t - 2τ)), and once from past values of Y and X, Y_est^(X,Y)(t + u) = F(Y(t - τ), Y(t - 2τ), X(t - τ), X(t - 2τ)); d is the embedding dimension and τ the embedding lag. (B) Embedding. Y(t + u), Y(t), X(t) are coordinates in the embedding space; repeating the embedding for all t gives an estimate of the probability p(Y(t + u), Y(t), X(t)) (part C, embedding dimensions limited to 1). (C) p(Y(t + u)|Y(t), X(t)) is the probability of observing Y(t + u) after Y(t) and X(t) were observed. This probability can be used to predict the future of Y from the past of X and Y. Here, p(Y(t + u)|Y(t), X(t)) is obtained by a binning approach: we compute p(Y(t + u) ± Δ, Y(t) ± Δ, X(t) ± Δ), let Δ → 0 and normalize by p(Y(t), X(t)). TRENTOOL computes these densities via a kernel estimator. (D) p(Y(t + u)|Y(t)) predicts Y(t + u) from Y(t) alone, without knowledge of X(t); it predicts the future of Y from the past of Y only. (E) If the past of X is irrelevant for the prediction, the conditional distributions p(Y(t + u)|Y(t), X(t)) should all be equal to p(Y(t + u)|Y(t)). Differences indicate a directed interaction from X to Y; their weighted sum is the transfer entropy.
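The binning estimate described in panels (C)-(E) can be sketched in code. The following is a minimal illustrative Python analogue, not TRENTOOL's actual implementation (TRENTOOL is a Matlab toolbox and uses a kernel estimator with higher embedding dimensions): it discretizes the signals, estimates the joint distribution p(Y(t + u), Y(t), X(t)) by histogram counts (embedding dimension 1, as in panel C), and sums the weighted log-ratio of the two conditional distributions. The function name, bin count, and coupling example are all assumptions chosen for this sketch.

```python
import numpy as np

def transfer_entropy_binned(x, y, u=1, n_bins=8):
    """Illustrative histogram (binning) estimate of TE(X -> Y), embedding dim 1.

    TE = sum over states of
         p(y_{t+u}, y_t, x_t) * log[ p(y_{t+u}|y_t, x_t) / p(y_{t+u}|y_t) ]
    """
    def bin_series(s):
        # Map each sample to one of n_bins equally spaced amplitude bins.
        edges = np.linspace(s.min(), s.max() + 1e-12, n_bins + 1)
        return np.clip(np.digitize(s, edges) - 1, 0, n_bins - 1)

    xb = bin_series(np.asarray(x, dtype=float))
    yb = bin_series(np.asarray(y, dtype=float))
    yf, yp, xp = yb[u:], yb[:-u], xb[:-u]  # future of Y, past of Y, past of X

    # Joint counts over (Y(t+u), Y(t), X(t)), normalized to probabilities.
    joint = np.zeros((n_bins, n_bins, n_bins))
    for a, b, c in zip(yf, yp, xp):
        joint[a, b, c] += 1
    p_yyx = joint / joint.sum()          # p(y+, y, x)
    p_yx = p_yyx.sum(axis=0)             # p(y, x)   -- normalizer for panel C
    p_yy = p_yyx.sum(axis=2)             # p(y+, y)
    p_y = p_yyx.sum(axis=(0, 2))         # p(y)      -- normalizer for panel D

    te = 0.0
    for a in range(n_bins):
        for b in range(n_bins):
            for c in range(n_bins):
                pj = p_yyx[a, b, c]
                if pj > 0 and p_yx[b, c] > 0 and p_yy[a, b] > 0 and p_y[b] > 0:
                    # Weighted sum of differences between the two conditionals
                    # (panel E): log p(y+|y,x) - log p(y+|y).
                    te += pj * np.log((pj / p_yx[b, c]) / (p_yy[a, b] / p_y[b]))
    return te  # in nats
```

For a unidirectionally coupled pair, e.g. y[t] driven by x[t-1] plus noise, this estimate is larger in the coupled direction X → Y than in the reverse direction, reflecting the asymmetry that TE is designed to detect.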
