
  • Poster presentation
  • Open Access

Using transfer entropy to study synaptic integration in Purkinje cells

BMC Neuroscience 2015, 16(Suppl 1):P236

https://doi.org/10.1186/1471-2202-16-S1-P236


Keywords

  • Time Window
  • Mutual Information
  • Purkinje Cell
  • Active Channel
  • Spike Train

Information theory has been used in many ways to analyse the output of neurons and how it relates to the input. Mutual information is often used to describe this relationship [1], but because it is symmetric it does not by itself indicate how much information is transferred specifically from input to output. An alternative measure, transfer entropy [2], introduces directionality into the analysis of this problem. Transfer entropy allows one to study how the predictability of a cell's output is affected by varying time windows of preceding input spike trains.
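
For reference, the transfer entropy of Schreiber [2] from a source X to a destination Y, with destination history length k and source history length l (these histories play the role of the time windows mentioned above), can be written as

$$ T_{X \to Y} = \sum p\left(y_{t+1}, y_t^{(k)}, x_t^{(l)}\right) \log_2 \frac{p\left(y_{t+1} \mid y_t^{(k)}, x_t^{(l)}\right)}{p\left(y_{t+1} \mid y_t^{(k)}\right)} , $$

i.e. it measures how much the source history reduces uncertainty about the next destination value beyond what the destination's own history already provides.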

In this work we are using two extensions of transfer entropy that attempt to limit estimation bias. The first, described by Wibral et al. [3], restricts the delay considered on the destination time series to a single time step; this avoids overestimating the information transferred from input to output by ensuring that the information provided by the history of the output is not underestimated. The second, put forward by Gourévitch and Eggermont [4], creates a 'shuffled' transfer entropy value by averaging the transfer entropy over many runs in which the history of the input is randomly shuffled. The shuffled value can then be used to normalise the transfer entropy of the un-shuffled data.
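
As a rough illustration of the shuffling idea (a minimal sketch, not the exact estimators of [3] or [4]), the Python/NumPy code below computes a plug-in transfer entropy on binned spike trains and the averaged 'shuffled' value from source-permuted surrogates. The binning, history lengths and permutation scheme are illustrative assumptions.

# A minimal plug-in estimator of transfer entropy on binned (0/1) spike
# trains, plus the shuffled-surrogate average used for bias correction.
# History lengths k (destination) and l (source) are counted in time bins.
from collections import defaultdict

import numpy as np


def transfer_entropy(source, target, k=1, l=1):
    """Transfer entropy (bits) from `source` to `target` (binary arrays)."""
    source = np.asarray(source, dtype=int)
    target = np.asarray(target, dtype=int)
    start = max(k, l)

    # Joint counts of (next target value, target history, source history).
    joint = defaultdict(int)
    for t in range(start, len(target)):
        key = (target[t], tuple(target[t - k:t]), tuple(source[t - l:t]))
        joint[key] += 1
    total = sum(joint.values())

    # Marginal counts needed for the two conditional probabilities.
    c_yx = defaultdict(int)   # (target history, source history)
    c_y = defaultdict(int)    # (target history,)
    c_ny = defaultdict(int)   # (next value, target history)
    for (y_next, y_hist, x_hist), c in joint.items():
        c_yx[(y_hist, x_hist)] += c
        c_y[y_hist] += c
        c_ny[(y_next, y_hist)] += c

    te = 0.0
    for (y_next, y_hist, x_hist), c in joint.items():
        p_joint = c / total
        p_full = c / c_yx[(y_hist, x_hist)]            # p(y_next | y_hist, x_hist)
        p_hist = c_ny[(y_next, y_hist)] / c_y[y_hist]  # p(y_next | y_hist)
        te += p_joint * np.log2(p_full / p_hist)
    return te


def shuffled_te(source, target, k=1, l=1, n_shuffles=100, rng=None):
    """Average transfer entropy over surrogates with a permuted source train."""
    rng = np.random.default_rng(rng)
    return float(np.mean([
        transfer_entropy(rng.permutation(source), target, k=k, l=l)
        for _ in range(n_shuffles)
    ]))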

We are currently investigating the use of transfer entropy to study the output of an active Purkinje cell model [5, 6] in response to a gamma-distributed input, while varying the location of the input site. Active channels in dendrites allow the effect of input to be location-independent, but transfer entropy enables us to highlight differences in the delay of information transfer that arise from the morphology of the cell.
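
For concreteness, a toy driver under the same assumptions might look as follows, reusing the transfer_entropy and shuffled_te helpers sketched above. In the actual study the output spike train comes from the Purkinje cell model [5, 6]; here a delayed, jittered copy of the input stands in for it.

# Toy end-to-end run: gamma-distributed input ISIs, binned spike trains,
# raw and shuffled transfer entropy. The "output" is only a delayed,
# jittered stand-in for the Purkinje cell model response of [5, 6].
import numpy as np

rng = np.random.default_rng(0)
dt = 0.001            # 1 ms bins
duration = 100.0      # seconds of simulated activity

# Gamma-distributed inter-spike intervals (shape 2, mean 25 ms -> ~40 Hz input).
isis = rng.gamma(shape=2.0, scale=0.0125, size=10000)
input_times = np.cumsum(isis)
input_times = input_times[input_times < duration]

# Stand-in output: each input spike passed on with a 5 ms delay and 2 ms jitter.
output_times = input_times + 0.005 + rng.normal(0.0, 0.002, size=input_times.size)

bins = np.arange(0.0, duration + dt, dt)
x = (np.histogram(input_times, bins)[0] > 0).astype(int)
y = (np.histogram(output_times, bins)[0] > 0).astype(int)

te = transfer_entropy(x, y, k=5, l=5)
te_surrogate = shuffled_te(x, y, k=5, l=5, n_shuffles=20, rng=1)
print(f"TE = {te:.4f} bits, shuffled TE = {te_surrogate:.4f} bits")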

Authors’ Affiliations

School of Computer Science, University of Hertfordshire, Hatfield, UK

References

  1. London M, Schreibman A, Häusser M, Larkum ME, Segev I: The Information Efficacy of a Synapse. Nature Neuroscience. 2002, 5(4): 332-340.
  2. Schreiber T: Measuring Information Transfer. Physical Review Letters. 2000, 85(2): 461-464.
  3. Wibral M, Pampu N, Priesemann V, Siebenhühner F, Seiwert H, Lindner M, Lizier JT, Vicente R: Measuring Information-Transfer Delays. PLoS One. 2013, 8(2).
  4. Gourévitch B, Eggermont JJ: Evaluating Information Transfer Between Auditory Cortical Neurons. Journal of Neurophysiology. 2007, 97: 2533-2543.
  5. De Schutter E, Bower JM: An Active Membrane Model of the Cerebellar Purkinje Cell I. Simulation of Current Clamps in Slice. Journal of Neurophysiology. 1994, 71(1): 375-400.
  6. De Schutter E, Bower JM: An Active Membrane Model of the Cerebellar Purkinje Cell II. Simulation of Synaptic Responses. Journal of Neurophysiology. 1994, 71(1): 401-419.
