The ability to precisely quantify time on the scale of hundreds of milliseconds is critical for processing complex sensory and motor patterns. However, the neural mechanisms underlying temporal processing at this scale remain largely unknown. Based on experimental data (psychophysics, cell cultures, electrophysiology) and theoretical studies, it is still debated whether dedicated circuits or intrinsic mechanisms of neural circuits underlie the timing process. One specific class of timing models, state-dependent networks (SDN), shows that time can be encoded in the temporal patterns of activity of neural populations, emerging from the internal dynamics of recurrent networks without the need for dedicated timing units. However, such intrinsic models in their present form have difficulty accounting for crossmodal transfer. In contrast, recent experimental evidence indicates that medial premotor cortical neurons of behaving monkeys show specific interval tuning across modalities (auditory and visual) [1]. In this work we propose a hybrid model, based on the hypothesis that dedicated interval-tuning mechanisms of individual neurons augment the intrinsic dynamics of a large recurrent network (dynamic reservoir). Using a network of rate-coded neurons with randomly initialized synaptic connections, we propose a learning rule based on local active information storage (LAIS) [2] to adapt neuronal time constants with respect to the stimuli driving the network. Measured at each spatiotemporal location in the reservoir, LAIS gives a probabilistic measure of how much information in a neuron's previous state is relevant for predicting its next state. Interestingly, high-LAIS regions in the network correlate with significant events in time (intervals) of the driving stimulus.
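To make the LAIS measure concrete, the sketch below estimates local active information storage for a single neuron's rate trace with a simple histogram (plug-in) estimator. The history length `k`, the number of bins, and the estimator itself are illustrative assumptions, not the settings of our model; see Wibral et al. [2] for the measure's formal definition.

```python
import numpy as np

def local_active_information_storage(x, k=1, bins=4):
    """Plug-in estimate of local active information storage (LAIS)
    for one activity trace x:

        a(n) = log2 p(x_n | x_{n-k..n-1}) - log2 p(x_n)

    High values mark time points where the neuron's past strongly
    predicts its next state.
    """
    # Discretize the continuous rate signal into `bins` symbols.
    edges = np.linspace(x.min(), x.max() + 1e-12, bins + 1)
    s = np.digitize(x, edges[1:-1])  # symbols in {0..bins-1}

    n = len(s)
    # Encode each length-k history as a single integer "past state".
    past = np.zeros(n - k, dtype=int)
    for i in range(k):
        past = past * bins + s[i : n - k + i]
    nxt = s[k:]

    # Joint histogram over (past state, next symbol) pairs.
    joint = np.zeros((bins ** k, bins))
    for p, q in zip(past, nxt):
        joint[p, q] += 1
    p_joint = joint / joint.sum()
    p_past = p_joint.sum(axis=1)
    p_next = p_joint.sum(axis=0)

    # Local values: log2[ p(next | past) / p(next) ] at each step.
    return np.log2(p_joint[past, nxt] / (p_past[past] * p_next[nxt]))
```

Averaging the local values over time recovers the (non-negative) active information storage; a strongly predictable trace such as a slow oscillation yields a much higher mean than an i.i.d. noise trace of the same length.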
Furthermore, we combine this with a mutual-information-driven intrinsic plasticity scheme in order to stabilize chaotic activity in the network. Incoming input drives the network, which in turn is connected to readout neurons (Figure 1.A) that display the learned behavior for temporally dependent sensorimotor tasks. Reservoir-to-output connections can be adapted using both supervised and reward-modulated learning rules. Using single- and multiple-interval discrimination tasks, we show that our network reproduces, across modalities, a linear increase in temporal variability with interval duration; this correlation is also observed in experimental data. Furthermore, we demonstrate that our dedicated timing mechanism complements the inherent transient dynamics of the network by successfully learning complex time-dependent motor behaviors, such as handwriting generation (Figure 1.B and C), locomotion pattern transformation, and temporal memory tasks. In essence, our hybrid model demonstrates that time can be encoded by a combination of dedicated and intrinsic mechanisms, with the possibility to 'learn' the temporal structure of incoming stimuli.
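For concreteness, the following is a minimal sketch of a rate-coded dynamic reservoir with per-neuron time constants and a supervised readout trained by ridge regression (one common choice for reservoir-to-output learning; the model also admits reward-modulated rules). All parameter values, and the use of fixed rather than LAIS-adapted time constants, are illustrative assumptions rather than the settings of the actual model.

```python
import numpy as np

def run_reservoir(u, tau, n_neurons=200, seed=0):
    """Drive a rate-coded recurrent network with a scalar input series u
    and collect its state trajectory. `tau` holds one time constant per
    neuron; in the hybrid model these would be adapted via LAIS, whereas
    here they are simply given.
    """
    rng = np.random.default_rng(seed)
    # Random recurrent weights, scaled below the critical spectral
    # radius so that the driven dynamics stay stable.
    W = 0.9 * rng.standard_normal((n_neurons, n_neurons)) / np.sqrt(n_neurons)
    w_in = rng.standard_normal(n_neurons)

    x = np.zeros(n_neurons)
    states = np.empty((len(u), n_neurons))
    dt = 1.0
    for t, u_t in enumerate(u):
        # Leaky rate dynamics: tau_i dx_i/dt = -x_i + tanh(W x + w_in u)_i
        x = x + (dt / tau) * (-x + np.tanh(W @ x + w_in * u_t))
        states[t] = x
    return states

def train_readout(states, target, ridge=1e-4):
    """Supervised reservoir-to-output learning via ridge regression."""
    n = states.shape[1]
    return np.linalg.solve(states.T @ states + ridge * np.eye(n),
                           states.T @ target)
```

A typical usage: drive the reservoir with a sinusoidal input, fit the readout to a phase-shifted copy of that input, and read the learned output as `states @ w_out`.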
1. Merchant H, Perez O, Zarco W, Gamez J: Interval tuning in the primate medial premotor cortex as a general timing mechanism. J Neurosci 2013, 33:9082-9096. doi:10.1523/JNEUROSCI.5513-12.2013.
2. Wibral M, Lizier JT, Vögler S, Priesemann V, Galuske R: Local active information storage as a tool to understand distributed neural information processing. Front Neuroinform 2014, 8:1.