- Poster presentation
- Open Access
How and where does the brain predict the when: a Bayesian approach to modeling temporal expectation
BMC Neuroscience volume 12, Article number: P53 (2011)
How does the brain learn to predict when an event is going to occur? We know from studies that vary the foreperiod – the time between a warning and a response stimulus – that people can model the temporal variability of stimulus onset, i.e. react faster when a stimulus is statistically more likely. We also know that reaction time (RT) decreases with the passage of time, showing that people dynamically update the temporal expectation of the stimulus. Recent neurophysiological investigations and brain lesion studies have also revealed that areas in the dorsolateral prefrontal cortex, inferior parietal cortex and posterior cerebellum perform functionally distinct roles in the generation of temporal expectations. In the current study, we use both behavioral and neurophysiological findings to develop a theoretical account of the computational processes that underlie the generation of temporal expectations.
We identify four independent processes that are critical to generating temporal expectation: (a) a pulse generator (an oscillator or other tonic activity) that relates the sensory stimulus to an internal signal; (b) an integrator that accumulates the tonic activity, generating a temporal percept; (c) a predictor that forms a probabilistic model from the combination of the sensory signal and its temporal percept; and (d) a process that monitors the temporal percept and dynamically updates predictions, generating a temporal expectation (see Figure 1). The novelty of our work lies in laying out the computational principles behind processes (c) and (d). We propose that the brain learns to predict time by modeling sensory signals as a finite mixture of hidden temporal causes. The problem of prediction can then be cast as Bayesian inference, and learning can be performed with well-known algorithms such as expectation maximization. The monitor updates predictions dynamically by moving through the sequence of temporal causes, a process that can be modeled as a hidden Markov model that progressively invalidates temporal causes with the passage of time.
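Processes (c) and (d) can be illustrated with a minimal sketch, assuming Gaussian-shaped temporal causes and plain 1-D expectation maximization; the function names, parameters, and initialization scheme below are our own illustrative choices, not the authors' implementation. The predictor fits a finite mixture of hidden temporal causes to observed foreperiods, and the monitor computes the posterior over causes given the time that has elapsed without a stimulus:

```python
import numpy as np
from math import erf, sqrt

def fit_mixture(samples, k, iters=100):
    """Predictor: fit a 1-D Gaussian mixture of k hidden temporal causes
    to the observed foreperiods via expectation maximization."""
    mu = np.quantile(samples, np.linspace(0.1, 0.9, k))  # spread initial means
    sigma = np.full(k, samples.std())
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each cause for each foreperiod
        # (the 1/sqrt(2*pi) constant cancels in the row normalization)
        dens = w * np.exp(-0.5 * ((samples[:, None] - mu) / sigma) ** 2) / sigma
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and spreads of the causes
        n = resp.sum(axis=0)
        w = n / len(samples)
        mu = (resp * samples[:, None]).sum(axis=0) / n
        sigma = np.sqrt((resp * (samples[:, None] - mu) ** 2).sum(axis=0) / n) + 1e-6
    return w, mu, sigma

def cause_posterior(t, w, mu, sigma):
    """Monitor: posterior over causes given that time t has elapsed with no
    stimulus. Causes whose probability mass lies before t are progressively
    invalidated, shifting expectation toward later causes."""
    surv = [wi * 0.5 * (1.0 - erf((t - m) / (s * sqrt(2.0))))
            for wi, m, s in zip(w, mu, sigma)]
    total = sum(surv)
    return [s / total for s in surv]
```

With foreperiods drawn from two clusters (say, around 0.5 s and 1.5 s), the posterior at t = 1.0 s concentrates almost entirely on the later cause, mirroring the dynamic invalidation of earlier causes described above.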
Simulating this model reproduces the faster RTs observed with increasing foreperiods for a rectangular distribution of foreperiods, and the absence of such a change when the distribution is exponential, a well-known behavioral finding. In addition to these implicit timing results, simulations of the model also reproduce explicit timing results in which top-down, cue-based learning is used to orient participants' attention in time. Our research thus identifies the distinct computational steps involved in the generation of temporal expectation. Prediction itself seems to require forward modeling of sensory signals, a function typically attributed to the parietal cortex and the cerebellum, while dynamically updating these predictions seems to require maintaining and moving through a sequence of hypotheses, a high-level cognitive function typically attributed to the prefrontal cortex. By laying out the computations underlying each process, our study thus paves the way for understanding the role of different brain regions in predicting the time at which to expect a stimulus.
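The contrast between the two foreperiod distributions follows from the hazard rate of stimulus onset: under a rectangular distribution the conditional probability of the stimulus arriving now, given that it has not yet arrived, rises with elapsed time, whereas the exponential distribution is memoryless and its hazard stays flat. A short numerical check (illustrative parameters, not the authors' simulation code):

```python
import numpy as np

def hazard(pdf):
    """Discrete hazard rate: P(event in bin k | no event before bin k)."""
    surv = pdf[::-1].cumsum()[::-1]  # probability mass remaining from bin k on
    return np.where(surv > 1e-12, pdf / np.maximum(surv, 1e-12), 0.0)

dt = 0.01
t = np.arange(0.0, 10.0, dt)

# rectangular (uniform) foreperiod distribution on [0.5, 2.5] s
rect = np.where((t >= 0.5) & (t <= 2.5), 1.0, 0.0)
rect /= rect.sum()

# exponential (memoryless) foreperiod distribution
expo = np.exp(-t)
expo /= expo.sum()

# rising hazard for the rectangular case predicts RTs that shorten with
# elapsed time; the near-constant exponential hazard predicts flat RTs
h_rect, h_expo = hazard(rect), hazard(expo)
```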
Klemmer ET: Simple reaction time as a function of time uncertainty. J Exp Psychol. 1957, 54: 195-200. 10.1037/h0046227.
Niemi P, Näätänen R: Foreperiod and simple reaction time. Psychol Bull. 1981, 89: 133-162. 10.1037/0033-2909.89.1.133.
Coull JT, Cheng RK, Meck WH: Neuroanatomical and neurochemical substrates of timing. Neuropsychopharmacology Reviews. 2011, 36: 3-25. 10.1038/npp.2010.113.
Coull JT, Nobre AC: Where and when to pay attention: The neural systems for directing attention to spatial locations and to time intervals as revealed by both pet and fMRI. J Neurosci. 1998, 18: 7426-7435.
Wolpert DM, Miall RC, Kawato M: Internal models in the cerebellum. Trends Cogn Sci. 1998, 2: 338-347. 10.1016/S1364-6613(98)01221-2.