
BMC Neuroscience

Open Access

Theoretical understanding of three-dimensional, head-free gaze-shift

BMC Neuroscience 2014, 15(Suppl 1):P184

https://doi.org/10.1186/1471-2202-15-S1-P184

Published: 21 July 2014

Shifting the line of sight is naturally accomplished by movements of both the eye and the head. We are studying the oculomotor system responsible for planning such head-free gaze shifts. This includes not only the kinematic mechanisms coordinating eye and head in three-dimensional space, but also the nature of the internal representations that underlie the observed behavior. The latter is believed to be based on neural representations of sensory signals, coding information in the receptors' frame of reference, being transformed into representations of motor commands, coding information in the effectors' reference frame.

At the behavioral level, we propose a kinematic model that takes retinal error and initial eye and head orientations as input and describes an experimentally inspired sequence of rotations comprising the saccadic eye movement, the head movement, and the vestibulo-ocular reflex (VOR). Experimentally observed constraints, Listing's law for the eye and the Fick strategy for the head [1], are enforced. Independent parameters control the amount of head rotation and its contribution to the gaze shift. Figure 1 shows the flow of information in the model: input and output signals are shown in red and blue boxes, respectively, and each signal is computed from the signals feeding into it.
Figure 1

Flow of information in the static kinematic model. Red rectangles show model inputs. Blue rectangles show model outputs. Black ovals are the model parameters.
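The Listing's-law constraint on the saccadic eye rotation can be made concrete with a short sketch. This is an illustration, not the authors' implementation: the function names are hypothetical, the primary gaze direction is assumed to lie along the +z axis, and quaternions are written (w, x, y, z).

```python
import numpy as np

def listing_rotation(target):
    """Quaternion (w, x, y, z) rotating the primary gaze direction
    (here the +z axis) onto `target`, with the rotation axis confined
    to Listing's plane (the x-y plane), so the torsional component
    about the line of sight is zero -- Listing's law."""
    primary = np.array([0.0, 0.0, 1.0])
    t = np.asarray(target, dtype=float)
    t = t / np.linalg.norm(t)
    axis = np.cross(primary, t)          # always lies in the x-y plane
    s = np.linalg.norm(axis)
    if s < 1e-12:                        # target already at the primary position
        return np.array([1.0, 0.0, 0.0, 0.0])
    angle = np.arctan2(s, primary @ t)
    return np.concatenate(([np.cos(angle / 2)], np.sin(angle / 2) * axis / s))

def rotate(q, p):
    """Rotate vector `p` by unit quaternion `q` = (w, v)."""
    w, v = q[0], q[1:]
    return p + 2.0 * np.cross(v, np.cross(v, p) + w * p)
```

The Fick strategy for the head is analogous in spirit but fixes a gimbal order instead (horizontal rotation followed by vertical), so torsion vanishes in Fick coordinates rather than about the line of sight.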

At the representation level, we have used the Neural Engineering Framework (NEF) [2] to implement a neurophysiologically realistic model of the system. The signals in the kinematic model, shown in Figure 1, are treated as multidimensional vectors represented by a combination of nonlinear encoding and weighted linear decoding. Computation of each variable from other signals is implemented as a transformation of these representations: nonlinear functions of multiple variables are characterized as a biased linear decoding of a higher-dimensional population representation. We have used neurophysiological evidence on the brain areas encoding different signals (e.g., representation of the target relative to the eye in the SC, or of the eye movement relative to the head in the PPRF and riMLF) as constraints on their representations in our model.
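The NEF representation scheme described above can be sketched in a few lines of numpy. This is a minimal illustration under assumed parameters, not the authors' implementation: rectified-linear tuning curves with random gains, biases, and preferred directions provide the nonlinear encoding, and regularized least squares finds the linear decoders.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200                                  # neurons in the population
x = np.linspace(-1.0, 1.0, 101)          # scalar signal, e.g. one component of retinal error

# Nonlinear encoding: rectified-linear tuning curves with random
# gains, biases, and preferred directions (encoders).
encoders = rng.choice([-1.0, 1.0], N)
gains = rng.uniform(0.5, 2.0, N)
biases = rng.uniform(-1.0, 1.0, N)
A = np.maximum(0.0, gains * np.outer(x, encoders) + biases)   # activities, shape (101, N)

# Weighted linear decoding: regularized least-squares decoders that
# recover the represented value from the population activity.
reg = 0.01 * A.max()
d = np.linalg.solve(A.T @ A + reg**2 * N * np.eye(N), A.T @ x)
x_hat = A @ d                            # decoded estimate of x
```

Decoding a function f(x) rather than x itself only replaces x by f(x) on the right-hand side of the least-squares problem; this is how one signal in Figure 1 is computed from another.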

Our theoretical study will be judged by its success in simulating the known behavior and in replicating internal neural signals resembling those recorded by neurophysiologists. The kinematic model has been evaluated on behavioral grounds: the accuracy of the simulated gaze shifts and adherence to the kinematic constraints for eye and head. The neural network model will be evaluated on its replication of internal neural signals: the frames of reference and position dependencies of the artificial units.

Authors’ Affiliations

(1)
Department of Biology, York University
(2)
Centre for Vision Research, York University

References

  1. Crawford JD, Ceylan MZ, Klier EM, Guitton D: Three-dimensional eye-head coordination during gaze saccades in the primate. Journal of Neurophysiology. 1999, 81(4): 1760-1782.
  2. Eliasmith C, Stewart TC, Choo X, Bekolay T, DeWolf T, Tang Y, Rasmussen D: A large-scale model of the functioning brain. Science. 2012, 338(6111): 1202-1205. doi:10.1126/science.1225266.

Copyright

© Daemi and Crawford; licensee BioMed Central Ltd. 2014

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
