

  • Poster presentation
  • Open Access

Functional mechanisms of motor skill acquisition

BMC Neuroscience 2007, 8(Suppl 2):P203


Keywords: Prefrontal Cortex, Motor Skill, Neural Activity, Animal Behavior, Neural System

As a motor skill is learned, behavior progresses from execution of movements that appear to be separately generated to execution of what appears to be a single, unified entity. Movements come to be executed more quickly and with less attention, but behavior loses flexibility. Neural activity also changes: task-related neuron activity during a movement executed as part of a motor skill differs from activity during the same movement executed alone, and cortical planning areas (e.g., frontal and prefrontal cortices) dominate control early in learning, while less cognitive areas (e.g., the striatum) dominate later. These changes in behavior and neural activity suggest that different control strategies and systems are employed as the motor skill develops.

We propose that this behavioral and neural progression is due to a transfer of control among different types of controllers: an explicit planner, which selects movements by considering the goal; a value-based controller, which selects movements based on estimated values of each choice; and a static-policy controller, in which a sensory cue directly elicits a movement and no decision is made. Explicit planners require much computation (and thus time and attention) and pre-existing knowledge, but they can make reasonable decisions with little experience and are flexible to changes in task and environment. Static-policy controllers require little computation and knowledge, but they must be trained with experience and are inflexible. Value-based controllers have intermediate characteristics. Neural systems can implement these mechanisms: frontal cortices conduct planning, the striatum and prefrontal cortex estimate values, and a static policy can be implemented by a direct mapping, such as from thalamus (sensory) to striatum (motor). The progression through these controllers, and the associated neural systems, parallels that seen in motor skill development.
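The three controller types can be sketched in code. The following is a minimal, assumption-laden illustration on a toy one-dimensional task; all class names, the toy dynamics, and the learning rates are illustrative and not taken from the authors' model:

```python
ACTIONS = ["left", "right"]

def model(state, action):
    # Assumed one-step world model available to the planner.
    return state + (1 if action == "right" else -1)

def plan(state, goal):
    # Explicit planner: selects the movement by considering the goal,
    # using the model (much computation, but flexible and works with
    # little experience).
    return min(ACTIONS, key=lambda a: abs(model(state, a) - goal))

class ValueBased:
    # Value-based controller: selects movements by estimated values of
    # each choice, learned from experienced rewards.
    def __init__(self, alpha=0.5):
        self.q = {}          # (state, action) -> estimated value
        self.alpha = alpha   # learning rate (illustrative)

    def act(self, state):
        return max(ACTIONS, key=lambda a: self.q.get((state, a), 0.0))

    def update(self, state, action, reward):
        key = (state, action)
        old = self.q.get(key, 0.0)
        self.q[key] = old + self.alpha * (reward - old)

class StaticPolicy:
    # Static-policy controller: a sensory cue directly elicits a
    # movement; no decision is made. Must be trained with experience.
    def __init__(self):
        self.mapping = {}    # cue -> movement

    def act(self, state):
        return self.mapping.get(state)  # None until trained

    def train(self, state, action):
        self.mapping[state] = action
```

The sketch makes the trade-offs concrete: `plan` consults a model on every call (costly but goal-sensitive), `ValueBased` needs reward experience to shape its estimates, and `StaticPolicy` is a bare lookup that is fast but cannot generalize beyond what it was trained on.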

We test the validity of this scheme with computational models – based on biologically plausible mechanisms and architecture – in which an agent must execute a series of actions (analogous to movements), elicited by the controllers, to solve tasks. As the succeeding controller is trained, it selects a movement faster than the preceding controller, which relinquishes control. By comparing model behavior to human and animal behavior in analogous tasks, we show that the model exhibits qualities indicative of motor skill acquisition. We also investigate how task specification and environmental conditions affect motor skill development and strategy, how the presence of existing motor skills affects the agent's strategy in solving other tasks, and the parallels between the resulting model behavior and human and animal behavior.
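The handover mechanism described above – a trained, faster controller preempting a slower one – can be sketched as a race on response latency. The latencies, names, and lambda controllers below are illustrative assumptions, not the authors' implementation:

```python
def arbitrate(controllers, state):
    """Return (name, action) from the controller that answers first.

    `controllers` is a list of (name, latency, act) tuples; act(state)
    returns an action, or None if that controller has no answer yet.
    The controller with the smallest latency among those that answer
    elicits the movement.
    """
    ready = [(latency, name, act(state))
             for name, latency, act in controllers
             if act(state) is not None]
    _, name, action = min(ready)
    return name, action

# Illustrative latencies: planning is slow; a cue-to-movement lookup is fast.
planner = ("planner", 10.0, lambda s: "right")   # always has an answer
policy_map = {}                                  # filled in by experience
policy = ("static", 1.0, lambda s: policy_map.get(s))

# Early in learning the static policy is untrained, so the planner controls.
early = arbitrate([planner, policy], 0)          # ('planner', 'right')

# After training, the faster static policy answers first and the planner
# relinquishes control.
policy_map[0] = "right"
late = arbitrate([planner, policy], 0)           # ('static', 'right')
```

The design choice mirrors the abstract's account: no explicit gating signal is needed, because control transfers automatically once the succeeding controller can produce its answer sooner than the preceding one.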

Previous models have investigated how different controllers participate in biological decision making [1] and motor control [2-4]. While each model has unique properties, they all show that the availability of different controllers improves learning and behavior.

Authors’ Affiliations

Neuroscience and Behavior Program, University of Massachusetts Amherst, Amherst, MA 01003, USA
Department of Computer Science, University of Massachusetts Amherst, Amherst, MA 01003, USA


  1. Daw ND, Niv Y, Dayan P: Uncertainty-based competition between prefrontal and dorsolateral striatal systems for behavioral control. Nat Neurosci. 2005, 8: 1704-1711. 10.1038/nn1560.
  2. Kawato M: Feedback-error-learning neural network for supervised motor-learning. Advanced Neural Computers. Edited by: Eckmiller R. 1990, Elsevier, North-Holland, 365-372.
  3. Hikosaka O, Nakahara H, Rand MK, Sakai K, Lu X, Nakamura K, Miyachi S, Doya K: Parallel neural networks for learning sequential procedures. Trends Neurosci. 1999, 22: 464-471. 10.1016/S0166-2236(99)01439-3.
  4. Rosenstein MT, Barto AG: Supervised actor-critic reinforcement learning. Handbook of Learning and Approximate Dynamic Programming. Edited by: Si J, Barto AG, Powell WB, Wunsch D. 2004, Wiley-IEEE Press, Piscataway, NJ, 359-380.


© Shah and Barto; licensee BioMed Central Ltd. 2007

This article is published under license to BioMed Central Ltd.