- Poster presentation
- Open Access
Modeling maladaptive decision-making in a rat version of the Iowa Gambling Task
© Valton et al; licensee BioMed Central Ltd. 2011
- Published: 18 July 2011
- Pathological Gambling
- Markov Decision Process
- Behavioral Trait
- Iowa Gambling Task
- Reward Sensitivity
Deficits in decision-making have been repeatedly observed in various psychiatric disorders (e.g. ADHD, Pathological Gambling, Mania, OCD and Substance Abuse) as well as in frontal lobe patients. Such decision-making deficits are often assessed using the Iowa Gambling Task (IGT). The IGT represents a realistic decision-making task in which subjects are asked to choose between targets associated with rewards and penalties of varying likelihood and magnitude. Previous studies have shown that when healthy participants take the IGT, around a third perform poorly, similarly to psychiatric patients.
Recently, these behavioral findings were successfully translated to animal research in a rodent version of the IGT, the Rat Gambling Task (RGT). In common with human studies, it was found that a third of a healthy population of rats exhibited poor decision-making performance. The rats were tested in other tasks aimed at characterizing behavioral traits such as impulsivity, sensitivity to reward, cognitive inflexibility and risk seeking. Poor decision makers were consistently characterized by high scores on a combination of these behavioral traits.
Here we use a model of learning and decision-making in the RGT to answer the following questions: (1) how do the behavioral traits described above influence learning? (2) how is this manifested in decision-making performance?
In order to model the learning and decision process of the RGT, we used a TD-learning algorithm. The model agent experiences the environment by learning the values of rewards and penalties for each state through trial-and-error sampling. As the agent acquires a more accurate representation of the environment, it makes more appropriate decisions, using a ‘softmax’ action selection process. The RGT is modeled as a Markov decision process, and we extended the classical TD-learning algorithm by incorporating risk seeking, reward sensitivity and cognitive inflexibility. These behavioral traits were implemented independently and influence either the learning rate or the agent’s perception of rewards. The parameters of the model were extracted for each rat by fitting the model to that rat’s performance.
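The general scheme above can be sketched as follows. This is a minimal illustration, not the authors’ actual implementation: the parameter names (alpha for learning rate, beta for softmax inverse temperature, rho for reward sensitivity) and the two-option payoff structure are assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(q, beta):
    """'Softmax' action selection: higher beta means more exploitation."""
    e = np.exp(beta * (q - q.max()))
    return e / e.sum()

def run_agent(payoffs, n_trials=200, alpha=0.1, beta=2.0, rho=1.0):
    """TD(0) value learning for a one-state choice task.

    alpha: learning rate (a trait such as cognitive inflexibility
           could modulate it); rho: reward sensitivity, here modeled
    as an amplification of perceived gains (rho > 1 inflates rewards).
    """
    q = np.zeros(len(payoffs))
    for _ in range(n_trials):
        p = softmax(q, beta)
        a = rng.choice(len(q), p=p)            # sample an action
        r = payoffs[a]()                       # sample reward/penalty
        r_perceived = rho * r if r > 0 else r  # reward-sensitivity distortion
        q[a] += alpha * (r_perceived - q[a])   # prediction-error update
    return q

# Two options: a "safe" one (small steady gain) and a "risky" one
# (large gains but larger penalties; negative expected value).
payoffs = [
    lambda: 1.0,
    lambda: 4.0 if rng.random() < 0.5 else -6.0,
]
q = run_agent(payoffs)
```

With rho = 1 the agent learns that the safe option is more valuable; inflating rho makes the amplified gains of the risky option mask its penalties, shifting preference toward it, which is one way the model can produce poor-decision-maker behavior.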
We found that the model could account for the performance of both the good and poor subpopulations of decision makers. Additionally, the behavioral-trait parameters extracted from the model correlated significantly with those measured experimentally for the poor and good decision makers’ subgroups. The model was also able to predict the inflexibility of poor decision makers under reversal conditions.
Our work supports the hypothesis that it is a combination of high scores for risk seeking, sensitivity to reward and cognitive inflexibility that leads to poor decision-making performance. According to the model, behavioral traits affect the learning process of the subjects by altering the estimated value of the received rewards and reducing their ability to revise their initial estimates. This results in an incorrect perception of the environment: the resulting decisions are optimal with respect to the subjects’ internal world representation but aberrant with respect to the real outcomes of the task.
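The reversal effect described above can be illustrated with a toy calculation. Modeling cognitive inflexibility as an asymmetric learning rate (value decreases learned more slowly than increases) is one plausible reading of the mechanism, not the authors’ exact formulation; the parameter values are illustrative.

```python
def update(q, r, alpha_up=0.2, alpha_down=0.02):
    """One value update; alpha_down << alpha_up models inflexibility:
    negative prediction errors are incorporated more slowly."""
    delta = r - q
    alpha = alpha_up if delta > 0 else alpha_down
    return q + alpha * delta

def run(alpha_down):
    q = 0.0
    for _ in range(50):                             # acquisition: pays +1
        q = update(q, 1.0, alpha_down=alpha_down)
    for _ in range(20):                             # early reversal: pays -1
        q = update(q, -1.0, alpha_down=alpha_down)
    return q

q_flexible = run(alpha_down=0.2)    # symmetric learning rates
q_inflexible = run(alpha_down=0.02)  # inflexible agent
```

Early in reversal, the flexible agent’s estimate has already turned negative while the inflexible agent still values the option positively, so continuing to choose it remains “optimal” under its distorted world model even though it is aberrant given the task’s real outcomes.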
Funding: EPSRC, MRC, BBSRC
- Dunn BD, Dalgleish T, Lawrence AD: The somatic marker hypothesis: a critical evaluation. Neuroscience & Biobehavioral Reviews. 2006, 30 (2): 239-271. 10.1016/j.neubiorev.2005.07.001.
- Rivalan M, Ahmed SH, Dellu-Hagedorn F: Risk-Prone Individuals Prefer the Wrong Options on a Rat Version of the Iowa Gambling Task. Biological Psychiatry. 2009, 66 (8): 743-749. 10.1016/j.biopsych.2009.04.008.
- Schultz W, Dayan P, Montague R: A neural substrate of prediction and reward. Science. 1997, 275 (5306): 1593-1599. 10.1126/science.275.5306.1593.
- Li J, Chan L: Reward Adjustment Reinforcement Learning for Risk-averse Asset Allocation. International Joint Conference on Neural Networks. 2006, 534-541.
This article is published under license to BioMed Central Ltd. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.