The Outcome-Representation Learning Model: A Novel Reinforcement Learning Model of the Iowa Gambling Task

Abstract

The Iowa Gambling Task (IGT) is widely used to study decision-making in healthy and psychiatric populations. However, the complexity of the IGT makes it difficult to attribute variation in performance to specific cognitive processes. Several cognitive models have been proposed for the IGT in an effort to address this problem, but currently no single model shows optimal performance for both short- and long-term prediction accuracy and parameter recovery. Here, we propose the Outcome-Representation Learning (ORL) model, a novel model that provides the best compromise between competing models. We test the performance of the ORL model on 393 subjects' data collected across multiple research sites, and we show that the ORL reveals distinct patterns of decision-making in substance-using populations. Our work highlights the importance of using multiple model-comparison metrics to draw valid inferences with cognitive models, and it sheds light on learning mechanisms that play a role in the underweighting of rare events.
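The class of reinforcement learning models discussed here can be illustrated with a minimal sketch: a delta-rule learner choosing among four decks via a softmax rule on a simplified IGT. Note that the payoff values, learning rate (`alpha`), and inverse temperature (`beta`) below are illustrative assumptions, not the ORL model's actual parameterization or the real IGT payoff schedule.

```python
import math
import random

# Illustrative 4-deck payoff scheme (assumed values, NOT the real IGT
# schedule): decks A/B have negative expected payoff, C/D positive.
DECK_MEAN = {"A": -25.0, "B": -25.0, "C": 25.0, "D": 25.0}
DECK_SD = {"A": 50.0, "B": 50.0, "C": 10.0, "D": 10.0}


def softmax_choice(values, beta, rng):
    """Sample a deck with probability proportional to exp(beta * value)."""
    decks = list(values)
    weights = [math.exp(beta * values[d]) for d in decks]
    r = rng.random() * sum(weights)
    acc = 0.0
    for deck, w in zip(decks, weights):
        acc += w
        if r <= acc:
            return deck
    return decks[-1]


def simulate(n_trials=200, alpha=0.2, beta=0.05, seed=1):
    """Delta-rule learner: the chosen deck's value moves toward its payoff."""
    rng = random.Random(seed)
    values = {d: 0.0 for d in DECK_MEAN}
    choices = []
    for _ in range(n_trials):
        deck = softmax_choice(values, beta, rng)
        payoff = rng.gauss(DECK_MEAN[deck], DECK_SD[deck])
        # Prediction-error (delta-rule) update on the chosen deck only.
        values[deck] += alpha * (payoff - values[deck])
        choices.append(deck)
    return choices, values
```

A learner of this kind comes to prefer the advantageous decks; models such as the ORL extend this basic scheme with additional mechanisms (for example, separate learning from gains and losses, and outcome-frequency tracking) that this sketch deliberately omits.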

Publication
In Cognitive Science