Selected Publications

Trait impulsivity—defined by strong preference for immediate over delayed rewards and difficulties inhibiting prepotent behaviors—is observed in all externalizing disorders, including substance use disorders. Longstanding theories of personality and decision-making predict that neurally mediated individual differences in sensitivity to reward cues versus punishment cues (frustrative non-reward) interact to affect behavioral tendencies. We use hierarchical Bayesian analysis in three samples with differing levels of substance use (N=967) to identify interactive dependencies between trait impulsivity and state anxiety on impulsive decision-making. Our findings reveal how anxiety moderates impulsive decision-making and demonstrate benefits of hierarchical Bayesian analysis over traditional approaches for testing theories of psychopathology spanning levels of analysis.
In Clinical Psychological Science, 2020.

Background: Impulsivity is central to all forms of externalizing psychopathology, including problematic substance use. The Cambridge Gambling Task (CGT) is a popular neurocognitive task used to assess impulsivity in both clinical and healthy populations. However, the traditional methods of analysis in the CGT do not fully capture the multiple cognitive mechanisms that give rise to impulsive behavior, which can lead to underpowered and difficult-to-interpret behavioral measures. Objectives: The current study presents the cognitive modeling approach as an alternative to traditional methods and assesses predictive and convergent validity across and between approaches. Conclusion: The cognitive modeling approach is a viable method of measuring the latent mechanisms that give rise to choice behavior in the CGT, which allows for stronger statistical inferences and a better understanding of impulsive and risk-seeking behavior.
In Drug and Alcohol Dependence, 2019.

To date, studying facial expressions has been hampered by the labor-intensive, time-consuming nature of human coding. We describe a partial solution: automated facial expression coding (AFEC), which combines computer vision and machine learning to code facial expressions in real time. We provide an example in which we use AFEC to evaluate emotion dynamics in mother–daughter dyads engaged in conflict. Among other findings, AFEC (1) shows convergent validity with a validated human coding scheme, (2) distinguishes among risk groups, and (3) detects developmental increases in positive dyadic affect correspondence as teen daughters age.
In Development and Psychopathology, 2019.

Historically, facial expression research has followed from discrete emotion theories, which posit a limited number of distinct affective states that are represented with specific patterns of facial action. Much less work has focused on dimensional features of emotion (e.g., positive and negative affect intensity). We use computer-vision and machine learning (CVML) to identify patterns of facial actions in 4,648 video recordings of 125 human participants. Our results show that CVML can both (1) determine the importance of different facial actions that human coders use to derive positive and negative affective ratings when combined with interpretable machine learning methods, and (2) efficiently automate positive and negative affect intensity coding on large facial expression databases.
In PLOS ONE, 2019.

The Iowa Gambling Task (IGT) is widely used to study decision‐making within healthy and psychiatric populations. Here, we propose the Outcome‐Representation Learning (ORL) model, a novel model that provides the best compromise between competing models. We test the performance of the ORL model on 393 subjects’ data collected across multiple research sites, and we show that the ORL reveals distinct patterns of decision‐making in substance‐using populations.
In Cognitive Science, 2018.

There is growing interest in psychology in applying advanced computational models to decision-making data collected from psychiatric populations to better understand maladaptive choice patterns. However, there are currently no easy-to-use tools for those who may not have the sophisticated mathematical/programming background required by such methods. Here, we present an R package that can fit an array of decision-making models to a variety of different tasks with a single line of code.
In Computational Psychiatry, 2017.

Recent Publications

Anxiety Modulates Preference for Immediate Rewards among Trait-Impulsive Individuals: A Hierarchical Bayesian Analysis


A computational model of the Cambridge gambling task with applications to substance use disorders


Using automated computer vision and machine learning to code facial expressions of affect and arousal: Implications for emotion dysregulation research


Using computer-vision and machine learning to automate facial coding of positive and negative affect intensity


The Outcome‐Representation Learning Model: A Novel Reinforcement Learning Model of the Iowa Gambling Task


Easyml: Easily Build And Evaluate Machine Learning Models


The Indirect Effect of Emotion Regulation on Minority Stress and Problematic Substance Use in Lesbian, Gay, and Bisexual Individuals


Revealing Neurocomputational Mechanisms of Reinforcement Learning and Decision-Making With the hBayesDM Package


Recent & Upcoming Talks

Recent Posts


Introduction

In this post, we will explore how measurement error arising from imprecise parameter estimation can be corrected. Specifically, we will explore the case where our goal is to estimate the correlation between a self-report and a behavioral measure, a common situation throughout the social and behavioral sciences. For example, as someone who studies impulsivity and externalizing psychopathology, I am often interested in whether self-reports of trait impulsivity (e.g., the Barratt Impulsiveness Scale) correlate with performance on tasks designed to measure impulsive behavior.
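The correction at issue here can be sketched with Spearman's classic attenuation formula, which estimates the "true" correlation from the observed correlation and each measure's reliability. This is a simplified stand-in for illustration, not code from the post itself, and the numbers are made up:

```python
import numpy as np

def disattenuate(r_observed, rel_x, rel_y):
    """Spearman's correction for attenuation: estimate the true correlation
    between two constructs from the observed correlation and the
    reliability of each measure."""
    return r_observed / np.sqrt(rel_x * rel_y)

# Observed r = .25 between a self-report scale and a task measure,
# with reliabilities of .80 and .40 (behavioral measures are often noisy)
r_true = disattenuate(0.25, 0.80, 0.40)  # roughly .44
```

A known limitation of this plug-in correction is that misestimated reliabilities can push the corrected correlation above 1, which is one reason to prefer approaches that model the measurement error directly.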


The Reliability Paradox

Defining Reliability

In 2017, Hedge, Powell, and Sumner conducted a study to determine the reliability of a variety of behavioral tasks. Reliability has many different meanings throughout the psychological literature, but what Hedge et al. were interested in was how well a behavioral measure consistently ranks individuals. In other words, when I have people perform a task and then measure their performance, does the measure that I use to summarize their behavior show high test-retest reliability?
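This ranking notion of reliability can be illustrated with a quick simulation: each person's session score is a stable "true" ability plus independent measurement noise, and the correlation of scores across sessions reflects how consistently people are ranked. All values here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
true_ability = rng.normal(0, 1, n)             # stable individual differences
session1 = true_ability + rng.normal(0, 1, n)  # noisy measurement at time 1
session2 = true_ability + rng.normal(0, 1, n)  # noisy measurement at time 2

# Test-retest reliability as the correlation of scores across sessions;
# with equal true-score and noise variance it should hover around .5
reliability = np.corrcoef(session1, session2)[0, 1]
```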


Introduction

In this post, we will explore frequentist and Bayesian analogues of regularized/penalized linear regression models (e.g., LASSO [L1 penalty], Ridge regression [L2 penalty]), which are an extension of traditional linear regression models of the form: \[y = \beta_{0}+X\beta + \epsilon\tag{1}\] where \(\epsilon\) is the error, which is normally distributed as: \[\epsilon \sim \mathcal{N}(0, \sigma)\tag{2}\] Unlike these traditional linear regression models, regularized linear regression models produce biased estimates for the \(\beta\) weights.
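As a concrete contrast with ordinary least squares, ridge regression (the L2 penalty) has a closed-form solution that shrinks the \(\beta\) weights toward zero. This is a minimal numpy sketch on made-up data, not code from the post:

```python
import numpy as np

def ridge(X, y, lam):
    """Closed-form ridge regression: beta = (X'X + lam*I)^(-1) X'y.
    lam = 0 recovers ordinary least squares; larger lam biases the
    beta weights toward zero in exchange for reduced variance."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
beta_true = np.array([2.0, -1.0, 0.5])
y = X @ beta_true + rng.normal(scale=1.0, size=200)

beta_ols = ridge(X, y, 0.0)      # unbiased, higher-variance estimates
beta_shrunk = ridge(X, y, 50.0)  # biased estimates pulled toward zero
```

The L2 norm of the ridge solution decreases monotonically as the penalty grows, which is the bias-for-variance trade the post describes.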




A cross-platform Python toolbox for analyzing facial expression data.


An R/Python toolbox for easily fitting a variety of machine learning models.


An R toolbox for fitting an array of decision-making models with hierarchical Bayesian analysis.


Abnormal Psychology

I am currently teaching PSYCH 3331 at The Ohio State University. My course takes a dimensional and developmental perspective, and is divided mainly into sections for internalizing, externalizing, and psychotic forms of psychopathology. Further, I use principles of active learning to engage students during class. My course design was inspired largely by Ziv Bell, who developed a “flipped class” curriculum for his students to better understand and apply principles of abnormal psychology to everyday life.