Statistical Modeling
Introduction

The Dunning-Kruger effect is perhaps one of the most well-known effects in all of social psychology: the phenomenon wherein people with low “objective” skill tend to over-estimate their skill, whereas people with high objective skill tend to under-estimate theirs. In the popular media, the Dunning-Kruger effect is often summarized in a figure like the one below (source):
Caption: The Dunning-Kruger Effect in Popular Media
Despite the widespread use of figures like this one, the specific form of the effect they depict is misleading: it suggests that people with low objective skill perceive their skill to be higher than the most skilled people perceive theirs, which is not what Dunning and Kruger actually found in their original 1999 study.
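To make the distinction concrete, here is a minimal Python sketch of a toy model in which perceived skill is a noisy, compressed function of actual skill (the slope, noise level, and all other numbers are made up for illustration). Binning by actual-skill quartile, as Kruger and Dunning (1999) did, reproduces the pattern they actually reported: the bottom quartile over-estimates and the top quartile under-estimates, yet perceived skill still rises with actual skill, so the least skilled never rate themselves above the most skilled.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000

# Toy model: perceived percentile is a compressed, noisy version of
# actual percentile (slope < 1 pulls self-estimates toward the mean of 50)
actual = rng.uniform(0, 100, n)
perceived = 50 + 0.3 * (actual - 50) + rng.normal(0, 10, n)

# Average self-estimates within actual-skill quartiles
quartile = np.digitize(actual, [25, 50, 75])
for q in range(4):
    mask = quartile == q
    print(f"Quartile {q + 1}: actual = {actual[mask].mean():5.1f}, "
          f"perceived = {perceived[mask].mean():5.1f}")
```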
Introduction

In this post, we will explore how measurement error arising from imprecise parameter estimation can be corrected for. Specifically, we will consider the case where our goal is to estimate the correlation between a self-report measure and a behavioral measure, a common situation throughout the social and behavioral sciences.
For example, as someone who studies impulsivity and externalizing psychopathology, I am often interested in whether self-reports of trait impulsivity (e.g., the Barratt Impulsiveness Scale) correlate with performance on tasks designed to measure impulsive behavior (e.g., the delay discounting task).
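A classical starting point for this problem is Spearman's correction for attenuation, which rescales the observed correlation by the reliabilities of the two measures (this post may develop a different estimator, but the correction is the standard reference point). Below is a minimal Python sketch; the helper function name is mine, and the observed correlation and reliability values are made up for illustration.

```python
import numpy as np

def disattenuate(r_observed: float, rel_x: float, rel_y: float) -> float:
    """Spearman's correction for attenuation: estimate the correlation
    between true scores given the observed correlation and the
    reliability of each measure."""
    return r_observed / np.sqrt(rel_x * rel_y)

# Hypothetical values: observed r = .25 between a self-report with
# reliability .80 and a behavioral measure with reliability .40
print(f"{disattenuate(0.25, rel_x=0.80, rel_y=0.40):.2f}")  # -> 0.44
```

Note that the corrected estimate grows without bound as either reliability approaches zero, which is one reason the reliability of behavioral measures (the topic of the next section) matters so much.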
The Reliability Paradox

Defining Reliability

Hedge, Powell, and Sumner (2017) conducted a study to determine the reliability of a variety of behavioral tasks. Reliability has many different meanings throughout the psychological literature, but what Hedge et al. were interested in was how well a behavioral measure consistently ranks individuals. In other words, when I have people perform a task and then summarize their performance with some measure, does that measure show high test-retest reliability?
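As a concrete (simulated) illustration of ranking consistency, the sketch below generates two sessions of reaction-time data for the same hypothetical participants and correlates the person-level means across sessions; a simple Pearson correlation is just one way to operationalize test-retest reliability, and all of the numbers here are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_trials = 50, 100

# Hypothetical data: each person has a stable "true" mean RT (ms);
# observed session means add trial-level noise on top of it
true_rt = rng.normal(500, 50, n_subjects)
session1 = np.array([rng.normal(mu, 150, n_trials).mean() for mu in true_rt])
session2 = np.array([rng.normal(mu, 150, n_trials).mean() for mu in true_rt])

# Test-retest reliability as the across-session correlation of the
# summary measure, i.e., how consistently it ranks individuals
r = np.corrcoef(session1, session2)[0, 1]
print(f"test-retest r = {r:.2f}")
```

Shrinking the between-person spread (the standard deviation of true_rt) relative to the trial-level noise drives this correlation down, which is precisely the situation Hedge et al. identified for many otherwise robust cognitive tasks.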
Introduction

In this post, we will explore frequentist and Bayesian analogues of regularized/penalized linear regression models (e.g., the LASSO [L1 penalty] and ridge regression [L2 penalty]), which are extensions of traditional linear regression models of the form:
\[
y = \beta_{0} + X\beta + \epsilon \tag{1}
\]

where \(\epsilon\) is the error, which is normally distributed as:

\[
\epsilon \sim \mathcal{N}(0, \sigma) \tag{2}
\]

Unlike these traditional linear regression models, regularized linear regression models produce biased estimates for the \(\beta\) weights. The bias is deliberate: by shrinking the weights toward zero, the penalty reduces the variance of the estimates, which can improve out-of-sample prediction.
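To see the shrinkage concretely, here is a minimal sketch using scikit-learn; the penalty strengths (alpha) are arbitrary values chosen only for illustration, and the data are simulated. With only the first three of ten predictors truly nonzero, ridge shrinks all of the weights toward zero, while the LASSO sets many of them exactly to zero.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge, Lasso

rng = np.random.default_rng(1)
n, p = 100, 10

# Simulated data: only the first three predictors have nonzero weights
X = rng.normal(size=(n, p))
beta = np.array([3.0, -2.0, 1.5] + [0.0] * (p - 3))
y = 1.0 + X @ beta + rng.normal(0, 1, n)

for name, model in [("OLS", LinearRegression()),
                    ("Ridge (L2)", Ridge(alpha=10.0)),
                    ("LASSO (L1)", Lasso(alpha=0.1))]:
    model.fit(X, y)
    print(f"{name:10s} {np.round(model.coef_, 2)}")
```

Comparing the three rows of output makes the bias visible: the penalized estimates sit systematically closer to zero than the ordinary least squares estimates.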