Tom Bartlett of the Chronicle of Higher Education posted a very even-handed and nuanced article about the state of priming research that you can read here. Though the article is centered on an interview with John Bargh, the “father” of behavior priming, Bartlett also interviews and gets opinions from other well-known priming researchers (and replicators), including Ap Dijksterhuis, Hal Pashler, and Joseph Cesario.
My views on the (non)replicability of priming research—particularly flashy research that makes it into top-caliber journals—are admittedly less neutral. I was particularly disappointed by some of Bargh’s responses to the skepticism about his research. For example:
Why not do an actual examination? Set up the same experiments again, with additional safeguards. It wouldn’t be terribly costly. No need for a grant to get undergraduates to unscramble sentences and stroll down a hallway. Bargh says he wouldn’t want to force his graduate students, already worried about their job prospects, to spend time on research that carries a stigma. Also, he is aware that some critics believe he’s been pulling tricks, that he has a “special touch” when it comes to priming, a comment that sounds like a compliment but isn’t. “I don’t think anyone would believe me,” he says.
This is not the attitude a researcher should take when defending his work. If you believe in your work, as Bargh clearly does, then you should back it up, not hide from controversy. If the priming effects Bargh demonstrated in his now-classic paper (1996) are “real,” why would he worry that his students would be wasting their time researching something that has been stigmatized? Furthermore, this explanation hardly maps onto reality: a quick look at Bargh’s lab website shows continued research on behavioral priming (albeit at a lower rate of output).
Another questionable comment:
Bargh contends that we know more about these [priming] effects than we did in the 1990s, that they’re more complicated than researchers had originally assumed. That’s not a problem, it’s progress. And if you aren’t familiar with the literature in social psychology, with the numerous experiments that have modified and sharpened those early conclusions, you’re unlikely to successfully replicate them. Then you will trot out your failure as evidence that the study is bogus when really what you’ve proved is that you’re no good at social psychology.
Hal Pashler questions the logic of this argument:
One possible explanation for why these studies continually and bewilderingly fail to replicate is that they have hidden moderators, sensitive conditions that make them a challenge to pull off. Pashler argues that the studies never suggest that. He wrote in that same e-mail: “So from our reading of the literature, it is not clear why the results should be subtle or fragile.”
In other words, if it worked for you, why doesn’t it work for me? If there really are moderators so subtle that a direct replication of the work (exactly what Hal Pashler conducted) fails to produce the same results, then it makes little sense to make strong claims about the utility of priming for behavior change. On a related note, the original effect size Bargh found in his behavioral priming research on walking speed (Exp. 2; 1996) was a whopping d = 1.08 for the elderly-primed group versus the neutrally primed group, a strong effect that would be unlikely to be erased by subtle changes in experimental protocol, especially in a direct replication.
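To make the d = 1.08 figure concrete: Cohen's d is simply the difference between two group means divided by their pooled standard deviation, so a d above 1 means the groups differ by more than a full standard deviation. Here is a minimal sketch of that calculation; the walking times below are invented for illustration and are not the data from the 1996 study.

```python
import statistics

def cohens_d(group_a, group_b):
    """Cohen's d: standardized mean difference using the pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    mean_a, mean_b = statistics.fmean(group_a), statistics.fmean(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    # Pooled SD weights each group's sample variance by its degrees of freedom.
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return (mean_a - mean_b) / pooled_sd

# Hypothetical hallway walking times in seconds (NOT the original data):
elderly_primed = [8.4, 7.2, 9.1, 7.8, 8.0]
neutral_primed = [7.5, 7.0, 8.2, 6.8, 7.5]
print(round(cohens_d(elderly_primed, neutral_primed), 2))
```

An effect this large should be visible to the naked eye in the raw data, which is why its repeated failure to replicate under direct replication is so hard to square with an appeal to subtle moderators.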
If priming effects are indeed much smaller than originally envisioned, that severely undermines their practical usefulness. In particular, the suggestion that lonely individuals take longer showers to simulate interpersonal warmth (the “hot shower” study Bartlett references) has a variety of real-world implications. While this study has received a lot of criticism from the psychological community in the past year (including a failed replication with over 2,500 participants, cited in the article), the general media have been far less critical. Prominent news outlets have run with the story, outright claiming that individuals can just “wash the loneliness away.” Idit Shalev (coauthor on the paper with John Bargh) suggests in the Chronicle that:
It was never claimed that priming warmth is a cure for depression. There is need to develop public health interventions including interventions based on priming. Clearly, it is too early to conclude what is the merit of these interventions as research is still very young.
However, this contradicts the conclusions the researchers come to in their paper:
Thus, it appears that the “coldness” of loneliness or rejection can be treated somewhat successfully through the application of physical warmth—that is, physical and social warmth might be substitutable for each other to some extent…Our experimental evidence suggests that the substitution of physical for social warmth can reduce needs for affiliation and emotion regulation caused by loneliness and social rejection, needs that characterize several mental and social disorders with major public health significance.
While the authors aren’t preaching from soapboxes about the health benefits of long, hot showers, they certainly more than hint that people with mental disorders (of which depression is one) might benefit from longer showers. But the key issue here is whether Bargh and Shalev are couching their argument appropriately. When tiny effects are reported in ways that are easily misunderstood by casual readers (especially reporters and laypeople), it is easy to create a false consensus that perpetuates itself and leads to misunderstanding. And that is never good for science.
It’s not the case that skepticism about priming research should lead us to believe that all studies are unreliable and should be discredited. Nor is it a call to arms against any one person or a witch hunt against John Bargh. What is important is that priming researchers (and psychologists more generally) be more open about their research, so that we can weed out the unreliable work to get down to findings that are credible, replicable, and useful.