In our Study of the Day feature series, we highlight a research publication related to a John Templeton Foundation-supported project, connecting the fascinating and unique research we fund to important conversations happening around the world.
In 1947, a few years before he quit his job as a think tank mathematician to play blackjack for a living, Jess Marcum wrote a memorandum for the RAND Corporation suggesting a new way to gauge the accuracy of radar signals. The resulting paper, “A Statistical Theory of Target Detection by Pulsed Radar,” laid the groundwork for the field of signal detection theory (SDT), which uses probabilities to analyze conclusions reached on the basis of fuzzy data. SDT would eventually be used to help scientists understand a host of non-radar-related detection problems, including visual perception, immune function, and medical diagnoses.
Over the past few years, Bertram Gawronski, a professor of psychology who leads the Social Cognition Lab at UT Austin, has been exploring the ways that SDT can be applied to help us understand why people fall for misinformation, and what can be done about it.
In a recent paper, Gawronski and his colleagues Lea Nahon and Nyx Ng use the old radar analysis framework (including “hits,” “misses” and “false alarms”) to show how several common assumptions about misinformation are themselves misinformed. Their contrarian conclusions:
- People are, in fact, quite good at discerning true from false information.
- Partisan bias colors people’s judgments more pervasively than previously thought.
- Bias most often yields incorrect judgments by making people skeptical of the truth rather than gullible about falsehoods.
Crossing the Threshold
One of the key concepts of signal detection theory is the distinction between two factors in any system for judging whether information (be it a blip on a radar screen or a claim in a news article) is true or false.
The first factor, sensitivity, refers to how good an observer is at discerning the likely validity of different bits of information. The second, threshold, refers to the observer’s overall tendency to accept or reject information as true.
The two operate independently; an observer might have good sensitivity in distinguishing the distributions of likely-true and likely-false information, but if their threshold for judgment is set unhelpfully low, they will correctly label most true information as true while also producing a higher rate of false alarms, in which false information gets labeled as true as well.
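For the quantitatively inclined, the standard equal-variance form of SDT compresses these two factors into a pair of numbers: sensitivity (d′) is the difference between the probit-transformed hit rate and false-alarm rate, while the threshold (usually called the criterion, c) is the negative of their average. The sketch below shows that textbook calculation in Python; the headline counts are invented for illustration and are not drawn from the paper.

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Equal-variance Gaussian SDT: sensitivity (d') and criterion (c).

    Here a "hit" is a true claim judged true, and a "false alarm" is a
    false claim judged true. Rates of exactly 0 or 1 would need a
    standard correction before the probit transform; skipped here.
    """
    z = NormalDist().inv_cdf  # probit (inverse normal CDF) transform
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    d_prime = z(hit_rate) - z(fa_rate)             # discernment: higher is better
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # threshold: 0 is neutral,
                                                   # negative leans "accept",
                                                   # positive leans "reject"
    return d_prime, criterion

# Invented counts: 80 of 100 true headlines accepted, 30 of 100 false ones accepted
print(sdt_measures(80, 20, 30, 70))  # d' ≈ 1.37, c ≈ -0.16 (slightly accept-leaning)
```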
Conflating the effects of sensitivity and threshold produces the myths that Gawronski et al. see debunked in recent research findings. Starting from the observation that many people seem to believe disinformation, the standard approach has been to treat the problem as one of sensitivity. But the research findings don’t bear this out, suggesting that the problem isn’t discernment but bias, in the form of miscalibrated thresholds.
People are pretty good, it turns out, at differentiating true from false information, but partisan bias shifts their thresholds, making them more prone to believe claims that confirm their prior beliefs and to disbelieve claims that contradict them.
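A toy calculation makes that decoupling visible: hold sensitivity fixed and move only the threshold, and the pattern of errors changes even though discernment does not. The sketch below uses the same equal-variance model as above, with invented values of d′ and the criterion standing in for belief-consistent and belief-inconsistent claims.

```python
from statistics import NormalDist

nd = NormalDist()

def rates_from(d_prime, criterion):
    """Predicted hit and false-alarm rates for a given sensitivity and threshold.

    True claims are modeled as N(+d'/2, 1) and false claims as N(-d'/2, 1);
    anything above the criterion is judged "true".
    """
    hit_rate = 1 - nd.cdf(criterion - d_prime / 2)  # P(judged true | actually true)
    fa_rate = 1 - nd.cdf(criterion + d_prime / 2)   # P(judged true | actually false)
    return hit_rate, fa_rate

# Same discernment (d' = 1.5), different thresholds (invented values):
for label, c in [("belief-consistent, lenient", -0.5),
                 ("belief-inconsistent, strict", +0.5)]:
    hit, fa = rates_from(d_prime=1.5, criterion=c)
    print(f"{label}: hits {hit:.0%}, false alarms {fa:.0%}, "
          f"true claims rejected {1 - hit:.0%}")
```

With the strict criterion, the errors are dominated by rejected true claims (about 40 percent) rather than accepted false ones (about 11 percent), which is the skepticism-over-gullibility pattern the authors describe.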
Indeed, studies of confirmation bias — having a lower threshold for information that confirms one’s pre-existing views — have led people to assume that gullibility to false information is the main factor underlying inaccurate beliefs. This is also not supported by recent research, the authors say.
Rather, it is gullibility’s mirror image, skepticism, that has a much greater effect on the overall accuracy of our beliefs.
Partisan biases misplace our thresholds in such a way that we’re more often skeptical of true information that contradicts our preconceived notions than we are gullible about false information that confirms them. This cuts against the grain of how skepticism and gullibility are commonly conceived, with the former viewed as the better and rarer trait (in terms of etymology, “gullible” likely traces back to the notion of a person willing to swallow anything, while “skeptic” harks back to an ancient Greek adjective meaning inquiring or reflective).
This distinction is important, the authors say, because many current misinformation interventions aim to nudge people to become less gullible and more skeptical.
“Reducing skepticism against true information,” the authors write, “likely requires different types of interventions than reducing gullibility to false information.”
Such recalibration of our personal epistemic radar might mean changing the focus from reducing reliance on misinformation to refining our ability to identify and stick with sources worth trusting.