Are facts or logic better for debunking disinformation? Yes!
Brandolini’s law (also known as the bullshit asymmetry principle) holds that the amount of energy needed to refute bullshit is an order of magnitude greater than the energy needed to produce it. Given that there’s a whole industry devoted to producing climate denial, debunking it takes a lot of energy, so it’s good to know how to do it best.
By Climate Denier Roundup
For the latest on that, we turn to a novel study that tests debunking on Instagram to compare fact-focused rebuttals to logic-based ones, and debunkings that come after a person sees misinformation compared to seeing the pre-bunking first.
An unpaywalled version is available courtesy of study co-author and lead cartoonist John Cook, who laid out the findings in a Twitter thread and a Cranky Uncle post, where you can see the misinformation and debunking images for yourself.
The study, by Cook, Emily Vraga of the University of Minnesota, and Sojung Claire Kim of George Mason, is novel in a few ways, the first of which is the use of Instagram as the medium. While other studies have looked at rebuttals in tweet or Facebook post format, the primarily visual nature of Instagram makes it a unique venue for debunkings, which are generally text-based. So the use of Cook’s cartoons as a key component of the debunking (which also includes brief text), combined with the humor that other studies have shown is persuasive, makes this quite different from studies in the existing literature.
Beyond showing that cartoons can be vehicles for combating misinformation, the study’s findings show that while science-based fact checks are effective after a person sees something misleading, they’re not effective as a prebuttal. Logic-based rebuttals, though, that focus not on the specific details of a claim, but instead the logical fallacy it uses to mislead, are effective at reducing the credibility of misinformation both before and after someone is exposed to it.
To test all this, the researchers designed a simple experiment with a simulated series of Instagram posts, including a commonly shared denier image showing a plant’s growth at various levels of CO2, and the claim that CO2 is good for plants. Then, they created contradicting posts to test whether facts or logic are more effective in debunking the claim, and whether a pre-bunking shown to subjects before the denier meme was as effective in correcting it as a debunking shown afterwards.
In my Cranky Uncle vs. Climate Change book, I use both methods to counter climate myths. Here's a page from the book where I counter the "CO2 is plant food so it's good to emit CO2" myth by explaining the facts & the fallacy in the myth. https://t.co/xnkw3OU1pr 3/13 pic.twitter.com/UCrDocIC00
— John Cook (@johnfocook) June 4, 2020
The fact-based debunking explained that “plants are fussy” and their food is water and nutrients as well as CO2, accompanied by a poolside cactus making demands of a waiter (“He’s so prickly!” the waiter complains.)
The logic-focused post instead kept it simple with a statement that CO2 is causing climate change, which hurts plants, and the explanation that the plant food myth “is an oversimplification – it’s like saying humans need calcium so all we need to eat is ice cream.”
The researchers found that the logic-based prebunking and debunking were both effective in reducing the credibility of the misinformation, and the fact-based debunking was as well. But the fact-based prebunking didn’t seem to make a difference, which makes some sense: you’re not necessarily going to remember details about a myth if you have no context for it.
But there’s more to it than that. Because the researchers asked subjects to rate the credibility of both the misinformation and the rebuttals, they also found that while logic-based corrections were most effective in reducing the credibility of misinformation, people actually considered the fact-based rebuttal more credible than the logical one. And the fact-based prebuttals, which had the weakest effect on reducing belief in misinformation, were seen as particularly credible.
So overall, this tells us that logic-based debunkings are your best bet, because they work whether or not the audience has seen the myth. And since logic is relatively universal, educating the public about cherry-picking, fake experts, and other logical fallacies prepares them for a range of misinformation, as opposed to facts, which are specific to each myth.
Not really. It's a small intervention – just a single Instagram post. If we really want to move the needle in the public sphere, we need to practice the @maibached mantra of "simple clear messages repeated often from a variety of trusted voices." We need persistence & discipline.
— John Cook (@johnfocook) June 4, 2020
Granted, the differences between the effects of logic vs. facts were fairly small, which, given that there was only a single exposure to the debunking, isn’t surprising. It’s safe to say that no one’s going to be mad if you present facts, and in fact, debunkings should generally include both.
But if you want the energy spent on debunking some bullshit to work on other bullshit, logic is the way to go!
Follow-up – An unrolled Twitter thread, via study author Emily Vraga:
I've been asked a few times recently about how best to debunk misinformation. We've started work on v2 of the popular (but now outdated) Debunking Handbook, but for now, I thought I'd share a few main points, taken from https://t.co/SxoIOT20LQ by @Jess_Paynter et al.
— Ullrich Ecker (@UlliEcker) May 28, 2020
…Corrections are more effective if they do not just communicate that a piece of information is false (e.g., a simple retraction that a practice is not evidence-based), but also detail why it is false, and what led people to believe it in the first place.
A careful dissection of incorrect arguments can help promote truth. Detailed refutations are more effective than plain, stripped-down retractions or the provision of factual information alone.
A powerful correction ideally places emphasis on detailing facts and evidence in support of them. This is especially important if a piece of misinformation carries a specific function in a person’s mental model of an event or causality.
For example, if a person falsely believes in an autism epidemic brought about by vaccinations, then it is crucial to refute the misinformation and to concurrently provide alternative information to fill the “gap” created by the correction – in this example, that the observed rise in autism rates is mostly due to broadened diagnostic criteria and heightened awareness of the condition.
Moreover, it is important to design refutations that use simple language to facilitate understanding, and an empathetic, non-confrontational tone.
There are six specific, additional elements thought to boost the effectiveness of a correction:
1. Source credibility—corrections are more effective if they come from a person or institution that is high in perceived credibility. The primary driver of this effect appears to be the source’s perceived trustworthiness rather than expertise. So anything that builds trust will help you be more effective at debunking down the track.
2. Self-affirmation interventions make potentially worldview-inconsistent corrections “easier to swallow”—affirming a person’s values makes them more open to worldview-inconsistent information, presumably by fostering resilience to the inherent identity threat.
3. Social norming—if either an injunctive or a descriptive norm is presented in support of a correction, it should facilitate acceptance of corrective information due to people’s aversion to social extremeness and the associated fear of social exclusion. For example, explaining that the vast majority of people engage in a desired behaviour (descriptive norm), and that it’s the right thing to do in order to achieve a common good (injunctive norm).
4. Warning people before exposing them to misinformation puts them cognitively on guard and may prevent them from initially believing the misinformation upon exposure, thus obviating the need for retrospective re-evaluation when receiving the correction.
- So don’t say “Myth or Fact? Vaccines cause autism […explanation…] No they don’t.”
- Say: “It’s a MYTH that there’s a link between vaccines and autism. The FACT is that vaccines are safe. […explanation]” (Btw, turns out the order of fact/myth – myth/fact doesn’t matter)
Warnings may also boost strategic monitoring and memory processes that can prevent reliance on misinformation even when it is activated by relevant cues at a later time.
5. Graphical representations can attract attention, facilitate information processing and retention, and quantify or disambiguate the corrective evidence, thus reducing the recipient’s ability to counter-argue inconvenient information. You know, 1 picture, 1000 words.
6. Salience of the core corrective message can enhance its effectiveness, presumably based on a link between enhanced fluency of processing and information impact. One factor that can enhance salience is making sure you repeat the misinformation (but once only!) when you refute it, contrary to earlier advice. But yeah, people need to know what it is you are correcting so they can co-activate misinformation and correction and update.
Note: this is not a fully comprehensive list. Depending on context, other factors will play a role – for example it can be important to highlight an expert consensus – and other information literacy interventions (including prebunking or inoculation) definitely have a place!
(Crossposted with DailyKos)