Misinformation is not a virus, and you cannot be vaccinated against it

A few months ago, I published a review of Sander van der Linden’s book Foolproof: Why We Fall for Misinformation and How to Build Immunity. In the book, van der Linden argues for two main ideas: that misinformation is a contagious virus that infects people’s minds, causing them to form false beliefs and make bad decisions; and that people can nevertheless be inoculated against misinformation with a misinformation “vaccine”. Just as traditional vaccines work by exposing people to a weakened dose of a virus so that their immune system identifies and develops antibodies against it, van der Linden argues that people can be inoculated against misinformation by exposing them to a weakened dose of its “DNA”, a set of manipulative techniques which allegedly underlie the production of misinformation.

I strongly disagree with just about everything in the book, and I explained why in my review. The review is long and can be found here. Among many points and objections, I argue:

  • It is misleading to liken misinformation to a contagious virus. The analogy actively distorts our understanding of the psychological processes and motivations underlying people’s engagement with and acceptance of misinformation, the social mechanisms and dynamics driving the transmission of false and misleading content, and the causes of societal misperceptions.
  • The alleged “DNA” of misinformation – namely, Discrediting, Emotion, Polarization, Impersonation, Conspiracy, and Trolling – is not usefully understood as a set of “misinformation techniques.” Four of these – discrediting, emotion, polarization, and conspiracy theorising – are not inherently manipulative or misinformative; they often characterise reliable content. Given that (at least relative to the definition of misinformation that van der Linden uses in the book) reliable information is far more prevalent than misinformation, using the presence of these techniques as a way of detecting misinformation is therefore likely to produce more false positives than true positives, especially given that people are already highly (in fact, overly) vigilant against manipulation.
  • The experimental data alleged to show that people can be inoculated against misinformation is weak. First, there is evidence that the interventions just make people more likely to rate all content (including true content) as false or manipulative. Given that people are already highly sceptical and given that reliable information is far more prevalent than misinformation, this is disastrous. Second, the experimental literature tests the intervention with the use of synthetic examples of misinformation; in other words, the researchers just make up the examples of misinformation themselves. Not only are they not real examples of misinformation, but the only reason they are alleged to qualify as misinformation is because they feature the putative misinformation techniques. In my view, this is highly methodologically dubious. It means that researchers can create test items that will maximise the likelihood that their intervention will appear successful, and there is little reason to believe that success in the experiments will generalise to greater success at detecting misinformation in the real world.

Many smart and knowledgeable people strongly disagreed with the claims and arguments in my review. There is lots of interesting discussion and pushback in the Twitter thread here, including from van der Linden.

Two people who really didn’t like the piece were philosophers Andy Norman and Lee McIntyre. In a response to my review, they accuse me of being in denial, “intemperate”, unconstructive, and burying my head in the sand. They chastise me for challenging “a top scientist on an empirical question” without bringing “receipts”. They are “disappointed” in my “one-sided review,” which is “specious.” Stephan Lewandowsky agrees with their analysis, calling the arguments from my review “rather poor.”

In this post I will respond to Norman and McIntyre’s criticisms of my claims and arguments, as well as similar criticisms I have encountered elsewhere, both online and offline. Specifically, I will address four common claims:

1. that there are “fingerprints” of misinformation, detectable surface-level markers of claims that people can learn to recognise by playing interactive games or watching short instructional videos;

2. that I cherry pick results from the scientific literature, which, when considered in full, presents overwhelming evidence for inoculation theory;

3. that misinformation is usefully likened to a contagious virus;

4. that I downplay the threat of misinformation.

1. Are there fingerprints of misinformation?

The central idea of inoculation theory that van der Linden argues for in his book is that misinformation is associated with surface-level fingerprints such as the use of emotional language, conspiratorial language, polarising language, and language intended to discredit other viewpoints. Given this, if people can be taught to identify these surface-level fingerprints, they will become better at detecting misinformation.

It is worth unpacking this idea carefully. First, what do I mean by “surface-level” markers? It is helpful to distinguish such markers from two properties that are obviously not surface-level: a claim’s truth value (i.e., whether it is true or false), and the intentions of the person communicating it. Plainly you cannot detect these things solely by examining a claim. Whether a claim is true depends on whether it accurately represents how things are; whether a claim is deceptive depends on the underlying intentions of the person communicating it.

Fingerprints of misinformation are not supposed to be like this. They are supposed to be surface-level markers of claims that can be detected without pre-existing knowledge of the claim’s truth value or the intentions of those communicating it. Indeed, this is supposed to be what makes them valuable: by learning that emotional language, polarising language, etc., are fingerprints of misinformation, people will be able to detect misinformation without needing any knowledge about whether the claims are true or what motivates the people communicating them.

To illustrate this, consider some of the items used in van der Linden and colleagues’ experiments and in the games designed to inoculate people against misinformation: one item is supposed to illustrate conspiratorial language, another emotional language, and a third polarising language. [The example items appear as images in the original post.]

So, the idea is: you can detect that these claims feature the “fingerprints” of misinformation solely by detecting the presence of things like emotional language, conspiratorial language, and polarising language. You do not need subject-specific knowledge to evaluate whether the claims are true or what motivates the people communicating them. (If you already knew the claims were false or deceptive, you wouldn’t need to bother with fingerprints).

Fingerprints of misinformation, then, are surface-level features of claims alleged to be associated with misinformation. This brings me to the second point: What does it mean to say that fingerprints are “associated with misinformation”? You might think it means that they are more common among misinformation than among reliable information. This is how many social scientists talk about fingerprints of misinformation and related ideas. However, it is completely wrong.

For something to be a predictor of misinformation, it is not enough that it is more prevalent among misinformation than among reliable information (i.e., p(x|misinformation) > p(x|reliable information)). You need to know the inverse probability: are claims featuring the relevant marker more likely to be misinformation than reliable information (i.e., p(misinformation|x) > p(reliable information|x))? This is a different probability because it depends on base rates. Suppose, for example, that 90% of misinformation uses emotional language and only 10% of reliable information uses emotional language. You might think that would make emotional language a fingerprint of misinformation – but you would be wrong! If 90% of information is reliable information, the presence of emotional language has no diagnostic value at all. If 95% of information is reliable information, using the presence of emotional language to detect misinformation will produce more false positives than true positives.

[Edit: my friend Ben Tappin has pointed out that some of the reasoning in the preceding paragraph is not quite right; see here for details].
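
To make the base-rate arithmetic concrete, here is a toy calculation (my own sketch, using the hypothetical percentages from the paragraph above, not figures from any study):

```python
# Toy illustration of the base-rate point (hypothetical numbers).
# Assume emotional language appears in 90% of misinformation and 10% of
# reliable information, and that 95% of circulating claims are reliable.

p_marker_given_misinfo = 0.90   # p(emotional language | misinformation)
p_marker_given_reliable = 0.10  # p(emotional language | reliable information)
p_reliable = 0.95               # base rate of reliable information
p_misinfo = 1 - p_reliable

n_claims = 10_000
true_positives = n_claims * p_misinfo * p_marker_given_misinfo      # 450
false_positives = n_claims * p_reliable * p_marker_given_reliable   # 950

print(f"Flagged misinformation (true positives): {true_positives:.0f}")
print(f"Flagged reliable claims (false positives): {false_positives:.0f}")
# With this base rate, a rule that flags emotional language produces roughly
# twice as many false positives as true positives, even though the marker is
# far more common in misinformation than in reliable content.
```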

Given this, for something, X, to be a fingerprint of misinformation, the following two conditions must hold:

i. X is a surface-level feature of claims;

ii. Claims featuring X are more likely to be misinformation than reliable information.

Many people, including proponents of inoculation theory, are confused about the foregoing points. In any case, with these preliminaries out of the way, in my review of van der Linden’s book I make several points concerning why we should not expect misinformation to have fingerprints in the sense just defined.

1. Whether a claim is true/false (or informative/misleading) depends on the relationship of that claim to the world, not on intrinsic (i.e., surface-level) features of the claim. Given this, we should be sceptical that most misinformation will be associated with fingerprints.

2. It is in the interests of misinformation producers to communicate without leaving simple recognisable fingerprints. (There is an obvious game-theoretic argument here; if there were simple fingerprints of misinformation, misinformation producers would benefit from communicating in ways that lack them). Hence at least for effective misinformation – the kind that we should presumably be worried about – we should not expect there to be fingerprints.

3. The most widespread and effective misinformation techniques do not leave fingerprints. For example, cherry-picking does not leave fingerprints at the level of claims. If someone presents you with a cherry (a true claim), and you know nothing about the distribution of cherries (the population of events from which they are reporting) or their motivations, then you are not in a position to know that it has been cherry-picked.

4. What van der Linden characterises as “misinformation techniques” (e.g., emotional language, polarising language, conspiratorial language, discrediting) are not inherently manipulative or misinformative; they are often used in communicating accurate and important information. For example, much content using emotional language is true and important.

5. Even if certain markers are more prevalent among low-quality misinformation than among true information, the fact that such misinformation constitutes a negligible part of the informational ecosystem implies that these markers are unlikely to be diagnostic of misinformation. (This is why the point about base rates is important, and why in my review I emphasised that fake news and low-quality conspiracy theories make up a very small percentage of the information that most people are exposed to).

I will now consider the criticisms of these arguments advanced by Norman and McIntyre.

Response 1: Fingerprints of misinformation merely flag that misinformation might be happening.

According to van der Linden in Foolproof, “discrediting” contrary viewpoints is a “misinformation technique”. I disagree. Whether discrediting is appropriate or inappropriate depends on whether the relevant source deserves to be discredited. Here is what I wrote:

It is true that misinformation producers often seek to discredit those who challenge their claims, but it is equally true that mainstream and reliable outlets often seek to discredit fringe and extremist content. Van der Linden’s own book, after all, is a sustained attempt to discredit misinformation producers. What is distinctive about misinformation is that it seeks to discredit the wrong sources, not that it engages in discrediting at all.

Norman and McIntyre concede that, yes, sometimes discrediting is ok, but that nevertheless it is a useful flag that manipulation might be occurring:

“[van der Linden] does not claim that discrediting is an infallible sign of falsehood. The idea is rather that discrediting is a flag that information manipulation might be happening. If your guru tries to persuade you that your family is not to be trusted, you’d be wise to discount his words rather than theirs: He’s probably playing you. The same goes for sources that try to discredit “the mainstream media.” “Intrinsic” falsehood is a red herring of Williams’ devising, as any charitable reading of Foolproof will confirm.”

Yes, if your guru tries to persuade you that your family is not to be trusted, you should discount his words rather than theirs. And what about if your family tells you that your guru is not to be trusted? You should… still listen to them. Clearly, then, the mere presence of discrediting does not inform how you should respond; the question is whether certain sources deserve to be discredited, and that depends on context and what is in fact the case. That was exactly my point.

Maybe the objection is that the presence of discrediting merely tells you that information manipulation might be happening, not that it is likely to be happening? But it is always the case that information manipulation might be happening, and people are already aware of this fact and hence vigilant against manipulation. As I pointed out in my review, there is considerable evidence that human beings are epistemically vigilant, and good reason to believe that people are in fact overly suspicious of manipulation. Norman and McIntyre do not engage with this part of my review at all.

Response 2: Truth depends in part on what words mean

Second, Williams employs a false dichotomy. When he states that “a claim’s truth depends not on identifiable features of the claim but on the world” he implies that it must be either/or. But the truth of an empirical claim depends on both identifiable features of the claim and on the state of the world. For example, the truth of “Our solar system has eight planets” depends on the meaning of “planet” and on the configuration of matter near our sun.

Yes, truth depends not just on the world but also on what words mean. If “planets” meant golden retrievers, it would be false to say that our solar system has eight planets. The point is: holding the meanings of terms fixed, what is true or false depends not on surface-level properties of a claim but on whether it accurately represents how things are in the world. I thought this was sufficiently obvious that it didn’t need to be stated explicitly in my review, but I’m grateful to Norman and McIntyre for making it clear to anyone who was confused on this point.

Response 3: The title of my review was bad

Sometimes, we need to examine the world to settle a truth claim; other times, it pays to examine the claim itself. Either approach can reveal the claim to be more problematic than it appears. The same goes for information more generally: scrutinizing it can bring latent problems to light.

Ironically, the title of Williams’s review—“The Fake News About Fake News”—uses a manipulation technique that van der Linden treats at length in Foolproof. He calls it the “You are fake news effect.” Here’s the idea: Scholars and responsible fact-checkers tend to employ careful analysis, judicious reasoning, and neutral language to call out mistakes, as these are signals of objectivity. By contrast, “You are fake news!” has become a cheap way for bad actors to dismiss inconvenient points of view. Williams should know better: Serious scholars shouldn’t stoop to calling one another—or any serious scholarly work—“fake news.”

Let me try to reconstruct this argument. First, it is claimed that we can sometimes figure out whether a claim is true not by examining the world but by examining the claim itself. Second, it is implied that examining the title of my review helps to illustrate this. The title uses the term “fake news”; bad actors often use the term “fake news” to dismiss inconvenient points of view; therefore… I’m genuinely not sure what is supposed to follow from this, actually. Is the idea that because the title of my review uses the term “fake news”, that means the review itself is likely misinformation? That can’t be right. Or is the idea just that using the term “fake news” is rude? That might be right, but it is irrelevant to the topic of misinformation. I found it quite rude when Norman and McIntyre wrote that I’m in denial, am burying my head in the sand, that they are “disappointed” in me, etc., yet I don’t think this implies that their piece is misinformation.

I didn’t write the title of my review. In fact, it’s literally the only part of the 4,500-word review that I didn’t write. (Editors always choose the title in these cases; the title I suggested was “There is no vaccine for misinformation”). If van der Linden and others found it disrespectful, that’s regrettable. However, I will note that van der Linden describes claims that he thinks are mistaken as “fake news” in the book that I was reviewing. What exactly is the standard here? That misinformation researchers can dismiss claims by classifying them as “fake news”, but it is beyond the pale to classify the claims of misinformation researchers as fake news? Notice that I did not only call van der Linden’s claims “fake news”; I also added a substantial 4,500-word footnote to the title employing “careful analysis, judicious reasoning, and neutral language” to explain why I think the relevant claims are wrong.

Response 4: Evoking negative emotions makes a claim inherently manipulative

Another example: “Most Republicans (or Democrats) are evil” employs what van der Linden calls the polarization technique. You don’t need to know anything about the state of the world to understand that such a claim is polarizing. The fact that it evokes strong negative emotions is another sign that it’s manipulative rather than factual.

I disagree with this. First, the problem with the claim that most Republicans or Democrats are evil is that it is false. In my view, there is no such thing as evil, but even if I did believe in evil, it would still be preposterous to claim that all supporters of a major political party are evil because it’s not true. If it were true, the claim would be fine, irrespective of whether it is polarizing or not.

Second, yes, van der Linden claims that using polarizing language is a manipulation technique. He characterises it as a technique that attempts “to move people away from the political centre.” Clearly this is bad if one is a centrist; if one is not a centrist, however, one will think that lots of polarizing content is perfectly legitimate. As I note in my review, literally every progressive movement throughout history has used language that could be characterised as polarizing. Indeed, such movements were and are often dismissed precisely on the grounds that they are “divisive” (i.e., polarizing). Norman and McIntyre simply don’t engage with my arguments here; they merely re-assert van der Linden’s view.

Third, the idea that evoking strong negative emotions is a sign that something is manipulative is simply wrong in my view. Whether claims evoking strong negative emotions are appropriate depends on the context; some things should evoke emotions like anger, disgust, outrage, and so on. Of course, sometimes people do try to manipulate audiences by activating these emotions, but often these appeals to emotions are legitimate, and often propagandists attempt to manipulate audiences through the use of dispassionate, neutral, ostensibly objective language that masks a manipulative agenda. If one calls torture “enhanced interrogation”, this is an attempt to misrepresent reality by communicating in ways that do not activate appropriate negative emotions.

Response 5: Emotional language bad, neutral language good

Consider the claim: “Trump’s racist policies horribly devastated our country.” Although the underlying facts may support the case, you can make the same claim in a more neutral and factual manner: “Trump’s policies have negatively impacted U.S. race relations.” Because the former attempts to play on our emotions, we should assign it less weight.

Again, in my view this is entirely a matter of subjective preferences. I can see why news organisations attempting to win public trust among diverse audiences might opt for the latter expression over the former (assuming for the moment the claim is true), but the idea that using emotional language is inherently a fingerprint of misinformation across contexts – which is the very point being debated – is wrong. “Enhanced interrogation” is less emotive than “torture”, and yet much more misleading; “collateral damage” is a propagandistic, neutral-sounding euphemism for “we accidentally killed civilians”.

Consider Orwell’s famous remarks on how propaganda and deceptive political language often involve the misleading use of dispassionate, neutral language, which

“is needed if one wants to name things without calling up mental pictures of them. Consider for instance some comfortable English professor defending Russian totalitarianism. He cannot say outright, ‘I believe in killing off your opponents when you can get good results by doing so’. Probably, therefore, he will say something like this:

While freely conceding that the Soviet régime exhibits certain features which the humanitarian may be inclined to deplore, we must, I think, agree that a certain curtailment of the right to political opposition is an unavoidable concomitant of transitional periods, and that the rigours which the Russian people have been called upon to undergo have been amply justified in the sphere of concrete achievement.”

(I’m grateful to Henry Shevlin for reminding me of this great passage).

The point is this: in some contexts, emotional language is inappropriate; in others, it is necessary, and the use of dispassionate, neutral language is deceptive and manipulative. In fact, in some contexts the very act of painting emotional language as unreliable itself constitutes a propaganda technique; throughout human history, for example, the views of women and other subordinate groups have often been dismissed by appeal to the emotionality of their communication. A rule like “statements featuring emotional language should be assigned less weight than statements featuring neutral language” does not do justice to this complexity and context-variability of norms of communication.

Response 6: Conspiratorial cognition is bad

Williams also objects to the inclusion of “conspiracy” on van der Linden’s list of manipulation techniques. His grounds? Some conspiracies are real, hence “the mere presence of conspiracy theorizing—however we define it—cannot be a distinguishing mark of misinformation.” But real conspiracies discovered through responsible investigation are quite different from the “conspiracy cognition” that van der Linden warns against. The latter, it turns out, involves a rich cocktail of “overriding suspicion,” “incoherence,” “nefarious intent,” and the like.[7] Again, we find distinct markers that can help differentiate reliable from unreliable content.

This response rests on a confusion. I agree that there are forms of conspiratorial cognition that are not conducive to forming accurate beliefs, but that’s not at issue; the question is whether the mere presence of conspiratorial language is a fingerprint of misinformation. It is this claim that is central to the hypothesis that conspiracy theorising is a simple surface-level marker of misinformative claims that people can learn to detect by playing short interactive games or watching cute instructional videos. My point in the review is that the mere allegation of a conspiracy is not diagnostic of misinformation, not that there are no differences between reliable and unreliable ways of thinking and reasoning about the world.

Response 7: There is empirical evidence for the existence of fingerprints of misinformation

Van der Linden’s view that misinformation has distinctive “fingerprints” is solidly based on empirical evidence.[8] A study published in a Nature journal, for example, found that misinformation makes use of negative emotions at a rate that is 20 times that of accurate information.[9] Its conclusion? “Deceptive content differs from reliable sources in terms of cognitive effort and the appeal to emotions.” The point is that close examination of a claim can reveal it to be problematic even before one tries to fact-check it.

This is the best objection to my arguments, and I have heard it from many others, including van der Linden himself. The objection is that there are empirical results demonstrating that misinformation has fingerprints. If so, the kinds of arguments I raised in the review and above are irrelevant. Even if I can point to some cases where emotional language is associated with reliable information, for example, social scientists are not interested in exceptionless rules; they are interested in empirical regularities. To the extent that certain regularities are supported by extensive empirical research, my “armchair” reflections are irrelevant.

There are several reasons why this objection is not persuasive. First, to demonstrate that misinformation is associated with fingerprints, empirical research must ensure that the samples of reliable information and misinformation it draws upon are representative of these categories. Most studies make no effort to do this, however. For example, the Nature paper “The fingerprints of misinformation” that Norman and McIntyre cite simply runs a content analysis comparing articles from the New York Times, The Wall Street Journal, and the Guardian (classified as reliable information) to articles from websites in a Fake News Corpus, almost all of which are extremely fringe, low-quality websites. What we end up learning is that articles from outlets like the New York Times tend to use bigger words, longer sentences, and less emotive language than articles from fringe clickbait, fake news, and conspiratorial websites like Infowars.

Does this demonstrate that misinformation in general features more emotive language? Of course not. Elite, broadsheet newspapers are not at all representative of accurate and reliable communication, and the sample of misinformation is not at all representative of false or misleading information. Here are just some of the kinds of misinformation it misses: misinformation from commentators on social media websites; elite corporate misinformation; misinformation from mainstream political elites and campaigns; governmental misinformation; misinformative elite commentary; misinformation within science due to fraud, extremely questionable research practices, and so on; misinformation disseminated within religions (in my admittedly controversial view, that means all of religion); misinformation associated with partisan news outlets; and, most tellingly, misinformation published by outlets like the New York Times, the Guardian, and The Wall Street Journal, misinformation which is much more consequential than that produced by extremely fringe websites with a tiny audience of avid conspiracy theorists and kooks.

This is a general pattern with misinformation research. Researchers make broad, sweeping generalisations about misinformation on the basis of studies that focus on samples of misinformation that are extremely unrepresentative of, and hence cannot license inferences about, false or misleading content as a whole. Indeed, they focus almost exclusively on fringe, low-quality misinformation.

Second, even if one rejects this concern, the kinds of outlets included in the Fake News Corpus make up a negligible part of the informational ecosystem. Given this, the problem of base rates emerges. Even if fake news features much more emotional language than one finds in mainstream news, the overwhelming amount of news that the overwhelming majority of people encounter in Western, democratic societies comes from mainstream news. When you combine this with the fact that people are already highly suspicious of manipulation and deception, increasing people’s sensitivity to the possibility that something like emotional language is evidence of manipulation seems likely to have negative results, causing them to dismiss more reliable information than misinformation.

Third, it is worth asking why fake news websites and other fringe, conspiratorial websites seem to be differentiable from more reliable media. This is surprising if one thinks that the function of such websites is to persuade their audiences. To that end, one would expect these websites to try to imitate more reliable, authoritative outlets. The very fact that they don’t – that their content is often so much more hyperbolic, extreme, over-the-top, emotional, and so on – suggests that maybe this is because persuasion is not their goal. As others have argued, much engagement with low-quality misinformation is not for epistemic reasons; it is for humour, entertainment, trolling, ingroup signalling, and so on. If that is right, attempting to inoculate people against the markers associated with that kind of misinformation will not have much value; even the small minority of the population that engages with it is not much persuaded by it anyway.

Having said all of this, it is of course ultimately an empirical question whether misinformation has simple surface-level fingerprints. Of course it can’t be decided by purely armchair reasoning. Nevertheless, it’s highly misleading to suggest that my review attempts to produce armchair refutations of a claim for which there is vast empirical evidence. My argument is rather that both the theoretical justifications and the existing empirical evidence for the claim are extremely weak and often confused. Given this, an enormous amount of scepticism is warranted.

Response 8: It is important to be alert to manipulative rhetorical tactics

Examine “All amphibians are slimy, so lizards are slimy” and you’re apt to notice that it assumes—falsely—that lizards are amphibians. Spotting this can neutralize the argument’s power to deceive, and you needn’t touch any lizards in the process. Being mindfully attentive to the properties of the information you consume is fundamental to wisdom. Isn’t that the point of the Socratic Method? And philosophical inquiry more generally? Surely it makes sense to be alert to manipulative rhetorical tactics.

Yes, there are invalid arguments, and their invalidity can be determined independently of assessing the truth value of their premises and conclusions. If Foolproof had simply argued that we should assess arguments using logical reasoning, I wouldn’t have had an issue with it, although – as somebody who teaches logic at university – I am extremely sceptical that such training actually improves people’s ability to discriminate good from bad arguments in the real world.

I also agree that we should be “mindfully attentive to the properties of the information” we consume, and that it makes sense to be “alert to manipulative rhetorical tactics.” But first, my point is that van der Linden’s framework does not help us to evaluate information, and second, people are already highly – and, as I argue in the review, excessively – alert to manipulation. Trying to make people even more paranoid about being manipulated is not a solution; it is likely to make things worse. Instead, we should be trying to build trust in institutions, including by making those institutions more trustworthy.

Norman and McIntyre go on:

Bad actors use bits of truth to construct false narratives. To get from one to the other, though, they almost always employ fear-mongering, discrediting, polarizing language, trolling, or the like. Van der Linden offers a practical guide to spotting such techniques—a way to free ourselves from much information manipulation.

This is not an argument; it is simply a re-assertion of the views that I argued are mistaken.

2. Do I cherry-pick evidence?

I will now move on to the second general response I have received to my review, both from Norman and McIntyre and also from van der Linden and others: namely, that my review cherry-picks evidence.

Here is what Norman and McIntyre say:

Williams does cite one study showing that, sometimes, psychological inoculation doesn’t improve people’s discernment between true and false news.[10] He cites another that seems to indicate that (contrary to van der Linden’s claims) debunking is superior to prebunking.[11] But these results are cherry-picked. The latter didn’t test inoculation theory as described by van der Linden, and a systematic review of the literature shows that prebunking is superior to debunking.[12] Indeed, dozens of studies show that inoculation and prebunking work.[13] Many such findings have been replicated in the lab, and a field study with millions of people on YouTube shows that inoculation can improve people’s “real-world” capacity to distinguish real and fake news.[14]

There is some confusion here. First, it is important to distinguish efforts to prebunk specific claims from efforts to inoculate people against misinformation in general by teaching them its distinctive markers. In my review, I don’t focus on the former project at all. So, the fact that prebunking specific claims works does not contradict anything I wrote. Further, I also wrote that prebunking specific claims might be more effective than debunking them, although the evidence is mixed. Norman and McIntyre appear to think that the evidence is not mixed and that we can know with certainty that prebunking is superior. I don’t have strong opinions on this.

My focus was on the claim that people can be inoculated against misinformation in general by learning to identify its surface-level fingerprints. In the review, I advanced two main arguments against this claim: that evidence seems to show that such “inoculations” just make people more sceptical across the board; and that the experiments use “synthetic” examples of misinformation to test efficacy in ways that are extremely methodologically dubious.

Consider the first claim: did I cherry-pick evidence showing that inoculation increases scepticism towards all claims? Norman and McIntyre say that I only cite “one study” in defence of this view. In fact, the paper I cite is a meta-analysis that reviews data from five influential articles purporting to show that inoculation against misinformation works. The paper demonstrates that, when analysed properly, the data show that the interventions in these studies did not improve discrimination between true (reliable) and false (manipulative) content; they just made people more likely to rate all information as false or manipulative.
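
To make vivid the distinction between discrimination and response bias that this analysis turns on, here is a small illustrative sketch using standard signal detection measures; the hit and false-alarm rates are invented for illustration, not taken from the meta-analysis:

```python
# Illustrative sketch of discrimination (d') versus response bias (criterion c)
# in a signal detection framework. The rates below are made up for illustration;
# they are not data from Modirrousta-Galian and Higham's meta-analysis.
from scipy.stats import norm

def d_prime_and_criterion(hit_rate, false_alarm_rate):
    """hit_rate: proportion of misinformation items rated 'manipulative';
    false_alarm_rate: proportion of reliable items rated 'manipulative'."""
    z_hit = norm.ppf(hit_rate)
    z_fa = norm.ppf(false_alarm_rate)
    d_prime = z_hit - z_fa             # sensitivity: ability to tell the categories apart
    criterion = -0.5 * (z_hit + z_fa)  # response bias: lower values = more 'manipulative' verdicts overall
    return d_prime, criterion

# Before the intervention: 60% of fake items and 30% of real items get flagged.
print(d_prime_and_criterion(0.60, 0.30))  # d' ≈ 0.78, c ≈ 0.14
# After the intervention: both rates rise, i.e. people flag more of everything.
print(d_prime_and_criterion(0.70, 0.40))  # d' ≈ 0.78, c ≈ -0.14
# Sensitivity (d') is unchanged; only the response criterion has shifted towards
# calling everything manipulative, which is the "scepticism across the board"
# pattern described above.
```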

Interestingly, after I published the review, van der Linden and his team published results testing their intervention. Here is what they write:

participants did not become significantly better at general news veracity discernment after playing the Bad News Game… [W]e observed that while people improved in the detection of fake news, they also became worse at the detection of real news. Looking further at response biases, we can also see that the Bad News Game might increase general distrust in news headlines…

This is exactly in line with what Ariana Modirrousta-Galian and Philip Anthony Higham observe in the meta-analysis that I cite in my review. In another article that came out last month (October 2023), in which four teams of undergraduate students tested the same intervention in ways that addressed methodological flaws in the original studies, the authors find that gamified inoculation against misinformation “did not improve discrimination. This converges with findings reported by Modirrousta-Galian and Higham.”

What about the “field study with millions of people on YouTube” allegedly showing that inoculation can improve people’s “real-world” capacity to distinguish real and fake news? From what I can see, this study really does reveal that the intervention helped people to discriminate between items classified as “reliable” and “manipulative” by the experimenters. However, as I argued in my review, the test items used in this experiment and all the others that I am familiar with are synthetic. They are created by the experimenters, and they are alleged to qualify as misinformation solely because they exemplify what the experimenters believe are markers of misinformation.

To see how problematic this is, suppose I decide that whenever the word “xylophone” occurs, it is a marker of misinformation. Given this, I build a set of test items and stipulate that those statements that don’t feature the word “xylophone” qualify as reliable information whereas those statements that do feature the word qualify as misinformation. Now suppose I give people an interactive game, or I show them videos, instructing them that the term “xylophone” is a marker of misinformation, and I test their ability to discriminate between reliable information and misinformation both before and after playing the game/watching the video. If they have improved, have we learned anything about misinformation? Of course not. What we have learned is that you can teach people to identify certain markers, especially if you have the freedom to build your own test items that exhibit those markers to a cartoonish degree.
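
A toy simulation makes the circularity explicit; this is my own illustrative sketch of the hypothetical “xylophone” setup just described, not a reconstruction of any actual experiment:

```python
# Toy simulation of the 'xylophone' scenario: the label is stipulated from the
# marker, so a participant trained on the marker trivially scores perfectly.
import random

MARKER = "xylophone"

def make_item():
    """Build a synthetic test item whose label is defined by the marker."""
    uses_marker = random.random() < 0.5
    text = (f"A new report about the economy mentions a {MARKER}." if uses_marker
            else "A new report about the economy was published today.")
    label = "misinformation" if uses_marker else "reliable"  # label stipulated from the marker
    return text, label

def trained_participant(text):
    """A participant 'inoculated' to treat the marker as a sign of misinformation."""
    return "misinformation" if MARKER in text else "reliable"

items = [make_item() for _ in range(1000)]
accuracy = sum(trained_participant(text) == label for text, label in items) / len(items)
print(f"Accuracy on synthetic items: {accuracy:.0%}")  # 100%, by construction
# Perfect 'discrimination' here tells us nothing about whether the marker tracks
# misinformation in the real world; it only shows that people can be taught to
# spot a feature the experimenters built into the labels.
```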

Norman and McIntyre do not engage with this objection at all – they do not even mention it in their response – and nobody else has either.

They continue:

Williams dismisses one of van der Linden’s findings as an “artifact of experimental design” on the grounds that the “stories used in the study were common knowledge” to test subjects in the U.S. and the U.K. But the very same findings were replicated by independent studies using different headlines about local news from India.[15] Our advice? If you’re going to challenge one of the world’s top scientists on an empirical question, you’d better have the receipts.

In fact, one of the “receipts” I referenced in my piece explicitly addresses this study in India. Ariana Modirrousta-Galian and Philip Anthony Higham observe that the article using the Indian sample has “uncharacteristic” results in their meta-analysis—in other words, it is an outlier relative to the rest of the empirical literature—and suggest, among other explanations, that this might be because of design flaws in the experiment (e.g., it lacked a control). Nevertheless, they find that when they include the results of this article in their meta-analysis it makes no difference to the overall assessment. In other words, Norman and McIntyre cherry-picked an outlier study which, even when included in the meta-analysis that I explicitly referenced in my review, does not undermine the argument I made.

3. Is misinformation usefully likened to a virus?

Third, let me consider another response I received from many people to my review: that misinformation is usefully likened to a virus and in fact scientific research demonstrates that this is the case. Specifically, researchers have taken mathematical models from epidemiology and adapted them to track the spread of misinformation through social networks. According to Joe Bak-Coleman, this shows that a virus “is a good model for the spread of certain types of misinformation on short to medium time scales,” and “we can put…to rest” any scepticism about the analogy.

Here is what Norman and McIntyre say:

“Williams dismisses as “hype” the “viral” analogy that runs through Foolproof. In this, he fails to show serious engagement with a remarkably fruitful idea. Mathematical models show that misinformation literally does spread like a virus.[16] Indeed, no serious computational scientist would dispute that epidemiological models also work to describe information diffusion. None of this means that we all have simple, easily infected minds. On the contrary, van der Linden carefully dissects the psychological literature to distinguish when people are more likely versus less likely to be fooled.”

There is a confusion here. Of course, nobody denies that false and misleading ideas sometimes propagate rapidly—that is, “go viral”—through social networks, including online ones. Humans are ultra-social animals who obsessively communicate and influence each other, and ideas, traits, and behaviours are often transmitted quickly through communities. Given this, it is not surprising that one can adapt mathematical models from epidemiology to describe the propagation of information. There is nothing unique about misinformation that makes it amenable to such modelling, however; the models will apply equally to engaging truths, juicy gossip, funny jokes, new fashions, and so on.
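
For what it is worth, a minimal compartmental model of the kind borrowed from epidemiology illustrates the point: nothing in the dynamics is specific to misinformation, so the same sketch describes a rumour, a joke, or an accurate story equally well. (This is my own illustrative sketch, not a model from the literature.)

```python
# Minimal SIR-style sketch of information spread: S = not yet exposed,
# I = currently sharing the item, R = no longer sharing it. Nothing in the
# dynamics distinguishes misinformation from gossip, jokes, or true stories.
def simulate_spread(beta=0.3, gamma=0.1, days=60, population=10_000, seed_sharers=10):
    s, i, r = population - seed_sharers, seed_sharers, 0
    history = []
    for _ in range(days):
        new_shares = beta * s * i / population   # contacts that pass the item on
        stop_sharing = gamma * i                 # sharers who lose interest
        s -= new_shares
        i += new_shares - stop_sharing
        r += stop_sharing
        history.append((s, i, r))
    return history

# The same parameters could describe a viral joke or an accurate news story;
# only the interpretation of 'transmission' changes, which is the point above.
final_s, final_i, final_r = simulate_spread()[-1]
print(f"Never shared: {final_s:.0f}, still sharing: {final_i:.0f}, stopped: {final_r:.0f}")
```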

If misinformation researchers find it useful to describe the transmission of information (or anything else) through social networks using epidemiological models of viral contagion, that is obviously fine. The point that I made in my review is that the virus analogy actively distorts our understanding of the psychological and social mechanisms through which people form false beliefs and communicate misinformation, which are fundamentally different to the mechanisms by which contagious diseases spread. For example:

  • People are not passive victims of misinformation, as if it were a self-replicating agent with interests of its own; instead, people actively seek out misleading information that promotes their interests, and much misinformation is best understood in terms of a marketplace of rationalisations that caters to this consumer demand.
  • Ideas, false or otherwise, do not replicate, virus-like, from mind to mind; in communicating, people process information with sophisticated and vigilant cognitive mechanisms and interpret information in selective and idiosyncratic ways.
  • Information does not cause beliefs via mere exposure. Instead, people evaluate the competence and trustworthiness of sources and are strongly disposed to reject messages in tension with their pre-existing beliefs.
  • Human communication is scaffolded (and distorted) by social processes involving norms, reputations, and institutions that have no parallels in biological processes of viral contagion.
  • Misinformation is often a symptom of deep-rooted intuitions and societal pathologies, not an exogenous cause of beliefs and behaviour.
  • Misinformation is often propagated in highly intentional, sophisticated ways by elites in the service of propagandistic goals; such intentional targeting has no parallels in the mindless spread of contagious viruses.

These ideas are all elaborated on at length in my review with links to the scientific literature. So yes, people influence each other and often pass on information that they encounter to others in their social network. We already knew that, but if one wants to describe it mathematically with epidemiological models that’s fine. The point is that there are nevertheless many reasons why viral contagion and information transmission are radically different such that conceptualising misinformation as a contagious virus actively distorts our understanding of the phenomenon.

4. Do I downplay the threat of misinformation?

Finally, let me address the point that my review downplays the threats posed by misinformation. Norman and McIntyre write:

van der Linden devotes an entire chapter to the claim that only a minority of people are impacted by fake news, and carefully takes the reader through the limitations of these studies. And even if it were true that not many people are influenced by misinformation, it’s clear that disinformation can swing elections decided by small margins. Fake news doesn’t have to be widely believed to undermine democracy.

Weaponized information is as old as time. Now, though, bad actors can “micro-target” their messaging and populate millions of social media feeds with content designed to be triggering. They can exploit algorithms that amplify “viral” content, deploy armies of bots, and—coming soon to an election near you—leverage artificial intelligence. Yet Williams would have us believe that “misinformation is not widespread”—that “its causal role in social events is either unsubstantiated or greatly overstated.” No cause for alarm here, folks: Just go about your business.

Here, Norman and McIntyre are objecting to the claim in my review that misinformation in the narrow sense of that term is relatively rare and largely symptomatic of other psychological and social factors. They object (i) that even if fake news is not widely believed, it can still swing elections decided by small margins; and (ii) that bad actors can now micro-target messages and use artificial intelligence to spread misinformation.

I agree that fake news could still have important societal effects even if it doesn’t persuade many people, but my understanding of the empirical literature (e.g., here, here, and here) is that there is not strong evidence that it has such effects; given this, I think that alarmist narratives concerning an “infodemic” of the sort that are advanced in Foolproof are highly misleading. I also agree that micro-targeting and advances in generative AI are certainly worth worrying about, although I think even here it is important to base our beliefs on available evidence, and this evidence does not seem to support many of the alarmist narratives about these phenomena.

More generally, I think there is often a confusion in talking about misinformation. In the review, I focus on misinformation in the very narrow sense of that term, where it refers to something like demonstrably false content of the sort one finds in fake news, outright lies, absurd conspiracy theories, and so on, because this is overwhelmingly the kind of misinformation that van der Linden focuses on in the book. I think it is true in Western democracies that this kind of misinformation is not very widespread, is overwhelmingly consumed by audiences to affirm or rationalise pre-existing beliefs and attitudes, and is therefore largely symptomatic of other problems (e.g., institutional distrust, polarization, anti-establishment worldviews, and so on). That doesn’t mean it is wholly epiphenomenal. It is still important to study such phenomena, and as democratic citizens we should call out lies, falsehoods, and bullshit when we encounter them, but our understanding of misinformation should be based on the evidence, not on a partisan moral panic that greatly exaggerates the threat.

Nevertheless, I think that misleading information in a broader sense is pervasive and likely highly consequential in shaping public attitudes and behaviours. However, very little of the high-quality misleading information that we actually need to worry about takes the form of fake news, absurd conspiracy theories, and so on; it involves political, corporate, cultural, and other elites producing subtle and sophisticated forms of biased revelation (i.e., cherry picking), spin, exaggeration, framing, misleading causal narratives, testimony from fringe but genuinely credentialed experts, and other such tactics. I regret that some readers of the review interpreted me as dismissing the prevalence and consequences of misleading content like this. Instead, my point was that it is not helpful to understand such phenomena with simplistic analogies likening misinformation to a contagious virus, and that we cannot protect ourselves from propaganda, manipulation, and misleading content with cute interventions like a “misinformation vaccine.” In fact, as with lots of modern misinformation research, in my view these highly influential ideas are an example of misinformation in the broader sense of that term: they form part of a misleading narrative, rooted in exaggeration, spin, and cherry-picked empirical results, that distorts our collective understanding of the world in consequential ways.
