Is emotionality a fingerprint of misinformation?


——

According to this thread, anyone who is sceptical that emotionality (the tendency to evoke emotions) is a fingerprint of misinformation is simply ignorant of scientific consensus on the topic. I disagree and would like to explain why.

First, it’s important to be clear about which claim is being evaluated. If emotionality is a fingerprint of misinformation, misinformation on average must exhibit higher rates of emotionality than reliable information. That’s a striking claim which, if true, has important implications.

Here are some things that obviously *cannot* support this claim. It is not enough to point to some instances of misinformation and show that they exhibit emotionality. Lots of reliable information exhibits emotionality and lots of misinformation is dispassionate.

Likewise it is not enough to show that people sometimes manipulate others by influencing their emotions. People sometimes draw attention to important truths by activating emotions and people sometimes manipulate by using neutral, superficially “objective” communication.

So the question is not whether misinformation is sometimes linked to emotionality (of course it is), or whether people sometimes attempt to manipulate others by activating their emotions (of course they do).

The question is whether misinformation on average exhibits higher rates of emotionality than reliable info. That’s what you need to show if you want to use emotionality as a flag of misinfo and teach people that emotionality is a “misinformation technique”.

How would one establish this? First, one would need to *define* misinformation in such a way that it could be reliably and objectively identified and measured. This already creates numerous issues.

As Joe Uscinski points out, existing attempts to define misinformation are mired in confusion and subjectivity, typically just boiling down to “ideas with which I, the researcher, personally disagree.”

To see some of the issues, consider defining misinformation as false info. That *obviously* can’t work. First, it assumes an absurd God-like perspective from which misinformation researchers have somehow transcended fallibility and managed to comprehensively evaluate all claims.

Second, much false content is not even intended to be believed. It simply involves jokes, irony, trolling, etc. (Interestingly this plausibly applies to *a lot* of so-called “fake news,” which is often propagated and disseminated without anyone seriously believing it.)

Third, most misleading content is not straightforwardly false, including almost all the most insidious forms of propaganda. Most obviously, “biased revelation” (roughly: cherry picking) involves strategically communicating a biased sample of truths.

For much other propagandistic communication – images, analogies, simplistic narratives, and so on – it’s not even clear they have truth values, and yet this content is still highly misleading and massively consequential in our informational ecosystem.

For these sorts of reasons, many researchers want to define misinformation not as false information but as misleading information. Fair enough. But notice how *expansive* and *nebulous* the category of misleading information is. Moreover, it opens up deep issues of its own.

What makes info misleading? Is it info that leads to false beliefs? But much misleading info leads to true but selective beliefs. Is it info that leads to selective beliefs, then? But communication and belief are *necessarily* selective. A comprehensive world model is a fantasy.

Instead, misleading information seems to be information that *in some sense* leads to the “wrong” beliefs. Anyone who thinks “information that in some sense leads to the wrong beliefs about a topic” is amenable to objective, scientific measurement is kidding themselves.

The first problem with attempts to identify the “fingerprints” of misinformation, then, is that the very concept of misinformation is mired in confusion and subjectivity. But that is *just* the first problem; the problems get even worse.

Even if one sets aside these issues as merely “philosophical” and assumes that misinformation is a legitimate scientific category, any attempt to identify the fingerprints of misinformation must somehow analyse a representative sample of reliable information and misinformation.

Now a second problem emerges: when claiming that misinformation exhibits higher emotionality than reliable info, misinformation researchers *never* focus on anything like representative samples of these categories.

Instead, they typically use mainstream news (BBC, NYT) as proxies for reliable information and fake news as a proxy for misinformation.

To be clear: I don’t deny *fake news* exhibits higher emotionality than mainstream news. What I deny is that this observation provides a sufficient evidential basis for establishing that misinformation in general exhibits higher emotionality than reliable information.

First, fake news (i.e., wholly fabricated events that mimic real news) is *absurdly* unrepresentative of misleading content. It’s extremely rare; it has negligible real-world consequences; and it barely influences the very small minority of the population who consume it.

If one is interested in misleading and politically consequential communication, fake news is thus basically irrelevant, as I argued here. This isn’t reflected in the disproportionate scientific and media attention to fake news but it is true.

Where is the more insidious misleading content that actually influences people? Lots of it is in elite mainstream news outlets – i.e. the very outlets misinformation researchers often simply *define* as reliable information in their analyses of the fingerprints of misinformation 🙂

This is not just a matter of highly partisan news, which in the US is basically *all news*. Even less partisan news coverage often leads to systematically distorted beliefs because of how it reports on a highly nonrandom sample of events.

And then consider some of the most shocking recent propaganda. 20 years ago, the US invaded Iraq on the basis of falsehoods about WMDs. Like most American imperialism, it was (even by the NYT’s later admission) legitimated by skewed reporting in elite, mainstream outlets.

That’s just scratching the surface. Unlike socially and politically irrelevant fake news, one kind of communication that actually shapes societies is forecasts by elite commentators and experts. As work by Tetlock and others has shown, the reliability of such forecasting is abysmal and largely unconstrained by evidence or rationality. As a result, much highly influential forecasting is misleading and harmful.

Similarly, we are currently going through a “replication crisis” (in addition to many other crises) in the social sciences. A truly shocking amount of published scientific research is completely unreliable. Again, unlike fake news, social science actually shapes politics through various channels – and yet much research is rife with misinformation.

Moreover, if one is looking at misinformation, in my view pretty much everything in major religious texts should be included as well. Again, unlike socially irrelevant fake news, people – literally billions of people – are actually influenced by the contents of those books.

And on and on. The BBC has recently decided it is going to combat “disinformation”. In so doing, it greatly exaggerates the problems it confronts and propagates a tremendous amount of unfounded misinformation which literally millions of people read.

The basic point is that an *enormous* amount of mainstream, elite communication is highly misleading. On *any* definition of misinformation as misleading content, it would have to be included in analyses that purport to reveal “fingerprints” of misinformation *in general.* Does it get included in such analyses? Nope.

Fake news is a tiny, extremely unrepresentative, basically politically irrelevant example of misleading content. Therefore it is extremely unscientific to take evidence that *fake news* exhibits higher rates of emotionality as a sufficient basis for drawing inferences about communication and misinformation in general.

So much for misinformation. What would it be to focus on a representative sample of reliable information? As already noted, it is often just stipulated within this research that everything that comes out of elite, mainstream news or public health authorities is reliable information – a claim which is extremely problematic for numerous reasons.

But in any case, reliable human communication includes *much, much* more than news. Sander van der Linden points out that anti-vax content tends to exhibit higher emotionality than pro-vax content. Fair enough. But is pro-vax content of the sort pushed by public health authorities representative of reliable communication more broadly?

Historically, groups that criticise elites and elite consensus were denigrated as being overly emotional. Partly that rests on unfounded stereotypes; but it also rests on the fact that critics of a system are often angry and outraged. That’s true of modern anti-vaxxers; historically, it is also true of many anti-racists.

If one took existing “scientific” research on the “discovered” fingerprints of misinformation and compared the communication of Walter Cronkite with, say, Malcolm X, it is clearly the latter – whose content was often angry and highly polarising – that would fail the test. Could this tell us something interesting about this research?

In general, if one wants to claim that reliable communication exhibits lower rates of emotionality, one must ensure one is looking at a representative sample of reliable communication. Arbitrarily restricting the focus to NYT headlines or public health advice and then drawing wild extrapolations is bad science.

In my view, these basic reflections undermine the litany of studies that van der Linden thinks are sufficient to establish the strong claim that emotionality is a fingerprint of misinformation. Not only do many such studies rest on extremely dubious methodological practices in their own right, but they make claims about misinformation and “inoculation” from misinformation based on experimental designs in which there is no serious reflection on whether the instances of reliable info and misinformation are in fact representative of these extremely broad categories.

The bottom line: These issues are complicated. Don’t be discouraged from questioning the putative consensus “findings” of misinformation research by claims that the scientific literature lends them overwhelming support.

In my experience, these findings are typically just artefacts of contestable and often highly dubious methodological assumptions, combined with massively over-hyped extrapolations from narrow experimental results.

Given the sheer amount of deference this research gets from governments, international organisations, think tanks, and big corporations when it comes to designing interventions and regulating our informational ecosystem, that strikes me as a problem.