Any attempt to understand human psychology must reconcile two characteristics of our species: the extraordinarily adaptive character of human intelligence in some contexts, and the conspicuous manifestations of stupidity and irrationality in others.
On the one hand, human beings are remarkable. Not morally or aesthetically remarkable, but remarkable from a quasi-engineering perspective. With astonishing energetic efficiency and speed, even young children engage in a range of psychological tasks with a degree of success and flexibility far beyond the capabilities of our most impressive artificial intelligence. Of course, we share much of this flexible intelligence with other animals, but Homo sapiens has a suite of capacities that it can plausibly call its own: capacities for language-based reasoning, deliberating, planning, and reflecting. “Man is a rational animal,” declared Aristotle, and he had a point.
Over two millennia later, the science fiction novelist Robert Heinlein offered a witty response: “Man is not a rational animal; he is a rationalizing animal.” With this comeback, Heinlein joined a long tradition of thought in philosophy and psychology that focuses on the many ways in which our species distinguishes itself in the animal kingdom not by our rationality but by the extraordinary depths of irrationality, bias, and self-justification to which we are prone.
For example, there is what people routinely believe: from the religious and ideological myths, superstitions, and conspiracy theories that defy all evidence and reason to the more mundane self-serving and self-aggrandizing beliefs that people gravitate towards. These latter illusions result from a bizarre feature of the human mind: in addition to lying to others, we also lie to ourselves – a quirk of human psychology sustained by those psychological defense mechanisms that so preoccupied the great psychoanalysts: denial, repression, deflection, and more.
Then there is what people do not believe – the things that we are suspiciously ignorant of, whether they be our own flaws and failings, or the flaws and failings of the groups and communities to which we belong.
Of course, underlying such peculiar beliefs and ignorance is the way in which we think and reason. We seek out evidence that confirms what we believe and ignore or neglect evidence in tension with it. We drink our own Kool-Aid. As Heinlein’s quote reflects, we use our powers of reason as often to scramble around for post hoc rationalisations of our beliefs as to arrive at the truth.
What unifies such apparent failures is simple: their flagrant violation of what philosophers and psychologists call “epistemic rationality.” Epistemic rationality involves seeking out and processing information in ways conducive to forming genuine knowledge about the world (rather than, say, unjustified or superstitious beliefs). It is manifest in a certain idealised picture of scientific inquiry, and it requires conformity to norms of logic and probability theory.
What is striking about human psychology is how often we systematically violate basic norms of epistemic rationality. We distort and sabotage the information at our disposal, and ignore or neglect information that is unfavourable or unflattering to us. Such epistemic irrationality would be easier to understand if it were a universal feature of human psychology, but it is not: in many everyday contexts, we are remarkably adept at integrating and processing information. Indeed, it is not uncommon in neuroscience and psychology—not to mention economics—to claim that we are literally optimal at doing so.
Why, then, does the species that named itself Homo sapiens (Wise Man) so often fall prey to bizarre—even comical—unwisdom?
Of course, human irrationality is a soup composed of many ingredients, and this fact is reflected in the psychological literature. For example, some psychologists point to the practical limitations on human cognition, and the trade-offs between efficiency and accuracy that such limitations entail. As Herbert Simon famously put it, we must satisfice rather than optimise. Others point to the constraints and imperfections of the evolutionary process, a process that reliably gives rise to suboptimal designs. (Just think of the human spinal column.) Still others point to asymmetries in the costs of different errors: just as a smoke alarm biased towards false alarms can be better at detecting real fires than a more tentative system that carefully gathers evidence before leaping to conclusions, many biases in human psychology might result from a similar drive to avoid costly mistakes.
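The smoke-alarm logic can be made concrete with a little arithmetic. The sketch below is purely illustrative (the numbers and the `expected_cost` function are my own assumptions, not from the post): when misses are vastly costlier than false alarms, the detector that "irrationally" cries wolf has the lower expected cost.

```python
# Illustrative error-management sketch (hypothetical numbers).
# Fires are rare, but missing one is catastrophic, while a false
# alarm is merely annoying.

P_FIRE = 0.01             # probability that there really is a fire
COST_MISS = 1000.0        # cost of failing to detect a real fire
COST_FALSE_ALARM = 1.0    # cost of a needless alarm

def expected_cost(p_false_alarm: float, p_miss: float) -> float:
    """Expected cost per event for an alarm with the given error rates."""
    return ((1 - P_FIRE) * p_false_alarm * COST_FALSE_ALARM
            + P_FIRE * p_miss * COST_MISS)

# A jumpy, bias-towards-false-alarms detector...
trigger_happy = expected_cost(p_false_alarm=0.20, p_miss=0.01)
# ...versus a cautious detector that rarely false-alarms but misses more.
cautious = expected_cost(p_false_alarm=0.01, p_miss=0.20)

assert trigger_happy < cautious  # the "biased" detector is cheaper overall
```

Under these (made-up) costs, the jumpy detector's bias is not a defect but the expected-cost-minimising design, which is the shape of the argument applied to human biases.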
These standard explanations, however, apply to all species. To make progress in understanding what is genuinely unique about human irrationality, I believe that we must focus on what is genuinely unique about our species: the hypersocial niche that we inhabit. Our principal goals are social goals, not least because our material survival is and always has been dependent on our ability to navigate the social environment. Unlike other social animals, however, our social environment is composed of language-using gossipy primates with the ability to read the contents of our beliefs and desires. These agents are variously competitive, cooperative, and coalitional, and our survival and success are almost wholly dependent on the impressions that we make on them.
In this context, I believe that much of the epistemic irrationality characteristic of human psychology can be accounted for by the following fact:
Unlike the merely natural world, the social world that we inhabit often rewards epistemic irrationality and punishes epistemic rationality.
It is a fundamental insight of disciplines such as evolutionary biology and game theory that the introduction of other agents can radically transform the incentives that agents confront. The peacock’s costly and unwieldy tail, for example, illustrates how seemingly maladaptive traits can function as rational solutions to the problem of persuading other agents of one’s value. Indeed, the costs of energetic expenditure and increased risk of predation generated by such unwieldy tails are a feature, not a bug – a way of credibly signalling that the relevant peacocks have a range of sexually desirable traits (to peahens, at least). Similarly, extensive research in game theory reveals how sabotaging and neglecting information can be an optimal strategy in competitive interactions with other agents.
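The peacock case is an instance of costly signalling, and its logic can be sketched in a few lines. This is a minimal toy model of my own (the payoff numbers and names are assumptions, not from the post): the signal is credible precisely because it is profitable only for the type it advertises.

```python
# Minimal costly-signalling sketch (hypothetical payoffs).
# Two peacock "types" differ in how much an elaborate tail costs them.
# An honest-signalling equilibrium exists when the benefit of being
# chosen falls between the two types' signalling costs.

BENEFIT = 5.0  # mating benefit of being chosen by a peahen
COST = {"high_quality": 2.0, "low_quality": 8.0}  # cost of growing the tail

def payoff(quality: str, signals: bool) -> float:
    """Net payoff: benefit of being chosen minus the tail's cost, if signalling."""
    return (BENEFIT - COST[quality]) if signals else 0.0

# Only the high-quality type profits from signalling, so the tail
# honestly reveals quality even though it looks maladaptive.
assert payoff("high_quality", True) > payoff("high_quality", False)
assert payoff("low_quality", True) < payoff("low_quality", False)
```

The same incentive structure, with social reputations in place of tails, is what makes seemingly wasteful or irrational displays individually rational.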
A compelling but still largely heterodox line of thought in psychology draws on these and other insights to argue that many seemingly maladaptive and irrational features of human psychology are in fact rational solutions to the peculiar social problems that we confront. To give only three examples:
It has been argued that many of our beliefs are best understood as social tools (see, e.g., here, here, and here). We form such beliefs not because they are best supported by the available evidence but because they enable us to achieve various social goals. They enable us to fit in and stand out. They signal our commitment to cooperative social norms and advertise our loyalty to our coalition. They enable us to persuade other agents of our social value. Of course, we, the believers, are blissfully oblivious to such social functions of our beliefs, not least because we always have a rationalisation at hand when somebody challenges them. According to this research, however, this obliviousness is itself a kind of self-deception.
Similarly, it has been argued that much of our suspicious ignorance is socially adaptive ignorance. Knowledge can be risky for social creatures like us. It renders us accountable for our actions in uncomfortable ways, it threatens the relationships that we rely upon for support and belonging, and it challenges the myths that sustain our self-esteem and social groups.
Finally, much of human reasoning is plausibly socially adaptive reasoning. According to an increasingly influential position in psychology, the self-serving and self-justifying character of human reason is a feature, not a bug, enabling us to achieve various social goals (persuading our peers, defending our reputations, cheering on our tribes) that are often in tension with epistemic goals.
My view is that these insights transform our understanding of human psychology. “Thinking is for doing,” to paraphrase William James, and the chief thing that humans need to do is get along and get ahead in social games in which knowledge and rationality are often losing moves. An enormous body of research in both psychology and social epistemology focuses on how communities are essential in generating knowledge, but there is far less work focusing on the way in which social interactions and relationships threaten and often undermine knowledge. I believe that the latter phenomenon has implications for an enormous range of psychological phenomena: self-deception, motivated cognition, rationalisation, confabulation, motivated ignorance, reason, rationality, and even consciousness.
Further, although epistemic irrationality is born in individual minds, it does not die there. It transforms and often corrupts the broader social world. Individuals inflict their self-deception and overconfidence on others, and competing coalitions become trapped in self-serving echo chambers in which group loyalty is valued over truth. Understanding the social incentives that drive epistemic irrationality is thus crucial for understanding—and ultimately improving—human society.
From 2019 to 2023, I will be exploring these ideas as a Junior Research Fellow at Corpus Christi College, Cambridge. Because I am a philosopher, I don’t intend to do any original experimental work, although I do hope to be able to collaborate with neuroscientists and experimental psychologists in the future. Instead, my project will have three primary components.
The first is theoretical unification and synthesis. The fundamental insight that the social world often incentivizes epistemic irrationality has appeared in numerous guises across a disparate range of scientific fields (social psychology, anthropology, game theory, evolutionary psychology, etc.). I aim to draw this work together into a unified story, extracting the core insight from the additional, and often much more controversial, assumptions with which it is typically entangled, and to spell this insight out as clearly as possible.
The second is to clarify and advance the most compelling theoretical and evidential case for the existence and importance of social incentives in driving epistemic irrationality. Many of the phenomena that I have outlined have alternative—and typically much more influential—explanations in the psychological literature. For example, the dominant explanation of much of epistemic irrationality in social psychology appeals to the idea that people seek to protect their “self-concept” or “self-esteem” or “reduce cognitive dissonance.” I think that such explanations are typically far less compelling than those which focus on the distorting effects of social incentives. Demonstrating this is a central aim of the project.
Finally, I aim to trace out the most important implications of these ideas for foundational questions in psychology and the philosophy of mind. As I have noted, I think that the basic insight that the social world incentivizes epistemic irrationality has implications for an enormous body of psychological phenomena. I will be especially preoccupied with two issues, however.
The first concerns what is sometimes called the “rationality wars” or the “Great Rationality Debate.” Focusing on the way in which many examples of epistemic irrationality are in fact practically rational or adaptive in the social world that we inhabit makes an important contribution to the age-old controversy over how rational human beings are.
The second concerns the nature and functions of beliefs. Philosophers tend to hold peculiar views about beliefs – for example, that “beliefs aim at the truth,” or, worse, that beliefs are “necessarily” responsive to evidence and reasons. Focusing on the social functions of beliefs challenges many such orthodoxies and forces major revisions in our understanding of beliefs and belief formation.
I began this blog post with a central puzzle of human psychology: why does an otherwise intelligent, competent, and often rational species so frequently exhibit extraordinary examples of stupidity and irrationality? Situating the human mind in its proper context – a hypersocial niche with fundamentally different incentives from those of the merely natural world – offers a compelling answer to this question that warrants substantially more attention and research than it currently enjoys.