In this blog post I’m going to briefly(ish!) outline the contents of the second chapter of my doctoral thesis. This chapter is probably the most boring of the thesis, but it lays the groundwork for some of the views that I defend in later chapters, as well as the methodological approach that I follow throughout the thesis as a whole.
Chapter Aims
The aim of the chapter is twofold:
- To briefly outline some foundational controversies concerning mental representation in cognitive science and its philosophy.
I call these controversies the “representation wars.” This expression is typically used to describe a set of enduring controversies concerning the importance of mental representations in cognitive processes. I extend the term so that it also covers controversies among representationalists concerning what might opaquely be called the “nature” of mental representations. These latter controversies have often been just as heated, enduring, and intractable as debates over the importance of representations. (Consider the notorious imagery debates, for example, or the never-ending debate over the importance of combinatorial symbols in neural computation).
- I argue that a frustrating characteristic of the representation wars is the lack of any consensus on how to adjudicate them—specifically, a lack of consensus on two questions: first, what does it take for a neural structure to qualify as a mental representation? Second, what do we want from a theory of mental representation?
To help solve this problem, I motivate two claims:
- Mental representations should be understood as functional analogues of external representations.
- A theory of mental representation should be motivated exclusively by the explanatory constraints of our best cognitive science, and not, as many philosophers assume, by folk psychological or semantic intuitions.
The Importance of Mental Representations in Cognitive Processes
Ok, so the first set of controversies concerns how important mental representations are in cognitive processes. On one side of this debate—the representational side—lies orthodox cognitive science. On the other side sits a motley crew of neo-behaviourists, physicalist reductionists, and those at the so-called “radical” end of embodied cognition, broadly construed.
With respect to many aspects of cognition—memory, imagery, complex reasoning, planning, and so on—there really is no controversy here as far as I’m concerned: of course such capacities require that we form mental states that represent things.
Nevertheless, there are enduring debates over the extent to which “lower-level” cognitive capacities—perception, sensorimotor processing, “everyday coping,” etc.—rely on internal representations.
What gives? Presumably the extent to which psychological capacities draw on mental representations is a straightforward empirical issue. Why, then, has it proven so difficult to reach an empirical consensus on the matter?
There are many answers to this question, but one that is especially frustrating—and interesting from a philosophical perspective—is this:
- Such controversies are hampered not only by first-order disagreements concerning how the mind works, but also by second-order disagreements concerning how it would have to work to qualify as representational.
Clearly, if we are going to adjudicate controversies over how important mental representations are in cognitive processes, we need to know what mental representations are—what it would take for a neural structure to qualify as a mental representation.
Call this latter question the
Constitutive Question: what are the distinctive properties and relations in virtue of which something qualifies as a mental representation?
If one turns to the scientific and philosophical literature, however, it is remarkably difficult to find either consensus on this issue or answers that are capable of operationalising debates concerning the importance of mental representations in the mind.
To see this, consider two superficially plausible—and widespread—answers that one finds in the literature.
Deferring to Scientific Usage?
One obvious way of answering the Constitutive Question is this: mental representations are just whatever cognitive scientists say they are. Full stop. After all, the term ‘mental representation’ is a theoretical concept, not a term of ordinary discourse. As such, it is plausible that it should mean whatever cognitive scientists who draw on the concept mean by it.
There are at least two problems with this:
- As many philosophers have now pointed out (especially William Ramsey), many in contemporary neuroscience and machine learning (especially neural network modelling) use the term “representation” just to mean a state of a cognitive system (i.e. a neural network) that responds selectively to certain bodily and environmental conditions. (Think edge detectors or face detectors). But: (1) There seems to be no loss in replacing all talk of “representation” in such contexts with appeal to terms such as “detectors,” “tracking states,” and so on. (2) Nobody has ever denied—nor could anyone deny—that cognitive systems possess states that respond selectively to certain conditions. The representational theory of mind thus becomes a banality. (The toy sketch after this list illustrates just how cheap such “detectors” are.)
- Many other cognitive scientists—especially those in traditions like embodied cognition and enactivism—explicitly deny that differential responsiveness to bodily or environmental conditions is sufficient for representational status. As such, deferring to scientific usage is not possible, because there is no consistent scientific usage of the term.
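To see just how thin this usage is, here is a minimal sketch (entirely invented for illustration, in Python) of a state that “responds selectively” to an environmental condition. On the detector usage of “representation,” this trivially counts as one:

```python
# A toy "detector" in the thin sense criticised above: a state that
# responds selectively to a condition. The threshold and the input
# encoding are invented purely for illustration.
def edge_detector(left_luminance: float, right_luminance: float) -> bool:
    """Fires just in case there is a sharp luminance contrast.
    Mere selective response: no map-, model-, or symbol-like function."""
    return abs(left_luminance - right_luminance) > 0.5

print(edge_detector(0.9, 0.1))  # True: the "detector" fires
print(edge_detector(0.5, 0.5))  # False: no contrast, no response
```

Nothing here goes beyond what “tracking state” or “detector” already captures, which is the first worry above; and nobody denies that such states exist, which is the second.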
Deferring to Philosophical Usage?
When many philosophers try to answer the Constitutive Question, they typically claim that mental representations are just mental states with semantic properties or content, where this is typically understood in terms of satisfaction conditions of some kind (e.g. my belief that Paris is the capital of France is true if and only if certain conditions are satisfied: namely, that Paris is in fact the capital of France).
This kind of answer also underlies the widespread view in philosophy that debates about the importance of mental representations in cognitive processes boil down to whether it is possible to explain in “naturalistic” terms how states of the brain come to express specific contents (see below).
Here is the problem with this view: it doesn’t help to operationalise the debate at all.
- How do we tell whether the structures inside our brain have semantic properties—are about anything?
- Semantic characterisations are cheap. One can describe just about anything as instantiating “semantically evaluable” states, e.g. “my thermometer believes the temperature is rising; it has a strong desire to lower the temperature; etc…” An answer to the Constitutive Question should explain when such semantic characterisations are explanatorily useful and when they are not.
- More generally, this answer does not explain what it is for something to function as a mental representation.
Functional Analogues of External Representations
Here is the answer to the Constitutive Question that I prefer: mental representations are structures within the brain that function like familiar external representations such as maps, models, diagrams, pictures, sentences, and so on.
On this view, when we posit mental representations, what we are saying is something like this: “You see how external representations work? Well, there are things inside the brain that work kind of like that.”
Why accept this view?
- I think it captures how the concept of mental representation has in fact historically been understood in both philosophy and cognitive science. (Of course, it doesn’t capture all uses of the term “representation”—e.g. the idea that mere selective response is sufficient for representational status—but it helps to explain what is problematic about such uses).
- It renders debates over the importance of mental representations in cognitive processes both substantive and interesting. It is not trivial that intelligent agents are reliant on structures within the brain that function like external representations.
For an example, think of the concept of cognitive maps. External maps are structures that perform specific functions for people in virtue of possessing specific kinds of characteristics. Often we want to navigate an area, but doing this effectively requires information to which we lack direct access. A map re-presents this information by displaying the relevant layout of the environment in a form accessible to a representation-user.
When Edward Tolman posited “cognitive maps” in 1948, what he was saying is that mammals make use of structures within the brain that function kind of like such familiar external maps—specifically, that there are structures within the brain that re-present the spatial layout of environments in a form that can be used to solve the kinds of problems that external maps solve: navigation, planning, and so on. Of course, Tolman had no idea how neural structures could exhibit these characteristics. His conjecture—now vindicated by over half a century of neuroscientific research—was simply that there must be such internal maps in order to account for the kinds of behaviours that rats (and other mammals) exhibit.
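To make this functional profile concrete, here is a minimal sketch (not from the thesis; the layout, place names, and planner are all invented) of a map-like structure together with a consumer that exploits it for route planning, the kind of functional role Tolman attributed to cognitive maps:

```python
from collections import deque

# A toy "map": a structure that re-presents the layout of an
# environment (which places connect to which) in a form that a
# map-user can exploit. All place names are made up.
TOY_MAP = {
    "home": ["park", "shop"],
    "park": ["home", "lake"],
    "shop": ["home", "lake"],
    "lake": ["park", "shop"],
}

def plan_route(layout, start, goal):
    """Breadth-first search over the map: one simple way a
    representation-user can exploit the re-presented layout to
    plan a route it has never actually travelled."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for neighbour in layout[path[-1]]:
            if neighbour not in visited:
                visited.add(neighbour)
                frontier.append(path + [neighbour])
    return None  # no route exists

print(plan_route(TOY_MAP, "home", "lake"))  # ['home', 'park', 'lake']
```

The point of the sketch is not the algorithm but the division of labour: the structure carries accessible information about spatial layout, and a consumer uses it to solve navigation problems. That, on the view defended here, is the profile a neural structure must exhibit to count as a cognitive map.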
Summary
To summarise, then, I think that debates over how important mental representations are in cognitive processes should be understood as debates over whether the structures implicated in our cognitive mechanisms perform similar functional roles to paradigmatic public representations. Of course, this claim is hardly original, and there are lots of qualifications and complications here, which I address at greater length in the actual thesis. Further, I expect that many people would find this proposal so obvious as to not even need stating. (I’ve also spoken to others who find it so absurd that they can’t imagine how anyone could state it). Nevertheless, I think that it provides an especially useful way of adjudicating debates over the importance of mental representations in cognitive processes that I return to when I consider challenges to representationalist accounts of perception in Chapter 8.
In the next chapter (and corollary blog post) I will outline the work of Kenneth Craik, who drew on just this conception of mental representation to argue that intelligent animals make use of neural structures that function as models of target processes in the body and environment—models that perform a specific task that Craik believed to be central to adaptive success: prediction.
The Nature of Mental Representations
As noted above, in addition to controversies over the importance of mental representations in cognitive processes, there is also a set of foundational controversies in cognitive science among representationalists concerning the “nature” of mental representations.
Here there are really two issues:
- Format
First, what form do mental representations take in the brain? Specifically, what are the structural properties of the neural vehicles of mental representation? One classic way of taxonomizing this debate is by distinguishing between symbolists—those who think that representational vehicles take the form of arbitrary symbols amenable to rule-governed composition and transformation—and connectionists—those who think of representations as distributed patterns of activity and the configurations of “weighted” connections in neural networks underlying their transformations.
This taxonomy is probably too coarse-grained, however: it neglects the enormous variety of views about representational format within these research programmes, as well as the many theories that straddle the divide.
In any case, questions about representational format are straightforwardly scientific questions. As such, I won’t dwell on the issue here, beyond the toy contrast sketched below.
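For concreteness only, here is a toy contrast (everything in it is invented for illustration) between the two pictures:

```python
import numpy as np

# Symbolist picture: arbitrary symbols composed and transformed by
# explicit, structure-sensitive rules. (Toy rule and symbols invented.)
def apply_rule(expr):
    """If the expression has the form ('AND', p, q), licence p."""
    if isinstance(expr, tuple) and expr[0] == "AND":
        return expr[1]
    return expr

print(apply_rule(("AND", "p", "q")))  # 'p'

# Connectionist picture: a representation is a distributed pattern of
# activity; processing is its transformation by weighted connections.
rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 4))          # connection weights (random here, learned in practice)
pattern = np.array([1.0, 0.0, 1.0, 0.0])   # distributed activity vector
print(np.tanh(weights @ pattern))          # transformed pattern of activity
```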
- Content
The second question concerns not the format of mental representations but their contents. It asks: what determines the contents of the mental representations inside our brains? What makes it the case that some pattern of activity in my brain represents, say, my grandmother rather than something else (e.g. my grandfather) or nothing at all?
Unlike questions about format, this question has been pursued almost entirely by philosophers. The philosophical literature contains many “theories of content” attempting to answer this question of “content determination”: causal theories, co-variance theories, informational theories, functional role theories, and so on.
I won’t bother summarising this (ENORMOUS) literature here. Instead, I want to ask a broader question: what are such theories trying to accomplish? What would count as success?
Here I think that there are two answers that one can find in the literature.
Naturalizing Folk Psychology
According to one view, a theory of representational content is in effect a theory that “naturalizes” folk psychology. To “naturalize” an area, here is what you do:
- First, take all of the statements that we hold to be true about an area. In this case: all of the statements about the mind that we are led to produce by “folk psychology,” namely the commonsense framework that we each use to describe, interpret, predict, and explain one another’s mental states and behaviour.
- Second, show that all of these statements can be translated into a set of different statements that draw exclusively on the concepts countenanced by a view called “metaphysical naturalism.” These concepts include things like causation, covariation, etc., but they do not include concepts like content, meaning, reason, truth, and so on. Thus the challenge is to explain how our folk psychological states come to express the contents they do by appeal to only those properties and relations countenanced by a “scientific metaphysics.”
I bang on about this project at much greater length in the thesis chapter. For now, I will just say this: I think that it is a fundamental mistake to identify this project with the project of providing a theory of mental representation.
There are two reasons for this. First, many of the representational constructs from cognitive science and AI have nothing to do with folk psychology. As such, this project is extremely parochial. Second, it is not obvious why any theory in science should have to accommodate folk intuitions of any kind, whether about the mind or anything else. Of course, it is an interesting question how much of our folk psychological understanding of the mind is consistent with research in cognitive science. But we should be open to the possibility that the answer to this question is: not very much.
A Different Project
Given this, what should a theory of mental representation seek to achieve? Basically, I think that it should take the following form: “this is how mental representation and content determination should be understood if such and such a theory/research programme in cognitive science is along the right lines…”
This effectively makes the project one in the philosophy of science, not “naturalistic metaphysics.” Just as philosophers of biology seeking to explicate the concept of, say, natural selection as it features in evolutionary biology should not be constrained by commonsense intuitions about biology, and philosophers of physics trying to explicate the concept of time shouldn’t be constrained by our folk intuitions about time, likewise for theories of mental representation.
Before concluding, I should address an apparent tension: on the one hand, I have argued that the concept of mental representation should be understood as drawing on our commonsense understanding of external representations; on the other, I am now arguing that theories of mental representation should not be constrained by folk intuitions.
The tension here is illusory. Consider, for example, the proposal that a biological structure functions as a pump. In making this claim, we are drawing on our ordinary understanding of pumps. Nevertheless, whether there is a structure that exhibits this functional profile, and the nature of the structure that performs that function, are exclusively scientific issues.
Likewise: I think that a useful way of understanding the “representational theory of mind” is in terms of the claim that the neural mechanisms that underlie our psychological capacities make use of structures that function similarly to external representations. Nevertheless, the properties of the structures that perform these functions, and how they acquire their contents, are exclusively scientific issues.
Conclusion
To summarise, then:
- When we posit mental representations, what we are doing is arguing that there are things within the brain that function kind of like external representations—like maps, models, graphs, diagrams, pictures, etc. Of course, so far I have not said very much about the functional profile of external representations. For one kind of representation, at least—namely, models—I address this issue at much greater length in future chapters (posts).
- A theory of mental representation should be exclusively answerable to the explanatory constraints of our best cognitive science. In future chapters (posts), I argue that our most promising body of research in this area implies a conception of mental representation in terms of causal generative models that share an abstract structure with those processes responsible for generating the data to which the brain is exposed.
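Purely as a gesture at the shape of that idea, here is a toy sketch (invented for this post, not drawn from the thesis) in which an internal model shares the structure of the process generating the data and inverts it to recover the hidden cause:

```python
import numpy as np

rng = np.random.default_rng(1)

causes = np.array([0.0, 1.0])   # two possible latent causes in the world
prior = np.array([0.5, 0.5])    # the model's prior over those causes
NOISE = 0.3                     # sensory noise level (invented)

def generate(cause):
    """The world's generative process: a latent cause plus sensory noise."""
    return cause + rng.normal(scale=NOISE)

def posterior(observation):
    """The internal model mirrors the generative structure above and
    uses Bayes' rule to infer the hidden cause from the observation."""
    likelihood = np.exp(-0.5 * ((observation - causes) / NOISE) ** 2)
    unnormalised = likelihood * prior
    return unnormalised / unnormalised.sum()

obs = generate(cause=1.0)
print(posterior(obs))  # posterior mass shifts toward the true cause, 1.0
```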