Ok, this is the last substantive chapter in the thesis. In the next post—the concluding one—I’ll summarise the main lessons of previous chapters and outline some important questions and issues to address in future work.
In this chapter I address another challenge to representationalist accounts of the mind, this one focused on representational content. Roughly, the challenge is this:
- An essential property of mental representations is content.
- Content doesn’t exist—or at least content of the sort posited in contemporary cognitive science doesn’t exist.
For non-philosophers (perhaps for some philosophers as well), this likely sounds like gibberish. What does it mean?
Content
Representations have content. To a first approximation, what this means is that representations represent the world as being a certain way. Philosophers typically cash this out in terms of concepts like “satisfaction conditions” or “veridicality conditions.” The basic idea is that representations specify conditions that the world might or might not satisfy. If the conditions are indeed satisfied, the representation is accurate; if they aren’t, it isn’t.
As Hutto and Myin put it,
“Just what is content? At its simplest, there is content wherever there are specified conditions of satisfaction. And there is true or accurate content wherever the conditions specified are, in fact, instantiated.”
Some examples:
- A belief that Paris is the capital of France has truth conditions: it is true if and only if Paris is in fact the capital of France.
- A perception that the two lines in the Müller-Lyer illusion are of different lengths has accuracy conditions: the perceptual experience is accurate if and only if the two lines are in fact of different lengths. (The experience is thus inaccurate.)
- A map of London has accuracy conditions: it is accurate if and only if London has the spatial layout that the map depicts.
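To make the notion of satisfaction conditions a little more concrete, here is a deliberately toy sketch of my own (all names invented, mirroring the examples above): a representation is modelled as a predicate over world states, and it counts as accurate just in case the actual state of the world satisfies that predicate.

```python
# Toy sketch: a representation modelled as a predicate (its satisfaction
# conditions) over possible world states. All names here are hypothetical,
# chosen only to mirror the examples above.

world = {
    "capital_of_france": "Paris",
    "muller_lyer_lines_equal": True,   # the lines are in fact equal in length
}

# Each representation specifies the conditions the world must satisfy
# for it to count as accurate.
representations = {
    "belief: Paris is the capital of France":
        lambda w: w["capital_of_france"] == "Paris",
    "perception: the Muller-Lyer lines differ in length":
        lambda w: not w["muller_lyer_lines_equal"],
}

for description, satisfied_by in representations.items():
    status = "accurate" if satisfied_by(world) else "inaccurate"
    print(f"{description} -> {status}")
```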
The Hard Problem of Content?
At least since the work of the philosopher W.V.O. Quine, a number of philosophers have been sceptical of the existence of content. Why?
For Quine and many philosophers since, the only things that we should think exist are those described by the natural sciences. (In fact, Quine’s conception of what reality consists in was even more austere than this—something I will ignore here). As such, things like quarks, electrons, molecules, neurons, neural networks, causes, correlations and so on are OK. On the other hand, the idea that physical structures could have meaning—could be about things beyond themselves, could have content—becomes mysterious. What makes it the case that some pattern of activity in my brain has the content “a dog is in front of me” rather than some other content or none at all? After all, the pattern of activity is just a physical event. There is nothing intrinsic to such an event that means anything.
Philosophers who push this line of argument contend that nothing in the “physical” or “natural” world makes it the case that things have contents; content, they conclude, simply doesn’t exist.
These “eliminativists” about content (they want to eliminate content attributions from science) have put forward many arguments for this conclusion. One of the most famous points out that representational content implies the possibility of error or misrepresentation. That is, representations specify conditions that the world might not satisfy. For example, if you believe that Paris is the capital of Germany, your belief misrepresents reality. It is remarkably difficult to explain what—in purely “physical” terms—could make this the case, however. In the physical world, things just happen. The idea that anything is in error—or, indeed, that anything is correct—thus seems like a human construct.
There are roughly two kinds of eliminativists in this sense.
First, there are philosophers like Quine and, more recently, Alex Rosenberg who just deny that content or meaning exists across the board.
Second, there are philosophers who argue that these worries apply only to the kinds of content attributions found in subpersonal cognitive science. They argue that humans (i.e. whole, full-blooded humans, not their subpersonal neural mechanisms) nevertheless do have contentful mental states because <insert something nebulous about our unique status as sociolinguistic creatures and norm-governed social practices>.
The Mind as a Predictive Modelling Engine
Needless to say, I think such philosophers are mistaken. If the arguments of this thesis are correct, then contemporary neuroscience vindicates a conception of the mind/brain as a representational organ, constructing idealised models of those causal processes in the world responsible for generating the data to which the brain is exposed.
Nevertheless, I haven’t really addressed the enormous philosophical literature on content in the thesis (and I don’t in this chapter either). Why not?
There are two reasons.
First, as I argued in Chapter (post) 2, I think it is just a mistake to adjudicate questions about the existence and importance of mental representations in this way. The interesting question is whether structures in the brain function like representations. I have argued that—in the case of neural networks in the neocortex, at least—they do: they function as idealised structural surrogates for target domains, i.e. as models. This should be enough to secure their status as representations. Questions about content determination are thus less pressing. If our best cognitive science requires us to posit contentful representations, then a proper deference to science implies that there are in fact contentful representations. (And I don’t know what “philosophical naturalism” could possibly mean other than this deference to science).
Nevertheless, a kind of proto-theory of content determination has in fact emerged from previous chapters: one in which content is grounded in the structural similarity between generative models and their target domains in the body and world. I will return to this shortly.
Second, and relatedly, much of the philosophical literature on content starts with our folk psychological prejudices about content and then tries to explain how they can be accommodated within the “natural” world. As I noted in Chapter 2, I think this gets things backwards: a theory of mental representation should start with our most promising cognitive science, and then explore its implications for questions concerning content. These implications might very well be deeply surprising relative to our folk intuitions about meaning and representation.
In this sense I think that the project of explaining content determination should be a job for science (and philosophy of science), not metaphysics. And this is an issue because—as far as I’m concerned—much of the science simply isn’t in yet. Although I’ve outlined a theory of mental representation in previous posts, this theory has been highly schematic, and numerous questions obviously remain:
- Exactly what is represented in different cortical areas?
- How determinate are the contents of our mental representations?
- How many generative models do we command (see next post)?
- What effects do language, and the structured social practices in which our mental representations are evaluated by others, have on mental representation?
And so on.
As such, I don’t have a worked-out theory of content determination to offer.
Instead, in the rest of this post I’ll do two things.
First, I’ll briefly show how the theory of mental representation that emerges from previous chapters differs from standard folk psychology-inspired views about mental representation in philosophy.
Second, I’ll sketch, in broad and schematic outline, the beginnings of how I think content determination should be understood within the predictive mind.
The Fodorian Model
Much philosophical work on mental representation and questions concerning content determination has taken place within what I will call the “Fodorian model” of mental representation, after the towering influence of Jerry Fodor on its formation.
The Fodorian model has three components.
The first is a commitment to the view that our commonsense psychological understanding of ourselves is correct. As such, it is incumbent on cognitive science to explain how the mind works in a way that conforms to our folk psychological understanding of ourselves. For Fodor (and many other philosophers), this means that the basic mental states that guide thought and action are propositional attitudes, i.e. attitudes like believing, desiring, intending, and so on, which we take up towards propositions (e.g. one might believe that Trump is the president, and desire that he not be).
The second is the view that these propositional attitudes get their contents in the same way that the propositional contents of our sentences do: namely, via a language-like system of representation in the brain with mental words that concatenate to form mental sentences that express these propositional contents.
The third is a view about how the brain might house a language-like system of representation of this kind that underlies thought and reasoning. It claims that the brain is a kind of digital computer, that mental words are formally individuated arbitrary symbols that concatenate to form mental sentences, and that thinking is the execution of algorithms that operate on these mental sentences.
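As a rough caricature of this third component (a toy illustration of my own, not anything drawn from Fodor), one can picture mental words as arbitrary symbol tokens, mental sentences as concatenations of them, and thinking as formal rules that operate on the shapes of those sentences alone:

```python
# Toy caricature of the "language of thought" picture: mental words are
# arbitrary symbol tokens, mental sentences are concatenations of them,
# and "thinking" is a formal rule operating on sentence shapes alone.
# Purely illustrative; not an implementation of any actual theory.

# Mental sentences as tuples of arbitrary symbols.
belief_1 = ("IF", "#dog#", "THEN", "#animal#")   # a conditional "sentence"
belief_2 = ("#dog#",)                            # an atomic "sentence"

def modus_ponens(conditional, atom):
    """Derive the consequent when the antecedent matches, looking only at
    the formal shape of the symbol strings (not at what they mean)."""
    if conditional[:1] == ("IF",) and "THEN" in conditional:
        then_index = conditional.index("THEN")
        antecedent = conditional[1:then_index]
        consequent = conditional[then_index + 1:]
        if antecedent == atom:
            return consequent
    return None

print(modus_ponens(belief_1, belief_2))  # -> ('#animal#',)
```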
In this context, the challenge of explaining how our mental states come to have the contents they do becomes the challenge of explaining what determines the reference mapping from our mental words to those features of the world that they refer to. By far the most influential strategy here is to appeal to some species of causal relation between mental words and their putative referents. For example, the reason the mental word #dog# in my brain means dog is that it is reliably caused by the presence of dogs.
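And here is a correspondingly toy picture of the causal-covariational strategy (again my own sketch, with invented data): an internal state’s putative content is read off from whatever worldly feature most reliably causes it to fire.

```python
# Toy sketch of a causal/covariational approach to content: the "meaning"
# of an internal state is read off from what most reliably causes it to
# fire. Illustrative only; real informational semantics is far subtler.
from collections import Counter

# Hypothetical log of (worldly cause, internal state that fired) pairs.
history = [
    ("dog", "#dog#"), ("dog", "#dog#"), ("dog", "#dog#"),
    ("fox", "#dog#"),                     # an occasional misfire
    ("cat", "#cat#"), ("cat", "#cat#"),
]

def putative_content(state, log):
    """Return the cause that most often elicits this internal state."""
    causes = Counter(cause for cause, s in log if s == state)
    return causes.most_common(1)[0][0] if causes else None

print(putative_content("#dog#", history))  # -> 'dog'
```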
The Mind as a Predictive Modelling Engine
There are many respects in which the theory of mental representation that I’ve outlined in previous chapters differs from the Fodorian model. Here are just a few.
First, it’s not motivated by the attempt to accommodate folk psychology. Instead, it’s motivated by research in the cognitive sciences that itself focuses on a range of phenomena that folk psychology appears to be silent on: how brains overcome the noise and ambiguity of sensory data, how sensorimotor control overcomes signalling delays, deep learning, how unsupervised learning is possible, and so on.
Second, the Fodorian model is associated with the view that reasoning is logical in character, whereas the representations and inferential procedures outlined in this thesis are probabilistic and statistical.
Third, the representational character of generative models is not based on causal relations between mental words and their referents but rather on the holistic structural similarity between generative models and their target domains in the world. In fact, as I will note shortly, it is via acquiring accurate generative models that brains acquire the capacity to enter into internal states that are reliably caused by features of the environment.
Finally, unlike the Fodorian model, a generative model-based theory of mental representation offers at least a schematic explanation of how unsupervised learning is possible. What is interesting about this explanation is how it reverses standard theoretical approaches to mental representation in philosophy. Rather than starting with a theory of representation and then trying to explain how misrepresentation is possible, theories such as predictive processing start with a focus on a kind of system-accessible representational error, and then seek to explain how accurate representation of the world emerges from that.
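To give a flavour of this reversal, here is a deliberately minimal sketch in the spirit of prediction-error minimisation (my own illustration, not a model from the predictive processing literature): the system only ever has access to its own prediction errors, yet adjusting its internal estimate so as to reduce them drags that estimate towards the hidden quantity generating its input.

```python
# Minimal sketch in the spirit of prediction-error minimisation: the system
# never sees the hidden cause directly, only the error between its own
# predictions and incoming samples, yet reducing that error drags its
# internal estimate towards the true value. Illustrative only.
import random

random.seed(0)
true_hidden_cause = 4.0        # the worldly quantity generating the data
estimate = 0.0                 # the system's internal estimate
learning_rate = 0.1

for step in range(200):
    sample = true_hidden_cause + random.gauss(0, 0.5)  # noisy sensory input
    prediction_error = sample - estimate               # system-accessible error
    estimate += learning_rate * prediction_error       # adjust to reduce the error

print(round(estimate, 2))  # ends up close to 4.0: accuracy emerges from error
```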
Content Determination in the Predictive Mind
Given this, how should content determination be understood within a conception of mental representation in terms of predictive modelling?
Again, I don’t have a full-blown theory here, but here are some preliminary remarks. For more on this topic, see here, here, and here.
First, just like models more broadly, a generative model represents what it is supposed to be structurally similar to—namely, its target. I’ve argued that the target of a generative model is the causal process responsible for generating the data to which it is exposed. A generative model is thus accurate to the extent that it contains variables and parameters that map onto the actual causal-statistical structure of its target domain. The representational significance of model variables is thus dependent on their place within the broader model structure. For example, a model variable represents dogs if it plays the same kind of causal-statistical role within our internal model of the world as dogs actually play within the causal-statistical structure of the world.
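Here is a cartoon of this role-based picture (a toy example of my own; all labels and numbers are invented): the world below is a tiny causal structure in which one hidden cause drives two observable effects, and the model contains an arbitrarily labelled variable whose place in the model’s structure mirrors that cause’s place in the world’s structure. On this picture the variable is about dogs in virtue of that structural correspondence, not in virtue of its label.

```python
# Cartoon of structure-based content: a model variable counts as representing
# a worldly cause when it occupies the corresponding node in a structurally
# similar causal model of the target domain. All labels and numbers are
# invented for illustration.
import random

random.seed(0)

def world(dog_present):
    """The target domain: one hidden cause drives two observable effects."""
    barking = random.random() < (0.9 if dog_present else 0.05)
    fur_on_sofa = random.random() < (0.7 if dog_present else 0.1)
    return {"barking": barking, "fur_on_sofa": fur_on_sofa}

print(world(dog_present=True))   # e.g. {'barking': True, 'fur_on_sofa': False}

# The generative model: an arbitrarily named latent variable ("z17") whose
# place in the model's causal-statistical structure mirrors the place that
# "dog" occupies in the world's causal-statistical structure.
generative_model = {
    "latent": "z17",
    "effects": {"obs_a": (0.9, 0.05), "obs_b": (0.7, 0.1)},  # P(effect | latent on/off)
}

# The structural correspondence in virtue of which, on this picture,
# z17 is about dogs (rather than cats, or nothing at all):
correspondence = {"z17": "dog", "obs_a": "barking", "obs_b": "fur_on_sofa"}
print(correspondence["z17"])  # -> 'dog'
```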
Second, in the case of online perception (rather than, say, imagination), a generative model is accurate to the extent that it correctly indexes the evolving state of the target domain (i.e. the state of the world we are currently perceiving), where this means identifying a vector of variable-values that corresponds to the actual state of the relevant features of the world.
In the most basic case, one can think of this in terms of concepts like covariation and indication that have played such an enormous role in philosophical theories of content. The idea is this: the capacity to reliably indicate the state of the world is dependent on installing a generative model that correctly recapitulates the world’s causal-statistical structure.
Think of it like this. In the case of vision, we want states of our visual system to reliably covary with the presence of the relevant features of the world that we are visually attending to. For example, we want to indicate that there is a table in front of our eyes if and only if there is in fact a table in front of our eyes.
If we have an accurate generative model—one that actually captures the causal-statistical structure of the relevant target domain—then identifying the state of the model capable of generating (top-down) the relevant visual input to which we are exposed under such conditions will result in successful indication or covariation of this kind. The upshot is that—in theories such as predictive processing—it is our capacity to represent the world, to build models of the world, that explains our capacity to enter states that reliably covary with features of the world, rather than the other way round.
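Continuing the same toy example (still purely illustrative): if the model’s parameters match the world’s actual statistics, then identifying the latent state most likely to have generated the current input (top-down) yields an internal state that reliably covaries with whether a dog is actually present.

```python
# Continuation of the toy example above: with an accurate generative model,
# inferring the latent state that best explains the current input yields an
# internal state that covaries with the actual state of the world.
import random

random.seed(1)

def world(dog_present):
    """The world's actual statistics (which the model happens to match)."""
    barking = random.random() < (0.9 if dog_present else 0.05)
    fur = random.random() < (0.7 if dog_present else 0.1)
    return barking, fur

def likelihood(dog, barking, fur):
    """P(observations | latent), using the model's (accurate) parameters."""
    p_bark = 0.9 if dog else 0.05
    p_fur = 0.7 if dog else 0.1
    return (p_bark if barking else 1 - p_bark) * (p_fur if fur else 1 - p_fur)

def infer_dog(barking, fur, prior=0.3):
    """Posterior probability that the latent 'dog' variable is on."""
    joint_dog = prior * likelihood(True, barking, fur)
    joint_no_dog = (1 - prior) * likelihood(False, barking, fur)
    return joint_dog / (joint_dog + joint_no_dog)

# The inferred state covaries with the world across many episodes.
hits = 0
trials = 1000
for _ in range(trials):
    dog_here = random.random() < 0.3
    barking, fur = world(dog_here)
    believes_dog = infer_dog(barking, fur) > 0.5
    hits += (believes_dog == dog_here)

print(f"agreement with the world: {hits / trials:.0%}")  # well above chance
```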
Summary
Of course, there are many complications, qualifications, open questions, and so on. I flesh this story out somewhat in the thesis chapter, but because this post is once more getting too long, I’ll stop there.