In this series of blog posts, I’m going to write up brief overviews of the contents of my doctoral thesis, “The Mind as a Predictive Modelling Engine: Generative Models, Structural Similarity, and Mental Representation.” In this post I’ll summarise Chapter 1, which outlines the philosophical context of the thesis, the chief claims I defend, and the contents of the next 10 chapters.
My overarching aim in the thesis is to outline and defend a systematic theory of mental representation, or at least significant aspects of mental representation, which draws on recent research in neuroscience, machine learning, and cognitive psychology. According to this theory, our capacity to represent the world in perception, imagination, and (some aspects of) thought is underpinned by generative models in the brain that share an abstract structure with bodily and environmental domains. I attempt to clarify what this means, relate it to advances in the contemporary cognitive sciences, and then explore some of its important implications—and limitations—in the context of questions concerning mental representation in the philosophy of mind and cognitive science.
The Philosophy of Mental Representation?
Ok, so the first question that arises here is probably: what business does a philosopher have in advancing a theory of mental representation? Questions concerning mental representations—their importance in cognitive processes, the distinctive functions they perform, the format they take in the brain, and so on—seem like ordinary scientific questions.
Nevertheless, recent decades have seen an enormous body of philosophical work addressing such questions. What gives?
I think that at least part of the answer here is this: the concept of mental representation exhibits a strange—bizarre, in fact—dual profile in the contemporary cognitive sciences.
On the one hand, the concept is foundational in cognitive science, and it has been at least since the so-called “cognitive revolution” of the late 1950s and 1960s. From this perspective, a commitment to viewing the mind as a representational organ is partly definitive of orthodox cognitive science, uniting research programmes and disciplines that otherwise disagree sharply about the basic make-up of the mind.
On the other hand, the concept has always been mired in deep controversy and confusion. There are at least three aspects to this:
- The substantial strand of research that seeks to either marginalise or eliminate mental representations from cognitive theorising (e.g. radical embodied cognition, enactivism, dynamical systems theory, physicalist reductionism, etc.);
- Deep controversies among proponents of orthodox cognitive science over the form that mental representations take in the brain;
- A set of enduring philosophical and conceptual issues concerning mental representation. What are mental representations? What distinctive functions do they perform? How can they play causal roles in cognitive mechanisms? How can the weird properties of contents and reasons that they bring with them be integrated into a scientifically responsible metaphysics?
In other words, the literature on mental representation is a mess. One aim of the substantial engagement with the topic among naturalistic philosophers in recent decades has been to help clear this mess up. Specifically, much of this research engages the following three—deeply related—questions.
Three Questions about Mental Representation
The first question is the most basic: what distinctive functions do mental representations perform? Without an answer to this question, there can be no way of adjudicating one of the most heated controversies in cognitive science: how important are mental representations in cognitive processes? Despite this, it is remarkably difficult to find clarity or consensus on this question in the scientific or philosophical literature.
The second question concerns not what typifying characteristics mental representations have in general but the specific characteristics of the mental representations inside our brains. Here there have really been two issues. First, what form do mental representations take in the brain? Although this question is straightforwardly empirical, it is a question on which philosophers have made substantial contributions, clarifying, critiquing, and cheerleading different answers. Second, how do the mental representations inside our brains acquire their specific contents? What makes it the case that some pattern of activity in my brain represents dogs rather than something else or nothing at all?
Finally, how do representational explanations work in cognitive science? How can the relational properties that mental representations bear to what they represent be causally relevant to their role in cognitive mechanisms?
Distinguishing these three questions is slightly artificial given the many obvious and important ways in which they interrelate. Nevertheless, they have generated at least partially distinguishable threads in the substantial body of philosophical work on mental representation in recent decades.
The theory of mental representation that I outline and defend in later chapters can be understood as providing answers to these questions. Of course, it doesn’t solve all the foregoing puzzles about mental representation, but it does constitute genuine progress, or so I argue.
Briefly—very briefly—the answers I provide to such questions are as follows:
- I argue for the view—endorsed by many but not all psychologists and philosophers—that mental representations should be understood as functional analogues of public representations. That is, when we posit mental representations in cognitive science, what we are doing—or at least what we should be doing—is claiming that the relevant neural structures function kind of like familiar external representations such as maps, graphs, diagrams, pictures, models, and so on. I will explain why this view is not totally anodyne in the next blog post.
- I argue that many features of mental representation should be understood in terms of the paradigm of generative models from machine learning and cognitive science, where generative models are models that share an abstract structure with the causal processes responsible for generating the data to which the brain is exposed. The theory I focus on for the most part as an illustration of this conception of mental representation is predictive processing, according to which the brain functions as an engine of prediction error minimization. I also draw on other (i.e. non-predictive coding) applications of generative model-based learning and processing, however, which I will touch on in later blog posts. I argue that these generative models function as structural representations, i.e. representations that capitalize on structural similarity to those features of the body and world that we represent in perception, imagination, and thought.
- Finally, with respect to the explanatory question, I draw on the work of a number of philosophers—Peter Godfrey-Smith, Nick Shea, Paweł Gładziejewski, Marcin Miłkowski, and others—to argue that positing generative models offers causal explanations of psychological capacities insofar as: (1) the successful exercise of these capacities is causally dependent on prediction, and (2) successful prediction is causally dependent on accurate generative models—on models that reliably recapitulate the actual causal-statistical structure of target domains in the body and world.
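For readers who like things concrete, the core idea of prediction error minimization can be sketched in a few lines of toy code. This is purely illustrative and not from the thesis: the linear "generative model", the learning rate, and the numbers are all invented for the sketch. The point is just that an agent equipped with a model mapping hidden causes to predicted data can infer those causes by iteratively reducing the mismatch between prediction and observation:

```python
# Toy sketch of prediction error minimization (illustrative only).
# The "generative model" is a simple linear mapping from an estimated
# hidden cause to predicted sensory data; the agent revises its
# estimate of the hidden cause to shrink the prediction error.

def predict(hidden_cause, weight=2.0):
    """Generative model: map an estimated hidden cause to predicted data."""
    return weight * hidden_cause

def minimise_prediction_error(observation, steps=100, lr=0.05, weight=2.0):
    estimate = 0.0  # initial guess about the hidden cause
    for _ in range(steps):
        error = observation - predict(estimate, weight)  # prediction error
        estimate += lr * weight * error  # gradient step on squared error
    return estimate

# If the world generates observation = 2.0 * 3.0, the estimate
# converges towards the true hidden cause, 3.0.
estimate = minimise_prediction_error(observation=6.0)
```

On the structural-similarity point: inference succeeds here precisely because the model's mapping mirrors the process that actually generated the data; with a mismatched model the recovered "cause" would be systematically wrong.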
That’s that, then. Before I outline the contents of forthcoming chapters, I want to briefly flag three qualifications.
A Framework
First, I intend the “theory” of mental representation that I outline and defend to be much more abstract than ordinary scientific theories. Specifically, I intend it to be a framework that subsumes a body of related research in the cognitive sciences, draws out the implications of that research for the topic of mental representation, and distinguishes that research and its implications from other accounts of the nature of mental representation in cognitive science and philosophy. As such, it inherits whatever empirical support it enjoys from the substantial scientific research that it subsumes. Relatedly, then…
Second, the theory is obviously not original to me. Most obviously, the scientific research is not mine, but I also draw on a large body of philosophical work, both on predictive model-based theories of representation—e.g. people like Rick Grush, Dan Ryder, Jakob Hohwy, Andy Clark, Paweł Gładziejewski, Alex Kiefer, and many more—and on structuralist theories of mental representation—e.g. Robert Cummins, Randy Gallistel, Paul Churchland, and many more.
My aim in the thesis is to draw together a large body of often disparate research to outline a systematic way of thinking about the nature of mental representation, which I then apply in novel areas.
ASIDE: in general, I’ve always thought that attempts at substantial originality in philosophy and science are overrated, and in any case I am no good at that kind of originality. I’d much rather integrate the important work of others who have gotten a lot right, carefully tying together that work, identifying what is important in it, and building on it, even if that means putting forward no ground-breaking new claims. As practical advice, this also makes sitting down to write an 80,000 word thesis much easier and less daunting, and—in my experience, at least—it is impossible to put 80,000 words to paper without coming out with some original claims.
The final thing is that—unlike, perhaps, some proponents of predictive processing—I do not think that the mind is only a predictive modelling engine, whatever that might mean. Adaptive success in intelligent agents like us and other animals is a many-splendored thing, and involves much more than reliance on generative models.
My claim is instead just that one important thing that our brains do is predictive or generative modelling, and that this fact is important when it comes to understanding the nature of mental representation.
Importantly, I do not even touch on every aspect of mental representation. One conspicuous area that I do not address—except for some speculations in the concluding chapter—is the role of public representational technologies such as natural language and mathematical symbols in human cognition. My aim is to clarify those aspects of mental representation that we share with other mammals (and probably some birds), and even there I don’t think that a generative model-based account of mental representation can accommodate all the relevant phenomena.
Outline of Thesis Chapters
With that in mind, here’s a brief overview of the thesis chapters. If all goes to plan, I will write brief blog posts on each of these:
Chapter 2: The Representation Wars. Here I consider controversies over the existence, importance, and nature of mental representation. I motivate the claims (1) that mental representations should be understood as functional analogues of external representations inside the brain and (2) that a theory of mental representation should be exclusively concerned with spelling out the implications of our most promising cognitive science, and not, as some philosophers assume, accommodating our commonsense intuitions about the mind.
Chapter 3: Craik’s Hypothesis on the Nature of Thought. I outline three insights from the mid-twentieth century philosopher, psychologist, and cybernetician Kenneth Craik that provide the schematic framework for the theory of mental representation that I then elaborate and defend in later chapters: first, an account of mental representation in terms of idealised models that share an abstract structure with target domains; second, an appreciation of prediction as the core function of such models; and third, a regulatory (i.e. cybernetic) understanding of brain function.
Chapter 4: Models as Idealised Structural Surrogates. I consider in more detail how to understand model-based representation. Drawing on examples of cartographic maps, scale models, and mathematical models, I argue that models in general should be understood as idealised structural surrogates for target domains, and I explain what this means.
Chapter 5: Could the Mind be a Modelling Engine? I explain how one can understand the functional profile of ‘idealised structural surrogate for target domain’ in the context of the neural mechanisms that underlie our psychological capacities.
Chapter 6: Generative Models and the Predictive Mind. I draw on research from cognitive science and machine learning that offers some support for a conception of the mind as a predictive modelling engine. I first introduce generative models, then explain how they can be understood in terms of probabilistic graphical models, and finally introduce predictive processing and its conception of the neocortex as a hierarchically structured generative model-based ‘prediction machine’.
Chapter 7: Predictive Processing and Craik’s Three Insights. I show how the research outlined in Chapter 6 both vindicates and deepens the three ideas that I extracted from Craik’s work in Chapter 3.
Chapter 8: The World is Not Its Own Best Generative Model. I argue against the ‘replacement hypothesis’, the view that embodied interactions with the environment replace the need for mental representations in cognitive processes. I argue that such “radical embodied” views neglect the flexible predictive capacities that underlie sensorimotor processing and “everyday coping” more generally.
Chapter 9: Modelling the Umwelt. I show how a generative model-based theory of mental representation can accommodate the “organism-relativity” of experience, i.e. the way in which our experience of the world implicates idiosyncratic features of us—our contingent interests, morphology, response profile, and so on.
Chapter 10: Generative Intentionality. I consider the claim that representational content is so metaphysically problematic that it should be eliminated from cognitive theorising. I contrast a generative model-based view of mental representation that I advocate with the ‘language of thought hypothesis’ and I sketch the beginnings of a theory of content determination for the predictive mind.
Chapter 11: Conclusion. I summarise the chief claims of previous chapters and outline three important future research questions for the theory of mental representation that I defend concerning: (1) how many generative models intelligent agents command; (2) the explanatory scope of a generative model-based view of mental representation; and (3) the role of public representational technologies such as natural language in a generative model-based theory of mental representation.