Chapter 5: Could the Mind be a Modelling Engine?

In the previous post I outlined some of the chief characteristics of model-based representation. I argued that models should be understood as idealised structural surrogates for target domains.

In this short post I’ll explain how this functional profile can be understood in the context of the neural mechanisms that underlie our psychological capacities. This chapter (post) is one of the most “philosophical” in the thesis, and so is likely to be of less interest to some readers.

Models and Structural Similarity

Models are representations that capitalize on structural similarity to their targets. If there are mental models, then, they must share an abstract structure with those domains in the world that we represent in perception, imagination, and thought.

I suspect that for many in philosophy, this fact is already enough to bury a model-based understanding of mental representation. A consensus arose in twentieth-century philosophy according to which one cannot explain mental representation—or representation more broadly—in terms of similarity of any kind.

There were many arguments that generated this consensus, but two stand out.

The first argument against similarity-based accounts of mental representation is that they are inconsistent with even a minimal materialism. For example, the neural states that underlie our representation of, say, the colour red are not themselves red. The neural states that underlie our capacity to perceive tables do not themselves look like tables. And so on.

The familiar response to this challenge focuses on the abstract structural character of the relevant resemblance relation. Structural similarity is in an important sense substrate neutral: all it requires is that two systems share an abstract relational organisation, i.e. that the pattern of relations among elements of the representational system mirrors the pattern of relations among those features of the world that the system represents. Crucially, the relations themselves need not be of the same kind: relations among neural states (e.g., patterns of relative activation) can mirror relations among worldly features (e.g., relative spatial positions) without resembling them in any first-order way.
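To make the substrate-neutrality point concrete, here is a minimal sketch of my own (not anything from the literature): treat each system as a set of elements plus a binary relation over them, and ask whether some one-to-one mapping carries the one relational pattern onto the other. The two relations below differ in kind (a “neural” precedence relation vs. a worldly adjacency relation), but the abstract pattern is shared.

```python
from itertools import permutations

def structurally_similar(rel_a, rel_b, elems_a, elems_b):
    """Check whether two relational systems share an abstract structure:
    is there a one-to-one mapping from elems_a to elems_b that carries
    the pattern of rel_a exactly onto the pattern of rel_b?"""
    if len(elems_a) != len(elems_b):
        return False
    for perm in permutations(elems_b):
        mapping = dict(zip(elems_a, perm))
        if {(mapping[x], mapping[y]) for (x, y) in rel_a} == set(rel_b):
            return True
    return False

# A "neural" system: activation-precedence among three cell assemblies ...
neural = {("n1", "n2"), ("n2", "n3")}
# ... and a "worldly" system: spatial adjacency among three landmarks.
world = {("park", "bridge"), ("bridge", "station")}

# The relations differ in kind (firing order vs. adjacency), but the
# abstract pattern is the same, so the check succeeds.
print(structurally_similar(neural, world,
                           ["n1", "n2", "n3"],
                           ["park", "bridge", "station"]))  # True
```

Brute-force search over mappings is obviously not how brains do anything; the point is only that the similarity in question is a matter of relational pattern, not of shared substrate.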

Of course, this merely gives one the in-principle possibility of structural similarity. It says nothing about which structures the brain builds models of or how representational systems in the brain might inherit the abstract structure of target domains. I return to the first issue below and the second issue in the next chapter (post).

The second challenge contends that similarity—even structural similarity—has the wrong properties to explain representation. For example, similarity is ubiquitous—everything is similar to everything else in some respect or other—whereas representation is not. Further, similarity is reflexive and symmetric—everything is similar to itself and if A is similar to B in respect R then B is similar to A in respect R as well—but representations do not have these characteristics: a map does not represent itself, and London does not represent maps of London.

Again, the response to this challenge is also familiar: representation is not a dyadic relation. Instead, it involves a triadic relation between the representation, its target, and the representation-user that exploits the former as a representation of the latter. In this context, appeals to structural similarity are supposed to explain how the representation-user can successfully use the model as a representation of its target.

This undercuts worries about ubiquity, reflexivity, and symmetry. For example, although a given map is similar to an infinite range of things along an infinite number of possible dimensions, and maps are similar to themselves, only one of the similarity relations is actually exploited by a map-user in using a map—namely, the similarity between the map and the spatial layout of the relevant terrain.

Once more, however, this does not take us very far. What it shows is this: if we are to make sense of the concept of mental models, we must be able to understand how the targets of such models are determined and what it means for representation-using mechanisms in the brain to exploit the similarity between mental models and target domains.

I will now consider all these issues in turn.

Which Structures?

First, then, if the mind is a modelling engine, which structures does it build models of? Craik argued that the core function of such models is prediction. Prediction is possible only in the presence of regularity. A Craikian view thus suggests that mental models target the world’s “regularity structure,” which can be understood most abstractly in terms of the world’s causal-statistical structure.

In my thesis I go into more depth about how causal and statistical relations should be understood. Roughly, I endorse so-called “interventionist” accounts of causal relevance, and I understand statistical relations in terms of the concept of mutual information, where two variables are mutually informative if observing the values of one reliably reduces uncertainty about the values of the other (generalised to cover multivariate networks).
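For readers who like this made concrete: the standard formal measure here is Shannon’s mutual information, I(X;Y) = H(X) − H(X|Y), the expected reduction in uncertainty about one variable from observing the other. A toy sketch of my own, using plug-in estimates from observed counts:

```python
from collections import Counter
from math import log2

def mutual_information(pairs):
    """Estimate I(X;Y) = sum over (x, y) of p(x,y) * log2(p(x,y) / (p(x)p(y)))
    from a list of observed (x, y) pairs."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# Perfectly correlated variables: observing one removes all
# uncertainty about the other, so I(X;Y) = H(X) = 1 bit.
print(mutual_information([(0, 0), (1, 1)] * 50))   # 1.0

# Independent variables: observation tells you nothing, so I(X;Y) = 0.
print(mutual_information([(0, 0), (0, 1), (1, 0), (1, 1)] * 25))  # 0.0
```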

It’s important to bear in mind how general the concept of causal-statistical structure is here. For example, in vision we perceive a whole range of worldly features: the shapes of objects, their colours, their textures, their positions, the lighting in the scene, and so on. Nevertheless, these features all play distinctive causal roles in generating the statistical patterns of proximal stimulation received at our sensory transducers.

One of my favourite ideas in the literature here comes from the philosopher Dan Ryder (drawing on other philosophers like Ruth Millikan), who argues that the neocortex builds models of regularities in the world organised around sources of mutual information.

To keep this post short, though, I’ll leave it there for now. In the next two chapters I’ll outline research from the cognitive sciences that I think lends support to a conception of the neocortex as a general-purpose modelling engine, one that builds causal models of the processes implicated in generating the sensory data to which it is exposed.

Mental Models and Targets

Models have targets: the domain that the model is intended to be a model of. What determines the targets of our mental models? Importantly, it cannot be structural similarity alone. A model will be structurally similar to an indefinite number of possible things. (For example, a model inside one’s brain might fortuitously share an abstract structure with some specifically arranged grains of sand, but this doesn’t mean the model represents that arrangement.) Further, if a model represented whatever it is structurally similar to, misrepresentation would be impossible: a model that failed to share structure with a given domain would simply fail to represent it at all, rather than represent it incorrectly.

Given this, what does determine the targets of our mental models? (I can imagine scientists rolling their eyes at this question. Silly philosophers with their pointless questions).

In any case, I think this question can be addressed relatively straightforwardly. Again, my answer draws heavily on the work of Dan Ryder and some others.

First, then, insofar as an organism commands a general-purpose mental modelling engine, presumably this is not an accident: such a complex neural mechanism presumably evolved for a specific reason, namely to model the world’s causal-statistical structure. Just as the function of a heart is to pump blood and not, say, to make specific rhythmic sounds, the target of a modelling engine is the world’s causal-statistical structure and not, say, particular arrangements of sand, because targeting the world’s causal-statistical structure is what it evolved to do.

(There are some complex philosophical questions hereabouts concerning how to understand the concept of function and so on, which I won’t dwell on here).

That accounts for the generic target of the mind’s modelling engine (if there is such a modelling engine). What determines the targets of particular mental models, though, or particular parts of broader models?

Here I think one can appeal to the aspect of the world causally involved in the relevant model’s production. For example, the reason that models in our visual system target the “visual world”—shapes, colours, textures, etc.—is that these are the features of the world causally involved in their construction.

This will become clearer in the next post when I draw on the concept of generative models, which can be understood as targeting the process responsible for generating the data to which they are exposed.

Exploiting Structural Similarity?

Finally, I pointed out above that the concept of structural similarity is important insofar as it provides an explanation of how a representation-user can successfully use the model as a representation of the relevant target.

For example, the structural similarity between the layout of a map and, say, the layout of London can be appealed to in explaining how someone using that map successfully makes their way from Big Ben to Buckingham Palace.

How can this be understood in the context of mental models?

Here I draw on the work of people like Peter Godfrey-Smith, Nick Shea, William Ramsey, Paweł Gładziejewski, and Marcin Miłkowski.

Roughly, here is how I think the basic story goes. To say that structural similarity is exploited by representation-using mechanisms is to say the following: the successful exercise of the psychological capacities produced by the relevant representation-using mechanisms is causally dependent on the structural similarity between the models used by those mechanisms and their targets.

As Paweł Gładziejewski and Marcin Miłkowski point out, the concept of causal dependence here can be understood counterfactually: roughly, if the model did not share an abstract structure with its target, the relevant capacities would not be exercised successfully (with the obvious qualification that other enabling conditions are held in place).
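One can illustrate this counterfactual reading with a toy sketch (mine, not Gładziejewski and Miłkowski’s): an agent plans a route over its internal “map” and then executes the plan in the world, so the plan succeeds only if the map’s relational structure mirrors the world’s. Degrade the structural similarity and the capacity fails.

```python
from collections import deque

def plan(model, start, goal):
    """Breadth-first search over the agent's internal model."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in model.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

def execute(world, path):
    """A planned step succeeds only if the world really has that edge,
    so success depends on the model mirroring the world's structure."""
    return path is not None and all(
        b in world.get(a, []) for a, b in zip(path, path[1:]))

world      = {"home": ["park"], "park": ["bridge"], "bridge": ["cafe"]}
good_model = {"home": ["park"], "park": ["bridge"], "bridge": ["cafe"]}
bad_model  = {"home": ["cafe"]}  # structure fails to match the world

print(execute(world, plan(good_model, "home", "cafe")))  # True
print(execute(world, plan(bad_model, "home", "cafe")))   # False
```

This is the sense in which the similarity is exploited: it is a difference-maker for success, not an idle resemblance.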

In the case of mental models, what capacities are dependent on them in this way? Again, if we take our lead from Craik, we will be led to the view that successful prediction is causally dependent on accurate mental models. In the next posts I will explain how this link between structural similarity and successful prediction allows failures of prediction (prediction errors) to drive the construction of models that accurately recapitulate the structure of target domains.
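As a preview, here is a deliberately minimal sketch of error-driven model building (a toy delta rule of my own, not the machinery of the later posts): the model’s predictions are nudged toward observed outcomes in proportion to the prediction error, so a regularity in the data ends up recapitulated in the model’s transition structure.

```python
def learn_transitions(sequence, states, lr=0.1):
    """Minimal error-driven learning sketch: maintain predicted
    next-state probabilities and nudge them toward each observed
    outcome in proportion to the prediction error (a delta rule)."""
    # Start with a flat model: every transition equally expected.
    model = {s: {t: 1 / len(states) for t in states} for s in states}
    for prev, nxt in zip(sequence, sequence[1:]):
        for t in states:
            target = 1.0 if t == nxt else 0.0
            error = target - model[prev][t]   # prediction error
            model[prev][t] += lr * error      # error-driven update
    return model

# A world with a simple regularity: A is always followed by B.
seq = ["A", "B", "A", "B", "A", "B"] * 20
model = learn_transitions(seq, ["A", "B"])
print(round(model["A"]["B"], 2))  # close to 1.0: regularity recovered
```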

Nevertheless, organisms are not trying to predict simply for the sake of it; rather, this predictive capacity underlies a host of other capacities. As such, one can imagine a cascade of levels of dependence: the successful exercise of our psychological capacities is dependent on prediction, and successful prediction is dependent on accurate mental models.

Exactly how to tell such a story will depend on the relevant scientific details. My point here is just that such a story can be told.

Summary

To summarise, then:

  • There is nothing incoherent in the idea that the mind/brain builds and exploits idealised structural surrogates for target domains—i.e. models.
  • Insofar as these models underlie prediction, it is plausible that they target the causal and statistical structure of the body and the world.
  • To say that representation-using mechanisms in the brain exploit the structural similarity between mental models and target domains is to say the following: the successful exercise of psychological capacities directed at target domains is causally dependent on the structural similarity between mental models and those targets.

Of course, so far this has all been really theoretical. That is, what I have shown so far is that the idea that the mind functions as a predictive modelling engine is coherent. I haven’t yet given any reason for endorsing this idea, however.

I turn to that issue in the next few chapters (posts).
