Chapter 4. Models as Idealised Structural Surrogates

In the previous chapter I outlined Kenneth Craik’s suggestion that we should understand mental representation in terms of models in the brain, where, for Craik, a model is a kind of representation that capitalizes on relational similarity to its target.

In this short post, I’m going to briefly elaborate on how I think the concept of a model should be understood. In my view, models should be understood as idealised structural surrogates for target domains. I think that this characterisation captures everything from simple cartographic maps to the double helix model of DNA to the complex mathematical and computational models used in the sciences and engineering. Crucially, I think that it also characterises the core functional profile of the models inside our brains.

My characterisation of models draws heavily on the work of philosophers of science such as Ronald Giere, Peter Godfrey-Smith, and Michael Weisberg.

Drawing on work from the philosophy of science might seem strange in an investigation into the nature of mental representation. The kinds of models used in science and engineering are public models—either scale models in the external world or abstract structures typically specified by people using mathematical symbols (see below). These models require intelligence, interpretation, and often complex social interactions. It is obviously no good importing those characteristics into the brain. The point of positing mental representations is to explain intelligence.

Nevertheless, as I argued in Chapter 2, I think that when we posit mental representations, what we are doing is precisely saying that there are things inside the brain that function kind of like the external representations that we are familiar with from everyday life, science, and so on. (“Cognitive maps,” a “language of thought,” “mental models”).

Of course, not all of the characteristics of these external representations are—or could be—carried over into our understanding of how the brain works. On my view, though, to say that there are mental representations is precisely to say that there are things inside the brain that exhibit the same core functional profile as external representations.

If there are mental models, then, I think that—insofar as they are genuinely models—they must exhibit the same core functional profile as external models.

What is this core functional profile?

Models as Idealised Structural Surrogates

There are five characteristics of models that I think are especially important. Before I get to these, though, it’s worth stepping back and considering what the general function of a model is.

A model is in general a surrogate for—that is, a stand-in or proxy for—something else. This raises the question: why bother with models? Why interact with a model when one can just interact with the thing it is a model of “directly”? The answer is that often we want to coordinate specific kinds of actions with respect to a part of the world, but these actions require access to information that the world itself cannot easily provide us with. Under those conditions, the model re-presents those characteristics of the relevant part of the world in a form that is more easily accessed and manipulated.

Imagine you want to get from A to B in a new city, for example. This requires information about the city’s spatial layout. If, per impossibile, you could jump out of your skin and survey the city’s layout “directly,” there would be no need for a map. What a map does for you is re-present those aspects of the city’s layout in a form that you can actually access.

Although this example focuses on simple spatial features of a domain, the point generalises beyond this. By re-presenting the structure of a domain—whether spatial, covariational, causal, etc.—in an accessible form, models allow model-users to coordinate their actions with features of a domain that they do not have ready access to. If there were a God, it would thus have no use for models. For limited creatures like us, though, the capacity to build simplified, accessible and easily manipulable surrogates for features of the world can be extremely useful.
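To make the surrogate idea a little more concrete, here is a minimal sketch, with an invented street layout and landmark names, of how a map-like structure can stand in for the terrain itself: a route-finding procedure consults the model rather than the streets.

```python
from collections import deque

# A toy "map": a city's street layout re-presented as a graph.
# The landmarks and connections are invented for illustration.
city_map = {
    "hotel":   ["square", "station"],
    "square":  ["hotel", "museum", "station"],
    "station": ["hotel", "square", "harbour"],
    "museum":  ["square", "harbour"],
    "harbour": ["station", "museum"],
}

def find_route(graph, start, goal):
    """Breadth-first search over the model, which stands in for
    actually walking the streets to find a way from A to B."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbour in graph[node]:
            if neighbour not in visited:
                visited.add(neighbour)
                frontier.append(path + [neighbour])
    return None

print(find_route(city_map, "hotel", "harbour"))
# ['hotel', 'station', 'harbour']
```

The graph is useful precisely because it re-presents only the connectivity of the streets, and nothing else, in a form the route-finding procedure can easily manipulate.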

This will become clearer in future posts when I turn to mental models. For now, here are five characteristics of external models that will be important in what follows.

1. Model and Target

First, and most obviously, one must be able to distinguish between a model and what it is a model of. What a model is a model of is what I call its “target.”

In the case of ordinary external models, there is really no issue in specifying what their targets are: we decide. Why is a double helix (scale) model of DNA a model of DNA molecules? Because we said so. In building that model, the intention is to build a model of that particular target.

As I will return to in the next post, this is obviously not how the targets of our mental models can be determined (if there are mental models). Therefore there must be some other account of what determines the targets of our mental models.

2. Structural Similarity

Second, how do models work? That is, what enables us to use a model as a surrogate or stand-in for a target?

The answer that I prefer—although I don’t have the space to properly defend it here—is that models in general are in an important sense like what they are supposed to represent (when they are accurate). That is, they are supposed to be similar in some respects to their targets. Specifically, they are supposed to be structurally similar to their targets.

This is obvious if you focus on simple cartographic maps, in which the spatial layout of the map is supposed to mirror the spatial layout of its target terrain.

It is also obvious in the case of three-dimensional scale models, in which the model is supposed to preserve the relative spatial relations of its target. (Think of a toy model of a car).

It is also obvious in working scale models such as orreries (model solar systems), which not only mimic the relative sizes and positions of the planets but also use a system of gears to mimic their motions as they orbit the Sun.

It is perhaps less obvious when it comes to mathematical models. Consider, for example, simple dynamic models of swinging pendulums specified by differential equations. Here the mathematical symbols used to express the equations obviously do not literally resemble a swinging pendulum. But the equations specify (describe) an abstract structure, and it is this structure whose similarity to the relevant characteristics of the target can then be evaluated.
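For illustration, here is the standard idealised pendulum model from textbook mechanics (assuming a rigid massless rod, no friction, small oscillations, and release from rest at angle θ₀): the symbols do not resemble a pendulum, but the trajectory they specify is what gets compared with the pendulum’s actual motion.

```latex
\[
  \ddot{\theta} + \frac{g}{\ell}\,\theta = 0
  \qquad\Longrightarrow\qquad
  \theta(t) = \theta_{0}\cos\!\Big(\sqrt{\tfrac{g}{\ell}}\; t\Big)
\]
```

Here θ is the angular displacement, ℓ the length of the pendulum, and g the gravitational acceleration; the small-angle idealisation sin θ ≈ θ is exactly the kind of simplification I discuss under idealisation below.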

(If you’re interested in following this view up, I would recommend the work of Ronald Giere.)

A few things before I continue:

  1. Sometimes philosophers say that maps, or models more generally, capitalize on isomorphism or homomorphism. But that is almost never—perhaps just never—the case. Models do not literally mirror their targets, as I will point out in the next section. Appeal to “similarity” captures the fact that models are rarely (perhaps never) perfectly accurate, whereas mathematical notions like isomorphism would require perfect accuracy (see the sketch just after this list).
  2. Philosophers are often sceptical of appeals to similarity. They say things like: “isn’t everything similar to everything else in some respect or other?” True, but only one of the similarity relations is actually used by the relevant representation-user, and it is that one which is relevant. I will say a bit more about this in the next post.
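For readers who want the contrast made explicit, here is a rough sketch using the standard model-theoretic notions (nothing here is specific to the philosophers cited above). A map f : M → T from model to target is a homomorphism when it preserves the modelled relations in one direction:

```latex
\[
  R^{M}(a_{1},\dots,a_{n}) \;\Rightarrow\; R^{T}\!\big(f(a_{1}),\dots,f(a_{n})\big)
  \quad \text{for every modelled relation } R,
\]
```

and an isomorphism when f is, in addition, a bijection and the implication holds in both directions. Either notion demands an exact, exceptionless match. Similarity, by contrast, requires only that enough of the model’s relations approximately track relations in the target, which is why it can tolerate the selectivity and inaccuracy discussed in the next section.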

3. Idealisation

Models are structural surrogates for target domains. Nevertheless, models are typically—perhaps always—idealised. There are at least two dimensions to this.

First, models are highly selective. Just as a map will not literally recapitulate every aspect of a terrain, models in general replicate only those features of the target relevant to the representation-user’s interests (see below).

Second, models are rarely (if ever) perfectly accurate with respect to those features that they do represent. In models, accuracy is not a binary variable (e.g. true/false) but a continuous one: models can be more or less accurate. There are different ways in which models—even good models—can depart from literal isomorphism. For example, often they radically simplify the characteristics of a target to make the model more convenient to use.

4. Two Species of Accuracy

I just said that accuracy in models is a graded notion. There are, though, at least two distinct species of accuracy in models.

The first evaluates to what extent the model as a whole is similar to the target.

The second arises in using the model to index the state of the target. For example, just as we can ask to what extent a supply/demand model in economics captures the covariational relations among price, supply, and demand for a given product, we can also use the model to index the current state of the market (price=x, supply=y, demand=z).

Importantly, this second form of evaluation is mediated through and dependent on the first: what gets indexed is determined by its “place” within the larger model structure, and it is only meaningful to talk of this second form of representation once one has at least a broadly accurate model in the first place. (Trying to locate oneself in Paris with a map of London falls under the category of “not even wrong”).
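To make the two species of accuracy a bit more concrete, here is a minimal sketch with an invented linear supply/demand model (the coefficients and numbers are purely illustrative): the first species of accuracy concerns whether the curves themselves track how supply and demand actually covary with price; the second concerns reading the market’s current state off those curves.

```python
# A toy linear supply/demand model; the coefficients are invented
# for illustration and do not describe any real market.
def quantity_demanded(price):
    return 100 - 2.0 * price   # demand falls as price rises

def quantity_supplied(price):
    return 10 + 1.0 * price    # supply rises with price

# First species of accuracy: are these curves (the model as a whole)
# structurally similar to the market's actual covariational relations?

# Second species of accuracy: indexing the current state of the market
# within the model, e.g. locating today's price on the curves.
current_price = 25.0
state = {
    "price": current_price,
    "demand": quantity_demanded(current_price),
    "supply": quantity_supplied(current_price),
}
print(state)  # {'price': 25.0, 'demand': 50.0, 'supply': 35.0}
```

If the curves themselves are badly wrong, the indexed state is not so much inaccurate as meaningless, which is the Paris-with-a-map-of-London point above.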

5. Interest-relativity

Models are “interest-relative” insofar as interests—the interests of the relevant model-users—determine which features of the target domain the model should represent and how much accuracy is desirable.

For example, suppose you’ve had a few pints in a new city and want to make your way back to the hotel. Your friend draws the route on a napkin. Under those conditions, a maximally accurate map of the relevant terrain that exhaustively captures its layout would not be useful. For this reason, the goodness of a model can diverge from its accuracy. Sometimes a model can be too accurate, and sometimes a highly inaccurate model (a drunken napkin map that captures extremely coarse-grained spatial relationships) can be extremely useful, given the model-user’s goals.

Conclusion

To summarise, I think that models should be understood as idealised structural surrogates for target domains.

If one wants to make sense of the idea of mental models, then, one needs to be able to make sense of at least three things:

  1. What determines the targets of mental models?
  2. How can mental models share an abstract structure with their target domains?
  3. What does it mean to say that a representation-user uses a model as a surrogate for a domain when it comes to mental models inside the brain?

I will address these questions in the next post.

 
