In this post I’m going to address another challenge to a representationalist understanding of perception. This one is less precise than the one I addressed in the previous post, but it has nevertheless been highly influential among a motley crew of anti-representationalists: pragmatists, ecological psychologists, enactivists, and so on.
Roughly, the challenge is this: the concept of internal representation implies an implausibly reconstructive theory of perception that is inconsistent with the radical organism-relativity of perceptual experience.
What does this mean?
In trying to understand this argument, it is useful to break it down into two steps: first, on the pragmatic nature of perception and intelligence more broadly; second, on the “organism-relativity” of experience that this pragmatic understanding of the mind is supposed to imply.
The Pragmatic Mind
The American pragmatists—C.S. Peirce, William James, John Dewey—all felt that traditional philosophy and psychology had been insufficiently sensitive to the pragmatic nature of intelligence—to the fact that we have minds to get things done. Dewey, for example, famously criticised what he called the “spectator theory of knowledge,” according to which the mind is viewed as something that floats free of our practical engagements with the world. (He argued that Western philosophy had treated us as mere spectators on the world rather than actors within it).
Both William James and John Dewey drew on evolution to bolster this pragmatic perspective. The brain is an organ that evolved because of its adaptive value—because it enables certain organisms to do things that they couldn’t otherwise do.
In more recent work among proponents of embodied cognition—a research programme strongly influenced by American pragmatism—one also hears reference to the idea that the brain should be understood as a controller for coordinating the organism’s actions with those aspects of the environment relevant to its practical goals.
So what, you might ask? Well, proponents of this pragmatic perspective on intelligence often draw on it to challenge a historically influential philosophical understanding of the relationship between the mind and the world.
According to this understanding, there is a reality that exists independently of minds. We—agents with minds—then form mental representations of this independent reality, which we use to reason about it. When they are accurate, these mental representations in some sense mirror what is out there.
It is this idea—the idea of the mind as a “mirror of nature,” to use the philosopher Richard Rorty’s famous phrase—that proponents of the “organism-relativity” of experience deny. Specifically, they argue that our perception of the world is not a passive reflection of some independent reality. Rather, it inextricably involves contingent features of us: our morphology, our practical interests, our idiosyncratic response profile, and so on.
You might—with some justice—wonder what on earth this means. Here are some examples.
First, one often reads about the concept of the “Umwelt” in the embodied cognition literature, a concept put forward by von Uexküll. The concept is used to describe the world as it is experienced by a particular kind of creature. The idea is that different organisms do not experience the same world but rather different worlds inextricably conditioned by features of themselves:
“Even though a number of creatures may occupy the same environment, they all have a different umwelt because their respective nervous systems are designed by evolution to seek out and respond only to those aspects of the environment that are relevant.”
Second, J.J. Gibson famously proposed that organisms perceive the world in terms of affordances—in terms of the opportunities for action that the environment offers the organism. Given that different organisms have different behaviours and interests, the idea—once more—is that the world we experience is fundamentally determined by us, and thus that agents with different behaviours and interests perceive different worlds.
Finally, and relatedly, the concept of organism-relativity is central to the research programme of enactivism in cognitive science. For example, enactivists say things like the following:
“Instead of internally representing an external world in some Cartesian sense… [organisms] enact an environment inseparable from their own structure and actions”; they “constitute (disclose) a world that bears the stamp of their own structure.”
Ok, so this all sounds Very Cool and everything, but what exactly is the challenge to representationalism supposed to consist in here? Why can’t mental representations reflect idiosyncrasies of the representer?
In the current context, though, one can make the challenge more precise. Recall that at the core of the theory of mental representation that I have defended is the concept of structural similarity: the idea that there are models in our brains that share an abstract structure with target domains in the body and world. This theory thus seems to imply that we build models that mirror—that copy, reflect, recapitulate—the structure of the objective world around us.
One way of understanding the appeal to organism-relativity is in terms of its rejection of this idea. According to proponents of pragmatism, enactivism, and so on, once we recognise the many profound ways in which idiosyncratic features of us are implicated in the construction of our experienced world, we will be forced to give up on the idea that the proper way to understand the mind-world relationship is in terms of representation—in terms of mental states re-presenting an independently identifiable external reality.
Are they right? No. In fact, I think that this whole literature is just a big mess, so don’t worry too much if you haven’t followed the plot up until now.
Here is my argument:
- One can accommodate some aspects of the “organism-relativity” of experience by just focusing on standard features of models: namely, the fact that they are selective and idealised.
- One can also accommodate our perception of things like response-dependent properties (see below) and affordances within a generative model-based theory of mental representation.
In addition to showing that the organism-relativity of experience can be accommodated within a generative model-based theory of mental representation, however, I will also argue that it is important to push back against some of the more radical claims of organism-relativity. For the most part, our mental models provide largely accurate representations of objective regularities in the world.
Selective, Idealised Models

Here is an obvious fact: our brains do not contain a mental representation of the position of every subatomic particle in the universe.
In this sense, then, of course our mental models are selective. There is an infinite number of features of the universe that we do not represent.
This raises the question: what is the principle by which our brains determine what is worth representing?
If we follow the broadly cybernetic understanding of brain function that I outlined in the 7th blog post, an attractive answer is this: the brain models only those features of the world relevant to its capacity to regulate the internal conditions of the organism effectively. In other words, we model the world from the perspective of our contingent physiological needs. (Remember that regulation here means maintaining essential variables—body temperature, blood sugar, heart rate, fluid balance, the concentration of various chemicals, etc.—within proper bounds).
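The logic of regulation here can be sketched in a few lines. This is a deliberately minimal toy, not a model of any actual physiological mechanism; the set-point, band, and response magnitudes are all arbitrary numbers chosen for illustration:

```python
# Toy cybernetic regulator: act only when an essential variable
# (here, "body temperature") drifts outside its viable bounds.
# All numbers are illustrative assumptions.

TEMP_SET, TEMP_BAND = 37.0, 0.5  # set-point and tolerated deviation

def regulate(temp, ambient_effect):
    """One regulation step: respond only when the variable leaves its bounds."""
    temp += ambient_effect           # the world perturbs the variable
    if temp > TEMP_SET + TEMP_BAND:
        temp -= 0.4                  # cooling response (e.g. sweating)
    elif temp < TEMP_SET - TEMP_BAND:
        temp += 0.4                  # warming response (e.g. shivering)
    return temp

temp = 37.0
for effect in [0.3, 0.4, 0.4, -0.2, -0.9, -0.9]:
    temp = regulate(temp, effect)    # the variable stays near the viable band
```

The point of the sketch is what it leaves out: the regulator only needs to track those perturbations that bear on the essential variable. Everything else in the environment is, from its point of view, noise.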
Lisa Feldman Barrett introduces the helpful concept of the “affective niche” to capture those features of the world relevant to homeostatic regulation: “anything outside of your affective niche,” she writes, “is just noise: your brain issues no predictions about it, and you do not notice it.”
It often seems that the only thing theorists mean by pointing to organism-relativity is this focus on selectivity. Selectivity, however, is obviously not inconsistent with the idea that we build models of the world’s causal-statistical structure.
Not only are our models selective, but they are no doubt idealised—that is, not perfectly accurate—in many respects. In this sense they are not literal mirrors of their targets.
How could it be otherwise? When building models, there is typically a complex trade-off between accuracy and various pragmatic considerations such as computational tractability, simplicity of use, and so on. Our generative models, then, will no doubt be highly simplified working models that trade accuracy off against energetic efficiency, speed, and so on.
One nice example of this is the idea that our capacity to interact with the physics of our environments is underpinned by a physics engine similar to the programmes that underlie the design of interactive video games—an idea I mentioned in previous posts. Such engines are in one sense pretty accurate: they literally reconstruct physical properties of the scene—the shapes, masses, etc. of objects, and the forces exerted on them—and run simulations via approximations to Newtonian mechanics. Nevertheless, they also rely on all kinds of hacks and simplifications to support quick and flexible predictive simulations that run “faster than reality.” For example, many objects are simply excluded from the relevant simulations, and objects at rest and in motion are treated differently.
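To make the flavour of these hacks concrete, here is a toy sketch—in no way the code of any real engine—of two of them: bodies at rest are put to “sleep” and skipped entirely, and the dynamics are approximated by coarse Euler integration rather than solved exactly:

```python
# Toy physics step illustrating engine-style accuracy/speed hacks.
# One degree of freedom (height); all constants are illustrative.

GRAVITY = -9.8
DT = 0.05  # coarse timestep: fast but inaccurate

class Body:
    """A toy rigid body with height y and vertical velocity vy."""
    def __init__(self, y, vy):
        self.y, self.vy = y, vy
        self.asleep = False  # sleeping bodies are excluded from simulation

def step(bodies, dt=DT):
    for b in bodies:
        if b.asleep:
            continue               # hack 1: don't simulate objects at rest
        b.vy += GRAVITY * dt       # hack 2: coarse Euler integration,
        b.y += b.vy * dt           # an approximation to the true dynamics
        if b.y <= 0.0:             # crude ground "collision"
            b.y, b.vy = 0.0, 0.0
            b.asleep = True        # bodies that come to rest go to sleep

bodies = [Body(y=10.0, vy=0.0)]
for _ in range(200):
    step(bodies)                   # the falling body lands and falls asleep
```

The simulation is wrong in detail (Euler integration drifts, the collision is instantaneous and perfectly inelastic), but it is cheap, fast, and good enough for prediction—which is exactly the trade-off at issue.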
There are some theorists in the literature who argue that accuracy is simply irrelevant to our mental models. I think that these arguments typically rest on various simple confusions. For example:
- They set up a false contrast between pragmatic success and veridical representation. (Often the former requires the latter!)
- They confuse how exhaustive a representation is with how accurate it is.
Response-Dependent Properties

Here are some properties that saturate our experienced world: disgustingness, ugliness, beauty, loveliness, deliciousness, sexiness, funniness. Although these are features of the world that we seem to experience and respond to, they do not seem in any sense to be features of the “objective” world. Nothing is lovely or funny independently of the fact that certain organisms find it lovely or funny.
As many have argued, these properties of the world are therefore in some sense response dependent: their existence is dependent on the kinds of contingent responses we exhibit to the world.
They therefore seem to pose a problem for a theory of mental representation based on models that share an abstract structure with regularities in the world. In this case, the relevant phenomena are not features of the objective world at all, so appeals to things like selectivity and idealisation will not help.
As Dennett has pointed out, though, this is too quick. Here is why: insofar as brains are prediction machines, we are obvious targets of such prediction machinery. As such, in addition to identifying those features of the world that generate the patterns of proximal sensory input that we receive, our brains must also identify those features of the world that reliably elicit our contingent responses, where “responses” here should be understood liberally to include not just our outward behaviour but also various internal physiological responses. In this context, something like sexiness is just a latent variable in a generative model that includes those features of the world capable of eliciting certain contingent physiological responses and behaviours from us.
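This point can be given a toy computational gloss: treat the response-dependent property as a latent variable whose only job is to predict the organism’s own contingent responses. Here is a minimal sketch using “disgustingness” and Bayes’ rule; the prior and likelihoods are entirely made-up numbers, and nothing here is meant as a serious model of affective inference:

```python
# A response-dependent property as a latent variable in a (very) toy
# generative model: it predicts the organism's own responses, not some
# observer-independent feature of the world. All numbers are assumptions.

P_LATENT = 0.1  # prior probability that the thing is "disgusting"

# P(response | latent): how strongly the latent variable predicts each
# contingent response, physiological or behavioural.
LIKELIHOOD = {
    "nausea":    {True: 0.8, False: 0.05},
    "avoidance": {True: 0.7, False: 0.20},
}

def posterior(responses):
    """Bayes' rule over the latent variable given observed responses."""
    p_true, p_false = P_LATENT, 1.0 - P_LATENT
    for name, observed in responses.items():
        lt, lf = LIKELIHOOD[name][True], LIKELIHOOD[name][False]
        p_true *= lt if observed else (1.0 - lt)
        p_false *= lf if observed else (1.0 - lf)
    return p_true / (p_true + p_false)

print(posterior({"nausea": True, "avoidance": True}))    # high
print(posterior({"nausea": False, "avoidance": False}))  # low
```

The latent variable earns its keep purely by compressing and predicting the organism’s response profile—which is just what a response-dependent property, on this view, is.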
Affordances

Another vexed topic in this area is the concept of affordances. Sometimes it’s clear that by the term “affordance” people just mean what I have already talked about: our experience of the world is selective, idealised, and as much concerned with those features of the world responsible for eliciting our behaviours as with those generating our sensory signals.
Others intend by the term something potentially more radical, however.
A classic example is research on the so-called size-weight illusion: when people are given two objects of the same weight but different sizes, they will reliably judge the smaller object to be heavier. A standard explanation in the literature is that the illusion results from a prediction error-based calculation: the expectation that the smaller object will be lighter is combined with the actual sensory experience of lifting it.
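The prediction-error account can be caricatured in a few lines. The linear size-weight prior, the gain, and the idea that the error exaggerates (rather than corrects) felt heaviness are all illustrative assumptions, not a published model:

```python
# Caricature of the prediction error-based account of the size-weight
# illusion. Functional forms and constants are illustrative assumptions.

def expected_weight(size):
    """Prior expectation: bigger objects are expected to weigh more."""
    return 2.0 * size  # arbitrary linear prior

def perceived_heaviness(true_weight, size, gain=0.3):
    """Felt heaviness exaggerated in the direction of the prediction error."""
    error = true_weight - expected_weight(size)  # surprise on lifting
    return true_weight + gain * error

# Two objects of IDENTICAL weight but different sizes:
small = perceived_heaviness(true_weight=1.0, size=0.2)
large = perceived_heaviness(true_weight=1.0, size=0.8)
print(small, large)  # the smaller object feels heavier
```

The small object is expected to be light, so lifting it produces a positive prediction error and it feels heavier than it is; the large object produces the opposite error and feels lighter.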
According to some proponents of a broadly ecological perspective, this explanation is misguided. Instead, they contend that people are actually tracking—and accurately tracking—something different from weight altogether when they make those judgments: namely, throwability. Throwability is not a function of weight alone but of a whole network of variables that include both characteristics of the object and—crucially—characteristics of the thrower: e.g. hand size, strength, and so on.
The example serves to illustrate a broader view: namely, that often people do not perceive observer-independent properties such as weight or mass but rather features of the world inextricably determined by the capabilities of the perceiver—that is, affordances.
Here is what I think about this.
First, I don’t think that we only perceive affordances in this sense. Generative model-based processing is based on identifying those objective features of a domain responsible for generating proximal sensory inputs. The advantage of such “reconstructive” perception (i.e. reconstructing the structure of the objective world) is that it installs a body of information that is highly purpose neutral. Just look around you and consider the open-ended number of predictions you could make about your environment—about what would happen under an open-ended number of possible interventions on the environment. In other words, for organisms like us that must coordinate extremely intricate behaviours with a world at multiple spatiotemporal scales, it pays to represent it largely accurately and not in terms of myopic perception-action relations.
Second, insofar as we perceive affordances, there is good reason to think that it involves predictive modelling. Although affordances are not features of the “mind-independent” world, they are objective features of the world-organism relationship, and the regularities they are implicated in can be captured in models. Further, because of their role in action, they are obvious targets of prediction machinery: for example, simulating the outcomes of various kinds of motor behaviour, overcoming signalling delays, and so on. As such, I think that we build predictive affordance models: models that capitalize on structural similarity to regularities implicating our abilities and features of the world in a form that can be used to generate highly adaptive predictions.
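As a toy illustration of what such a predictive affordance model might look like: a forward model relating object properties and thrower properties, which can be queried in simulation instead of by actually throwing. The functional form and the names (`throw_distance`, `throwable_to`) are entirely hypothetical:

```python
# Toy affordance model: "throwability" is a relation between object
# properties AND thrower properties, usable for predictive simulation.
# The functional form below is an arbitrary illustrative assumption.

def throw_distance(mass, strength):
    """Hypothetical forward model: achievable distance for this pairing."""
    return strength / (1.0 + mass)

def throwable_to(mass, strength, target_distance):
    """Predict (simulate) the outcome rather than trying the throw."""
    return throw_distance(mass, strength) >= target_distance

# The same object affords different things to different throwers:
print(throwable_to(mass=0.5, strength=15.0, target_distance=8.0))  # True
print(throwable_to(mass=0.5, strength=9.0,  target_distance=8.0))  # False
```

Note that the model is perfectly objective—it captures real regularities—even though what it captures is a world-organism relationship rather than a mind-independent property like mass.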
To summarise, then, a generative model-based theory of mental representation can accommodate the fact that we experience the world from the perspective of an embodied organism with a specific morphology and idiosyncratic interests and responses.