We are born into this world profoundly alone, our strange, unbounded minds trapped in our ordinary, earthwormy bodies—the condition that led Nietzsche to refer to us, wonderingly, as “hybrids of plants and of ghosts.” We spend our lives trying to overcome this fundamental separation, but we can never entirely surmount it. Try as we might, we can’t gain direct access to other people’s inner worlds—to their thoughts and feelings, their private histories, their secret desires, their deepest beliefs. Nor can we grant them direct access to our own. [Schulz, 2010, p. 252]
The feeling of separateness from other people, so eloquently expressed by Kathryn Schulz in her book, Being Wrong, is rooted in a theory which almost all of us learn at our mother’s knee between the ages of three and six—the theory of other minds.
Human beings are self-conscious creatures. Your brain supports a model of the world, part of which is a model of yourself. When you were around four, your self-model became sophisticated enough to support a secondary, higher-level model—a model of your model of the world, and within that, a model of your model of yourself. You began to draw the subjective-objective distinction. Or to put it more simply, as Kathryn Schulz has, you realized you could be wrong about things. Reality was not always the way you perceived and believed it to be.
Your model of the world also contains models of other people. The breakthrough development that psychologists call “the theory of other minds” came about when you endowed your models of other people with secondary models—their models of the world and the people in it (including yourself).
In a famous 1983 study of Austrian kindergarten pupils entitled “Beliefs About Beliefs,” Heinz Wimmer and Josef Perner showed striking differences between four-year-old and six-year-old children in their understanding of the cognitive states of others.
The children were shown a scenario in which the protagonist, Maxi, put a piece of chocolate into a blue cupboard, then went to play. While he was gone, his mother took the chocolate out of the blue cupboard, used some of it in cooking, then put it away in a different, green cupboard. Then Maxi returned, and the children were asked, “Where will he look for the chocolate?” An overwhelming majority of the children aged six or more picked the blue cupboard, but two-thirds of the four-year-olds pointed to the green cupboard.
The four-year-olds’ failure was not a lapse of memory, as was established by their correct responses to the follow-up question, “Where did Maxi put the chocolate in the first place?” Variants of the experiment also ruled out other hypothetical explanations, such as that they answered impulsively, without thinking. The explanation left standing was that the four-year-olds were unable, but the six-year-olds were able, to distinguish what Maxi knew from what they themselves knew.
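The nested structure these experiments probe can be sketched in code. The following toy model (Python, purely illustrative; the variable names are mine, not the researchers’) represents the objective facts, my model of the facts, and my model of Maxi’s model as separate structures:

```python
# The objective facts (after Maxi's mother moved the chocolate):
world = {"chocolate_location": "green cupboard"}

# My first-order model of the world (here, accurate):
my_model = {"chocolate_location": "green cupboard"}

# My second-order model: what I represent *Maxi* as believing.
# It differs from the facts, because Maxi did not see the move.
my_model_of_maxi = {"chocolate_location": "blue cupboard"}

# Passing the false-belief test means answering from Maxi's model, not my own:
where_maxi_will_look = my_model_of_maxi["chocolate_location"]
print(where_maxi_will_look)  # blue cupboard
```

On this sketch, a four-year-old answers from `my_model` because the second-order structure is not yet in place.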
The model of other minds—the subjective-objective distinction—is a useful one, with immense explanatory power. No one has yet come up with an alternative theory that would allow us to dispense with it. An understanding of ignorance lets us be effective teachers, patiently filling in the gaps in our pupils’ knowledge instead of blithely assuming that they’re on the same page to start with. It also has more sinister applications, enabling us to manipulate one another by means of outright deception, or half-truths that emphasize one side of a story while omitting inconvenient facts. The model is well grounded in reality. None of us are acquainted with more than a tiny fraction of the truth. We are all like the blind men feeling different parts of the elephant which is the objective world. We are much better off acknowledging this fact than denying it.
That said, it’s important to remember the theory’s roots. The subjective-objective distinction is good at explaining ignorance and error. It does not help to explain our cognitive successes. Instead, it tends to make a mystery of them. It can even induce the feeling that each of us inhabits his or her own private subjective world, profoundly and necessarily isolated from others. In this post, I will challenge the assumption that this feeling of aloneness is grounded in reality. I suspect it does not represent the truth, but an imperfection in our model of minds.
The idea that subjective experience is necessarily private, experienced by the subject and only by the subject, is entrenched in Western thought, a philosophical idea that enjoys wide, uncritical acceptance outside the discipline of philosophy. Antonio Damasio’s Self Comes to Mind is a modern attempt to explain the subjective self from a neuroscientific perspective. Like many other writers on this subject, Damasio assumes, without evidence or argument, that minds are private. The book’s first significant reference to privacy comes in the context of a discussion of visual representations in the brains of monkeys:
…it is possible to uncover, in a monkey’s visual cortex, a strong correlation between the structure of a visual stimulus (e.g., a circle or a cross) and the pattern of activity it evokes. This was first shown by Roger Tootell in brain tissue obtained from monkeys. However, in no circumstances can we “observe” the monkey’s visual experience—the images the monkey itself sees. Images—visual, auditory, or of whatever other variety one may wish—are available directly but only to the owner of the mind in which they occur. They are private and unobservable by a third party. All the third party can do is guess. [Damasio, 2010, pp. 69-70]
There are several reasons to drag the assumption of mental privacy into the light, one of which is that it leads rather directly to a central ‘mystery of consciousness’: the idea that subjective experience is somehow ineffable, incommunicable. I believe we should try to dispel mysteries wherever we can, as they impede understanding. It is unenlightening to be told that my experience of the colour green is always and inevitably unavailable to you. It makes one wonder what kind of thing consciousness is, how it could ever have evolved, when it evolved—at what level in the phylogenetic tree—and so on. This leads to entertaining the sorts of questions that have consumed so much cerebral effort, to so little avail, over the past two millennia of philosophy of mind.
We can start to dispel the mystery of consciousness by cataloguing the features of the actual models of reality our brains support, and considering their biological function—the evolutionary advantages they conferred on the human organism. I want to emphasize that the usefulness of the subjective-objective distinction is rooted in its ability to explain ignorance and error. It helps us both to overcome these shortcomings by effective teaching, and to manipulate others by deliberate deception, two abilities which bolster our competitive advantage as a species. The idea that subjective experience is always and inevitably private, on the other hand, explains nothing, and is useless for survival. The behaviour it produces, fruitless head-scratching and argument, leads me to suspect it to be a useless byproduct, or vestigial appendage, of the otherwise valuable theory of other minds.
The Solipsistic Temptation
A second reason to question the idea of privacy is that it induces the temptation of solipsism. If our minds are private, each of us is directly acquainted only with his subjective mental representations. The objective world is inferred, at best. This famously led some of our most illustrious thinkers into philosophical skepticism about the external world. Descartes and Berkeley each entertained the notion that, for all they knew, nothing existed outside the mind. Contemporary philosophers enjoy trotting out the supposition that they are brains floating in vats, with a completely inaccurate experience of the world contrived by a team of neuroscientists expertly managing inputs and outputs at the periphery of their nervous systems. Hollywood built the hugely successful Matrix series on this conceit.
The solipsistic temptation is built into the structure of our model of reality, according to which the subject is directly exposed only to the model, never to the objective world itself. We should resist concluding that the possibility entertained by solipsism is a real one, or that experience is truly private, before making a determined search for a flaw in the model, or a better model to replace it with.
The discovery of the subjective-objective distinction coincides with the discovery of error; but error implies the possibility of truth. It is important to distinguish between accurate and inaccurate representations of reality. To succumb to the solipsistic temptation, supposing that all our mental representations may be inaccurate—that either reality is completely different from what we think it is, or perhaps there is no objective reality at all—is to fall victim to a “greedy reductionism” (using Dennett’s term) that blurs or denies real differences.
The idea that experience is private makes solipsism plausible, and strengthens the temptation to take that reductionist leap. But a better model of subjective minds and objective reality would not blur the very distinction on which it was founded.
A third reason to look for improvements to the subjective-objective model is that it makes people feel more separate from one another than they are. The passage from Kathryn Schulz with which I opened this post feelingly describes this phenomenon, as does this one:
There is a story (which is so lovely that I hope it’s true, although I haven’t been able to verify it) that someone once asked the South African writer J.M. Coetzee to name his favourite novel. Coetzee replied that it was Daniel Defoe’s Robinson Crusoe—because, he explained, the story of a man alone on an island is the only story there is.
Crusoe named his small island Despair, and the choice was apt. Despair—the deep, existential kind—stems from the awareness that we are each marooned on the island of our self, that we will live and die there alone. We are cut off from all the other islands, no matter how numerous and nearby they appear; we cannot swim across the straits, or swap our island for a different one, or even know for sure that the other ones exist outside the spell of our own senses. Certainly we cannot know the particulars of life on those islands—the full inner experience of our mother or our best friend or our sweetheart or our child. There is, between us and them—between us and everything—an irremediable rift. [Schulz, 2010, p. 258]
When we are being reflective, the ‘other minds’ model can seem inescapable. That may be why it looms large for philosophers who, when they are doing philosophy, are almost always reflective. Even Kathryn Schulz (who studied philosophy) seems to believe that, although our “acts of instant interpersonal comprehension are among the most mundane facts of life,” the model of separateness truly portrays reality.
If I am only truly knowable from the inside, no one but me can truly know me. This isolation within ourselves can be mitigated (by intimacy with other people) and it can be dodged (by not thinking about it), but it cannot be eradicated. It is…the fundamental condition of our existence. [Schulz, 2010, p. 258]
Why is it necessary to elevate the separateness between us to a “fundamental condition of our existence,” while downplaying the countless successes in communication which Schulz admits are of great importance in our lives?
When the Privacy Theory Doesn’t Work
The passages from Being Wrong quoted above are all the more striking in the context of the chapter that contains them, which emphasizes the successes in human communication that we take for granted and hardly think about. Schulz writes about how important it is for people, especially as young children, to communicate their needs to others. “Our very survival depends on our caretakers understanding and meeting our needs—first and foremost for physical comfort and safety, and secondarily (but scarcely less crucially) for emotional reassurance and closeness.”
It makes sense, then, that we care so much about getting other people right. And it makes sense, too, that, overall, we are astonishingly good at it. The phone rings and you pick it up and your mother says hi, and you know—from a thousand miles away, with only one syllable to work with—that something is wrong. An expression flickers across a stranger’s face and you have a very good chance of correctly deducing his feelings. You and a friend sit through a particularly ludicrous meeting together and carefully avoid catching each other’s eyes, because if you did, you would each know so much about what was going on in the other’s mind that you would both laugh out loud. These acts of instant interpersonal comprehension are among the most mundane facts of life; we experience them dozens of times a day, mostly without noticing. Yet they are among the most extraordinary of human abilities. To understand someone else, to fathom what’s going on in her world, to see into her mind and heart: if at first this is what makes staying alive possible, ultimately, it is what makes life worthwhile. [Schulz, 2010, p. 250]
During those moments of comprehension, the “other minds theory” is inactive. Its considerable explanatory power is idle, without application. While we are engaged in effective communication, we do not think of ourselves as separate minds signalling to one another through the cloudy medium of the objective world. The experience of ‘getting through’ to someone is less ‘through a glass, darkly,’ than ‘face to face.’
The theory of other minds can get in the way when we communicate. To think about what you have not disclosed to your lover is to erect a barrier between you. ‘Self-consciousness’—the awareness of how one appears to others in a social situation—is a recognized obstacle to effective communication and action.
Most of us are reminded of the theory of other minds only when we have a use for it: when we are trying to decide how far to trust someone, or how our words and actions are likely to be taken by a listener who does not share our background. The subjective-objective distinction comes to the fore when we suspect we are deceived, or ignorant of an important fact—when wondering whether that movement in the bushes was a large predator, a harmless rodent, or just the shadow of leaves in the wind.

When we ‘get things right,’ being well enough acquainted with the facts of our situation to engage with them effectively, our awareness of other minds tends to fade, and with it, the whole subjective-objective distinction. When I do rough carpentry, replacing rotten boards in a sundeck, my thoughts run to the lumber, to grain and warp, to goodness or badness of fit, to the task of hammering nails home without bending them. If I’m working in good light, I am unlikely to dwell on the possibilities of misperception and illusion. If I play a fast physical game like table tennis, I do not contemplate the differences between my opponent’s mind and my own. Table tennis is not poker. The game is clear and open; all ‘cards’ are on the table. My task is to return my opponent’s serve and make it difficult for him to return mine. Usually, in table tennis, an awareness of my opponent’s body position and momentum is more valuable to me than an awareness of his mental state. Although in exceptional circumstances, such as when I know he is upset, I might use that to my advantage, the theory of other minds is almost useless to me in an ordinary, friendly game of table tennis.
Is There a Better Theory?
Is there an alternative to the model that says subjective experience is private?
The flip side of the privacy coin is infallibility. There is no room in the model for us to be wrong about our own subjective experience. It just is what it is.
But of course we can be wrong about our own mental states. People’s psychological assessments of themselves are notoriously inaccurate.
The self-assessments we get wrong are different from the ones we allegedly cannot be wrong about. I may flatter myself by thinking that I’m braver than I really am, but I cannot make a mistake about how my red filing cabinet looks to me.
In computing terms, my experience of red can be described as ‘raw’ or primitive data. Any information-processing system (and minds are information-processing systems, whatever else they are as well) has primitive data elements that are input to the system, simply ‘given.’ They are neither right nor wrong in themselves. The concepts of accuracy and error only appear when we take the intentional stance, and say the data is ‘about’ something other than the data itself.
A digital photograph is a more or less accurate representation of a scene in the world. But the sequence of 1’s and 0’s that constitutes the digital image in the system’s memory can only be wrong insofar as it is considered to be a representation of something else.
If the image is distorted or inaccurate, we might say the system is ‘wrong about’ that part of the world which the image represents. It is not wrong about the image itself.
If the system goes on to make a second-order representation of the image, it also runs the risk of being wrong about that. A second-order representation could be a simple copy of the image (lossy or lossless), or a record with the title of the image and a pointer to its memory location and size. As applied to minds, a second-order representation in minds might be a thought about one’s own visual experience, for example, that it is blurrier than it used to be (indicating the need for a new prescription). My assessment of my own bravery is another higher-order representation. There is always room for error in higher-order representations of data because to describe them as higher-order is to describe them as about something—in this case, other data.
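In Python terms (a minimal sketch of my own, not any standard API), the asymmetry looks like this: the first-order data is simply given, while a record about that data can be checked against it and found wrong:

```python
import hashlib

# First-order data: the "image" itself, raw bytes. Neither right nor wrong.
image = bytes([0, 255, 0, 255, 0, 255])  # a hypothetical six-pixel strip

# Second-order data: a record *about* the image. Because it is about
# something else, it can be inaccurate.
record = {
    "title": "red filing cabinet",
    "size_bytes": 8,  # wrong: the image is actually 6 bytes
    "checksum": hashlib.sha256(image).hexdigest(),
}

# The record's claims can be checked against the image itself:
size_claim_correct = record["size_bytes"] == len(image)
checksum_correct = record["checksum"] == hashlib.sha256(image).hexdigest()
print(size_claim_correct, checksum_correct)  # False True
```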
The logic of information systems, minds included, requires having informational primitives. Until we consider an image as a representation of something, there is no room for error; the image is what it is. But that is a very limited claim, which does not entail privacy. The data primitives of a computing system are not private to the system. It’s easy to copy a digital image from one computer to another. In fact, there is no difficulty about copying data at any level of representation, low-level or high-level; nor is it difficult to verify that the copy is accurate. “Primitive,” therefore, does not imply “private.”
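A short sketch of the point (Python, illustrative only): a system’s primitive data can be copied into another system, and the copy verified as accurate, so nothing about being primitive makes the data private:

```python
import hashlib

# "System A" holds a primitive data element: the raw bytes of an image.
system_a_image = bytes(range(256))

# Copying the primitive into "System B" is trivial; being primitive
# does not make the data private to System A.
system_b_image = bytes(system_a_image)

# Verifying that the copy is accurate is equally easy,
# byte for byte or by comparing checksums.
assert system_b_image == system_a_image
assert hashlib.sha256(system_b_image).digest() == hashlib.sha256(system_a_image).digest()
```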
Complete and verifiably accurate information transfer is much easier for computers than for biological organisms. That is a matter of contingent fact, on this planet, at this time, which may explain why, having formed higher-level representations of our internal representations of external reality, we tend to slide from “primitive” to “private.” If you are red-green colour-blind and I am not, I will be forever frustrated in my attempts to explain to you the difference in colour I see between red and green traffic lights. Beyond saying, “I see a vivid difference in colour between them, as vivid as the difference between blue and orange,” I cannot say much that will help you understand what I’m talking about. But that is a useful thing to say, which should help you understand what I’m talking about—a colour difference (not brightness, size, or position). So what am I failing to communicate? The experience itself; the redness and greenness I experience!
Having to resort to exclamation marks and italics to make a point is a sign that we have ceased to speak scientifically. What is going on here? Perhaps an aspect of our subjective experience is outside the domain of science, necessarily so, because the facts of science are objective, publicly verifiable, whereas subjective experience is…subjective. Or perhaps we are misled by our own self-model, illegitimately extending a useful theory into areas in which it is not useful.
I dislike metaphysical mysteries. Although to solve the mystery of consciousness may be beyond the scope of this post, we can at least begin to explore other ways of thinking about it.
Human experience is a kind of information; and the concept of information is less mysterious than the folk-psychological notion of subjectivity. It is useful to think about how artificial systems represent information, because we have a much better understanding of how our artifacts work than of how we ourselves work. A binary digital image is an ordered series of bits, each of which has one of two possible states conventionally represented as 1 and 0. The same image can be represented by high and low voltage states in an array of transistors in a computer’s memory, or in optical form on a CD, or it can be printed in Arabic numerals on paper. The image can readily be converted from any of these media to any other. It can also be translated into a different sequence of 1’s and 0’s. This may be done in order to generate the same image on different computer hardware, or to compress it without loss of information to reduce its storage requirements. The same information is expressed in different data primitives.
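This medium-independence can be demonstrated directly. The sketch below (Python standard library only; the “image” is a made-up bit pattern) stores the same bit sequence raw, as hexadecimal text, and losslessly compressed, and shows that each form converts back without loss:

```python
import zlib

# The same "image" information in three different encodings:
raw = bytes([1, 0, 1, 1, 0, 0, 1, 0] * 64)   # the bits, stored raw
as_text = raw.hex()                           # "printed" as hexadecimal characters
compressed = zlib.compress(raw)               # a different, shorter bit sequence

# Each encoding uses different data primitives, yet each converts back
# to the original without loss: the informational content is identical.
assert bytes.fromhex(as_text) == raw
assert zlib.decompress(compressed) == raw
assert len(compressed) < len(raw)             # fewer primitives, same information
```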
To extend this analogy to human minds, if you and I are viewing the same scene, and our visual experience of the scene has the same informational content, then I see what you see. The question whether you see a tree in the same shade of green as I see it is misleading, because it falsely suggests that you have information which is not available to me. If the question is reinterpreted as asking whether the data primitives of your visual experience are the same as mine, then it makes sense—but, unless I am a neuroscientist trying to develop a way to copy my experience from my brain into yours, I have no reason to care about the answer. Neither does a photographer care whether his photograph is represented by the same sequence of bits on an Apple computer and a PC. He cares only that the pixel resolution and the colour depth are the same, and that the display hardware does an equally good job of rendering the image.
The real currency of perception is the ability to detect contrasts—differences and similarities. If neither you nor I are colour-blind, we can share the experience of the state of a traffic light, or a glorious sunset. Unless we are philosophizing, the subject of whether your experience is different from mine does not arise. It is only when we fail to make the same distinctions that the difference between us comes to the forefront—when you see, on the other side of the lake, a deer which (without my glasses) I cannot make out.
The theory of minds makes sense when it deals with their informational content. I can distinguish colours my friend cannot; he knows more about sailing than I do. The differences between the information represented in my mind and in my friend’s can be assessed without reference to how each of us represents this information internally. The ‘how’ of internal representations is irrelevant to informational content.
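As a toy illustration (Python; byte order stands in for ‘internal representation’), two systems can hold the same content in different primitives, and a comparison of content never needs to inspect how either side stores it:

```python
value = 1024

# Two "minds" hold the same number in different internal formats:
big_endian = value.to_bytes(4, "big")        # b'\x00\x00\x04\x00'
little_endian = value.to_bytes(4, "little")  # b'\x00\x04\x00\x00'

# The raw primitives differ...
assert big_endian != little_endian

# ...but the informational content is the same, and comparing content
# does not require inspecting how either side represents it internally.
assert int.from_bytes(big_endian, "big") == int.from_bytes(little_endian, "little") == 1024
```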
In order to understand the behaviour of other people well enough to interact effectively with them, we must take the intentional stance. But taking the intentional stance does not require believing in a deep metaphysical separation. Machines can communicate their internal states fully, and run emulations of one another. The fact that people cannot do so points mainly to technical limitations. I know of no reason to rule out technology which would allow us to share mental states fully. My preliminary answer to the question posed in this section is, yes, a better theory is possible, one which recognizes that “primitive” does not imply “private.” Such a theory would eliminate much, if not all, of the mystery of the subjective/objective distinction, without impairing its ability to do its proper work of distinguishing between mental models and the external realities they represent.
Mirror Neurons – A Deeper Communion
Recent advances in neuroscience have done better than folk psychology in explaining why people are, as Kathryn Schulz says, “so astonishingly good” at getting each other right. They also go far towards dispelling the notion that each of us is marooned on the private island of his or her subjective experience. There is mounting evidence that the brains of humans and other primates contain special ‘mirror neurons’ dedicated to crossing the divide between them. Far from being isolated, we are, compared to most species, exquisitely well adapted for social communication.
On the intuitive model that, until recently, I naively assumed, interpersonal communication is a cognitive process, a matter of interpreting our sensory experience of one another in much the way that we look at a tool and figure out its purpose, or—having learned to read—scan the pages of a book and mentally recreate the words of its author. It turns out that understanding the actions and facial expressions of other people begins at a much more primitive level. It does not have to be learned, being built in. This hard-wired capability enables us to learn from one another by imitation, efficiently and directly, and gives us deep, precognitive experience of the emotional states of other people—capabilities which have contributed greatly to our species’ remarkable success in the evolutionary competition.
Instead of being ‘profoundly alone,’ we are connected to each other by broadband communications channels. The next post will explore the implications of this new research for understanding the self.
Damasio, Antonio (2010), Self Comes to Mind, Random House.
Dennett, Daniel (1995), Darwin’s Dangerous Idea, Simon and Schuster.
Gallese, Vittorio, Christian Keysers and Giacomo Rizzolatti (2004), “A unifying view of the basis of social cognition,” Trends in Cognitive Sciences, Sept. 2004.
Schulz, Kathryn (2010), Being Wrong: Adventures in the Margin of Error, Ecco.
Wimmer, Heinz and Josef Perner (1983), “Beliefs about beliefs: representation and constraining function of wrong beliefs in young children’s understanding of deception,” Cognition, 13, 103-128.