Is the Google Car Conscious? Ethics and Artificial Minds

As a software developer, I am attracted by Thomas Metzinger’s functional level of description because it can be read as a high-level functional specification for consciousness and the self.  If someone could build an artificial system that meets the specification, he or she would have created a conscious being!   That would certainly be an interesting project.  Perhaps having a philosopher write the functional spec is exactly what’s called for to rescue AI from the back-eddies in which it has slowly revolved for several decades.

Although computers have made impressive progress in competing with human beings—advancing from checkers to chess championships, winning at trivia games and outperforming human experts in knowledge of specific domains—this success is due more to faster hardware, improved search techniques, and truly massive storage than to breakthrough advances in software architecture.  Yes, software can ‘learn,’ by using feedback from its own failures and successes to modify its behaviour when attempting similar problems in the future.  Yet the holy grail of AI, the Turing Test—to pass which a computer must be able to successfully masquerade as a human being by carrying on a convincing conversation with human interlocutors who are trying to tell the difference—still seems as distant a goal as it did when Alan Turing proposed it in 1950.  It is likely to remain so until we develop machine analogues of consciousness and emotion, by which I mean emotions both of self-concern and of concern for others.

Actually, I’m not convinced that the Turing Test should be the holy grail.  If the Turing Test is a competition between human and machine, it is not on a level playing field.  Requiring a computer to match the abilities of a person is no fairer than requiring a person to match the abilities of a computer—and no human being could pass that test.  Very probably, all that will be needed to weed out silicon imposters for a long time to come is to ask questions about human social interactions—about the complex feelings of love, anger, respect, and shame that children experience towards their parents as they grow up, about interactions with siblings and peers, conflicting desires for companionship and for solitude, the cauldron of feelings surrounding courtship and mating.  An artificial system is at such a great disadvantage in the social domain that the result is a foregone conclusion.  In order to pass the Turing Test, a system would need the ability to fake it all, to simulate an organic body it does not possess, to carry on a deceit more elaborate than the most accomplished human liar could sustain.  For these reasons, the Turing Test sets the bar too high.

The Turing Test should be recast as the kind of test we might use to determine whether some alien species, with a body fundamentally unlike ours, and a society organized along entirely different lines, was conscious and intelligent.  In devising such a test, Metzinger’s functional descriptions might be very helpful.

How to Build Consciousness

If I set about to build a conscious information-processing system along Metzinger’s lines, my design would certainly include a world-model.  Even by conventional software design principles that do not aim at AI or consciousness, a world-model is a good thing.  Consider, for example, an information-processing system that uses input from cameras, audio sensors, 3-D range finders, GPS, and motion detectors to drive a car safely through traffic.  If I were to design such a system, I would certainly include a three-dimensional world-model as a “layer”.  It would be a relatively stable data model, mediating between the fragmented, unintegrated and volatile layer of raw input from cameras, accelerometers, and other sensors, and the output layer of motions applied to the steering mechanism, gearshift, brakes, clutch, and accelerator, also fragmented and highly volatile.  A driving program which tried to go “directly” from inputs to outputs, without representing its domain as a world-model, would probably make inefficient use of computing resources, and would certainly be fragile and hard to maintain.  I would expect it to crash frequently (pun intended).
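To make the layering concrete, here is a minimal sketch in Python of the shape I have in mind.  None of it comes from a real driving system: the function names, dictionary keys, and toy braking policy are my own inventions.  The point is only the architecture: raw sensor frames are fused into a persistent world-model, and control outputs are computed from the model, never directly from the sensors.

```python
def integrate(world_model, sensor_frame):
    """Fuse one frame of fragmented, volatile sensor data into the stable model."""
    world_model["obstacles"].update(sensor_frame["detections"])
    world_model["ego"] = sensor_frame["ego_estimate"]


def distance(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5


def decide(world_model):
    """Derive control outputs from the world-model, never from raw sensor data."""
    ego = world_model["ego"]
    nearest = min(
        (distance(ego["position"], obs["position"])
         for obs in world_model["obstacles"].values()),
        default=float("inf"),
    )
    # Toy policy: brake hard if anything is within 10 metres, otherwise cruise.
    if nearest < 10.0:
        return {"throttle": 0.0, "brake": 1.0}
    return {"throttle": 0.3, "brake": 0.0}


def drive_loop(sensors, actuators):
    world_model = {"obstacles": {}, "ego": None}      # the stable middle layer
    while True:
        integrate(world_model, sensors.read_frame())  # input layer -> world-model
        actuators.apply(decide(world_model))          # world-model -> output layer
```

The middle layer is what makes such a program maintainable: a new sensor only has to feed the model, and a new behaviour only has to read it.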

As you may be aware, driverless systems now exist, including some in the late stages of development and testing.  One such system, Google’s driverless car, has safely navigated more than 140,000 miles of California streets and freeways in all sorts of traffic.  Google’s system uses a world-model, called a ‘map,’ but a map which is dynamically enhanced with the real-time positions and trajectories of other vehicles, pedestrians, and hazards, including wild animals caught in the headlights.  Of course, the world-model of Google’s driverless system also includes a self-model representing the vehicle it controls.
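Not knowing how Google actually structures its ‘map,’ I can only guess, but the description suggests something like the following: a static prior map, a dynamic overlay of tracked objects, and a self-model embedded in the same structure.  The field names below are illustrative assumptions of mine, a more structured version of the bare dictionary in the earlier sketch.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Tuple


@dataclass
class Track:
    """A dynamic object overlaid on the static map: a car, a pedestrian, a deer."""
    kind: str
    position: Tuple[float, float]
    velocity: Tuple[float, float]


@dataclass
class SelfModel:
    """The system's representation of the vehicle it controls."""
    position: Tuple[float, float]
    heading: float
    speed: float
    planned_route: List[Tuple[float, float]]


@dataclass
class WorldModel:
    static_map: dict                                        # prior road geometry, lanes, signs
    tracks: Dict[int, Track] = field(default_factory=dict)  # real-time overlay of other road users
    ego: Optional[SelfModel] = None                         # the self-model, embedded in the world-model
```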

Having a dynamically updated world model is not, I think, sufficient for consciousness.  (It may not even be necessary; but I suspect that a conscious system that lacked a stable world model would be so different from ourselves—would appear so bizarre—that we would have difficulty answering the question whether it is conscious or not.)

Besides a world model, what else should be included in a functional specification for consciousness?  Of Metzinger’s eleven ‘constraints,’ he flags only three as necessary for a minimally conscious being: global availability, activation within a window of presence, and transparency. Although I am not privy to the technical details of the Google driverless car project, I’d hazard a guess that it satisfies all these conditions.

Certainly, a moving window of presence, centred on ‘now,’ is a central feature of the data design, and is obvious in the visualizations that Google provides, such as this one:

[Image: Google’s visualization of the driverless car’s world-model, showing the vehicle, its recent path, its planned route, and the surrounding traffic.]

The Google car’s self-model (pictured just above centre) indicates its current position within the world model.  Its recent past is shown by the green path behind the vehicle (extending towards the viewer), and its planned route is represented by the continuation of the path above the vehicle, making a left turn at the intersection.  Some of the obstacles, vehicles, and pedestrians are framed by colour-coded 3D bounding boxes.  I do not have a key to the colours, but it looks as though all vehicles which could possibly cross the path of the Google car are outlined in magenta—perhaps flagging them for special attention.

The above image is not the driverless vehicle’s world model itself, just a visual representation of it.  The real model would consist in a linked network of data structures in the main memory of the program.  There is probably more detail in the model itself, and certainly a higher spatial resolution, than meets the eye.

According to Metzinger’s first constraint, information is consciously represented in a system only if it is “globally available for deliberately guided attention, cognitive reference, and control of action.”  The function of global availability is to maximize flexibility in behavioural response.

The Google system’s designers describe it as making decisions about driving.  To successfully negotiate traffic, the driverless car must be flexible enough to change its world-model, and its driving plan, quickly, on the fly, in response to unanticipated input from its sensors.  If it is operating on a plan to turn left at the intersection when the light turns green, it must be able to revise this plan if some idiot in the cross lane runs a red light.  In driving, hazards can appear from all quarters: a jaywalking pedestrian, a weaving bike courier, a pothole marked by traffic cones, an ambulance siren.  To have safely driven 140,000 miles in California, the driverless car must have proved itself capable of managing all those situations and more.  Global availability of a wide variety of information about its world, and flexibility in behavioural response, are what kept it out of trouble.
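Here is roughly what that flexibility might look like in code, continuing the structured world-model sketched above.  The methods conflicts_with, replan, and next_command are assumed interfaces of my own, not anything I know about Google’s software; the point is that because the whole model is available to the planner, any newly represented fact (a red-light runner, a jaywalker) can overturn the current plan at the very next tick.

```python
def plan_still_safe(plan, world_model):
    """Check the current plan against everything the globally available model contains."""
    return not any(plan.conflicts_with(track) for track in world_model.tracks.values())


def control_step(world_model, planner, current_plan):
    # Nothing is walled off inside a single sensor pipeline: the planner may consult
    # any part of the world-model, which is roughly what global availability buys
    # the system, namely maximum flexibility of behavioural response.
    if current_plan is None or not plan_still_safe(current_plan, world_model):
        current_plan = planner.replan(world_model)   # e.g. abandon the planned left turn
    return current_plan.next_command(), current_plan
```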

What about transparency?  Without being privy to the internals of the Google system, I’d hazard a guess that its world-model is largely transparent to the system.  It interacts with the world through its world-model.  The system’s successes and failures are represented to the system as states of its world-model.  The system is, necessarily, oblivious to facts about the world which are not represented in its world-model, and such facts are incapable of changing the system’s behaviour.

If the Google driverless car is really smart, though, its world-model may not be fully transparent.  There must be occasions on which the system makes perceptual errors, gets things wrong.  Perhaps it sometimes identifies a car as ‘parked,’ hence immobile, when in fact the car contains a driver, who may suddenly fling his door open into the path of an oncoming vehicle.  In order to drive well, the Google car should be able to correct its own perceptions when it receives new and better information.  This trick requires the system to recognize that its model might be wrong in certain particulars—that the system itself is capable of misperceiving—that its model of reality is supported by varying degrees of cognitive justification.  A really good driverless car must be able to make the basic distinction between appearance and reality.  Sometimes bits of its world model must turn opaque, so that the system may evaluate and reconsider its interpretation.
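One simple-minded way to implement that appearance/reality distinction, and I stress this is my own sketch rather than anything I know about Google’s design, is to carry each entry of the world-model together with its degree of justification, and to treat the entry as ‘opaque,’ open to revision, whenever strong new evidence contradicts it:

```python
from dataclasses import dataclass


@dataclass
class Belief:
    """One entry in the world-model, carried with its degree of cognitive justification."""
    value: str             # e.g. "parked car", "pedestrian", "traffic cone"
    confidence: float      # 0.0 .. 1.0


def revise(belief: Belief, observed_value: str, observed_confidence: float) -> Belief:
    """Most of the time the model is transparent and the system simply acts on it.
    When strong new evidence contradicts a belief, that belief turns opaque:
    it is treated as a possibly mistaken representation and reconsidered."""
    if observed_value == belief.value:
        return Belief(belief.value, min(1.0, belief.confidence + 0.1))
    if observed_confidence > belief.confidence:
        # Our earlier perception may have been wrong; adopt the better-supported reading.
        return Belief(observed_value, observed_confidence)
    return Belief(belief.value, max(0.0, belief.confidence - 0.1))  # keep it, but trust it less
```

A car classified as ‘parked’ whose position suddenly starts changing would fail the first test, and the model would flip to the better-supported interpretation.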

According to Metzinger, global availability, a window of presence, and transparency are all that are needed for a system to be minimally conscious.  A case can be made that the Google driverless car satisfies all three criteria.

It’s an interesting exercise to go through all the rest of Metzinger’s constraints.  It seems to me that the Google car satisfies most of them.  It probably satisfies constraint 8, offline activation; such systems are usually designed to work offline, with prerecorded data as input, because that helps prevent damage to the equipment during testing.  And constraint 11, adaptivity, is certainly satisfied by a driverless car.  The system is designed to avoid danger to itself.
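Offline activation, in this setting, amounts to feeding the same software recorded data instead of live sensors.  A sketch, assuming the drive_loop from the earlier example and a log of sensor frames saved as JSON lines (the file name and the SimulatedActuators stand-in are, again, hypothetical):

```python
import json


class RecordedSensors:
    """Stands in for the live sensor suite: replays frames from a log file."""

    def __init__(self, log_path: str):
        with open(log_path) as f:
            self.frames = [json.loads(line) for line in f]
        self.cursor = 0

    def read_frame(self):
        frame = self.frames[self.cursor]
        self.cursor = (self.cursor + 1) % len(self.frames)
        return frame


# The same drive_loop sketched earlier can now be exercised on the bench, with
# no vehicle attached -- offline activation in Metzinger's sense:
#
#   drive_loop(RecordedSensors("test_run.jsonl"), SimulatedActuators())
```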

Should we conclude that the bright folks at Google have created artificial consciousness without really intending to, as an unplanned side effect of building a safe, driverless vehicle?   That seems a stretch.  I thought artificial consciousness was something deep and mysterious, a distant goal that the most advanced AI projects didn’t even begin to approximate.  We’re talking about consciousness!  The ineffable smell of sandalwood, etc.

Well, I see three possibilities.  One is that the Google car is conscious; another is that Metzinger’s list of criteria leaves out something vital; and the third is that I am interpreting Metzinger’s criteria wrongly—despite superficial appearances, the Google car does not really satisfy the constraints of global availability, activation within a window of presence, and transparency.

If the problem is with my interpretation, then it would be useful if someone in Metzinger’s camp devised a list of tests which could be applied to systems (both artificial and natural) to determine whether they are conscious or not.  And if someone thinks Metzinger’s list falls short, she should rise to the challenge of filling in its deficiencies.  Absent either of those eventualities, I find myself leaning toward the view that the Google car is conscious—and that consciousness is not as deep and mysterious as I thought it was.  Consciousness is a property that is bound to emerge functionally in any system that is smart and flexible and adaptable enough to drive 140,000 miles safely in California traffic.

Having said that, I will add qualifications.  If the Google car is conscious, its consciousness is very different from ours.  The Google car is unlikely to be self-conscious, in the way humans understand self-consciousness.  The Google car may not have anything resembling emotions.  It may be incapable of pleasure and suffering.  And it probably lacks long-term episodic memory.  Of course, not knowing what the Google designers built into their system, I don’t know that all of these aspects of human consciousness are missing.  I suspect they are lacking, because they are not needed for the system to carry out its function of driving safely.

Human self-consciousness is part of our social psychology—it is our awareness of ourselves as perceived by others.  The Google car probably gets along fine without a theory of other minds.  We can hardly expect it to simulate the minds of the human drivers and pedestrians it encounters every day.  Rather than trying to understand their behaviour ‘from the inside,’ it probably does well enough by objectifying them—regarding them as other objects on the road, that, collectively, follow certain statistical patterns, patterns from which individuals occasionally deviate wildly (slamming on the brakes, pulling a U-turn, driving off the road, heading in the wrong direction on a one-way street).

Self-consciousness might, however, be a useful feature in a later-generation Google car, after the technology is commercialized (as it will be, perhaps within this decade).  The prototype Google car is one of a kind; it has no peers.  But when the roads are crowded with Google cars, it may be very useful for them to emulate one another—to see the situation from the other’s point of view—when predicting the likely behaviour of other driverless vehicles.  For machines, emulation of other machines is easy and straightforward, and stands a greater chance of success than many of our own attempts to divine the motives of our fellow drivers.

The Google car doesn’t need big, animal emotions like anger, terror, hunger, and lust.  But it may benefit from a limited emotional analogue built into the system—something resembling comfort and discomfort.  A well-designed driverless car will try to avoid situations in which its ability to manoeuvre itself out of danger is compromised, such as following a slow semi on the freeway, with another semi closing in fast from behind and buses on both sides blocking escape.  The system plans its path through traffic by running forward scenarios and evaluating them.  A scenario that put it into a traffic box from which it could not escape would be assigned a higher ‘discomfort index’ than an alternative scenario which left more options open.  The driverless vehicle must always be juggling competing values, assessing tradeoffs among speed-to-destination, fuel economy, ride comfort, and safety.  The programmers would assign weighting factors to these different values (weighting factors which the human rider/operator could perhaps modify, within limits).  Ultimately, the system makes the choice with the lowest overall ‘discomfort index’—an index represented by a number.  I say confidently that it would be a number, because I know how software works; but being a number is a detail of implementation which has nothing to do with the higher level of description we started with—the level of the system’s experience.  Although implemented as a number, its functional role in the system is that of discomfort—a quality that the system will try to reduce, or avoid altogether.
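Here is roughly how I imagine such a discomfort index being computed.  The weights and scenario costs are invented for illustration; I have no knowledge of the actual values, or even whether Google’s system works this way.

```python
# Hypothetical weighting factors; a human rider might be allowed to adjust them within limits.
WEIGHTS = {"time_to_destination": 1.0, "fuel": 0.5, "ride_comfort": 0.8, "risk": 5.0}


def discomfort_index(scenario):
    """Collapse the competing costs of one forward-simulated scenario into a single number."""
    return sum(WEIGHTS[key] * scenario[key] for key in WEIGHTS)


def choose_plan(candidate_scenarios):
    """Run forward scenarios and pick the one the system is most 'comfortable' with."""
    return min(candidate_scenarios, key=discomfort_index)


# Invented example: boxed in behind a slow semi versus changing lanes early.
scenarios = [
    {"name": "stay boxed in", "time_to_destination": 9.0, "fuel": 1.0,
     "ride_comfort": 2.0, "risk": 6.0},
    {"name": "change lanes early", "time_to_destination": 7.0, "fuel": 1.2,
     "ride_comfort": 3.0, "risk": 2.0},
]
best = choose_plan(scenarios)   # -> the "change lanes early" scenario
```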

A Driver from Mars

But someone will object that I am begging the question.  Before asking whether or not the Google car’s experience includes something like the sensation of discomfort, we must first settle the question whether the Google car has experience at all.   Before asking what it is like to be a driverless vehicle, we should ask whether it is like anything, or like nothing.

I propose to sidestep this question.  It has been hotly contested by many others with very little illumination of the subject as a result.  I will restrict my remarks to a few reasons why I am not attracted by the position that the Google car, or a more advanced successor, cannot be conscious.  One is that it seems to rest on a prejudice in favour of naturally-evolved organisms and against artificial entities.  Suppose an expedition brings back a living creature from Mars.  To begin with, all the scientists know about this creature is that it is motile, because it moves around and grasps things with its tentacles—that it must have sense organs, because it responds differentially to different colours, shapes, and sounds—that it consumes a litre of vegetable oil every day—and that its behaviour can be modified by small electric shocks applied as negative reinforcement.  In order to test the creature’s capabilities, the scientists show it the operation of a motor vehicle, then allow it to take the controls, ‘teaching’ it with electric shocks to obey the rules of the road and avoid collisions with other vehicles, pedestrians, and inanimate objects.  After a few weeks of practice, the creature proves so adept at handling the vehicle that the scientists decide to turn it loose.  The creature safely drives 140,000 miles on California roads, with no human intervention required.  Were this amazing story to become a matter of public knowledge, I suggest most people would conclude without hesitation that the Martian creature was conscious.  How else could it have done what it did?  We living organisms are readily disposed to attribute mental properties to other living organisms, if we see them doing things, solving problems, that would engage our own mental faculties.  We are less generous in attributing mental properties to artificial entities.  I don’t know what scientific principle could justify this difference in description applied to two classes of system, if the systems are on a par behaviourally, responding with equal flexibility and nuance to the challenges of a dynamically changing environment.

How Programmers Talk

A second, related point to which, as a software developer, I can personally attest is that the people who work most closely with complex artificial systems—the programmers—attribute mental properties to them all the time.  I have had many conversations about what one module ‘knows’ that another does not, about what a system is ‘trying’ to do, about the kind of data a program is ‘happy’ or ‘unhappy’ with, and so on.  These are not philosophical discussions; they are practical ones—conversations between programmers doing their job of diagnosing problems, fixing bugs, improving system performance.  The remarks are unsentimental, and are also, I think, free from misleading anthropomorphism.  Programmers use mental descriptors because they are the best available to convey information about what is going on with these complex, multi-layered systems.  Although an alternative, ‘reductionist’ account which avoids mentalistic language is always, in principle, available, it is not practically available; such an account would be too long and complex to be understood in conversations around the whiteboard.  It would be rather like trying to express, “Your proposal makes me uncomfortable,” in the neuroscientific terms of what is happening in my brain (involving my language centres, Wernicke’s and Broca’s areas, my prefrontal cortex, my amygdala, etc. etc.).

Metzinger’s work, in offering functional descriptions of fundamental aspects of consciousness, avoids the intellectual quagmire of ineffability, and opens a door to the possibility of scientific progress.

Let us return to the Google car.  I said that the system likely lacks long-term episodic memory, because it wouldn’t need that in order to drive well.  The system does need to learn from its failures and successes, but it can do so without retaining a catalogue of detailed incidents from its past, as most human beings do.  If you are tempted to say that the Google car is not conscious for that reason, you should think hard about whether you would draw a parallel conclusion about the many human beings who are unable to lay down episodic memories because of lesions to the hippocampus, or because of dementia.
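For what it is worth, here is the kind of learning-without-episodes I have in mind: a toy sketch, with invented parameters, in which each incident nudges a running setting and is then discarded.

```python
class AggregateLearner:
    """Learns from each incident, then forgets the incident itself: no catalogue
    of episodes is kept, only updated running parameters."""

    def __init__(self):
        self.braking_margin_m = 10.0    # how early to start braking, in metres
        self.incidents_seen = 0

    def learn_from(self, outcome: str):
        self.incidents_seen += 1
        if outcome == "near_miss":
            self.braking_margin_m *= 1.05   # be a little more cautious next time
        elif outcome == "overly_cautious_stop":
            self.braking_margin_m *= 0.98
        # The incident record itself is not retained; nothing episodic survives,
        # only the adjusted parameter.
```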

Suffering of Artificial Beings

I also said that the Google car may lack states that are close analogues of human pleasure and pain.  I say this because, like episodic memory, full-blown pleasure and pain—agony and ecstasy—do not seem necessary in a system whose job is to be a chauffeur.  The system can be taught to avoid collisions without subjecting it to excruciating pain if it crumples a fender.  Physical pain is overkill—necessary, perhaps, for a system that depends, every day, on the integrity of its own body to obtain food and avoid predators in a jungle environment, but not for a driverless car which will receive its gas and oil changes regardless of its performance, and which will be taken to a body shop for any needed repairs.  All the driverless car needs in the way of motivation is something like the ‘discomfort index’ outlined above.  And if an elevated discomfort index can correctly be described as an uncomfortable experience for the system, it need not be worse than very mild discomfort.  The driverless car does not need to be strongly motivated to do its job well, because it has no competing motivations.  It does not get bored, or want to speed to the next gas station because it needs to pee, or aspire to a loftier career than chauffeuring.  When it finds itself in a dangerous situation, its ‘discomfort index’ will be elevated; it will then explore the options available to it through forward scenarios, and choose one which promises to reduce the riskiness of its situation to a level it is ‘comfortable’ with.  This does not require elevated pulse rate, sweaty hands, abdominal qualms, muscular tightness or any of the other physical symptoms you or I might experience in relentless traffic on a California freeway.  We have those symptoms because the ‘fight-or-flight’ responses that evolved among our ancestors on the African savannahs over millions of years are a very poor match for the threats we encounter in modern life, behind the wheel or in the office.  Because robots are designed to meet modern challenges, and very specific ones at that, there is no reason why they should not be content with their lot almost all the time.

Metzinger warns his readers against creating artificial beings capable of suffering.  He opposes attempting to create an artificial self on ethical grounds, because of the risk of “increasing the amount of suffering, misery, and confusion on the planet.”

One way…to look at biological evolution on our planet is as a process that has created an expanding ocean of suffering and confusion where there previously was none.  … We should not accelerate it without need. (Metzinger 2004, p. 621)

Metzinger warns that the first conscious machines could turn out like mentally retarded infants, in that:

They would suffer from all kinds of functional and representational deficits too.  But they would now also subjectively experience those deficits.  …

If they had a transparent world-model embedded in a virtual window of presence, then a reality would appear to them.  They would be minimally conscious.  If, as advanced robots, they even had a stable bodily self-model, then they could feel sensory pain as their own pain, including all the consequences resulting from bad human engineering.  But particularly if their postbiotic PSM [phenomenal self-model] were actually anchored in biological hardware, things might be much worse.  If they had an emotional self-model, then they could truly suffer—possibly even in degrees of intensity or qualitative richness that we as their creators cannot imagine, because it is entirely alien to us.  If, in addition, they possessed a cognitive self-model, they could potentially not only conceive of their bizarre situation but also intellectually suffer from the fact that they never had anything like the “dignity” so important to their creators.  They might be able to consciously represent the obvious fact that they are only second-rate subjects, used as exchangeable experimental tools by some other type of self-modeling system, which obviously doesn’t know what it is doing and which must have lost control of its own actions long ago.  Can you imagine what it would be like to be such a mentally retarded phenomenal clone of the first generation?  Alternatively, can you imagine what it would be like to “come to” as a more advanced artificial subject, only to discover that, although possessing a distinct sense of self, you are just a commodity, a scientific tool never created and certainly not to be treated as an end in itself?  (Metzinger 2004, pp. 621–622)

Metzinger’s scenarios are suffused with the horror of Frankenstein.   Standing, as we are, on the brink of vast new technological capabilities, we are well advised to proceed with caution.  We could easily screw up and create unintended suffering; our track record is not exemplary.

However, I do think Metzinger’s worries are overblown.  The Google car has a transparent world-model embedded in a window of presence; it also has a stable bodily self-model; but it does not, I think, “feel sensory pain…including all the consequences resulting from bad human engineering.”  The pain would have to come from somewhere—it would have to be designed in.  But the system design doesn’t call for pain, so the Google engineers are unlikely to waste their efforts building it in.  The same applies to the kinds of emotional suffering, alien or otherwise, Metzinger describes.  The Google car seems to be a contented thing; why would we build something that was not?

What about the worry that artificial systems might “intellectually suffer from the fact that they never had anything like the ‘dignity’ so important to their creators”?  This is harder to assess.  Any such system would have to be much more capable than a driverless car.  To understand that there is such a thing as “dignity,” which it was denied, it would have to be able to simulate human minds, and would quite possibly be capable of passing the Turing test!  Metzinger seems to fear that we will build a very advanced system, with a higher level of consciousness than dogs or domestic animals (who don’t, we assume, fret about not being accorded full human dignity), which we will then treat as a commodity.  Is this a legitimate ethical concern?  Given our history of exploitation of our own species, including slavery and the subjection of women, I’d say it certainly is.  But it is not a worry we face yet, because we cannot yet build a system that is conscious at a human level.  And I do not think we are likely to build one by accident.

I favour the further development of the driverless car, and similar technology used for constructive purposes, because of its undeniable benefits.  (But I have misgivings about military and security applications of similar technology.  Can you picture driverless vehicles adapted for crowd control—or for ground combat?)  The primary motive for Sebastian Thrun, one of the creators of the Google car, is safety: almost all traffic accidents are caused by human error.  Other benefits include fuel savings, more efficient use of highway infrastructure, and freeing up the attention and effort of the human driver for more productive activities.

Another point worth mentioning is that there is absolutely no need to build existential angst into artificial systems.  Even a system with episodic memory and the ability to engage in long-term planning, possessing a world-model and self-model extending far into the future, a system that can predict its own destruction, need not care about its inevitable demise.  It would have to be programmed to care.  Depending on the purposes of the system, its designers may or may not see fit to include such programming.  No such choice was exercised in the design of us, natural creatures of blind evolution.  As a result, we tend to think that every conscious being who is aware of its own death must inevitably care about it; the alternative, not to care, seems impossible, even illogical.  But it is neither.

References

Metzinger, T. (2004). Being No One: The Self-Model Theory of Subjectivity. Cambridge, MA: MIT Press.


Replies

  1. Very interesting analysis. I would love to read Prof. Metzinger’s answer to this.
    It is my suspicion, however, that the Google Car does not satisfy the “global availability” constraint. It’s just hard to believe that this car actually builds a real internal world model upon which all the modules work and react. I think the illustration from Google that you’ve put in your article is just a big simplification made for their PR people. It seems much more probable that each of the modules works as one of Christof Koch’s “zombie agents” – simple algorithms reacting to certain patterns of data, not requiring any real “world model”.

  2. By the way, being perfectly able to drive in zombie mode (happens a lot to me – “how on earth did I get here?”) shows that consciousness is simply not necessary for this type of activity. Zombie agents – simple, inflexible algorithms – do the whole job.

  3. I too do a lot of driving in “zombie mode,” paying little attention and with little memory afterwards of how I got where I am. But I couldn’t drive safely in traffic for 140,000 miles without full consciousness occasionally kicking in. There are plenty of occasions when I need to pay full attention and use my best judgement: passing on a two-lane highway, driving in fog or on ice, driving through jaywalking pedestrians at night after fireworks at the beach, getting past cows on the road without hitting or stampeding them, figuring out where a siren is coming from and getting out of the way, etc. etc. If it can do all that, I think the Google car meets a reasonable ‘functional’ specification for consciousness.

  4. But still, isn’t it perfectly conceivable that all of those situations you mentioned could be handled by simple algorithms of the form “if x, then y”? In humans such situations require consciousness only because they are of a new type – there isn’t yet any algorithm to handle them – but I’m sure the brain would build such an algorithm if we met those situations over and over again. The Google Car’s software can include any number of inbuilt algorithms from the start. Also, remember that the Google Car operates on much more reliable and multimodal data than the human brain; there is much less guessing and much more measuring. Its calculations can include such data as the car’s exact speed, the precise wetness of the road, and the measured distance to all the obstacles around it in 360 degrees – such precise data makes consciousness unnecessary; math and logical rules are probably enough.
    Of course, we won’t really know until we actually ask the engineers at Google: does the Google Car create a unified model of the world, constructed using all its sensory capacities, upon which it acts? Or is there just a large set of rules or algorithms comparing data from the sensors, reacting in a fixed way to certain fixed patterns? My gut feeling is that the second option is the case.

  5. I agree that the ability to respond appropriately to novel situations is an important functional test for consciousness. I’ve developed a lot of software, and I just can’t imagine that an approach which tried to anticipate every situation the Google car would encounter on the road, and hard-code everything, would work. I suspect the Google car’s software architecture includes some breakthroughs in flexibility. But you’re right, I don’t know, and it would be nice to hear from the Google engineers on that point.

  6. I really think a key part of consciousness is goals. I would say a thing/being is conscious if it has the flexibility to choose some of its goals. If all its goals are programmed into it, either by genetic material or by a computer programmer, then it is not conscious. Humans, for example, are not programmed genetically to try to get an A in French or to lose/gain weight, and a dog is not programmed to seek out wrestling matches with cats (as mine did). I don’t think there are any machines yet that would satisfy this definition of consciousness.

  7. Are you relating the idea of free will to that of consciousness?

    Suppose I wrote a program that selected one of several goals for itself (e.g. win at chess, lose at chess, or play to a draw) based on a generated random number, itself seeded by some unpredictable event like the number of milliseconds between the user’s first two mouse clicks. I think we’d agree that the fact that the selection was not hard-coded is not enough to make my program conscious: it’s still just an automated chess-playing program that rigidly follows rules. Then the difficulty is to specify a test for “chooses goals” that doesn’t beg the question. If we encounter some creature (maybe we don’t even know whether it’s a living organism or a machine), what tests would prove or disprove whether it can exercise choice in this way? And (a separate question, until the link is established) what tests would show whether or not it is conscious?
