As a software developer, I am attracted by Thomas Metzinger’s functional level of description because it can be read as a high-level functional specification for consciousness and the self. If someone built an artificial system that met the specification, he or she would have created a conscious being! That would certainly be an interesting project. Perhaps having a philosopher write the functional spec is exactly what’s called for to rescue AI from the back-eddies in which it has slowly revolved for several decades.
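To a developer, a functional specification is essentially an interface: a list of capabilities that any conforming implementation must provide, with the internals left open. Purely as illustration, here is a minimal Python sketch of what such a spec might look like. The class and method names are my own invention, loosely inspired by Metzinger’s self-model theory (a transparent self-model embedded in a world-model), not his actual criteria.

```python
from abc import ABC, abstractmethod

class ConsciousSystemSpec(ABC):
    """Hypothetical 'functional spec' for a conscious system.

    These methods are illustrative placeholders, loosely inspired by
    Metzinger's self-model theory; they are not his actual constraints.
    """

    @abstractmethod
    def world_model(self):
        """Return the system's integrated model of its current environment."""

    @abstractmethod
    def self_model(self):
        """Return the system's model of itself, embedded within the world model."""

    @abstractmethod
    def is_transparent(self) -> bool:
        """True if the system cannot introspect its self-model *as* a model
        (a rough paraphrase of Metzinger's 'transparency' constraint)."""
```

On this reading, any system that genuinely satisfied the right version of such an interface, if the right constraints could ever be written down, would count as conscious.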
Although computers have made impressive progress in competing with human beings—advancing from checkers to chess championships, winning at trivia games, and outperforming human experts in specific knowledge domains—this success owes more to faster hardware, improved search techniques, and truly massive storage than to breakthrough advances in software architecture. Yes, software can ‘learn’, using feedback from its own failures and successes to modify its behaviour on similar problems in the future (a minimal sketch of the idea appears below). Yet the holy grail of AI, the Turing Test—which a computer passes by successfully masquerading as a human being, carrying on a convincing conversation with human interlocutors who are trying to tell the difference—still seems as distant a goal as it did when Alan Turing proposed it in 1950. It is likely to remain so until we develop machine analogues of consciousness and emotion, by which I mean emotions both of self-concern and of concern for others.
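As promised above, here is a minimal sketch of the kind of feedback-driven learning I have in mind: a toy epsilon-greedy learner that tries actions, receives success or failure as feedback, and gradually comes to prefer whatever has worked. Everything here (the class name, the actions, the reward probabilities) is hypothetical, a sketch of the technique rather than anyone’s real system.

```python
import random

class FeedbackLearner:
    """Toy illustration of learning from success/failure feedback."""

    def __init__(self, actions, epsilon=0.1, step=0.2):
        self.values = {a: 0.0 for a in actions}  # estimated success rate per action
        self.epsilon = epsilon                   # probability of exploring
        self.step = step                         # learning rate

    def choose(self):
        if random.random() < self.epsilon:            # occasionally try something new
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)  # otherwise pick the best so far

    def feedback(self, action, reward):
        # Nudge the estimate for this action toward the observed outcome (1 or 0).
        self.values[action] += self.step * (reward - self.values[action])

# Usage: action "B" succeeds more often, so the learner comes to favour it.
learner = FeedbackLearner(["A", "B"])
for _ in range(200):
    a = learner.choose()
    reward = 1 if random.random() < (0.8 if a == "B" else 0.3) else 0
    learner.feedback(a, reward)
print(learner.values)  # the estimate for "B" should end up higher
```

Note what this kind of learning does and does not do: it tunes behaviour within a fixed repertoire, which is exactly why it falls so far short of the conversational flexibility the Turing Test demands.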