Can machines ever be conscious?

April 5, 1996

Igor Aleksander believes they can; Jaron Lanier disagrees

Can machines be conscious? "I should co-co! You should see my new video recorder. Set it up to record Neighbours and it gives you the early evening news instead. It's got a will of its own."

People are only too ready to assign consciousness to inanimate objects; ancient Egyptians ascribed consciousness to the statue of Memnon because it emitted sounds as air expanded within it, warmed by the rays of the sun. However, were someone at a party to reveal that he was made of silicon chips, a perfectly understandable reaction might be "Good heavens! I've been talking to an unconscious zombie all evening". Curiously, people ascribe consciousness to the daftest selection of objects and then argue that sophisticated information-processing machines are definitely not conscious. The judgement as to whether a manufactured object is conscious need not be a property of the machine; it is in the gift of the beholder. So, even were it possible to make artefacts that some people agree are conscious, it would be impossible to do so in a universally agreed way.

Philosophers call this the "third person" problem; it is impossible, some argue, to tell whether anything outside ourselves is conscious or not. It is impossible (argues the philosopher Thomas Nagel) to know what it is like to be a bat. You may work out every last detail of the working of a bat's brain but this will not tell you what it is like to be that bat. So the only organism of whose consciousness I can be absolutely sure is me. This is the "first person" hallmark of consciousness. So, philosophically minded people will perfectly properly argue that science and the engineering of machines, being about objects outside ourselves, cannot cope with anything but third-person problems and therefore cannot explain what consciousness is.

My argument is that the engineer can finesse this problem in a sneaky way. She can ask the question: "What properties does an organism, artificial or real, need in order to decide that it itself is conscious?" If it can be argued that an object lacks the machinery to decide that it is conscious, then any claim that it might be conscious rests on no principle and is just arbitrary attribution.

To work with neural machines (one of which, called Magnus, is used by me and my co-workers) is to ask precisely the above question. It is not about machines that everyone will agree are conscious: the consciousness of anything we produce could be refuted simply through "third person" doubts. But we are defining the minimum machinery required for a machine to be able to operate in a "first-person" way. The critics of this approach, particularly those who regard artificial intelligence as a programmer's act of puppeteering, will say: "But any old machine can be given a voice synthesiser that regularly says things like 'I think, therefore I am.'"

Creating animated computer puppets is precisely what we are not doing. Magnus is generating explanations of "first person" mechanisms and, in particular, of how these may be derived from the operation of neural networks. Neural networks are used not because they mimic the brain, but because they have the capacity to learn and generalise in what seems to be a fundamental fashion; small wonder, then, that the brain is built of such things too. Typical questions are how an artificial neural net might store a vivid, recallable impression of sights, sounds, smells, feelings and tastes; how the same net, given ways of acting on its environment, builds a representation of what, in the world around it, it can achieve and modify; and how it then goes on to absorb language with which it can express its own internal activity (thoughts?) to a human interlocutor. Theory has it that such an organism could build up emotions from instincts and even have a true "will of its own". Indeed, the philosophers' concept of "qualia" can be shown not to be outside the representational power of a neural net.
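
To make the first of these questions concrete, here is a minimal sketch of associative storage and recall in a toy neural net. Magnus itself used "weightless", RAM-based neurons, so this Hopfield-style network is only an illustrative stand-in, not the Magnus mechanism: it stores a pattern by Hebbian learning and reconstructs it from a corrupted cue, which is the elementary sense in which a net can hold a "recallable impression".

```python
import numpy as np

def train(patterns):
    """Hebbian learning: store bipolar (+1/-1) patterns in a weight matrix."""
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)
    np.fill_diagonal(w, 0)          # no self-connections
    return w / len(patterns)

def recall(w, cue, steps=10):
    """Repeatedly update the state until it settles on a stored pattern."""
    state = cue.copy().astype(float)
    for _ in range(steps):
        state = np.sign(w @ state)
        state[state == 0] = 1       # break ties consistently
    return state

# Store one 16-element "impression" and recall it from a corrupted cue.
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1, 1, 1, -1, 1, -1, -1, 1, 1])
w = train(pattern[None, :])

noisy = pattern.copy()
noisy[:4] *= -1                     # corrupt a quarter of the cue
print("recovered:", np.array_equal(recall(w, noisy), pattern))
```

Run as written, the net recovers the stored pattern exactly from the damaged cue. The point is not the particular architecture but that recall-from-fragment falls out of the net's dynamics as a mechanical property, rather than being programmed in case by case, which is what distinguishes this line of work from puppeteering.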

Some thinkers do not like what we do and argue that "mere computer simulation" cannot capture the "sentience" of a living being. But what we do is not "mere simulation"; it is an inquiry into the nature of sentient organisms that aims at an explanation of the first person and merely uses a computer as a useful tool with which to develop and demonstrate ideas about mechanisms. Others argue that no matter what mechanism we may discover, it could well be necessary but not sufficient to explain consciousness; what is missing, on this view, is something that is not available to science. There is a gap, they argue, between neural mechanisms and consciousness.

I disagree. Their viewpoint is no more than a belief. Phlogiston and the ether were such beliefs until explanations were developed that made them redundant. Some philosophers may accurately call me an eliminativist, but it is up to them to show that neural mechanisms are insufficient for any reason other than that they believe this to be the case.

Igor Aleksander is professor of neural systems engineering, Imperial College, London.
