Cunning humans, selfless machines

April 28, 1995

Using neural networks to mimic the way the brain works brings us no closer to understanding consciousness, argues David Salt.

Neural network technology seems to cause otherwise rational researchers to start babbling about their belief systems, in particular about the possibility that the human mind can be modelled using such computer networks and that such modelling could explain the mystery of human consciousness.

Imperial College's Igor Aleksander, a leading neural network researcher, has claimed that we already "have a theory to describe 'artificially' conscious machines and we believe that the same theoretical framework can be used to explain human consciousness". John Taylor, director of the Centre for Neural Networks at King's College London, has outlined a theory of modelling the human mind that he believes could give rise to the subjective features of consciousness. So what makes these researchers confident that we are heading towards an answer to the problem of consciousness?

Academic interest in artificial intelligence was kickstarted by the first AI conference at Dartmouth College, New Hampshire, in 1956. Oddly enough, work on neural networks was carried out in the 1950s but was virtually abandoned because computers were simply not powerful enough to perform the large number of computations required by models of any size. It therefore fell largely to researchers using "symbolic" AI techniques to develop the field.

AI is a very broad research area, covering work as diverse as robotics and vision systems, software programming, natural language processing, building parallel processing computers, computer-assisted learning, cognitive science using computational techniques and the philosophy of mind. Even consciousness began to creep into AI research during the 1980s, after a long period in which it was not considered a respectable topic.

However, the central concerns of AI over the past 40 years have revolved around the question of how human beings represent knowledge and how we can build computer models to test the various theories of knowledge representation. At a very simple level the idea that human beings are rule-following machines has proved very popular, and work on artificially intelligent machines has concentrated on turning out increasingly complex software algorithms that mimic the supposed human propensity to follow rules.
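To make the flavour of that approach concrete, here is a minimal sketch of a rule-following system of the kind symbolic AI built at vastly greater scale: a toy forward-chaining production system. The facts and rules are invented for illustration; the point is that everything the program "knows" is a matter of matching and firing rules written by a human.

```python
# A toy forward-chaining production system. The facts and rules here are
# hypothetical; real symbolic AI systems encoded thousands of such rules.
facts = {"has_feathers", "lays_eggs"}
rules = [
    ({"has_feathers"}, "is_bird"),
    ({"is_bird", "lays_eggs"}, "may_build_nest"),
]

changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        # A rule "fires" when all of its conditions are established facts.
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # {'has_feathers', 'lays_eggs', 'is_bird', 'may_build_nest'}
```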

This symbolic AI approach has been so successful, at least in academic terms, that it is now commonplace for people with not the slightest interest in computing to believe that the human brain is a computer and that our thoughts and rationality are the software that runs on it. We seem to have forgotten that at previous phases in our history the mind was compared to clockwork devices, to hydraulic systems or to whatever the scientific orthodoxy happened to be. It is almost heretical these days to question the mind/software, brain/hardware view of intelligence. Yet as far as the symbolic AI approach is concerned, there are now severe doubts as to whether software algorithms based on rule-following knowledge representation schemes are the route to artificially intelligent machines.

In 1972 Hubert Dreyfus published a classic book entitled What Computers Can't Do, followed in 1993 by a revised version, What Computers Still Can't Do. Dreyfus puts the case that AI, in so far as it is based on producing software that follows rules and facts, is a degenerating science. The early promise of symbolic AI has never been fulfilled and, while we can produce useful programs such as expert systems that can be applied to limited domains of knowledge, an all-purpose AI program based on rules is not feasible in the near future. Of course, many thousands of researchers still work on symbolic AI, and Dreyfus has as many critics as he does supporters. Nevertheless there is a substantial body of opinion that supports his position, and "traditional" AI research now has a large question mark hanging over it.

Further attacks on "good old-fashioned AI" came from a different quarter. The established test of machine intelligence, albeit a crude one, has long been the Turing Test: if an interrogator sitting at a keyboard and corresponding with a hidden machine believes she is conversing with a human, then the hidden machine can be said to be intelligent. John Searle, a philosopher at the University of California, Berkeley, set out to show that a machine could pass the Turing Test and yet have no understanding of what it was doing, much less exhibit intelligence. Searle's Chinese Room thought experiment concerns a human locked in a room who is passed Chinese symbols through a slot in the door. This human, who understands only English, has the task of matching the symbols against symbols in a rulebook that has been provided. The rules may tell him to write new symbols on a piece of paper and pass them back out of the room. In effect, the human receives a question in Chinese and, by using the rulebook, is able to provide an answer in Chinese, even though he does not understand a word of the language. This, argues Searle, is precisely what a computer program is doing: receiving, manipulating and writing symbols without any understanding of what it is doing. Such systems cannot therefore be called "intelligent".
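Searle's scenario translates almost directly into code. The sketch below is a deliberately trivial rendering with an invented two-entry rulebook: the program matches incoming symbol strings against the rulebook and passes back whatever it dictates, and understanding appears nowhere in the process.

```python
# A deliberately trivial rendering of the Chinese Room. The "rulebook"
# is a hypothetical lookup table pairing input symbols with output
# symbols; nothing in the program understands Chinese.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",   # "How are you?" -> "I am well, thank you."
    "你懂中文吗？": "我当然懂。",   # "Do you understand Chinese?" -> "Of course I do."
}

def chinese_room(symbols: str) -> str:
    # Match the symbols against the rulebook and pass the answer back
    # through the slot; fall back to a stock reply for unknown input.
    return RULEBOOK.get(symbols, "请再说一遍。")  # "Please say that again."

print(chinese_room("你懂中文吗？"))  # a fluent reply, produced with zero understanding
```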

The questioning of the efficacy of rule-based reasoning, coupled with the charge that such systems are in any case in no way "intelligent", had serious repercussions. One of the effects was to split AI researchers into two camps of "weak" and "strong" AI supporters. The "weak" camp, characterised by writers such as Searle, takes the view that an artificial intelligence based on available AI techniques is not possible. "Strong" AI supporters believe it is possible to create an artificially intelligent machine.

One technology, however, does bind the various supporters of the strong school of AI. It differs in many respects from "good old-fashioned AI" in that it is not reliant on rule-based ideas; it is an elegant mathematical technique that is said to be analogous to the way the human mind functions. Enter the neural network or connectionist paradigm.

So what is a neural network? The idea is that we can produce a computational equivalent of the human brain by modelling the way the brain works. Brains are collections of highly interconnected neurons. Estimates of the number of neurons in a human brain range between 50 billion and 100 billion. Neurons may be grouped together within the brain to carry out specialised functions. Studies of patients with brain lesions suggest that some activities are localised to specific collections of neurons, while other capabilities may be controlled by widely distributed neuronal activity. The number of possible interconnections between the neurons in a brain is so astronomical that we can contemplate modelling only very small portions of it.

An important feature of human neuronal activity seems to be that operations are carried out in parallel. In other words, the brain does not wait for one activity to finish before another is initiated; rather, it processes whole collections of activity at the same time. Activity within the brain involves both electrical impulses and chemical transmission.

A feature of most computers is that they execute their stored programs in a strictly sequential fashion: the procedures in a program are carried out one step at a time, each instruction being completed before the next one can be processed. However, we can introduce parallelism into computers either with a hardware solution (using more than one central processing unit) or with a software solution, in which parallel execution is simulated using a time-slicing approach. The production of artificial neural networks (ANNs) most often relies on a software simulation of parallel processing, although hardware solutions have also been implemented. In principle there is nothing to prevent us from implementing ANNs that are very close analogues of portions of human neural networks.
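As a rough illustration of the software route, here is a minimal sketch of time-slicing, with invented tasks: a scheduler resumes each task for one small slice of work in turn, so several activities appear to advance simultaneously even though only one instruction executes at any instant.

```python
# A minimal sketch of simulated parallelism by time-slicing. Each task
# is a generator that performs one unit of work per resumption; the
# round-robin scheduler interleaves the tasks on a single processor.
def worker(name, steps):
    for i in range(steps):
        print(f"{name}: step {i}")
        yield  # hand control back to the scheduler after each slice

def round_robin(tasks):
    while tasks:
        task = tasks.pop(0)
        try:
            next(task)          # run one time slice of this task
            tasks.append(task)  # not finished, so requeue it
        except StopIteration:
            pass                # task complete; drop it

# Two "parallel" activities interleave their steps: A, B, A, B, ...
round_robin([worker("A", 3), worker("B", 3)])
```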

ANNs are constructed in layers, starting with an input layer, finishing with an output layer and having one or more "hidden" layers in between. The idea is that, given a certain set of input data, the network should be trained to produce certain desired outputs. For example, given a handwritten letter, the desired output from the system might be that the letter is a "C". Multiple examples of handwritten Cs are presented to the network and it is trained to recognise them. The training is essentially a matter of using some simple mathematical rules to adjust the strength of the connections between the layers until the desired output is achieved.
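The sketch below shows this at toy scale, with every specific invented for illustration: a network with one hidden layer learns the XOR function (a stand-in for the letter-recognition task), and the "simple mathematical rule" that adjusts the connection strengths is the standard backpropagation update on a squared-error measure.

```python
# A toy layered network: 2 inputs -> 4 hidden units -> 1 output.
# Training repeatedly nudges the connection weights to shrink the gap
# between actual and desired outputs (here, the XOR truth table).
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # desired outputs

W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))  # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))  # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10_000):
    # Forward pass: signals flow through the layers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: attribute the output error to each connection.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Adjust each connection strength in the error-reducing direction.
    W2 -= 0.5 * (h.T @ d_out); b2 -= 0.5 * d_out.sum(0, keepdims=True)
    W1 -= 0.5 * (X.T @ d_h);   b1 -= 0.5 * d_h.sum(0, keepdims=True)

print(out.round(2))  # should approach [[0], [1], [1], [0]]
```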

Where ANNs score over the old rule-based systems is that they are not specifically programmed to obey stepwise rules but learn by experience. Just as a human infant learns to recognise shapes by exposure to many examples, so does a neural network. What is even more impressive about ANNs is their ability to classify patterns they have never seen before. Presented with a novel version of a shape it has been trained to recognise, a neural network can often guess the correct classification. This, in a sense, is the Holy Grail that symbolic AI failed to locate: an artificially created network can both learn from its experience and classify new events it has not encountered before.

For some observers the behaviour of neural networks seems miraculous. However, we should be less interested in the behaviour of these systems than in the way they operate. What would a high-level conceptual description of that operation reveal? We start with a set of inputs and we instruct the system to produce a certain type of output; to achieve this it is allowed to modify the weightings of its various connections. There is nothing mysterious in this process: the program merely follows a set of instructions using fixed mathematical techniques until the desired result, or a near match, is achieved. Although it may have the appearance of learning, it is merely working out an efficient match between the methods available to it and the desired outputs.

Neural network simulations are examples of a technique that has been largely developed by researchers in AI. They are non-procedural in the sense that they do not follow a step-by-step program to accomplish their task. But they do rely on a set of algorithms, devised by humans, that determines the possible ways in which they can function. The programs are constrained to carry out only those procedures their algorithms allow. The fact that neural networks learn by trial and error, and that they can recognise novel patterns, is not a sign of autonomous intelligence; their algorithmic specification gives rise to these behaviours in just the same way that differing programming paradigms give rise to specific types of behaviour.

There is no basis for attributing intelligence or consciousness to neural networks. There is no logical reason to suppose that neural networks are conscious or will attain consciousness, nor is there any falsifiable hypothesis that can be used to defend such a position. The only thing in their favour is the ability to self-organise much more efficiently than programs based on symbolic paradigms.

Supporters of the belief that neural networks have consciousness, or will achieve it, argue that once we have a sufficiently complex system or set of systems, conscious intelligent behaviour will emerge as a consequence of that complexity. This is much the same as the theory of epiphenomenalism, which holds that consciousness and the sense of self are secondary products of the physical processes that go on in the brain. Whether or not this is true, the causal link that shows how consciousness arises from physical states has yet to be demonstrated. And even if consciousness is shown to be an incidental by-product of human brains, there is still a requirement to show how and why the same phenomenon would arise in machine brains.

We need a proper scientific theory of how physical states can give rise to mental states; without one we have no method of testing the hypothesis that a complex computer program or complex computer hardware will produce consciousness as a by-product. Neural networks are doing nothing more than applying algorithmic procedures to produce specified desired outputs. No scientific credibility should be attached to the belief that consciousness must be an emergent property of complex systems. It is no more than a mystical belief, and what is surprising is that the high priests of this religion have been able to persuade so many to join their cargo cult.

David Salt is head of computing, University of Derby.
