The Matrix got many otherwise not-so-philosophical minds ruminating on the nature of reality. But the scenario depicted in the movie is ridiculous: human brains kept alive by intelligent machines just to produce power.
There is, however, a related scenario that is more plausible and a serious line of reasoning that leads from the possibility of this scenario to a striking conclusion about the world we live in. I call this the simulation argument. Perhaps its most startling lesson is that there is a significant probability that you are living in a computer simulation. I mean this literally: if the simulation hypothesis is true, you exist in a virtual reality simulated in a computer built by some advanced civilisation. Your brain, too, is merely a part of that simulation.
What grounds could we have for taking this hypothesis seriously? Before getting to the gist of the simulation argument, let us consider some of its preliminaries. One of these is the assumption of “substrate independence”.
This is the idea that conscious minds could in principle be implemented not only on carbon-based biological neurons (such as those inside your head) but also on some other computational substrate such as silicon-based processors.
Of course, the computers we have today are not powerful enough to run the computational processes that take place in your brain. Even if they were, we wouldn’t know how to program them to do it. But ultimately, what allows you to have conscious experiences is not the fact that your brain is made of squishy biological matter but rather that it implements a certain computational architecture. This assumption is quite widely (although not universally) accepted among cognitive scientists and philosophers of mind. For the purposes of this article, we shall take it for granted.
Given substrate independence, it is in principle possible to implement a human mind on a sufficiently fast computer. Doing so would require very powerful hardware that we do not yet have. It would also require advanced programming abilities, or sophisticated ways of making a very detailed scan of a human brain that could then be uploaded to the computer. Although we will not be able to do this in the near future, the difficulty appears to be merely technical. There is no known physical law or material constraint that would prevent a sufficiently technologically advanced civilisation from implementing human minds in computers.
Our second preliminary is that we can estimate, at least roughly, how much computing power it would take to implement a human mind along with a virtual reality that would seem completely realistic for it to interact with. Furthermore, we can establish lower bounds on how powerful the computers of an advanced civilisation could be. Technological futurists have already produced designs for physically possible computers that could be built using advanced molecular manufacturing technology.
The upshot of such an analysis is that a technologically mature civilisation that has developed at least those technologies that we already know are physically possible would be able to build computers powerful enough to run an astronomical number of human-like minds, even if only a tiny fraction of their resources was used for that purpose.
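To get a feel for the orders of magnitude involved, here is a minimal back-of-envelope sketch. The constants are illustrative assumptions in the spirit of such analyses (roughly 10^16 operations per second to emulate a human brain, roughly 10^42 operations per second for a single planetary-scale computer), not figures established in this article:

```python
# Back-of-envelope estimate: how many human-like minds could a
# technologically mature civilisation simulate at once?
# All constants are illustrative assumptions, not established figures.

BRAIN_OPS_PER_SEC = 1e16      # assumed ops/sec to emulate one human brain
PLANET_COMPUTER_OPS = 1e42    # assumed ops/sec of one planetary-scale computer
FRACTION_USED = 1e-6          # assume a mere millionth of resources is devoted

minds = PLANET_COMPUTER_OPS * FRACTION_USED / BRAIN_OPS_PER_SEC
print(f"Simultaneous human-like minds: {minds:.0e}")  # prints 1e+20
```

Even with only a millionth of one such computer's capacity devoted to the task, the count comes out at around 10^20 minds - astronomical in exactly the sense the argument requires.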
If you are such a simulated mind, there might be no direct observational way for you to tell; the virtual reality that you would be living in would look and feel perfectly real. But all that this shows, so far, is that you could never be completely sure that you are not living in a simulation.
This result is only moderately interesting. You could still regard the hypothesis as too improbable to be taken seriously.
Now we get to the core of the simulation argument. It does not purport to demonstrate that you are in a simulation. Instead, it shows that we should accept as true at least one of the following three propositions: (1) the chance that a species at our current level of development avoids going extinct before becoming technologically mature is negligibly small; (2) almost no technologically mature civilisations are interested in running computer simulations of minds like ours; (3) you are almost certainly living in a simulation.
Each of these three propositions may be prima facie implausible; yet, if the simulation argument is correct, at least one of them is true (the argument does not tell us which).
While the full version of the argument employs some probability theory and formalism, its gist can be understood in intuitive terms. Suppose that proposition (1) is false. Then a significant fraction of all species at our level of development eventually becomes technologically mature. Suppose, further, that (2) is also false. Then a significant fraction of these technologically mature species will use some portion of their computational resources to run computer simulations of minds like ours. But, as we saw earlier, the number of simulated minds that any such technologically mature civilisation could run is astronomically huge.
Therefore, if both (1) and (2) are false, there will be an astronomically huge number of simulated minds like ours. If we work out the numbers, we find that there would be vastly many more such simulated minds than there would be non-simulated minds running on organic brains. In other words, almost all minds like yours, having the kinds of experiences that you have, would be simulated rather than biological. Therefore, by a very weak principle of indifference, you would have to think that you are probably one of these simulated minds rather than one of the exceptional ones that are running on biological neurons.
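This step can be captured in a single ratio. Using notation introduced here for illustration (the published paper develops the formalism more carefully): let f_p be the fraction of species at our level that reach technological maturity, f_I the fraction of mature civilisations that run such simulations, and N̄ the average number of simulated minds run per biological mind in a civilisation's history. The fraction of all minds like ours that are simulated is then

$$
f_{\mathrm{sim}} = \frac{f_p \, f_I \, \bar{N}}{f_p \, f_I \, \bar{N} + 1}
$$

Because N̄ is astronomically large, f_sim is driven towards 1 unless the product f_p f_I is driven towards 0 - and those two escape routes are just propositions (1) and (2), while f_sim being close to 1 is proposition (3).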
So if you think that (1) and (2) are both false, you should accept (3). It is not coherent to reject all three propositions. In reality, we do not have much specific information to tell us which of the three propositions might be true. In this situation, it might be reasonable to distribute our credence roughly evenly between the three possibilities, giving each of them a substantial probability.
Let us consider the options in a little more detail. Possibility (1) is relatively straightforward. For example, maybe there is some highly dangerous technology that every sufficiently advanced civilisation develops, and which then destroys them. Let us hope that this is not the case.
Possibility (2) requires that there is a strong convergence among all sufficiently advanced civilisations: almost none of them is interested in running computer simulations of minds like ours, and almost none of them contains any relatively wealthy individuals who are interested in doing that and are free to act on their desires. One can imagine various reasons that may lead some civilisations to forgo running simulations, but for (2) to obtain, virtually all civilisations would have to do that. If this were true, it would constitute an interesting constraint on the future evolution of advanced intelligent life.
The third possibility is philosophically the most intriguing. If (3) is correct, you are almost certainly living in a computer simulation created by some advanced civilisation. What empirical implications would this have? How should it change the way you live your life?
Your first reaction might be to think that if (3) is true, one would go crazy if one seriously believed one was living in a simulation.
To reason thus would be an error. Even if we were in a simulation, the best way to predict what will happen next in it is still to use the ordinary methods - extrapolation of past trends, scientific modelling, common sense and so on. To a first approximation, if you thought you were in a simulation, you should get on with your life in much the same way as if you were convinced that you were living a non-simulated life at the bottom level of reality.
The simulation hypothesis, however, may have some subtle effects on rational everyday behaviour. To the extent that you think that you understand the motives of the simulators, you can use that understanding to predict what will happen in the simulated world they created. If you think that there is a chance that the simulator of this world happens to be, say, a true-to-faith descendant of some contemporary Christian fundamentalist, you might conjecture that he or she has set up the simulation in such a way that the simulated beings will be rewarded or punished according to Christian moral criteria.
An afterlife would, of course, be a real possibility for a simulated creature (who could either be continued in a different simulation after her death or even be “uploaded” into the simulator’s universe and perhaps be provided with an artificial body there). Your fate in that afterlife could be made to depend on how you behaved in your present simulated incarnation.
Other possible reasons for running simulations include the artistic, scientific or recreational. In the absence of grounds for expecting one kind of simulation rather than another, however, we have to fall back on the ordinary empirical methods for getting about in the world.
If we are in a simulation, is it possible that we could know that for certain? If the simulators don’t want us to find out, we probably never will. But if they choose to reveal themselves, they could certainly do so.
Maybe a window informing you of the fact would pop up in front of you, or maybe they would “upload” you into their world. Another event that would let us conclude with a very high degree of confidence that we are in a simulation is if we ever reach the point where we are about to switch on our own simulations. If we start running simulations, that would be very strong evidence against (1) and (2). That would leave us with only (3).
Nick Bostrom is a British Academy postdoctoral fellow in the philosophy faculty at Oxford University. His simulation argument is published in The Philosophical Quarterly.
Doom: it’s not Black Death but men in black leather
Mankind’s future has been looking pretty grim in recent years.
Anyone who caught The Matrix in 1999, A.I. in 2001 and Minority Report in 2002 might be forgiven for not feeling too positive about the shape of things to come. And it isn't going to get much better in 2003 either, with Equilibrium positing a world where all human feeling is suppressed, and the first of this year's two Matrix sequels due to open this weekend. Even opera has joined in, with the English National Opera producing the UK premiere of Poul Ruders's dystopian music drama The Handmaid's Tale, based on Margaret Atwood's book.
Where has all this gloom come from, and why are we so attracted to these bleak prefigurements?
Once upon a time, the future looked bright and trouble-free. We would be high-tech, squeaky clean and well organised, floating serenely across the solar system to the strains of Blue Danube. But not any more.
Martin Skinner, lecturer in psychology at Warwick University, studies contemporary trends and believes that our images of the future are greatly influenced by what we fear. “In a culture that is in the thrall of technology, there is always going to be a susceptibility to the vision of that technology getting out of control,” he says. Skinner notes that even as society embraces technology, there is a fear that something this good and powerful could turn nasty. “Most people have at least some lingering doubts about the consequences of increasingly powerful information technology, genetic tampering and the sustainability of the technological incline.”
While these fears may be influenced by some very real technological horrors unleashed in the 20th century, one has to ask why significantly more frightening centuries - such as the 14th and 15th, when the future of mankind hung in the balance - didn't also spawn dystopian art forms. Lyn Pykett, professor of English at the University of Wales, Aberystwyth, suspects that experiences such as the Black Death produced dystopian religions instead. “I suppose it was experienced as an act of nature or of God - it caused cultural upheaval but was not of itself a direct product of culture. Perhaps today we put our doubts and fears into literature because we no longer give such credence to religion.”
The concept of dystopia has been around for some 135 years. The first recorded use of the term is attributed to John Stuart Mill, addressing the House of Commons on March 12, 1868: “It is, perhaps, too complimentary to call them Utopians, they ought rather to be called dys-topians, or caco-topians.
What is commonly called Utopian is something too good to be practicable; but what they appear to favour is too bad to be practicable.”
Mill’s new coinage inverted our notion of the utopian - something that had been extrapolated from Thomas More’s 16th-century image of idealised communal life. This seminal utopia was located not in an imaginary future but on an imaginary island. In the opinion of Richard Serjeantson, who lectures on the history of utopianism at Cambridge University, writers of the time felt free to take liberties with geography. “Early utopianism, by which I mean 1516-1789, was always set in distant countries because the world was still terra incognita,” he says.
Serjeantson recognises that our appetite for dystopia developed later. “There were no dystopias before the French revolution, probably because there were no real radical threats to society,” he says. “There were backhanded utopias, satires such as Swift’s Gulliver’s Travels and Joseph Hall’s Mundus Alter et Idem, but these were not dystopias of the kind that emerged after 1789. I think the French revolution and the Terror had a big effect - people could see that ambitious radical schemes could have terrible consequences.”
For dystopias to arise in the popular imagination, mankind first needed a concept of the future. But with the eccentric and cryptic exception of Nostradamus, it wasn’t until 1731 that a thorough attempt to conceive of life in subsequent centuries was published: Samuel Madden’s stunningly original Memoirs of the 20th Century, which pictured England flourishing, 200 years hence, under King George VI.
Then came Louis-Sébastien Mercier’s L’An 2440, Serjeantson says. “It was one of the bestselling books of the ancien régime, but my sense is that images of the future - both utopian and dystopian - only really got going after the French revolution,” he says. “By then, geographical discovery had given us a clearer idea of the world we lived in, so writers had to turn their imagination more to the past - and to the future.”
In the opinion of Andrew Milner, a cultural historian at Monash University, Australia, early literary prognoses were almost always upbeat. “Edward Bellamy’s scientistic Looking Backward 2000-1887 and William Morris’ very different, neo-Romantic News from Nowhere both attest to the wide political significance of 19th-century utopianism.”
But every action provokes reaction. Against the Victorian idealism of Jules Verne and Bulwer-Lytton emerged proto-dystopians such as H. G. Wells (The Time Machine and When the Sleeper Wakes) and Sir George Chesney, whose The Battle of Dorking prefigured George Orwell in its futuristic depiction of Britain being crushed under the European jackboot.
The 20th century saw utopia becoming increasingly gadget-oriented.
Technology would set us free, argued writers such as David Goodman Croly, John Jacob Astor and John Maynard Keynes. “But a parallel decentring of utopianism developed on the right and the anti-communist left,” Milner says. “The dominant literary and philosophical modes became the critique of communism and other totalitarianisms as utopias gone sour, which led directly to dystopias such as Zamiatin’s We and Orwell’s Nineteen Eighty-Four.”
By the mid-20th century, the long utopian tradition appeared to have fallen wholly into disrepute. “This moment is taken up and popularised in the American science fiction of the 1950s and 1960s,” Milner says. “It was Chad Walsh who popularised ‘dystopian’ ideas and terminology in his writings.”
Walsh and his compatriot Philip K. Dick produced some seminal dystopian scenarios in the 1960s, including stories that were to spawn films such as Total Recall, Blade Runner and Minority Report. The notion of failed utopias also caught the popular imagination at this time with dystopian fantasies such as Zardoz and Logan’s Run. All shared the same basic backstory: technology has run wild, the planet is damaged, survivors hide themselves away and create a bizarre belief system that is supposed to support their continued existence but in fact becomes a principal cause of repression.
Paranoia runs through all these fictions, but do they reflect a genuine popular pessimism about mankind’s future? Skinner detects a certain amount of fashionable gloom.
“At the individual level, one might note that the recent cinematic dystopias are for a reasonably young audience,” he says. “I doubt if many of this audience are overly concerned with genetic manipulation or sustainable technologies, but they do like to see a more anarchic social order where the rules, regulations and hierarchies - at the bottom of which they feel they are stuck - have been re-ordered. In these films, the whole oppressive-seeming nature of society is cast aside and there is room for the rebel, the individual, the unsophisticated or untutored to rise and be influential.”
Whether directors Ridley Scott (Blade Runner and Alien) or James Cameron (Terminator and Aliens) share such a bleak view of the 21st century is debatable. But they both know that dressing everyone in black leather and making wild with the mayhem is big box office. Since 2000, even that barometer of mainstream commercial taste, Steven Spielberg, whose early glimpses of the future - Close Encounters and E.T. - were distinctly optimistic, has joined the gloom industry with A.I. and Minority Report. Meanwhile, Kurt Wimmer’s Equilibrium features Christian Bale as a kung fu-fighting Winston Smith figure whose martial-arts prowess blasts Big Brother away.
So how gloomy is the long-term prognosis for fashionable dystopia? Milner is one of a number of academics who believe utopia is on its way back. “In the late 20th century, there was a real utopian revival, associated with anti-racism and the movement for gay rights, feminism, environmentalism and their reconciliation in eco-feminism,” he says. “It found significant aesthetic expression in the science fiction of writers such as Ursula Le Guin, Joanna Russ, Marge Piercy, Samuel Delany and, more recently, Kim Stanley Robinson and Iain M. Banks.”
At Cambridge, Gareth Stedman Jones, professor of political science and director of the Centre for History and Economics, also believes that utopianism has not been vanquished. “There are various reasons, both good and meretricious, why the utopianisms of the 1790-1920 period broke down and were succeeded by an array of dystopian visions in the second half of the 20th century - mainly distrust of knowledge and science. But I see no reason why this should be our permanent state of mind.”
Nevertheless, there’s no doubt that for the moment dystopia continues to fascinate - and to photograph really well. Maybe in the end, we will have to tire of the black leather trousers before we revise our intellectual perspective.