
The end is nigh

February 16, 1996

John Leslie argues that if you view the certainty of global warming, the likelihood of nuclear war and the possibility of a grey goo calamity from the perspective of the doomsday argument, nobody should bet on humanity's long-term survival

What are the chances of Doom Soon, the imminent extinction of humankind? I believe that humanity may have little better than an even chance of surviving the next 500 years. Inclined at first to put the risk of Doom Within Five Centuries at only about 5 per cent, I found myself revising this to 40 per cent. I reached this conclusion after considering the various dangers facing us in the light of what has come to be known as "the doomsday argument", which has made me much less optimistic about the future of humankind.

Where did the figure of 5 per cent come from? Although a "guesstimate", it was not pulled out of thin air. Consider the threat of nuclear war, for a start. The collapse of the Soviet Union has not made the bombs vanish, and the danger of accidental war may be higher than ever. What is more, nuclear weaponry is spreading even to poor nations.

Advances in genetic engineering have also made biological warfare far more threatening. As early as 1985, Margaret Thatcher was saying that biological weapons were potentially as dangerous as nuclear ones. Essentially new diseases can be developed by gene splicing and then distributed in aerosol form. An aggressor nation could produce vaccines to protect itself, but the vaccines might fail.

The growing pollution crisis is dramatically illustrated by the radioactive fallout from Chernobyl, yet the Soviet Union's nuclear bomb programme had already produced pollution equivalent to 20 Chernobyls. A far more threatening case of pollution, however, is the production of gases that attack the ozone layer. This could be truly disastrous, since we cannot cover the world's plants and animals with sunburn cream. The vulnerability of oceanic phytoplankton, crucial to the health of the planet, is of special concern. Yet efforts to limit ozone destroyers are running into difficulties.

Greenhouse warming, produced by man-made increases in carbon dioxide, methane and other gases, could also be fatal. In order to reach the consensus needed for swaying politicians, the Intergovernmental Panel on Climate Change rejected worst-case estimates. Yet numerous climatologists believe that failure to cut greenhouse gas emissions would mean a large chance of a greenhouse runaway. Methane - molecule for molecule some 30 times as powerful a warmer as carbon dioxide - could be released in huge amounts from warming lands and continental shelves. More warming, more methane; more methane, more warming. And then other factors join in, temperatures shooting upwards calamitously.

Greenhouse warming, ozone losses, the poisoning of air, land and water by thousands of chemicals now synthesised industrially, the destruction of habitats such as rain forests, coral reefs and wetlands, and the concentration of modern agriculture on just a few species - all might culminate in environmental collapse. Earth's population doubling-time is now as short as 35 years. By the year 2010, every second person will live in a city.

In cities, diseases quickly develop dozens of new strains which are then spread worldwide by travellers. When it first reached Australia, myxomatosis killed all but two in every 1,000 infected rabbits. Might not something like that hit us humans soon? Just how lucky are we that Aids is not spread by coughing, like the related visna virus that infects sheep?

More exotic dangers are also worth a mention. You may know that a supernova could destroy Earth's ozone layer, or that we could suffer a comet or asteroid impact like the one that exterminated the dinosaurs, but have you heard of the quark-matter and vacuum-metastability disasters that might result from experiments in high-energy physics? In the first, an initially small lump of quark matter, produced by some new particle accelerator, changes all that it touches into more of itself until the entire Earth is consumed. Admittedly, people think that any quark matter would instead repel ordinary matter, but nobody can yet be sure. And the popular argument that cosmic rays would already have seeded a quark-matter disaster, if one were physically possible, perhaps forgets that the rays would be producing quark matter only in minuscule lumps and only in the upper atmosphere. Every one of these lumps could decay radioactively long before touching enough ordinary matter to let it grow. Again, while even head-on collisions between cosmic rays - some of them pack as much punch as rifle bullets - have not reached the energies needed to initiate a vacuum-metastability disaster, some physicists think such energies will be attainable during the next two centuries.

The idea here is that space is filled with a field which is like a ball trapped in a hollow. It cannot run downhill unless given a violent shove. Some future physicist's nice new accelerator might produce such a shove, creating a tiny bubble in which the field "dropped to the valley". The bubble would expand at nearly the speed of light, destroy the whole galaxy, and then just keep on going.

To turn to something less bizarre, there are the possible dangers of genetic engineering for agricultural purposes. Conceivably this could end in tragedy: for instance in a "green scum disaster" in which some ingeniously modified organism reproduced itself well enough to choke out everything else. Or again, a rather similar "grey goo calamity" might one day be caused by nanotechnology. This means the use of very tiny machines able to produce all sorts of things, including more of themselves, because each carries its own highly miniaturised computer.

For the grey goo, however, one would probably need to wait another two or three centuries. Not highly miniaturised computers but full-sized ones are what pose the main threat in the near future. They control nuclear weapons to a degree kept secret from us. They govern increasingly important information systems which might break down catastrophically. And people working on artificial intelligence often expect computers to overtake humans during the coming century. Several of them suggest that it might then be a good thing if absolutely all of us were replaced by computers, which could be happier and longer-lived than humans as well as being smarter. At least one expert has announced that he is working towards this. The computers might soon be working towards it themselves.

Bad philosophy might be playing a role here, since it is far from clear that computers - no matter how smart - would ever be truly conscious and therefore have lives worth living. And bad philosophy is definitely present in ethical theories which tell us that the extinction of all intelligent life would be no tragedy. Numerous philosophers now think of ethics as just a way of reducing conflicts between people, if there are any people. They teach that merely possible people of future generations, people who would never in fact exist if, say, we lost the ozone layer, cannot possess anything as real as a right to be born. Such doctrines would be yet another threat to our survival, assuming that anyone listened to philosophers.

Such dangers are worrying enough but when you consider them in the light of the doomsday argument - discovered a dozen years ago by the Cambridge cosmologist Brandon Carter - then the threat they pose is greatly magnified. Carter himself kept fairly quiet about the doomsday argument but Richard Gott, professor of astrophysics at Princeton, rediscovered it recently for himself. As Carter and Gott point out, it is a natural application of the "anthropic principle" stated by Carter in the early 1970s.

The anthropic principle reminds us that we may well live in a highly unusual place, time or universe. Even if almost all places, times or universes were utterly hostile to life, we intelligent life-forms would have to find that our place, our time and our universe were life-permitting. But while it can in this way encourage us to think our location exceptional, "anthropic" reasoning can also warn us against thinking it more exceptional than is necessary for intelligent beings to find themselves there. Suppose, for instance, that 100,000 technological civilisations, all of roughly the same size, will evolve during the lifetime of our universe. Some technological civilisation has to be the very earliest, but don't hurry to believe that you live in it. If all individuals in these civilisations believed such a thing, only about one in 100,000 would be right.
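To make that arithmetic concrete, here is a minimal simulation sketch; the 100,000-civilisation count and the uniform self-sampling come from the thought experiment above, while the trial count and variable names are merely illustrative:

```python
import random

# Toy self-sampling model, assuming 100,000 technological civilisations
# of roughly equal size, as in the thought experiment above. An observer
# who believes "mine is the very earliest civilisation" is right only
# when they happen to occupy position 0 in the temporal ordering.
N_CIVILISATIONS = 100_000
TRIALS = 10_000_000

correct = sum(random.randrange(N_CIVILISATIONS) == 0 for _ in range(TRIALS))
print(f"Fraction of observers who guessed right: {correct / TRIALS:.7f}")
# Prints roughly 0.0000100 - about one in 100,000, as the argument says.
```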

What if the human race really were to become extinct soon, perhaps during the next century? This would mean that you and I had been rather ordinarily located in human population history. Because of the population explosion, roughly one in ten of all the humans who have ever entered the world are alive today, you and me among them. But what if the human race instead survived for many more centuries at its present size, or what if it managed to colonise its entire galaxy, which could take just a few million years? You and I would then have belonged to an exceptionally tiny class: perhaps only one in 10,000 humans would have lived when we did, or maybe fewer than one in ten million. Now, we ought to have some reluctance to think ourselves so highly exceptional when it is fairly easy to believe instead that the human race will soon become extinct through nuclear warfare, loss of the ozone layer or some other disaster.
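A back-of-envelope check of those fractions, using round demographic figures that are editorial assumptions rather than numbers given in the article:

```python
# Illustrative mid-1990s figures (assumptions, not the article's): about
# 60 billion humans ever born, about 5.7 billion alive at present.
ever_born = 6.0e10
alive_now = 5.7e9
print(f"Share of all humans so far who are alive now: {alive_now / ever_born:.3f}")
# Prints 0.095 - roughly one in ten, matching the claim above.

# If the race colonised its galaxy, the eventual total might reach
# something like 1e18 people (a purely hypothetical figure). Everyone
# born up to now would then be a vanishing sliver of all humans ever.
eventual_total = 1.0e18
print(f"Share under galactic colonisation: {ever_born / eventual_total:.0e}")
# Prints 6e-08 - comfortably fewer than one in ten million.
```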

That, in a nutshell, is the doomsday argument. It uses the principle, central to Bayesian probability theory, that hypotheses become weightier when they make sense of what is actually observed. Just like any other, the hypothesis that humankind will soon become extinct could be weighed with the help of this principle. If the human race suffered extinction shortly, you would have been rather ordinarily located in human population history; if it survived for millions of years, you would have been rather exceptionally located. Draw your conclusions. While the risk of Doom Soon might at first seem small, the doomsday argument could lead us to view it as rather disturbingly large.
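The underlying Bayesian update can be sketched with illustrative numbers (the 5 per cent prior echoes the figure earlier in the article; the population totals are editorial assumptions). Write D for Doom Soon, suppose the total number of humans who will ever live is N_D of about 10^11 if doom comes soon but N_S of about 10^14 if the race survives long, and treat your birth rank as a uniform random draw from whichever total is actual, so that each hypothesis assigns your particular rank a likelihood of one over its total:

\[
P(D \mid \text{rank}) \;=\; \frac{P(D)/N_D}{P(D)/N_D + P(\neg D)/N_S}
\;=\; \frac{0.05/10^{11}}{0.05/10^{11} + 0.95/10^{14}} \;\approx\; 0.98.
\]

On these particular numbers a 5 per cent prior is driven up a thousandfold in odds; Leslie's own, more cautious weighing of the considerations moved him only from 5 to 40 per cent.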

John Leslie is professor of philosophy at the University of Guelph. His The End of the World: the science and ethics of human extinction will be published in March by Routledge, price £16.99.
