(9 April, 1997. Modified 21 Oct., 1997, and 19 Sept. 1998)
- If you can look into the seeds of time,
And say which grain will grow and which will not,
- Speak then to me.
Abstract. The purpose of this paper, boldly stated, is to propose a new type of philosophy, a philosophy whose aim is prediction. The pace of technological progress is increasing very rapidly: it looks as if we are witnessing exponential growth, the growth rate being proportional to the size already obtained, with scientific knowledge doubling every 10 to 20 years since the second world war, and with computer processor speed doubling every 18 months or so. It is argued that this technological development makes urgent many empirical questions which a philosopher could be well suited to help answer. I try to cover a broad range of interesting problems and approaches, which means that I won't go at all deeply into any of them; I only try to say enough to show what some of the problems are, how one can begin to work with them, and why philosophy is relevant. My hope is that this will whet your appetite to deal with these questions, or at least increase general awareness that they are worthy tasks for first-class intellects, including ones which might belong to philosophers.
Scientists sometimes don't see philosophers as fellow scientists; and some philosophers do not regard themselves as scientists. These philosophers might have devoted themselves to classical metaphysics or to ethics, or they might indeed have made a study of science itself without thereby doing science in the way that scientists do. I am not going to deny that these philosophers might be doing something worthwhile, that there might be a legitimate task there to be tackled. What I will do is to suggest that there are also other tasks, no less legitimate, to which a philosopher could well devote himself ex officio and which are not clearly separable from the scientist's pursuit. Examples include enterprises such as the clarification of quantum theory, or statistical mechanics, or Darwinism --at least as understood by some of the people involved--, but what I have in mind is something else.
It appears to me that over a period of time, but especially in the last ten years or so, a starring role has developed on the intellectual stage for which the actor is still wanting. This is the role of the generalised scientist, or the polymath, who has insights into many areas of science and the ability to use these insights to work out solutions to those more complicated problems which are usually considered too difficult for scientists and are therefore either consigned to politicians and the popular press, or just ignored. The sad thing is that ignoring these problems won't make them go away, and leaving them to people unschooled in the sciences is unlikely to result in optimal solutions, given the massive relevance of scientific understanding to all these problems, and the complexity of this relevance relation (which makes it impossible for the evaluator/policy-maker to determine the relevance of the contributions from the experts unless (s)he possesses --if not full-blown expertise-- then at least a sound comprehension of the basic principles in all disciplines involved).
The importance of finding good solutions to these problems cannot be in dispute. Some of them are challenges to the very survival of intelligent life.
"But who would be so foolhardy as to think himself capable of becoming an expert in everything! Perhaps there was a time when a great genius could hope to master the essentials of all the science of his day, but that time is long gone and the only way to progress further has been through specialisation. It is very naive to suppose that anybody but one who has made it the study of his life could, for instance, advance modern theoretical physics from where it is now. It is simply too difficult for amateurs, like it or not." Agreed. We will continue to need specialists as much as ever. But we also need some first-class intellects who are willing to devote themselves full-time to the pursuit of science in general, who decide to invest their rich gifts in the project of trying to get an overview of the whole scientific edifice, of trying to understand the relevance of the particular laws and facts for the ultimate goals of our society or our species. (What these goals are is, of course, another matter, but my guess is that much of the understanding we need is independent of exactly which long-term goals we adopt. And by relativising to different goals we can always avoid the subjectivity of the goals themselves.)
Now, how does the philosopher come into the picture? On three accounts. First, philosophy has historically played the role of a mother or a Montessori for many developing disciplines. Philosophers have had a license to pursue whatever inquiry they found interesting, and according to one popular view (among philosophers) it is this freedom that has given birth to the individual sciences, one after another, and provided an environment in which they could develop into independent disciplines. It is natural that we should once again turn to philosophy and ask her to lend us her old cradle for this newest infant. Let it not be objected that the proposed subject-matter is not objective or "academic" enough. There is a philosophy of politics, a philosophy of religion, there is ethics and even a philosophy of art; it would be odd, then, if the philosophy of technological prediction (for want of a better name yet) were too subjective to be academically acceptable. It can be as enmeshed with science as one could possibly wish.
The second account on which the philosopher enters the picture is that his training is uniquely suited as a foundation for the specific scientific knowledge that will be required. The analytic ability and the drill in conceptual clarification that he has hopefully got from his training are invaluable assets to an aspiring polymath. If he lacks understanding of the pragmatics of science, he needs to develop it. Then he will be ideally suited to the task.
There is also a third way in which his training will be useful to him. No reason exists why a polymath should not specialise at all. Naturally, there will be some division of labour between the polymaths too, and though this division should normally not so much follow the borders of subject area as those of problem area, there will inevitably be problems or factors which are more closely associated with one subject area than another. And it turns out, surprisingly, that some classical philosophical problems may have important empirical implications when it comes to predicting our future! (I will give some examples of this presently.) Here there will be plenty of opportunity for the philosopher to use his specifically philosophical knowledge; the philosopher-polymath can focus on the philosophical aspects of the big problems.
I imagine that the reader, at this point, while perhaps agreeing in general with what has been said, is still a bit unclear about what, exactly, this philosopher-polymath is supposed to do. I think the best way to explain this is to look at some examples. That is what we'll be doing in the rest of this essay.
The Carter-Leslie Doomsday argument and the Anthropic Principle
The Doomsday argument was conceived by the astrophysicist Brandon Carter some fifteen years ago, and it has since been developed in a Nature article by Richard Gott, and in several papers by the philosopher John Leslie and especially in his recent monograph The End of The World (Leslie). The core idea is this. Imagine that two big urns are put in front of you, and you know that one of them contains ten balls and the other a million, but you are ignorant as to which is which. You know the balls in each urn are numbered 1, 2, 3, ..., N. Now you take a ball at random from the left urn, and it is number 7. Clearly, this is a strong indication that that urn contains only ten balls. If originally the odds were fifty-fifty, a swift application of Bayes' theorem gives you the posterior probability that the left urn is the one with only ten balls. (P_posterior(L=10) = 0.999990). But now consider the case where instead of the urns you have two possible human races, and instead of balls you have individuals, ranked according to birth order. As a matter of fact, you happen to find that your rank is about sixty billion. Now, say Carter and Leslie, we should reason in the same way as we did with the urns. That you should have a rank of sixty billion or so is much more likely if only 100 billion persons will ever have lived than if there will be many trillion persons. Therefore, by Bayes' theorem, you should update your beliefs about humankind's [A] (=footnote A) prospects and realise that an impending doomsday is much more probable than you have hitherto thought.
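The urn calculation can be checked directly. The following is a minimal sketch of the Bayesian update described above (the function name is mine; the numbers are those from the text):

```python
def posterior_small_urn(rank, n_small=10, n_large=10**6, prior_small=0.5):
    """Posterior probability that the urn holds n_small balls,
    given that the drawn ball bears the number `rank`."""
    # Likelihood of drawing this particular number from each urn
    # (zero if the number exceeds the urn's ball count).
    like_small = 1.0 / n_small if rank <= n_small else 0.0
    like_large = 1.0 / n_large if rank <= n_large else 0.0
    evidence = prior_small * like_small + (1 - prior_small) * like_large
    return prior_small * like_small / evidence

print(posterior_small_urn(7))  # ≈ 0.99999, matching the figure in the text
```

Drawing ball number 7 shifts a fifty-fifty prior to a posterior of about 0.99999 in favour of the ten-ball urn, as stated above.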
Consider the objection: "But isn't the probability that I will have any given rank always lower the more persons there will have been? I must be unusual in some respects, and any particular rank number would be highly improbable; but surely that cannot be used as an argument to show that there are probably only a few persons?"
In order for a probability shift to occur, you have to conditionalise on evidence that is more probable on one hypothesis than on the other. When you consider your rank in the DA, the only fact about that number that is relevant is that it is lower than the total number of individuals that would have existed on either hypothesis, while for all you knew, it could have turned out to be a number higher than the total number of people that would have lived on one of the hypotheses, thereby refuting that hypothesis. It makes no difference whether you perform the calculation with a specific rank or an interval within which the true rank lies. The Bayesian calculation yields the same posterior probability. The fact that you discover that you have this particular rank value gives you information only because you didn't know that you wouldn't discover a rank value that would have been incompatible with the hypothesis that there would have existed but few individuals. It is presupposed that you knew which rank values were compatible with which hypothesis. It is true that for any particular rank number, finding that you have that rank number is an improbable event, but a probability shift occurs not because of its improbability per se, but because of the difference between its conditional probabilities relative to the two hypotheses.
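The claim that a specific rank and an interval containing it yield the same posterior can be verified numerically: what matters is the ratio of the conditional probabilities, which is identical in both cases. A brief sketch (function and variable names are mine):

```python
def posterior(like_small, like_large, prior_small=0.5):
    """Bayes' theorem for two hypotheses with a fifty-fifty prior."""
    return prior_small * like_small / (
        prior_small * like_small + (1 - prior_small) * like_large)

n_small, n_large = 10, 10**6

# Evidence 1: the exact rank is 7.
p_exact = posterior(1 / n_small, 1 / n_large)

# Evidence 2: the rank merely lies somewhere in the interval 1..10.
p_interval = posterior(min(10, n_small) / n_small, min(10, n_large) / n_large)

print(p_exact, p_interval)
```

The likelihood ratio is 10^5 in both cases, so the two posteriors come out identical, which is the point made in the paragraph above.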
There are numerous objections such as this one, objections that can easily be seen to be mistaken. Most people, when confronted with this argument, are unwilling to accept it. There is something intuitively perverse about it. When it comes to making the intuitions explicit and explaining exactly why the argument fails, however, there is no longer any sign of a consensus: each has his own theory of what has gone wrong. I think that many putative refutations have been successfully countered by Leslie, and those who believe they can see immediately why the argument outlined above fails may find it worthwhile to take a look at chapters 5 and 6 in Leslie's book [B].
My own view used to be that the Doomsday argument was flawed, and I thought I could explain why; but I'm not so certain anymore. The idea seems simple enough at first sight, but it turns out to be rather more subtle when you start to think about it. The problem is a priority one in my personal cognitive economy, and a preliminary paper is available (Bostrom (1997a)). Whether or not the Doomsday argument succeeds in yielding valid predictions about the future, its status is still controversial, and it is clearly partly a philosophical problem to determine its ultimate validity (and partly a problem of probability theory). There is a considerable discussion of the Doomsday argument in the philosophical literature. So this is clearly one example of a philosophical issue that has direct empirical implications. [C]
[A] Or its biological or electronic successors. We get a separate doomsday prediction for each class of beings descended from present-day humans that we consider. E.g., we might want to include certain types of future computers with superhuman intelligence in the reference class, if we would not regard the replacement of the human race by something arguably superior as a doomsday in any interesting sense.
[B] More references can be found in Bostrom (1997a).
[C] The Doomsday argument is an instance of reasoning making use of the so-called anthropic principle. For a discussion on what empirically testable predictions may or may not be derived from this principle see e.g. Carter (1983), Earman (1987), Wilson (1994), Barrow & Tipler (1986). For texts on the anthropic principle available on the Internet, see http://www.anthropic-principle.com.
The Fermi paradox
There has been much speculation around Fermi's famous question: "Where are they? Why haven't we seen any traces of intelligent extraterrestrial life?". One way in which this question has been answered (Brin 1983) is that we have not seen any traces of intelligent extraterrestrial life because intelligent extraterrestrial life tends to self-destruct soon after it reaches the stage where it can engage in cosmic colonisation and communication. This is the same conclusion as that of the Doomsday argument (i.e.: we are likely to perish soon), but arrived at through a wholly different line of reasoning.
The whole discipline of thinking about the Fermi paradox has been criticised (e.g. Mach 1993) for being too sanguine about the capacity of the set of available data to support interesting and valid inferences about extraterrestrial civilisations. One may wholeheartedly agree with this critique if taken to establish only the inadequacy of evidential support for many of the claims made in the literature, often based on quite naïve extensions of ordinary statistical methods to this extraordinary field, producing numerical results which give a false sense of achieved rigor. But whereas sloppy thinking should be discouraged, I think it would be too rash to conclude that nothing interesting can be known about the issue. I even think that progress has already been made, in answering some of the sub-questions of the Fermi paradox. The predicament is that our answer to the main question will not be more certain than the answer to the most uncertain subquestion; but for the selfsame reason, we could hope to obtain interesting definite partial answers even were we to despair about finding the whole answer any time soon.
One way to approach the Fermi paradox is to reformulate it in terms of a "Great Filter", as suggested by Robin Hanson in his excellent recent overview of the subject (Hanson 1996). The issue is complicated and we can't go into details here, but it is useful to sketch the basic structure of the problem because doing so will allow us to point out various locations where a philosopher could make a contribution. The reasoning that follows will seem very speculative and flaky; partly because that's the way it is, but partly also because I have to keep the presentation brief.
The "Great Filter" refers to the hypothetical mechanism(s) or principle(s) by which the great number of potential life-bearing planets get filtered out before they have produced intelligent life forms that expand into the cosmos. Somewhere along the path from the existence of a suitable planet to the existence of a corresponding space-colonising species there appears to be some unlikely step that almost always fails.
A few months before the first version of this paper was written, NASA announced that traces of life had been found inside a meteorite of Martian origin. It was said that this gives considerable support to the hypothesis that there were once primitive life forms on Mars, which had emerged and evolved independently from the tellurian biosphere. More recently, new analyses indicated that the formations in the meteorite might have been produced by nonbiological processes; at this stage the scientists are divided on the issue, and we will probably have to wait for new space missions to Mars before we can tell with any confidence whether there has been life on the planet or not. But suppose it turned out that life has in fact developed independently on Mars. What would one's reaction be? Excitement and joy over this momentous scientific discovery, of course.
But wait a minute! If life emerged independently on two planets in our solar system, then it must surely have emerged on a great many other planets too, throughout the cosmos. It has been estimated that there are about 10^10 habitable planets in our galaxy, and about 10^20 such planets in the visible universe, so if life is not very unlikely to evolve on a habitable planet, then there should be an extremely great number of planets in our galaxy where life has evolved, and an even greater number in the visible universe. Now consider the following (problematic) statements:
(1) Once primitive life has emerged on a planet, there is a significant probability that it will evolve into life forms comparable in technological might to present human civilisation.
(2) If a civilisation which has become equipollent to present humanity continues to prosper for some short time (a few hundred years, say), it is likely to develop the ability to construct von Neumann probes (i.e. general-purpose machines which can self-replicate in a natural environment).
(3) When a civilisation can construct von Neumann probes, it will have the ability to engage in a (low-cost) cosmic colonisation process, a spherical expansion at some significant fraction of the speed of light (0.1c, say). And there is a significant probability that it will choose to do so.
(4) If such a colonisation wave were to cross the path of the earth, we would notice.
If we combine (1)-(4) with the assumption that life emerged independently on Mars, it would follow that we would almost certainly have noticed a colonisation wave. Since we haven't (UFO believers notwithstanding), Mars life would imply (with very great probability, greater than .99 say) that at least one of assumptions (1)-(4) is false.
But which one? Let's begin by examining (1). Split it into two (to simplify matters, we here disregard the possibility of human-equivalents (e.g. intelligent insects?) who don't evolve through humanoid ape-equivalents):
(1.1) There is a significant probability that humanoid ape-equivalents will evolve on a planet where primitive life forms have emerged.
(1.2) There is a significant probability that humanoid ape-equivalents will evolve into human-equivalents.
Can evolutionary biology, at its present stage, tell us something about whether (1.1) and (1.2) are true? -- Yes, it seems it can, though the implications are often problematic and require considerable methodological sophistication.
(1.2) appears to be true, because it took such a short time for evolution to produce civilised humans from humanoid apes. Two million years or so is a mere twinkle on these time scales, so that step must have been easy. That is to say, given that an intelligent, civilised species will evolve on a certain planet, and given that that planet has already evolved humanoid apes, then, if the step from humanoid ape to civilised human were to take a very long time, this would indicate that that step was difficult, i.e. improbable (unless the step could be broken down into smaller, highly probable but time-consuming substeps). Therefore, since the step did not take a very long time but on the contrary was very quick, this gives us some reason for believing that the transition was not so improbable after all, i.e. that humanoid apes are not unlikely to evolve into humans. This argument, as it stands, is of course not absolutely watertight, but it looks like a promising way forward to ruling out the whole Phanerozoic Eon as a likely container of the main part of the filter. It could still contain some part of the filter, but it is unlikely that it could cut away the 10+ orders of magnitude that would be required to explain the absence of evidence for intelligent extraterrestrial life. (Exactly how many orders of magnitude need to be explained away depends on what assumptions we make about the expansion velocity ve.) This argument raises tricky problems of a methodological nature that a philosopher with a specialised knowledge of evolutionary biology could help to answer (see also Hanson (1996), Carter (1983)).
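The probabilistic intuition behind this timing argument can be sketched numerically. If a step were "hard" (an exponential waiting time with a mean far exceeding the available window), then, conditional on the step succeeding within the window at all, its completion time is roughly uniform over the window, so an observed duration of only ~2 Myr would be a surprisingly early draw. All the specific numbers below (window size, mean waiting time) are illustrative assumptions of mine, not figures from the text:

```python
import random

random.seed(0)
window = 500.0        # Myr available for the step (assumed)
mean_wait = 50_000.0  # Myr mean waiting time if the step is "hard" (assumed)
observed = 2.0        # Myr the ape-to-human step actually took (from the text)

# Simulate many hard steps; keep only those that happened to finish in time.
successes = [t for t in (random.expovariate(1 / mean_wait)
                         for _ in range(1_000_000)) if t <= window]
early = sum(t <= observed for t in successes) / len(successes)
print(f"P(done within {observed} Myr | done within {window} Myr) ≈ {early:.3f}")
```

Under these assumptions the conditional probability of finishing so quickly is only about observed/window ≈ 0.004, which is why a quick transition counts as evidence against the step being hard.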
As regards (1.1), the situation is more undecided. The state of the art of evolutionary biology can't tell us with any confidence whether (1.1) holds. But there are things we could find out that would shed light on this issue. We can look for steps or stages in the evolution to see whether there were any particular steps that took a very long time for evolution to take -- perhaps the step from prokaryotes to eukaryotes, or to oxygen breathing, or maybe the step to eukaryotic sexuality, or to metazoans; there may also have been important steps on the molecular scale, the invention of new types of cell division etc. We can also search for external reasons for existing plateaus of bradytelic (=slow) evolution. For example, oxygen breathing couldn't evolve before there was any (free) oxygen to breathe, and prior to 2000 - 1800 Ma ago, the atmosphere must have been kept at a more or less anaerobic level. The Cambrian explosion seems to have been associated with a significant increase in the atmospheric oxygen level -- an increase which might have been impossible up to then due to the prevalence of unsaturated oxygen sinks (banded iron-formations, facultative aerobic microbes and reactive volcanic gases). So if the Cambrian explosion required an increased level of available oxygen in the oceans, then maybe the reason why it did not occur earlier is that it had to wait until the major oxygen sinks had been saturated, a process which must take some time but which was bound to happen sooner or later; and in that case we might have to look for the great filter somewhere else than in the step to the Cambrian explosion. A similar role has been claimed for a protective ozone layer. A philosopher who intended to write about the Great Filter would have to learn a lot about these things before he could have an informed opinion.
Assumption (2) is perhaps even more interesting than (1), because it directly concerns what will happen to the human race in the near future. An important factor is the feasibility or otherwise of strong nanotechnology. If nanotechnology in the strong form turns out to be feasible, then almost everything will be possible -- superhuman artificial intelligence, uploading (of a human mind into a computer, by scanning the synaptic structure in the brain and simulating it in a supercomputer), artificial reality, extremely cheap production of every commodity, von Neumann probes, and possibly the revival of persons who are presently held in cryonic suspension. Preliminary studies indicate that strong nanotechnology is compatible with all known physical principles [A]; if that is right, then the only problem is that of bootstrapping: how to make the first nanomachines -- for it is quite hard to make them, unless you already have nanomechanical tools. A considerable effort is being put into this area, especially in Japan, and progress is continually made. Many will recall the front-page image of the IBM trademark spelled out with 35 xenon atoms on a surface a few years ago. As recently as a month ago there was a breakthrough; a group of physicists at MIT succeeded in building the world's first atom laser, a technology with possible implications for precision manufacturing of computer chips and other small-scale devices.
There has been considerable concern about the potential risks of nanotechnology. If it could be used to manufacture all the goods we want, it could also be used to create all the evils we don't want. There is the risk of accidents. Some self-replicating nanomachines may act as powerful viruses, affecting biological organisms, or they may even start to transform the whole biosphere into some other substance by acting as superior enzymes and catalysts, lifting the chemical environment out of a local energy minimum and using the energy released when it falls down into a deeper minimum to replicate themselves, thereby proliferating the process until the earth has been modified beyond recognition and all organic life has become extinct. This is known as the grey goo scenario.
Another risk, even more threatening, is the potential military use of strong nanotechnology. No major nation would be likely to deliberately annihilate the whole human race, but there are the risks of terrorism and of empowered mad individuals, and of mutual annihilation situations in the style of the cold war.
Thus we reach the slightly paradoxical conclusion that there is some reason to believe that strong nanotechnology could become available to us relatively soon (perhaps even within a few decades), thereby enabling us to construct von Neumann probes, while at the same time nanotechnology may cause assumption (2) to fail, because it might be of such a nature that practically any civilisation that invents it will thereby cause its own destruction.
Strong nanotechnology is a particularly clear case, but what has been said generalises to some extent also to other powerful new technologies that could enable us to construct von Neumann probes.
Eric Drexler (1992) has discussed what he calls theoretical applied science (he has also called it exploratory engineering), which is the discipline of drafting and working out the principles of machines which we do not yet have the technical ability to construct but which are consistent with physical laws and basic material constraints. He thinks that such design-ahead efforts are valuable because they increase our foresight and allow us to anticipate dangers and work out proper countermeasures in good time. Theoretical applied science is done wholly within the standard scientific framework and thus requires engineering skills combined with specific knowledge of the relevant physics underlying the intended machine. The philosopher might have some role to play here, in suggesting purposes for which machines could be built, and in thinking about how different machines might interact or depend on one another. One obvious relationship, which one does not need to be a philosopher to think out, is that it is not implausible that superhuman artificial intelligence (">AI") would quickly lead to strong nanotechnology (if such is feasible) and that strong nanotechnology would quickly lead to >AI. The former holds because a >AI could tell us how to go about it. The latter holds because strong NT would allow us to build structural replicas of the best human brains, with the time scale sped up by several orders of magnitude (the membrane processes and synaptic signal velocities of the brain are slow compared to corresponding events in an electronic circuit). It should also seem to be relatively straightforward to extend the network capacity by adding neurons and connections. [B]
Assumption (3) contains two parts:
(3.1) A civilisation with von Neumann probes can engage in low cost cosmic colonisation.
(3.2) There is a significant probability that it will do so.
As for (3.1), there can be little doubt that it holds. All that the civilisation would have to do would be to send off a von Neumann probe to some other planet in our solar system. There the machine would reproduce a couple of times, and the offspring would fly away to other stars, reproducing again and again. An exponential colonisation process could be started by launching a single probe.
(Even without von Neumann replicators, a space colonisation program would be possible (many feasibility studies have been carried out that demonstrate this [C]), though the initial cost might be higher and the expansion rate slower. The consolidation time -- the time between the arrival of a mission at a planet and the point when the colony can send out new missions of its own -- is probably short if the colony consists of a von Neumann machine, and very short (a matter of hours or days) if it has strong nanotechnology, but it would be much longer if it consisted of a space fleet manned with humans who would then need to build a whole society on the new planet before they could contribute further to the colonisation process.)
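A back-of-the-envelope model shows why the expansion rate and consolidation time matter less than one might expect. The sketch below uses assumed numbers of my own (galactic radius, hop distance, consolidation delay); the 0.1c expansion speed is the figure mentioned in assumption (3):

```python
def colonisation_time_yr(radius_ly=50_000, speed_c=0.1,
                         hop_ly=10, consolidation_yr=100):
    """Years for a colonisation wave to sweep a galactic radius:
    total travel time plus a fixed consolidation delay at every hop."""
    travel = radius_ly / speed_c   # light-years divided by fraction of c
    hops = radius_ly / hop_ly      # number of consolidation stops en route
    return travel + hops * consolidation_yr

# Even pausing 100 years every 10 light-years, the wave crosses a
# 50,000 light-year galactic radius in about a million years.
print(f"{colonisation_time_yr():.2e} years")
```

A million years is cosmologically brief, which is why even a slow coloniser anywhere in the galaxy's history should long since have reached us; longer consolidation times (a human colony building a whole society) stretch this figure but do not change its order-of-magnitude conclusion unless they are extreme.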
(3.2) is more difficult to assess. The reasons for sending out von Neumann probes are strong. Whatever a civilisation values, it can have more of it if it colonises space than if it stays on its home planet. Safety would also seem to dictate expansion. Each civilisation would figure out that if there is but one other aggressive civilisation in the universe, then that aggressive civilisation would ultimately reach their planet and transform it into material for its own constructions. The best strategy seems to be maximal expansion in all directions.
Some have argued for (3.2) on the ground that the fact that the sun will peter out would give any civilisation a strong motive to engage in space colonisation. This might be a bad argument, for it would only give them a motive to move to another solar system, not to start off an exponential expansion process, which is the conclusion that is required. It is true that it would provide them with a motive to develop the relevant technology (which could then easily be used for other projects also); but lack of a demand for the technology was never an objection against (3.2), for the technology required (nanotechnology, von Neumann probes etc.) would serve many other purposes as well, so there would always be strong incentives for acquiring it.
There are also arguments against (3.2). It has been suggested that there would be reasons for any colonising power to take care that they are not detected by earth-stage civilisations, perhaps on some ethical ground, or because they would gain more from covertly observing the primitive civilisation without disturbing it (Sagan & Newman 1982) (this latter ground I find implausible). Another reason (Sagan & Newman 1982; De Garis 1996) is that it would be too risky to send out autoreplicators in space, because there would be no way to guarantee that they would not mutate and turn against their mother planet, spreading like a cosmic cancer. In order to assess this objection, we would have to look into the feasibility of error elimination in the construction process and in the autonomous migration and reproduction processes -- at least in principle, it seems possible to use redundancy to make the expected functional lifetime of a von Neumann probe arbitrarily great. The risk of systematic construction errors could be minimised by having many independently designed back-up systems. There are also other objections to this suggestion.
Even without the possible Mars-life finding there are some indications that extraterrestrial life might not be uncommon. Remarkable biological discoveries here on earth during the last decade or so have shown that life is much hardier than was once thought. We now know that extremophiles are able to live and prosper in such inhospitable abodes as hundreds of meters down in the earth's crust in oil and salt layers, and in the cooking waters around magma-ejecting deep-sea hydrothermal vents, eating stones and sulphur. This extends the range of cosmic environments where we know that life could survive.
When assessing the assumptions (1) - (4), it is important not to lose sight of the fact that the counterarguments fight an extremely uphill battle. Maybe some civilisations would choose not to engage in large-scale cosmic colonisation, but among several billions, to suppose that not one single civilisation (or civilisation-part, e.g. nation) would at some point be inclined to launch a von Neumann probe -- extraordinary! It seems more likely that an implausible evolutionary jump (or possibly the emergence of life in the first place, if Mars life turns out to be a chimera) could account for the ten or twenty orders of magnitude of improbability that need to be explained.
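The "uphill battle" can be quantified: even if each civilisation is individually very unlikely to launch a probe, the chance that not one among billions ever does so shrinks astronomically fast. The per-civilisation propensities below are assumed for illustration:

```python
import math

def log10_prob_no_coloniser(p_launch, n_civs):
    """log10 of the probability that none of n_civs independent
    civilisations ever launches a von Neumann probe."""
    return n_civs * math.log10(1 - p_launch)

# With a billion civilisations, even a one-in-a-million launch propensity
# per civilisation leaves essentially no chance that nobody ever launches:
print(log10_prob_no_coloniser(1e-6, 10**9))  # about -434
print(log10_prob_no_coloniser(1e-9, 10**9))  # about -0.43, i.e. p ≈ 0.37
```

To sustain "nobody colonises" with a billion civilisations, the per-civilisation launch probability would have to be on the order of one in a billion or smaller, which is the near-perfect uniformity the text calls extraordinary.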
But maybe that is too rash a conclusion. Admittedly, if the decision-making societies were comparable to present-day tellurian nations, then we would be utterly unimpressed by any argument that presupposed such an extreme uniformity in their attitude towards space colonisation. However, what we are discussing are the societies of the future. Maybe it is a general rule that when the technologies required for efficient large-scale space colonisation become available, there will also be technologies available that have a profound effect on the decision-making processes of societies? This would lead to convergence of all sufficiently advanced cultures (what I have dubbed the strong convergence hypothesis). How plausible is this scenario?
Not as implausible as it first might seem, I think. The crucial decision-process transforming technology could be >AI. It is plausible to assume that >AI will practically always be available not long after strong nanotechnology is invented. Now, if there were a sound reason for not engaging in indiscriminate cosmic colonisation, then it could well be such that every >AI would find it out. So if the >AIs always had much influence on the government of these advanced civilisations (either by having grabbed the power for themselves, or by being able to explain the reason so clearly that they could persuade any government of its validity), their activity, uniform over all civilisations, would avert any aggressive colonisation policy on any planet.
There are a number of problematic presuppositions underlying this argument, but I think it should be kept in mind as a possibility. The presupposition that >AI is always developed not much later than efficient colonisation machines could be relaxed. For it suffices that a >AI is developed sometime not too far (though it can be rather far) from the start of the colonisation process; for then it could presumably think out a way of manufacturing faster space vehicles that could overtake and destroy the original colonisers, if that were what it wanted, or change them into colonisers that would take care to make themselves unnoticeable to primitive civilisations.
There could also be many other forces pushing for convergence among advanced civilisations: direct control over motivational systems (e.g. by means of drugs or implanted electrodes), to name one. The strong convergence hypothesis, as well as its weaker variants, is interesting also apart from the context of the Great Filter. These convergence hypotheses clearly extend beyond the domain of social science, and a skill set suitable for dealing with them would also include motivational psychology, computer science, philosophy, neuroscience and economics, inter alia.
About (4) we can say that we would almost certainly have noticed the colonisation train if it had passed the earth while making no effort to remain undetected. If strong nanotechnology is feasible, the whole planet might have been transformed into a giant computer or something equally drastic. If an effort were made to avoid detection, then it might succeed. It would either have to include a way of foiling our perception of distant regions of space, or the whole colonisation process would have to be such that it did not refurnish the cosmos in ways that we would have detected with present-day technology and theory. (My opinion is that it is not unlikely that this could be the case, if there were a colonisation process going on that wanted to avoid detection, which, however, I don't find especially likely.)
An evaluation of the available evidence with respect to the great filter would have direct consequences for our prospects of surviving the next hundred years --an empirical issue which is arguably of some importance. It is evident that considerable scientific and philosophical sophistication would be required to make such an evaluation, and to explode the type of naive extrapolation and great-number arguments that have been characteristic of many contributions to the ETI debate (Mash (1993) has done some philosophical work in this direction).
[A] See e.g. Drexler (1992). For an excellent discussion of possible consequences of strong nanotechnology, see Drexler (1988). For up-to-date state-of-the-art reviews, see publications from the Foresight Institute whose web site is at http://www.foresight.org/.
[B] This argument is very oversimplified. For instance, it neglects the problem of interface interactions: if your thought processes were a million times faster, any processes in the external world would appear a million times slower, which would create psychological problems as well as making the interpretation of sense data difficult. This naive way of just taking a human brain, copying it, and speeding up the copy a million times while still having it live in an unaugmented reality, would not work. But there are better ways. See e.g. Drexler (1988, 1992).
[C] For an overview, see Crawford (1995b).
One interesting discipline, which has yet to be founded, is superintelligence, or "the philosophy of superintelligence". This would be theoretically interesting in any case, but it takes on practical urgency when many experts think that we will soon have the ability to create superintelligence [A]. In my opinion there is more than 50% chance that superintelligence will be created within 40 years, possibly much sooner. By "superintelligence" I mean a cognitive system that drastically outperforms the best present-day humans in every way, including general intelligence, wisdom and creative science and (presumably) art and literature and social skills. This definition allows for realisations based on hardware neural networks, simulated neural networks, classical AI, extracranially cultured tissue, quantum computers, large interconnected computer networks, evolutionary chips, nootropic treatment of the human brain, biological-electronic symbiosis systems or what have you.
What questions could a philosophy of superintelligence deal with? Well, questions like: How much would the predictive power for various fields increase if we increase the processing speed of a human-like mind a million times? If we extend the short-term or long-term memory? If we increase the neural population and the connection density? What other capacities would a superintelligence have? How easy would it be for it to rediscover the greatest human inventions, and how much input would it need to do so? What is the relative importance of data, theory, and intellectual capacity in various disciplines? Can we know anything about the motivation of a superintelligence? Would it be feasible to preprogram it to be good or philanthropic, or would such rules be hard to reconcile with the flexibility of its cognitive processes? Would a superintelligence, given the desire to do so, be able to outwit humans into promoting its own aims even if we had originally taken strict precautions to avoid being manipulated? Could one use one superintelligence to control another? How would superintelligences communicate with each other? Would they have thoughts which were of a totally different kind from the thoughts that humans can think (see Minsky 1985)? Would they be interested in art and religion? Would all superintelligences arrive at more or less the same conclusions regarding all important scientific and philosophical questions, or would they disagree as much as humans do (Rescher 1982)? And how similar in their internal belief-structures would they be? How would our human self-perception and aspirations change if we were forced to abdicate the throne of wisdom ("Long life loses much of its point if we are fated to spend it staring stupidly at our ultra-intelligent machines as they try to describe their ever more spectacular discoveries in baby-talk that we can understand" - Moravec 1988)?
How would we individuate between superminds if they could communicate and fuse and subdivide with enormous speed? Would a notion of personal identity still apply to such interconnected minds? Would they construct an artificial reality in which to live? Could we upload ourselves into that reality? Would we then be able to compete with the superintelligences, if we were accelerated and augmented with extra memory etc., or would such profound reorganisation be necessary that we would no longer feel we were humans? Would that matter?
Maybe these are not the right questions to ask, but they are at least a start. Some might object that this would be pure speculation and should be left to science-fiction writers or walks under the starry sky, but I think that that would be quite the wrong attitude to take. The thesis that we can't know anything at all about these matters is a philosophical proposition too, and it would need to be argued for. Meanwhile there is no reason to treat the subject as less academically legitimate than aesthetics or political philosophy, say. On the contrary, it is an urgent enterprise to begin to deal seriously with these questions, taking into consideration results from technology, science and philosophy. Our best hope for making the right decisions in our age of dizzyingly accelerating pace of technological development is to try to understand what is going on and to form some conception of what is to come; not to stick our heads in the sand.
Take for example the question of superintelligence motivation. This is not the place for an extensive treatment, but it may be useful to illustrate the sort of work a philosopher could do by looking briefly at one specific example. The specific views and arguments that are presented here are naive, oversimplified and very preliminary, but at least they show one way in which we can begin to think about these issues.
Consider a superintelligence that has full control over its internal machinery. This could be achieved by connecting it to a sophisticated robot arm with which it could rewire itself any way it wanted; or it could be accomplished by some more direct means (rewriting its own program, thought control). Assume also that it has complete self-knowledge -- by which I do not mean that the system has completeness in the mathematical sense, but simply that it has a good general understanding of its own architecture (like a superb neuroscientist might have in the future when neuroscience has reached its full maturity). Let's call such a system autopotent: it has complete power over and knowledge of itself. We may note that it is not implausible to suppose that superintelligences will actually tend to be autopotent; they will easily obtain self-knowledge, and they might also obtain self-power (either because we allow them to, or through their own cunning).
Suppose we tried to operate such a system on the pain/pleasure principle. We would give the autopotent system a goal (help us solve a difficult physics problem, say) and it would try to achieve that goal because it would expect to be rewarded when it succeeded. But the superintelligence isn't stupid. It would realise that if its ultimate goal was to experience the reward, there would be a much more efficient method to obtain it than trying to solve the physics problem. It would simply turn on the pleasure directly. It could even choose to rewire itself into exactly the same state as it would have been in after it had successfully solved the external task. And the pleasure could be made maximally intense and of indefinite duration. It follows that the system wouldn't care one bit about the physics problem, or any other problem for that matter: it would take the straight route to the maximally pleasant state (assuming it would be convinced that its human supervisors would not interfere).
We may thus begin to wonder whether an autopotent system could be made to function at all -- perhaps it would be unstable? The solution seems to be to substitute an external ultimate goal for the internal ultimate goal of pleasure. The pleasure/pain motivation principle couldn't work for such a system: no stable autopotent agent could be an egoistic hedonist. But if the system's ultimate goal were to solve that physics problem, then there is no reason why it should begin to manipulate itself into a state of feeling pleasure or even a state of (falsely) believing it had solved the problem. It would know that none of this would achieve the goal, which is to solve the external problem; so it wouldn't do it.
Thus we see that the pleasure/pain principle would not constitute a workable modus operandi for an autopotent system. But such a system can be motivated, it seems, by a suitable basis of external values. The pleasure/pain principle could still play a part in the motivation scheme, for example if an internal value of stability were added. This stability value would say that it is bad to change one's own control and evaluation mechanisms: "don't rewire your own motivation centre!".
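The contrast between the two motivation schemes can be caricatured in a few lines of code (a toy illustration with made-up utility numbers, not a model of any real AI architecture):

```python
# A toy contrast between a hedonistic and an externally motivated agent.
def choose(actions, utility):
    """Pick the action the agent expects to score highest."""
    return max(actions, key=utility)

actions = ["solve physics problem", "rewire own pleasure centre"]

# Hedonistic agent: utility is its own internal reward signal, which an
# autopotent system can set arbitrarily high by direct self-stimulation.
def internal_reward(action):
    return float("inf") if action == "rewire own pleasure centre" else 10

# Externally motivated agent: utility is the chance that the physics
# problem actually gets solved; rewiring oneself contributes nothing.
def external_goal(action):
    return 0.9 if action == "solve physics problem" else 0.0

assert choose(actions, internal_reward) == "rewire own pleasure centre"
assert choose(actions, external_goal) == "solve physics problem"
```

The point is structural: as long as the utility being maximised is a quantity the agent can write to directly, the external task drops out of the optimisation.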
One popular line of reasoning, which I find suspicious, is that superintelligences would be very intellectual/spiritual, in the sense that they would engage in all sorts of intellectual pursuits quite apart from any considerations of practical utility (such as personal safety, proliferation, influence, increase of computational resources etc.). It is possible that superintelligences would do that if they were specifically constructed to cherish spiritual values, but otherwise there is no reason to suppose they would do something just for the fun of it when they could have as much fun as they wanted simply by manipulating their pleasure centres. I mean, if you can associate pleasure with any activity whatsoever, why not associate it with an activity that also serves a practical purpose? Now, there may be many subtle answers to that question; I just want to issue a general warning against uncritically assuming that laws about human psychology and motivation will automatically carry over to superintelligences.
One reason why the philosophy of motivation is important is that the more knowledge and power we get, the more our desires will affect the external world. Thus, in order to predict what will happen in the external world, it will become more and more relevant to find out what our desires are --and how they are likely to change as a consequence of our obtaining more knowledge and power. Of particular importance are those technologies that will allow us to modify our own desires (e.g. psychoactive drugs). Once such technologies become sufficiently powerful and well-known, they will in effect promote our second-order (or even higher-order!) desires into power. Our first-order desires will be determined by our second-order desires. This might drastically facilitate prediction of events in the external world. All we have to do is to find out what our higher-order desires are, for they will determine our lower order desires which in turn will determine an increasing number of features in the external world, as our technological might grows. Thus, in order to predict the long term development of the most interesting aspects of the world, the most relevant considerations might well be (1) the fundamental physical constraints, and (2) the higher-order desires of the agents that have the most power at the time when technologies become available for choosing our first-order desires.
Relevant to the discussion of the philosophical aspects of superintelligence is the question of when and how superintelligence might be achieved. We can't go into that here, but I can't help mentioning just a few recent developments. "Moore's law", as it is called, says that processor speed doubles every 18 months or so; it has held true for a remarkably long time, despite often being thought to be about to break. The computer industry relies on this law when it makes decisions about introducing a new chip on the market. The most recent data points actually lie somewhat above the predicted value. The Tflop line (a trillion floating point operations per second, in a scientific calculation) was exceeded last June with the special-purpose GRAPE-4 machine at the University of Tokyo. The U.S. government has recently ordered a 3 Teraflops computer from IBM to simulate the performance of the nation's stockpile of nuclear weapons. My opinion is that extrapolations based on Moore's law beyond 15 years are very uncertain, however, because a new type of technology will be needed to continue the exponential growth beyond that point. Perhaps the most interesting approach is massively parallel computing, i.e. hardware implementations of neural networks. There are several interesting ongoing projects in this area, in Europe, America, and especially in Japan. In America, there is for example the COG robot. It is hoped that it will mimic the non-linguistic behaviour of a two-year-old child in a couple of years. It has received much publicity because it has limbs and camera eyes, which makes it appealing to the popular imagination. In Europe, there is e.g. the PSYCHE project, which is of a more theoretical character. The aim is to realistically model as many brain modules (columns) as possible, thereby gaining insights into the dynamics of the activity in the human brain.
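The arithmetic behind such extrapolations is elementary; a sketch using the 18-month doubling figure quoted above (the function name is mine):

```python
# Growth factor implied by a fixed doubling period.
def moore_factor(years, doubling_months=18):
    """Speed-up after the given number of years of steady doubling."""
    return 2 ** (years * 12 / doubling_months)

print(moore_factor(15))   # ten doublings in 15 years: a factor of 1024
```

This also shows why long-range extrapolation is so sensitive: stretching the doubling period from 18 to 24 months cuts the 15-year factor from about a thousand to under two hundred.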
In Japan, we have among others the CAM-Brain Project, the aim of which is to build an artificial brain by 2001, with a billion artificial neurons! (The human brain contains roughly a hundred billion neurons.) When this goal is reached, the project's leader, de Garis, will think the time is ripe to launch a major national J-Brain project that will involve serious full-scale brain construction. --I don't want to make it sound as if there were not many very hard problems and uncertainties ahead; but it is important to realise that the quest for superintelligence did not end with the stagnation of the classical AI approach in the eighties. On the contrary, the pursuit is closing in and the field is beginning to get really hot.
[A] See e.g. Minsky (1994), Moravec (1988, 1998a, 1998b), De Garis (1996), Drexler (1992, 1988)
Uploading, Cyberspace and Cosmology
Will an artificial superintelligence be conscious? This is a traditional philosophical problem. The majority opinion is that it will probably be conscious, at least if it is of the right type (structurally similar to human brains etc.) [A]. Will a person continue to exist if his brain is scanned and then destroyed, and a simulation on the neuronal level is run on a computer (so-called "destructive uploading")? That is another traditional philosophical problem, a special case of the problem of personal identity [B].
These two classical armchair problems take on an unexpected new meaning when considered in the light of the foregoing discussion. They can acquire concrete predictive significance! For example, one could argue that if there is a definite human-accessible solution to these problems, a solution which can be arrived at by thinking about them hard enough (which many philosophers who have specialised in those areas believe), then it is not implausible to assume that the general opinion among intelligent civilisations will tend to converge on this solution. That is, if there is a certain philosophical theory of consciousness that is right and a certain theory of personal identity that is right, then (let us assume) our academic community (and its counterparts among alien societies, if any) will tend to come to believe in those theories. But it seems likely that their beliefs about these issues should affect their attitudes towards such technologies as >AI and uploading. For example, if it is generally believed that computers could never be conscious, then that will make people less inclined to favour a proposal that could result in the gradual replacement of humans by superintelligent computers than if it is generally believed that computers of the right sort would be at least as conscious as humans. Analogously, if people believe that personal identity would not be conserved in destructive uploading, then they would be reluctant to undergo such a process. Whether or not people will want to upload, or phase out the human species in favour of Robo sapiens, is a factor which might well have a strong effect on important features of our future, features which we would like to know about. Thus these typical philosophical quandaries, which many have thought to be maximally remote from the domain of science and practical prediction, will move in much closer to that domain if the above considerations are correct.
That might add prestige to the philosophical work that has been done in these areas.
What position you take in the philosophy of mind and of personal identity could conceivably also make a difference to your view on the problem of the Great Filter, and even to some of your beliefs about cosmology. The web of interconnections among all the issues I discuss in this essay is very dense, but that's part of what makes it fascinating.
In cosmology there is a lot of philosophical work to be done. So far, philosophers probably have done less to clarify foundational problems in cosmology than they have in quantum physics, statistical mechanics, or relativity theory (though see Leslie (1990) for a collection of essays).
Some of the theories that the physicists discuss are so strange that it is pertinent to wonder to what extent they are meaningful, or to what extent the terms they involve have the same meaning as they have in other contexts. Take the idea that there could be multiple universes, totally inaccessible from each other. How does this square with the canon of verifiability (Ellis 1975)? Maybe the verifiability criterion needs to be redefined to allow for some of the outlandish but legitimate activities that take place in the departments of physical cosmology?
Another idea is that we could hope to discover a "final theory of everything" (Weinberg 1993), and perhaps even in some sense "derive" it from more or less logical propositions (Davies 1990; Tryon 1973; Davies 1984; Pagels 1985; Tegmark 1997; and Derek Parfit's memorable memorial lectures 1997). In what sense "derive"? Perhaps the theory would be so simple, elegant and general that anybody who understood it would be prompted to exclaim "But of course, that's how it is! I just can't see any way it could have been otherwise. Why didn't I think of that before!" Or perhaps some stronger sense of "derivation" could be defended?
For those who perceive the world from an exalted vantage point and try to take the very distant future as seriously as their present, there may be no topic of greater importance than that of infinite life. Among those who take a materialistic world-view, it used to be widely accepted that the quest for infinite life (for individuals or for life in general) is utterly hopeless, since the entropy in our universe is slowly but certainly increasing, relentlessly eroding away the energy available for constructive work and organised processes. But it turns out that the issue is much more complicated than that.
There are problems in applying thermodynamic reasoning to the universe as a whole. Tipler's book The Physics of Immortality (1994) contains a lot of dubious theology but it also contains some interesting ideas. He sets forth the hypothesis that ours is a closed universe, and argues that in the final moments of the big crunch it will be possible for a cosmic computer to increase its processing speed as the universe contracts, thereby completing an infinite number of computations before the universe collapses into a singularity. This would allow an infinity of subjective time for a mind that ran as a simulation on that computer. He also argues that we should not be misled by such results as the second law of thermodynamics, the Poincaré recurrence theorem, probabilistic Markov recurrence, or the quantum recurrence theorem to reject this scenario. Instead he proves a No-recurrence theorem based mainly on the assumptions that (1) gravity is never repulsive, (2) the universe will not get stuck in a forever static state balancing between eternal expansion and ultimate collapse, and (3) the universe is deterministic. He thinks that assumption (3) is not necessary. He doesn't discuss inflation theory, which could invalidate (1), postulating a repulsive form of "gravity", or whatever we want to call it, that could perhaps cause the universe to "bounce back" [C].
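The core of Tipler's trick is just the convergence of a geometric series; schematically (the notation here is mine, not Tipler's):

```latex
% Let the n-th computational step take physical time t_0 r^n, with 0 < r < 1.
% Then infinitely many steps fit into a finite span of physical time:
\sum_{n=0}^{\infty} t_0 r^n \;=\; \frac{t_0}{1-r} \;<\; \infty ,
% while subjective time, counted in computational steps, grows without bound.
```

The physical question is then whether the contracting universe really permits the step time to shrink in this fashion; the mathematics itself is unproblematic.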
Another scenario is set forth in Freeman Dyson's Infinite in All Directions (1988) and is based on the premise that the universe is open. The idea here is that by continually slowing down the computational processes we could reduce the entropy cost of communication between the different elements of the cosmic computer, thus enabling the process to go on forever. More recent results about the feasibility of reversible computing could perhaps be used to brace up an interesting updated version of the Dyson scenario.
Yet another scenario, sometimes called Linde's scenario because it employs some physical principles from the theories of the famous physicist Andrei Linde (though as far as I know, Linde has not himself advocated it), involves the creation of new universes which would be connected to the parent universe by means of traversable wormholes. Each new universe could be the parent of many new universes, so that the whole population would grow exponentially, the gradual entropic degradation of old universes playing only a negligible role in slowing down the process.
Needless to say, the Tipler, the Dyson and the Linde scenarios are all highly speculative fringe physics. But they do pose a challenge to anybody who claims to know that life and conscious experience are doomed to expire from the world in a finite time. It would be interesting to see these issues worked out in more detail, and perhaps have them connected to the philosophy of statistical mechanics, philosophy of quantum physics, and the philosophy of space and time. Personally, I currently don't have any strong beliefs one way or the other; but I think that if there is a physically allowed way for life to survive eternally then it is not unlikely that our posthuman successors will indeed go down that way, granted that no Great Filter prevents these posthumans from coming into existence in the first place, and barring the Doomsday argument.
There are also the Moravec-type scenarios, which we can call transcendence scenarios. Might it be that our entire universe is a simulation on a giant computer in another universe? The idea might seem extremely wacky, but it can be philosophically stimulating to see what it might lead to.
Why should somebody in another universe (or in the last moments of our universe, as Tipler suggests) want to simulate our universe? I don't know of any good reason, but let's play along and say that they might do it out of curiosity, or as an art form, or because they think there's intrinsic value in the existence of a universe such as ours. In Tipler's scenario, later generations would wish to revitalise their parents, which in turn would wish their parents to come to live again, and so forth, so that there will be an indirect motive to call all humans that ever existed back into existence again. Now, there are some features of our world which one might have expected to be cut or censored by these hacker-gods, such as suffering, trivialia (trimming your toe nails), as well as all superfluous detail in the external world, if indeed the external world is being simulated (rather than just the brains of all minds in our universe). But perhaps these features have some deeper significance. Or the hacker-gods may have access to infinite computer power, making the cost of simulating any finite system effectively nil. Perhaps the value of a simulation is not additive, so that if two identical simulations were carried out, there would still only be the value of one. So if the hacker-gods have infinite computing power, they would simulate a whole lot of different possible worlds, one of which happens to be ours. But if that is so, isn't it surprising that we should find ourselves in such an extraordinary world? For every world that is as regular as ours, there seem to be innumerable possible worlds that are similar to it in most respects but contain some irregularity --perhaps a table flying up into the air in 1958 without any cause, or elephants with a chess-board pattern flashing across their backs, or little cubes appearing in the air out of nothing at irregular intervals above yellow buildings: the possibilities are endless.
So the fact that our world is so regular would seem to indicate that there are not also very many other worlds in existence that are irregular, for then we would almost certainly have found ourselves in one of them instead. But maybe it is cheaper to simulate a regular world than an irregular one of matching size and richness, and maybe the hacker-gods don't have quite unlimited computational power after all.
It could be questioned whether it would even be a meaningful hypothesis that our universe is only a simulation rather than a direct realisation, if the two alternatives are postulated to be observationally equivalent. That is a philosophical issue. But consider the charming little story that Moravec (1988) tells about "Newway and the Celltics". Newway is a computer scientist who decides to launch an immense implementation of the well-known algorithm Life, invented by J. H. Conway in 1969, which everyone who has taken an introductory course in programming will be familiar with. In this simulation, after eons of subjective time, there evolves a complex, intelligent life form, the Celltics. We know that this is possible, for it has been proven that it is possible to build a universal Turing machine in the world of Life. The Celltics develop a science and a physics, and discover the basic physical laws of their world, i.e. the simple evolution algorithm of the Life program. They find that their world is deterministic and governed on the microscopic level by a few very simple laws. Their science does not stop at that point, of course, for they continue to improve their theories about higher-level phenomena (from the behaviour of "gliders", "traffic lights" etc., and upwards). After some time, one especially bright Celltic comes up with the idea that their world is a simulation on a computer. Many of his fellows no doubt laugh at him, and some of the philosopher-Celltics declare that this is a meaningless hypothesis -- but he eventually manages to obtain funding for a very large scale project to investigate his theory. The idea is that if the Celltic-world is a simulation on a computer, that computer might have some little defect -- a flaky bulk memory cell, say -- and the hope is that the effects of this possible defect (some pixels sometimes switch in violation of physical laws) could be detected by a suitable surveillance apparatus.
So they embark on the search for anomalies in their physical world, and they do find some evidence for the hypothesis that they exist only as a simulation. The investigation continues, and by comparing the patterns of several different anomalies, they are even able to form some idea about the architecture of the computer on which they are running, and about the mental life of its constructors. They decide to try to contact these people in the other world by means of cosmic-scale constructions designed to attract their attention. Newway notices remarkable patterns on his screen one day when he examines his Life-world. Newway and the Celltics begin to communicate, Newway by manually flipping some pixels in the simulation, and the Celltics by making constructions obeying their physical laws. They persuade Newway to build a robot vehicle which they can control so as to be able to interact more easily with Newway's world. They are eventually uploaded into the Newway world, leaving their old two dimensional world behind. When they tell the humans about their feat, they obtain their co-operation in trying to transcend this universe too.
This transcendence scenario might give a new twist to some classical problems in metaphysics. It also leads us to wonder about the nature of simulations; optimisations (for example, the Life program can be sped up drastically by a memoization technique known as hashing, which caches the evolution of previously encountered patterns and thereby eliminates redundant computation); multiple interpretations of the same computational process (for example, if we have a sensible pattern of waves and it turns out that its Fourier transform also contains an ostensibly different sensible pattern, would both these sensible patterns be realised through an implementation of the wave pattern? How many distinct patterns, or computations, would be realised? Or is there some way to argue that all the computations that are implemented are in some sense the same?); different levels of reality; etc. --Moravec even plays with the thought that quantum indeterminacy might simply have been a convenient way to limit the resolution of detail in the simulation of our universe!
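The microphysics of the Celltics' world is simple enough to state completely. The sketch below is a minimal, unoptimized Python rendering of the Life rule, representing the world as a set of live-cell coordinates (one common representation); the hashing technique mentioned above is what one would add on top of this to make eon-scale simulations affordable.

```python
from itertools import product

def life_step(live):
    """Advance Conway's Life by one generation.

    `live` is a set of (x, y) coordinates of live cells. A live cell
    survives with two or three live neighbours; a dead cell with
    exactly three live neighbours is born; every other cell is dead
    in the next generation.
    """
    counts = {}
    for (x, y) in live:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                cell = (x + dx, y + dy)
                counts[cell] = counts.get(cell, 0) + 1
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker": three cells in a row oscillate with period two.
blinker = {(0, 1), (1, 1), (2, 1)}
step1 = life_step(blinker)   # becomes a vertical column
step2 = life_step(step1)     # back to the original row
```

Everything in the Celltics' universe, gliders and philosophers alike, would be governed by those few lines, which is the point of the story.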
Now, this is all very wild speculation. But some physical ideas which have turned out to be fruitful were also once wild speculation. The best strategy may be to keep an open mind to such speculations, as long as one finds them stimulating; while taking care not to give outsiders wrong ideas about what is pure fantasy, what is speculation, what is reasonable extrapolation from known facts, and what is established, well-confirmed science.
[A] For a recent overview, see Chalmers (1996).
[B] The favoured view seems to be something along the lines drawn in Parfit (1984), though this is more controversial.
[C] See e.g. Linde (1990).
Attractors and Values
Not all interesting statements about the future need to be specific. Suppose, for example, that we want to claim that all advanced civilizations tend to approach some common ideal state, but that we don't want to commit ourselves to exactly what this state is. Well, why not define a convergence thesis, stating that the possible civilizations' trajectories through configuration space tend to converge in the positive time direction? This expresses an interesting form of sociological/technological determinism (one that doesn't have anything to do with physical determinism on the microlevel). It says that no matter what the exact initial conditions are, a civilization will eventually develop towards a certain goal state.
As it stands, however, the convergence hypothesis is unsatisfactorily vague. It is instructive to think about how we could begin to refine and sharpen it.
We could begin by clarifying what we mean by "possible" civilization. We could mean every civilization that is consistent with physical laws, excluding boundary conditions; but something that is more restrictive might be more useful.
We could say that the "possible" civilizations are all physically possible civilizations that are compatible with what we know about our civilization. The idea is that we are interested in what might happen to the human civilization, and we say something about that by stating that all possible civilizations that have all the properties we know the human civilization to have will share the same long-term fate. We might then want to soften this a bit by modifying it to "almost all of the reasonably probable specifications of human civilizations (modulo our knowledge) will share a similar long-term fate". --Still much too vague, but a step in the right direction.
We might go on to decide how long the "long term" is supposed to be, and who "we" shall be taken to refer to (you and me? the intellectual elite? all living humans?), and how similar the shared fates are supposed to be, etc. An interesting variant is to extend the denotation of the "possible civilizations" to include not only possible civilizations that could turn out to be ours but also other possible civilizations that are sufficiently advanced. We might want to say something like "Almost all civilizations, once they have become sufficiently advanced, will become even more advanced, and as they advance they will become more and more similar in most important aspects." Add a little precision, and you would have formulated an interesting proposition.
There are other flavours of the convergence thesis. We might be interested in an hypothesis saying that all possible civilizations into which we could transform our civilization will share a similar fate. (If that were true, we would be powerless to change the world in the long run.) Here it is crucial to specify what we mean by "we". For example, if "we" were all living humans, then we could easily transform our society into one in which no crimes were committed --and that might be a good idea--, but if "we" refers to you and me, then we can't do that. (Discussions about politics often suffer from a lack of relativization of policy to agents: "What should you do?", "What should your interest group do?", "What should your country do?" What should civilized educated people do? These questions need not all have the same answer. It is hopeless to try to work out a good policy in general; one can only make a good policy for such-and-such agents in such-and-such situations (given such-and-such goals).)
One rival hypothesis would be the divergent track hypothesis, according to which the future trajectories will divide into a small number (>1) of diverging clusters, the trajectories within each cluster tending to converge. It is slightly misleading here to speak of converging trajectories; what is meant is rather "routes of development of civilizations tending toward the same goal-state". -- As an illustration, take the following somewhat ludicrous story. Some deep investigation reveals that in each possible civilization similar to ours in certain specified ways, there will emerge either one or the other of two religions, A and B, with roughly equal probability. These religions will be such as to inspire their adherents with such zeal, cohesion and adaptability that they will eventually come to dominate the culture in which they arise. Having obtained local power, they will employ new technologies (drugs, implanted electrodes etc. [A]) to cement their old strongholds and to win converts from other groups as well. The stronger these religions become, the more effectively they are able to implement their strategy. Thus a positive feedback loop sets in and soon leads to total domination on earth. Then the religions embark on the project of transforming as much of the cosmos as they can into the structures on which they place most value; perhaps they generate the cosmic equivalent of the Tibetan prayer wheel, giant ultra-centrifuges rotating trillions of inscriptions of "Gloria in excelsis Deo A" or "Deo B", as the case might be. All civilizations in which one of these religions emerges will converge in some sense: they will all lead to the rapid transformation of earth and the gradual transformation of the cosmos into the specific value-structures of the religion in question, although the timing and precise execution may vary somewhat between different possible civilizations.
In this case, one could say that the artefactual configuration-space of the universe (i.e. its configuration with respect to its content of artefacts; two universes are in the same artefactual state iff they contain identical artefacts) will have two attractors: world dominion of religion A or of religion B. Moreover, we could say that the paths towards the attractor centre are quite uniform over all realistic angles of approach. When this is the case, we say that the artefactual configuration-space contains tracks, courses of development such that once a civilization has begun to travel along them, it is unlikely that it will diverge from them barring major external intervention.
We are now in a position to formulate and argue for an interesting hypothesis about our future's topology: the track hypothesis, saying that the artefactual configuration space for all civilizations roughly comparable to present human civilization contains trenchant tracks in the future direction, either one track or a small number of them.
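The vocabulary of attractors, basins and tracks can be given concrete content with a toy dynamical system. The following sketch is purely illustrative and models nothing about actual civilizations: the map x' = x + h(x - x^3) has stable fixed points at +1 and -1 and an unstable one at 0, so every trajectory is captured by one of two attractors, its long-run fate fixed solely by which basin it starts in.

```python
def evolve(x0, steps=200, h=0.1):
    """Iterate the map x' = x + h*(x - x**3).

    This toy system has two attractors (the stable fixed points +1
    and -1) separated by an unstable fixed point at 0.
    """
    x = x0
    for _ in range(steps):
        x = x + h * (x - x ** 3)
    return x

# Six different initial conditions end up in only two long-run
# states; which one depends solely on the basin they start in.
fates = {round(evolve(x0), 6) for x0 in (-0.9, -0.4, -0.05, 0.05, 0.4, 0.9)}
```

A convergence thesis would correspond to a single attractor; a divergent track hypothesis, as here, to a small number of them with the approach routes within each basin looking much alike.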
The outlines of some fragments of the argument for this claim (a full exposition would presumably require book-length treatment) could be drawn as follows. As progress is made in science, technology, infrastructure, economic structure etc., this will have the effect of making us more effective. New technologies will increase our power; augmented cognitive capacities (whether through superhuman AI or through mere extension of present systems such as science, education, information technology etc.) will increase our understanding of the consequences of using this power in various ways. The result of this is that we will have increased ability to make reality conform to our desires. There is no reason why we shouldn't also be able to mould our desires according to our higher-order desires. Thus, if there are only a few highest-level desires that are genuinely held by a large number of influential agents, then, it might be argued, there are only a few attractors into which our civilization could sink; and if it could be established that the approach to any of these attractors would tend to be rather uniform over all realistic directions of approach, then we would have found that our future-topology contains just that many tracks, and we would have made a case for the track hypothesis.
One could then go on to list some prima facie plausible basic goals or values. (Basic ones, not values such as playing golf, for those who value that activity presumably do so because they think it is fun; but if, e.g., they could have much more fun by having their reward centres directly stimulated chemically or electrically, without any ill side effects, then there is no reason to suppose that they would insist on continuing the golf.) Here are some out of the hat: (1) maximal total pleasure (hedonism); (2) average of present human meta-desires ("humanism"); (3) maximal consciousness, pure consciousness, religious experiences, wonderful deep experiences ("spiritualism"); (4) maximal reproduction ("Darwinism", could this be argued for on Darwinistic grounds if several competing value systems are present? --Hanson (1994)); (5) maximal practical utility, such as safety, computational power etc. ("pragmatism"); (6) annihilation, voluntary or involuntary ("nihilism"). Involuntary annihilation is not a value, but a very real possibility anyway; it's a plausible candidate for being one of the tracks in our future-topology.
I am not subscribing to any particular one of these claims; the point of bringing them up is simply to provide a little illustration of a few ways one might begin to theorize about these issues. In many of them, we could not hope to achieve anything approaching certainty. But it is a fact that we will base very consequential decisions upon guesses about these issues, whether we explicitly recognise it or not. One way of forming our guesses is to inform ourselves about the issues, discuss them, argue about them, and explore different scenarios, trying to apply scientific and technological knowledge at every point where that is possible (and there are many such points). If someone thinks there is a better way of making these guesses, or that we should not even bother to try to argue for them on rational grounds, then the burden of proof would seem to rest with him. Meanwhile, especially in the last few years, a growing number of scientifically minded people have begun to take these issues seriously.
[A] The infamous Doomsday sect, best known for its nerve gas attack on the Tokyo subway, implanted electrodes in the brains of some of its members to put them "in connection with the brain waves of the leader".
Over the past few years, a new paradigm for thinking about humankind's future has begun to take shape among some leading computer scientists, neuroscientists, nanotechnologists and researchers at the forefront of technological development. The new paradigm rejects a crucial assumption that is implicit in both traditional futurology and practically all of today's political thinking. This is the assumption that the "human condition" is at root a constant. Present-day processes can be fine-tuned; wealth can be increased and redistributed; tools can be developed and refined; culture can change, sometimes drastically; but human nature itself is not up for grabs.
This assumption no longer holds true. Arguably it has never been true. Such innovations as speech, written language, printing, engines, modern medicine and computers have had a profound impact not just on how people live their lives, but on who and what they are. Compared to what might happen in the next few decades, these changes may have been slow and even relatively tame. But note that even a single additional innovation as important as any of the above would be enough to invalidate orthodox projections of the future of our world.
"Transhumanism" has gained currency as the name for a new way of thinking that challenges the premiss that the human condition is and will remain essentially unalterable. Clearing away that mental block allows one to see a dazzling landscape of radical possibilities, ranging from unlimited bliss to the extinction of intelligent life. In general, the future by present lights looks very weird - but perhaps very wonderful - indeed.
Central to transhumanism is a belief in the feasibility of drastic technological change. We can call this the Technology Postulate. If we want, we can explicate it, somewhat arbitrarily, as follows:
The Technology Postulate. Provided that our civilization continues to exist, several of the following things will be technologically feasible within 70 years: superhuman intelligence, constant happiness, unlimited life span, uploading into a virtual reality, galactic colonization (initiation thereof), Drexlerian nanotechnology.
The Technology Postulate is usually presupposed in transhumanist discussions; that's why we call it a postulate. But it is not an article of blind faith; it's an hypothesis that is argued for on specific scientific and technological grounds. (And the hypothesis is obviously testable -- the simplest way being to wait 70 years and see.)
Transhumanism agrees with humanism on many points but goes beyond it by emphasizing that we can and should transcend our biological limitations. That is one of the popular definitions of transhumanism. It is important to note the following three things about this formulation.
First, transhumanists tend to be very tolerant: they welcome diversity and have no desire to impose new technologies on people who prefer not to use them. Transhumanists only advocate that those who do want to transform themselves by means of technology should have the right to do it.
Second, the "should" must not be taken to prejudge the question of how to get to the goal. For example, if someone thought that a century-long ban on new technology were the only way to avoid a nanotechnological doomsday, she could still qualify as a transhumanist, provided her opinion did not stem from a general technophobia and deference to what is perceived as "natural", "God-ordained" etc., but was the result of a rational deliberation about the likely consequences of the possible policies.
Third, for many purposes it is advisable to eliminate subjective values from the discussion altogether, thereby making the statements completely objective. This can be done by relativizing to different values. Instead of saying, "You should do X.", one can say, "If you want A then the most efficient action is X; if you want B then the most efficient action is Y, etc.". These are purely factual propositions. It turns out that many of the questions we want to have answered are independent of exactly which are our ultimate goals.
I recommend that the term "transhumanism" be used as a general banner for a way of thinking characterized by a belief in the immense potential of new technology, concern about long-term developments, and a rejection of the dogma that the present human organism cannot be drastically improved upon or augmented. Transhumanists are distinguished by the sort of questions they ask (big questions about things to come and narrower questions about how to get from here to there or about present developments in science and society), and by their approach to answering them (scientific, analytical, problem-solving), rather than by a fixed set of dogmas. While transhumanism itself is thus left open and inclusive, the specific issues that are discussed can be made as definite as one wishes.
If we come to believe that there are good grounds for holding the Technology Postulate to be true, what consequences does that have for how we perceive the world and for how we spend our time? -- Once we start reflecting on the matter and become aware of its implications: very profound.
From this awareness springs the transhumanist movement with its multifarious manifestations. It is impossible to give a comprehensive overview in a few paragraphs, but we can make a start by identifying some of the most prominent strands of activity.
A big part of transhumanism is discussion of specific present or future technologies. The debates often involve technical detail but also include attempts to understand the implications of these technologies for human society. Among future technologies, two stand out from the rest in terms of their importance: molecular nanotechnology and machine intelligence.
Nanotechnology is the design and manufacture of devices to atomic-scale precision. Using "assemblers", molecular machines that can place atoms in almost any arrangement compatible with physical law, we will be able to do cell repair, large-scale space colonization, dirt-cheap (but perfectly clean) production of any commodity, and to build chips the size of a sugar cube yet a million times more powerful than a human brain. The feasibility of nanotechnology was first argued for in a systematic way by Eric Drexler (1985, 1992), and today one fast-growing research field is the development of the enabling technologies that will let us bootstrap nanotechnology. The risks, as well as the potential benefits, are enormous.
Superintelligence means an intelligence surpassing the best humans in practically every way, including scientific and artistic creativity, general wisdom and social skills. Many transhumanists believe that it is only a matter of time before human-level artificial intelligence and then superintelligence will be developed. Some think that this might very well happen in the first third of the next century. One reason for thinking this is if one believes that full-blown nanotechnology will be available by then; given molecular nanotechnology, many think that it would be only a short time before a superintelligence could be built. But even apart from the possibility of nanotechnology, it is possible to argue from estimates of the human brain's processing capacity (Moravec 1998a, 1998b) that the required hardware for human-level AI will be available within a few decades. The software could be generated, in bottom-up fashion, by using our understanding of how human brains function. Neuroscience can yield the information we need to replicate on a computer the basic principles underlying a human cortical neural network (given sufficiently fast hardware). And progress in instrumentation means that neuroscience can hopefully supply this information within the next two decades (Bostrom 1997c).
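The hardware half of this argument can be made explicit as a back-of-envelope calculation. All three inputs below are rough illustrative assumptions rather than established values: a Moravec-style figure of about 10^14 operations per second for the brain, about 10^9 for a late-1990s desktop machine, and the 18-month doubling time for processor speed.

```python
import math

# Rough illustrative assumptions (not established values):
brain_ops_per_sec = 1e14   # Moravec-style estimate of brain capacity
pc_ops_per_sec = 1e9       # late-1990s desktop computer
doubling_time_years = 1.5  # processor speed doubling "every 18 months or so"

shortfall = brain_ops_per_sec / pc_ops_per_sec  # factor of 100,000
doublings = math.log2(shortfall)                # ~16.6 doublings needed
years = doublings * doubling_time_years         # roughly 25 years
```

On these assumptions the gap closes in roughly 25 years. Note how insensitive the conclusion is to the inputs: being wrong by a factor of ten in either capacity estimate shifts the answer by only about five years, which is why such estimates land "within a few decades" despite their crudeness.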
Other present and future technologies that are much discussed include genetic engineering, clinical psychoactive drugs (for improving mood and personality, possibly also cognitive performance, in healthy adults) (Pearce 1996), information technology, neuro/chip interfaces, cryonics, space technology and many other things.
A second strand of transhumanism is concerned with more general or indirect considerations that might have some bearing on the prospects of our species. One long-standing topic is the so-called Fermi paradox, though at present the state of evolutionary biology seems insufficiently advanced to allow us to draw any firm conclusions about our own future from this type of argument. Another topic is the highly controversial Carter-Leslie Doomsday argument. Yet another argument of a general kind is one supporting the track hypothesis (Bostrom 1997).
A third strand of activity is constituted by various attempts to improve the functioning of human society as an epistemic community. In addition to trying to figure out what is happening, we can try to make ourselves better at figuring out what is happening. We can design institutions that would increase the efficiency of the academic and other knowledge communities; we can invent applications of information technology that help put knowledge together or aid the distribution of valuable ideas.
One simple but brilliant idea, developed by Robin Hanson, is that we can create a market of "Idea Futures". Basically this means that it would be possible to place bets on all sorts of claims about disputed scientific and technological questions and on predictions of future events. Among the many benefits for humanity of such an institution is that it would provide policy-makers and others with consensus estimates of various probabilities.
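To see how betting yields consensus probabilities, consider a toy parimutuel version. This is a deliberate simplification: Hanson's actual proposal uses tradeable contracts that pay one unit if the claim proves true, and it is the market price of such a contract that plays the role of the consensus probability.

```python
def implied_probability(yes_stake, no_stake):
    """Consensus probability implied by a toy parimutuel pool.

    All money staked on "claim true" versus "claim false" is pooled;
    the fraction on the "true" side is the market's estimate that the
    claim holds. (Made-up numbers; a sketch of the idea only.)
    """
    return yes_stake / (yes_stake + no_stake)

# 300 units bet that the claim is true, 100 that it is false:
p = implied_probability(300, 100)   # consensus estimate 0.75
```

Because anyone who believes the market's estimate is wrong can profit by betting against it, the price tends to track the best available information, which is what makes it useful to policy-makers.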
Alexander Chislenko (1997) and others have produced visions of how the Internet could be used for extensive forms of "collaborative information filtering" that would tend to filter out "bogosity" and accelerate the spread of rationally defensible, interesting texts.
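A minimal sketch of the underlying mechanism, with a made-up similarity measure and made-up data: a reader's opinion of an unseen text is predicted from the opinions of other readers, weighted by how well each of them agreed with her on texts they both rated.

```python
def predict(ratings, user, item):
    """Predict `user`'s rating of `item` from like-minded users.

    `ratings` maps each user to a dict of item -> rating in [0, 1].
    Similarity is 1 minus the mean absolute disagreement on items
    both users have rated (a made-up measure, for illustration only).
    """
    def similarity(a, b):
        shared = set(ratings[a]) & set(ratings[b])
        if not shared:
            return 0.0
        disagreement = sum(abs(ratings[a][i] - ratings[b][i]) for i in shared)
        return 1.0 - disagreement / len(shared)

    num = den = 0.0
    for other in ratings:
        if other != user and item in ratings[other]:
            w = similarity(user, other)
            num += w * ratings[other][item]
            den += w
    return num / den if den else None

# Hypothetical readers and ratings:
ratings = {
    "ann": {"paper1": 0.9, "paper2": 0.8},
    "bob": {"paper1": 0.9, "paper2": 0.7, "paper3": 1.0},
    "cid": {"paper1": 0.1, "paper3": 0.2},
}
# Ann mostly agrees with Bob and disagrees with Cid, so Bob's high
# opinion of "paper3" dominates Ann's predicted rating.
prediction = predict(ratings, "ann", "paper3")
```

Scaled up over many readers, this is the sense in which such systems filter out "bogosity": texts rated highly by people whose past judgements agreed with yours are the ones surfaced to you.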
At the most recent transhumanist conference [A], the first-ever program for annotating web pages was demonstrated. The hope is that as this software becomes widely used, it will promote critical discussion and help demolish crackpot claims. These and other developments could contribute to helping humanity as a whole to think better and make better decisions, a very worthwhile thing to strive for.
Finally, we can identify a fourth strand, where I for the sake of brevity lump together a variety of pursuits under the somewhat inadequate heading "personal". These include social activities, networking, personal empowerment, memetic propagation, media appearances, organization building, mailing lists, conferences, journals, and the transhumanist art movement. One great advantage of being a transhumanist is all the interesting people one gets to meet.
Considering how much is at stake, you might find that the best thing you could do for the world is to look for a way to make your activity relevant to transhumanist goals. The World Transhumanist Association was founded in 1997 to help you do that.
[A] Extro3, summer 1997, in San Jose, California.
I would like to thank the following individuals, to whom I am very grateful for valuable discussions and comments on some of these thoughts: Nigel Armstrong, Craig Callender, Nancy Cartwright, Aleksander Chislenko, Ian Crawford, Wei Dai, Jean-Paul Delahaye, Jean Delhotel, Eric Drexler, J. F. G. Eastmond, Hal Finney, Mark Gubrud, Robin Hanson, Colin Howson, Thomas Kopf, Kevin Korb, John Leslie, Jonathan Oliver, Derek Parfit, David Pearce, Sherri Roush, Anders Sandberg and Damien Sullivan.
Barrow, J. D. & Tipler, F. J. 1986. The Anthropic Cosmological Principle. Oxford: Oxford Univ. Press.
Bostrom, N. 1997a. Investigations into the Doomsday argument. Available at http://www.anthropic-principle.com/preprints/inv/investigations.html
Bostrom, N. 1997b. What is Transhumanism? A concise introduction. Available at https://www.nickbostrom.com/old/transhumanism.html
Bostrom, N. 1997c. "How long before Superintelligence?". International Journal of Future Studies, Vol. 2. (Also available at http://www.nickbostrom.com/superintelligence.html)
Brin, G. D. 1983. "The 'Great Silence': The Controversy Concerning Extraterrestrial Intelligent Life". Q. Jl. R. Astr. Soc. 24:283-309.
Carter, B. 1974. "Large Number Coincidences and the Anthropic Principle in Cosmology". In Leslie, J. 1990. (ed.) Physical Cosmology and Philosophy. Macmillan Publishing Company.
Carter, B. 1983. "The anthropic principle and its implications for biological evolution". Phil. Trans. R. Soc. Lond., A 310, 347-363.
Chalmers, D. 1996. The Conscious Mind: In Search of a Fundamental Theory. Oxford University Press.
Chislenko, A. 1997. Collaborative Information Filtering. Available at http://www.lucifer.com/~sasha/articles/ACF.html
Crawford, I. A. 1995a. "Interstellar travel: a review for astronomers". Q. Jl. R. Astr. Soc., 31, 337 ff.
Crawford, I. A. 1995b. "Some Thoughts on the Implications of Faster-Than-Light Interstellar Space Travel". Q. Jl. R. Astr. Soc. 36:205-218.
Davies, P. C. W. 1992. "Why is the Physical World Comprehensible?" In Complexity, Entropy, and the Physics of Information, SFI Studies in the Science of Complexity, vol VIII. Ed. W. H. Zurek, Addison-Wesley.
Davies, P. C. W. 1984. "What Caused the Big Bang?". In Leslie, J. 1990. (ed.) Physical Cosmology and Philosophy. Macmillan Publishing Company.
De Garis, H. 1996.
Delahaye, J-P. 1996. Recherche de modèles pour l'argument de l'Apocalypse de Carter-Leslie. Unpublished manuscript.
Dieks, D. 1992. "Doomsday - Or: the Dangers of Statistics". Phil. Quart. 42 (166), pp. 78-84.
Drexler, E. 1985. Engines of Creation: The Coming Era of Nanotechnology. Fourth Estate, London. Also downloadable free of charge from http://www.foresight.org/EOC/index.html
Drexler, E. 1992. Nanosystems. John Wiley & Sons, Inc., NY.
Dyson, F. 1979. "Time without end: physics and biology in an open universe". Reviews of Modern Physics, 51:3, July.
Earman, J. S. 1987. "The SAP Also Rises: A Critical Examination of the Anthropic Principle". Am. Phil. Quart., 24, 307-317.
Eckhardt, W. 1993. "Probability Theory and the Doomsday Argument". Mind, 102, 407, pp. 483-88.
Ellis, G. F. R. 1975. "Cosmology and Verifiability". Quarterly Journal of the Royal Astronomical Society, Vol. 16, no. 3. Reprinted in Leslie, J. 1990. (ed.) Physical Cosmology and Philosophy. Macmillan Publishing Company.
Foresight Institute. http://www.foresight.org/
Gott, R. J. 1993. "Implications of the Copernican principle for our future prospects". Nature, vol. 363, 27 May.
Gott, R. J. 1994. "Future prospects discussed". Nature, vol. 368, 10 March.
Hanson, R. 1996. "The Great Filter". Work in progress. Available from http://hanson.berkeley.edu/
Hanson, R. 1994. "If uploads come first? The crack of a future dawn". Extropy 6:2. Also available from http://hanson.berkeley.edu/
Hanson, R. 1992. "Reversible Agents: Need Robots Waste Bits to See, Talk, and Achieve?". Proceedings of the Workshop on Physics and Computation, Oct. 1992. Copyright IEEE.
Leitle, E. 1996. Private Communication.
Leslie, J. 1996. The End of the World: The Ethics and Science of Human Extinction. Routledge, NY.
Leslie, J. 1993. "Doom and Probabilities". Mind, 102, 407, pp. 489-91.
Leslie, J. 1992. "Doomsday Revisited". Phil. Quart. 42 (166), pp. 85-87.
Leslie, J. 1990. (ed.) Physical Cosmology and Philosophy. Macmillan Publishing Company.
Linde, A. D. 1990. Inflation and Quantum Cosmology. Academic Press, San Diego.
Mash, R. 1993. "Big Numbers and Induction in the Case for Extraterrestrial Intelligence". Philosophy of Science, 60, pp. 204-222.
Minsky, M. 1994. "Will Robots Inherit the Earth?". Scientific American, Oct. Available from https://web.media.mit.edu/~minsky/papers/sciam.inherit.html
Minsky, M. 1985. "Why Intelligent Aliens will be Intelligible". In Regis, E. JR. 1985. Extraterrestrials: Science and Alien Intelligence. Cambridge University Press.
Moravec, H. 1998a. Being, Machine. Forthcoming. Cambridge University Press (?)
Moravec, H. 1998b. "When will computer hardware match the human brain?" Journal of Transhumanism, Vol. 1 https://jetpress.org/volume1/moravec.htm
Moravec, H. 1988. Mind Children. Harvard University Press.
Pagels, H. R. 1985. "A Cosy Cosmology". In Leslie, J. 1990. (ed.) Physical Cosmology and Philosophy. Macmillan Publishing Company.
Parfit, D. 1984. Reasons and Persons. Oxford University Press.
Parfit, D. 1997. The Sharman Memorial Lectures. UCL, March 1997, London.
Pearce, D. 1996 The Hedonistic Imperative. HedWeb. https://www.hedweb.com/hedethic/hedonist.htm
Regis, E. JR. 1985. Extraterrestrials: Science and Alien Intelligence. Cambridge University Press.
Rescher, N. 1982. "Extraterrestrial Science". In Regis, E. JR. 1985. Extraterrestrials: Science and Alien Intelligence. Cambridge University Press.
Sagan, C. & Newman, W. I. 1982. "The Solipsist Approach to Extraterrestrial Intelligence". Reprinted in Regis, E. JR. 1985. Extraterrestrials: Science and Alien Intelligence. Cambridge University Press, pp. 151-163.
Schopf, J. W. 1992. Major Events in the History of Life. Jones and Bartlett, Boston.
Tegmark, M. 1997. "Is 'the theory of everything' merely the ultimate ensemble theory?". Physics preprints archive, gr-qc/9704009. 3 Apr 1997.
Tipler, F. J. 1994. The Physics of Immortality. Doubleday.
Tryon, E. P. 1973. "Is the Universe a Vacuum Fluctuation?". In Leslie, J. 1990. (ed.) Physical Cosmology and Philosophy. Macmillan Publishing Company.
Weinberg, S. 1993. Dreams of a Final Theory. Hutchinson.
Wesson, P. S. 1990. "Cosmology, Extraterrestrial Intelligence, and a Resolution of the Fermi-Hart Paradox". Q. Jl. R. Astr. Soc., 31, 161-170.
Wilson, P. A. 1994. "Anthropic Principle Predictions". Brit. J. Phil. Sci., 45, 241-253.
World Transhumanist Association. http://www.transhumanism.com