Transhumanist Values
Nick Bostrom
Department of Philosophy
Yale University
Version of April 18, 2001
Transhumanism is a loosely defined movement that has developed over the past two decades, but most rapidly in the past few years [2,3]. It represents an interdisciplinary approach to understanding and evaluating the ethical, social and strategic issues raised by present and anticipated future technologies. The focus is especially on those technologies that either pose a threat to the survival of human civilization or, in contrast, promise to create opportunities for overcoming fundamental human limitations.
Examples of technologies of the latter kind are those that may enable radically extended human lifespan, the elimination of disease and unnecessary suffering, or the enhancement of human intellectual, physical and emotional capacities.
In other words, transhumanism is the study of the means and obstacles to humanity's use of technological and other rational means to become posthuman, and of the ethical issues involved in this. ‘Posthumans’ is the term for the very much more advanced beings that humans may one day design themselves into if we manage to upgrade our current human nature and radically extend our capacities.
Transhumanism goes beyond secular humanism, with which it shares many assumptions, by advocating not only “traditional” means (such as education) to improve the human condition, but also the use of science, technology, and other empirical methods to enhance the human organism itself, physiologically. Transhumanists view human nature as a work-in-progress: a half-baked beginning that can be remolded in desirable ways through intelligent use of enhancement technologies. In this sense transhumanism is not only an area of study but also a world-view that has a value component. That value component is the subject of this paper.
In contrast to many other ethical outlooks, which in practice often tend to be reactive when it comes to dealing with new technologies (viewing developments like human cloning mainly as threats to our received moral intuitions), the transhumanist philosophy is guided by an evolving vision to take a more proactive approach to technology policy. The vision, painted in broad strokes, is this: to enable those who so wish to live much longer and healthier lives, to enhance their memory and other intellectual faculties, to refine their emotional experiences and subjective sense of well-being, and generally to achieve a greater degree of control over their own lives. For transhumanists, this positive goal has replaced customary injunctions against “playing God” or “messing with Nature” or “tampering with our human essence” or other manifestations of “punishable hubris”.
It is a transhumanist characteristic to think seriously and in some detail about the bigger picture: the long-term fate of humankind and how anticipated future technologies will impact on the human condition. A common understanding is that it would be naïve to think that the human condition and human nature will remain pretty much the same for very much longer. Instead, in the long run it is likely that developments such as molecular nanotechnology and machine intelligence will radically change the rules of the game. (Many, but not all, transhumanists expect that these developments will take place within the lifetime of many people who are alive today.) If this assumption is correct, or even if it just has a substantial probability of being correct, it will reflect back on how we should make policy choices now.
It may not be possible to disentangle exactly how much of transhumanism derives from factual assumptions and how much is based on fundamental value judgments. But given this general background, let us zoom in and try to get a better understanding of what the transhumanist values are, without troubling ourselves too much about which of these values are basic and which of them are derived from empirical beliefs. We shall begin with the deeper values, and then move on to see how they are connected to present-day issues in, for instance, medical ethics.
The range of thoughts, feelings, experiences, and activities that are accessible to human organisms presumably constitutes only a tiny part of what is possible. There is no reason to think that the human mode of being is any more free of limitations imposed by our biological nature than are the modes of being of other animals. Just as chimpanzees simply do not have the brainpower to understand what it is like to be human (the ambitions we humans have, our philosophies, the complexities of human society, or the depth of the relationships we can have with one another), so we humans lack the capacity to form a realistic intuitive understanding of what it would be like to be posthuman. Our mode of being spans but a narrow subspace of what is possible or permitted by the physical constraints of the universe (see figure 1). It is not farfetched to suppose that there are parts of this larger space that represent extremely valuable ways of existing. If only we could get to them!
What may some of these be? We can conceive, in the abstract at least, of pleasures whose blissfulness vastly exceeds what any human has yet experienced. We can imagine beings who reach a much greater level of personal development and maturity than humans, because they have the opportunity to live for several hundreds or for thousands of years with full bodily and mental vigor. We can conceive of intellects that are enormously smarter than human brains: that read through books in seconds; that are much cleverer philosophers than we are; that create artworks that would strike us as fantastic masterpieces even if we could understand them merely on the most superficial level. We can imagine a love that is stronger and purer than any of us has ever felt. And so on. The point is that our everyday intuitions about values are likely constrained by our narrow range of experience and our limited powers of imagination. We should leave room in our thinking for the possibility that as we develop greater capacities, we shall discover values that will strike us as far higher than those we could realize when we were still un-enhanced biological humans.
Transhumanism affirms the quest to develop further so that we can explore these hitherto inaccessible realms of value. Technological enhancement of the human organism is a necessary means to this end. There is a limit to how much can be achieved by low-tech means such as education, philosophical contemplation, or moral self-scrutiny (the methods suggested by classical philosophers with perfectionist leanings, for example Plato, Aristotle, and Nietzsche), or by means of creating a fairer and better society (as, for example, Marx envisioned). This is of course not to denigrate what we can do with the tools we have today. Yet transhumanists, ultimately, want to go further.
If this is the grand vision, what are the more particular values that translate it into practice for transhumanists?
To start with, transhumanists typically place great emphasis on individual freedom and individual choice, especially when it comes to enhancement technologies. The reason for this is twofold.
First, it is a fact that humans differ widely in their conceptions of what their own perfection would consist in. Some want to develop in one direction, others in different directions, and some prefer to stay pretty much the way they are (whether because of religious or other motives). It would be neither feasible nor desirable to impose one common standard that we should all aspire to. The best approach, then, is to let people choose for themselves which enhancement technologies they want to use on themselves, if any. Obviously there would be restrictions on this general principle to the extent that individual choices impact in substantial ways on other people. But the mere fact that somebody else is disgusted or morally affronted by somebody’s using technology to reshape herself is normally not a valid reason for coercive interference.
The second reason for this element of individualism is the poor track record of collective decision-making in the domain of human improvement. The eugenics movement, for example, is thoroughly discredited; and other collectivist utopian projects have mostly been total failures.
The preferred alternative to collective decisions in this area, then, is leaving it up to individuals to make their own choices. This general preference, however, is compatible with recognizing that there can be technologies so dangerous that they must be regulated. For example, there has been extensive discussion among transhumanists and other people who think seriously about the future about the hazards of molecular nanotechnology. In its mature form (which is probably still a couple of decades off, at least) it would enable the construction of self-replicating machines on the molecular scale, a kind of mechanical bacteria, that could feed on organic matter and cause the extinction of intelligent life on Earth [5-7]. Obviously, such a technology must not be allowed to fall into the wrong hands, at least not until adequate defenses have been developed.
Some people would be quick to draw the conclusion that, given this description of the eventual capabilities that nanotechnology will unleash, we should ban research into nanotechnology. (This has indeed been suggested in a recent much-debated article by Bill Joy [10].) This conclusion seems wrongheaded, not so much because it fails to take into account the potential for good that could come from nanotechnology if it were safely developed, but primarily because the suggested remedy is likely to increase the risk rather than diminish it. (This has been argued by several people in response to Joy; see e.g. Ralph Merkle [13].) Since nanotechnology, in contrast to, for instance, nuclear technology, requires neither rare raw materials nor large manufacturing plants, it would be impossible to make a ban 100% effective in today’s world-order and with present surveillance technologies. Merkle argues that if we outlaw research into nanotechnology then only outlaws will develop it, which would be the worst possible outcome. By pursuing nanotechnology we stand a chance of also developing defense measures that could thwart an attack by a rogue nation or a terrorist group. However, the balance of reasons for or against various regulatory measures may well change as the practicalities of such interventions are altered by geopolitical developments or by innovations in surveillance technology.
The discussions around this issue illustrate another aspect of transhumanism in practice. Transhumanists insist that our received moral precepts and intuitions are not in general sufficient to guide policy. Instead, debates about controversial technology policy must be framed within a wider discourse that takes account of anticipated future technological developments and the direction in which humankind is headed. Of course, such discourse will inevitably contain speculative elements, and a scientific consensus will in many cases not be possible. But that is just a fact of life, and we should at least recognize it and make it explicit. An important value in transhumanism is therefore to encourage research into and public debate on the topics that are relevant to humanity’s future. There is much at stake, and discussion thus far has been lacking in both quantity and quality. (Much of the most careful thinking about “far-future” scenarios that I am aware of, however, has been done by people who are involved in transhumanism.)
We can thus include in our list of transhumanist values that of promoting understanding of where we are and where we are headed. This value entrains others: critical thinking, open-mindedness, scientific inquiry, and open discussion are all important aids for increasing society’s intellectual readiness.
Furthermore, transhumanists take a keen interest in technological innovations that can improve decision-making. This includes both potential individual intelligence-amplification technologies (such as nootropics, memory and study techniques, wearable computers, information filtering software etc.) and innovations that would make us better as a knowledge community (e.g. Idea Futures [8], collaborative information filtering [4], improvements in the peer review process, better scientific methodology, and various applications of information technology). In the future, the quest for increased understanding may also be a powerful argument for developing greater-than-human machine intelligence.
Because transhumanists expect many of the central conditions for human existence to change over the coming decades, another transhumanist value is that of questioning assumptions and false limits. We cannot rely on past modes of thinking or a fixed philosophy to help us navigate the new circumstances. Instead, there is a general sense of current principles being temporary and open to revision. Transhumanism is a dynamic philosophy, intended to evolve as new information becomes available or new challenges emerge. One transhumanist value is therefore to cultivate a questioning attitude and a willingness to revise one’s beliefs and assumptions.
When entering uncharted waters it is crucial that we not lose our bearings. A robust sense of traditional human values is therefore paramount. To make the right choices, humanity needs more than scientists and people of technical expertise. We need emotionally and intellectually mature people with a deep sense of humanity: an appreciation of how far we have had to travel to get to where we are now, and of the enormous richness and diversity of human cultures, characters, experiences, and achievements. Transhumanism imports from secular humanism the ideal of the fully-developed and well-rounded personality. We cannot all be Renaissance geniuses, but we can strive to constantly refine ourselves and to broaden our intellectual horizons.
Transhumanism tends towards pragmatism – not in the sense of a specific philosophical thesis, but in the sense of holding the engineering mentality and the entrepreneurial spirit in high esteem: taking a constructive, problem-solving approach to challenges, favoring methods that experience tells us give good results, and taking the initiative to “do something about it” rather than just sitting around complaining. This is one sense in which transhumanism is optimistic. It is not optimistic in the sense of having an inflated belief in the probability of success or a passive conviction that things will all turn out well, nor in the Panglossian sense of excusing the shortcomings of the status quo.
Finally, transhumanism advocates the well-being of all sentience, whether in artificial intellects, humans, non-human animals, or possible extraterrestrial species. Racism, sexism, speciesism, belligerent nationalism and religious intolerance are unacceptable. In addition to the usual grounds for finding such practices morally objectionable, there is a specifically transhumanist motivation for this. In order to prepare for a time when the human species may start branching out in various directions, we need to start now to strongly encourage the development of moral sentiments that are broad enough to encompass, within the sphere of moral concern, sentiences that are different from our current selves. We can go beyond mere tolerance to actively encouraging people who experiment with nonstandard life-styles, because by facing up to prejudices they ultimately expand the range of choices available to others. And we may all delight in the richness and diversity of life to which such individuals disproportionately contribute simply by being who they are.
Figure 2 summarizes the transhumanist values that we have been discussing.
Core value:
- The opportunity to explore the transhuman and posthuman realms

Derived values:
- Nothing wrong about “tampering with nature”; the idea of hubris rejected
- Individual freedom and choice in the use of enhancement technologies
- Promoting understanding of where we are and where we are headed
- Critical thinking, open-mindedness, scientific inquiry, open discussion
- Improving individual and collective decision-making
- Questioning assumptions; willingness to revise one’s beliefs
- A robust sense of traditional human values; the well-rounded personality
- Pragmatism; engineering mentality and entrepreneurial spirit
- Caring about the well-being of all sentience

Figure 2. Table of transhumanist values
Other values can be added to those in figure 2. As with any theory, you can get more specific versions of transhumanism by adding claims. There are multiple, mutually incompatible ways of doing so. Transhumanism is not a monolithic worldview, and there is no single inventor or founding work that defines what it is and what it isn’t. Transhumanism is a fairly modest set of shared assumptions and values that can be fleshed out in various ways. People will have different opinions on what should be added; they may all be transhumanists, but of very different sorts.
Some examples of currents within transhumanism are: extropianism (defined by the Extropian Principles, authored by Max More [16]), singularitarianism [9,20] (adding the hypothesis that the transition to a posthuman world will be a sudden event, elicited by the creation of runaway machine intelligence), David Pearce’s Hedonistic Imperative [17] (combining transhumanism with a form of hedonistic utilitarianism), democratic transhumanism (adding emphasis on social awareness and democratic decision-procedures), and survivalist transhumanism (placing especial importance on personal survival and longevity). One could say that there are as many versions of transhumanism as there are serious transhumanist thinkers. We must also keep in mind that the transhumanist outlook is still very much in the process of formation, so any characterization must be tentative.
We can further illustrate the transhumanist position by considering the perspective it gives on two controversial issues in medical ethics: euthanasia and human cloning.
The transhumanist position on death is clear and simple: death should ideally be voluntary. This means, on the one hand, strongly favoring research into human life-extension (or more exactly: human health-span extension), and on the other hand, advocating the right to voluntary euthanasia. The usual provisos apply – that the decision to end one’s life should be well-considered and informed, and not the result of a momentary fit of madness, etc. – but the basic principle that the individual should normally have the right to decide this most personal question, when to end her own existence, is fully affirmed. Transhumanists also find nothing wrong with another person voluntarily agreeing to help out in the act; so voluntary assisted suicide is also acceptable. The right to end one’s life may become even more important if the aging process is one day decelerated or stopped.
Regarding the prospect of human cloning, it is important first of all to place it within the framework of reasonable expectations of what it will and will not lead to. Many objections to human cloning seem to derive from a belief in unrealistic scenarios, such as an evil dictator using the technology to create an army of Arnold Schwarzenegger clones. Such an undertaking would be obviously impractical: it would require tens of thousands of willing surrogate mothers; there would be a delay of two decades before there was any return on the investment; large numbers of brawny soldiers would be of dubious utility twenty years from now; and these “Schwarzeneggers” would likely prefer to do other things with their lives than serve in your army. Moreover, this would at best be an argument against some specific misuses of the technology rather than against human cloning as such. One should also realize that long before there would be a sufficient number of adult human clones to have any significant impact on the world, much more powerful reproductive technologies will be available that will make cloning a relatively moot issue. Genetic engineering will enable parents not just to copy existing genomes but to design new ones, handing them the option of choosing genes for their offspring that correlate with health, intelligence, longevity, physical attractiveness, a pleasant temperament, and other desirable traits. (And it is likely that before there is a large enough number of such genetically-enhanced adults to have a significant impact, there will be even more potent ways of enhancing human capacities, making even this technology obsolete.) Simply put, transhumanists think that human cloning is over-hyped.
Alternatively, it is sometimes insinuated that clones would not be fully human or would lack some aspect of human dignity (e.g. by Nigel Cameron [19] and Leon Kass [11]). That is a view that transhumanists strongly reject. A clone would have the same rights and dignity as any genetically identical twin or other human being. We should judge people on the basis of what they are and what they do, not on the basis of the causal mechanisms (which were in any case beyond their control) that determined how they came into existence. (For an example of a better contribution by an ethicist to the cloning debate, see [18].)
Transhumanists also hold that there is no special ethical merit in playing genetic roulette. Letting chance determine the genetic identity of our children may spare us from directly confronting some difficult choices. But the innocence we might think we gain is illusory because we are in effect making a choice when we decide to “go natural”, a choice that can have as devastating long-term consequences as a failed attempt to intervene.
One cogent reason for treading carefully in the early stages of a technology like human cloning is the risk that it could result in birth defects. However, one should keep in mind that no progress or medical breakthrough is possible without taking risks. We need a sense of proportion. American settlers (a population which included pregnant women) accepted great risks for themselves and their children when they decided to leave their home countries to go west. Yet most of us don’t think that they were acting immorally by accepting these risks. Today’s terra incognita may be medical rather than geographical, but it still needs adventurers.
Even more important than its ramifications for current topics in medical ethics is the contribution transhumanism can make in identifying crucial value issues and in suggesting ideas for how to begin to analyze the choices that humanity will be confronted with over the coming decades. Some of these choices will have profound consequences for the future of our species, and it is therefore important to start to think about them as early as possible so that we have more time to deliberate.
For instance, we could begin to think about what we should do when we get the ability to build superintelligent machines [1,12,14,15,21]. What goals should they be given? Who should decide on that? How could we best use superintelligence to help answer our ethical conundrums? As another instance, we could begin thinking about what to do with molecular assemblers; how they need to be regulated; and what infringements on national sovereignty or personal privacy may be legitimized by the necessity to prevent nanotechnology from being used to destructive ends. We could also confront more vigorously than we already do the ethical problems involving the creation of persons: what interventions do we have the right or duty to perform to enhance the lives of not-yet-conceived persons? Is a larger population, other things equal, better than a smaller population? And so on. I have no doubt that creatively engaging these problems will lead to interesting results.
1. Bostrom, N. (1998). How Long Before Superintelligence? International Journal of Futures Studies, 2.
2. Bostrom, N. et al. (1998). The Transhumanist Declaration. URL: http://www.transhumanism.com/declaration.htm.
3. Bostrom, N. et al. (1999). The Transhumanist FAQ. URL: https://www.transhumanist.com/.
4. Chislenko, A. (1997). Automated Collaborative Filtering and Semantic Transports. URL: http://www.lucifer.com/%7Esasha/articles/ACF.html.
5. Drexler, E. (1985). Engines of Creation: The Coming Era of Nanotechnology. London: Fourth Estate.
6. Drexler, E. (1992). Nanosystems. New York: John Wiley & Sons, Inc.
7. Freitas, R. (2000). Some Limits to Global Ecophagy by Biovorous Nanoreplicators, with Public Policy Recommendations. Zyvex preprint, May 2000.
8. Hanson, R. (1995). Could Gambling Save Science? Encouraging an Honest Consensus. Social Epistemology, 9:1, 3-33.
9. Hanson, R. et al. (1998). A Critical Discussion of Vinge's Singularity Concept. Extropy Online.
10. Joy, B. (2000). Why the future doesn't need us. Wired, 8.04.
11. Kass, L. (1997). The Wisdom of Repugnance. The New Republic, 2 June 1997, 22.
12. Kurzweil, R. (1999). The Age of Spiritual Machines: When computers exceed human intelligence. New York: Viking.
13. Merkle, R. (2000). Nanotechnology: Designs for the Future. Ubiquity, 1(20).
14. Moravec, H. (1989). Mind Children. Cambridge, MA: Harvard University Press.
15. Moravec, H. (1999). Robot: Mere Machine to Transcendent Mind. New York: Oxford University Press.
16. More, M. (1999). The Extropian Principles 3.0. URL: http://www.extropy.com/extprn3.htm.
17. Pearce, D. (2001). The Hedonistic Imperative. URL: https://www.hedweb.com/hedab.htm.
18. Pence, G. (1998). Who's Afraid of Human Cloning? Oxford: Rowman & Littlefield.
19. Sharp, D., & Sharn, L. (1997). Big Questions for Humanity. USA Today, 1 April 1997.
20. Vinge, V. (1993). The Coming Technological Singularity. Whole Earth Review, Winter issue.
21. Yudkowsky, E. (2001). Friendly AI 0.9. URL: http://singinst.org/CaTAI/friendly/contents.html.