Nick Bostrom
[Futures, Vol. 35: 8, pp. 901-906]
In an earlier paper in this journal[1], I sought to defend the claims that (1) substantial probability should be assigned to the hypothesis that machines will outsmart humans within 50 years, (2) such an event would have immense ramifications for many important areas of human concern, and that consequently (3) serious attention should be given to this scenario. Here, I will address a number of points made by several commentators.
1.
I am grateful to the five commentators on my previous paper for their reflections and criticisms. Here, I shall try to respond to some of their concerns.
Ellen Jenkins[2], in her commentary, agrees with (3). She objects, however, to (1), claiming that I have confused “progress towards the immediate and plausible goals of AI with progress towards their more ambitious aims to create autonomous, hyper-intelligent beings.”
On the contrary, I am fully aware that the overwhelming majority of research in AI today is directed towards smaller and more immediate goals. The arguments I presented for (1) did not rest on any premiss to the effect that current AI research is directed towards the creation of hyper-intelligent beings. Jenkins offers no argument for rejecting (1) other than noting that many AI researchers doubt the possibility of fully autonomous and conscious AI. This fact is not inconsistent with (1), especially considering that many other AI researchers do believe in that possibility. (Incidentally, my target paper makes no assertion about whether future machines would be conscious.)
Jenkins further notes that I claim “that computer-environment interaction is a trivial problem, easily solved with robotic limbs and video cameras,” which she thinks is my “by far most astounding claim.” “It assumes,” she says, “that this level of experience (signal processing) is somehow equivalent to the human level of experience complete with our cultural and social frameworks and the role played by emotion in perception and cognition.” I deny that it assumes that.
Granted, normal human sensory experience and interactivity are far richer than current I/O technologies can provide for computers. However, we know from cases of humans who have severe sensory and/or motor deficits that even highly restricted I/O channels are compatible with the development of normal cognitive faculties. People can learn to get by with very narrow-bandwidth I/O, especially if the deficits have been present from early in life; indeed, they can get by with much less bandwidth than current information technology can give computers, which includes high-resolution video at high frame rates, high-quality stereo sound, and sensory capabilities that humans lack altogether (e.g. infrared vision). This suggests that we could easily provide machines with sufficiently rich I/O. The hypothesis that I/O will be a limiting factor on progress towards human-level machine intelligence does not fit the evidence.
Jenkins points out the notorious difficulty of measuring intelligence, but then goes on to assert that “to AI developers, and to Bostrom, levels of intelligence are expressions of processing power & speed of calculations (like millions of instructions per second). They have estimated human level intelligence by estimating the number of neurons in the brain and estimating the number of the interactions between them.”
This objection rests on a misunderstanding of the role that the estimate of the human brain’s processing power plays in my argument. Processing power is not used as a measure or a definition of intelligence, but simply as an estimate of what kind of hardware would be needed to replicate human cognition. Intelligence, however it is defined, will of course also be a function of the software running on this hardware.
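For concreteness, the kind of estimate at issue is a simple back-of-the-envelope calculation along the following lines; the specific figures (on the order of 10^11 neurons, 10^3 to 10^4 synapses per neuron, and average signalling rates around 10^2 Hz) are commonly cited approximations used here purely for illustration, not figures asserted in the target paper:

\[
\underbrace{10^{11}}_{\text{neurons}}
\times
\underbrace{10^{3}\text{--}10^{4}}_{\text{synapses per neuron}}
\times
\underbrace{10^{2}\,\text{Hz}}_{\text{signalling rate}}
\;\approx\;
10^{16}\text{--}10^{17}\ \text{operations per second.}
\]

A figure of this kind bounds only the raw hardware resources that emulating the brain’s processing might require; it says nothing about the intelligence of the software running on that hardware.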
Jenkins takes me to task for not discussing the contributions emotions make to human functioning. My omission of explicit discussion of emotion should not be construed as a claim that emotions are unimportant in our thinking. In my brief article, I made no attempt to disentangle the various faculties that contribute to human thinking; as far as my argument is concerned, affective computing may play an arbitrarily large part. Note that the strategy I outlined for developing human-equivalent intelligence (reverse-engineering the human brain) would yield machines that use affective computing to the extent that this helps them function on a human level. Nowhere did I claim or imply that their software would allow for only logical or abstract reasoning.
The word “outsmart” appeared in the title of my original paper (“When Machines Outsmart Humans”). Jenkins notes that the term “outsmart” insinuates that the machines in question would necessarily be antagonistic to humans and plot to depose us. I did not mean to imply that, and I concede that the title is in this respect infelicitous.
For reasons that I confess I find opaque, Jenkins seems to take comfort in our ability to “just be”:
If our ability to think is what separates us from the animals, it might well be that it is our ability to just be, our ability to 'not think' that would separate us from the thinking machines. Before we begin to feel threatened or advise others to feel threatened, we should make sure we actually are threatened.
I have several machines and gadgets that “just are” in a box under my bed, doing nothing. The alleged inability of machines to just be and not think seems hard to take seriously as a ground for rejecting (3), and it is disconcerting to note that at least one commentator appears to think otherwise.
2.
In contrast to Jenkins, Steve Fuller[3] agrees with (1) but seems to disagree with (2) and (3):
Bostrom commits the familiar fallacy of futurologists who see history as leading from the front (i.e. the elites), rather than the middle (i.e. the ordinary) or the rear (i.e. the disinherited). The recent round of global terrorism merely provides a violent instance of the general tendency – that the course of history is determined by knowledge that is available to a critical mass of reasonably well-resourced people who manage to engage in pursuits that reinforce each other. If Bostrom had provided some reasons for thinking that computers with superhuman intellects would be readily available to ordinary middle class people across the globe, then I would start taking his concerns more seriously.
For the record, although I do not discuss it in the original paper, my view is that the course of history can at different points be influenced by those in “the front,” those in “the middle,” and those in “the rear.”
I argue in the paper that the consequences of human-level and (shortly thereafter) super-human levels of machine intelligence will be profound. One of the consequences I point out, that AI programs can be copied at very low cost, suggests that shortly after the first human-level AI is created, there may come to exist a great number of such AIs. This would add to the impact. I do not know how long it will take after the breakthrough until “a critical mass of reasonably well-resourced people” get access to their own AIs. However, I would maintain that the consequences can be large even if initially only a small number of people have access to these AIs. Technological inventions made by these initial AIs could directly influence many fields. For example, the AIs may be very good at creating advanced nanotechnological designs, including molecular assemblers. Molecular assemblers could replicate themselves quickly and cheaply[4], and they could manufacture a large range of products, which could thus be made available on a wide scale in a short time. These products would include computer hardware, making further proliferation of AIs possible. Other products may include gadgets, materials, foods, spaceships, and weapons of mass destruction. It seems worth taking these consequences seriously ahead of time.
“Another questionable feature of Bostrom’s scenario,” writes Fuller, “is that the human denizens of 2050 will find the presence of superior machine intelligences profoundly disorienting.” This, however, is not a feature of the scenario I described. I took no stand on the extent to which the presence of machine intelligences would be disorienting to future people.
Fuller opines,
if history is truly our guide, then these heuristics will probably fail to provide adequate solutions to genuinely new problems. Yet, the status of machines as epistemic (and, by extension, ethical) agents will probably hang on their handling of problems that have eluded human cognition. Thus, it is unclear that simply creating computers capable of solving familiar problems faster than humans will appear especially impressive to those entrusted with granting personhood.
The approach to developing human-equivalent machines that I outlined involved reverse-engineering the human brain. If this approach is successful, the resulting machines would be at least as capable of solving genuinely new problems as we humans are: they could make use of all the heuristics that we use, and of whatever other heuristics or methods are discovered in the process.
Fuller writes,
the distinctiveness of computer technology lies not in the capacity of machines to approximate human intelligence in the not-too-distant-future, but in the spread of today's state-of-the-art machines to a wider range of people who can use them alternately for profit, war, instruction or fun.
I am doubtful that it makes much sense to speak of “the distinctiveness” of computer technology, as if there were exactly one feature of computer technology alone worthy of consideration. The spread of today’s state-of-the-art machines to a wider range of people is interesting and may have significant consequences, but this fact does not detract from the need to also focus attention on the possible future event of the creation of truly intelligent machines.
3.
According to a third commentator, Jerry Ravetz[5], “any fantasies about those Turing machines just getting smarter and smarter are quickly cured by an acquaintance with the discussions of the malfunctioning of IT systems.” This view, incidentally, is also elaborated in the recent writings of Jaron Lanier. The gist of the argument is that increasingly complex software running on increasingly fast computers is also increasingly bug-ridden and bulky to such an extent as to cancel any gains in functionality.
In my view, this argument at best works only for the classical approach to AI, and even there only with significant qualifications, for classical AI is making some progress, only not as much or as rapidly as its early practitioners expected. However, the problem of software complexity need not apply to the same extent to bottom-up approaches. If we model our AI on the human brain, we would build it in such a way as to tolerate certain kinds of noise and error. Human brains are built out of very unreliable and (on a neuronal level) somewhat haphazardly organized components; nevertheless they manage to function quite reliably in a wide range of circumstances. Classical AI, by contrast, tends to be brittle: it does not bend or degrade gracefully when something is not quite right, but instead ceases to function altogether. By incorporating the more robust, error-tolerant paradigms used in cortical processing, as suggested in my original article, our AI programs may likewise be able to avoid death by complexity.
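As a toy illustration of this point (and nothing more; neither the scheme nor the numbers come from the target paper), the following sketch shows how a computation aggregated over many unreliable components can produce reliable output, whereas a single brittle component passes any error straight through to the result. The error rate, the number of redundant units, and the majority-vote scheme are all illustrative assumptions:

```python
import random

def noisy_unit(correct: bool, error_rate: float = 0.2) -> bool:
    """A single unreliable component: gives the right answer only most of the time."""
    return correct if random.random() > error_rate else not correct

def redundant_system(correct: bool, n_units: int = 101) -> bool:
    """Error-tolerant design: aggregate many unreliable components by majority vote."""
    votes = sum(noisy_unit(correct) for _ in range(n_units))
    return votes > n_units / 2

def brittle_system(correct: bool) -> bool:
    """Brittle design: a single component, so any error propagates to the output."""
    return noisy_unit(correct)

if __name__ == "__main__":
    trials = 10_000
    robust_hits = sum(redundant_system(True) for _ in range(trials))
    brittle_hits = sum(brittle_system(True) for _ in range(trials))
    print(f"redundant system correct: {robust_hits / trials:.1%}")        # close to 100%
    print(f"single brittle component correct: {brittle_hits / trials:.1%}")  # roughly 80%
```

The design point is simply that aggregation over redundant, individually unreliable parts can yield reliable overall behaviour, which is the sense in which brain-inspired architectures may sidestep the brittleness associated with ever larger monolithic software.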
The software problem does seem to be harder than both the problem of building the requisite computing power and the problem of providing adequate I/O (which, as I argued earlier, has already been essentially solved). But Ravetz does not provide any compelling ground for assuming that classical AI will never reach its goal (or even that it will not do so within 50 years), much less for dismissing bottom-up approaches inspired by findings in neuroscience, or the approach based on direct emulation of a particular human brain in silico, as in the uploading scenario that my target paper also described.
Elsewhere in his commentary, Ravetz says that “Bostrom's essay is useful as a warning of the sorts of things we should start expecting now, so that the lines of struggle can be outlined without further delay.” Here, seemingly, he accepts the possibility that those “fantasies about Turing machines getting smarter and smarter” may not forever remain unreal.
While my paper’s purpose was indeed to encourage people in general, and futurists in particular, to begin to discuss the prospect of real AI in earnest, I wish to express some reservations about the suggestion that we should immediately begin to outline “the lines of struggle.” This injunction makes it sound as if the important thing were to begin fighting for something, anything, rather than to begin carefully considering all sides of the issue. I trust that the author did not intend to imply this, but I would nevertheless emphasize that my position is that we as yet understand so little about the consequences of human-level and greater-than-human AI that we should approach the issue with open minds and try to avoid polarizing the debate from the outset. No doubt this kind of machine intelligence has enormous potential for beneficial uses, while also carrying risks of various sorts of disaster.
4.
The fourth commentator, Graham Molitor[6], seems to accept all of my paper’s main contentions. I shall therefore confine myself to two very brief remarks on his commentary.
First, he mentions that there are now supercomputers more powerful than the one my paper cited as the most powerful. This is because my paper was written in 1999, and computing power has continued to grow in the interim, roughly in accordance with Moore’s law.
Second, it is a persistent myth that we humans use only a small percentage of our brain capacity. All parts of the brain are constantly active, and all parts that have been investigated seem to perform some function. Obviously, we can improve our performance through practice and education, but it makes no clear sense to say that we are operating at a certain percentage of maximum capacity.
5.
The last of the five commentators, Rakesh Kapoor[7], does not seem to take issue with any of the claims I tried to defend in my paper, but instead expresses his view that the development of human-level and greater machine intelligence is undesirable. “AI,” he writes,
is value-less and soul-less. It has no link with any human moral or spiritual values. … Will AI help us deal with issues that affect the not so better off half of the world population? Will it help us to overcome poverty and hunger – 1.2 billion people living on less than $1 a day, climate change, the destruction of natural ecosystems and the problems of war, terrorism and hatred?
If these questions are meant to rhetorically express a demand, note that they set a very high standard for the introduction of a new technology. Not only must it confer some benefits on some people, but it must also reverse climate change, abolish world hunger and poverty, restore natural ecosystems, and solve the problems of war, terrorism and hatred. However, it would seem that even if AI could help only slightly with just one of these problems, it could count as a hugely valuable contribution. (Incidentally, it is hard to think of any technology that would have a greater chance of solving all these problems in one fell swoop than superintelligence, with the possible exception of molecular manufacturing, which in all likelihood is one of the technologies that superintelligent machines would rapidly develop.)
I agree with Kapoor about the tragedy of the vast and unfair inequalities that exist in today’s world, and also that there would be considerable risks involved in creating machine intelligence. However, machine intelligence might also serve to reduce certain other kinds of risk. An assessment of whether machine intelligence would produce a net increase or a net decrease in overall risk is beyond the scope of my original paper and of this reply. (Even if it were found to increase overall risk, which is very far from obvious, we would still have to weigh that fact against its potential benefits. And if we determined that the risks outweighed the benefits, we would then have to ask whether attempting to slow the development of machine intelligence would actually decrease its risks, a hypothesis that is also very far from obvious.)
Kapoor notes my participation in the transhumanist movement. Far from treating technology as an end in itself, or valuing only the thrill of the ride (the motives Kapoor attributes to artificial intelligence researchers, and by implication to transhumanists?), transhumanism is an effort to ethically steer technological development towards broad human benefit. The reason for my own involvement, and indeed for writing the target article, is that I am deeply concerned about the outcomes and destinations of technological advancement in general and of AI in particular.
References:
Bostrom, N. (2003) “When Machines Outsmart Humans,” Futures, 35:7.
Drexler, E. Nanosystems: Molecular Machinery, Manufacturing, and Computing. (New York: John Wiley, 1992).
Fuller, S. “When History Outsmarts Computers,” Futures, ??.
Jenkins, E. “Artificial Intelligence and the Real World,” Futures, ??.
Kapoor, R. “When Humans Outsmart Themselves,” Futures, ??.
Molitor, G. “Would machines actually cope?,” Futures, ??.
Ravetz, J. “Outsmarting Turing,” Futures, ??.
[1] Futures, 35:7 (2003).
[2] Ellen Jenkins’s Artificial Intelligence and the Real World.
[3] Steve Fuller’s When History Outsmarts Computers.
[4] See e.g. Drexler, E. Nanosystems: Molecular Machinery, Manufacturing, and Computing. (New York: John Wiley, 1992).
[5] Jerry Ravetz’s Outsmarting Turing.
[6] Graham Molitor’s Would machines actually cope?
[7] Rakesh Kapoor’s When Humans Outsmart Themselves.