Goo prophylaxis
How do we prevent a nanotechnological disaster?
This document contains the postings I made to a huge thread on the
Extropians mailing list in the summer of 1997. I started the discussion
by asking how we might counteract nuclear proliferation, but the thread
soon mutated into a debate about how we should deal with the threat to
the survival of intelligent life posed by the development of nanotechnology.
You will probably disagree with many things I say; indeed, I no longer
agree with everything I wrote myself. The nature of our discussion
was a joint exploration of ideas, an informal experimentation with different
viewpoints. The text that follows is unedited and probably contains many
typographical and other mistakes, but I hope it will convey some of the
excitement we all felt in discussing these important issues.
Subject: Hanson antiproliferation method?
Date sent: Mon, 18 Aug 1997 23:40:25
I would be interested to hear your opinions about the following
problem (and especially Robin Hanson's opinion, since it's close to
his areas of interest):
How do we best deal with the danger of nuclear proliferation and
the spread of other weapons of mass destruction?
It is conceivable, for example, that there will be some early stages
of nanotechnology, before the onset of a singularity-like event,
where nanotechnology could provide destructive powers sufficient to
eliminate all intelligent life on earth. (And this might be before
any significant space-colonization has taken place.)
One possible answer, of course, is a world government. Does anybody
have what they think is a better idea, or an idea of how best to
implement a world government?
It seems clear to me that this problem needs to be addressed. I've
heard people defending "each one's right to his or her own nuke", but
that principle seems to me absurd. (Since each nuke could kill more
than a million people, and since probably more than one person per
million would use his nuke, it would kill all of us.)
Subject: Re: Hanson antiproliferation method?
Date sent: Wed, 20 Aug 1997 21:47:29
> the world government were to limit itself exclusively to military
> affairs and the prevention and suppression of outbreaks of open
> warfare (local civil wars, and the like), leaving all other affairs
> to substantially smaller regional governments with free intermigration,
> then I might assent, but I don't see any path in that direction,
> and anything much more intrusive or activist than that is likely
> to provoke and escalate more disputes than it settles and resolves.
We will of course never have an "ideal" world government, but we
should be willing to pay a high price in terms of inefficiency if
that will reduce the risk of total annihilation.
What are your grounds for thinking that a reformed, democratic United
Nations would provoke and escalate more disputes than it would
settle?
Many wars seem to originate in differing interpretations of some
legal or historical circumstance. I have often thought: if only
there were an impartial arbiter to which the rivalling nations could
submit their cases, and a responsible international force that
could implement the judgements.
The only realistic candidates in the foreseeable future are either
the UN or a coalition led by the USA. I doubt that most people would
accept the USA as an international Dad in the long run.
I would suggest a reformed UN in which the USA and the other powers
had an influence in some proportion to their real power.
This would presumably also lead to an inefficient bureaucracy that
wastes a few billion dollars per year, but so what?
> Nicholas, were these thoughts by any chance sparked by Eric Drexler's
> Extro-3 after-dinner quotation from Leon Trotsky? "You may not be
> very interested in war, but war is very interested in you!"...
No, these are issues I have long been worried about. Drexler
has told me that he intentionally de-emphasises the darker scenarios
in his speeches and publications for strategic reasons. One
of the reasons why Drexler is the person that I admire perhaps more
than anybody else is that he has not only realized how good the
future could be if things go well, but is also fully aware of how
bad things could get if they go wrong.
Subject: Re: Hanson antiproliferation method?
Date sent: Wed, 20 Aug 1997 21:55:51
> I think the best way to deal with this is to try to minimize the
> risks, while making sure that if something awful happens you can
> deal with it: in a nanotech world, I would invest in active shields
> (several of them, and tweak them to be slightly different from
> everybody else's), in a world with plenty of biotech I would make
> sure I had connections to people with good medical knowledge (CDC?)
> and so on. Not ideal, but perhaps the best we can get.
Does anybody know of any work about the feasibility of active shields
as a defense against a nanotech enemy?
Drexler has told me that he believes there is a possible stable
situation with nanotech and intelligent life. I tend to agree with
this. The problem is that it is not impossible that the only way to
get there is to "tunnel" through an impossible region of
civilization-space.
Subject: Re: Hanson antiproliferation method?
Date sent: Thu, 21 Aug 1997 21:25:42
> Nicholas Bostrom writes:
> > We will of course never have an "ideal" world government, but we
> > should be willing to pay a high price in terms of inefficiency if
> > that will reduce the risk of total annihilation.
> Sure, but it seems to me that most serious outbreaks of military
> disorderliness nowadays are civil wars, not international wars. I
> can't think of any reasons to believe that a world government would
> be a more effective means of reducing the risk of total annihilation
> from civil wars than the current setup, or an even more
> decentralized one.
The UN has stopped the shooting in Bosnia, and there is some hope
that the peace will hold, though a shortage of resources might force
a withdrawal of the NATO troops next year. There are several other
examples of successful UN peacekeeping missions (and some failed
ones), but this is still a rather new phenomenon and learning is
still in progress, so I think there is room for hope, especially if
the UN is given adequate funding.
> > What are your grounds for thinking that a reformed, democratic
> > United Nations would provoke and escalate more disputes than it
> > would settle?
> Because, for instance, the reformed democratic government that I
> endure, the federal government of the United States of America and
> its wholly-owned subsidiaries, provokes and escalates more disputes
> than it settles. One could mention the police execution of the
> Black Panther leadership in Chicago a couple decades ago, or the
> mess in Waco, Texas a few years ago.
> > Many wars seem to originate in differing interpretations of some
> > legal or historical circumstance. I have often thought, oh, if
> > there only were an impartial arbiter to which the rivalling nations
> > could submit their cases, and a responsible international force
> > that could implement the judgements.
> What makes you think that a reformed, democratic United Nations
> would be impartial? (Or that their international force would be
> responsible, though I find this far easier to imagine than an
> impartial United Nations. The current United Nations is neither.)
The UN itself would not need to be impartial (though it would
presumably be much less partial than the parties that are fighting);
it would only need to implement the decisions of some independent
tribunal.
> > The only realistic candidates in the foreseeable future are either
> > the UN or a coalition led by the USA. I think it is dubious that
> > most people would accept the USA as an international Dad in the
> > long run.
> I think any increase in the influence that the US federal government
> has over world affairs would be disastrous. It has far too much
> busybody meddling influence as it is. I find both of your "realistic
> candidates" quite frightening.
The future is frightening, be brave.
> > I would suggest a reformed UN in which the USA and the
> > other powers had an influence in some proportion to their real
> > power.
> What the heck is "real power" and how do you propose to measure
> it? (I hope you're not going to say "in watts". ;)
By real power I mean the power they have in the world as opposed to
the power they have within the present UN. I don't have any specific
proposals for how to measure it, but I think a UN is less likely to
function well if some nations perceive that they are
underrepresented. (A rough measure of real power would be GNP.)
Subject: Re: Goo prophylaxis (was: Hanson antiproliferation method?)
Date sent: Thu, 21 Aug 1997 22:27:18
> On Wed, 20 Aug 1997, Nicholas Bostrom wrote:
> > Does anybody know of any work about the feasibility of active shields
> > as a defense against a nanotech enemy?
> We know immune systems do a fairly good job against natural "goo",
> although at a noticeable price: it consumes a lot of energy, and we
> multicellular animals have evolved sex (with all its complications
> and further energy losses) to improve its chances.
> So I think active shields are feasible, it is just unlikely they
> will be perfect. In the future even our equipment might get colds... :-)
If we want to use the immune system analogy, the term "active
shield" is perhaps a little misleading. One thinks of some kind of
spherical wall, but is what you have in mind something that would
permeate the whole domain? If it's just a wall, might it not be much
cheaper to blow a hole in it (with (nuclear?) explosives) than to
rebuild it? Then nanites could be sent in and devastate the
interior.
If it's not just a wall then there is still the question of power
balance. I agree with you that biology gives us hope in this respect:
higher organisms can and do survive in an environment with naturally
evolved viruses and bacteria. We need to consider:
(1) What if the parasites were designed instead of evolved? (Design
is better than sex. Remember, just because we might think sex is
more interesting doesn't mean it's more plausible!) Perhaps the fact
that the defence would also be designed would counterbalance this
advantage.
(2) What if new chemical reactions are introduced? Will complicated
higher organisms still be viable? Exactly what properties of the
system does this depend on? Does anybody have any idea of how to get
a handle on this problem?
Subject: Re: Hanson antiproliferation method?
Date sent: Fri, 22 Aug 1997 11:45:17
> UN committed genocide against the Bosnian Muslims by enforcing an
> arms embargo against Yugoslavia at a time when most of the organized
> Yugoslavian armed forces, and almost all of their materiel, was in
> Serbian hands. You credit them with stopping, three years later,
> a bloodbath that they (inadvertently?) laid the groundwork for.
> Aided and abetted by the United States, of course.
The UN intervention was delayed too long. But for all I know, if it
hadn't intervened at all, the killing might still have gone on today,
or one of the combatants might have won and done terrible
things to the defeated people.
> A green plague of ten different high-latency fatal strains of
> virus will be quite useful to small potatoes terrorists. I
> don't think the size of the potatoes is at issue here. Small
> potatoes can be quite deadly, especially nowadays.
I agree. Still, the bigger the powers that fight, the sooner they
will start to have access to weapons of mass destruction. So even if
we can't eliminate all the small conflicts as well, we might be able
to delay the use of, say, destructive nanoreplicators until adequate
defences have been developed.
> > The UN itself would not need to be impartial (though it would
> > presumably be much less partial than the parties that are
> > fighting); it would only need to implement the decisions of some
> > independent tribunal.
> I still don't understand why you think that "world government"
> or transferring more power from the current nations to the UN
> would increase the likelihood that large civil disputes would
> be settled by the decisions of an independent tribunal. If the
> disputants don't accept the decisions of the tribunal (and they
> often don't) what is the difference from the current situation?
The difference is that the UN would be there to enforce the decision,
whether the disputants accepted it or not. If somebody issues a
"resolution", Saddam Hussein would wipe his ass with it; but a cruise
missile is something that needs to be taken into account.
> Military troubles are worst in the poorest nations. Allocating
> political power by GNP will do almost nothing to soothe these
> troubles. Better to keep poor countries and rich countries politically
> decoupled from one another. What you are proposing is to arrange
> for the rich countries to rule the poor countries (that's what your
> "representation according to 'real' power seems to come down to),
> which I guarantee will lead to war. It's been tried already.
A clarification of my earlier statement: I only said that it might
be advisable (I am not really certain of this) that a nation's
representation in the UN be "in some proportion" to its real power.
This would mean that real power would be taken into account, but
other factors, such as population size or the extent to which a
nation would be affected by UN decisions, would also be given a role.
The overall result would almost certainly be that poor nations were
given somewhat *more* influence than they have in the real world
today.
Hey, let's take a step back and look at it this way. You have your
own values, involving, perhaps, personal freedom, immortality,
seeing the ones you care for prosper, etc. Surely you won't accept
that some fucked-up religious extremist lays these values in ashes by
releasing some self-replicators, or that the inhabitants of your
city are gassed as a result of a failed blackmail attempt by a wicked
dictator. Yet these things will happen unless they are actively
prevented. One way one might try to prevent them is by having a
global organization with legislative powers that surveils the use of
the most dangerous technologies and prevents irresponsible agents
from acquiring them. The only entities in the real world even
remotely resembling this are the UN and a US-led coalition. In my
original post I asked if someone had a better proposal, and I hope
that someone has, but so far I have seen none.
Subject: Re: Goo prophylaxis (was: Hanson antiproliferation method?)
Date sent: Sat, 23 Aug 1997 12:44:55
> notice that something is building TNT out of matter in my vicinity,
> I'm not going to stay around...
Hmm, for concreteness, let's assume you are the size of a house. With
nanotech, the time it takes to assemble sufficient TNT to blow a
house to pieces would be much less than the time it would take you to
get away.
Ok, so you put wheels on your house, or wings and a jet engine. But
the aggressor could do the same; and at least today, small missiles
can usually catch up with big warplanes. Or even better: He could
build TNT all over the place to begin with, so you have nowhere to
run.
Perhaps he can't build TNT all over the place, because there are
other people living there, secluded within their own walls. We can
call this the bee-hive scenario; there are still independent,
humanoid beings, and each has her own cell, the cells being packed
side by side in a three-dimensional structure. The surface of the
hive is a heat emitter and energy collector... No, I need to think this
through before I write about it. I might do a nanotech strategies
paper when I have finished the ones I am working on now (one of which
is on what a superintelligence could be expected to do etc.).
> If you know what defenses I have, then I'm vulnerable (think of the
> AIDS virus, which uses the immune system), but if I have a system
> which you know fairly little about, then it is harder to design a
> workable attack (hint: never let your enemy get fingernail clippings
> or spittle, he can use them to bring down black magic on you!). So
> it might be a good idea to design *and* evolve your defenses to make
> them unique. And a good basic structure would give you time to act
> ("Oh shit! My immune nanites can't stop the infection. Let's
> call...")
Why design *and* evolve? I mean, obviously they would evolve if we
first design one version and then an improved version; but why would
they evolve in the sense we were discussing, i.e. by sexual
reproduction and natural selection? If there is an advantage in having
a system that is unknown to your enemy, then change the design often,
or include random elements. This avoids seriously maladaptive
offspring, introduces the unknowability exactly where it matters,
and is quicker. Besides, natural evolution would often be
predictable by the enemy if he knows the fitness landscape.
> > (2) What if new chemical reactions are introduced? Will complicated
> > higher organisms still be viable? Exactly what properties of the
> > system does this depend on? Does anybody have any idea of how to
get
> > a handle on this problem?
> Very broadly, the question seems to be whether a diamondoid
> mechanosynthetic or an aqueous carbochemical biomass has the lowest
> chemical energy; the biosphere would tunnel to the lowest if given a
> reaction pathway. I guess diamond is the stablest, since cells cannot
> digest it, but it might be too energy-intensive for nanites competing
> with each other to digest too (thick diamondoid sediments on the
> ocean floors; in time they will form a very fun form of "chalk").
> I think complicated organisms are still quite viable, since they
> have the advantage of fast cultural evolution before biological
> evolution. It doesn't matter if their biology is about diamond or
> water.
I would find it very interesting if you could expand a bit on this.
Subject: Re: Goo prophylaxis (was: Hanson antiproliferation method?)
Date sent: Sun, 24 Aug 1997 15:21:20
> Design is good at jumping over deserts in the fitness landscape,
> while evolution is good at searching it. If you combine occasional
> re-design with evolution, you can add optimizations and clever tricks
> to the powerful abilities of evolutionary programming.
I think there is something that should be pointed out clearly here.
What you seem to mean is that genetic algorithms will play an
important role in designing the immune system. However, what you
often say is that *evolution* will do that. Now, that is not wrong,
but it might mislead some people. Although genetic algorithms can be
said to describe some form of evolution, there is a world of
difference between natural, blind, Darwinian evolution and specially
designed genetic algorithms that can be Darwinian or Lamarckian, with
any number of sexes and variable sizes of the property chunks that
are inherited, where all the parameters can be played with by an
insightful experimenter who guides the process with his knowledge
and overseeing intelligence. These are two separate things, but many
people tend to confuse them.
evolutionary computing: designed & controlled
(natural) evolution: blind & controlling
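To make the contrast concrete, here is a minimal genetic-algorithm
sketch in Python (a toy problem; every name and parameter is
hypothetical). The point is how many knobs -- population size,
mutation rate, selection rule, chunk size -- sit under the
experimenter's control, which is exactly what blind natural
evolution lacks:

    import random

    GENOME_LEN = 32
    POP_SIZE = 50         # set by the experimenter
    MUTATION_RATE = 0.02  # set by the experimenter
    GENERATIONS = 100

    def fitness(genome):
        return sum(genome)  # toy objective: count the 1-bits

    def crossover(a, b):
        cut = random.randrange(1, GENOME_LEN)  # chunk size: our choice
        return a[:cut] + b[cut:]

    def mutate(genome):
        return [bit ^ (random.random() < MUTATION_RATE) for bit in genome]

    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:POP_SIZE // 2]  # truncation selection: our choice
        pop = parents + [mutate(crossover(*random.sample(parents, 2)))
                         for _ in range(POP_SIZE - len(parents))]

    print("best fitness:", max(map(fitness, pop)))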
> Design is good at jumping over deserts in the fitness landscape,
> while evolution is good at searching it
This would seem to lead to the prediction that the more that is known
the less useful evolutionary computing will be.
> a certain overhead, but it is an overhead for your immune-computer
> not for your design capabilities.
What do you mean? My design capabilities depend on the
design-programs I run. Since the designs in question are designs for
immune-defence systems, they would run on the "immune-computer".
Hence any overhead for the immune-computer is an overhead for my
design capabilities.
> As an example, assume the worst scenario happens and an escaped badly
> programmed dishwashing nanite
This does not seem to be the worst scenario to me. The worst
scenario would be something deliberately built to eliminate all
life. (It would be even worse if it was designed to torture it.)
> starts to turn all organic life
Why just organic life? Why not dead organic substances, earth, etc.?
And is there any good reason why it could not change the earth's
crust into something with a higher binding energy?
> more of itself. It will spread with the speed of an bacterial
> infection, and be quite deadly.
Why couldn't it spread much faster? Bacteria are limited to some
specific kinds of hosts; the nanites could attack any organic
material and many inorganic ones too. And if they were deliberately
designed, they could transform themselves into missiles after they
had eaten enough, and then swoosh across the seven seas in a very
short time.
> Of course, as soon as this becomes known there will be several
> groups who quickly enclose themselves in their already built
> underground bases (Cheyenne Mountain is an example that exists
> today, and with this level of nanotech I think there will be more
> "nanosurvivalists" waiting for the disaster).
They might have to go there pretty quickly, like after a nuclear
alert. They will have to make sure that not a single little nanite
finds a way in. They will have to hope that the nanite doesn't eat
rocks and cement. They will have a limited time to figure out how to
use their very limited resources to eliminate an enemy that already
forms a thick, deadly layer over the whole earth. They have to hope
that the nanites weren't deliberately designed to pile up explosives
on top of their bunker and blow it all away. --Yes, they *could* make
it, at least in a Hollywood movie...
> while the biosphere turns to dishwashing goo there will be people
> around who are very motivated to find a weapon against it, for
> example a tailored "predator nanite" or something similar. It
> doesn't appear likely that the goo could wipe out all the people
> (just a very large fraction of them).
I'm sorry, but it does seem to me a bit like wishful thinking (and
reading too much SF?). I think I will call this the
go-hide-in-your-basement solution to the antiproliferation problem.
> ..., and then it would just evolve in an ordinary way...
(Unless it was designed not to evolve.)
The remarks you made seem predicated on the assumption that the
nanites will be comparable to a particularly virulent biological
plague. Suppose that this isn't true. Then the only method for
avoiding disaster in a society where there are many independent
individuals with full technological access is to have some kind of
active nanotech immune system. It seems to me that the reactions
towards higher binding energy would always have an advantage, so in
this situation there would only be two ways of maintaining the status
quo.
The first is if all the material were already very close to its
lowest energy state, so that no more reactions were economical. Does
anyone have a good design for a computer that would work under those
circumstances? (We would all be uploads then.)
The second is to have the immune system quickly eliminating any
plagues, and it could use the fact that it has access to more energy.
Aha, I just thought of a third way. The independent folks could all
live in a virtual reality that was designed so they could do no
major harm. They would have no access to the real reality, which
would be ruled by a single entity.
Subject: Re: Goo prophylaxis (was: Hanson antiproliferation method?)
Date sent: Mon, 25 Aug 1997 21:31:21
> > Why just organic life? Why not dead organic substances, earth,
> > etc.? And is there any good reason why it could not change the
> > earth's crust into something with a higher binding energy?
> Energy is the problem here. While I think a workable nanite based
> on silicates could be made, the biosphere is the major source of
> high-energy chemicals on this planet. So it is the easiest target
> and most useful; eating rocks would require so much energy that the
> growth would be slow and the threat fairly minimal. And the earth's
> crust appears to be at a very deep energy minimum, I don't see how
> you could get further without tremendous amounts of energy.
Well, as long as there exists a chemical compound with a higher
binding energy than the components have in naturally occurring rock,
doesn't this mean that the reaction would be *exothermic*, so that
the only energy we will need is for start-up? And even if each
digestive cycle were to take a longer time than for organic
substances, this would not matter much, since we are dealing with an
exponentially increasing number of parallel processes. --This is a
question to which we should already be able to give a scientific
answer today.
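To illustrate the parallelism point, a toy calculation in Python
(every number here is an illustrative assumption, not a measured
value):

    import math

    t_double = 1000.0   # assumed population doubling time, seconds
    rate = 1e-15        # assumed kg digested per nanite per second
    target = 1e15       # kg; very roughly the wet mass of the biosphere

    # With doubling, the mass digested by time t grows like
    # rate * t_double * 2**(t / t_double); solve that for t.
    t = t_double * math.log2(target / (rate * t_double))
    print(f"time to digest {target:.0e} kg: {t / 3600:.0f} hours")  # ~25 h

Because the time enters only logarithmically, the answer is almost
independent of the target mass -- which is why the digestion time can
be "a very short time" however large the volume.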
> > > more of itself. It will spread with the speed of a bacterial
> > > infection, and be quite deadly.
> > Why couldn't it spread much faster? Bacteria are limited to some
> > specific kinds of hosts; the nanites could attack any organic
> > material and many inorganic ones too.
> I based this on Drexler's calculations of replication. Bacteria are
> actually quite good replicators; thermodynamics seems to place some
> limits on replication speed at a certain energy level (and you cannot
> get much more than 1000 W/m^2 on the surface of the earth).
There are two separate issues here. Your first statement was meant
to support your claim that we could hide away from bad nanites, so I
assumed you referred to the actual speed of bacterial infections.
Your last statement says something not about the actual speed of
bacterial infections, but about the speed of a bacterial infection
in ideal circumstances --continuous medium, unlimited food etc. I
might agree that in those circumstances there might not be much
difference between a nano-plague and a biological plague; but what we
are talking about is the real world, and therein bacteria are quite
bad replicators in the sense that a new strain of bacteria won't
cover the whole earth in a matter of hours. What are the Drexlerian
calculations you refer to?
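For reference, the kind of bound presumably being alluded to can be
sketched as follows. Only the ~1000 W/m^2 solar flux is a hard
number; the synthesis cost per kilogram and the areal density are
assumed figures for illustration:

    e_per_kg = 1e7  # assumed J needed to build 1 kg of replicator
    flux = 1000.0   # W/m^2, roughly the solar flux at the surface
    sigma = 1.0     # assumed areal density of the goo layer, kg/m^2

    # A sunlight-powered layer cannot double faster than the time it
    # takes to collect the energy for its own duplicate:
    t_double = sigma * e_per_kg / flux
    print(f"minimum doubling time: {t_double / 3600:.1f} hours")  # ~2.8 h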
> > And if they were deliberately designed, they could transform
> > themselves into missiles after they had eaten enough, and then
> > swoosh across the seven seas in a very short time.
> Yes, yes. But I'm trying to discuss the immune system problem here,
> not the deliberate weapons use problem (as I pointed out in my last
> post).
The immune system problem is part of the deliberate weapons use
problem. We want an immune system that can make us safe from a
Saddam Hussein with nanotech. If we were certain that everybody would
use nanotech in a peaceful and responsible way then we would not need
to care much about the immune problem. E.g. we could simply design
all the nanomachines to be non-evolvable. (This is done by building
them in such a way that any single one or a few mutations would lead
to a non-viable machine; only a cosmic coincidence would yield
something that can reproduce.) So I'm not sure about which problem
you are trying to solve.
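A software sketch of this non-evolvability trick (a hypothetical toy
encoding; a real design would enforce the constraint in the machine's
physical structure, not in a program):

    import random, zlib

    def make_blueprint(payload: bytes) -> bytes:
        # viability is tied to a checksum over the blueprint
        return payload + zlib.crc32(payload).to_bytes(4, "big")

    def viable(blueprint: bytes) -> bool:
        payload, check = blueprint[:-4], blueprint[-4:]
        return zlib.crc32(payload).to_bytes(4, "big") == check

    bp = make_blueprint(b"assembler instructions v1")
    mutant = bytearray(bp)
    mutant[random.randrange(len(mutant))] ^= 1 << random.randrange(8)

    print(viable(bp))             # True
    print(viable(bytes(mutant)))  # False: one mutation, no offspring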
> > to have the immune system quickly eliminating any
> > plagues, and it could use the fact that it has access to more energy.
> > A good design for this?
> I have been working on a system, but it is not yet written up.
> Basically, it will depend on what you want to defend.
I'm looking forward to reading about it; can you give us a spoiler?
Is it supposed to work against designed plagues also?
> > Aha, I just thought of a third way. The independent folks could
> > all live in a virtual reality that was designed so they could do
> > no major harm. They would have no access to the real reality,
> > which would be ruled by a single entity.
> And how do you trust the independents to not figure out how to
> subvert reality in some way, or the entity to wield its power well?
My original remark was only about a possibility: if we are lucky, the
entity would be benevolent, and then this could work. However, now
that I come to think about it, it might be possible to have a stable
postnanotech libertarian society after all, contrary to what I
previously believed. Here is my idea:
The scenario assumes that many humans value freedom and independent
personal existence higher than anything else. When nanotechnology
approaches, they realise that if freedom is allowed in a world with
strong nanotech, then some mad person will certainly design the
doomsday virus. So they realise they have to give up on freedom. But
then some bright person comes up with the idea that all people upload
and that only a robot is left with the ability to operate in the real
world. The whole system is hardwired so that the robot only executes
instructions that have been agreed upon by the majority of the
uploads. In their virtual reality, the uploads can do anything they
want: each one has unlimited individual freedom. The only thing they
can't do in the virtual reality is to mass murder a lot of other
uploads (the virtual physics doesn't allow destructive nanomachines
to be built, for example). The uploads cannot influence the external
world either, except when a majority decision can be made. But for
many decisions, this should be feasible: e.g. colonising the galaxy
to provide more Lebensraum etc. One can even imagine refinements of
this scheme such that each individual would have his own robot that
he could do what he liked with; though this presupposes that the
robots could be built in such a way that nobody could use their robot
to do anything that would endanger the computer on which they all
run.
This is the only way I can think of that a very nearly
completely libertarian society, without any guardian or international
government, can exist long after the arrival of strong nanotech.
Subject: Re: Goo prophylaxis (was: Hanson antiproliferation method?)
Date sent: Mon, 25 Aug 1997 22:20:14
Eliezer S. Yudkowsky wrote:
> Nicholas Bostrom wrote:
> > > As an example, assume the worst scenario happens and an escaped
> > > badly programmed dishwashing nanite
> > This does not seem to be the worst scenario to me. The worst
> > scenario would be something deliberately built to eliminate all
> > life. (It would be even worse if it was designed to torture it.)
> Very, very true. A lot of people on this list seem to lack a
> deep-seated faith in the innate perversity of the universe. I
> shudder to think what would happen if they went up against a
> perverse Augmented human. Field mice under a...
I think I know what you mean by "the innate perversity of the
universe", but I can't think of any good way of defining or
explaining it. What would your explication be?
> > > It will spread with the speed of a bacterial
> > > infection, and be quite deadly.
> > Why couldn't it spread much faster? Bacteria are limited to some
> > specific kinds of hosts; the nanites could attack any organic
> > material and many inorganic ones too. And if they were deliberately
> > designed, they could transform themselves into missiles after they
> > had eaten enough, and then swoosh across the seven seas in a very
> > short time.
> I'd actually think that the infection would spread in multiple
> waves. The first wave might be small pellets travelling at
> hypersonic speeds, or even lightspeed computer viruses travelling to
> existing replicators. The second wave would be a softening-up wave
> that would reproduce very quickly and at high speed, taking small
> bites out of things and leaving third-wave replicators behind. The
> third wave would be immensely destructive, the actual gray goo. The
> fourth wave, if any, would assemble things out of the raw material
> thus produced.
> Note that these don't need to be different types of replicator. Each
> "wave" could be a different mode of action, evoked by circumstances.
Yes. It should be possible to model the first two waves
mathematically. You have a roomful of nodes, and one node is the
starting node. The starting node emits colonizers. When a colonizer
arrives at a node, that node begins to emit colonizers too, after a
certain delay time. Each colonizer can be sent to any node. Which
colonizers do you send to which nodes in order to colonize all nodes
in the shortest possible time?
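Here is a toy simulation of that question in Python (all parameters
are hypothetical, and the greedy rule -- each newly active node
targets its nearest unclaimed neighbours -- is only a heuristic, not
a provably optimal schedule):

    import heapq, math, random

    random.seed(0)
    N, DELAY = 200, 0.5  # node count and activation delay: assumptions
    nodes = [(random.random(), random.random()) for _ in range(N)]

    def dist(i, j):
        return math.hypot(nodes[i][0] - nodes[j][0],
                          nodes[i][1] - nodes[j][1])

    arrival = {0: 0.0}   # node -> time a colonizer reaches it
    claimed = {0}
    events = [(0.0, 0)]  # (time a node becomes active, node)
    while len(arrival) < N:
        t, src = heapq.heappop(events)
        # each active node launches two colonizers (a simplification)
        targets = sorted((dist(src, j), j)
                         for j in range(N) if j not in claimed)
        for d, j in targets[:2]:
            claimed.add(j)
            arrival[j] = t + d
            heapq.heappush(events, (t + d + DELAY, j))

    print(f"all {N} nodes colonized by t = {max(arrival.values()):.2f}")

Since each active node recruits others, the colonized set grows
geometrically, so even the crude greedy schedule finishes in time
roughly logarithmic in the number of nodes.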
> > that the nanites weren't deliberately designed to pile up
> > explosives on top of their bunker and blow it all away. --Yes,
> > they *could* make it, at least in a Hollywood movie...
> I agree, except that they'll be using nukes, not ordinary
> explosives. Or the nanites could surround the entire compound, lift
> it into space, and toss it into the sun.
Maybe nukes, but that presupposes that they have enough intelligence
to do uranium mining and to put together a warhead. Chemical
explosives would be easier to have them manufacture if you couldn't
give them superintelligence. --Tossing it into the sun seems a bit
farfetched and unnecessary.
> "Who will guard the guardians?" - maybe nanotechnology would give
us a perfect
> lie detector. Nanotechnology in everyone's hands would be just like
giving
> every single human a complete set of "launch" buttons for the world's
nuclear
> weapons. Like it or not, nanotechnology cannot be widely and freely
> distributed or it will end in holocaust.
Yes, yes. At least in the absence of a working immune system; but we
have doubts about whether such a thing is possible. Do you have any
concrete reason why it could not work, though?
> Nanotechnology will be controlled by
> a single entity or a small group... just as nuclear weapons are today.
Right. Though see my Safe Libertarian Future scenario in my reply to
Anders. Would you agree that it would be a stable state? (I don't
claim that it is likely to happen.)
> If that entity is benevolent and Libertarian, utility nanites would
> be released as they were programmed - to eliminate hunger,
> starvation, old age, death, etc. The world would remain much the
> same, except most forms of physically based pain and coercion would
> be eliminated. Other utilities might be more flexible. No utility
> will give access to the forbidden molecular level, but many might
> give access to higher levels. People might be able to edit their
> synapses or their tissue-level body structure. (The former scenario
> might result in Singularity in fairly short order.)
Yes. Never forget to mention the psychoactive drugs that will be
available.
> If that entity is malevolent, immediate and indiscriminate use of
> nuclear weapons would be free humanity's only hope of survival.
> Humanity can survive nuclear war and fallout. It cannot survive
> molecular warfare.
That's unfortunately the way it looks.
Subject: Re: Goo prophylaxis (was: Hanson antiproliferation method?)
Date sent: Mon, 25 Aug 1997 22:26:13
> Remember that we humans consistently overestimate the risks
> of huge disasters and underestimate small, common disasters, and that
> fear is the best way of making people flock to an "obvious solution",
> especially if it is nicely authoritarian.
You know, I think so too, but I think that is only 2/3 of the truth.
We underestimate small, common disasters, overestimate the risks of
huge disasters, and we underestimate the risks of absolutely enormous
disasters. Or we put them in the same category as the huge disasters
without realising that they may be millions of times worse.
Subject: Re: Goo prophylaxis (was: Hanson antiproliferation method?)
Date sent: Tue, 26 Aug 1997 12:06:51
> > We underestimate small, common disasters, overestimate the risks
> > of huge disasters, and we underestimate the risks of absolutely
> > enormous disasters. Or we put them in the same category as the
> > huge disasters without realising that they may be millions of
> > times worse.
> Hmm, you mean like the risk for dinosaur-killer asteroid impacts
> or vacuum decay? They seem to belong in the category of disasters
> that are so absurdly devastating that they do not appear real; we
> can relate to nuclear war and plagues of nanites, but not the truly
> enormous ones.
No, nuclear war and especially plagues of nanites belong to the
third category:
1. Small, common disasters: car accidents
2. "Huge" disasters: Chernobyl, major earthquakes
3. Enormous disasters: grey goo, all-out nuclear war
There is more difference between 2 and 3 than between 1 and 2.
I think people would do well to pay more attention to the dangers of
killer asteroids and vacuum decay, if it weren't for the fact that
there are much more probable disasters in category 3 that they
should worry about first.
Subject: Re: Goo prophylaxis
Date sent: Tue, 26 Aug 1997 12:06:03
[Eliezer Yudkowsky wrote:]
> > How much time would it take for a nanomachine to construct a
> > nuclear weapon? I think we can assume that nano is at least as
> > destructive as nuke.
> Numbers, please. It is easy to claim something like this, but is it
> true?
> Building a nuke: you need around 10 kilograms of uranium-235 (or
> whatever isotope it was). There is around 2 grams uranium / tonne in
> the crust of the earth, of which 0.72% is U235, so to get 10 kg you
> need to process around 7000 tonnes of crust. I'm not sure how much
> energy is required to reduce the UO2 to pure U, but it is a
> noticeable amount (is there a chemist in the house?). Assuming the
> nanites cover a large patch with solar collectors, they can get
> around 500 W/m^2, which has to cover their replication, search
> through the crust, reduction, isotope separation and return to the
> "base"; how much energy this is is a bit hard to tell right now
> (it is 1.30 in the morning here :-), but it looks like it will take
> a while for the bomb-mold to blow up. A wild guess would be
> around a...
Even so, the nanites could cover continents with construction sites
so you couldn't bomb them all out. Dynamite should be much easier and
cheaper to make.
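Redoing the quoted uranium arithmetic as a sanity check (the
abundance figures are standard textbook values; note that the result
comes out around a hundred times larger than the 7000 tonnes quoted
above):

    u_per_tonne = 2e-6      # tonnes of uranium per tonne of crust (2 g/t)
    u235_fraction = 0.0072  # natural isotopic abundance of U-235
    needed = 10e-3          # tonnes of U-235 (10 kg)

    crust = needed / (u_per_tonne * u235_fraction)
    print(f"crust to process: {crust:,.0f} tonnes")  # ~694,000 tonnes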
> Doing the same work as a nuke with nanites (i.e. disassembling
> everything within a few hundred meters and blasting everything
> within a few kilometers) is rather tricky, since it is extremely
> energy intensive. You need plenty of energy to do the disassembly
> (essentially you have to break most molecular bonds).
As I asked in an earlier posting, wouldn't you *gain* energy if you
then reassembled them into compounds with higher binding energy?
> it is IMHO clear that we should not be overly worried about
> nano-built nuclear weapons but rather nano-ebola.
I think Eliezer was trying to establish a lower bound on the
destructive capability. The reason I began to talk about dynamite was
that it would mean that we can be certain that the bunkers you
suggested wouldn't work against a designed killer nanite.
> > Our immune systems are unimaginably more sophisticated than a
> > virus or a bacterium, using controlled evolution to combat natural
> > evolution. And yet we still suffer from colds and diseases. The
> > only reason that the viruses haven't killed us outright is that it
> > isn't good strategy.
> Exactly. So the major question is: is it possible to create a nanite
> infection that is deadly (or subtle) enough to wipe out all competition?
> Don't reflexively answer 'yes' to it, try to give a considered
> answer of why it is likely (or why not).
Yes, if it is the first nanite infection, there would be no
competition to compete with! (Biological organisms compete in a lower
division.)
> > It is easier to destroy than create!
> You repeat this as a mantra. And of course you have the second law
> of thermodynamics on your side. The problem is that you do not
> attempt to make quantitative comparisons between the strengths of
> different systems, and instead rely on plausible-sounding arguments.
> That is definitely *not* a good strategy if you are trying to
> discuss something important where we do need a well planned policy.
I agree. We need to try to spell out arguments carefully.
> let's try a simple sketch to see how easy it is to vanquish a
> designed plague.
That's the spirit we like on this list!
> The body is surrounded by an inert skin (say diamond); attempts to
> physically breach it can be detected from the inside and the
> surrounding region sealed. The rest of the organism (could be a
> transhuman, factory or a city) is compartmentalized with similar
> walls; infected sections can be sealed off and sacrificed.
Well, if the organism is in a free environment, then the plague would
attack all parts of the skin simultaneously. The whole skin would
thus have to be shed. The plague would immediately attack again, and
soon the organism would run out of resources. Alternatively, the
plague could build explosives and blast your organism. So it seems
that your organism could not be a transhuman, factory or a city, but
would have to be the whole planet.
> Immune devices move around, interrogating "cells"
> (subsystems) by comparing their surface markings with allowed types
> (this list can be kept secret from someone who disassembles a device
> by using a trapdoor function), and occasionally disassembling the
> cell to check its innards. Other immune devices check the general
> activity, looking for deviations from the normal state.
And if the organism is the whole planet, then this would of course
be equivalent to a totalitarian state. (Deviations from the normal
state = activities of some individual that the state does not approve
of, even if that individual hasn't harmed anybody yet.)
It is still interesting to see where this will lead. This thread
continues to produce an extraordinary number of interesting comments!
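The quoted idea of surface markings checked via a trapdoor function
can be illustrated with ordinary one-way hashes (hypothetical names;
a sketch, not a design). The immune device stores only digests of the
allowed markings, so disassembling the device reveals nothing that
lets an attacker forge a valid marking:

    import hashlib

    def digest(marking: bytes) -> bytes:
        return hashlib.sha256(marking).digest()

    # the device carries only one-way digests of the allowed markings
    allowed = {digest(m) for m in (b"repair-cell-v1", b"courier-v3")}

    def interrogate(cell_marking: bytes) -> bool:
        return digest(cell_marking) in allowed

    print(interrogate(b"repair-cell-v1"))  # True
    print(interrogate(b"rogue-goo"))       # False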
Subject: Re: Goo prophylaxis (was: Hanson antiproliferation method?)
Date sent: Tue, 26 Aug 1997 12:07:38
Eliezer S. Yudkowsky wrote:
> Our immune systems are unimaginably more sophisticated than a virus
> or a bacterium, using controlled evolution to combat natural
> evolution. And yet we still suffer from colds and diseases. The only
> reason that the viruses haven't killed us outright is that it isn't
> good strategy.
Very good point about the biological immune system analogy.
> It is easier to destroy than create!
> Well... I'm not competent to estimate the percentage of the
> population with the genius and expertise to design death goo. The
> "twisted" part can pretty much be taken for granted. And I truly
> don't think death goo would be that hard to design. If any human is
> even capable of designing an immune system, then the average
> educated person will be capable of breaking it, given time and
> effort. Any twisted genius will go through it like tissue paper. The
> situation will be pretty much the same with nanotechnology... except
> that a first strike will have a different probability of succeeding.
> If that probability is high enough, MAD won't work and nano should
> be confined to a single group.
I agree. This single group could be democratic, though.
Subject: Re: Goo prophylaxis (was: Hanson antiproliferation method?)
Date sent: Tue, 26 Aug 1997 12:42:09
> Barring nuclear destruction, it is not clear that gray goo will win
> the battle. Gray goo is not interested solely in destruction. Rather,
> it is a replicator like everyone else; it seeks to preserve its own
> structure and function, it seeks to reproduce, it seeks to protect
> itself. It must do these things in order to survive.
I prefer to use the term grey goo to denote whatever nanites cause
indiscriminate destruction. But anyway, nanites could be designed to
be interested solely in destruction.
> gray goo must replicate in order to be effective. A single gray goo
> disassembler will not cause much damage. And whatever mechanisms it
> uses to replicate will be vulnerable to attack just as much as the
> systems which it is trying to "eat".
Yes. The goo might fight a downhill battle, though, towards a lower
energy state? It would be interesting to think about this in more
detail.
> My prediction would be a band of "war zones", where the battle rages,
> with surges as one side or the other gets a local advantage. Between
> these zones would be relatively stable regions, dominated by cooperating
> replicators. But the border shifts, and occasionally a stable
> region is overrun.
My intuitions are exactly the opposite. Your prediction seems to
presuppose that the first nanopower won't obtain world dominion, an
assumption I find very dubious. But even if we disregard the genesis
problem, I still doubt that the situation you describe would
be stable, though I don't have any short explanation of why I think
so.
Subject: Re: Goo prophylaxis (was: Hanson antiproliferation method?)
Date sent: Wed, 27 Aug 1997 11:31:31
> > Safe Libertarian Future:
> > The scenario assumes that many humans value freedom and independent
> > personal existence higher than anything else. When nanotechnology
> > approaches, they realise that if freedom is allowed in a world with
> > strong nanotech, then some mad person will certainly design the
> > doomsday virus. So they realise they have to give up on freedom.
> > But then some bright person comes up with the idea that all people
> > upload and that only a robot is left with the ability to operate
> > in the real world. The whole system is hardwired so that the robot
> > only executes instructions that have been agreed upon by the
> > majority of the uploads. In their virtual reality, the uploads can
> > do anything they want: each one has unlimited individual freedom.
> > The only thing they can't do in the virtual reality is to mass
> > murder a lot of other uploads (the virtual physics doesn't allow
> > destructive nanomachines to be built, for example). The uploads
> > cannot influence the external world either, except when a majority
> > decision can be made. But for many decisions, this should be
> > feasible: e.g. colonising the galaxy to provide more Lebensraum
> > etc. One can even imagine refinements of this scheme such that
> > each individual would have his own robot that he could do what he
> > liked with; though this presupposes that the robots could be built
> > in such a way that nobody could use their robot to do anything
> > that would endanger the computer on which they all run.
> > This is the only way I can think of that a very nearly
> > completely libertarian society, without any guardian or
> > international government, can exist long after the arrival of
> > strong nanotech.
> There will always be plenty of people who'll refuse to get uploaded
> into some virtual reality asylum; what about them? Would they be
> forced to upload (upload or die!)?
> IMHO a democratic system like the one above is almost by definition
> a severe handicap in case of a conflict with "free" outsiders,
> because while the democrats are busy debating and voting, the enemy
> has already launched his proton torpedoes.
There is a good chance we will never be attacked by aliens. And if
we were, then those aliens would surely be smart enough not to attack
us unless they were certain that they could easily win whatever we
did. If alien invasion really were an issue, we could develop an
automated missile launch system or something like that.
> What would happen if someone trashed the robot (the only outside
> link), would the VRs be trapped in their "dreamworld" forever?
Well, I spoke of "the robot" figuratively. In reality this would
consist of millions of von Neumann probes expanding our computer in
all directions.
> Anyway, I think the only way you can stay reasonably free *and*
> safe is when everybody (possibly in small like-minded groups) leaves
> earth and goes in different directions. A 1,000,000 light-years or
> so seems (with the current laws of physics) like a pretty safe
> barrier.
Well, then you would agree that we need some temporary accommodation
for the next million years or so.
Subject: Re: Goo prophylaxis
Date sent: Wed, 27 Aug 1997 11:36:31
> On Aug 26, 12:25pm, "Nicholas Bostrom" wrote:
> } Anders Sandberg writes:
> } > isotope it was). There is around 2 grams uranium / tonne in the
> } > crust of the earth, of which 0.72% is U235, so to get 10 kg you
> } > need to process around 7000 tonnes of crust. I'm not sure how
> } > much energy is required to reduce the UO2 to pure U, but it is a
> } > noticeable amount (is there a chemist in the house?)
> } Even so, the nanites could cover continents with construction
> } sites so you couldn't bomb them all out. Dynamite should be much
> } easier and cheaper to make.
> What the hell are these nanites living on?!
Sunlight or chemical binding energy, for example.
> Their very life requires
> energy, not to mention, as Anders noted, the cost of trying to
> concentrate extremely diffuse and oxygen-bonded uranium.
Use dynamite then. But energy wouldn't be a problem.
> collection will take lots of area, be noticeable, and be exposed.
> Shade it, dust it, bomb it.
As I said, if it covers a whole continent you can't do that.
> And they're not living off of rock. It's hard to get lower energy
> states than found in a lot of rock without using nuclear processes.
> That's why aluminum mining is so expensive.
With nanotech, we might be able to catalyse arbitrary processes and
gain energy, as long as there is a net increase in chemical binding
energy.
> I challenge this "lower division" assumption. Antibodies can't gum
> up the works of nanites; phagocytic cells can't enclose and dissolve
> them? Hydrogen peroxide and free radicals are popular weapons.
> Oxidizing chemicals vs. small pieces of pure carbon; I bet on the
> white blood cell.
Of course, nanomachines can do everything biological cells can, since
these are a special kind of nanomachine; but they will be able to
do much more, since they can use all parts of design space, not only
that little corner that was available to evolution.
> And is the energy state of diamondoid material higher than that of
> organic material? Probably, in which case this gray goo plague needs
> constant input or can only grow by processing lots of material --
> which means that it grows very slowly.
No, in that case the grey plague would not turn everything into
diamondoid material but into something in a lower energy state than
organic material (which is in a very high energy state: that's
part of the reason why we eat beef and carrots instead of stones).
> } Well, if the organism is in a free environment, then the plague
> } would attack all parts of the skin simultaneously. The whole skin
> } would thus have to be shed. The plague would immediately attack
> } again, and...
> Wait, diamond nanites are attacking diamond skin? If there's so much
> energy for the plague to live on, obviously your outer defense
> shouldn't be a hard shell, it should be a friendly counter-plague,
> like the friendly flora living on our mucous membranes.
Yes, but even that would not save you long. You would run out of
resources. The only way is to go out there and conquer the world
before anyone else does it.
> } soon the organism would run out of resources. Alternatively, the
> } plague could build explosives and blast your organism. So it seems
> Damn complex plague, building explosives in a coordinated manner.
> You sure it doesn't have internal communication lines that can be
> attacked?
But those communication lines are on the outside of your organism.
You would need an immune system that extends all over the place and
that does not allow any significant competition to arise anywhere.
That's what I'm saying. Your castle is not safe.
Subject: Re: Goo prophylaxis
Date sent: Wed, 27 Aug 1997 13:37:27
> But where does the energy in the explosives come from? Remember
> conservation of energy - when you make explosives you have to add
> energy besides the chemical energy of the raw material to make the
> energetic but unstable TNT molecules (electrical energy -> chemical
> energy). The same is true for the nanites - if they want to turn a
> lawn into a bomb, they need more energy than will be released in the
> eventual blast.
However, they can turn a part of the lawn into a bomb by
digesting the other part of the lawn into something with a higher
binding energy, using the released energy to synthesize the
explosive.
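Some rough bookkeeping for this point, using handbook energy
densities (the 50% conversion efficiency is a pure assumption, and
this ignores the synthesis overhead the quoted post emphasizes -- it
only bounds the energy available):

    wood = 16e6       # J/kg, rough heat of combustion of dry biomass
    tnt = 4.2e6       # J/kg, energy content of TNT
    efficiency = 0.5  # assumed fraction of digestion energy captured

    lawn_kg = 100.0
    tnt_kg = lawn_kg * wood * efficiency / tnt
    print(f"{lawn_kg:.0f} kg of digested lawn ~ {tnt_kg:.0f} kg of TNT")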
[Eliezer Yudkowsky wrote:]
> > It could go either way. Maybe the bad guys kidnap all the
> > scientists and then use pain-center stimulation to achieve faster
> > results. Idealism may win battles and break ties, but it is not a
> > defense to be trusted.
> Or maybe the good guys spread a nanite which gives all bad guys
> migraine when they think evil thoughts. Get real, this sort of
> unrealistic Hollywood speculation is the last thing we need ("... if
> you do that, I'll send in my *dragons*!" "Ha! I have an army of
> invisible pink unicorns that will...")
:-) Of course this talk about kidnapping is silly, but I don't think
that speculation about a nanite that subtly changes people's minds is
necessarily so.
> It seems so. But as others have pointed out, your [Eliezer's] claim
> that destruction is fundamentally more efficient than creation
> suggests that if I know about your goo I can destroy it.
You would have to destroy it before it destroys you. The point is
that offense would have an enormous advantage over defense. This
means that somebody makes the first strike, and he conquers the
world. So there will only be one power in existence after the arrival
of strong nanotech (modulo some reservations that I have explained
in earlier postings).
> they would need energy to breach the walls if they are inert, and
> it would be exothermic for the defenders to thicken their walls. If
> they turned the surrounding compartments into a "diamondoid scar"
> the energy in the infected compartment would not be enough to breach
> the barrier, and the whole infection would require a lot of external
> support (which could be broken).
What about a tactic whereby the surrounded invader uses what little
energy he has to make a hole in the diamondoid wall? Then he enters
the hole and lets the carbon atoms fall into their places again
(forming diamondoid) behind him, thus regaining some fraction of the
energy he spent removing them, so he can dig further in etc. I
suppose he would inevitably lose some energy to thermal
vibrations, but how much? If the losses could be made small enough,
he would cut through the diamondoid as easily as a glowing knife cuts
through butter.
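A rough energy budget for this dig-and-reseal tactic (the cohesive
energy and atomic density of diamond are standard figures; the 10%
loss fraction is a pure assumption):

    EV = 1.602e-19         # joules per electronvolt
    e_cohesive = 7.4 * EV  # J to detach one carbon atom from diamond
    loss = 0.10            # assumed fraction not recovered on resealing
    density = 1.76e29      # carbon atoms per m^3 of diamond

    # a channel of 1 square micron cross-section, 1 metre deep
    atoms = density * (1e-6 ** 2) * 1.0
    print(f"net energy dissipated: {atoms * e_cohesive * loss:.2f} J")

With these assumptions the whole metre-deep channel costs only about
0.02 J net, so the glowing-knife picture is not absurd if the losses
really can be kept that small.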
> > Nanosystems are always faced with "destruction by induction".
> > That is, you can always destroy one cell; therefore you can
> > destroy the whole thing. To defend against this, it is required
> > that the system expand faster than the destruction OR that it be
> > impossible to destroy one cell.
> Or that you can make the loss of a finite number of cells bearable.
> If their loss removes the threat (for example by forming a
> nanoscar), then the induction step fails.
A nanoscar would not help if its cells could be destroyed one by
one.
> While I believe in active shields, space is IMHO the only
> really proven form of defense against goo.
Anders, I think it could be useful if you would clarify your position
on active shields. It seems to me that all that your arguments
attempt to support is the thesis that:
(A) "IF we can ignore the problem of genesis, and IF we can ignore
nanofacilitated macroscopic warfare, and IF we assume that the enemy
doesn't make use of a large part of the resources outside our castle
but voluntarily remains the same size as us, THEN a stable situation
with individual agents each with their own active shields is
possible."
Eliezer and I are saying that you can't ignore the problem of genesis,
you have to consider macroscopic warfare, and the enemy is likely
to make use of any resources he can get hold of. It can be very
interesting to discuss (A), but it should be made clear that even if
(A) is true, the conclusion about the plausibility of active shields
rests on assumptions that will almost certainly not obtain.
Subject: Re: Goo prophylaxis
Date sent: Wed, 27 Aug 1997 22:03:49
The Low Golden Willow wrote:
> As Anders said, dynamite takes energy too.
Certainly, but the amount of chemical energy contained in a
medium-sized forest, for example, is enormous. The cost of a few
tonnes of TNT to a nanopower is like the cost of a Mars bar.
> } > collection will take lots of area, be noticeable, and be
> } > exposed. Shade it, dust it, bomb it.
> } As I said, if it covers a whole continent you can't do that.
> Doesn't anyone think in terms of processes? How did it get to cover
> a whole continent without anyone noticing?
By getting up very early in the morning, before anyone else was
awake. But seriously, the point is that it could happen so fast that
nobody has time to do anything about it before it's too big to be
obliterated with bombs etc. But I do agree that it's important to
think in terms of processes (i.e. not forgetting the genesis problem
--how could anything *become* that way?).
> A nanite invading an organism
> is a machine floating in an aqueous environment being glommed by
> antibodies as well as swallowed and exposed to highly oxidizing
> chemicals. "It's easier to destroy than create."
That organism would itself be floating in an aqueous environment
being glommed by nanites from all directions, unless your immune
system were global to begin with...
> how is the current world going to turn into isolated
> castles in a sea of goo?
By the nanites eating up everything that is not protected by a global
nano-immune system; we assumed that there was no such system.
> } But those communication lines are on the outside of your organism.
> } You would need an immune system that extends all over the place
> } and...
> No, you need artillery.
Are you proposing that you are going to sit on the roof of your
bunker and hold the nanites back with a cannon?
Subject: Re: Goo prophylaxis
Date sent: Wed, 27 Aug 1997 22:47:40
> At 03:58 PM 8/26/97 +0000, Nicholas Bostrom wrote:
> >My intuitions are exactly the opposite. Your prediction seems to
> >presuppose that the first nanopower won't obtain world dominion,
an
> >assumption I find very dubious.
> engineering will suddenly leap from one end of the scale to another. All
> the plans I've seen and heard for getting to the first assembler produce an
> assembler that is far from diamondoid. The hope is to use that to produce a
> slightly better assembler, which will be used to produce the next. The
> stiffer the material of which the nanotech is made, the more precise an
> assembler can be, and the more tightly bonded the materials it can
> assemble. More tightly bonded materials are stiffer, so it takes a stiff
> assembler to make a stiffer one.
> Each of these stages is very valuable in its own right, and will open up a
> range of new possibilities. And each of them will require thousands of
> genius-years to bootstrap to the next level. Nobody will get from the first
> to the last level alone and in secret. By the time someone develops
> diamondoid weapons, the world will have years of experience in fighting with
> weapons made out of kevlaroid, quartzoid and sapphireoid.
Interesting. There is no easy way to tell whether you are right. The
maturation time of nanotechnology is certainly an important parameter
in predicting the future. There are two things I could take issue
with. (1) Will nanotech mature slowly? And (2), given slow
maturation, will that mean that a multipolar world order can remain
stable?
Why exactly do you think every stage will take thousands of
genius years? (I presume you mean good-scientist years?) Isn't the
design work fairly tractable (Drexler has already produced some nice
designs)? Isn't it mainly the lack of molecular tools that prevents us
from starting to build things? Better CAM would help a lot, and it is
As you point out, each partial achievement would bring great benefits
to the power that makes it, so wouldn't this mean that it would have
a good chance of pushing further ahead, leaving the competition
behind?
If superintelligence (that could perform a thousand genius years in a
short time) comes before nanotech, or is developed at an early stage
of nanotech, then the bottleneck would almost certainly be the
hardware, the molecular tools, and in such a case the maturation
process would be almost instantaneous.
I believe the answer to the second question is No. I think there
would either be a negotiated merger, or the stronger power would
obliterate the weaker, and then immediately rebuild itself, and then
expand spherically at a good fraction of the speed of light. A Yes
answer to (2) would presuppose that a negotiated merger isn't made,
and that neither power can know with high probability that it is
the stronger, or that the stronger power has some absolute ethical
prohibition against attacking its weaker rival.
Subject: Re: Goo prophylaxis
Date sent: Wed, 27 Aug 1997 23:06:29
We need to answer the following questions:
Can a nanite colony produce a net energy output by eating
organic material/earth crust?
What is the best upper bound on the time it would take to digest a
given large volume of the substance?
We do not require that the end product is diamond, just that the
nanites can extract net energy from the material. My preliminary
guess is "Yes" to the first question (both wood and earth crust), and
"A very short time." to the second (because the time should be largely
independent of the volume, since the nanites can multiply and divide
the task among themselves).
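To see why the digestion time is nearly independent of the volume,
consider a toy calculation (a Python sketch; every parameter is a
made-up placeholder, not an estimate of real nanite performance):

    import math

    def time_to_digest(volume_m3, density_kg_m3=1000.0, seed_kg=1e-6,
                       doubling_time_h=1.0, eat_rate_per_kg=1.0):
        """Hours for a tiny seed of replicators to consume a volume.

        Growth is exponential until the substrate runs out, so the
        answer grows only logarithmically with the volume.
        """
        total_kg = volume_m3 * density_kg_m3
        # Doublings needed before the replicator population can eat
        # the whole remaining mass within one doubling period:
        doublings = math.log2(
            total_kg / (seed_kg * eat_rate_per_kg * doubling_time_h))
        return max(doublings, 0.0) * doubling_time_h

    for v in (1.0, 1e9, 1e18):   # a barrel, a lake, a continent's crust
        print("%8.0e m^3 -> %5.1f hours" % (v, time_to_digest(v)))

With these (invented) numbers, a billion-fold increase in volume adds
only about thirty extra doubling times.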
> At 03:07 PM 8/27/97 +0200, Anders Sandberg wrote:
> >Could we get some numbers here? What is the average enthalpy of
> >organic stuff compared to diamond?
> I don't have the numbers handy, but I know the relative magnitudes. These
> bonds have about the same energy:
> These are much lower in energy:
Subject: Re: Goo prophylaxis (was: Hanson antiproliferation method?)
Date sent: Wed, 27 Aug 1997 23:45:42
> The only difference today is that your
> preferred scenario could come to pass and one group could already be
> powerful enough to prevent competitive research.
Yes, that is one of the main reasons. Another is that if one
nanopower is stronger than another, then it can eliminate it and
easily repair all the damage it sustains in the war.
>Nicholas seems to be on the side of the Borg, whereas personally I'm
>for the Armadillos; I can think of few things worse than being the
>only entity in the universe.
No, I have not sided with anyone. I have only discussed the factual
question of what is likely to happen, not what would be nicest if it
happened. I think it's important not to let our desires influence the
way we estimate which scenario is most probable.
Subject: Re: Goo prophylaxis
Date sent: Thu, 28 Aug 1997 11:09:12
The Low Golden Willow wrote:
> On Aug 27, 11:07pm, "Nicholas Bostrom" wrote:
> Hal's right; we are having a problem agreeing on scenarios. I don't
> associate "gray goo" with "nanopower". I've been mostly assuming
gray
> goo is disassmblers run amok. If not, how is the source controlling
and
> protecting itself from the nanites? (How are the nanites not eating
> themselves?) Plenty of room for perversion, here.
I agree with your agreement with Hal, that we have a problem agreeing
on scenarios. As I have explained earlier, I don't think that the
problem of accidental goo is very interesting, for we can design away
the possibility of mutation. I am therefore focusing on deliberately
designed destruction-goo; in the simplest case we can assume a
religious fanatic who wants to destroy the world. The basic issue
has been whether a private (non-global) immune system can deal with
such goo.
> } > } As I said, if it covers a whole continent you can't do that.
> } By getting up very early in the morning, before anyone else was
> } awake. But seriously, the point is that it could happen so fast that
> } nobody has time to do anything about it before it's too big to be
> I don't think they can spread that fast.
A low tech scenario, which would not require any sophisticated
mobility in the nanites themselves, would be that somebody sprays them
over a forest from a small aeroplane during the night. Most people
would not want to do that, of course, but it suffices that there is
one such individual in the world and then we are all dead.
> And if you accept immune systems
> fighting nanites then you can't assume the organism is in a sea of goo,
> because to make that sea the nanites would have had to eat lots of other
> organisms with immune systems.
No, they could eat dead organic material, or inorganic material.
> Personally I'm still suspicious of this conception of nanites. Little
> atomic manipulators made of a single type of material
No, nanotechnology makes use of many types of materials.
> capable than whole slews of complex enzymes, often dependent on
> different transition metals, crawling around in a vast variety of
> environments, and being able to take over the world in their first
No, not in their first generation. Even when nanotech itself is
mature, the "little atomic manipulators" have to multiply themselves
many times over before they reach macroscopic quantities.
Subject: Re: Goo prophylaxis
Date sent: Thu, 28 Aug 1997 11:20:18
> > Nicholas seems to be on the side of the Borg, whereas
> > personally I'm for the Armadillos; I can think of few things worse
> > than being the only entity in the universe.
> This wasn't my take on Nicholas' scenario. Everyone lives in VR, and
> nobody is allowed out into the physical world. So life would indeed have
> some limitations. But this is far from a Borg collective. The individual
> lives of people within the VR could be as diverse and varied as under any
Yes, that was what I had in mind.
> Actually, Nicholas' scenario could be adjusted to allow people to "go
> native" and live out in the real world, as long as they were restricted
> from access to technology which could threaten the computers running the
> VR where everybody lives. A few die-hard realists living on a south seas
> island (who insist on walking on *real sand*, not the VR stuff that just
> *seems* real) wouldn't bother anyone.
> Still, Nicholas' idea can hardly be considered consistent with principles
> of non-coercion, especially in the early days when people are rounded up to
> be vaporized (and uploaded).
Yes, there would be this one point of violation of individual
rights. As we all should know, the fit between ideology and reality
is seldom perfect; and it's not the latter's fault but the former's.
Subject: Re: Re: Goo prophylaxis
Date sent: Thu, 28 Aug 1997 19:48:56
> In a message dated 97-08-28 04:51:03 EDT, you write:
> << or the stronger power would
> obliterate the weaker, and then immediately rebuild itself, and then
> expand spherically at a good fraction of the speed of light. A Yes
> answer to (2) would presuppose that a negotiated merger isn't made,
> and that neither power can know with high probability that it is
> the stronger, or that the stronger power has some absolute ethical
> prohibition against attacking its weaker rival. >>
> Interesting...but I've got some problems with the whole argument.
> Why would a nanotech society be interested in conquest and expansion? All
> the traditional reasons for such are presently evaporating.
Well, to begin with we have Malthus: Darwinian pressure to fill the
available ecological space, etc. Personally, I don't think that these
arguments are as strong as many think when they are applied to a
post-transition world, but they tend to persuade people. The way I
prefer to think about it is in terms of a superintelligence who
attempts to maximize what it thinks of as physical value-structures;
i.e. it wants to organize as much matter as possible in the way that
it thinks has most value. Except for possible strategic or ethical
reasons, it makes no difference whether the matter is virgin
territory or whether some other computer has already organized the matter
in a sub-optimal way.
Subject: Re: Goo prophylaxis
Date sent: Fri, 29 Aug 1997 01:45:46
> >design work fairly tractable (Drexler has already produced some nice
> >designs)? Isn't it mainly the lack of molecular tools that prevents us
> >from starting to build things? Better CAM would help a lot, and it is
> (1) The minimum self-reproducing device (mycoplasma genitalium) seems to
> require about a million bits of information. I don't think we'll be able to
> get much smaller than that with our artificial equivalents. That's about as
> much information as is embodied in a car or medium-size piece of software.
> To develop from having no experience in automotive technology whatever to
> the point where you can build a reasonably effective car took at least
> thousands of genius-years. Ditto software. The first piece of software
> that I'm aware of that was over a million bits long was OS/360, developed in
> 1964, at a cost of 5000 man-years, to say nothing of all the research that
> it took to bring software technology to the point where they could even
There is a difference between the three systems you mention, on the
one hand, and a nano self-replicator on the other. Mycoplasma
genitalium, automotive vehicles and commercial software are all
required to be fairly optimized. To build an optimised nano
self-reproducing device would be much harder than simply to make
something useful that can replicate. For example, a universal Turing
machine has been constructed in Conway's Life world. The entity is
very big and it was hard, but nothing near a thousand-genius-year
task, to do it. The feasibility stems from the fact that you have
identical components that you can put together into bigger identical
components, and so on, and at each step you need only consider the
apparatus at a certain level of abstraction. If this is the right
analogy for nanotech, then the design work would seem tractable, once
the right tools are there. But I will take your opinion on this
issue into account in my future thinking. And debugging is also a
complication.
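For those who have not played with the Life world: its entire physics
is the single update rule below, which is why design questions there
are purely mathematical (a minimal Python sketch; the five-cell
pattern is the standard glider):

    from collections import Counter

    def life_step(live):
        """One generation of Conway's Life; 'live' is a set of (x, y) cells."""
        # Count live neighbours of every cell adjacent to a live cell.
        counts = Counter((x + dx, y + dy)
                         for (x, y) in live
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        # Birth with exactly 3 neighbours; survival with 2 or 3.
        return {c for c, n in counts.items()
                if n == 3 or (n == 2 and c in live)}

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    cells = glider
    for _ in range(4):
        cells = life_step(cells)
    # After four steps the glider reappears shifted one cell diagonally:
    print(cells == {(x + 1, y + 1) for (x, y) in glider})   # True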
> (3) Drexler and Merkle are two very (very!) smart guys. They have labored
> for years, and designed what? Some bearings, a transmission, a PLA, and a
> Stewart platform? And you claim this shows how easy it is? I'd hate to see
> an engineering task you considered hard! If we keep working at this rate,
> we should see a complete assembler in around a millennium.
Those are again optimized designs. What about the rod logic
computer? And didn't Drexler give it as an exercise to his students
to design an autoreplicator when he taught a course at some
university? I am not sure about how detailed these designs are,
though. (Could anybody tell us this, please?)
> > As you point out, each partial achievement would bring great benefits
> >to the power that makes it, so wouldn't this mean that it would have
> >a good chance of pushing further ahead, leaving the competition
> This assumes that it is possible to acquire the massive resources needed to
> develop the next generation without letting anyone know how the current
> generation works, or even that it works. I would suggest that this is
> impossible, since the resources have to be gathered either by selling
> products or extortion by threats of violence. Both of these are highly
> visible.
If the early generations of nanotech can be used to build things,
those things can be sold without giving away details about how they
are made. Or if the early generations can be used to build better
computers, then the military might benefit from that while keeping
the technology secret.
>As soon as the world realizes that there is massive
> power to be had, everyone will work like crazy to catch up.
But the leading power will work like crazy to keep the lead. If they
all work equally hard, the one that starts out with an advantage
should get to the goal first. The main point we are discussing is
whether the other powers would by then have obtained enough nanotech
to effectively defend themselves against the leading power. I think
the major military advantages could differ dramatically between some
of the pairs of adjacent generations, so that the first power to
develop the later version would have an easy match against the power
that has the earlier version. This means that even if the whole road
to advanced self-replicators is long and slow, there would still be
some point where a slight progression yielded a huge military payoff.
> >If superintelligence (that could perform a thousand genius years in a
> >short time) comes before nanotech, or is developed at an early stage
> >of nanotech, then the bottleneck would almost certainly be the
> >hardware, the molecular tools, and in such a case the maturation
> >process would be almost instantaneous.
How likely do you consider that scenario?
> >maturation, will that mean that a multipolar world order can remain
> >stable?
> >I believe the answer to the second question is No. I think there
> >would either be a negotiated merger, or the stronger power would
> >obliterate the weaker, and then immediately rebuild itself, and then
> >expand spherically at a good fraction of the speed of light.
> There is at least one other possibility, and it is the one that keeps the
> world system stable: the cost of subduing the smaller power is more than the
> profit obtained from its submission. Please explain why an imbalance in
> nanotech is more likely to make this solution infeasible than does an
> imbalance in e.g. nuclear missiles, or iron swords. Nanotech makes the cost
> of conquest smaller, but also reduces the profit of conquest, since
> everything gets so much cheaper with nanotech. What are you going to get
> from your conquered enemies? Natural resources? Go to the asteroid belt!
> Lebensraum? Well, that's OK if you like scorched earth. I'd rather have my
Forget about the earth, what is at stake is half the universe (the
chance to double the amount of value-structures you can create) and
the possibility of getting rid of an enemy that might decide to
destroy you at a later time. (Even more than this if there is more
than one rival power aspiring to nanotech.)
> I'm being sarcastic here, perhaps more than I should be, but I genuinely
> don't see the payoff to instantaneous extermination of foreigners through
> nanotech, especially when your enemies will have at least nuclear weapons
> and at best nanotech only one generation behind your own.
Not all foreigners would need to be exterminated, only those that
refused a negotiated merger. But if genuinely hostile powers were not
exterminated (or disarmed) at this point, it would only be a matter
of time before they would be in a position where they could destroy
you.
Whether nukes are a complication depends on how advanced a nanotech we
are talking about. I agree that they would seem to retain some
deterrence at the early stages.
The outcome also depends on whether a superintelligence is in
control or whether the powers are ruled democratically by ordinary
people. In the latter case we would have a further complication.
Subduing the enemies might be more acceptable to the masses if it
could be done without bloodshed and harm to the other nation's
population. Advanced nanotech would make this possible.
What we are discussing lies at the very heart of the future. Maybe it
has never been done so incisively, anywhere, as we are now doing it.
Subject: Re: Goo prophylaxis
Date sent: Fri, 29 Aug 1997 14:08:55
> Nicholas Bostrom writes:
> > i.e. it wants to organize as much matter as possible in the way
> > that it thinks has most value. Except for possible strategic or
> > ethical reasons, it makes no difference whether the matter is virgin
> > territory or whether some other computer has already organized the matter
> > in a sub-optimal way.
> This would presume that there is a decision procedure for
> optimality, would it not? I have yet to run across any such
> decision procedure; if you've got one, I'd be fascinated to see it.
I'm not sure what you mean by a decision procedure for optimality.
The values themselves I take as givens. They might be designed in by
its constructor or they may result from some accidental occurrence; I
don't assume that they can be deduced from some self-evident axioms.
Given the values, it may or may not be trivial to translate them into
micro-level descriptions of the value-optimal physical state. That
depends on what the values are. But whatever they are, we can be
pretty sure they are more likely to be made real by a
superintelligence who holds those values than by one who doesn't.
(Unless the values intrinsically involve, say, respect for
independent individuals or such ethical stuff.) The superintelligence
realizes this and decides to junk the other computers in the
universe, if it can, since they are in the way when optimising the
universe.
Subject: Re: NANO: Directive of Evacuation
Date sent: Fri, 29 Aug 1997 15:29:02
Eliezer S. Yudkowsky wrote:
> The Directive of Evacuation:
> "Do not declassify nanotechnology until all persons wishing to leave the Earth
> have done so."
> I think this is a very Libertarian, cautious, ethical way to state the issue.
> Strategic considerations make it necessary to partition humanity and remove it
> from large masses before allowing everyone access to nanotechnology.
Why do you think that a multipolar world order with nanotech would be
possible just because people move out into space?
> I should also propose the easiest way to do this without leaving anything behind:
> Carve the Earth's crust into citylike sections or hundred-square-kilometer
> sections, whichever is larger, and lift all the sections into separate orbits.
> Transportation between sections must be provided. Also roofing and
This sounds like fantasy to me.
> So, a bit after nanotech is discovered, the sky changes color, airports and
> the Interstate undergo some peculiar changes, and space is filled with giant
> spinning crowbars with domes on one side and counterweights on the other.
This is from the same person who has written about the singularity?
Subject: Re: NANO: Directive of Evacuation
Date sent: Fri, 29 Aug 1997 15:33:28
> if nanotech is developed, it will first be fairly
> limited and mainly used in labs by people in white coats. Then its products
> will be marketed and sold, and eventually every home will have its own
Without a 100% reliable *global* immune system that would be
equivalent to giving everyone access to the launch buttons for the
world's total nuclear arsenal, as Eliezer said. Collective suicide.
Subject: Re: Goo prophylaxis
Date sent: Fri, 29 Aug 1997 15:51:42
> On Aug 29, 1:45am, "Nicholas Bostrom" wrote:
> } >As soon as the world realizes that there is massive
> } > power to be had, everyone will work like crazy to catch up.
> } But the leading power will work like crazy to keep the lead. If they
> } all work equally hard, the one that starts out with an advantage
> } should get to the goal first. The main point we are discussing is
> You seem to be assuming a bunch of isolated powers or labs working
> toward nanotech, with one having and keeping a vital lead. Setting
> aside the probability that progress will be too gradual for a massive
> discontinuity to develop, the non-leading labs can collaborate, applying
So could the leading lab. And it would be more attractive to
collaborate with the leading lab.
> } to effectively defend themselves against the leading power. I think
> } the major military advantages could differ dramatically between some
> } of the pairs of adjacent generations, so that the first power to
> } develop the later version would have an easy match against the power
> } that has the earlier version. This means that even if the whole road
> } to advanced self-replicators is long and slow, there would still be
> } some point where a slight progression yielded a huge military payoff
> You've changed scenarios again! First it was destructive gray goo
> launched by some nihilist fanatic. Now it's national warfare.
We are trying to discuss several scenarios at once on this thread;
that's why it seems like some suspect rhetorical maneuver is taking
place. The nihilist fanatic is only dangerous if nanotech becomes
everyman's tool, as Anders said he thinks it will. But before
that happens, we have to look at what the big, leading institutions
will do.
Subject: Re: Goo prophylaxis
Date sent: Fri, 29 Aug 1997 17:03:40
Eliezer S. Yudkowsky wrote:
> > I never said I would sketch an
> > *invulnerable* system, just a sufficiently strong system. For any immense
> > shielding I can come up with you could always invoke an even greater
> > cosmological disaster ("But your immune system can't stand a supernova!").
> > This exercise is trying to look at defenses against gray goo, which
> > is the real problem, not macro-level warfare.
> But you *do* need an invulnerable system, or at least one that is invulnerable
> to goo. I can reasonably invoke any forces modern technology is capable of
> wielding, up to and including nuclear weapons. If black goo reduces your city
> to radioactive ash, you lose! It doesn't matter how sophisticated your
> defenses are! I am darn well entitled to demand that your immune system stand
> up against nuclear weapons, because in practice, that's what's going to be used!
> Not impossible. Not at all impossible. If you have a layered defense system,
> a diamond shell, and Fog seat belts, the city might be perfectly capable of
> withstanding a nuclear attack. It would lose a layer of defense, but might
> well be capable of rebuilding it before more black goo crossed the radioactive
> zone. In addition, as I pointed out, the city might gain more than it lost.
Does this mean that you are no longer confident in your "destruction
by induction" argument?
> Even so, it's entirely possible that, on any planet, the black goo wins. All
> the time. Every time. The Universe is under no obligation to make things
> easy for us. Hence the proposed Directive of Evacuation: "Get everyone off
> the Earth, into partitioned space colonies, before releasing nanotechnology to
> the public."
Yes, it is remarkable how many people are stuck with the idea that an
effective defense *must* be possible. That's blind faith. And even if
effective defense is possible, there is still the genesis problem;
but by another leap of faith, that problem *must* be soluble too,
even within an unregulated multipolar world order. I suspect there is
some ideological prejudice at the bottom of this.
> My mental picture of these conflicts is partially drawn from Conway's Game of
> Life, in which a single particle can destroy an enormous, complex structure.
> Things on the molecular level will almost certainly be different. Even so, I
> know of no better image.
An interesting way of looking at it.
> Try this tactic. First, the goo hits the city with a vaporize-one-layer
> attack, even though this also vaporizes a layer of goo. Then it detonates a
> nuclear weapon next to the city. This pushes the city into the goo,
Do you literally mean "push"? That would seem like fantasy.
> The gray goo problem is overrated; it's black goo we need to worry about.
I agree. As I think is clear from my postings, goo designed for
destruction is what I meant by grey goo, but from now on I will call it
black goo.
> > You assume that the destroyed cell will no
> > longer be a problem. But what if it turned into a cube of inert diamondoid?
> > Then it would also be a hinder, and give me even more time to develop
> If it is inert diamondoid, is that more of a hindrance than the outer shell?
> Besides, this whole discussion is looking obsolete anyway. Any rigid outer
> defense is toast. It has to be surrounded by explosive-equipped soft 'mune.
> At most, you can have a rigid outer defense as a pretty shell.
> > Note that they would be produced faster than the goo since the goo would
> > have to both reproduce, breach security and defend itself while the
> > antibodies and macrophages would just be produced (although a goo-like
> > macrophage is an interesting and dangerous concept) and sent on their way.
>if the goo has so much as a cubic millimeter to
> call its own, it can keep its queen reproducers in the center while
> surrounding itself with warrior goo, just like the city.
> Why can't the goo use antibodies and macrophages against the 'mune?
> The tactical symmetry remains unbroken.
> > > The goo simply makes repeated
> > > attacks, and each time, the city shrinks a little. We won't even speak of
> > > such horrors as cutting off that city's solar power.
> > An isolated immune system in a world where nobody else has immune systems
> > is weaker than a system in a world where nanodefenses are common. The ultimate
> > defense would of course be to have immune systems everywhere, defending
> > not just the transhumans but the himalayas, squirrels and grass. A bit like
> > the current state in biology, really.
> If nanotechnology was released on Earth, the 'munes would have to be
> EVERYWHERE. The air. The water. Earth's molten core. Otherwise, the goo
Right. An infallible global immune system would be required. A local
immune system wouldn't stand a chance against black goo.
> > > Nanosystems are always faced with "destruction by induction". That is, you
> > > can always destroy one cell; therefore you can destroy the whole thing. To
> > > defend against this, it is required that the system expand faster than the
> > > destruction OR that it be impossible to destroy one cell.
> > Or that you can make the loss of a finite number of cells bearable. If their
> > loss removes the threat (for example by forming a nanoscar), then the
> > defense side will win.
> I disagree with the whole concept of a nanoscar. If there is such a thing,
> you turn it into your first line of defense, not wait until after
> you've been invaded.
Subject: Re: NANO: Space phase (WAS: Goo prophylaxis)
Date sent: Fri, 29 Aug 1997 17:18:46
Eliezer S. Yudkowsky wrote:
> A retreat to space is the most likely method of negating the sea's strategic
> advantages, leaving the local tactical advantages of home ground. Even if the
> goo can fling unlimited missiles after you, they'll be defeated, and their
> material consumed, only adding to your size.
How do you defeat a nuke with detectors on the outside that detonate
it if it is attacked by nanomachines?
I don't believe in the story about launching large parts of the earth
into space.
Subject: Re: Re: Goo prophylaxis
Date sent: Fri, 29 Aug 1997 19:13:53
> > prefer to think about it is in terms of a superintelligence who
> > attempts to maximize what it thinks of as physical value-structures;
> > i.e. it wants to organize as much matter as possible in the way that
> > it thinks has most value. Except for possible strategic or ethical
> > reasons, it makes no difference whether the matter is virgin
> > territory or whether some other computer has already organized the matter
> > in a sub-optimal way.
> But what if its value-structure made it regard "natural" or "unchanged"
> systems as good? This view is already prevalent in our culture, and
> it is not unlikely that an SI might think that its preferred form of
> complexity might include the activities of "simple" forms of life
> and environments in addition to its own structures. It can be a purely
Yes, I agree. It is conceivable that an SI would have conservatism or
naturalness as values. (Presumably because those values were
prevalent in the culture in which it was built, so that these values
were programmed in.) But that's what they are: specific values that
would have to be explicitly added, not something that we can take for
granted.
In general, I think we can say the following: Sentimentality values
will be less prominent than they are today. By sentimentality values
I mean values that are dependent on an object's historical origin or
specific association with a beloved one etc. Why would they be less
prominent? Because at least in some cases they are valued only
indirectly, for the psychological effects they produce (a curl from a
lost lover elicits memories and nostalgia, for example). But in the
future, these effects will be more efficiently produced by
manipulating the brain or the emotional centres in the
superintelligence (assuming it has bothered to preserve them).
So the physical objects (such as Mother Earth) would no longer be
needed.
Subject: Re: Goo prophylaxis
Date sent: Fri, 29 Aug 1997 23:32:05
> molecular nanotechnology to present a serious military threat to
> existing soldiery, highly optimized designs would have to be developed.
And even if that is the case, don't forget that the step from rather
optimized designs to highly optimized designs might be fairly quick,
and if that's the step that creates the big military potential, then
the transition to military dominance could still be abrupt.
> I'm sorry, folks, but certain elements of this thread are starting
> to look like disasturbation to me.
I, on the contrary, think that this is the most interesting thread
we've ever had, and that it is really exciting that we are beginning
to think seriously for the first time about the strategic situation
that will arise when nanotech is developed.
> Seriously, I think ordinary
> old biotech is a much more real danger (designer plagues and the like).
Subject: Re: Goo prophylaxis
Date sent: Sat, 30 Aug 1997 00:04:23
> >There is a difference between the three systems you mention, on the
> >one hand, and a nano self-replicator on the other. Mycoplasma
> >genitalium, automotive vehicles and commercial software are all
> >required to be fairly optimized.
> No, none of those are "optimized". Insofar as they are optimized, germs and
> commercial software are heavily optimized for minimum design requirements,
> even at the cost of performance.
(Fairly optimised.) Germs are highly optimised, given
certain design constraints, shaped as they are by immense selection
pressure and short generation cycles. Cars are optimised. Much
commercial software is also fairly optimised. But you have a point
here. There is also commercial software whose performance is not
optimized, and in some cases it can still be highly
non-trivial to design. So if that is the relevant analogy then it
points in the direction Carl intended it to.
> >To build an optimised nano
> >self-reproducing device would be much harder than simply to make
> >something useful that can replicate. For example, a universal Turing
> >machine has been constructed in Conway's Life world. The entity is
> >very big and it was hard, but nothing near a thousand-genius-year
> Nobody has presented a self-replicating Life system. All Conway did was
> produce a feasibility proof, so you know it *can* be done. Actually
> designing such a system is still considered not yet possible.
Really? I thought I had heard that the Universal Turing machine was
actually designed, with streams of gliders serving as tape etc. But I
may be wrong, in which case I'm glad you pointed it out. Do you have
a reference?
>[some interesting intuitions contrary to mine omitted]
Subject: Re: Goo prophylaxis
Date sent: Sun, 31 Aug 1997 12:02:04
> Can I appeal to someone to summarize this thread? I think there is
> original work being done here, but I have been too busy to keep up with it.
I am planning to write a paper on nanotechnology and strategy, and it
will draw material from the fruitful discussions we have had here.
> I forwarded one of the messages to Rob Freitas, who expressed
> interest, but he is extremely busy writing the *Nanosystems* of medicine
> (31 chapters, two volumes).
> Rob posted related essays, "Police Nanites", on sci.nanotech a few
> months ago, which will be in a book at some point. Perhaps some of
> the work accomplished here might be suitable for inclusion in that.
Sounds interesting. I couldn't find either the essay or the man. Do
you have any URLs or email addresses?
Date sent: Sun, 31 Aug 1997 12:34:42
> Some people say (correctly I think) that we can know almost nothing about
> what things will be like after the singularity, BUT then they tacitly assume
> that the rate of change after The Spike will level out into a plateau of
> sorts, a very high plateau to be sure but a plateau nevertheless. I see no
> reason why that should be true and think we should expect an accelerating
> rate of change for the indefinite future.
>The black goo might look pretty
> scary in the first few milliseconds after it was made, but after a geological
> age (a minute or two) it would seem more pitiful than scary, like a man armed
> with a flint ax trying to conquer the world.
I think you are vastly overestimating the acceleration. A speed-up
of the order you talk about would require new hardware, and that
hardware needs to be assembled, which is a physical process which
cannot be made arbitrarily fast. This means that a minute or two
would not be a "geological time", even if we assume that the arrival
of superintelligence had completely cut away design and debug delays.
But suppose that you were right about this. Then the first power to
get to the "singularity" (I sometimes wonder whether that term
doesn't do more harm than good) would only need to be a *few seconds*
ahead of the competition to become the ruler of the universe (at
least in the absence of aliens). In reality, of course, the leader would
be months ahead, so this would automatically enable the leading power
to eliminate all competition.
Subject: Re: Goo prophylaxis
Date sent: Sun, 31 Aug 1997 12:55:35
> > > molecular nanotechnology to present a serious military threat to
> > > existing soldiery, highly optimized designs would have to be developed.
> I'm starting to suspect that (at the nano level, not at the
> macroscopic mechanical level) life is highly optimized for survival
> under hostile conditions (where the source of the hostility is
> mostly competing life forms).
Yes, but competing *biological* life forms. It hasn't evolved to
withstand deliberately designed nanomachines.
> > And even if that is the case, don't forget that the step from rather
> > optimized designs to highly optimized designs might be fairly quick,
> > and if that's the step that creates the big military potential, then
> > the transition to military dominance could still be abrupt.
> This speculation usually rests on the idea that we will, in
> parallel, have developed AIs or easily replicable uploads that
> will be able to do good engineering work far more rapidly than
> present-day human beings can.
What about the step from highly optimized designs to very highly
optimized designs? By then we should have AI. And very highly
optimised attack will almost certainly win over merely highly
optimised defense.
Subject: Re: Goo prophylaxis
Date sent: Sun, 31 Aug 1997 15:08:25
> The non-leading labs have a vital advantage: they will know at least the
> vague outlines of what works.
[interesting WWII example deleted]
As Drexler has said, there is a difference between know-how and
know-what. But yes, they will have an advantage -- an advantage
relative to the state of the leading power before it had developed
nanotech. But relative to the state of the leading power at the time
in question, they would of course still be at a disadvantage.
We need to clarify the issue:
Let's simplify and assume that we can get to advanced nanotech
through the sequence of discoveries N0, N1, ..., Na...
N(1997) is where we are today.
Nredgoo-i is where universally destructive goo of generation i can be
made. (i=0 is red goo that could extinguish all intelligent life if
there were no immune system.)
Nimmune-i is where a (global) immune system can be built
to deal with red goo of generation i or lower.
Nimmune-abc is where a good nanodefence against ABC and conventional
weapons can be built.
Nsuperintelligence is where superintelligence can be built.
Ncad++ is where nanotech gives substantial improvements to nanodesign
computers (hardware or software).
Ncommercial is where nanotech can be used to make large commercial
profits.
Ngeneralassembler -- self-explanatory.
It seems that what you are arguing is that even if
N(leader) >> N(competition) at some early stage, it still holds that:
(F1) Whenever N(leader) >= Nredgoo-i and N(leader) >= Nimmune-abc,
then either N(competition) >= Nimmune-i, or there is a j such that
N(competition) >= Nredgoo-j and N(leader) < Nimmune-j.
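To pin the claim down, (F1) can be written as a predicate over
technology levels; here is a Python sketch (the numeric tech ladder
at the end is purely hypothetical, chosen only to show the predicate
failing):

    def f1_holds(leader, competition, redgoo, immune, immune_abc):
        """(F1): whenever the leader can build generation-i red goo and
        can defend itself against ABC weapons, the competition is still
        safe: either it has an immune system against generation-i goo,
        or it has goo of some generation j that the leader cannot
        defend against."""
        for i in range(len(redgoo)):
            if leader >= redgoo[i] and leader >= immune_abc:
                deterred = any(competition >= redgoo[j] and leader < immune[j]
                               for j in range(len(redgoo)))
                if competition < immune[i] and not deterred:
                    return False   # the leader can strike with impunity
        return True

    # Hypothetical ladder: generation-i goo is always easier to build
    # than the immune system that stops it.
    redgoo = [10, 20, 30]
    immune = [15, 25, 35]
    print(f1_holds(leader=22, competition=18,
                   redgoo=redgoo, immune=immune, immune_abc=12))   # False

With these (invented) numbers the leader can build generation-1 goo
while the competition has neither a generation-1 immune system nor
any deterrent, which is exactly the unstable case.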
Design-ahead efforts might mean that when enabling technologies
arrive, a lot of designs can be immediately implemented.
Advancement of technology means shorter design cycles, and shorter
design cycles mean that a given chronological time difference
corresponds to a greater technological lag. If not sooner, this
design-cycle acceleration will occur when machine intelligence
begins to be developed.
These are reasons against (F1).
But even if (F1) were true, it would still not follow that we
could have a stable multipolar nanotech world order. Military balance
need not imply military stability. Two fighting men with guns pointed
at each other's heads are in power balance, but their situation isn't
stable. (This example is from Drexler.) It takes an additional
argument to establish (F2): that a first strike won't be attempted.
> care to claim that it is possible to profit from an advance in nanotech
> without providing valuable clues as to its nature, I would be happy to argue
Good. Note that it is not necessary for my position that such
commercial benefit is possible. However, I think it is. We could for
example mass produce certain medicines that can also be made without
nanotech but only at great cost. The same holds for a number of other
products. We could even sell things that aren't possible to make
without nanotech without revealing too much about how we bootstrapped
the tools necessary to make them.
Subject: Re: Goo prophylaxis
Date sent: Sun, 31 Aug 1997 15:30:13
> At 02:29 PM 8/29/97 +0000, Nicholas Bostrom wrote:
> >... The superintelligence
> >realizes this and decides to junk the other computers in the
> >universe, if it can, since they are in the way when optimising the
> >universe.
> Throughout the 'goo prophylaxis' thread, you seem to completely ignore the
> possibility that trade may be more profitable than conquest, that employment
> may be more profitable than enslavement, and that symbiosis may be more
> profitable than extermination. I don't see why the principle of comparative
> advantage should not continue to apply even in a world with vast disparities
> in material power between different actors.
Ok, let's make a quick cost-benefit analysis for an SI deciding whether
or not to destroy another budding SI who refuses a negotiated merger.
Destruction (kill the tyrant in the cradle):
Costs:
(1a) Some resources have to be deployed for a brief time while the
operation is carried out.
(1b) If the rival has been allowed to reach a fairly advanced
stage, then the operation might incur some damage which will take a
while to repair.
(2) We also lose all labor output that the rival could have produced
and from which we could have benefitted through trade.
Benefits:
(1) Instead of having to share the universe with our rival, we get it
to ourselves. This means we gain 0.5*(resources in the part of the
universe that will ever be colonized). This is a *huge* benefit.
(2) We eliminate the risk that the rival will one day try to destroy
us. Thus, until we meet advanced extraterrestrial
civilizations, we get the benefit of total safety from external
threats.
Costs (1a) and (1b) are negligible compared to benefit (1). Cost (2) is
also smaller than benefit (1), because we can use the resources we
conquer to produce the same amount of output as our rival would have
had, and this output will now be ours; we won't need to buy it first.
Even without benefit (2), the benefit side far outweighs the cost
side. The only plausible considerations that could change this would
be either a balance of terror involving total mutual annihilation, or
the inclusion of specific sorts of strong ethical motivations.
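The same comparison in toy numbers, measuring everything in units of
the total resources that will ever be colonized (every figure below
is an illustrative guess, not an estimate):

    # Benefits of destroying the rival:
    benefit_sole_universe = 0.5    # (1) the rival's half of the universe
    benefit_safety        = 0.1    # (2) no risk of being destroyed later
    # Costs:
    cost_operation        = 1e-6   # (1a) resources tied up during the strike
    cost_damage           = 1e-3   # (1b) repairs if the rival is advanced
    cost_lost_trade       = 0.3    # (2) forgone gains from trade, generously

    net = (benefit_sole_universe + benefit_safety
           - cost_operation - cost_damage - cost_lost_trade)
    print("net gain from striking: %+.3f" % net)   # positive

Even with the gains from trade set generously high, the net comes out
positive, because the conquered resources replace the output one
would otherwise have had to buy.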
> PS. What a great thread!
Date sent: Mon, 1 Sep 1997 11:26:16
> I see only one way past this problem. We must strive to
> precipitate the singularity before the advent of the ability
That's an interesting proposal.
> As a matter of definition, I use the term "Singularity"
> as Vinge does: the point in the future past which prediction
> becomes impossible.
That means that the singularity is observer-moment relative, and that
it would not occur for me if I found a way to make some meaningful
prediction about the future of intelligence in the universe, even if
everything else made it look like a singularity: immensely accelerated
change, short-cycled positive feedbacks, explosion of computing
power etc. I think the term "singularity" is better used to denote
such an occurrence, whether or not the end result is to some extent
predictable. The concept you defined would more accurately be
called "the horizon", and that might also make it less likely that it
will function as a "we can't know anything so let's close our minds
off" device. (Drexler warned against this version of the concept in
his after-dinner speech at Extro3.)
Subject: Re: Goo prophylaxis
Date sent: Mon, 1 Sep 1997 12:09:12
> >> > = Nicholas Bostrom
> > To build an optimised nano
> >self-reproducing device would be much harder than simply to make
> >something useful that can replicate.
> We're talking about gray goo here, right? That has to live in the wild,
No, black goo (deliberately designed war goo, destruction goo, doom
goo); grey goo is accidental. What I had in mind was something that
would be able to live in the wild, but even without that ability you
could still gain an immense military advantage from being able to
build unlimited amounts of military equipment in large tanks
practically for free.
> without getting all its little cogs and conveyor belts clogged up with
> natural molecules that just happen to fit into its various nooks and
> crannies.
An easy way to fix this would be to have those inner parts isolated
from the environment.
> >For example, a universal Turing
> >machine has been constructed in Conway's Life world. The entity is
> >very big and it was hard, but nothing near a thousand-genius-year
> >task, to do it. The feasibility stems from the fact that you have
> >identical components that you can put together into bigger identical
> >components, and so on, and at each step you need only consider the
> >apparatus at a certain level of abstraction. If this is the right
> >analogy for nanotech, then the design work would seem tractable, once
> >the right tools are there. But I will take your opinion on this
> >issue into account in my future thinking. And debugging is also a
> >complication.
> I agree that Conway's Life self-replicator design could be carried out to
> completion in a short time. However, it is not a good analogy. The Life
> world is perfect, so engineering reduces to mathematics.
On the atomic level, our world is perfect too.
> designed his machine to function in an infinite vacuum, so the complexity
> produced by the impingement of the world is not present.
> Life self-replicator could exist in a sea of random pixels, inasmuch as any
> design for Life machinery I have seen will be completely destroyed by a
What about having three (or more) of these original self-replicators,
and one unit that compares the output and kills the one whose output
diverges from the other two, and prompts the construction of a
replacement? Of course, if noise levels get too high then everything
breaks down.
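This is essentially what fault-tolerance engineers call triple
modular redundancy; a minimal Python sketch of the voting part, on a
toy bit-level model:

    import random

    def replicate(program, noise=0.01):
        """One replicator's output: a copy of 'program', possibly corrupted."""
        return [bit ^ 1 if random.random() < noise else bit
                for bit in program]

    def vote(outputs):
        """Per-bit majority of three outputs; one corrupted copy is outvoted."""
        return [1 if sum(bits) >= 2 else 0 for bits in zip(*outputs)]

    program = [1, 0, 1, 1, 0, 0, 1, 0]
    outputs = [replicate(program) for _ in range(3)]
    print(vote(outputs) == program)   # almost always True at low noise

At high noise two copies start to err on the same bit and the vote
fails too -- which is the "everything breaks down" regime.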
> >> (3) Drexler and Merkle are two very (very!) smart guys. They have labored
> >> for years, and designed what? Some bearings, a transmission, a PLA, and a
> >> Stewart platform? And you claim this shows how easy it is? I'd hate to see
> >> an engineering task you considered hard!
> >Those are again optimized designs.
> No they aren't. They are pretty much the first designs they came up with
> that didn't explode in simulation.
> >What about the rod logic
> The only part of it designed to atomic precision was the PLA (programmable
> logic array) I mentioned above. Even there, only the rods were designed to
> atomic precision, not the frame that holds them, so maybe I shouldn't count it.
What is the reason why the design only went that far? Was the rest
conceptually too difficult or was it just too big for the computer
simulations?
Subject: Re: Re: Goo prophylaxis
Date sent: Tue, 2 Sep 1997 15:10:40
> In no sense. They get substantially better every year even after you
> discount for technological improvement. In what sense could they possibly be
> optimal at any point?
(Fairly optimal.) In the sense that it would be *much* easier to
build something on four wheels that moves by itself than it is to
build a vehicle that can compete on the open market today. The
context was this: Carl said that a simple self-replicator would
contain about the same amount of information as a car. So some
kind of analogy-inference might be made if we know how difficult it
is to design a car. Well, how difficult is it? Many highly skilled
people have been busy for many decades designing cars, so it seems
very hard. But this would be to overlook the fact that the
self-replicators we are trying to build need not be optimised in the
sense that cars need to be, if they are to be acceptable to car
designers. The relevant analogy (a weak one, to be sure) is rather to
steerable automobiles on four wheels or something like that; not to a
car that could be sold today.
> >> >To build an optimised nano
> >> >self-reproducing device would be much harder than simply to make
> >> >something useful that can replicate. For example, a universal Turing
> >> >machine has been constructed in Conway's Life world. The entity is
> >> >very big and it was hard, but nothing near a thousand-genius-year
> >> Nobody has presented a self-replicating Life system. All Conway did was
> >> produce a feasibility proof, so you know it *can* be done. Actually
> >> designing such a system is still considered not yet possible.
> >Really? I thought I had heard that the Universal Turing machine was
> >actually designed, with streams of gliders serving as tape etc. But I
> >may be wrong, in which case I'm glad you pointed it out. Do you have
> >a reference?
> "The Recursive Universe" by William Poundstone. It has a good layman's
> description of Conway's proof, and some guesstimates of what it would
take to
> actually make a self-replicating Life computer (conclusion: no time
soon).
I don't have that book handy, but now I'm beginning to suspect that
what Poundstone makes guesstimates about is constructing such a
gadget in the real world. What we were talking about was Conway's
universal Turing machine in the Life world (a mathematical model).
I think Carl Feynman said that he was acquainted with Conway's proof
and thought that going to the detailed design was only a matter of
filling in some details. (Did you really think that I believed that
somebody had built a Life world replicator in the real world out of
atoms?)
Subject: Re: Re: Re: Goo prophylaxis
Date sent: Wed, 3 Sep 1997 13:26:43
> In a message dated 9/2/97 7:15:10 AM, Nicholas Bostrom wrote:
> >> >Cars are optimised.
> >> In no sense. They get substantially better every year even after you
> >> discount for technological improvement. In what sense could they possibly
> >> be optimal at any point?
> >(Fairly optimal.) In the sense that it would be *much* easier to
> >build something on four wheels that moves by itself than it is to
> >build a vehicle that can compete on the open market today. The
> >context was this: Carl said that a simple self-replicator would
> >contain about the same amount of information as a car. So some
> >kind of analogy-inference might be made if we know how difficult it
> >is to design a car. Well, how difficult is it? Many highly skilled
> >people have been busy for many decades designing cars, so it seems
> >very hard. But this would be to overlook the fact that the
> >self-replicators we are trying to build need not be optimised in the
> >sense that cars need to be, if they are to be acceptable to car
> >designers. The relevant analogy (a weak one, to be sure) is rather to
> >steerable automobiles on four wheels or something like that; not to a
> >car that could be sold today.
> Any self-replicator has a much harder job than a steerable car. Based on Von
> Neumann's estimates, even in a tank with semi-processed raw materials, you'll
> need about 250,000 parts in a sophisticated (i.e., well-designed) system.
> Carl's point (I think) was that that is roughly comparable to a modern car -
> an early car is much simpler. The analogy "simple car"="simple replicator"
> is not correct, in the same way that "simple cart"!="simple car".
Ok, that makes sense. It seems that we have exhausted the power of
analogies now, though. We know that making a nanotech
self-replicator, even given perfect atomic positioning, would be
non-trivial. It would be useful if it were possible to come to a
slightly more precise conclusion, say about the order of magnitude of
the number of "genius-years" required. We would need to take into
account the possibility of developing better computers that could
help in simulations, and other enabling technologies.
Atomic positioning, and atomic monitoring, are perhaps just around
the corner. The STM and AFM achieve this in a very imperfect manner,
but some good grip-molecules to be placed on the tip of the needle
could possibly be developed within a few years. How long will it take
after that until we have ab initio self-replicators? Ten, fifteen
years? 2015? I suppose that most of the action would happen in the
last few years, when there would be frenetic activity in labs all
over the world. By 2010 we should have some pretty impressive CAD
which would give extra acceleration to the process. Drexler tends to
avoid predictions about when things are going to happen, but if
pressed he says he believes that we will have a general assembler
sometime during the first third of the next century, and more likely
in the earlier part than in the later part of this interval. Hmm. On
my transhuman home page I say that I believe there is at least a 50%
chance that we will have superhuman artificial intelligence within 50
years. Perhaps I should strengthen this to 30 years?
Subject: Goo prophylaxis:consensus
Date sent: Wed, 3 Sep 1997 14:31:01
Our discussion about the strategic situation after and during the
development of nanotechnology has gone on for a while, and there is
still disagreement on several issues. But perhaps we reached a near
consensus on the following non-trivial points?
1. Provided that technological research continues, nanotechnology will
eventually be developed.
2. An immune system wouldn't work unless it was global.
3. In the absence of a global immune system, if everybody could make
their own nanotech machines then all life on earth would soon become
extinct.
4. In the absence of ethical motives, the benefits would outweigh the
costs for a nanotech power that chose to eliminate the competition or
prevent it from arising, provided it had the ability to do so.
Subject: Re: Goo prophylaxis:consensus
Date sent: Thu, 4 Sep 1997 14:00:21
In response to the comments that my first consensus-feeler elicited,
we can make a second attempt by including the following changes and
clarifications and additions (borrowing ideas and formulations from
several posters):
1. Provided that technological research continues, it is likely that
ab initio molecular nanotechnology, including general assemblers,
will eventually be developed.
2. Hostile attack goo cannot be allowed to gain a sizable foothold;
therefore there must not be any sizable global area unprotected by an
immune system.
[This leaves open the possibility that there may be
independent complementary immune systems, either side-by-side or
overlapping. The main reasons for 2 were apparent in the discussion we
had about destruction by induction, the possibility of building huge
quantities of TNT, and the inadequacy of Anders' immune system
proposal to deal with these problems. 2 does not say anything about
whether an isolated local immune system could be efficient.
Given an island vs. sea battle, and a certain minimum technology
level on both sides, the sea will win, whether the "island" is a
malevolent spore or a city.
(I think that Eliezer shares this position and that a "not" has got
lost from his statement:)
>1b: Personal immune systems are feasible as a defense against
3. In the absence of a global immune system, if most people could
make their own nanotech machines then all human life on earth would
soon become extinct.
[We leave open the possibility that subterranean bacteria may hang on,
if only because nobody bothers to try to exterminate them. Also note
that we leave open whether the global immune system would be
unique and monolithic or otherwise. The point is that the vast
majority of the earth's surface (and crust?) has to be covered by one
immune system or another.]
4. In the absence of ethical motives, the benefits would outweigh the
costs for a nanotech power that chose to eliminate the competition or
prevent it from arising, provided it had the ability to do so.
[Hal agrees with this, but several people said they don't. In at
least one of the cases, the disagreement crept in outside the claim
that is made in 4. 4 does not say that the first nanotech power
*will* eliminate the competition (although I happen to believe that
that is rather likely), only that in the absence of ethical motives
it would be rational for it to do so. But ethical motives need not be
absent, and its decision making (democratic?) need not be perfectly
rational. Also, I use goals as the principle of individuation, so the
nanopower may have a rich internal structure, and it may encompass
many nations with similar aims, for example. With this clarification,
I don't see how anybody could disagree with it, in the light of the
cost-benefit analysis I posted a few days ago.
The comparative advantage objection is mistaken, as Hal explained:
>In particular, the doctrine of
>comparative advantage doesn't seem relevant. You aren't going to
>lose access to the resources represented by the competition; rather,
>you are going to subsume those resources and gain greater control.
The lost-information objection I find completely unconvincing.
Carl Feynman says he disagrees with 4 but hasn't been posting
much because of an ear infection. I hope he will get better soon. I
want to hear what he has to say about my cost-benefit analysis.]
5. Unintentional gray goo is a relatively mild danger compared to
deliberately designed black goo.
>I think this kind of consensus-description is a good idea, although I
>have the feeling that Nicholas has biased it a bit in his direction
>rather than the consensus perceived by (say) me.
Well, I should have said that what I was searching for was a
*consensus sapientum* (a consensus of the wise), where the sapientes
are defined as those who by and large agree with Mr. Bostrom ;-) .
Subject: Re: Goo prophylaxis:consensus
Date sent: Fri, 5 Sep 1997 15:38:01
> It takes much less effort to colonize a new star system than to destroy
> every competing intelligence or SI in one particular system. Once such a
> colony is established, it is much easier to defend it than to attack it.
...which supports the claim that the smart strategy is to prevent the
rivals from arising, here on earth, in the first place.
Subject: Re: Goo prophylaxis:consensus
Date sent: Fri, 5 Sep 1997 16:27:22
> If we think of ethics as distilling our experience of the long-term
> consequences of our actions, then this suggests that there is something
> mistaken with the reasoning in favor of preemptive strikes.
Well, the circumstances wherein our traditional ethical systems were
developed were very different from the situation we will have when the
first nanopower takes off. Also, I think it is unnecessary to concede
too much wisdom to traditional ethical systems -- most of them are
very bizarre: "Don't eat pork!", "Don't use condoms!" or rest on
assumptions that most of us don't believe in (the avenging wrath of
God, for example).
> A recent historical example would be the situation immediately after
> WWII, when the U.S. had sole possession of the atomic bomb. There was
> undoubtedly debate about using this power preemptively against the
> USSR, our allies during the war, but inherent ideological adversaries.
von Neumann was in favour of a preemptive strike, and so was Bertrand
Russell (who was otherwise a pacifist!). For my part, I am not sure
what would have been best, given the knowledge that was available
then, or even the knowledge that we have today. But an interesting
fact is: it didn't happen. Perhaps it won't happen the next time
either, even if that will predictably cause the end of the world.
> Indeed, the cost of not conquering the Soviet Union was considerable:
> the Cold War; years of mistreatment of its population and its ecology
> by the Soviet government; justification of American excesses as
> necessary to stop the Red menace.
> However, if the U.S. had preemptively attacked Russia after WWII,
> destroyed it as a potential competitor for at least decades, things
> might easily have been worse. Certainly the U.S. would have been a
> less trusted ally and partner in the world, more a heavy-handed,
> feared tyrant.
Perhaps it was just sheer luck that WWIII never happened. But
somebody could argue that: "it is really only since the collapse of
the Soviet Union that the USA has begun to behave virtuously. Now it
is the conscience of the world, leading the efforts to inhibit
nuclear proliferation and to quell aspiring tyrants, like Saddam
Hussein. Perhaps, if the USSR had been defeated at an early stage,
the USA could have been virtuous for 40 years by now, and thereby
have earned the respect of the rest of the world." Who knows?
> And as things turned out, the USSR eventually fell apart
The best thing that has happened since the defeat of Hitler. I still
feel joyful when I think of this event.
> countries haltingly moving towards democracy. This is a victory for
> our ethical standards, as the USSR learned that its policies were
> wrong in an absolute sense, that is, they were not in accordance
> with nature. Even with all the years of suffering caused by the
> existence of the USSR, the world is very likely a better place today
> than it would have been after 50 years under a nuclear-enforced Pax
> Americana.
> Another issue is less tangible. It could be argued that by taking an
> action which is evil, you make yourself more likely to take other
> evil actions in the future. In the nuclear American empire scenario,
> we can easily imagine that after using nuclear weapons on first
> Japan and then Russia, they might be used against China, Vietnam,
> Cuba, or any other country which dares to resist. It may be
> necessary to crack down on dissent at home, as these outrageous
> actions lead to protests. You could end up with the worst tyranny
> imaginable.
> Similarly, a nanotech power which is so paranoid and aggressive as
> to take the step of eradicating everyone else on the planet may find
> it difficult to survive on its own terms. Paranoia would rule...
> snip...
> The result would be a nightmare Borgism, a nearly mindless plague
> whose only goal was conquest, spreading throughout the universe.
> This would all flow from that first step of destruction.
Yes, that would be a very bad outcome. In EoC, Drexler mentions
the possibility that a state might choose to get rid of its people
and replace them with obedient AIs. This is a real danger, and
another reason why we must make sure that no totalitarian state gets
to the point where it can make a successful first strike.
> Consider, in contrast, an entity which takes the harder road from
> the beginning, seeking to embrace diversity and work with
> competitors who ...
Yes, everybody who is willing to cooperate could be included in the
dominating power. Hopefully, this would be most of the world, perhaps
excluding some rogue states and possibly China. These nations would
have to be forced to either cooperate or relinquish their military
power. That would not mean exterminating their populations, but
dissolving their military machines and doing whatever it takes to
make sure they never obtain a dangerous level of nanotechnology
(relative to the defenses that the rest of the world has).
> But the diversity which
> results will be a positive benefit in and of itself. And the need
> to deal flexibly and creatively with competitors will arguably make
> it better prepared to deal with surprises which the universe throws
> at it.
I think this coalition would have enough diversity; but we must not
forget that we can always afterward *design in* as much diversity
as is desirable, whether by creating independent modules within the
SI or by arranging for cultural differences in society. Diversity is
something that can be manufactured. It is not something that a
nanopower would need to risk its own existence to achieve, or give up
half of the universe for.
Subject: Re: Goo prophylaxis:consensus
Date sent: Fri, 5 Sep 1997 16:54:05
The Low Golden Willow wrote:
> } 2. Hostile attack goo cannot be allowed to gain a sizable foothold;
> } therefore there must not be any sizable global area unprotected by
> } an immune system.
If we assume that we need to cover at least 99.99% of the earth with
some immune system (which I do), then we might as well cover all of
it, I suppose. So let's reformulate:
2. In a multipolar world, the whole earth must be protected by one
immune system or another. If only cities were protected, attack goo
could easily extinguish all intelligent life.
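A back-of-the-envelope calculation shows why an unprotected foothold
is fatal (the doubling time below is an assumption of mine; nothing
in the argument depends on the exact figure):

import math

# Growth of unopposed attack goo from a tiny foothold.
# Both the foothold size and the doubling time are assumed,
# illustrative figures.
earth_surface_m2 = 5.1e14  # total surface area of the earth
foothold_m2      = 1e-6    # one square millimetre of spores
doubling_hours   = 1.0     # assumed doubling time for unopposed goo

doublings = math.log2(earth_surface_m2 / foothold_m2)
days = doublings * doubling_hours / 24

print(round(doublings), "doublings, about", round(days, 1), "days")
# -> 69 doublings, about 2.9 days

If only cities were covered, everything outside them would be
consumed within days, after which the protected islands would face a
sea of goo.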
> One city fighting off a planet of goo might lose.
> One half of a planet could fight off another half
But that would be an enormously expensive battle, and it should be
avoided.
> } Given an island vs. sea battle, and a certain minimum technology
> } level on both sides, the sea will win, whether the "island" is a
> } malevolent spore or a city.
> "Certain minimum technology level" is completely vague. You can
> shift the levels on either side to force whatever result you want.
> And I think I disagree with the claim in general anyway: the natures
> of the "island" and "sea" do matter. A spore surrounded by white
> blood cells and a city surrounded by whatnot are not the same thing.
> A city can defeat a first wave and then cannibalize the remains,
> becoming stronger. The spread of civilization, except here the wild
> comes to the city rather than vice versa.
I inserted the "minimum technology" proviso into Eliezer's original
formulation to make sure that the sea could avoid making itself
available as dinner. But is there such a minimum technology that
would do the trick no matter how advanced the island? I suppose there
is (design space is limited), though it might be very high up on the
technology ladder. Sooner or later, the sea would reach that level of
technology, and then it would defeat the island, unless the island
had made itself into a sea by that time. So we reformulate:
In the long run, an island cannot defend itself against a sea.
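A minimal sketch of the intuition behind this reformulation (the
growth rates, head start, and areas are all assumptions chosen just
to show the shape of the argument): if the capacity for technological
progress scales with the resources one controls, then a sea
commanding vastly more area eventually overtakes any fixed lead of
the island.

# Toy island-vs-sea technology race (illustrative assumptions only).
# Tech level grows in proportion to the resources (area) controlled.
island_tech, sea_tech = 100.0, 1.0    # island starts far ahead
island_area, sea_area = 1.0, 1000.0   # sea controls ~1000x the area
rate = 0.001                          # growth per unit area per year

years = 0
while sea_tech < island_tech:
    island_tech *= 1 + rate * island_area   # +0.1% per year
    sea_tech    *= 1 + rate * sea_area      # doubles every year
    years += 1

print("sea overtakes the island after about", years, "years")  # ~7

The absolute numbers mean nothing; the point is that a
resource-proportional exponential makes the island's head start
temporary, unless the island turns itself into a sea before it is
overtaken.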