Humanity is near the end of its time [Doomsday Argument]

greenspun.com : LUSENET : Human-Machine Assimilation : One Thread

The Doomsday Argument

Philosophy seldom produces empirical predictions. The Doomsday argument is an important exception. From seemingly trivial premises it seeks to show that the risk that humankind will go extinct soon has been systematically underestimated. Nearly everybody's first reaction is that there must be something wrong with such an argument. Yet despite being subjected to intense scrutiny by a growing number of philosophers, no generally convincing refutation has yet been formulated.

It started some fifteen years ago when astrophysicist Brandon Carter discovered a previously unnoticed consequence of a version of the weak anthropic principle. Carter didn't publish his finding, but the idea was taken up by philosopher John Leslie, who has been a prolific author on the subject, culminating in his monograph The End of the World (Routledge 1996). Meanwhile another physicist, Richard Gott III, independently discovered the doomsday argument and published an article on it in Nature. Since then there have been numerous papers trying to refute the argument, and approximately equally many papers refuting these refutations.

Here is the doomsday argument. I will explain it in three steps:

Step I

Imagine a universe that consists of one hundred cubicles. In each cubicle, there is one person. Ninety of the cubicles are painted blue on the outside and the other ten are painted red. Each person is asked to guess whether she is in a blue or a red cubicle. (And everybody knows all this.)

Now, suppose you find yourself in one of these cubicles. What color should you think it has? Since 90% of all people are in blue cubicles, and since you don't have any other relevant information, it seems you should think that with 90% probability you are in a blue cubicle. Let's call this idea, that you should reason as if you were a random sample from the set of all observers, the self-sampling assumption.

Suppose everyone accepts the self-sampling assumption and everyone has to bet on whether they are in a blue or red cubicle. Then 90% of all persons will win their bets and 10% will lose. Suppose, on the other hand, that the self-sampling assumption is rejected and people think that one is no more likely to be in a blue cubicle; so they bet by flipping a coin. Then, on average, 50% of the people will win and 50% will lose.  The rational thing to do seems to be to accept the self-sampling assumption, at least in this case.
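The betting comparison can be sketched in a short simulation (an illustrative sketch, not part of the original text; the function names are my own):

```python
import random

# 100 cubicles, 90 blue and 10 red, one person in each. Compare the
# self-sampling bet ("always guess blue") with a coin-flip bet.
def trial_ssa():
    # A random person guesses "blue"; they win if their cubicle is blue.
    return random.randrange(100) < 90

def trial_coinflip():
    in_blue = random.randrange(100) < 90
    guessed_blue = random.random() < 0.5
    return guessed_blue == in_blue

N = 100_000
ssa_wins = sum(trial_ssa() for _ in range(N)) / N
coin_wins = sum(trial_coinflip() for _ in range(N)) / N
print(f"self-sampling bet wins: {ssa_wins:.2f}")   # ~0.90
print(f"coin-flip bet wins:     {coin_wins:.2f}")  # ~0.50
```

About 90% of self-samplers win their bets, versus about 50% of the coin-flippers, matching the figures above.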

Step II

Now we modify the thought experiment a bit. We still have the hundred cubicles but this time they are not painted blue or red. Instead they are numbered from 1 to 100. The numbers are painted on the outside. Then a fair coin is tossed (by God perhaps). If the coin falls heads, one person is created in each cubicle. If the coin falls tails, then persons are only created in cubicles 1 through 10.

You find yourself in one of the cubicles and are asked to guess whether there are ten or one hundred people. Since the number was determined by the flip of a fair coin, and since you haven't seen how the coin fell and you don't have any other relevant information, it seems you should believe with 50% probability that it fell heads (and thus that there are a hundred people).

Moreover, you can use the self-sampling assumption to assess the conditional probability of a number between 1 and 10 being painted on your cubicle given how the coin fell. For example, conditional on heads, the probability that the number on your cubicle is between 1 and 10 is 1/10, since one out of ten people will then find themselves there. Conditional on tails, the probability that you are in number 1 through 10 is one; for you then know that everybody is in one of those cubicles.

Suppose that you open the door and discover that you are in cubicle number 7. Again you are asked, how did the coin fall? But now the probability is greater than 50% that it fell tails. For what you are observing is given a higher probability on that hypothesis than on the hypothesis that it fell heads. The precise new probability of tails can be calculated using Bayes theorem. It is approximately 91%. So after finding that you are in cubicle number 7, you should think that with 91% probability there are only ten people.
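The 91% figure follows directly from Bayes' theorem with the conditional probabilities given above (a quick sketch; the variable names are my own):

```python
# Prior: fair coin. Likelihoods from the self-sampling assumption:
# P(your cubicle is number 7 | heads) = 1/100 (you are one of 100 people),
# P(your cubicle is number 7 | tails) = 1/10  (you are one of 10 people).
p_heads, p_tails = 0.5, 0.5
p_seven_given_heads = 1 / 100
p_seven_given_tails = 1 / 10

posterior_tails = (p_tails * p_seven_given_tails) / (
    p_tails * p_seven_given_tails + p_heads * p_seven_given_heads
)
print(round(posterior_tails, 3))  # 0.909
```

The posterior is 10/11, i.e. approximately 91%, as stated.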

Step III

The last step is to transpose these results to our actual situation here on Earth. Let's formulate the following two rival hypotheses. Doom Early: humankind goes extinct in the next century and the total number of humans that will have existed is, say, 200 billion. Doom Late: humankind survives the next century and goes on to colonize the galaxy; the total number of humans is, say, 200 trillion. To simplify the exposition we will consider only these hypotheses. (Using a more fine-grained partition of the hypothesis space doesn't change the principle, although it would give more exact numerical values.)

Doom Early corresponds to there only being ten people in the thought experiment of Step II. Doom Late corresponds to there being one hundred people. Corresponding to the numbers on the cubicles, we now have the "birth ranks" of human beings: their positions in the human race. Corresponding to the prior probability (50%) of the coin falling heads or tails, we now have some prior probability of Doom Early or Doom Late. This will be based on our ordinary empirical estimates of potential threats to human survival, such as nuclear or biological warfare, a meteorite destroying the planet, runaway greenhouse effect, self-replicating nanomachines running amok, a breakdown of a metastable vacuum state due to high-energy particle experiments, and so on (presumably there are many dangers that we haven't yet thought of). Let's say that based on such considerations, you think that there is a 5% probability of Doom Early. The exact number doesn't matter for the structure of the argument.

Finally, corresponding to finding you are in cubicle number 7, we have the fact that you find that your birth rank is about 60 billion (that's approximately how many humans have lived before you). Just as finding you are in cubicle 7 increased the probability of the coin having fallen tails, so finding you are human number 60 billion gives you reason to think that Doom Early is more probable than you previously thought. Exactly how much more probable will depend on the precise numbers you use. In the present example, the posterior probability of Doom Early will be very close to one. You can with near certainty rule out Doom Late.
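The Step III shift can be computed the same way as the cubicle case (a sketch using the example figures above; by the self-sampling assumption, the likelihood of any particular birth rank is 1 divided by the total number of humans under each hypothesis):

```python
# Priors from the example: 5% Doom Early, 95% Doom Late.
p_early, p_late = 0.05, 0.95
n_early, n_late = 200e9, 200e12  # 200 billion vs 200 trillion humans

# Likelihood of birth rank ~60 billion (possible under both hypotheses).
like_early = 1 / n_early
like_late = 1 / n_late

posterior_early = (p_early * like_early) / (
    p_early * like_early + p_late * like_late
)
print(round(posterior_early, 3))  # 0.981
```

A 5% prior becomes roughly a 98% posterior for Doom Early, which is the "very close to one" shift described in the text.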

That is the Doomsday argument in a nutshell. After hearing about it, many people think they know what is wrong with it. But these objections tend to be mutually incompatible, and often they hinge on some simple misunderstanding. Be sure to read the literature before feeling too confident that you have a refutation.

If the Doomsday argument is correct, what precisely does it show? It doesn't show that there is no point trying to reduce threats to human survival "because we're doomed anyway". On the contrary, the Doomsday argument could make such efforts seem even more urgent. Working to reduce the risk that nanotechnology will be abused to destroy intelligent life, for example, would decrease the prior probability of Doom Early, and this would reduce its posterior probability after taking the Doomsday argument into account; humankind's life expectancy would go up.

There are also a number of possible "loopholes" or alternative interpretations of what the Doomsday argument shows. For instance, it turns out that if there are many extraterrestrial civilizations and you interpret the self-sampling assumption as applying equally to all intelligent beings and not exclusively to humans, then another probability shift occurs that exactly counterbalances and cancels the probability shift that the Doomsday argument implies. Another possible loophole is if there will be infinitely many humans; it's not clear how to apply the self-sampling assumption to the infinite case. Further, if the human species evolves into some vastly more advanced species fairly soon (within a century or two), maybe through using advanced technology, then it is not clear whether these posthumans would be in the same reference class as us, so it's not clear how the Doomsday argument should be applied then. Yet another possibility is if population figures go down dramatically: it would then take much longer before enough humans have been born that your birth rank starts looking surprisingly low.

So even if the Doomsday argument is fundamentally correct, there is still a lot of scope for differing opinions about our future. But it would tend to more or less rule out certain kinds of hypotheses.



-- scott (hma5_5@hotmail.com), February 09, 2000

Answers

More on the Doomsday argument

-- scott (hma5_5@hotmail.com), February 09, 2000.

The argument posted here has a huge flaw. The key is in "Suppose you find yourself in a cube, what's the probability that the result of the coin toss was Heads?" Your argument states that, since the coin is fair, it can be assumed that the probability was 50% that it came up heads, and thus there is a 50% chance the cubes are filled with 100 people. This is a hugely misstated number. Using the Bayes' Theorem you so boldly flaunt around in the steps beyond this point, I will calculate the actual probability that the coin's result was heads, given that you find yourself in a cube.

I will use this notation:

| will mean "given" in the probability statements
H indicates the event Heads is tossed
T indicates the event Tails is tossed
C indicates the event you find yourself in a cube

With that in mind:

P(The toss was heads given you find yourself in a cube) = P(H|C) = P(H)*P(C|H)/( P(H)*P(C|H) + P(T)*P(C|T) )

But, if heads was tossed, then the probability that you are in a cube is 1. And, if tails was tossed, the probability you would be in a cube is .1. The probability of heads and of tails is each 50%. Therefore:

(A) P(H|C) = .5*1/(.5*1 + .5*.1) = .5/.55 = .909 (Which means that P(T|C) = .091)

Therefore, if you find yourself in a cube, the probability that heads was rolled was about 91% (and the probability that the roll was tails, given that you are in a cube is about 9.1%).
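Equation (A) can be checked with exact fractions (a sketch of my own, not part of the post; note that the poster's .909 and .091 are rounded values of 10/11 and 1/11):

```python
from fractions import Fraction

# Fair coin; the poster's likelihoods of existing in a cube at all:
p_h = p_t = Fraction(1, 2)
p_c_given_h = Fraction(1)      # heads: all 100 cubicles are filled
p_c_given_t = Fraction(1, 10)  # tails: only 10 of the 100 are filled

p_h_given_c = (p_h * p_c_given_h) / (p_h * p_c_given_h + p_t * p_c_given_t)
print(p_h_given_c)        # 10/11, i.e. about .909
print(1 - p_h_given_c)    # 1/11, i.e. about .091
```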

Applying this further into the argument:

Now suppose you find that your number is 7. Now the question is: what is the probability that the toss was tails (corresponding to "Doom Early")? Let's figure this out.

I will use 7 to indicate the event that you find yourself in cube 7.

So, we want the probability that the coin toss was tails given you are in cube 7. Now, saying that you are in cube 7 is actually saying two things:

1. You are in a cube, and
2. That cube is number 7.

So, what we want is:

(1) P(Tails given you are in a cube, and that cube is 7) = P(T|C and 7) = P(T)*P(C and 7|T)/( P(T)*P(C and 7|T) + P(H)*P(C and 7|H) )

Now, let's investigate P(C and 7|H) and P(C and 7|T):

P(C and 7|H) = .01. Here's why: if heads was tossed, then you MUST be in a cube, and the probability that it is cube 7 is 1 in 100.

(2) P(C and 7|T) = P(C and 7 and T)/P(T) (This is the definition of conditional probability: P(A|B) = P(A and B)/P(B))

Now then, let's work with P(C and 7 and T). This is:

(3) P(C and 7 and T) = P(C and T)*P(7|C and T) (Rearranging the definition of conditional probability)

But P(7|C and T) is the probability that you are in cube 7 given that you are IN A CUBE and TAILS WAS TOSSED. This is the 1/10 chance given in the problem statement. If tails was tossed and you are in a cube, then there is a 1 in 10 chance that that cube is 7. So, let's look at P(C and T):

(4) P(C and T) = P(T|C)*P(C)

From the initial derivation above, (A), the probability that the coin toss was tails given that you find yourself in a cubicle is .091. So, what is the probability that you find yourself in a cubicle?

P(C) = P(C|H)*P(H) + P(C|T)*P(T) (Law of total probability)
     = 1*.5 + .1*.5 (If the toss was heads, you are definitely in a cube; if tails, there is a 1 in 10 chance you will be)
     = .55

Rolling this back, we get:

P(C and T) = .55*.091 = .05005 (4)
P(C and T and 7) = .05005*.1 = .005005 (3)
P(C and 7|T) = .005005/.5 = .01001 (2)
P(T|C and 7) = .5*.01001/(.5*.01001 + .5*.01) (1)
             = .005005/(.005005 + .005)
             = .005005/.010005
             = .50
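Redoing the chain (1)-(4) with exact fractions (a sketch of my own) avoids the small rounding in .091 and .05005 and lands exactly on 1/2 under the poster's assumptions:

```python
from fractions import Fraction

p_h = p_t = Fraction(1, 2)
# Under the poster's assumptions both likelihoods come out equal:
# heads: in a cube for sure, then 1-in-100 chance it is cube 7.
p_c7_given_h = Fraction(1, 100)
# tails: 1-in-10 chance of being in a cube, then 1-in-10 it is cube 7.
p_c7_given_t = Fraction(1, 10) * Fraction(1, 10)

p_t_given_c7 = (p_t * p_c7_given_t) / (
    p_t * p_c7_given_t + p_h * p_c7_given_h
)
print(p_t_given_c7)  # 1/2
```

Since the two likelihoods are equal, the posterior simply returns the prior.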

So, what this says is: if you find yourself in cubicle 7, there is a 50% probability that tails was tossed. So now, all of a sudden, if Doom Early and Doom Late are equally probable events, then there is a 50% chance that we will die early. Which intuitively also makes sense. Now, suppose you are a total and complete pessimist, and the Doom Early scenario has an actual probability of 5%, as in the example. Then the probabilities become (replacing P(H) with P(DL) and P(T) with P(DE), where DL = Doom Late and DE = Doom Early; also let Ex be the event that you exist, i.e., you are in a cube):

P(Ex) = P(DE)*P(Ex|DE) + P(DL)*P(Ex|DL) = .05*.1 + .95*1 = .955

P(DE|Ex) = P(Ex|DE)*P(DE)/( P(Ex|DE)*P(DE) + P(Ex|DL)*P(DL) ) = .05*.1/.955 = .00524

P(Ex and DE) = P(DE|Ex)*P(Ex) = .00524*.955 = .005

P(Ex and DE and 7) = P(Ex and DE)*P(7|Ex and DE) = .005*.1 = .0005

P(Ex and 7|DE) = P(Ex and DE and 7)/P(DE) = .0005/.05 = .01

P(DE|Ex and 7) = P(DE)*P(Ex and 7|DE)/( P(DE)*P(Ex and 7|DE) + P(DL) *P(Ex and 7|DL) ) = .05*.01/(.05*.01 + .95*.01) = .0005/(.0005 + .0095) = .0005/.01 = .05
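The same check with the 5%/95% priors used above (again a sketch of my own, compressing the poster's intermediate steps into the two combined likelihoods):

```python
# Priors from the example.
p_de, p_dl = 0.05, 0.95

# Combined likelihood of existing AND being in cube 7:
p_ex7_given_de = 0.1 * 0.1   # Doom Early: exist with prob .1, then 1-in-10 it's cube 7
p_ex7_given_dl = 1.0 * 0.01  # Doom Late: exist for sure, then 1-in-100 it's cube 7

p_de_given_ex7 = (p_de * p_ex7_given_de) / (
    p_de * p_ex7_given_de + p_dl * p_ex7_given_dl
)
print(round(p_de_given_ex7, 3))  # 0.05
```

The posterior equals the 5% prior, which is the poster's conclusion.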

Which, again, brings us back to the original probability of Doom Early. Again, this makes sense. The probability that the world is going to end is based on one thing: that prior probability, P(DE). The only way to say the Doom Early scenario is more likely is to give it a higher probability than Doom Late. I think anyone would agree that this would be unreasonable. The argument posted here is seriously flawed and lacking. To state that the odds are 50% that all 100 cubes are filled is a completely invalid statement, and it messes up the entire argument.

-- Ted Dawson (Dawsothe@aol.com), March 01, 2000.


Thanks, Ted. Interesting point of view. The text in the original post is from the link given at top, not mine. From that link you can get to links to the large philosophical/mathematical literature on this. You may wish to submit your analysis for publication in one of the journals cited there.

-- scott (hma5_6@hotmail.com), March 01, 2000.
