Sunday, July 8, 2007

The Original Doomsday Argument

First, some versions of the original Doomsday Argument – in a later post, I'll add my original (and even more ominous) Revised Technological Doomsday Argument.

The simplest Doomsday Argument:
Physicist Richard Gott’s simple Copernican Principle proposes to compute prediction intervals for the future duration of any randomly observed phenomenon. Gott's method hinges on the "Copernican assumption" that there is nothing special about the particular time of your observation, so with 95% confidence it occurs in the middle 95% of the lifetime of the phenomenon. If the phenomenon is observed to have started A years ago, Gott infers that A represents between 1/40 (2.5%) and 39/40 (97.5%) of the total life. He therefore predicts that the remaining life will extend between A/39 and 39A years into the future. (Given Gott's assumptions, this is simple algebra: if A = (1/40)L where L is the total life, then the future life is L - A = 39A.)
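Gott’s interval is simple enough to compute mechanically. Here is a minimal sketch in Python (the function name and signature are mine, not Gott’s):

```python
def gott_interval(age, confidence=0.95):
    """Gott-style prediction interval for the remaining lifetime of a
    phenomenon observed to be `age` units old."""
    tail = (1 - confidence) / 2              # 0.025 in each tail for 95%
    # If the observation lands in the middle `confidence` fraction of the
    # total life L, then tail*L <= age <= (1 - tail)*L, which bounds the
    # remaining life L - age as follows:
    lower = age * tail / (1 - tail)          # observation near the end: A/39
    upper = age * (1 - tail) / tail          # observation near the start: 39A
    return lower, upper

# The USA example from the text: 231 years old in 2007
low, high = gott_interval(231)               # low ≈ 5.9, high = 9009
```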

Gott’s version of the Traditional Doomsday Argument is relatively simple to articulate. Suppose I treat my observation of some phenomenon X as a random sample from its lifetime – e.g. the USA is 231 years old, and I assume that this moment is a random point within its existence, with no further knowledge that would render the moment non-random with respect to its longevity. Then there is a 95% chance that the USA will last between 6 and 9009 more years.

The relevance for humanity’s future can now be stated: The naïve Doomsday argument would follow Gott, and reason that if one’s birth occurs at a random time in human history (and given that the origin of Homo sapiens was approximately 100,000 years ago), we should expect there’s a 95% chance that the human species will persist for between 2564 and 3.9 million more years. Or, a 97.5% chance we have at least 2564 years of future. That doesn’t sound so bad.

But the birth rate has increased dramatically over the last few centuries of human history, so from a sheer-numbers perspective this reasoning has a flaw. A better way to think about the argument is in terms of the total number of humans born. My (or your) rank-order among all human births is approximately 70 billion – approximately 70 billion humans have been born before me – so using the Copernican Principle, there’s a 95% chance that between 1.8 billion and 2.73 trillion more humans will be born. If birth rates continue to rise, or even just hold steady (currently estimated at approximately 130 million a year), that could mean doom relatively soon – a 95% chance that the last human will be born between 14 and 21,000 years from now!
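The same interval arithmetic, applied to birth rank rather than elapsed time, can be sketched as follows (the helper name and its constant-birth-rate assumption are mine):

```python
def doomsday_from_rank(rank, births_per_year, confidence=0.95):
    """Years until the last human birth, given your birth rank and an
    assumed constant birth rate, per the Copernican interval."""
    tail = (1 - confidence) / 2
    future_low  = rank * tail / (1 - tail)   # ≈ rank/39 more births
    future_high = rank * (1 - tail) / tail   # = 39 * rank more births
    return future_low / births_per_year, future_high / births_per_year

# Birth rank ~70 billion, ~130 million births a year (figures from the text)
years_low, years_high = doomsday_from_rank(70e9, 130e6)
# years_low ≈ 14, years_high = 21,000
```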

This approach is still too simple, however. Philosopher Nick Bostrom develops a more sophisticated version of the Doomsday Argument. He first formally clarifies the assumption Gott is making, which Bostrom calls the self-sampling assumption:

(SSA) "Observers should reason as if they were a random sample from the set of all observers in their reference class."

Bostrom’s version of the Doomsday Argument can be explicated using Bayes’ Theorem, which (writing H' for the negation of hypothesis H, and D for the data) can be stated as follows:
P(H|D) = [P(D|H)P(H)] / [P(D|H)P(H) + P(D|H')(1 - P(H))].

Here’s a way of explaining Bayes’ theorem – I call it the 10-and-1000-room hotel example. Suppose you arrive at a conference and you’re the first person to check in. The clerk controls 1010 rooms reserved for conference attendees – the first 10 rooms (numbered 1-10) in one hotel, and the first 1000 (numbered 1-1000) in a second hotel. Since you’re the first person to arrive, he decides to flip a (fair) coin to decide which hotel to put you in, and then has his computer randomly assign you a room in that hotel. He flips the coin, checks the computer, and hands you a room key – room #7 – but doesn’t mention which hotel you’re supposed to go to! What are the odds that you are in the first hotel?

At first blush, many people answer 50% - after all, he flipped a fair coin to decide which hotel to put you in, so isn’t it 50/50 as to which you got?

No, it isn’t – because you have an additional piece of information – you were randomly assigned room #7. Of course, there are two room #7s, one in each hotel – so how does that help? Well, it helps because (assuming the process is random) you are far more likely to be randomly assigned room #7 in the first hotel, rather than the second. In fact, it’s 100 times more likely!

How’s that? Think of it this way: suppose you repeated this whole process of checking in first at the conference 2000 times. You would then expect, given the coin flip, to stay in each hotel 1000 times. In the 1000-room hotel, you’d then expect to stay in each of its rooms exactly once – including room #7. But the first hotel has only 10 rooms – so if you check into that hotel 1000 times, you’d expect to stay in its room #7 fully 100 times. So if you have a key to room #7, it’s 100-to-1 odds that you’re in the smaller hotel. (The clerk probably expected you to realize that – or maybe he was just being forgetful!) With only the additional information that you're in room #7, the epistemic probability of being in the smaller hotel has risen from 50% to over 99%.

(Doing the math – prior P(H) = .5, with P(D|H) = .1 and P(D|H') = .001, so the posterior P(H|D) = .05/.0505 ≈ .99.)
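The hotel calculation can be written out directly; a minimal sketch (function and variable names are mine):

```python
def bayes_posterior(prior, like_h, like_not_h):
    """P(H|D) for a binary hypothesis via Bayes' theorem."""
    return like_h * prior / (like_h * prior + like_not_h * (1 - prior))

# H = "I'm in the 10-room hotel", D = "I was assigned room #7".
# P(D|H) = 1/10 (one room out of 10), P(D|not-H) = 1/1000.
posterior = bayes_posterior(prior=0.5, like_h=1/10, like_not_h=1/1000)
# posterior = 100/101 ≈ 0.9901 – just over 99%, matching the 100-to-1 odds
```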

So a Bayesian analysis enables one to reason consistently about apparently random events given some prior expectation of their frequency, and to update that expectation in a principled way as new information is learned.

Now to the Bayesian version of the Doomsday Argument – suppose you are optimistic about the future of humanity, so you think the probability of ‘Doom soon’ is very low – say 5%. That subjective antecedent probability is called the ‘prior’ in Bayesian reasoning. But the force of the Bayesian version of the Doomsday Argument is to see that even such optimistic prior assumptions about the future of humanity are overwhelmed by the realization of our place in human history, and it becomes rational to believe (the ‘posterior probability’) that given our evidence, there’s an overwhelming probability that humanity has not much longer to survive. Unless we have some special knowledge about the antecedent likelihood of a long human future, it looks as if we should expect Doom relatively soon.

Or as Bostrom puts it:
"Classic Doomsday - Let a person’s birth rank be her position in the sequence of all observers who will ever have existed.
h1: = “There will have been a total of 200 billion humans.”
h2: = “There will have been a total of 200 trillion humans.”
Pr(h1) = .05, Pr(h2) = .95 are the prior probabilities of h1 and h2 before taking your low birth rank into account: Your rosy prior probability of 5% of our species ending soon (h1) has mutated into a baleful posterior [probability] of 98%."
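The quoted figure can be checked with Bayes’ theorem; a minimal sketch (variable names mine), assuming, per the SSA, a uniform likelihood of each birth rank under each hypothesis:

```python
# Priors from Bostrom's Classic Doomsday example.  Under the SSA, the
# likelihood of observing any particular birth rank is 1/total, for any
# rank (like ours, ~70 billion) that fits under both hypotheses.
prior_h1, prior_h2 = 0.05, 0.95            # 200 billion vs. 200 trillion humans
like_h1, like_h2 = 1 / 200e9, 1 / 200e12

posterior_h1 = (like_h1 * prior_h1) / (like_h1 * prior_h1 + like_h2 * prior_h2)
# posterior_h1 ≈ 0.981 – the "baleful posterior" of roughly 98%
```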
Bostrom actually argues that a more accurate analysis would require a still stronger self-sampling assumption:

(SSSA) "Each observer-moment should reason as if it were randomly selected from the class of all observer-moments in its reference class."

The result of the SSSA would be to make the Classic Doomsday Argument even more ominous – because not only are birth rates increasing, but human lifespans are getting longer and longer, and so encompass more and more observer-moments. The result: it would push the rational expectation of Doomsday even closer than a mere counting of births does.

Given the SSSA, we might estimate the total number of years ever lived by human beings up to now at 2.75 trillion. That would give a 95% chance that the total number of years left for us all is between 70 billion and 107 trillion. With a current average worldwide longevity of 65 years, that would give us a 95% rational expectation of between 1.07 billion and 1.65 trillion more births, far lower than the simple SSA forecast. If the birth rate merely remains at 130 million a year, and longevity at an average of 65 years, that means there’s a 95% chance that human births come to an end somewhere between (only) 8 years and roughly 12,700 years from now! (And it’s even worse if birth rates continue to rise and/or average lifespans continue to increase.) Extinction would presumably follow within at most a century or so afterward – if not simultaneously. Perhaps some science fiction movies aren’t as implausible as they seem.
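Under the stated assumptions (2.75 trillion past human-years, 65-year average lifespans, 130 million births a year – all round figures from the paragraph above), the SSSA arithmetic can be sketched as:

```python
CONF_TAIL = 0.025                     # each tail of a 95% interval

past_years = 2.75e12                  # estimated human-years lived so far
lifespan   = 65                       # assumed average lifespan (years)
birth_rate = 130e6                    # assumed births per year

# Copernican bounds on future human-years (factor of 39 on each side)
future_years_low  = past_years * CONF_TAIL / (1 - CONF_TAIL)   # ≈ 70 billion
future_years_high = past_years * (1 - CONF_TAIL) / CONF_TAIL   # ≈ 107 trillion

# Convert human-years into births, then into calendar years of births
births_low  = future_years_low / lifespan     # ≈ 1.08 billion
births_high = future_years_high / lifespan    # ≈ 1.65 trillion
doom_low    = births_low / birth_rate         # ≈ 8 years
doom_high   = births_high / birth_rate        # ≈ 12,700 years at these figures
```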

Bostrom's articles on his website include valuable discussion of most of the objections raised to the Doomsday Arguments above, and counterarguments to those objections. Suffice it to say that some versions of the argument are still held to have force. In fact, I have a novel contribution to the debate, and in it I have some bad news - unless we act quickly, I think our likely future is even shorter than these estimates would imply. My Revised Technological Doomsday Argument will come soon… hopefully, soon enough!
