Saturday, August 18, 2007

Doomsday and Bostrom's Simulation Argument

Many futurists believe that we will soon enter a ‘posthuman’ phase of civilization (often called, somewhat misleadingly, the 'Singularity', although it has nothing to do with black hole physics). Becoming posthuman will have profound implications for us all. These futurists typically assume that Moore’s Law – the exponential growth of computing power – will continue for the medium-term future, and they also assume a philosophy of mind that makes strong AI possible: the belief that minds are simply incredibly complex algorithms, and that minds are ‘substrate-neutral’ or 'substrate-independent' – that is, those algorithms currently run on hardware called ‘brains’, but could in principle run on other hardware, such as a computer. If these beliefs are correct, computers will be capable of running human-level minds within the next 20 or 30 years, and will host minds that dwarf our ability to comprehend them a short while afterwards. These computer superminds will even be able to simulate reality; and a good enough simulation, given substrate neutrality, is indistinguishable from reality itself.
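Just to show where that ‘20 or 30 years’ figure comes from under those assumptions, here is a back-of-the-envelope sketch. Every number in it (the desktop figure, the brain-equivalent figure, the doubling period) is an illustrative assumption, not an established fact:

    import math

    # Back-of-the-envelope sketch; every figure below is an illustrative assumption.
    desktop_ops_per_sec = 1e11    # rough guess for a high-end PC of the mid-2000s
    brain_ops_per_sec = 1e16      # one common (and disputed) estimate of brain-equivalent compute
    doubling_time_years = 1.5     # a Moore's-Law-style doubling period

    doublings_needed = math.log2(brain_ops_per_sec / desktop_ops_per_sec)
    years_needed = doublings_needed * doubling_time_years
    print(f"{doublings_needed:.1f} doublings needed, i.e. roughly {years_needed:.0f} years")

With those contestable inputs, brain-scale computing arrives in roughly 25 years; change any input and the date moves accordingly, which is exactly why the futurists' timeline is only as good as their assumptions.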

So we will become posthuman at some point in the coming century. What does that mean? Well, let’s define a ‘posthuman’ civilization as one that is able to simulate an entire world in a way largely or entirely indistinguishable from reality. That is, for posthumans, the distinction between virtual reality and ‘real’ reality begins to disappear. The computing power required for this is truly stupendous, but it is finite, and if the futurist assumptions are correct, our civilization will have such power in well under one hundred years!
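For a sense of scale, Bostrom's paper offers some rough figures: on the order of 10^33 to 10^36 operations for one full ancestor-simulation, against perhaps 10^42 operations per second for a single planetary-mass computer. Treating those as order-of-magnitude guesses rather than measurements, the arithmetic looks like this:

    # Order-of-magnitude sketch using the rough figures from Bostrom's paper.
    planetary_computer_ops_per_sec = 1e42   # very rough capacity of a planetary-mass computer
    ancestor_sim_total_ops = 1e36           # high-end guess for one full ancestor-simulation

    seconds_per_sim = ancestor_sim_total_ops / planetary_computer_ops_per_sec
    print(f"One ancestor-simulation: ~{seconds_per_sim:.0e} seconds on such a machine")

On those guesses, a posthuman civilization could run an entire ancestor-simulation in about a millionth of a second of one machine's time: stupendous by our standards, trivial by theirs.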

The philosopher Nick Bostrom has advanced an argument that we have good reason to believe we actually inhabit such a simulation right now. His argument is a few years old, but a recent write-up in the New York Times has made it far better known, so I’ll give just a quick recap:

Bostrom argues that one of the following three propositions is almost certainly true:

(1) the human species is very likely to go extinct before reaching a “posthuman” stage;
(2) any posthuman civilization is extremely unlikely to run a significant number of simulations of its evolutionary (‘world’) history (or variations thereof – ‘ancestor-histories’); or
(3) we are almost certainly living in a computer simulation.

Bostrom indicates that, under certain plausible assumptions, option 3 is the most likely. Unless our posthuman descendants are much more ethically constrained than we are, the desire to run ‘ancestor-histories’, or counterfactual histories, will be great – just imagine historians wanting to see what would have happened if the Nazis had won WW2, or you simply wondering what would have happened if you’d married that high school sweetheart (or not, as the case may be). The temptation to run a large number of such simulations thus strikes me as overwhelming.
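Those ‘plausible assumptions’ can be made concrete with the little calculation at the heart of Bostrom's paper: if f_p is the fraction of human-level civilizations that reach the posthuman stage and N is the average number of ancestor-simulations such a civilization runs, then the fraction of human-like observers who are simulated is roughly f_p*N / (f_p*N + 1). Here is a sketch in Python, with purely illustrative values for f_p and N:

    # Sketch of the observer-counting step in Bostrom's argument.
    # The parameter values below are purely illustrative, not estimates.
    def fraction_simulated(f_p, n_sims):
        """Fraction of all human-like observers who live inside simulations."""
        return (f_p * n_sims) / (f_p * n_sims + 1)

    for f_p, n_sims in [(0.001, 1_000_000), (0.1, 1000), (1e-9, 100)]:
        print(f"f_p = {f_p}, simulations per posthuman civilization = {n_sims}: "
              f"fraction simulated = {fraction_simulated(f_p, n_sims):.6f}")

Unless almost no civilization reaches the posthuman stage (option 1) or almost none of them bothers to run simulations (option 2), the product f_p*N is large and nearly all observers are simulated, which is option 3.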

Further, if our universe is a simulation, that would explain several oddities nicely – for example, why relativity and quantum mechanics can’t be meshed: quantum mechanics reflects a lowest level of granularity in the simulation, while relativity makes things flow smoothly within it – though on this view both can’t be fundamentally ‘real’.

And it explains the strange applicability of the anthropic principle well – the observation that so much of what we see, particularly the otherwise inexplicable physical constants such as the fine structure constant and the ratio of the gravitational to the electromagnetic force, appears to be "rigged" to create conditions favorable to the origin of life in general and of humans in particular. On this view they are rigged because the universe is a simulation designed with us in mind!

But then, why were things that way in the original universe, the one our simulation is modeled on? Was it a simulation, too? There’s an infinite regress problem here, one that we simulated beings could never solve. After all, who knows what things are like at the fundamental level of reality… whatever that is.

But there’s a problem – or two. First, one of Bostrom’s other options is that all intelligent civilizations self-destruct before they become posthuman. That, unfortunately, is entirely compatible with our evidence. In particular, it would explain Fermi’s paradox – if ETIs exist, where are they? Perhaps we’re in a simulation that didn’t bother with aliens – but why not? It wouldn’t be much more expensive to add them in. So Occam’s razor suggests that any intelligent aliens that ever existed never made it to the posthuman stage.

Worse, suppose we are in a simulation. In that case, we are well on the way to becoming posthuman ourselves – able to run our own (let's say third-level) simulations within our own second-level simulation. But simulating even a single posthuman civilization looks to be extraordinarily expensive in computing terms. If so, then unless our upper-level simulators have nearly infinite resources, we should expect our simulation to be terminated when we are about to become posthuman – that is, when we are about to become simulators ourselves.
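A toy calculation makes the cost worry vivid. Suppose (these numbers are invented purely for illustration) that one ancestor-simulation costs about 10^36 operations, and that every simulated civilization, once posthuman, runs a thousand simulations of its own. Then the burden on the top-level host multiplies at every level of nesting:

    # Toy illustration of how nested simulations multiply the host's cost.
    # All parameters are invented for illustration.
    base_cost_ops = 1e36     # rough guess for one ancestor-simulation, as above
    sims_per_level = 1000    # simulations each simulated civilization runs, if allowed

    for depth in range(4):
        count_at_level = sims_per_level ** depth
        total_ops_at_level = count_at_level * base_cost_ops
        print(f"Nesting level {depth}: {count_at_level:.0e} simulations, "
              f"~{total_ops_at_level:.0e} ops for the top-level host")

Whatever the real numbers, each extra level multiplies the host's bill by the number of simulations run at that level, which is why thrifty simulators might prefer to pull the plug just before their simulations start simulating.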

So, either we are doomed never to reach the posthuman stage – in which case it’s Doomsday soon – or our simulation will be terminated as soon as we do – in which case, it’s Doomsday soon. Either way, the simulation argument brings us depressingly close to the conclusion that it’s Doomsday soon. I think the best counterargument is that philosophers like John Searle are right, and the futurists are wrong, about the philosophy of mind underlying strong AI and posthumanism. But if the futurists are right, I think it’s Doomsday soon. If not Doomsday by simulation shutdown, perhaps it will be by the rise of the machines – superhuman robotic intelligences that supersede and destroy us. Stay tuned for the next Doomsday installment….
