You are (Most Likely) Not Living in a Simulation
The idea that we may be living in a simulation is familiar to science fiction readers. Some fine films and many novels explore the theme. But though the idea may be old, one surprising new twist has arisen in recent years: many people now claim that we are likely living in a computer simulation. Such people argue, as Elon Musk reportedly said at a recent conference, that “There’s a billion-to-one chance we’re living in base reality.”
Why this suspicion? It may be in part a consequence of the first commercial virtual reality platforms, which have let us take the first crude steps towards simulating a world. But even if technological advancement helps us imagine how our world could be a simulation, do we have any reason to believe that it is one? There is a new argument for this view, perhaps best given voice by the philosopher Nick Bostrom. It’s a relatively simple argument, and Bostrom summarizes it in the opening paragraph of his 2003 paper:
Many works of science fiction as well as some forecasts by serious technologists and futurologists predict that enormous amounts of computing power will be available in the future. Let us suppose for a moment that these predictions are correct. One thing that later generations might do with their super-powerful computers is run detailed simulations of their forebears or of people like their forebears. Because their computers would be so powerful, they could run a great many such simulations. Suppose that these simulated people are conscious (as they would be if the simulations were sufficiently fine-grained and if a certain quite widely accepted position in the philosophy of mind is correct). Then it could be the case that the vast majority of minds like ours do not belong to the original race but rather to people simulated by the advanced descendants of an original race. It is then possible to argue that if this were the case, we would be rational to think that we are likely to be among the simulated minds rather than among the original biological ones. (2003: 243)
Is this argument sound? And should we care?
The most important thing to note about this argument (and also something we find in many fictions about simulation) is that it assumes that the world is a simple thing. A full simulation of our world would need to contain a complete description of each object in it. What if a complete description of a real, physical stone—a description that captured all that can be said about the stone, all the particles in it, their positions, their motions, how each is currently interacting with the weak gravitational field of the moon, and so on and so on—required at least as much storage space, and at least as much mass, as the stone itself? Then we could say that the stone is information dense. If the Earth and all the things on it are information dense, then simulating the Earth would require at least as much matter as the Earth itself contains (dedicated, presumably, to monstrous hard drives and processors).
Bostrom’s argument assumes that sticks and stones and stars and chairs and humans and all other objects are not information dense. We already know that assumption is implausible. Even something as simple as a gas in a jar forces us into statistical generalizations about its behavior, because it is practically impossible to describe every particle knocking around in there.
If the world is information dense—and all the evidence indicates that it is—then not only can we not simulate our world with great accuracy, but any inaccurate yet respectable simulation is going to be immensely costly. It’s easy to say future generations will run millions of simulations when we imagine such simulations are as cheap to store and run as the apps on my iPhone. But if these simulations require tremendous space, time, and energy, then they will be very rare, or nonexistent. And so Bostrom’s claim that there will be millions of simulated Earths and trillions of simulated people is false.
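The cost argument above is simple multiplication, and a back-of-envelope sketch makes the scale vivid. The simulation count below is an illustrative assumption standing in for Bostrom’s “a great many” simulations; the premise that one faithful simulation of an information-dense Earth requires at least one Earth’s worth of matter is the essay’s, not an established result.

```python
# Back-of-envelope arithmetic under the essay's information-density claim.
EARTH_MASS_KG = 5.97e24      # mass of the Earth, in kilograms
N_SIMULATIONS = 1_000_000    # "millions of simulations" (illustrative assumption)

# If simulating an information-dense Earth takes at least one Earth's
# worth of matter as hardware, N simulations take N Earths of matter.
hardware_needed_kg = N_SIMULATIONS * EARTH_MASS_KG

print(f"Matter required for {N_SIMULATIONS:,} simulations: "
      f"{hardware_needed_kg:.2e} kg "
      f"(that is {N_SIMULATIONS:,} dedicated Earths of hardware)")
```

However cheap computation becomes, matter is not free: on this premise the hardware bill for a million simulations is a million planets.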
One might reply that even if the real world is information dense, the simulations need not capture all that information. But there are two problems with this escape, each of which is fatal to Bostrom’s argument and any similar argument.
First, we wouldn’t be ourselves. Bostrom’s argument is simple statistics. He supposes that there will be trillions of equally conscious, equally sentient “human” minds, and that most of these will be simulations, so any randomly chosen mind is more likely to be a simulation than a real person. Put differently: there is a large population P of one kind of thing (equally conscious, equally sentient “human” minds), and most of them have property S (they are simulated), and so any person chosen at random from that population (such as you) most likely has property S. But if we claim the simulations throw away a lot of information, then we now have two populations: one population of fully conscious and sentient beings (the real people), and another (supposedly larger) population of partially conscious, partially sentient beings. The statistical argument no longer applies.
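The base-rate reasoning here, and the way the objection breaks it, can be put in numbers. Both population sizes below are illustrative assumptions, chosen only to make the ratios obvious; Bostrom gives no specific figures.

```python
# Bostrom's base-rate reasoning, then the two-populations objection, in toy numbers.
n_real = 1e10   # real, fully conscious minds (illustrative assumption)
n_sim = 1e13    # simulated minds (illustrative assumption)

# One population of equally conscious minds: a randomly chosen mind
# is almost certainly simulated.
p_simulated = n_sim / (n_sim + n_real)

# Lossy simulations split the population in two: fully conscious minds
# (all real) versus partially conscious minds (all simulated). A mind
# that knows it is fully conscious samples only from the first group.
n_full = n_real               # the fully conscious population
n_full_simulated = 0          # none of its members are simulated
p_sim_given_full = n_full_simulated / n_full

print(f"One population:  P(simulated) = {p_simulated:.4f}")
print(f"Two populations: P(simulated | fully conscious) = {p_sim_given_full}")
```

The arithmetic makes the structure of the objection plain: the impressive odds come entirely from lumping every mind into a single population, and splitting that population dissolves them.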
This means that if you do not think you are an incomplete copy of a whole person, that’s very likely because you are not. Remember, Bostrom’s argument is that future humans of Earth are creating exact simulations of fully conscious and sentient human minds. If the simulation falls short of that, the simulated people are likely to notice, not least because all the other things in their simulation—Shakespeare’s poetry, Beethoven’s music, Fellini’s La Dolce Vita—expect them to understand and to share the lives and experiences of fully conscious and sentient human beings. If wine tastes like water to you, then you ought to wonder why all those people in novels keep spending so much praise and money on it.
The second problem is not people, but the world. And I’m pleased to note that science fiction writers led the way here, having sussed out this problem long ago. If we are living in a partial simulation of the world, we can predict that we will encounter errors in it. Thus, the protagonist of The Thirteenth Floor finds the edge of his world, enlightened humans in The Matrix recognize glitches in the simulation, and the travelers in Joe Haldeman’s Forever Free hit the bounds of their simulation and crash it.
One might reply: what if we program the simulation to simulate only the things that minds are encountering? The designers can turn off the simulation of the shoes in your closet when you are not looking in the closet. But how could such a simulation maintain integrity? A number of logical results (the undecidability of the halting problem, for example) show that many computer programs are unpredictable, and that, as a general principle, more complex programs are more likely to be unpredictable. Human simulations will be very complex programs. But then, how will the whole simulation know when you are going to look in your closet? How will it know when you set up mirrors to peek in there? Or put a camera in there? And so on. One can invent answers for all those contingencies, but the answers are going to require a godlike AI that can predict everything that everyone and every device can do in the world. (The same can be said for any claim that when the simulation fails, the affected humans can have their memories revised in some kind of seamless way.) And now we have left Bostrom’s argument far behind, and we must consider the probability (and cost) of godlike AIs being invented and then put to work to maintain the lie of a partial simulation. Such a thing will not be done millions of times, if even once.
So Bostrom’s argument fails. I think we should be glad. If our environment is made specifically to appear a certain way to us, then the “world” is a stage set, not a world. The philosopher David Chalmers pointed out to me that this objection would apply to many past philosophical theories that claimed the world is the thought of a god. Bishop Berkeley, for example, argued that everything is made out of thoughts. But what happens when you close your closet door, and you no longer perceive or think about your shoes? Do they cease to exist? No, Berkeley argued, because God continues to think about your shoes. If we deny that a simulation is a real world because it is a stage set, then we should deny that Berkeley’s world is real.
To me, that sounds right. Take the red pill. So much the worse for Berkeley, and so much the worse for the claim that we are living in a simulation.
But good news! You almost certainly aren’t.
Bostrom, Nick (2003) “Are you living in a computer simulation?” Philosophical Quarterly, Vol. 53, No. 211, pp. 243–255.
McCormick, Rich (2016) “Odds are we’re living in a simulation, says Elon Musk.” The Verge, https://www.theverge.com/2016/6/2/11837874/elon-musk-says-odds-living-in-simulation.
Craig DeLancey is a writer and philosopher. He has published short stories in magazines like Analog, Lightspeed, Cosmos, Shimmer, and Nature Physics. His novels include the Predator Space Chronicles and Gods of Earth. Born in Pittsburgh, PA, he now lives in upstate New York and, in addition to writing, teaches philosophy at Oswego State, part of the State University of New York (SUNY).