
“The Bridge of Sighs: Are We What Might Have Been?” by Robert E. Furey


Nick Bostrom's trilemma and Simulation Argument

1) "The fraction of human-level civilizations that reach a posthuman stage (that is, one capable of running high-fidelity ancestor simulations) is very close to zero", or

2) "The fraction of posthuman civilizations that are interested in running ancestor-simulations is very close to zero", or

3) "The fraction of all people with our kind of experiences that are living in a simulation is very close to one."


An Existential Conundrum

A short while ago someone suggested that I might be virtual, a computer-generated being inhabiting a computer-generated universe. This guy was from Oxford, so I figured I’d have to give the idea a little thought. And after a little thought I decided he was wrong. And that was after only a little thought.

Before we continue, it’s probably a good idea to make sure we’re all starting from the same point. What is the argument implying that you, I, and everyone else you have ever known are virtual beings in a virtual world? Without getting lost in the void between the Simulation Argument’s trilemma and the Simulation Hypothesis, which states that we are virtual, the take-home message is this: if civilizations eventually run what Nick Bostrom calls ancestor simulations, we are most likely living in one now. The reasoning is simple. Simulations are perfect models of the real world; their inhabitants will eventually discover how to run simulations of their own. Those simulations, in turn, will learn to build their own as well. Soon, in a Ponzi scheme of each individual simulation building many simulations, the sentient creatures living their lives inside a simulation will far outnumber those in physical reality. Probability presses its thumb on the virtual side of the balance, and so you are most likely virtual too.
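For the numerically inclined, that thumb on the balance can be sketched in a few lines of code. This is only a toy model, with a made-up branching factor and nesting depth rather than anything Bostrom specifies, but it shows how quickly the one physical world gets buried:

```python
# Toy model of the nesting argument: one physical reality, and every
# world (physical or simulated) spawns `branching` child simulations,
# repeated for `depth` generations of nesting. Both numbers are
# illustrative, not drawn from Bostrom's paper.

def simulated_fraction(branching: int, depth: int) -> float:
    """Fraction of all worlds that are simulations."""
    # Geometric series: 1 + b + b^2 + ... + b^depth worlds in total.
    total = sum(branching ** level for level in range(depth + 1))
    physical = 1  # only the level-0 world is physical
    return (total - physical) / total

# Even modest branching swamps the single physical world:
print(f"{simulated_fraction(branching=10, depth=3):.4f}")  # -> 0.9991
```

With ten simulations per world and only three levels of nesting, better than 99.9% of all worlds are simulated, which is the whole force of the argument.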

The reason I quickly concluded he was wrong came down to one mind-bogglingly simple fact, one that all of the logical square dancing could not do-si-do around. I took my epiphany with cheer—I don’t want to be virtual—and decided to share my relief with a few people who could appreciate the simplicity of my argument.

When I approached others with my insight the responses were both mixed and of a kind. For example, I spoke with a brain physiologist, an autodidact, and an astrophysicist. From the first, I heard that the complexity of chemical interactions in the brain could not be modeled, and from the second that consciousness existed in the random quantum effects skittering along dendrites. Since we know chemical reactions can be modeled, I dismissed the “it’s too hard” argument simply as a matter of scale. As for random quantum effects, I discounted that one given a lack of meaningful evidence. It’s a pretty guess, but little more than that. Later, when I asked the astrophysicist, the first response was a dismissive “pshaw” which led to a more elaborate, “Because! Don’t waste your time.”

None of my consultants even got to my own reason, which in my mind was by far the Occam’s razoriest of them all (with the possible exception of “Pshaw. Because!”). My issue with the Simulation Argument is a simple one: an infinite and expanding regress of Matryoshka virtualities would require an infinite amount of computer bandwidth and processing power. And infinity just cannot be. Voilà. I am real.

Nick Bostrom addressed the possibility of an infinite universe and its ramifications for his Simulation Argument. His retort to the “infinite universe” problem is limited to its effect on the ratio of real, or physical, creatures to simulated ones. In an infinite universe there would be an infinite number of both physical and virtual creatures, and so the numbers get whacked fairly quickly. No longer can we postulate a preponderance of virtual folks in the face of infinity. If the universe is infinite, then your probability of being virtual falls to fifty percent.

To me, the problem of infinity was not in the ratio demographics of its denizens, but in the limitless computing power an infinity of nested infinities would require. The infinite-universe problem affects the numbers and muddies the water for the Simulation Argument’s denizen ratios, but it exacerbates the bandwidth problem, almost to the point of eliminating the possibility of a simulation framework entirely. I will not consider the possibility of an infinite universe, since it is incompatible with our current understanding of the world and would by itself end the discussion.

Since writing an essay needs to be more than a few words hooked into one short phrase hinging on a single word, “infinity,” I needed to get a better grip on how computer power is used. After some time, and then after a little more time, I collected and quietly processed the accessible information available before leaning back in my brown leather office chair and muttering a long, drawn-out, “Well, fuck me.” Because guess what, I’m not so sure you and I aren’t virtual after all.

First of all, we need to look at the building blocks of what would be, when snapped together like a box full of jigsaw puzzle pieces, an ancestor simulation as Dr. Bostrom defines it. What we need to see are the makings of a world with some minimum level of complexity, such that the denizens found therein have things to do. And, to be sure, some level of autonomy and free will; otherwise, what would be the purpose of launching a “What if?” simulation in the first place? But listen, here is what we are doing.

Just to be clear, no one is developing ancestor simulations right now. At least no one has openly announced anything like that. What we are doing is working on a myriad of other projects that could be coupled one to the other into a larger model with the emergent properties needed to launch a meaningful ancestor simulation. Not all of the efforts are directed ones, some of the preparation is random.

As I pen these words, I am sitting in the Plaza de Colón at the foot of a statue honoring Christopher Columbus, shadowed by the ramparts of Castillo San Cristóbal. This is a busy place of heavy tourist traffic, each and every visitor equipped with a smartphone or digital camera uplinked to the cloud. A steady stream of photos and video flows away to wirelessly linked bins of visual and auditory information. The Plaza de Colón, and probably much of Old San Juan, has been recorded for sights and sounds from so many points and moments that most of San Juan could be reliably reproduced in virtual. What has not been directly recorded could be extrapolated without any meaningful alteration to the city’s gestalt.

Already, even if yet fractured, San Juan exists in shards around the internet. Currently, I’m background in a few tourists’ photographs and exist as a passing bit of information in someone else’s panoramic video sweep as I scribble into my field notebook. My conversations at the base of the statue have been folded in with the drone of background noise, as is every other person’s here, just waiting to be extracted from the general hum and bustle. For a hundred years, a thousand years, these data will rest scattered through cyberspace awaiting something with enough need or interest to pull together a dynamic image of early 21st century San Juan.

But these data from San Juan are random and raw. Other data of interest to us here are generated, nonrandomized and purposefully sought after. Simulations in their own right, even if they are but pieces of what might become a larger, more complex reflection of “reality” eventually with enough verisimilitude to become an ancestor simulation.

Out of MIT and a group of physicists led by Mark Vogelsberger bursts “Illustris,” a whole-universe simulation modeling a little over 1.2 million cubic megaparsecs of space across 99% of the universe’s history. Vogelsberger’s team included everything we think we know about the evolution of the cosmos, plugged it all into a supercomputer, and let it run.

What emerged was something that looked remarkably like our own skies in both visible and dark matter. The resolution on Illustris is such that relatively small structures, such as galaxies, can be observed in detail. The types of galaxies produced in the model parallel the observed galaxies in space around us. Illustris also matches the larger structures found in galaxy clusters and their distribution. Big Bang nucleosynthesis has left us with a distribution of primordial hydrogen and helium, and heavier elements produced in the hearts of dying giant stars; this too is reproduced in Vogelsberger’s modeled universe.

Vogelsberger ascribes the astonishing degree to which Illustris matches the Big Bang universe we live in to the data we have collected and to better algorithms for modeling the interactions. He also stated that running the simulation on the best desktop computer we have would require two millennia to complete. The run that produced the ultimately static model took several months on 8,000 processors. Even so, it was no small feat to run a program that tracked some 13.8 billion years of cosmic evolution over more than 1.2 million cubic megaparsecs in just a handful of months.
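Those quoted figures hang together, and the arithmetic is quick to check. Assuming perfect parallel scaling, which no real simulation code actually achieves, two millennia of single-processor work divided across 8,000 processors comes out to about three months:

```python
# Sanity check on the quoted Illustris figures: ~2,000 years on one
# desktop-class processor vs. several months on 8,000 processors.
# Assumes perfect parallel scaling, a generous simplification.

desktop_years = 2000
processors = 8000

wall_clock_years = desktop_years / processors
wall_clock_months = wall_clock_years * 12

print(f"{wall_clock_months:.0f} months")  # -> 3 months
```

Three months is squarely within “several,” so the two figures are at least consistent with each other.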

Even with the amazing accomplishment, the limits of computing power are exposed. Right? When you look up in the sky you see homogeneity and isotropy at work ensuring that wherever you point your telescopes you will see the same sorts of things arranged in the same sorts of ways. So in that way, the universe can trick us into thinking it’s more complex than it is just by repeating itself. We all know someone that does that, don’t we?

So even with repetition, the processors ran hot for months. Imagine if the scale encoded by Illustris included systems of individual planets with weather and terrain, ecosystems, populations of complicated organisms, or even individual creatures. How long would the 8,000 processors have had to run then? How many processors would it take to make all that happen in a few months? How many processors running embedded simulations of their own?

Building a simulation of the history of the universe is an astounding feat. But the granularity is much too coarse for simulating creatures like us. Arguably, the entire blink of human history could have been encompassed within the scope of Illustris’ run, but on that scale who could see it? Scale is a problem.

Of course, we can look at the world piece by piece. Meteorologists and climatologists have been modeling complicated systems for a very long time. Even something so deceptively simple-sounding as moving around bodies of air at different temperatures creates complicated models. The error in these models is often higher than Illustris’ results. Just try to plan for an outdoor wedding and see if it wouldn’t have been easier to locate an abundance of primordial hydrogen instead.

The complicated nature of global weather systems has not discouraged those interested in modeling from building better algorithms. And as if it weren’t difficult enough to predict rain, researchers out of Microsoft have run with an idea first voiced in the village of Madingley, UK. The Madingley Model, as it has been christened, is the first global “General Ecosystem Model,” designed to model all life on Earth. It is built around whole-ecosystem behavior, with variables for autotrophic and heterotrophic organisms, birth and death rates, dispersal and dispersion, metabolism, and of course reproduction.

Like Illustris, Madingley is able to discern things within its wheelhouse, from details such as the life spans of certain organisms up to herd responses to broad-scale environmental change. And as powerful as Madingley has proven itself to be in showing the way the world is, its clout reaches beyond this.

The Madingley Model is used as a “what if” platform, a way to see how things would be if conditions were different. For example, we could look at the Colorado River as if no one had ever decided to go ahead and build the Hoover Dam. How has the Colorado River valley been changed by that megaproject? Madingley can run alternative simulations to predict future outcomes and assist in decision making. We are getting somewhat closer to Bostrom’s model here.

How much trouble would there be in taking one of these constructed worlds built by the Madingley Model and adding it to an algorithm-driven orrery of similarly modeled worlds? With our Illustris modeled cosmos in hand, we could point to an appropriate sun among the billion trillion stars and integrate our orrery and world therein to spin under its own private sky. But in a world where populations are modeled, just who looks up? To construct all that virtual cosmik debris without observers, well, that’s a lot of wasted sky.

Introducing OpenWorm. Caenorhabditis elegans is a tiny, soil-dwelling nematode. A lot is known about this little roundworm, so much so, in fact, that it could be considered the first “white mouse” developed for computer modeling. C. elegans was the first metazoan to have its entire genome sequenced, and it remains the only organism to have its entire nervous system mapped. Not only are these tiny nematodes primitive and simple, but they also display a curious lack of variation in cell number: adult males always have 1031 cells. These aspects of the biology of C. elegans are just what the groups working on OpenWorm are exploiting.

One trick in building models is to reduce the variables to only those important to the functions being modeled. Variables that serve no purpose save confusing things are eliminated until only something useful remains. C. elegans provided us with a route to understanding whole animals and just so happens to be the perfect candidate for the first virtual creature, given its simple and well-mapped ontogeny and morphology. Even now, a virtual C. elegans crawls through its virtual petri dish, constrained by the known functions of its 1031 virtual somatic cells, including its fully mapped virtual nervous system.

The OpenWorm group is building a creature not from organic molecules but from algorithms. The European Future and Emerging Technologies (FET) group has been working on the somewhat more ambitious Human Brain Project. They aren’t there yet, but scaling up shouldn’t be an unreasonable avenue of pursuit. They have taken the first small steps toward a “whole brain emulation,” a thinking human-level simulacrum that could, in turn, be allowed to interact with others of its kind, collected on a Madingley Model planet set to spin around a star evolved in an Illustris-generated universe. Virtual beings striding like lords across a virtual landscape they inhabit and treat as their own.

It’s as if there were a seamless continuum from mind to universe. There isn’t, of course. The separation requires your wetware to run a unique simulation, one that approximates the world well enough that you can move through its data sources efficiently without constantly bonking your head on a cabinet door or misplacing your sandwich.

The universe is separated from us by a half-inch of bone. Through a few small apertures, nerves snake from data collectors on the outside to the central processor on the inside. In a very real way, we collect data about the world and, through a process not unlike Plato’s allegory of the cave, reconstruct a virtual environment in our minds close enough to the outside world at large to let us function without stepping off a precipice or breaking our noses on a wall.

What this means is that each and every one of us is running a real-time simulation, based on nearly simultaneous data collection and rendering, in the fashion most suitable for optimizing our passage through the world around us. And don’t forget, there are seven billion or so humans generating such simulations right this moment.

But there are more than just humans. How many organisms are on the planet? I won’t even pretend to toss out a meaningful number. Suffice it to say that there are a lot, an awful lot. Every one of them is running its own simulation, honed to the data that matter to that organism. This represents a great deal, a mind-bogglingly large amount, of computing power. But we knew all that. Remember how intrigued you were by the Wizard’s words? “Why, anybody can have a brain. That’s a very mediocre commodity. Every pusillanimous creature that crawls on the Earth or slinks through slimy seas has a brain.” And every brain, every single one, is hosting a simulation.

To reiterate, your brain is running an imperfect yet optimized dynamic world image that you are in turn aware of, based on data accumulated from the surroundings. We have already agreed to that. These data are limited by your biological senses, and in certain cases by machine-augmented senses. They are then used to build a real-time model of the outside world that you move around in. Your model is more or less correct, given the limitations, as evidenced by your ability to interact successfully with the world. But what happens when the inputs are off? In the case of fatigue, you miss a lot and so have holes in your simulation, sometimes important ones. Or perhaps you’re chasing a roadrunner and make the mistake of allowing him time to paint a detour tunnel on the side of a mountain. Even if you don’t chase roadrunners, you have certainly spared the occasional double take for a trompe-l’œil: that momentary disturbance you feel when the algorithms in your head are rebooting your private simulation.


We are still bothered by memory constraints

One place we might look to see how a simulation could contend with memory constraints is the bag of tricks of the computer gaming industry. Many computer games could easily be seen as fledgling full-world simulations. Many of the larger games are inhabited by thousands of players who come and go, seeing to the business of living in whatever simulation a given game projects. But it’s the NPCs that are of interest, because at some level they keep going about their business even when you are not there. You’re a guest; they live there. Whether or not they take a form you recognize may be a question for philosophers, but that they remain when you leave is incontrovertible. My argument would be that the world continues in a simplified form because it can, and the clever programmers have seen to it that, with no need to feed complexity to visitors, the simulation runs essentials only. Like hearing noises behind a closed door.

Events occurring on the other side of a door do not need to be rendered, or in some cases even decided, before a player looks inside. Once a player does look, the game’s reality is forced to make a decision and new information is included in the gamer’s simulation. The extra bandwidth is spent only when necessity demands it.
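Programmers would call this lazy evaluation: defer the decision, then cache the answer so the world stays consistent. A minimal sketch, using a hypothetical `World` class of my own invention rather than any real game engine:

```python
# Lazy world rendering: a room's contents are a deferred computation
# (a thunk) that only runs, and is then cached, the first time a
# player opens the door. Illustrative only; not a real engine's API.
import random

class World:
    def __init__(self):
        self._generators = {}  # room name -> thunk deciding its contents
        self._rendered = {}    # room name -> contents, once observed

    def add_room(self, name, generator):
        self._generators[name] = generator

    def open_door(self, name):
        if name not in self._rendered:                       # first look:
            self._rendered[name] = self._generators[name]()  # decide now
        return self._rendered[name]                          # stays fixed

world = World()
world.add_room("cellar", lambda: random.choice(["rats", "treasure", "nothing"]))

# Behind the closed door, nothing has been decided yet...
first_look = world.open_door("cellar")
# ...but once observed, the cellar's contents never change.
assert world.open_door("cellar") == first_look
```

Nothing behind the cellar door exists until someone opens it, and after that the answer is pinned down forever, exactly the economy described above.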

I am now standing in line for coffee at a kiosk in Detroit Metro Airport. There is a ledge running along one side of the concourse, under which line all the shops and cafes and food stands. There must be a windowed wall up there, because the sun is streaming in and lighting the far side of the walkway. When a rumbling sound grew louder above me and a long shadow streaked across the far wall, I knew there must be train tracks up there. My wetware-built simulation inserted a train into itself from a limited Platonic allegory of shadows on the wall. There was probably something up there. I’m fairly sure there was a train up there. I’m certain there was a train up there. There was a train.

My personal simulation had a train in it without any information beyond sound and shadow. All of the data for color, construction material, shape, and volume: all unneeded. The universe I built saved on some bandwidth. The secondary qualities of sound and shadow on the wall substituted for all the information needed to build in the “knowledge” of the primary qualities of mass and motion of a physical train. Clever, clever universe.


The Properties of Walls

Quantum theory tells us, among other things, that reality waits for an observer to be present before making a choice. Once someone is watching, the wave function collapses and the particle chooses which slit to pass through. A wall can delay choices unless or until they are needed. You could walk along the hallway of a five-star hotel, and every hotel room door you pass would have a story behind it. You don’t process that hidden information unless you open the doors. You don’t need to know, and you save resources by not bothering to find out.

Walls serve as a form for matter to exist in. Walls hold up the roof. Walls also block data flow from one area to the next, reducing rendering time and required computation. Walls reduce the strain on our individual simulations. Even if we added these data-flow characteristics to our definition of what a wall is, we would not be consciously aware of them. And information-flow characteristics would be included not just in objects, but also in space-time. Go sit in the Plaza de Colón in Old San Juan, or catch a connecting flight through Detroit, and you will remember more details of those two unique events than of all the 127 pleasant mornings in between.

These are pieces waiting to click together from disparate places in the noosphere (all the minds and interpersonal relationships, connected or stored), like the millions of data bits across the cloud coming together to rebuild San Juan. Of course, when San Juan rises from the data bin, it too will find a place in the simulation.

One Dan Vanderkam is currently using images from the mid-1800s to the early 1900s to rebuild a map of fin-de-siècle New York City. Perhaps this could evolve into just another template for an ancestor simulation, or maybe an upload vacation spot—which would be another topic entirely. Just remember, no Irish allowed.

Still trying to retain my own physicality, one consoling factor I clutched at was how dangerous it is to think of oneself as unique or special. Our position in a simulated universe is on the bottom rung. If we are simulated, then we are somewhat notable simply by virtue of being the latest. That would put us in a special place, not as unique as the original programmers to be sure, but special nevertheless.

Then, by the logic of the original argument, we are most probably a simulation and not the progenitor. If we are the end of a string of nested simulations, we occupy a somewhat less special place, one of a multitude of simulations in an end game. So if simulations are common, we are most likely simulated. Also, if simulations building simulations of their own are common, we should be building them. We aren’t. But once we launch our first real simulation, not only are we no longer the lowest rung in the simulation Ponzi scheme, we also know without a doubt that ancestor simulations exist.

There is a warning, an admonition, against even asking this question. Many have said that once simulated beings realize they are simulated, they lose their value. These simulations would be unceremoniously unplugged, rebooted, overwritten, or dealt some other apocalyptic outcome reserved for simulated beings that are too clever by half.

If the plug were pulled on us for “discovering” our condition as virtual beings, then any of the upline matryoshka simulations would be under the same threat. Given that they have not been shut off—because had their platforms been terminated, so would we have been—the same logic that predicts the majority of self-aware creatures are virtual also supports the idea that terminating simulations is rare.

Finally, a bit of indulgence if I may. However the universe is built, one thing is certain, we function based on the laws of the universe we exist within. We are an emergent portion of the universe that has awoken and finds its gaze turned inward. There is nothing that we do that the universe does not permit. We are the universe trying to understand itself.

We are the universe in microcosm, extrapolated from a reality writ large down to the unique simulations running in our individual wetware. As established above, these imperfect simulations run on the data provided and are constructed within parameters set by the laws of space and time. Remember that the argument is that there are more virtual people in the universe than physical ones. Those people have no idea, indeed no way of knowing, whether they are physical or not. The physical people might seem to occupy a privileged position, but once they have initiated their own simulations, they too have no way of knowing that they are physical. The result: there are more virtual people in the picture than physical ones. What is the weakness here?

The weakest part of the argument comes from the necessity of virtual people. Anything that can be dismissed with a “pshaw” and a “Because! Don’t waste your time” probably deserves another look. So look.

Above you will find the jig-sawed pieces of a super snappy virtual world scattered all over the surface of our technology’s shifting adaptive landscape. Since it’s all about computer simulations we can turn a myopic eye to the question and naively see only Moore’s Law and quantum computing and Machine Masters of Jeopardy. We forget that there are already nested simulations. I live inside my head, you live inside yours. We all live in our individual simulations but, as it turns out, not all of us are alone.

Barry Dainton, a philosopher at the University of Liverpool, has suggested modifying Bostrom’s trilemma by substituting “neural ancestor simulations” for Bostrom’s more general “ancestor simulations.” A neural ancestor simulation could be anything from a literal brain in a vat to “far-future humans with induced high-fidelity hallucinations that they are their own distant ancestors.” Every philosophical school of thought can get behind the idea that a sufficiently sophisticated neural ancestor simulation would be indistinguishable from non-simulated experience.

Let’s look to the world we know for an analogy. So here’s my stretch. Multiple Personality Disorder (MPD) is a thing. Estimates of its occurrence range from 1–3% of the global population, a population of some 7 billion people. That means there are between 70,000,000 and 210,000,000 isolated biological computers running MPD as part of their simulations. While psychologists define anyone with at least one alternate personality as having MPD, research has identified individuals with tens, hundreds, and even thousands of alternates. These alternates have their own memories, their own personalities, and are usually unaware of each other. So let’s do some back-of-the-envelope numbers.

At the low end, we would have 70,000,000 brains with two personalities each; at the high end, 210,000,000 brains with 2,000 personalities each. So globally there could be anywhere from 70,000,000 to 420,000,000,000 alternate personalities, counting those beyond each brain’s first persona. Since these alternates are parasitic on their host wetware, we could think of them as virtual. So while at one end we can say that an extremely small but non-zero percentage of people are virtual, at the other end estimates of up to 60 alternates for every individual on the planet fall within the parameters of the current understanding of MPD.
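Those bounds are wide, so it is worth laying the envelope arithmetic out explicitly, using only the figures above (1–3% prevalence in roughly 7 billion people, and from one alternate per affected brain up to roughly 2,000 personalities each):

```python
# Back-of-the-envelope bounds on alternate personalities worldwide,
# using only the essay's figures. These are illustrative bounds, not
# clinical estimates.
population = 7_000_000_000

low_brains = int(population * 0.01)   # 1% prevalence: 70 million brains
high_brains = int(population * 0.03)  # 3% prevalence: 210 million brains

# Low bound: each affected brain hosts 2 personalities, i.e. 1 alternate.
low_alternates = low_brains * 1
# High bound: each hosts ~2,000 personalities, i.e. ~2,000 alternates.
high_alternates = high_brains * 2000

print(f"{low_alternates:,} to {high_alternates:,} alternates")
print(f"up to {high_alternates // population} alternates per person on Earth")
```

The high bound works out to 420 billion alternates, or about 60 for every man, woman, and child alive, which is where the startling “most people are virtual” claim below comes from.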

So at least some of the people you can talk to are alternates. And by some estimates, most people in the world are alternates, in other words virtual, running on nested simulations. Let me repeat that: most people in the world are virtual. Most. Just like the Simulation Hypothesis predicts. Albeit in a slightly different, yet oddly analogous, way.

On a visit to the Piazza San Marco in Venice you might wander past the tower where Galileo first viewed the moons of Jupiter (thus complicating the simulation forevermore) down to the docks along the Grand Canal. To the left is the Ducal Palace, and behind that the prison. Connecting the rear of the palace to the prison is a short, enclosed bridge spanning a narrow canal: the Bridge of Sighs. Its windows afforded prisoners their very last view of the outside world before incarceration. They had only to sigh at the loss. If you are virtual, the world you thought you knew is forever gone.

To be clear, Bostrom is not the first to suggest we live in a virtual world, even if he is the first to call it such. Philosophy has plenty of room to discuss such a thing. Maybe even have a short conversation with Descartes’ Evil Demon for some insight. Or, if you maintain a theological bent, an all-powerful god could have created the world this morning with a full and complete history already in place, one that included both sauropod fossils and the lingering taste of your morning coffee.

So perhaps now you understand why I am not settled on any question concerning whether or not you and I are virtual. The remaining doubt is not a little bothersome to me. I wish I could see how I might have felt this moment if I had never considered this question at all. And now that I think about it, there just might be a way to find out.



Copyright © 2020 Robert E. Furey


“Jim Baen would have enjoyed this one a lot,” commented Baen Publisher Toni Weisskopf on this essay by Rob Furey. Dr. Rob Furey is a biologist by training whose work centered on the social aspects of spider behavior, with broad interests in astronomy, physics, geology, and forensics. Furey has won numerous awards for innovative teaching from both academic and business groups and has worked closely with the Dauphin County Coroner’s office as a sworn deputy. He is a professor in Integrative Sciences at Harrisburg University and serves as assistant provost there. Complementary activities include directing the environmental education center at Fort Belvoir, Virginia, serving as guide and science adviser to a Partridge Films crew in Equatorial West Africa, and writing science essays for Aeon Magazine and Baen.com. He is a graduate of the Clarion West science fiction workshop, and his fiction has appeared in many anthologies.