What does the future of humanity look like? Will we boldly go where no one has gone before? Or will we end up in an arid dystopian hellscape, fighting in the dust over a drop of water?
Maybe, though, the future won’t be human at all. No, we’re not talking about a doomed war against our evolutionary cousins or the species’ eventual split into Eloi and Morlocks. What if humans lost their place at the top of the world order not because it was taken from us, but because we gave it up willingly – to a world government, a universal moral code, or even a superintelligent AI?
This is the vision known as the “Singleton Hypothesis”. It comes from the futurist philosopher Nick Bostrom, and it’s the idea that humanity – or in fact any intelligent life on Earth – will eventually live as a “singleton”: a world ruled by a single decision-making entity.
“It is an open question whether the singleton hypothesis is true,” writes Bostrom. “My own opinion is that it is more likely true than not.”
In fact, Bostrom believes, it’s simply the final step on a ladder we’re already climbing. If you look at where humanity started and compare it to where we are now, he says, the Singleton Hypothesis starts to look all but inevitable.
“Historically, we have seen an overarching trend towards the emergence of higher levels of social organization, from hunter-gatherer bands, to chiefdoms, city-states, nation states, and now multinational organizations, regional alliances, various international governance structures, and other aspects of globalization,” he explains. “Extrapolation of this trend points to the creation of a singleton.”
We’ve all seen The Matrix, so we’re probably all thinking the same thing right now.
Yes, it’s true that the Singleton Hypothesis might not be a utopian vision – there are plenty of ways it could go wrong. A totalitarian singleton, for instance, would give us a world with “absolutely no freedom, no privacy, no hope of escaping, no agency to control our lives at all,” warned Tucker Davey, a writer at the Future of Life Institute in Massachusetts, which focuses on existential risk research.
“In totalitarian regimes of the past, [there was] so much paranoia and psychological suffering because you just have no idea if you're going to get killed for saying the wrong thing,” he told the BBC. “And now imagine that there's not even a question, every single thing you say is being reported and being analyzed.”
But Bostrom doesn’t think his vision needs to be quite that nightmarish. Plenty of the ways the Singleton Hypothesis could come true sound quite nice: maybe, given enough time and resources, everybody on the planet would independently adopt the same moral code – and that code would count as a singleton. Or maybe the world unites under a global democratic republic, or a “friendly superintelligent machine,” Bostrom suggests – “assuming it was powerful enough that no other entity could threaten its existence or thwart its plans,” of course.
In fact, he suggests, the Singleton Hypothesis might be the only way to avoid a dystopian future. What would be the point of costly and dangerous arms races or devastating nuclear wars, he points out, if the world were united under a single entity? And how better to avoid an unequal and wasteful distribution of resources, or the exponential growth of the population, than with an all-knowing, benevolent supercomputer?
“Broad support for the creation of a singleton could gradually develop if a singleton is indeed needed to solve [problems like these] and if the salience of these problems increases over time,” Bostrom writes. “A catastrophic event that highlighted the dangers of failure to solve global coordination problems, such as a war fought with weapons of mass destruction, could accelerate such a development.”
So just how likely is this scenario? According to Bostrom, it depends on what we decide to do next.
“Some anticipated technologies might facilitate the creation of a singleton,” he writes, “such as improved surveillance (including reliable lie detection) and mind-control technologies, communication technologies, and artificial intelligence.”
That may sound firmly in the realm of science fiction at the moment, but it’s closer to reality than you might think. We’ve already grown fairly comfortable with the idea of being constantly surveilled, and as for mind control – well, we’re almost there.
“Over the last few years, we've seen the rise of filter bubbles and people getting shunted by various algorithms into believing various conspiracy theories, or even if they’re not conspiracy theories, into believing only parts of the truth,” Haydn Belfield, of the Centre for the Study of Existential Risk at the University of Cambridge, told the BBC.
“You can imagine things getting much worse, especially with deep fakes and things like that, until it's increasingly harder for us to, as a society, decide these are the facts of the matter, this is what we have to do about it, and then take collective action.”
But there’s another way humanity’s story could progress. If we instead move towards greater use of cryptography, Bostrom says, a singleton-ruled future becomes less likely. With information harder to access and control less centralized, a single all-powerful entity would struggle to get off the ground.
In that respect, those who fear the singleton might find cause for optimism. With growing awareness of the potential pitfalls of surveillance and manipulation technologies, we’re seeing political bodies call for bans on facial recognition and social media companies move to regulate misinformation on their platforms. If we follow this route, it’s possible a singleton – good or bad – will never exist.
“Singletons could be good, bad, or neutral,” Bostrom writes. “[A singleton could] solve certain fundamental coordination problems that may be unsolvable in a world that contains a large number of independent agencies at the top level.”
“But if a singleton goes bad, a whole civilization goes bad,” he warns. “All the eggs are in one basket.”