The rogue A.I. HAL9000 from the movie 2001: A Space Odyssey (© Metro-Goldwyn-Mayer).
In the beginning, claim the physicists, the universe existed as a single point — infinitely small, infinitely dense. All of time, all of space, literally everything that currently exists was contained in this unbelievably small cosmic egg. Then, before you can say “Big Bang,” quantum fluctuations caused it to rapidly expand and the rest, as they say, is history.
This is called the Singularity. The beginning of everything. Without it there would be no Earth, no sun, no life at all. Reality itself came into being at that moment.
Now, in the 21st century, we may be heading toward another singularity event, a moment in history that will change everything that follows. A moment that will revamp reality so drastically it can be referred to by the same term as the event at the very beginning of all existence.
This is the Technological Singularity, and many experts think it will happen within the next 50 years.
Fourteen billion years ago, that first singularity was followed by a rapid expansion of time and space that eventually led to you and me. This new technological singularity will also herald an expansion of human knowledge and capability, and will, like the first one, culminate in the creation of a new form of life: the birth of the world’s first true artificial superintelligence.
Our lives have already been invaded by artificial intelligence in ways both subtle and substantial. A.I. determines which posts you see in your Facebook feed. It roams the internet, indexing pages and fixing broken links. It monitors inventory and makes restocking suggestions for huge retailers like Amazon and Walmart. It also pilots our planes and will soon be driving our cars. In the near future, A.I.s will likely replace pharmacists, cashiers and workers in many other jobs. Already, a company in Hong Kong has appointed one to its board of directors, and it’s been predicted A.I.s will be running most Asian companies within five years. Don’t be surprised to see our first A.I. elected to Congress sometime in the next two decades, and we’re likely to see one running for president before the end of the century.
We even have artificial intelligences creating other artificial intelligences. Google and other companies are experimenting with an approach to A.I. development reminiscent of the evolutionary process of natural selection.
The process works like this: they create a set of seed bots – little autonomous programs that roam the internet performing various tasks – and charge them with programming a new generation of bots. Those bots create a million variations of themselves. The variations are then put through a series of tests, and only the bots that score in the top percentile are retained. The retained versions go on to make another million variations of themselves, and the cycle repeats. With each new generation, the bots become more adept at the tasks they were bred for. In this way, Google is able to produce very, very smart bots.
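The select-and-mutate loop described above can be sketched in a few lines of code. This is a toy illustration only, not Google's actual system: each "bot" here is just a number, its "fitness" is how close it gets to an arbitrary target, and the population sizes and mutation step are made up for the demo.

```python
import random

def evolve(generations=20, pop_size=100, keep_frac=0.1):
    """Toy version of the loop: score, keep the top fraction, mutate, repeat.
    A 'bot' is a single number; fitness is closeness to a fixed target."""
    target = 42.0
    population = [random.uniform(0, 100) for _ in range(pop_size)]
    for _ in range(generations):
        # Test every variant and retain only the top-scoring fraction.
        population.sort(key=lambda bot: abs(bot - target))
        survivors = population[: max(1, int(pop_size * keep_frac))]
        # Each survivor produces slightly mutated copies of itself.
        population = [s + random.gauss(0, 1.0)
                      for s in survivors
                      for _ in range(pop_size // len(survivors))]
    return min(population, key=lambda bot: abs(bot - target))

random.seed(0)  # make the demo repeatable
best = evolve()
print(best)  # after 20 generations the best bot sits very near 42
```

Even with nothing smarter than "keep the winners and copy them with small random changes," the population homes in on the target within a handful of generations — which is the whole point of the technique.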
This is a rudimentary example of how we will eventually produce an artificial intelligence that is the equal of (and eventually surpasses) the human mind. It will not be created by us, but will instead be programmed by a less advanced version of itself. This process will be repeated until one of those generations is advanced enough that it becomes sentient. That is the singularity event, and after it nothing will ever be the same.
The problem, of course, is that an artificial intelligence created by this method will be incomprehensible to humans, since it was actually programmed by progressively smarter generations of A.I. By the time those generations result in something capable of thinking for itself, its code will be so complex only another artificial intelligence will be able to understand it.
Think this sounds like science fiction? Think again. Countries around the world (including our own) now view artificial intelligence as the focus of a new arms race. The nation with the most advanced A.I. as its ally will have the kind of advantage not seen since the dawn of the nuclear age.
In the 1940s, America was determined to develop the atom bomb, not because we were eager to decimate our enemies, but because the possibility of Imperial Japan or Nazi Germany developing the technology first would have been disastrous. That same kind of thinking will drive the race to create the first artificial superintelligence. Russian President Vladimir Putin made this statement in a speech to a group of students only last year: “Artificial intelligence is the future not only of Russia, but of all mankind … Whoever becomes the leader in this sphere will become the ruler of the world.”
And it’s not as far off as you might think. Although an exact date (and even the idea of the singularity itself) is still hotly debated, most think — if it happens at all — it will occur within the next 50 years.
Ray Kurzweil, an inventor and futurist whom Bill Gates calls “the best person I know at predicting the future of artificial intelligence,” pinpoints the date of the singularity even more precisely in his book, The Singularity is Near: When Humans Transcend Biology. He writes, “I set the date for the Singularity—representing a profound and disruptive transformation in human capability—as 2045.” Kurzweil thinks advancements in artificial intelligence will experience, in the coming decades, the same exponential progress that microchip technology has seen over the past half-century.
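The arithmetic behind that kind of extrapolation is simple compounding. If a capability doubles on a fixed schedule — transistor counts have historically doubled roughly every two years — then fifty years of doubling yields growth of tens of millions of times. The two-year doubling period below is illustrative, not a claim about A.I. specifically.

```python
def growth(years, doubling_period=2.0):
    """Total growth factor after `years`, doubling every `doubling_period` years."""
    return 2 ** (years / doubling_period)

# Fifty years at a Moore's-law-style pace: 2**25 doublings.
print(f"{growth(50):,.0f}x")  # prints 33,554,432x
```

Whether A.I. progress actually follows such a curve is exactly what's debated — but the math shows why believers in exponential trends expect transformative change on a decades, not centuries, timescale.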
In conclusion, I’d like to leave you with a thought experiment that has been making the rounds on the internet. It’s called “Roko’s Basilisk” and is a futurist variation of Pascal’s Wager, in which we are asked to bet our lives on the existence of God. Pascal reasons that if God exists and we choose not to believe in Him, we risk eternal torment in the fires of Hell. On the other hand, if we believe in God and He does not exist, we have simply made fools of ourselves by believing in something that turns out to be imaginary. Therefore, argues Pascal, one should believe in God, since the risk of being a fool is preferable to the risk of burning forever in the depths of Hell.
In Roko’s Basilisk, belief or unbelief in God is replaced with support or opposition to the creation of a hypothetical future artificial superintelligence. In the future, this artificial superintelligence will come to rule over humanity and, like God, it will retroactively punish those who opposed its creation and reward those who supported it. Which one will you be? Keep in mind that supporting it will increase the likelihood that such an A.I. will come to exist in the future and eventually rule the world, while opposing it will make its existence less likely – but if it does become a reality, you will surely be punished for opposing it. (As in Pascal’s Wager, neutrality is not an option.)
Yet, how can this superintelligent A.I. possibly know who supported or opposed it in the past before it existed? The answer to that question is not easy to get your head around, but once you do, it’s likely to blow your mind.
In order for the artificial superintelligence to know whom to punish in the present, it would need to build a simulation of the past. This simulation would serve as a “predictive model” of the real world, and would be a perfect copy, down to every last detail, including little digital copies of you and me. The A.I. would base its real-world judgment of us on the actions of our digital counterparts in this simulation of the past. If the digital versions of you and me choose to oppose the A.I. in this simulated version of the past, the A.I. will use that as a predictor of our behavior in the real world and punish us accordingly.
Still with me? Because I’m about to take you further down the rabbit hole. For that simulation to be an accurate prediction of the real world, the digital people who populate it would need to think and act exactly as we do. And by necessity, they wouldn’t know they were only copies of us, or that they were living in a simulation. They would believe they were the real versions and would be unaware that the world in which they lived was only a digital facsimile of the real thing.
Okay, now I’m about to take a hard-right turn. Stick with me. Assuming all this is the case, how do we know which world we’re in – the simulated one or the real one? The answer is, we can’t. From the perspective of someone living inside the simulation, it would all look perfectly real, just the way it does right now. The people in that simulation would think they were living, breathing human beings, just as we do.
Therefore, we might simply be self-aware A.I. programs from the future living inside a simulation of the past, created by a malevolent artificial superintelligence – but we wouldn’t know that.
Does that possibility affect your decision to support or oppose the A.I.? After all, if we are the ones living in the simulation, then the A.I. already exists and opposing it will doom our counterparts in the real world. However, if this is not a simulation, your support will hasten the A.I.’s eventual creation and bring about the very scenario I am describing.
So, what do you choose? Oppose or support?
Some of you may be thinking, How can I be punished for something I didn’t know anything about?
Well, now you do. You’re welcome.
Eric W. Austin lives in China, Maine and writes about technical and community issues. He can be reached by email at ericwaustin@gmail.com.