For many, it’s known as the robot apocalypse. But for the more well-read followers of technology developments, it’s called The Singularity, a moment in time when artificial intelligence (A.I.) transcends human intelligence and, well, we don’t quite know what happens after that.
The idea in its current incarnation was presented back in 1983 by computer science professor and science fiction author Vernor Vinge in the pages of Omni, then the leading futurist periodical.
“We will soon create intelligences greater than our own. When this happens, human history will have reached a kind of singularity, an intellectual transition as impenetrable as the knotted space-time at the center of a black hole, and the world will pass far beyond our understanding,” wrote Vinge. “Given our progress in computer and biological sciences, that should be between 20 and 100 years from now.”
Then, about a decade later, Vinge updated his singularity forecast in an essay titled “The Coming Technological Singularity: How to Survive in the Post-Human Era,” for a NASA-sponsored symposium in Ohio.
“Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended,” wrote Vinge. “Let me [be] more specific: I'll be surprised if this event occurs before 2005 or after 2030.”
Perhaps the most widely shared singularity prediction comes from inventor Ray Kurzweil, who in his 2005 book The Singularity is Near set the year of the singularity’s arrival as 2045. Through the lens of 2023, it’s looking like Kurzweil’s usually accurate predictive abilities were, in this case, a skosh too conservative.
We are now just nine months away from Vinge’s 30-year prediction. That’s this December, for the temporally-challenged. Alternatively, if we use Vinge’s 2030 date as the outlying piton for the planet’s shift into the hands of A.I., we might have about seven more years of blissfully chaotic human cognition-driven progress.
The future of human history
The reason so many people refer to the singularity as the robot apocalypse is probably related to our human need to anthropomorphize things to understand them better. It’s why 1984’s The Terminator, the story of a humanoid robot following A.I. orders to wipe out humanity, has more cultural traction than, say, Colossus: The Forbin Project, a 1970 film about how two faceless, body-less A.I.s take over the planet.
Along the same lines, our need to give human form to our fears around A.I. often informs how we think about the ultimate intent of these emerging A.I.s. Instead of assuming cold machine indifference on the part of these nascent super-intelligences, as They, or It, begin to pursue the scientific mysteries of the universe, many of us imagine the worst. Not just killer robots, but “evil” killer robots.
In that respect, the most important moment in human history won’t be when A.I. achieves super-intelligence that surpasses humans and is perhaps accompanied by what appears to be self-awareness. No, the most important moment is when humanity stops considering A.I. super-intelligence a trope of science fiction and instead begins to take such a shift very seriously. I would argue that we have now arrived at that moment—the semantic singularity.
Fear is the mind-killer
When I got my hands on some of the more public-facing A.I. tools around this time last year, I was immediately struck by a sense of dread. I spent an entire week largely unable to sleep through the night as I considered all of the implications for humanity. The end of labor? The beginning of transhumans merging with technology? And, most terrifying of all, pondering the difference, if any, between human pattern recognition and interpretation and that same process executed, in its own way, by A.I.
However, despite my initial concerns, not once did I envision an “evil” A.I. driven by the familiar human motivations of avarice, spiritual belief, and ego. My fear, having considered the singularity ever since I first learned of the idea during an interview with Kurzweil many years ago, was this: What happens if we’re simply left behind in the exhaust plume of A.I.’s insouciance?
That perspective, whether optimistic or simply pragmatic, doesn’t seem to be shared by the masses who are finally beginning to take A.I. seriously. The first hints of how A.I. would be received by the mainstream public cropped up in places like the tabloid media, where generative A.I. tools were quickly, and often ridiculously, used to show us terrifying images of “what A.I. thinks humans will look like in a thousand years,” or “what the last selfie taken by a human might look like.”
This new form of contemporary A.I. angst has, predictably, seeped into our most popular form of communication: film and television. Blumhouse’s horror-meets-tech ride M3GAN recently pulled in over $172 million in ticket sales as audiences embraced the idea of an A.I.-powered super doll that, unlike Chucky before it, is motivated by plans of evil robot dominance rather than, well, Satan. For those who do believe that A.I. is the coming of the Antichrist, NBCUniversal’s Peacock streaming network will debut a new show in April called Mrs. Davis in which a nun (yes, really) does battle against an evil A.I.
Longtime science fiction fans understand that dressing technology, especially robots and A.I., in the cloak of evil is nothing new. The list of Hollywood tales touching upon this theme is long. But that was then. In 2023, the new stories have a bit more bite to them. While the steely gaze of Hugo Weaving as Agent Smith in 1999’s The Matrix was almost charmingly dastardly, today, the manipulative tactics of the android called Ava in 2014’s Ex Machina somehow have the ChatGPT-like whiff of verisimilitude.
Arthur C. Clarke’s well-worn quote that “any sufficiently advanced technology is indistinguishable from magic” is being played out in real-time as some A.I. end-users have begun to think of their generative A.I. prompts as spells they cast into the void to produce something they’ve never seen or heard before. A new kind of Magic. And thus, the human habit of assigning good and evil traits to things many of us still don’t understand is gradually being aimed at A.I.
If human history is any indication, we are still in the honeymoon phase of this disruption. But it won’t take long before we hear “burn the witches.” Our job now—before alternating waves of innovation euphoria and disruption hysteria fully take hold—is to work to understand these A.I. tools on the way to harnessing them.
That is how you stop a robot apocalypse, no sorcery or time travel needed. Just the human mind—the same mind that gave birth to A.I. in the first place.