How Worried Should We Be About An AI Apocalypse?

Shaghayegh Hanson


Back in 2014, when most of us were only worrying about extinction due to nuclear weapons, Stephen Hawking told a BBC interviewer that artificial intelligence “could spell the end of the human race.” At the time, I vaguely remember thinking, “Robots taking over the world? Nah, that’s a movie, not real life.” But then, what did I know—I was still breaking into a sweat when anyone mentioned the words “external hard drive” or “gigabyte.” I dismissively silenced the voice in my head that said, “Who are you to question the genius of Hawking?”

Then, of course, our vulnerability as a human race was underlined by a worldwide pandemic that did justice to any number of catastrophizing, pandemic-depicting movies. And just as we are recovering from that doomsday scenario, the Center for AI Safety (CAIS) releases a jaw-dropping statement: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” This is why people don’t listen to the news anymore.

Being technologically challenged, I needed a layperson’s explanation of how this potential extinction would happen and what it would look like. I found something relatively helpful in a Time article written by Eliezer Yudkowsky, who says “[t]o visualize a hostile superhuman AI,” we should imagine “an entire alien civilization, thinking at millions of times human speeds, initially confined to computers—in a world of creatures that are, from its perspective, very stupid and very slow.” This civilization will escape computer confinement by obtaining “DNA strings” and sending them to laboratories that will produce proteins that allow the AI to create artificial life forms. Yikes! Well, what can we do about that? The problem with a statement like the one CAIS has issued is its focus on something that could happen in the future without offering any present-day solutions for controlling it. It’s maddeningly intangible.

You have to wonder why the statement is signed by a number of people who create, and profit from, the very forms of AI that could perpetrate the extinction they warn about. What are we to make of this cognitive dissonance? Is it “sleight of hand,” as some critics believe? In their view, “mentioning the hypothetical existential risk of AI in the same breath as very tangible risks like pandemics and climate change, which are very fresh and visceral for the public, gives it more credibility,” while also serving as “misdirection, attracting public attention to one thing (future risks) so they don’t think of another (tangible current risks like bias, legal issues and consent).” For example, Margaret Mitchell, an ethics scientist, points out that AI is already causing harms, such as “women in Iran forced to wear clothes they don’t consent to based on surveillance, . . . people unfairly incarcerated based on shoddy face recognition, [and] the treatment of Uyghurs in China based on surveillance and computer vision techniques.”2

Motives aside, we need to put CAIS’ statement in perspective and give it a place in our consciousness that doesn’t just inspire fear of the unknown. For that reason, I agree with writer and futurist Daniel Jeffries that an AI apocalypse remains a “fantasy,” and that “[i]t’s a total and complete waste of time to try to solve imaginary problems of tomorrow. Solve today’s problems and tomorrow’s problems will be solved when we get there.”3 That requires us to hold people like the signatories to CAIS’ statement accountable for the harms their technology is already causing.

 
