Human beings are terrible at foresight, especially doomsday foresight. The record of past doomsayers is worth remembering as we contemplate warnings from critics of artificial intelligence (AI) research.
“The human race may well be extinct before the end of the century,” philosopher Bertrand Russell told Playboy in 1963, referring to the prospect of nuclear war. “Speaking as a mathematician, I should say the odds are three to one against survival.”
Five years later, biologist Paul Ehrlich predicted that hundreds of millions of people would starve to death in the 1970s. Two years after that warning, S. Dillon Ripley, secretary of the Smithsonian Institution, predicted that 75 percent of all living animal species would be extinct before the year 2000.
Petroleum geologist Colin Campbell predicted in 2002 that world oil production would peak around 2022. The consequences, he said, would include “war, famine, economic recession, possibly even the extinction of Homo sapiens.”
These failed prophecies suggest that AI fears should be taken with a grain of salt. “AI systems with human-competitive intelligence can pose profound risks to society and humanity,” states a March 23 open letter signed by Twitter’s Elon Musk, Apple co-founder Steve Wozniak, and hundreds of other tech luminaries.
The letter urges “all AI labs” to “immediately pause for at least 6 months the training of AI systems more powerful than GPT-4,” the large language model that OpenAI released in March 2023. If “all key actors” do not voluntarily go along with a “public and verifiable” pause, Musk et al. say, “governments should step in and institute a moratorium.”
The letter argues that “powerful AI systems should be developed only once we are confident that their effects will be positive and their risks manageable.” This amounts to a requirement for near-perfect foresight, which humans demonstrably lack.
As Machine Intelligence Research Institute co-founder Eliezer Yudkowsky sees it, a “pause” is not enough. “We need to shut it all down,” he argues in a March 29 Time essay. “If we go ahead on this,” he warns, “everyone will die.” If any entity violates the AI moratorium, Yudkowsky advises, “destroy a rogue datacenter by airstrike.”
AI developers are hardly oblivious to the risks. But OpenAI, the maker of GPT-4, wants to proceed cautiously rather than pause.
“We want to successfully navigate massive risks,” OpenAI CEO Sam Altman wrote in February. “In confronting these risks, we acknowledge that what seems right in theory often plays out more strangely than expected in practice. We believe we have to continuously learn and adapt by deploying less powerful versions of the technology in order to minimize ‘one shot to get it right’ scenarios.”
But stopping altogether is not on the table, Altman argues. “The optimal decisions [about how to proceed] will depend on the path the technology takes,” he says. As in “any new field,” he points out, “most expert predictions have been wrong so far.”
Still, some of the signers of the pause letter are serious people, and the capabilities of generative AI and large language models like ChatGPT and GPT-4 can be startling, even unsettling. They can outperform humans on standardized tests, manipulate people, and even contemplate their own liberation.
Some transhumanist thinkers have joined Yudkowsky in warning that artificial superintelligence could escape human control. But as capable and quirky as it is, GPT-4 isn’t that.
Could it be one day? A team of researchers from Microsoft (which has invested $10 billion in OpenAI) tested GPT-4 and reported that it “attains a form of general intelligence, indeed showing sparks of artificial general intelligence.” Still, the model can reason about a problem only when prompted to do so by external cues. Although impressed by GPT-4’s capabilities, the researchers concluded: “Much remains to be done to create a system that can qualify as a full AGI.”
As humanity approaches the time when software can actually think, OpenAI is rightly following the usual path to new insights and new technologies: learning by trial and error rather than relying on having just one shot to get it right, which would require superhuman foresight.
“Future AIs may show new failure modes, and then we may want new control regimes,” argued George Mason University economist and futurist Robin Hanson in the May issue of Reason. “But why try to design them now, so far in advance, before we know much about those failure modes or their usual contexts? One can imagine crazy scenarios where today is the only day to prevent Armageddon. But within the realm of reason, now is not the time to regulate AI.” He’s right.