Rogue data centres may need to be destroyed: AI researcher
We may have to bomb rogue data centres to save humanity.
Leading artificial intelligence (AI) safety researcher Eliezer Yudkowsky is calling for urgent action to safeguard humanity's future. Yudkowsky is a research fellow at the Machine Intelligence Research Institute (MIRI) and is widely known for popularising the concept of friendly artificial intelligence.
In a recent article published in Time magazine, Yudkowsky states that humanity's future hangs in the balance and that immediate action must be taken to avert catastrophe.
Since data centres serve as the digital brains of AI systems, destroying a rogue AI would require destroying the data centres that power it.
Yudkowsky argues that if a too-powerful AI is built, it could result in the extinction of all biological life on Earth. "If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter," Yudkowsky said.
The researcher has also expressed concern about the lack of transparency from OpenAI, a company attempting to build safe and beneficial AI. Yudkowsky believes this opacity makes it difficult to gauge how close we are to AI self-awareness or disaster.
Yudkowsky works in the field of AI alignment, which seeks to steer AI systems towards their designers' intended goals and interests so that they do not cause harm, whether accidentally or intentionally.
OpenAI has said that it plans to solve the alignment problem by building an AI that can help develop alignment for other AIs.
Yudkowsky has criticised this approach, stating that "just hearing that this is the plan ought to be enough to get any sensible person to panic."
Although Yudkowsky has played a key role in accelerating research on artificial general intelligence (AGI), he is now calling for a cap on compute power and for GPU sales to be tracked.
He argues that rogue data centres must be destroyed to prevent such a catastrophe.
Yudkowsky's concerns over AI safety are shared by other researchers, CEOs and AI figures, including Elon Musk, Stuart Russell, and Yoshua Bengio.
In a recent open letter, they called for a six-month moratorium on giant AI experiments.
Yudkowsky believes the letter is a step in the right direction, but that it "is understating the seriousness of the situation and asking for too little to solve it."
In the Time article, Yudkowsky argues that properly studying AI safety could take decades, and that humanity cannot afford to get it wrong in the meantime.
"The thing about trying this with superhuman intelligence is that if you get that wrong on the first try, you do not get to learn from your mistakes."
He adds: "Humanity does not learn from the mistake and dust itself off and try again, as in other challenges we’ve overcome in our history, because we are all gone."
As a solution, Yudkowsky proposes an indefinite, worldwide moratorium on new large training runs.
He recommends shutting down all large GPU clusters (the computer farms where the most powerful AIs are refined) and putting a ceiling on how much computing power anyone is allowed to use in training an AI system.
Yudkowsky argues that no exceptions should be made for governments and militaries, and that immediate multinational agreements are needed to prevent prohibited activities from moving elsewhere.
To enforce these regulations, Yudkowsky proposes tracking all GPUs sold, and argues that nations should be less afraid of a shooting conflict between them than of the moratorium being violated. He suggests that rogue data centres should be destroyed by airstrike if necessary.
Yudkowsky's proposals may seem drastic, but he argues that humanity is not ready for the challenges ahead, and that preventative action on data centre operations across the globe is needed now.