PARIS: Global leaders should be working to reduce “the risk of extinction” from artificial intelligence technology, a group of industry chiefs and experts warned on Tuesday, urging policymakers to treat the threat as on a par with the risks posed by pandemics and nuclear war.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” more than 350 signatories wrote in a letter published by the nonprofit Centre for AI Safety (CAIS).
The one-line statement was signed by dozens of specialists, including Sam Altman, whose firm OpenAI created the ChatGPT bot. Besides Altman, the signatories included the CEOs of AI firms DeepMind and Anthropic, as well as executives from Microsoft and Google.
Also among them were Geoffrey Hinton and Yoshua Bengio — two of the three so-called “godfathers of AI” who received the 2018 Turing Award for their work on deep learning — and professors from institutions ranging from Harvard to China’s Tsinghua University.
The latest statement gave no details of the potential threat posed by AI. The centre said the “succinct statement” was meant to open up a discussion on the dangers of the technology.
CAIS singled out Meta, the employer of the third “godfather of AI”, Yann LeCun, for not signing the letter.
The letter coincided with the US-EU Trade and Technology Council meeting in Sweden, where politicians were expected to discuss the regulation of AI.
In April, Elon Musk and a group of AI experts and industry executives had been the first to cite such potential risks to society, in an earlier open letter calling for a pause on advanced AI development.
Among the criticisms is that the algorithms could be trained on racist, sexist or politically biased material.