More than 350 top executives and researchers in artificial intelligence have signed a statement urging policymakers to recognize the serious risks posed by unregulated AI, warning that the future of humanity may be at stake.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the signatories, including OpenAI CEO Sam Altman, said in the 22-word statement published Tuesday by the nonprofit Center for AI Safety (CAIS).
Competition in the industry has led to a sort of “AI arms race,” CAIS executive director Dan Hendrycks told CBC News in an interview.
“That could escalate and, like the