The CAIS statement has drawn support from prominent AI scientists and notable public figures.
Notable figures such as Bill Gates, OpenAI CEO Sam Altman, Congressman Ted Lieu, UN Sciences and Technology chairman Aaron Xiang, DeepMind COO Lila Ibrahim, Microsoft CTO Kevin Scott, Stability AI CEO Emad Mostaque, and Anthropic CEO Dario Amodei have signed a statement from the Center for AI Safety, alongside leading AI experts, professors, scientists, and signatories ranging from astronauts to former presidents. The statement, in full: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
According to the statement’s preamble, AI poses serious risks that can be difficult to voice concerns about. The aim is to overcome that obstacle, open up discussion, and create common awareness that a growing number of experts and public figures take these severe risks seriously.
The Center for AI Safety, which launched in August 2022, identifies among AI’s risks generative AI’s ability to “perpetuate bias, power autonomous weapons, promote misinformation, and conduct cyberattacks.”
There is also plenty of literature to serve as food for thought, such as arguments on how natural selection might favor AIs over humans and how rogue AI systems could arise.
How will signing this statement help? The signatories aim to build consensus among AI experts and policymakers that AI systems pose risks that should be addressed today. It’s a nudge toward regulation and governance. The statement itself doesn’t delve into details, but it shows that influential public figures and leading AI scientists are increasingly willing to support AI regulation as a way to mitigate these serious risks.
Generative AI is indeed being adopted at a pace that many find concerning. But how much weight the backing of leading industry experts will carry in shaping policy in the world’s most innovative economies remains to be seen. Here’s the statement.