Roughly 1,200 tech personalities and well-known researchers have signed an open letter calling on AI labs to pause all experiments and training runs beyond the GPT-4 level, in a bid to put safety precautions in place first.
The growth of tools such as ChatGPT and Midjourney has been explosive. The race triggered by OpenAI's recent advances, which has pushed tech giants like Google and Meta to launch their own products and update them more frequently, has further fueled the AI hype.
A recent open letter published by the Future of Life Institute calls for a verifiable halt to experiments and training runs at all key AI labs, so the situation can be better assessed and social, ethical, and legal safeguards put in place.
According to the letter, the demand stems from the unprecedented growth of recent AI tools, which hasn't given people enough time to adjust to them. The letter opens by citing research into the risks and dangers of AI and goes on to argue for "shared safety protocols for advanced AI design" to be "overseen by independent outside experts."
The letter currently proposes a six-month pause.
The letter has been signed by personalities such as SpaceX's Elon Musk, Apple co-founder Steve Wozniak, StabilityAI's Emad Mostaque, DeepMind's Victoria Krakovna, and many more. The platform is currently vetting new signatures and not displaying them until they are verified.
However, the top names, including those mentioned above and others such as Gary Marcus, Max Tegmark, Grady Booch, Andrew Yang, and Tristan Harris, along with many government officials and leaders of tech and AI startups, have all been "independently and directly verified." A signature from Sam Altman, OpenAI's CEO, also surfaced temporarily but was dismissed as it couldn't be verified.
Notable AI researchers, professors, and authors from some of the world's most prestigious institutions, including Princeton University, MIT, the University of California, Cambridge, NYU, Harvard, the Artificial Intelligence Research Institute, and more, are also named on the list.
When asked whether this will work, New York University's Gary Marcus (Rebooting.ai, RobustAI) replied, "Will be interesting to see."
- Many argue that even if the pause is adopted and companies publicly commit to it, they could keep working in the background and simply release their new products or updates after the six-month gap.
- A bloc of AI commentators also believes the letter will only add to the AI hype and might inject renewed enthusiasm into the very AI startups it aims to pause.
- James Vincent from The Verge writes that “the letter is unlikely to have any effect on the current climate in AI research … but it is a sign of the growing opposition.”
The Future of Life Institute is primarily funded by the Musk Foundation and mainly publishes on risks from AI, biotechnology, nuclear weapons, and climate change.
You can read the full letter here.