OpenAI CEO testifies and urges lawmakers to set up an agency to oversee AI regulation.
We’ve grown accustomed to Congressional hearings with tech executives over the last few years. They’re almost always bad. OpenAI CEO Sam Altman recently had the pleasure of joining the ranks of Mark Zuckerberg and Sundar Pichai, testifying before a Senate subcommittee and agreeing on the need to regulate AI technology. More and more influential tech experts and company executives are shifting toward what now looks like an inevitable consensus: AI needs to be regulated for everyone’s safety.
Altman notably urged lawmakers to regulate the uncharted territory to mitigate the potential hazards of generative AI. He also suggested the creation of an agency whose sole purpose would be to issue licenses for large-scale AI models after some form of due diligence and rigorous checks.
The FDA or SEC of AI tech, in other words.
Indeed, interest in AI has grown dramatically. Tech giants have invested billions of dollars in tools, experiments, and teams working on some form of AI or ML. The concerns around AI typically center on misinformation, job loss, and broader social harms. It’s from these concerns that the demand to regulate AI stems. Even US President Joe Biden has now expressed concerns.
The subcommittee was generally appreciative of the cooperation. The suggestions were taken with a grain of salt, however, and the reactions were mixed at best. Some experts argue that far more rigorous measures are needed for AI and that a single licensing agency won’t cut it.
That said, there was serious discussion about creating an independent agency to oversee the larger realm of AI. More specifically, it would work in the context of disclosure requirements and antitrust rules.
It’s noteworthy that the European Union and China have already put regulations in place for (or against) artificial intelligence’s unchecked growth. Also, initially, Altman had threatened to leave Europe if compliance with AI laws became too challenging, though he was quick to retract that statement and OpenAI continues to operate in Europe.
The EU’s focus has been on copyright issues in training data and the black box problem: the lack of transparency in how an AI model arrives at a particular output from a given input, since the intermediate steps are not visible in pre-trained models. Altman also discussed risks and regulations with the UK’s Prime Minister, alongside the heads of other AI companies.
The EU is also aiming to develop an AI pact with Google’s parent company Alphabet, which would affect projects like Google Bard and DeepMind.
The importance of accountability and transparency cannot be overstated, and if left unchecked, AI can certainly do irreparable harm. At the same time, advancement should not be hindered in areas delivering mass public good, such as pharmaceuticals.