Four tech giants launch the Frontier Model Forum, which is accepting new members, to guard against the misuse of AI and mitigate its risks.
In an announcement yesterday, Anthropic, Google, Microsoft, and OpenAI launched the Frontier Model Forum, an industry body that will focus on ensuring the safe and responsible development of AI models and tools.
The core objectives of the body include advancing AI safety research, identifying best practices, collaborating with policymakers and academics, and supporting efforts to develop applications that address societal challenges such as climate change.
This follows a White House meeting between US President Joe Biden and seven tech companies (a group that also included Meta, Amazon, and Inflection AI), where the firms agreed to work on these fronts and establish safety standards. You can read the official announcement on the Google blog; OpenAI and Microsoft have also announced the collaboration.
The forum is open to new organizations developing large-scale machine learning models that exceed the capabilities currently present in the most advanced existing models.
It’s remarkable to see big tech companies come together to address the risks posed by artificial intelligence.
The Guardian reported both sides of the conversation:
Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control. This initiative is a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity.
Brad Smith, President, Microsoft
I have grave concerns that governments have ceded leadership in AI to the private sector, probably irrecoverably. It’s such a powerful technology, with great potential for good and ill, that it needs independent oversight that will represent people, economies and societies which will be impacted by AI in the future.
Dr. Andrew Rogoyski, Institute for People-Centred AI, University of Surrey