Eight more companies, including Nvidia, Adobe, and IBM, join the White House’s effort toward responsible AI development.
After tech giants agreed to AI safety measures with the White House in late July, the Biden administration has secured commitments from eight more companies to largely the same principles and protocols.
The eight companies are Adobe, Nvidia, IBM, Palantir, Stability AI, Salesforce, Scale AI, and Cohere. The commitments are voluntary, with no penalty for failing to follow through, especially since much of the language concerns ethics rather than concrete rules or targets to hit.
The official press release states that the companies will implement the commitments immediately and that these commitments underscore the fundamental notions of safety, security, and trust in developing AI systems, models, tools, and platforms.
Broadly, the commitments include:
- Ensuring products are safe before introducing them to the public.
- Building systems that put security first.
- Earning the public’s trust.
While covering the development, Emilia David of The Verge noted that “legislating AI has been slow compared to the pace of innovation. Regulations governing AI, particularly generative AI, are still being discussed, with the Senate resuming hearings on legislating the technology this week.”
In a considerable boost to AI safety efforts, big tech companies including OpenAI, Google, Meta, Microsoft, Inflection AI, Anthropic, and Amazon committed to enacting voluntary safeguards and standards during a White House meeting. (July 26, 2023 — Big Tech joins hands for AI safety)