18 countries including the US and UK sign a new guiding agreement to make AI secure by design.
In a major move toward making AI “secure by design,” the US, the UK, and more than a dozen other countries have inked an international agreement to keep AI safe from bad actors. The idea is to push companies working on AI to create tools and systems that are secure from the outset. The 20-page document bears the signatures of 18 countries.
Notably, it’s a non-binding agreement. There’s no rule or law to put things in motion. In that sense, it follows the same pattern as previous agreements – guiding principles rather than rigorous rules. We’ve not hit the point where AI truly becomes malevolent, after all.
Reuters reported on this on Monday, quoting US Cybersecurity and Infrastructure Security Agency director Jen Easterly:
This is the first time that we have seen an affirmation that these capabilities should not just be about cool features and how quickly we can get them to market or how we can compete to drive down costs, an agreement that the most important thing that needs to be done at the design phase is security.
Aimed to shape the development of AI going forward, the agreement is also signed by Germany, Italy, Australia, and Singapore, among others.
It’s another framework of sorts, and it doesn’t directly confront the biggest problems plaguing the world of generative AI – namely, the ways in which it can be misused.
Countries in Europe are already working on various initiatives and conducting research to better manage AI risk and to curb the spread of disinformation and misinformation generated or amplified with the help of AI.
Has AI brought the world together? After all, nothing brings powerful economies together like a common threat.