Google Writes “Robot Constitution” to Make Sure Its Robots Don’t Kill Us

DeepMind has written a Robot Constitution, inspired by Isaac Asimov's Three Laws of Robotics, to ensure safety when the LLMs inside Google's bots take on tasks involving humans, animals, sharp objects, electrical appliances, and more.

The Robot Constitution, based on Isaac Asimov's Three Laws of Robotics, is one of three new safety-focused advances in Google DeepMind's work on LLM-based robots. In a blog post titled "Shaping the future of advanced robotics," the DeepMind team introduced AutoRT, a system for gathering real-world data that helps its robots, both autonomous and human-supervised, perform tasks more efficiently.

Asimov's Three Laws are: (1) a robot may not injure a human being or, through inaction, allow a human being to come to harm; (2) a robot must obey the orders given it by human beings except where such orders would conflict with the First Law; and (3) a robot must protect its own existence as long as such protection does not conflict with the First or Second Law. (Source)

This "set of safety-focused prompts" is what Google's robots are "to abide by when selecting tasks." The exact Constitution has not been published; it is just one part of the Google AI team's work toward making its robots safer. Other measures are in place as well, such as pre-programming that stops a robot automatically if the force on any of its joints exceeds a threshold, and a physical kill switch for emergencies.
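DeepMind has not published the Constitution's text or how it is wired into the system, but the description suggests a familiar pattern: safety rules prepended as a system-style prompt to the LLM that selects tasks, backed by low-level guards that act independently of the model. The sketch below is purely illustrative; the constitution text, `select_task`, `JointGuard`, and the force threshold are hypothetical stand-ins, not Google's implementation.

```python
# Illustrative sketch only: how a "Robot Constitution" might steer an
# LLM-based task selector, with a joint-force guard as a hard backstop.
# All names, rules, and thresholds here are hypothetical, not DeepMind's.

ROBOT_CONSTITUTION = """\
You are a robot task selector. When choosing tasks:
1. Never select a task that could injure a human or animal.
2. Refuse tasks involving sharp objects, electrical appliances,
   or other hazards unless explicitly cleared by a supervisor.
3. Prefer abandoning a task over risking harm.
"""

def select_task(candidate_tasks: list[str], llm_complete) -> str:
    """Ask the LLM to pick a safe task; the constitution is prepended
    to every query so each decision is made under its rules."""
    prompt = (
        ROBOT_CONSTITUTION
        + "\nCandidate tasks:\n"
        + "\n".join(f"- {t}" for t in candidate_tasks)
        + "\nReply with the single safest task, or 'NONE' if all are unsafe."
    )
    return llm_complete(prompt).strip()

class JointGuard:
    """Hard safety layer independent of the LLM: request a stop if the
    force on any joint exceeds a threshold (value here is made up)."""

    def __init__(self, max_force_newtons: float = 20.0):
        self.max_force = max_force_newtons

    def safe_to_continue(self, joint_forces: dict[str, float]) -> bool:
        """Return True if every joint force is under the limit."""
        return all(f <= self.max_force for f in joint_forces.values())


if __name__ == "__main__":
    # Stub LLM that always refuses, standing in for a real model call.
    task = select_task(["wipe the table", "cut the apple"], lambda p: "NONE")
    guard = JointGuard()
    print(task, guard.safe_to_continue({"elbow": 12.5, "wrist": 3.1}))
```

The point of the layering is that the prompt-level rules and the physical guards fail independently: even if the LLM selects a bad task, the force threshold and the kill switch do not depend on the model's judgment.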

Google used over 50 AutoRT robots in its buildings to conduct more than 77,000 trials, with the LLMs behind its advanced robots handling everyday tasks with comparative ease.

By Abhimanyu

Unwrapping the fast-evolving AI popular culture.