The company is using the rules to develop a ‘Robot Constitution’ aimed at preventing its new robot system from harming humans.


Science-fiction writer Isaac Asimov’s “Three Laws of Robotics” are being used to help Google create guardrails for its latest robot advancement. 

The company’s DeepMind team developed the safety rules as Google works on robots that can complete complex tasks from a simple request, such as tidying up the house or cooking a meal. 

These tasks can be difficult for a robot to pull off, since they can require it to identify and learn to use all kinds of objects and tools in varied environments. In a Thursday blog post, the DeepMind team described the progress it has made using “foundational” computer models to help the machines visually identify objects and then execute the corresponding task, such as viewing a countertop and then wiping it down with a sponge.

How the system works. (Credit: Google)

The work led to Google’s development of AutoRT, a system that can safely orchestrate commands to more than 20 robots simultaneously across a variety of office-building settings. While the feat is impressive, programming robots with such wide-ranging capabilities also carries risks: imagine a robot wielding a meat cleaver or splashing boiling water near a person. So to ensure the machines function safely, Google’s DeepMind team took inspiration from Asimov, a prolific sci-fi author who wrote many stories revolving around robots.

The result is a “Robot Constitution” for the company’s latest robot system. “These rules are in part inspired by Isaac Asimov’s Three Laws of Robotics – first and foremost that a robot ‘may not injure a human being,’” Google wrote in the blog post. “Further safety rules require that no robot attempts tasks involving humans, animals, sharp objects or electrical appliances.”
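To make the idea concrete, a constitution-style screen like the one Google describes might reject any candidate task that mentions a forbidden subject. The rule list, keyword approach, and function names below are illustrative assumptions, not Google's actual implementation (which reportedly uses a language model to judge tasks):

```python
# Hypothetical sketch: screen candidate robot tasks against constitution-style
# rules. The keyword list is an assumption for demonstration only.

FORBIDDEN_KEYWORDS = {
    "human", "person", "animal", "knife", "scissors", "outlet", "toaster",
}

def violates_constitution(task: str) -> bool:
    """Return True if the task description mentions a forbidden subject."""
    words = (w.strip(".,") for w in task.lower().split())
    return any(w in FORBIDDEN_KEYWORDS for w in words)

def filter_tasks(candidates):
    """Keep only tasks that pass the rule screen."""
    return [t for t in candidates if not violates_constitution(t)]

tasks = [
    "wipe the countertop with a sponge",
    "hand the knife to a person",
    "pick up the sponge",
]
print(filter_tasks(tasks))  # the knife/person task is rejected
```

A real system would need semantic understanding rather than keyword matching, since “chop vegetables” implies a sharp object without naming one.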

The irony is that the Three Laws of Robotics are flawed. Asimov essentially used them as a plot device to drive stories about how the ambiguities in the laws can still lead to conflict and conspiracy. In one Asimov story, for example, the machines covertly work to sacrifice individual people for the betterment of humanity. 

In Google’s case, the company’s DeepMind team noted the Robot Constitution was merely one guardrail. In addition, the team created several layers of practical safety measures for its latest robot system, including a button to turn them off.  “For example, the collaborative robots are programmed to stop automatically if the force on its joints exceed a given threshold, and all active robots were kept in line-of-sight of a human supervisor with a physical deactivation switch,” the company said.
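The layered measures Google describes can be sketched in miniature: an automatic stop when joint force crosses a threshold, combined with a human-held deactivation switch. The threshold value, class, and method names here are assumptions for demonstration, not Google's code:

```python
# Illustrative sketch of layered robot safety checks: a joint-force limit that
# halts the robot automatically, plus a physical kill switch held by a human
# supervisor. The 40 N limit is an assumed value, not a real specification.

FORCE_LIMIT_NEWTONS = 40.0

class SafetyMonitor:
    def __init__(self):
        self.kill_switch_pressed = False  # set by the human supervisor's switch

    def should_stop(self, joint_forces):
        """Stop if any joint force exceeds the limit or the switch is pressed."""
        over_limit = any(f > FORCE_LIMIT_NEWTONS for f in joint_forces)
        return self.kill_switch_pressed or over_limit

monitor = SafetyMonitor()
print(monitor.should_stop([12.0, 18.5, 9.3]))   # normal forces -> keep running
print(monitor.should_stop([12.0, 55.0, 9.3]))   # one joint over limit -> stop
```

Running both checks on every control cycle, independently of the task planner, is what makes them a guardrail rather than just another rule the planner could misinterpret.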
