May 03, 2007

A Code of Ethics for Robots?
Uh, Yes. Please.

In the current issue of The Scientist, Glenn McGee argues for a code of ethics to guide your treatment of your Roomba ... and to protect you against the day when it wakes up:
The South Korean people really love robots. Industry in South Korea receives millions in government subsidies to develop them. Recently, Park Hye-Young, of the South Korean Ministry of Commerce, Industry, and Energy’s robot team, said in a statement to Agence France-Presse that the Ministry hoped “to have a robot in every South Korean household between 2015 and 2020,” and predicted that these robots would develop “strong intelligence.”

South Koreans are not the only ones embracing robots. Already iRobot, a company cofounded by Rodney Brooks, director of MIT’s Computer Science and Artificial Intelligence Laboratory, has sold at least two million units of the Roomba, a little robotic vacuum cleaner. The promise of the robot vacuum and its cousins is that the home robot will become faster, more reliable, and more cost-effective than a human domestic worker. It has to get this “strong intelligence” part down first. My Roomba is a one-trick pony, sucking dirt while rolling in circles and slapping into the same walls every day as it relearns a 12' x 12' room. This is not Rosie from “The Jetsons.”
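That daily “relearning” is by design: the early Roomba kept no persistent map of the room, relying instead on randomized coverage behaviors such as spiraling, wall-following, and bouncing off whatever it hit. A toy sketch of that map-free, bump-and-turn loop (in Python, with entirely hypothetical stand-ins for the robot’s motor and sensor calls) might look like this:

```python
import random

# Toy sketch of map-free "bump and turn" coverage, loosely in the
# spirit of the early Roomba's randomized behaviors. drive_forward,
# bumper_pressed, and rotate are hypothetical stand-ins for a real
# robot's motor and sensor API.

def bumper_pressed():
    """Pretend bump sensor: trips about 20% of the time."""
    return random.random() < 0.2

def drive_forward():
    print("driving forward")

def rotate(degrees):
    print(f"rotating {degrees:.0f} degrees")

def clean(steps=20):
    # No map is ever stored, so every run starts from scratch;
    # this is why the same walls get slapped every day.
    for _ in range(steps):
        drive_forward()
        if bumper_pressed():
            # Turn a random amount away from the obstacle; the
            # randomness is what yields statistical coverage over time.
            rotate(random.uniform(90, 270))

if __name__ == "__main__":
    clean()
```

There is a real trade-off behind the comedy: random coverage needs no sensor fancier than a bumper, which is part of what made a cheap home robot possible at all.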

But the more important issue regarding today’s domestic robots and the future is not so much about intelligence as it is about ethics. If you ever watched the Roomba-sized robots hack each other to bits on the aptly named British television program “Robot Wars,” you know the fear that lives in the souls of many who will never buy a domestic robot: that their Roomba will one day awaken like the robots of The Terminator. A robot with sinister intentions, without ethics, or adhering dispassionately to a code of ethics where intuition and subtlety are required (remember RoboCop?) has been the fuel of science fiction for decades. Should we require robot makers to program a code of ethics into domestic products?

Perhaps robots should be afraid of us too; whether or not they dream of electric sheep, the robotic sex toys under development are marketed as better-than-real-life companions. But they are plastic and metal, not human. As humans build robots that learn what their owners desire, the dilemma of the robots of Blade Runner emerges: What do humans owe “purpose-built” machines that begin to reach awareness, or to so resemble awareness that it becomes a selling point? Should laws be written to protect robots from us, by requiring robot makers to stop short of, say, robosexual devices that learn to be incredibly intimate with humans and yet are owed nothing? If so, do we create such laws in the interest of robots, or to preserve our own human dignity by choosing not to create a new kind of slave, whether or not that slave is fully aware?

The South Korean government has taken a progressive step by convening a committee to draw up an ethical code to prevent humans from abusing robots and vice versa. The code draws in part on the work of science-fiction writer Isaac Asimov and, specifically, according to Park, on the three laws Asimov proposed for robot ethics in his 1942 story “Runaround”: 1) A robot may not injure a human being or, through inaction, allow a human being to come to harm; 2) A robot must obey orders given it by human beings, except where such orders would conflict with the first law; and 3) A robot must protect its own existence as long as such protection does not conflict with the first or second law.
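Asimov’s ordering matters: each law yields to the one above it, which makes the trio read like a small, prioritized decision procedure. Here is a purely illustrative sketch of that hierarchy in Python; the Action fields and the choose() function are toy assumptions, since deciding whether an act actually “harms a human” is exactly the part no one knows how to compute:

```python
# Purely illustrative: Asimov's Three Laws rendered as a
# priority-ordered filter over candidate actions. Every name here
# is a hypothetical toy; the First Law's "through inaction" clause
# is not modeled at all.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool = False       # would injure a human (First Law)
    ordered_by_human: bool = False  # was commanded by a human (Second Law)
    endangers_self: bool = False    # risks the robot itself (Third Law)

def choose(actions):
    # First Law dominates: discard anything that injures a human.
    safe = [a for a in actions if not a.harms_human]
    if not safe:
        return None  # refuse to act at all rather than harm a human
    # Second Law: among safe actions, obeying a human order comes first.
    obedient = [a for a in safe if a.ordered_by_human]
    candidates = obedient or safe
    # Third Law: prefer self-preservation, but only within what the
    # two higher laws already allow.
    for a in candidates:
        if not a.endangers_self:
            return a
    return candidates[0]  # self-sacrifice is permitted by the lower law

if __name__ == "__main__":
    options = [
        Action("push_human_aside", harms_human=True, ordered_by_human=True),
        Action("vacuum_the_rug", ordered_by_human=True),
        Action("recharge"),
    ]
    print(choose(options).name)  # prints: vacuum_the_rug
```

Of course, the sketch dodges everything that makes the laws good fiction: all the hard questions are hidden inside those boolean fields.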

Likewise, a committee of EURON, the European Robotics Research Network, met in Genoa, Italy, in June 2006 and concluded that a code must be created to deal with the problems of hostility to and from robots, as well as how to avoid accidents, trace robots, ensure the secrecy of their data, and monitor the nature of their intelligence, which one member of the committee aptly described as “intelligence of an alien sort.”

It remains to be seen whether robots will become in some sense intelligent androids, capable of interacting as peers with humans and other parts of the world. In the meantime, we are much closer to making robots with “strong intelligence” than we are to creating a code of ethics to guide our stewardship of tin men, or to protecting humanity from misbegotten robotics. Either the effort to create a code of ethics to shape the evolution of robotics will be embraced, or we may reap the consequences. It only remains to be seen who will wake up first.
