Who is Responsible for Robot Ethics?
Posted on Dec 07, 2015 12:00 PM
Do roboticists need to get involved in ethical debates? Aside from a few specific issues, debates concerning ethics are not widespread in our field, nor are ethicists. At present we have no legal obligation to follow industry-specific ethical standards, as some industries do. Despite this, debates on the ethics of robotics are common in the media, where they are often misguided or simplistic. Who is responsible for ensuring that ethical debates about robotics are realistic? Well, probably all of us. To mark the development of a new standard on roboethics, we take a look at why ethics affects everyone in robotics.
It's impossible to avoid ethical questions. They're important in both manufacturing businesses and research. How should we balance health and safety with cost? How can we be transparent about the limitations of our product or study, without making it sound like a failure? Whose fault is it if something goes wrong with a piece of technology?
We deal with ethical questions all the time, but often we don't realize it. With robotics there are a whole load of factors to think about when posing ethical questions. Despite several past attempts to introduce ethics into robotics, it remains less well integrated than in other industries and fields.
People in robotics do occasionally bring up the topic of ethics. However, on a day-to-day basis it's often left to the media to pose the ethical questions - that's the same media which, as Alan Winfield has pointed out, has a somewhat distorted view of robotic capabilities. As a recent debate discussed, it is in roboticists' own interest to bring the right ethical issues to the forefront.
2016 looks to be the year when roboethics is really put on the table - with the upcoming release of a Springer book on Roboethics and a new standard under development. It's a good time to ask how we robotics professionals can incorporate ethics into our current workflow.
Do I Really Need Ethics?
Ethics is one of those odd theoretical topics which most of us understand in principle, but not all of us know how to apply in practice. This is probably because ethics doesn't receive much focus in engineering degrees, which is a bit worrying given that engineering decisions can put many human lives at stake.
Robotics involves complex systems which bridge the gap between many different fields. Even for the simplest collaborative robot, ethical questions might include: psychological factors (Are there any dangers if workers give the robot a name?); socioeconomic factors (Will this robot deskill any workers?); commercial factors (Who is legally responsible for the actions of the robot?) and more.
It doesn't matter if you are a robot manufacturer or a robot user, at some point ethical questions will enter into the use of the robot. Ethical codes try to systematize these issues, to ensure no important questions are forgotten.
A Standard for Robotic Ethics
Attempts to introduce roboethical standards have been going on for some years now. Back in 2007 on the Robotiq blog, we covered the South Korean Robot Ethics Charter. It was around the same time that the EURON Roboethics Roadmap was released, which started to look at specific ethical issues in several robotic fields. Proposals of ethical codes continue to pop up in research every so often, ranging from codes for human-robot interaction to guidelines for robot makers. There is even the think tank, OpenRoboEthics, which collects public opinion on a few key robotic fields, like autonomous cars and military robots.
The history of robot ethics began when science fiction writer Isaac Asimov proposed his famous Three Laws of Robotics in 1942. These laws were based on the idea of robots being ethical themselves. However, current Artificial Intelligence is not advanced enough for moral robots to be our biggest ethical issue.
Many existing codes of roboethics have tried to follow Asimov's idea. They cover the possible future capabilities of robotics, e.g. programming highly intelligent robots to make ethical decisions, deciding how to deal with a super-AI, questioning whether future AIs can be more morally objective than humans, etc. They deal with the ethics of human-robot interaction, based on a level of robot intelligence which is currently unrealistic.
What we're missing is a standard code of ethics which is directly applicable to the state of robotics right now.
One potential contender is the new BS 8611 standard that's currently under development (you can leave comments on the draft until December 31st, 2015). Titled "Guide to the ethical design and application of robots and robotic systems", it covers some broad ethical categories which affect robot users and manufacturers, not rules for robots themselves. The categories are relevant for all roboticists, even those who don't develop cognitive systems.
The standard provides a good starting point for thinking about ethical questions. Some categories are familiar to us and have already received a lot of attention, such as the risk of job-loss caused by robots. Other important categories are less familiar, such as the potential for a robot operator to unknowingly command a robot to do something illegal and thereafter be held responsible for its actions. Issues such as wrongly attributing too much intelligence to a robot can be highly dangerous, as we've discussed before.
However, for a standard to become useful people need to actually use it.
How to Actually Use Roboethics
Having an ethical standard is a good start, but in order for a code of ethics to be effective it has to be put into practice. This means actually thinking about the ethical questions surrounding our robots. Ethical standards are not like other engineering standards, which can often be followed by keeping within certain tolerances. Ethics requires a certain amount of lateral thinking and creativity.
It might also be helpful to look at other fields and industries to find out how they implement ethical codes. Within medical research, the Declaration of Helsinki has provided a working basis for any study involving human experimentation. Almost all medical studies must receive ethical approval from an independent review panel. It is surprising that so few robotics research studies consider the ethical aspects of their work, even those which claim to be developing technologies for human use. However, this is because robotics doesn't yet have a culture of ethics. It's probably time that we did.
It's our job to teach the public about the capabilities of robots and the real ethical questions surrounding them. We can only do that if we understand those questions ourselves.
How do you think ethics are applicable to your robot? What questions do you have about how ethics can apply to robots? Have you ever had to deal with any ethical issues before in engineering? Tell us in the comments below or join the discussion on LinkedIn, Twitter or Facebook.