We Can Now Build Autonomous Killing Machines, But Should We?
Posted on Mar 09, 2015, 2:34 PM
Working in the robotics world has made me realize that the general population (read: people not in the robotics world) sometimes has a pretty bad perception of robots. As soon as you bring the subject of robots to the table, people imagine intelligent humanoid killing machines that will set out to conquer the world. Thanks to classic sci-fi movies such as Terminator and, more recently, Chappie, people see this as the future of robotics. Technologically, we are at a point where we can literally build autonomous killing machines. Here are the reasons why it is probably not a good idea.
With technology constantly (and quickly) evolving, we are at a point where drones and automated missiles keep improving and proliferating. For the moment, however, these technologies are still supervised or controlled by a human being. In terms of decision-making capability, these machines can't judge what is right and what is wrong. The next step in this technological evolution is to introduce artificial intelligence into these devices to make them totally autonomous; ethically, do we really want to do this?
Is the Technology Ready?
With the current use of drones and other remote-controlled devices, the era where these machines could be fully automated is not that far off. The technology is there. Robots can be programmed to make decisions based on relatively complex logic... but can we ever be totally sure that this logic will always be right? Could it ever encompass every possible scenario? In situations where the danger, or the possibility of danger, is high (say, a street shootout), the decision of whether or not a robot may harm or even kill civilians is not something a fixed set of rules can be trusted to get right. This is basically why humans are still the ones making the calls in these types of combat scenarios.
“The potential for lethal autonomous weapons systems to be rolled off the assembly line is here right now. But the potential for lethal autonomous weapons systems to be deployed in an ethical way or to be designed in an ethical way is not, and is nowhere near ready.” - Ryan Gariepy, Clearpath Robotics CTO.
The same dilemma shows up with self-driving cars. If the driving algorithm decides to swerve around a dog on the road, that decision can potentially harm the car's passengers... What if the dog is instead a child, or even another car? What level of logical complexity needs to be reached before we can have fully autonomous cars and stop worrying about these ethical questions... Who knows? For the moment, if we translate this situation to the battlefield, we still don't have a clear answer. The technology is at a level where some autonomous machines can make some decisions, but do those decisions rest on a logical foundation solid enough to analyze every situation, so that these devices can be deployed on a large scale and we can be sure everything will run correctly? Hmmm...
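To make that "encompass every possible scenario" problem concrete, here is a deliberately naive sketch (in Python, with entirely hypothetical names and thresholds, not any real autopilot logic) of the kind of hand-written avoidance rule discussed above. Every branch bakes in a value judgment, and everything the programmer did not anticipate falls into a catch-all.

```python
# A deliberately naive obstacle-avoidance policy. All names and thresholds
# here are hypothetical; the point is how quickly enumerated rules run out.

def avoidance_decision(obstacle: str, swerve_risk_to_passengers: float) -> str:
    """Choose between braking, swerving, or staying the course."""
    if obstacle == "dog" and swerve_risk_to_passengers < 0.1:
        return "swerve"   # low risk to passengers, so avoid the animal
    if obstacle == "child":
        return "swerve"   # always avoid a child... at any risk to passengers?
    if obstacle == "car":
        return "brake"    # swerving into another vehicle could be worse
    return "brake"        # fallback: every scenario nobody thought of
```

However many branches you add, there is always a fallback, and that fallback is exactly where the unanticipated cyclist, the second car, or the child chasing the dog ends up.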
Read: Clearpath Robotics Takes Stance Against 'Killer Robots'
Do You Really Want to Design Killing Machines?
As a robot designer, engineer, or scientist, do you really want to build killing machines? It is a high-tech world, but ethically, what are the rewards? Peter Asaro has spent the past few years lobbying the international community for a ban on killer robots as the founder of the International Committee for Robot Arms Control. He believes that it's time for "a clear international prohibition on their development and use." According to him, this would let companies like Clearpath continue to cook up cool stuff "without worrying that their products may be used in ways that threaten civilians and undermine human rights."
For now, the main problem with remote-controlled devices is more often than not a loss of communication in the field. Since the drone pilot operates from a remote location, sometimes far from the drone itself, communication breakdowns are frequent. What if, instead of building machines that can decide autonomously what to do in certain situations, we designed reliable, encrypted communication links that allow people to control the drone 100% of the time? Humans would still make the important decisions and could potentially avoid mistakes.
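As a rough illustration of that idea, here is a minimal sketch, assuming hypothetical `link` and `drone` interfaces (this is not a real drone API), of a control loop that never decides anything on its own: it executes operator commands while the encrypted link is alive, and simply holds position the moment the link goes quiet.

```python
import time

# Hypothetical "human in the loop" failsafe. The link and drone objects are
# assumed interfaces, not a real API: link.receive() returns a decrypted,
# authenticated operator command, or None if nothing arrived in time.

HEARTBEAT_TIMEOUT_S = 2.0  # how long we tolerate silence before failing safe

def control_loop(link, drone):
    last_contact = time.monotonic()
    while True:
        command = link.receive(timeout=0.1)
        if command is not None:
            last_contact = time.monotonic()
            drone.execute(command)        # a human made this decision
        elif time.monotonic() - last_contact > HEARTBEAT_TIMEOUT_S:
            drone.hold_position()         # link lost: do nothing autonomous
```

The design choice is the inverse of autonomy: when in doubt, the machine does less, not more.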
So, Are We in Danger?
To answer this question: I don't think we are in danger in the near future. Over the long-term horizon, I honestly don't know. It basically depends on the rules and regulations that will (and should) be set to prevent robots from becoming killing machines. When considering safety, you have to remember that human error is often a large component of accident fault. In this regard, robot programming could, if it were able to analyze every situation, make robotic decision making more or less foolproof. At that point, human or robot error might be relegated only to a breakdown of sensory input.
With the robotic revolution that the manufacturing world is living through right now, we can see that robots are designed to help humans in their everyday jobs. But even though most of today's robots don't look like humanoids, the perception of these devices isn't all that good. I imagine this perception is tied to all those "autonomous killing machine" fears instilled in people's minds. The manufacturing world is improving with the introduction of robots in North America and Europe. Manufacturing processes are beginning to leave developing countries and come back home. This is good news not only for robot manufacturers, but for the general population: bringing back manufacturing processes means bringing back jobs, increasing product quality and developing local markets. So, to answer the "Are we in danger?" question: I don't think so. For the moment, artificial intelligence is not taking over the world; we are safe. ;-)
For more information on robots that can make a positive change in the world, we have put together a document featuring great R&D robotics projects using the Robotiq 3-Finger Robot Gripper.