
Gaining Trust in Collaborating with Robots

Written by Samuel Bouchard | Feb 23, 2011 10:23 AM

I just re-read Shuichi Fukuda’s article, “How can Man and Machine Trust Each Other and Work Better Together?” in the proceedings of the 2008 ASME IDETC. It’s an incisive reflection on the ever more complex interactions between humans and their tools.

Everyone knows what to do with a hammer or drill. We work a little with it, develop a rhythm and, if we are dextrous, we can build something nice. We trust the simplest technologies because we know what to expect. We are masters of uncomplicated machines: they obey. Commands are one-way. 

On the other hand, technology keeps getting more sophisticated while its service life decreases. We therefore have less time to learn to use increasingly complicated machines. To Fukuda, ours is an era of non-experts. When we can’t handle them well, the objects we use feel like black boxes. That poor understanding makes us lose faith in them. We try to make them work, but when they don’t respond the way we want, we get frustrated. This can be dangerous, even lethal. The article refers to plane accidents where the pilot winds up battling an automatic command that he unknowingly activated. When relations sour between human and machine, the problem worsens: a frustrated individual seemingly uses a third of the brainpower of a calm one.

There’s no turning back. Technology will keep getting more complicated and we’ll be able to use it in an increasing variety of contexts. We will no longer command our machines, we will cooperate with them. How can we deal with this burgeoning complexity? The author suggests two approaches:

  1. Draw inspiration from computer programs. In the early days, software had fixed characteristics, like today’s mechanical products. Now we tend to iterate, adding functions as the user learns the program. Because more and more products incorporate electronics and software, we can expect physical products to evolve the same way.
  2. Allow machines to develop a personality. There are theories on how to build effective teams of humans. Likewise, he thinks that a machine with the right “temperament” for its user (what he calls mechanality) will improve collaboration. He thinks that if machines start with simple functions and interactive capabilities, they can adapt to the user’s personality. The machines will then be able to interpret our actions and anticipate what we want them to do. We will have the impression that they understand us, and we will begin to trust them.

I find these ideas interesting from the perspective of human-robot collaboration. It is not just a kinematics, control or even safety issue. It gets into user interface, ergonomics and psychology, areas where robot engineers are not the most skilled, yet it is a very important aspect. Robot integrators who have been around for decades tell me how the introduction of the teach pendant was a revolution in lowering the barrier to interacting with robots. If we can find even more intuitive ways to program and communicate with robots -- and I'm sure that we can -- hopefully we can put robots in the hands of many more non-experts.

[Photo: Davezilla on Flickr]