Would you trust a co-worker who had proved themselves to be unreliable? It's a problem which we will almost certainly face with our robot co-workers in the near future.
Let's face it. Collaborative robots are not yet truly collaborative.
Modern cobots are only collaborative in the sense that they can operate around humans without the need for safety fences. They won't harm a human co-worker but that's as far as it goes. If we were to apply the same definition to our human co-workers, we wouldn't call them "collaborative," would we? Not harming your colleagues is surely a basic requirement for being a human being!
Collaborative (adj): "produced by or involving two or more parties working together."
In this sense, cobots have a little way to go until they are really collaborative. However, advanced collaboration between people and robots is not too far away.
Before we can truly collaborate, there's something we need to sort out.
How will robots and humans communicate their reliability to each other? How will our robot co-workers show us that we can trust them to do their jobs reliably?
Although this situation is still a little way in the future, researchers are already looking for the answers.
Human + Robots collaboration at the RUC 2018
Last year, a warning was published in the journal Ergonomics by psychologist Prof. Peter Hancock. In it, he warned that the new autonomous technologies need more research into "human factors" — a discipline which examines the interaction between humans and technology.
Trust is a big reason that this research is necessary.
If a robot behaves unpredictably, it will surprise its human work-mates. These people will lose trust in the robot. They will become nervous about what the robot will do next. As roboticists, we might not be too concerned by this situation — we know that we can just identify the problem in the robot's programming, fix it, and happily reboot the robot. However, other workers may not be so understanding.
Robots will need ways to actively repair the trust that is lost when they "misbehave," as a group of psychologists explained in some follow-up research to Prof. Hancock's warning. Perfect autonomy, they argue, is not possible in real industrial situations. Human workers will have to deal with unexpected events relating to the robot many times a day.
In the near future, any robotic system will need ways to quickly repair its co-worker's trust in its abilities so that the human-robot team can get back to work.
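As a rough illustration of what "quick trust repair" could look like in software, here is a minimal sketch. Everything in it is hypothetical (the fault codes, the function names, the phrasing); the point is simply that a raw fault code can be translated into a message that acknowledges the failure, explains it, and states the recovery plan.

```python
# A minimal sketch (all names and fault codes hypothetical) of turning a
# raw controller fault into a trust-repairing message for a human
# co-worker: acknowledge the failure, explain it, state the recovery plan.

FAULT_EXPLANATIONS = {
    "E_GRIP_TIMEOUT": "I couldn't close my gripper in time",
    "E_PATH_BLOCKED": "something blocked my planned path",
}

def trust_repair_message(fault_code: str, recovery_action: str) -> str:
    """Build an apology that explains the fault and the planned recovery."""
    explanation = FAULT_EXPLANATIONS.get(fault_code, "something unexpected happened")
    return (
        f"Sorry - {explanation}. "
        f"I'm going to {recovery_action}, then we can carry on."
    )

# Instead of just logging the bare code "E_GRIP_TIMEOUT":
print(trust_repair_message("E_GRIP_TIMEOUT", "retry the pick more slowly"))
```

The design choice here mirrors the psychologists' point: the same underlying event (a fault code) can either erode trust, if surfaced raw, or actively repair it, if framed as an apology plus a plan.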
How can a robot communicate that it is trustworthy to its human team members?
To answer this, we need to look at how we, as humans, build trust with each other when we are working together.
Rapport (noun): "a close and harmonious relationship in which the people or groups concerned understand each other's feelings or ideas and communicate well."
Rapport has been shown to be essential for effective work and worker satisfaction.
Earlier this year, a team of Canadian researchers studied rapport-building in human-human teams to discover how we can improve human-robot interaction. They found that humans apply similar rapport-building behaviors with robotic colleagues as they do with human colleagues.
They identified various behaviors that people use which either build or hinder rapport, split into verbal and non-verbal behaviors.
As humans, we use a whole host of different behaviors to put each other at ease when working together in a team.
Rapport-building behaviors include:
| Verbal behaviors | Non-verbal behaviors |
| --- | --- |
| Complimenting and thanking your co-worker. | Displaying an open, inviting, and friendly posture to your co-worker. |
| Emphatically and appropriately responding to questions (e.g. in full sentences). | Engaging in friendly facial expressions (e.g. smiling, eye contact, etc). |
| Use of inclusive speech (e.g. "we", "us", etc), and including personal information unrelated to the task at hand to build common ground. | Friendly "back-channel" body language (e.g. smiling, laughing, waving, etc). |
| Genuinely apologizing when criticized and responding to general concerns with agreement and empathy. | Maintaining the other person's personal space. |
Most of us use these behaviors without thinking. They are just part of our normal interaction. With robots, any rapport-building behaviors would have to be explicitly programmed. Some of them would be easier to implement than others.
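To see why the verbal behaviors are among the easier ones to implement, consider this toy sketch (a hypothetical design, not any real cobot API). It layers two behaviors from the table above, thanking the co-worker and using inclusive speech, onto an otherwise terse status message.

```python
# A toy sketch (hypothetical, not a real cobot API) of layering verbal
# rapport-building behaviors onto a terse machine status message:
# thanking the co-worker, inclusive speech ("we"), and a full sentence.

def with_rapport(status: str, coworker_name: str = "") -> str:
    greeting = f"Thanks, {coworker_name}. " if coworker_name else "Thanks. "
    return greeting + f"We're making good progress: {status.lower()}."

# The robot's "natural" terse output might be just "PART 7 INSPECTED";
# the wrapped version reads as a friendly, full-sentence update:
print(with_rapport("Part 7 inspected", "Sam"))
```

The non-verbal behaviors (posture, facial expressions, personal space) are much harder, since they depend on the robot's hardware and perception, which is why string-level changes like this are a plausible first step.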
Of course, not every action helps to build rapport in a team. Some behaviors actively hinder rapport-building.
Behaviors that hinder rapport include:
| Verbal behaviors | Non-verbal behaviors |
| --- | --- |
| Ignoring co-workers' politeness and not responding when thanked. | A closed posture (e.g. arms folded). |
| Ignoring and/or not mitigating criticism from co-workers. | Showing disinterest and not engaging with the co-worker. |
| Giving unusually brief responses to questions and adding no personal information. | A neutral or uninterested facial expression. |
| Use of aggressive or derogatory techniques (e.g. insincerity, sarcasm, questioning the person's abilities, etc). | Maintaining a socially awkward physical distance from the co-worker. |
You can probably think of people you have encountered in the past who engaged in some of these behaviors. As a result, these people were probably hard to work with. Robots will need to explicitly avoid these behaviors; otherwise, their human team members may mistrust them.
Truly collaborative robot co-workers will have to use at least some rapport-building behaviors and avoid behaviors which hinder rapport.
This might be quite difficult to achieve.
For example, several of the behaviors that hinder rapport are the natural state for robots. Unless explicitly programmed to interact with people in a friendly way, robots will "ignore" people and not engage with them. Also, we often program robots to give brief responses which sound insincere to the human ear.
Cobots in the not-too-distant future may have to communicate in more human ways in order for their human co-workers to accept working closely with them.
It might seem silly to program an industrial robot to say "please" and "thank you," or to introduce seemingly irrelevant pieces of "personal information" about the robot to aid interaction, since these add nothing to the robot's useful functionality.
However, we are already starting to see this type of rapport building in real-world robotics. Specifically, with self-service checkouts in supermarkets, which are basically service robots.
If you have been in a shop that uses these, you will know that they can be quite annoying. Part of the annoyance comes from the condescending tone of commands like "Place Item in Bagging Area." Over the last few years, supermarkets have been trying to change the interactions and voices of self-service checkouts to make them less "irritating and bossy" (as shoppers describe them).
For truly collaborative robots, these human factors and rapport-building behaviors will be equally as important as any technological advancements.
Which behaviors would annoy you in a robot co-worker? Which behaviors would endear you to them? Tell us in the comments below or join the discussion on LinkedIn, Twitter, Facebook or the DoF professional robotics community.