The Cobot Experience: AJung Moon & Resolving Human-Cobot Resource Conflicts

Written by Emmet Cole | Mar 19, 2020 11:00 AM

Leading roboethicist and human-robot interaction expert AJung Moon talks industrial robot ethics, human-cobot resource conflicts, and bringing cobot technology to workers.

Credit: AJung Moon

AJung Moon is an assistant professor in the Department of Electrical & Computer Engineering at McGill University in Montreal, Canada, and the founder and director of the Open Roboethics Institute.

An experimental roboticist with a particular focus on human-robot interaction, Moon also serves as a member of the Government of Canada Advisory Council on Artificial Intelligence and as an executive committee member of The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems.

Moon's primary research interests include investigating how to create fluent collaborative interactions between humans and robots in manufacturing settings and exploring the wider societal and ethical implications of robotics and artificial intelligence.

Moon kindly agreed to be interviewed about her work, including the intersection of ethics and industrial robotics. 

 

The Interview

Where do the worlds of industrial automation and ethics intersect?

We tend to look at robots as stand-alone technologies that are not necessarily framed within the bigger structure of things, but there are many different ways in which the worlds of industrial automation and ethics intersect.

For example, research has found that people tend to match the pace of a robot they are working with, much as people naturally fall into step when walking side by side. This is useful for collaboration and serves social functions for people.

So, if a robot increases its speed, ever so subtly, so that you instinctively speed up to keep pace, that could help boost the overall production speed on the factory floor.  But what are the long-term implications for that worker ergonomically, psychologically, and otherwise? What kind of machine influence on people is acceptable in our workplace, and when does it become harmful?

On the societal level, there are questions around labor law, specifically the influence that automated technologies have on workers' rights and worker safety.  There are also ecological issues to consider.

****

Moon's research at UBC's Collaborative Advanced Robotics and Intelligent Systems (CARIS) Lab included work on the influence of gaze and eye contact on human-robot object handovers...

****

Given the fact that cobots are intended to enhance rather than replace human labor, is it fair to say that cobots represent an ethical innovation, or at least that they represent the ethical face of innovation in industrial automation?

Automation technologies, including cobots, have the capacity to really improve working conditions, replacing certain repetitive tasks and reducing physical stress and risk of injury, and that serves a positive purpose.

But it's not just the intention of the technology that determines whether the technology is an ethical innovation. It's also about the process and the implementation.

For example, we talk about 'AI for Good' with the implication that any kind of AI built for 'good' purposes must be good. But that's not really the case if you create systems and implementations that do not uphold people's human rights and various ethical standards, both in the design of the technology and in how it is deployed.

The same thing applies to cobots: they can be implemented in many different ways.

For example, we think about cobots being for larger companies and SMEs, but I'm interested in investigating whether cobots can give more power to individuals working from home.

Some cobots are light enough to be carried.  Can we use cobots to help manual workers take advantage of flexible working environments, such as working from home and flexible working hours, so they can reap the full financial benefits created by cobot technology?

I would like to find ways for us to think a bit more creatively and differently about the uses of these cobots in creating better working situations for individuals and not just large corporations.

****

Moon founded the Open Roboethics Institute (ORI) in 2012 as a student-led research collaboration between the University of British Columbia in Canada and researchers from the IEIIT research lab in Genova, Italy.  Recognizing that the spread of robotics is generating new questions and challenges, the ORI explores ethical and legal issues around AI and robotics, including autonomous cars, lethal autonomous weapons, and social and domestic robots.

At the BC Tech Summit in 2018, Moon discussed the question “Should we fear the robots?” with Tyler Orton, a journalist from Business in Vancouver.

****

What advice do you have for cobot end-users, especially those that may not have worked with a robot before?

I would recommend that the person ensure a) that the robot is a proper cobot, meaning that it is safe to interact with, and b) that they voice any opinions, concerns, and uncertainties they might have about working with cobots. Some concerns may be dispelled by learning more about the cobot's limitations and capabilities.

Educating workers about the corporation's broader strategy is part of managing that process as well. Other concerns may raise important issues that we hadn't considered at the design or deployment stage of the system, and addressing them can make the difference between a successful and a failed cobot deployment.

And because this is such a new phenomenon within the manufacturing environment, the faster we can close the feedback loop between users, technologists, management, and policy makers, the better the environment will be.

Where do you draw the line between industrial and collaborative robots these days?

It comes down to physical barriers. If it is safe to collaborate without the need for a barrier between you and the robot, then I am happy to classify that as a cobot.

Are end-users generally best served by viewing their cobot as a colleague, a tool, a form of prosthesis, or some other category?

I would say a tool, definitely. People may call it whatever they want, including a "colleague." But I think that if we think of it as a tool, then we're able to exercise a greater sense of freedom and are better able to take creative advantage of its capabilities.

Taking creative advantage of a “colleague” does not sit as well with me.

What can you share about your future research plans in the area of human-cobot interaction?

I've recently taken a position as a professor at McGill University, where my ambition is to study and quantify the factors that influence the interactions between robots and people. I'm particularly interested in studying negotiative dynamics between humans and robots.

So, for example, I'm researching how to build systems that enable humans and robots to naturally resolve resource conflicts using communicative cues.

Resource conflicts? Is that when a human and a robot reach for the same object at the same time?

Yes.  Sometimes people are faster than the robot or the other way around, while sharing the same space or objects.  Occasionally, they both reach for the same thing and need to figure out who should get the right of way or the object first.

My previous research found that when two humans reach for the same object at the same time, there are a lot of hesitations and jerky motions, as each individual decides whether to yield or to grab the object.


We also found that when we replaced one of the humans with a robot designed to be very persistent, people tended to yield to the robot.

Interesting! In general, aren't cobots typically set up to yield and come to a stop when any possible conflict/collision with a human might occur?

They often are. For safety reasons, many robots in industrial settings are designed to trigger what we call an “emergency stop response,” which means the robot stops immediately, and often abruptly, when someone seems to be in the way of where the robot is trying to go.
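To make that baseline concrete, here is a minimal sketch of such a stop-on-proximity response. The controller interface (`next_waypoint`, `stop`, `move_toward`) and the safety radius are hypothetical, chosen for illustration only; this is not any particular cobot's API.

```python
# Hypothetical sketch of the "emergency stop response" described above:
# the robot halts abruptly as soon as a person is detected near its path.

SAFETY_RADIUS_M = 0.5  # assumed protective separation distance (illustrative)

def euclidean(a, b):
    """Straight-line distance between two points given as coordinate tuples."""
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5

def control_step(robot, person_position):
    """One control cycle: stop if the person is too close, else keep moving."""
    waypoint = robot.next_waypoint()          # hypothetical controller call
    if euclidean(waypoint, person_position) < SAFETY_RADIUS_M:
        robot.stop()                          # immediate, often abrupt, halt
    else:
        robot.move_toward(waypoint)
```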

My studies show that a human-robot pair finishes collaborative tasks much faster when the robot hesitates and negotiates its way around the conflict with the person than when the robot always yields to the person. And we were able to do that without jeopardizing people's safety.

If cobots are programmed to yield to humans and humans instinctively yield to robots, how can handovers take place at all?! I'm picturing a scenario with two extremely polite people stuck forever in a doorway, each insisting that the other go through first...

That’s a great question!  In the type of robot behaviour I designed, you can tune how readily the robot yields to the person or persistently negotiates for its right of way.  But I have yet to study the effects of changing these parameters.
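As a rough sketch of what such a tunable behaviour might look like, the following illustrates a single `persistence` parameter governing how readily the robot yields. The controller interface (`conflict_detected`, `hesitate`, `resume_reach`, `yield_and_wait`) is hypothetical, and this is a sketch of the idea, not Moon's actual implementation.

```python
import random

def negotiate_conflict(robot, persistence=0.7):
    """Hypothetical hesitation-based negotiation over a shared object.

    `persistence` in [0, 1] tunes how readily the robot yields (near 0.0)
    versus how persistently it negotiates for right of way (near 1.0).
    """
    while robot.conflict_detected():          # hypothetical sensing call
        robot.hesitate()                      # slow, slightly retracting motion,
                                              # used as a communicative cue
        if random.random() < persistence:
            robot.resume_reach()              # re-assert intent to proceed
        else:
            robot.yield_and_wait()            # defer to the person
            break
```

Setting `persistence` near zero recovers the always-yield behaviour described earlier, while values near one approximate a very persistent robot.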

I'm really interested in that negotiative dynamic, where communicative cues can be used so that the robot doesn't always yield (a robot that always yields is, by the way, highly frustrating to interact with) but is able to really work with a person and in support of that person.

That's the research direction I am headed in for now. I'm hoping that the entire human-robot collaboration community will accept the notion that these robotic systems affect people's actions and decisions and should be designed with this in mind.

 

(Note: The interview was edited for length and clarity. It was conducted for educational purposes and the views expressed therein are those of the expert and do not necessarily reflect those of Robotiq.)  

****

The Cobot Experience explores the human side of human-robot collaboration through a series of interviews with thought leaders in collaborative robotics, human-robot interaction, industrial safety, advanced manufacturing, and related topics.