Chances are that one of your future colleagues will be a collaborative robot. So it might be good to know how such a robot senses its surroundings – and you, its human colleague.
The reason why more robots are being integrated in factories is simple maths. More and more companies are doing the calculations and finding that collaborative robots offer a very quick return on investment and add great value to the company. Not only do they drive up efficiency, but also employee happiness, by taking on boring, repetitive tasks.
Collaborative robots are cheaper than their cousins in the industrial robot space, and they reach almost plug-and-play levels of user-friendliness. They are well and truly within the economic and technical reach of small and medium-sized enterprises.
While your classic flesh-and-blood colleagues rely on the five senses that are the standard specifications of human beings, robots use a number of different ways to sense and interpret their environment. This includes how they ‘see’ human beings. It involves the likes of cameras and force sensors, as well as senses that would normally be associated with the realm of superheroes.
Together, they help determine how a collaborative robot operates in open industrial environments and interacts with human beings.
“We see a lot of rapid progress both from the sensors and from the perception software. In both cases, we will soon reach a price-performance point that will make these add-ons part of many collaborative robot installations. This will give the robot the ability to do more things by itself and improve how it interacts with its co-workers,” Mathieu Bélanger-Barrette, production engineer at Robotiq, says.
Perhaps the best way of presenting how a collaborative robot sees its surroundings – and you – is to compare its senses to a couple of our human ones – as well as those from the realm of superheroes.
Humans are a race of twos. We are bipedal, have two manipulators with preinstalled end effectors, two audio receptors, two smell sensors – and we have binocular vision.
The specs for the latter (pun intended) are pretty impressive. The same goes for collaborative robots, though.
Robots, on the other hand, are often equipped with at most a single visual sensor. Some, such as Rethink Robotics’ Baxter and ABB’s YuMi, come equipped with a camera, while others do not.
The camera – or cameras – is used by a robot to generate either 2D or 3D images.
2D cameras have traditionally been mounted in stationary positions and register what comes into their field of view. Imagine a camera that scans a conveyor belt for shapes that it recognizes. Once the camera finds such a shape, it can trigger different actions from the robot, such as picking up the object.
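To make that trigger logic concrete, here is a minimal sketch. It assumes an OpenCV-based camera feed, a stored template image of the part, and a hypothetical `pick()` helper standing in for your own robot integration – none of these are part of any specific vendor’s API.

```python
import cv2

def pick(location):
    """Hypothetical stand-in for your robot integration layer."""
    print(f"Picking object near {location}")

# A stored grayscale image of the shape we want to recognize (assumption).
template = cv2.imread("part_template.png", cv2.IMREAD_GRAYSCALE)
cap = cv2.VideoCapture(0)  # stationary 2D camera watching the conveyor

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Search the field of view for the known shape.
    scores = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
    _, best, _, location = cv2.minMaxLoc(scores)
    if best > 0.8:  # shape recognized with enough confidence
        pick(location)  # trigger the robot action
```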
3D camera setups – which are also usually stationary – add depth to what a robot sees. This enables the robot to perform tasks such as sorting through a pile of different objects.
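A rough sketch of what depth adds: with a 3D camera, the robot can find the top of a pile simply by looking for the point closest to the camera. The random array below merely stands in for a real depth frame.

```python
import numpy as np

# `depth` stands in for one frame from a stationary 3D camera; values are
# distances from the camera in meters, so smaller means closer to the camera
# (i.e. higher in the pile).
depth = np.random.uniform(0.8, 1.2, size=(480, 640))

# The pixel with the smallest depth marks the top of the pile.
row, col = np.unravel_index(np.argmin(depth), depth.shape)
print(f"topmost object near pixel ({col}, {row}) at {depth[row, col]:.3f} m")
# A real cell would convert this pixel into robot coordinates and pick there.
```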
There is also a third way: placing a 2D camera on the arm of the robot itself. This allows the robot’s field of vision to travel wherever the arm does.
Robotiq recently launched the Robotiq UR+Camera, which lets companies add a Robotiq Wrist Camera to a Universal Robots arm. This allows you to easily add a 2D smart camera with an integrated light source to your UR robot.
Jumping into the realm of superpowers, some collaborative robots also use the likes of lasers and infrared sensors. While some of these help a robot perform work tasks, they usually serve as safety features.
A collaborative robot can use lasers and infrared sensors to tell that there is a presence around it. That presence is most often you, its human colleague. This is often a 360-degree feature, giving the robot the equivalent of eyes in the back of its head.
The systems can be configured to slow down or even stop a robot once a worker enters a certain area.
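The logic behind this is straightforward zone-based speed scaling. Here is a minimal sketch, where `get_nearest_human_distance()` stands in for a reading from the robot’s 360-degree laser/infrared scanner, and the zone sizes and speed fractions are illustrative rather than taken from any standard.

```python
import random

def get_nearest_human_distance() -> float:
    """Stand-in for a reading from a 360-degree laser/IR safety scanner."""
    return random.uniform(0.0, 3.0)  # meters; replace with real sensor data

STOP_ZONE = 0.5  # m: protective stop when a worker is this close (illustrative)
SLOW_ZONE = 1.5  # m: reduced speed when a worker is this close (illustrative)

def speed_fraction(distance: float) -> float:
    """Map the distance to the nearest human onto a speed fraction."""
    if distance < STOP_ZONE:
        return 0.0   # stop completely
    if distance < SLOW_ZONE:
        return 0.25  # crawl while the worker is nearby
    return 1.0       # full programmed speed

for _ in range(5):
    d = get_nearest_human_distance()
    print(f"human at {d:.2f} m -> speed fraction {speed_fraction(d):.2f}")
```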
This is referred to as speed and separation monitoring. It is a subject that is covered in depth in a number of articles on the Robotiq blog and in the company’s free eBooks.
‘Use the force, Luke.’
Exchange ‘Luke’ for ‘collaborative robot’ and we are dealing with one of the most important robot senses – both for robot/human interaction and for what a collaborative robot can do.
The ANSI/RIA R15.06-2012 standard for collaborative robots defines a number of safety requirements that ensure that robots and humans can interact without risk of injury. For example, it states how fast a robot should move and how much resistance should cause it to stop. The resistance recommendations are based on force, expressed in newtons.
Imagine that you wander into the reach of a collaborative robot’s arm as it is working. It moves towards you and accidentally pushes into you. It registers – or ‘feels’ – that it has encountered something that it could risk damaging – or being damaged by – and stops. Since it was moving slowly, you receive no more than a soft push from the robot.
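In code, that protective stop boils down to a threshold check. The 150 N figure below is purely illustrative (the actual limit should come from your risk assessment and the applicable standard), and `read_external_force()` stands in for the controller’s force estimate.

```python
FORCE_LIMIT_N = 150.0  # illustrative only; take real limits from your risk assessment

def read_external_force() -> float:
    """Stand-in for the controller's estimate of external force (newtons)."""
    return 160.0  # replace with a real sensor/controller reading

def protective_stop() -> None:
    print("Contact detected: protective stop")

# The robot 'feels' unexpected resistance and halts before causing harm.
if read_external_force() > FORCE_LIMIT_N:
    protective_stop()
```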
Force is also a deciding factor in which tasks a collaborative robot is capable of performing.
When we humans pick up an object, we use force feedback to determine whether the object is soft or hard, and to decide how firmly to hold it.
Traditionally, the lack of force and tactile detection in end effectors has limited the use cases for collaborative robots.
However, this has changed in recent years.
For example, Robotiq offers a number of Force Torque Sensors. These plug-and-play add-ons for Universal Robots solutions instantly give a collaborative robot the ability to pick up and manipulate fragile parts that previous generations of robots would likely have damaged.
The force torque sensors allow your robot to perform tasks such as precision part assembly and product testing, and to take on new parts of the production process.
“With a force sensor, it is much easier to program a robot to insert a part into another, for example, knowing when it reaches the bottom of the host part. It is also easier to perform polishing or grinding, as it allows the robot to apply a constant force on the object it's polishing or grinding,” Mathieu Bélanger-Barrette says.
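As a rough illustration of that last point, a constant-force routine can be as simple as a proportional loop: measure the contact force, compare it to a target, and nudge the tool accordingly. Everything below (the target, the gain, the modeled surface stiffness) is illustrative, not a real Robotiq or Universal Robots API.

```python
TARGET_FORCE_N = 10.0  # desired contact force against the part (illustrative)
GAIN = 2e-5            # meters of tool adjustment per newton of error (illustrative)

def control_step(measured_force: float) -> float:
    """Return a small tool displacement (m) that drives the force toward target."""
    error = TARGET_FORCE_N - measured_force
    return GAIN * error  # press in if the force is too low, back off if too high

# Simulate a few control steps against a surface modeled as a simple spring.
STIFFNESS = 20000.0  # N/m, illustrative contact stiffness
depth = 0.0          # how far the tool presses into the surface (m)
for _ in range(10):
    force = STIFFNESS * depth     # force the sensor would measure
    depth += control_step(force)  # nudge the tool
    print(f"force {force:5.1f} N -> depth {depth * 1000:.3f} mm")
```

Run it and the measured force settles toward the 10 N target – the same principle, at a much higher level of sophistication, that lets a robot hold a constant force while polishing or grinding.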