Your robot vision just won't detect your object! What's wrong with it?! With this unusual trick, often used by experts, you can improve your vision teaching in a flash.
Imagine that you are using robot vision to recognize an object. But there's a problem! When you ran the teaching mode, there was something unusual about the object's outline in the camera image. Perhaps the object's shiny surface had a bright spot, or the camera picked up too many fine features. Now, whenever you try to detect objects with the vision system, it fails to recognize the object because it looks different from the taught model.
This is a common problem in robot vision.
Ideally, you want to teach your vision system with a model that is accurate and contains only the necessary features. If the taught model has flaws, the object's outline will be harder to detect, which reduces the effectiveness of the whole system.
But, there is an unusual trick you can use to improve the teaching mode and make it more robust.
I'll reveal the trick in a moment, but first let's look at what causes robot vision teaching to fail…
Robot vision faces various tricky challenges, including occlusion, deformation, and changes in scale. These affect the vision system in both the teaching and detection phases of operation. However, they are more problematic during the teaching phase because they distort the model that the robot uses to detect objects.
To understand why the teaching phase is so important, it helps to know a little about how template matching works, as it is the cornerstone of object detection. We wrote an entire article about this called How Template Matching Works in Robot Vision, but here are the basic details.
Template matching involves training your vision system with a "template image" of the object you want to detect. You usually train this template image by pointing the vision sensor at your object and running the system's teaching mode. Later, during the detection phase, the vision algorithm will look for areas within the image which are similar to that template image.
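To give you a rough idea of what's going on under the hood, here's a tiny Python sketch using OpenCV's matchTemplate function. It's only an illustration (your vision system almost certainly does something more sophisticated), and the file names and score threshold are made up:

```python
import cv2

# Teaching phase: the template image saved by the teaching mode.
template = cv2.imread("taught_template.png", cv2.IMREAD_GRAYSCALE)

# Detection phase: search a new camera frame for the best-matching region.
scene = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE)
result = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
_, best_score, _, top_left = cv2.minMaxLoc(result)

h, w = template.shape
if best_score > 0.8:  # acceptance threshold is illustrative
    print(f"Object found at {top_left}, size {w}x{h}, score {best_score:.2f}")
else:
    print(f"No reliable match: best score was only {best_score:.2f}")
```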
If the template image is flawed, the system will not be able to detect the object.
Here are three common factors which result in bad template images, all of which can be solved by the unusual trick I'm about to reveal to you:

1. Reflections and glare: shiny surfaces create bright spots that get baked into the taught model.
2. Excess detail: the camera picks up fine features or background clutter that aren't part of the object's true outline.
3. Low contrast: the object blends into its background, so the taught outline comes out incomplete or noisy.
Thankfully, there is a handy trick that you can use to improve the quality of your template image.
It is a slightly unusual trick…
Don't train the vision system using the object you want to detect!
Huh?
Surely that doesn't make sense?
Surely you need to train the vision system with the object you want it to detect?
Not necessarily.
Let me explain. Most robot vision systems detect objects using 2D images captured by a camera. This means that the template image is also two-dimensional. As a result, you don't actually need the physical 3D object to train the system.
What can you use instead? A printed drawing of the object, for example.
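To see why this works, here's a quick sketch, again in Python with OpenCV, of a simplified teaching step. The file names and Canny thresholds are placeholders; the point is that the taught model is built from a 2D image, and it doesn't matter where that image came from:

```python
import cv2
import numpy as np

def teach_model(image_path: str) -> np.ndarray:
    """Reduce any 2D image to an edge map that can act as the taught model."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    return cv2.Canny(blurred, 50, 150)  # thresholds are illustrative

# The teaching step is identical whether the image is a camera grab of the
# real part or an exported drawing of it (hypothetical file names).
model_from_part = teach_model("camera_grab_of_part.png")
model_from_drawing = teach_model("exported_cad_drawing.png")
```

Either way, the system ends up with nothing more than a 2D outline, which is exactly what template matching needs.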
This is a technique that our integration coaches often use to combat the challenges they encounter when teaching robot vision. In the past, for example, they've used it to overcome lighting issues with shiny objects and to teach objects which are too big to move.
The process for using a 2D image to teach your robot vision system is quite simple. It starts by turning your CAD model into a 2D engineering drawing.
Then, follow these 5 steps:

1. Export a 2D drawing of the object from your CAD package.
2. Scale the drawing 1:1 so the printed outline matches the real object's dimensions.
3. Print the drawing and place it flat in the camera's field of view, at the working distance you'll use in production.
4. Run the vision system's teaching mode on the printed drawing instead of the real object, cleaning up any unnecessary features from the model.
5. Validate the taught model by running a detection test on the real object (a rough sketch of this idea follows below).
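For that last step, one simple way to sanity-check the taught outline against the real part is to compare their contours. Here's a rough sketch of that idea in Python with OpenCV; it isn't how any particular vision package validates a model, and the file names and preprocessing values are placeholders:

```python
import cv2

def largest_contour(image_path: str):
    """Extract the most prominent outline from an image (preprocessing is illustrative)."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea)

taught = largest_contour("exported_cad_drawing.png")   # model taught from the printout
observed = largest_contour("live_camera_frame.png")    # the real part under the camera

# matchShapes returns a dissimilarity measure: 0.0 means identical outlines.
score = cv2.matchShapes(taught, observed, cv2.CONTOURS_MATCH_I1, 0.0)
print(f"Outline dissimilarity: {score:.3f} (lower is better)")
```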
By the way, you don't have to limit yourself to 2D drawings. Sometimes a 3D-printed version of the object can overcome teaching issues like shiny surfaces and low background contrast.
With this unusual trick, often used by experts, you can easily overcome many of the common teaching problems with robot vision systems!
What issues have you had in the past with robot vision? Tell us in the comments below or join the discussion on LinkedIn, Twitter, Facebook or the DoF professional robotics community.