
Challenges of a Vision System for Template Matching

by Amanda Lee.
Posted on Nov 21, 2016 7:00 AM. 4 min read time

When implementing a vision system, it is important to understand what poses a challenge for this kind of algorithm. This blog post discusses some of those challenges.

Parts Overlap or Touch Each Other

Non-overlapping parts are, of course, easier to recognize than overlapping ones. That doesn't mean it's impossible to detect overlapping parts, but it is more of a challenge for the algorithm. In terms of reliability, you have a better chance of recognizing non-overlapping parts every single time. With overlapping parts, you might end up with contour detection that gives you multiple possibilities. Moreover, if there is no contrast between the two parts, the camera will have a hard time determining which one is on top of the other, so the robot will not know which one to pick up first.
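To make the overlap problem concrete, here is a minimal sketch in Python with OpenCV (an assumption; the post does not prescribe a library). It builds two synthetic scenes and counts external contours: separated parts each produce their own contour, while touching parts merge into a single blob that the matcher then has to untangle.

```python
import cv2
import numpy as np

def count_parts(gray):
    """Binarize the scene and count external contours (candidate parts)."""
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return len(contours)

# Two dark parts on a white background, well separated.
apart = np.full((200, 300), 255, np.uint8)
cv2.rectangle(apart, (40, 60), (110, 140), 40, -1)
cv2.rectangle(apart, (160, 60), (230, 140), 40, -1)

# Same two parts, but touching.
touching = np.full((200, 300), 255, np.uint8)
cv2.rectangle(touching, (40, 60), (110, 140), 40, -1)
cv2.rectangle(touching, (110, 60), (180, 140), 40, -1)

print(count_parts(apart))     # 2 -- each part has its own contour
print(count_parts(touching))  # 1 -- the two parts merge into one blob
```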

Insufficient Contrast

If you want to detect edges easily, or recognize a variation in grayscale, you obviously need good contrast. The background should therefore be as far away as possible, in grayscale terms, from the parts themselves. Simply put: if you want to recognize black parts, use a white or clear background.
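As a quick sanity check, a sketch along these lines (Python + OpenCV assumed; scene.png is a hypothetical image, and the 60-level minimum is an assumption to tune) can estimate whether the grayscale gap between parts and background is wide enough before you invest time tuning the matcher:

```python
import cv2

gray = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file

# Otsu picks the threshold that best separates two grayscale populations.
thresh, binary = cv2.threshold(gray, 0, 255,
                               cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

part = gray[binary == 255]
background = gray[binary == 0]
gap = abs(float(part.mean()) - float(background.mean()))

if gap < 60:  # assumption: demand a wide grayscale gap
    print(f"Low contrast ({gap:.0f} gray levels): consider a lighter background")
else:
    print(f"Contrast looks fine ({gap:.0f} gray levels, threshold {thresh:.0f})")
```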

The surface finish of the parts also influences contrast. Say you have relatively dark gray metallic parts and have opted for a white background. If the parts are shiny, there is a good chance they will reflect light back into the camera, particularly when caught at a certain angle. So, depending on the camera angle, you might end up with an almost white part (light reflected straight back to the camera), a relatively dark part, or a mix of the two. This is another challenge for template matching.
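One pragmatic response is to detect when reflections are polluting the image. The sketch below (the near-saturation level of 250 is an assumption) reports what fraction of pixels are blown out, which could trigger a re-light or a camera-angle change before matching is even attempted:

```python
import numpy as np

def specular_fraction(gray, saturation_level=250):
    """Fraction of pixels at or above the near-saturation level."""
    return float((gray >= saturation_level).mean())

# Synthetic example: a mid-gray part with one shiny highlight.
gray = np.full((100, 100), 120, np.uint8)
gray[20:40, 20:40] = 255  # specular reflection blows these pixels out
print(f"{specular_fraction(gray):.1%} of pixels are saturated")
```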

Outside Lighting Influence

If your part's material is highly reflective, you will need to take into account lighting that comes from outside your vision system: sunshine coming in through a window, for example, or light flashes from a nearby soldering process. And even if your part's material is not that reflective, changes in ambient light can still influence the shadows created in the background. So make sure you take this into account, and read these tricks to cope with such problems.
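When ambient light cannot be fully controlled, local (adaptive) thresholding is one common mitigation: each pixel is compared against the mean of its own neighborhood rather than against one global value, so a slow brightness gradient across the scene does not sweep half the background in with the part. A hedged sketch, with the window size and offset as assumptions to tune (the window should comfortably exceed the part size):

```python
import cv2
import numpy as np

# Synthetic scene: a dark part on a background with a strong left-to-right
# brightness gradient, imitating uneven ambient light.
scene = np.tile(np.linspace(120, 255, 300).astype(np.uint8), (200, 1))
cv2.rectangle(scene, (30, 60), (90, 140), 60, -1)  # the part, in the dim corner

# threshold = (local mean over a 151x151 window) - 15; both values are
# assumptions to tune for your own setup.
binary = cv2.adaptiveThreshold(scene, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                               cv2.THRESH_BINARY_INV, 151, 15)

contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
print(f"{len(contours)} candidate part(s) found despite the light gradient")
```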

Shadowing

Seeing an object together with its shadow creates another visual challenge! This can still be managed, but it may reduce the robustness of the part recognition program. It is important that the template image you are using is as free from shadowing as possible, so that the correct template is stored in your library. Otherwise, you end up looking for a part plus its shadow, and the program won't find anything that resembles this, since the shadow will have changed. One way to compensate for shadows at teaching time is to use multiple images of the same object, taken at various angles: take one image, rotate the part, take another image, and so on. The 'template image' is then built from a combination of all of these images.
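A minimal sketch of that multi-image idea, using OpenCV's standard cv2.matchTemplate (the templates/part_*.png file layout and scene.png are assumptions): every stored view of the part is tried against the scene and the best normalized score wins, so a shadow baked into any single shot matters less.

```python
import glob
import cv2

def best_match(scene_gray, template_paths):
    """Try every stored view of the part; return the best (score, location, path)."""
    best = (-1.0, None, None)
    for path in template_paths:
        template = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        result = cv2.matchTemplate(scene_gray, template, cv2.TM_CCOEFF_NORMED)
        _, score, _, location = cv2.minMaxLoc(result)
        if score > best[0]:
            best = (score, location, path)
    return best

scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
score, location, path = best_match(scene, sorted(glob.glob("templates/part_*.png")))
print(f"Best score {score:.2f} at {location}, using {path}")
```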

Too Much Variation Between Parts

This is a crucial aspect of template creation, and it harkens back to an earlier blog post about OCR: “Let’s look at an Optical Character Recognition (OCR) example: you have taught the system that the pattern to be looked for is 8, but a B would also be okay because you know the left-hand side of the character sometimes has problems being punched correctly. Now let’s say the machine reads a 3… would that be acceptable? The left-hand side is different, but you have trained the system to be less picky on this side, because of known punching problems... That’s a good example of desensitization: the more various things you input into the system as “normal”, the less sensitive your system will become.”

The example here is about recognizing a specific letter, but the same rules apply to other shapes. If you are too permissive at the teaching stage, the 'template' you will be looking for will be less specific, and part recognition may be less reliable. If you have similar part models, you will probably confuse a few of them. Deformable objects are another example: think of a bath towel that could be presented either folded or as a shapeless blob of cloth. There are so many possible towel configurations that you would probably end up identifying anything as a towel.
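In template-matching terms, this permissiveness usually surfaces as the acceptance threshold on the match score. A short sketch (the 0.80 value is an assumption to calibrate against both known-good and known-bad scenes):

```python
import cv2

MATCH_THRESHOLD = 0.80  # assumption: lower = more permissive, more false positives

def accept_part(scene_gray, template_gray, threshold=MATCH_THRESHOLD):
    """Return (accepted, best score) for the template's best hit in the scene."""
    result = cv2.matchTemplate(scene_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    _, score, _, _ = cv2.minMaxLoc(result)
    return score >= threshold, score
```

Dropping the threshold to tolerate known defects (the '8 versus B' case above) widens what counts as 'normal', which is exactly the desensitization the quote warns about.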

Learn more about how to integrate vision systems with collaborative robots by downloading our brand new eBook. It covers the basics of machine vision to help you figure out exactly what a simple vision application is, and to understand how this type of application differs from a more complex one. So if you are just starting out with vision, or you think adding vision to your system might solve one of your pet peeves, this eBook is a great place to start.

 


Written by Amanda Lee
As an e-marketing coordinator, Amanda focuses on Robotiq's content management. With her background in marketing, she hopes to bring valuable, relevant, and consistent content to our audience through the blog, social media and video center.
