When implementing a vision system, it is important to understand what constitutes a challenge for this kind of algorithm. This blog post discusses some of those challenges.
The surface finish of the parts will also influence the contrast. If you have relatively dark gray metallic parts, you might have opted for a white background. If the parts are shiny, there is a good chance they’ll reflect light back into the camera, particularly if they are caught at a certain angle. So, depending on the camera angle, you might end up with an almost white part (light has been reflected back to the camera), a relatively dark part, or a mix of the two. This is another challenge for template matching.
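To get a feel for how much a reflection can change a matching score, here is a minimal sketch using OpenCV’s normalized template matching (TM_CCOEFF_NORMED). The “part”, its template, and the simulated highlight are synthetic stand-ins chosen for illustration, not images from a real vision system.

```python
# A minimal sketch (assuming OpenCV and NumPy are installed) showing how a
# specular highlight can change a template-matching score. The "part" and
# the highlight are synthetic stand-ins, not real camera images.
import cv2
import numpy as np

# Synthetic scene: a dark gray square "part" on a white background.
scene = np.full((200, 200), 255, dtype=np.uint8)
scene[70:130, 70:130] = 80                 # dark gray part
template = scene[60:140, 60:140].copy()    # the taught template

# Simulate a shiny reflection washing out part of the object.
reflective_scene = scene.copy()
reflective_scene[70:130, 100:130] = 240    # near-white highlight

for name, img in [("matte", scene), ("reflective", reflective_scene)]:
    result = cv2.matchTemplate(img, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, _ = cv2.minMaxLoc(result)
    print(f"{name}: best match score = {max_val:.2f}")
```

On the matte scene the template matches its own crop almost perfectly, while the simulated highlight pulls the score down, so an acceptance threshold tuned on the bench can start failing once reflections appear on the shop floor.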
If your part’s material is highly reflective, you will need to take into account lighting that comes from outside your vision system: the sunlight coming in from a window, or light flashes from a nearby soldering process. Even if your part’s material is not that reflective, ambient light changes can still influence the shadows created in the background. So make sure you take this into account, and read these tricks to cope with such problems.
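On the software side, one common and generic mitigation is to normalize the image’s illumination before running recognition; the sketch below uses OpenCV’s CLAHE as an example. The filename is hypothetical, and this illustrates the idea rather than replacing controlled lighting or the tricks linked above.

```python
# A minimal sketch, assuming OpenCV is available, of illumination normalization
# with CLAHE (Contrast Limited Adaptive Histogram Equalization). It reduces the
# effect of ambient-light changes before recognition; it does not replace
# controlled lighting.
import cv2

# Hypothetical image of a part captured under uncontrolled ambient light.
image = cv2.imread("part_under_ambient_light.png", cv2.IMREAD_GRAYSCALE)
if image is None:
    raise FileNotFoundError("part_under_ambient_light.png not found")

# Equalize contrast locally, in 8x8 tiles, clipping extreme amplification.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
normalized = clahe.apply(image)

cv2.imwrite("part_normalized.png", normalized)
```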
This is a crucial aspect of template creation and it harkens back to a blog post about OCR. “Let’s look at an Optical Character Recognition (OCR) example: you have taught the system that the pattern to be looked for is 8, but a B would also be okay because you know the left-hand side of the character sometimes has problems being punched correctly. Now let’s say the machine reads a 3… would that be acceptable? The left-hand side is different, but you have trained the system to be less picky on this side, because of known punching problems... That’s a good example of desensitization: the more various things you input into the system as “normal”, the less sensitive your system will become.”
The example here is about the recognition of a specific letter, but the same rules apply to other shapes. If you are too permissive at the teaching stage, the ‘template’ you will be looking for will be less specific and the part recognition might be less reliable. If you have similar part models, you will probably confuse a few of them. Another example is deformable objects. Think of a bath towel that could be presented either folded or as a shapeless blob of cloth. There are so many possible towel configurations that you will probably end up identifying almost anything as a towel.
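To make the “too permissive” trap concrete, here is a minimal sketch, again with OpenCV and synthetic shapes standing in for two similar part models; the threshold values are illustrative assumptions, not recommendations.

```python
# A minimal sketch, assuming OpenCV/NumPy, of how a permissive acceptance
# threshold lets a similar-looking part pass as a match. The shapes are
# synthetic stand-ins for two similar part models.
import cv2
import numpy as np

def make_part(width):
    """Draw a filled rectangle 'part' of a given width on a white canvas."""
    img = np.full((120, 120), 255, dtype=np.uint8)
    cv2.rectangle(img, (30, 40), (30 + width, 80), 0, thickness=-1)
    return img

template = make_part(width=40)       # the taught part model
similar_part = make_part(width=50)   # a slightly different model

result = cv2.matchTemplate(similar_part, template, cv2.TM_CCOEFF_NORMED)
score = cv2.minMaxLoc(result)[1]
print(f"similarity to the wrong model: {score:.2f}")

strict_threshold, permissive_threshold = 0.95, 0.70  # illustrative values
print("accepted with strict threshold:    ", score >= strict_threshold)
print("accepted with permissive threshold:", score >= permissive_threshold)
```

The more you relax the acceptance threshold, or the more variations you teach as “normal”, the more of these near-misses slip through as valid detections.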
Learn more about how to integrate vision systems with collaborative robots by downloading our brand new eBook. It will give you the basic machine vision knowledge you need to figure out what a simple vision application is, and to understand how it differs from a more complex one. So, if you are just starting with vision, or if you think adding vision to your system might solve one of your pet peeves, this eBook is a great place to start.