The Human Brain VS the Digital Brain - A Case for Visual Inspections

by Audrey Boucher-Genesse. Posted on Feb 16, 2016 7:00 AM. Last updated on May 05, 2016 4:23 PM. 7 min read time

I would like to begin by telling you how happy I am to contribute to this blog, whose goal is to demystify automation and help people take their first steps in automation. Hopefully, after reading this article, you will have learned a bit more about automating a visual inspection process.

I was once at a conference on image processing and the speaker discussed the perception of color. He spoke about a conversation with a potential client who said “well, it’s pretty easy: the automated visual system just has to check if the part is green or not”. Sounds simple enough, right? Now when was the last time you had to decide on a color to paint a room in your house? What green did you pick: seaweed, khaki, pistachio or cucumber? What about turquoise, is that green? One of the toughest challenges when automating a visual inspection process is to clearly define the boundaries for what is accepted and what is not. The machine does not have your intuition…   

What is this “machine” anyway? There are various options available, but we will define the automated visual inspection system as a non-contact system that detects visual defects and/or checks for desired features. It comprises one or more visual sensors (1D, 2D or 3D) and a data processor (this could be a computer, or the processor could be embedded within the sensor, as is the case with intelligent cameras). The system provides an output (e.g. “good/bad part”). Depending on the complexity of the part, it can also include a handling system (e.g. a robot). This article will focus mainly on 2D-image inspections, which are the most widely used in industry.
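The acquire-process-output loop described above can be sketched in a few lines. This is a purely illustrative skeleton, not a real vision API; all the names (`InspectionResult`, `inspect`, the toy defect test) are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class InspectionResult:
    passed: bool
    reason: str = ""

def inspect(image, has_defect) -> InspectionResult:
    """One inspection cycle: take a sensor image, run the
    processor (has_defect), and emit a good/bad verdict."""
    if has_defect(image):
        return InspectionResult(False, "visual defect detected")
    return InspectionResult(True)

# A trivial "processor" that flags any pixel darker than gray level 10
frame = [[200, 198, 5], [201, 199, 202]]  # toy 2-D grayscale image
dark_pixel = lambda img: any(p < 10 for row in img for p in row)
result = inspect(frame, dark_pixel)
print(result.passed)  # False: the pixel with value 5 is flagged
```

In a real system, `has_defect` would be the image-processing pipeline configured by your integrator; the point here is only the structure: sensor in, verdict out.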

Well, that’s all pretty similar to the way we humans work: a sensor, a processor and a handling system… What is so different then? Quite a few things, actually!


Let’s begin with a small video (it’s only 1:20):

Spoiler alert: watch the video before continuing… otherwise I will spoil it! 

Now what’s the typical situation for an inspection? The inspector is trained and focuses on a specific task. In theory, they should be inspecting all surfaces of a part. In practice, they know what the usual defects are and where to find them. They have a guide that stipulates a certain number of known defects. They know the potential flaws, as these are often based on potential failures from previous processes: a die that has reached its maximum wear; an oven that was not at its optimal temperature and causes cracks, etc. All of these failures can leave marks on the part being inspected; they even have defect names and categories, because they have occurred quite a few times in the past. But what if the defect is something that has never occurred before (or occurs so rarely that this inspector has never seen it)? What if the visual defect is not in its usual location? The human inspector, not because they are not good enough for the job, but simply because they are human, might miss something obvious, like the gorilla (if you didn’t get the gorilla reference, now would be a good time to watch the video… but as mentioned before, I have spoiled it).

An automated visual inspection system that has been trained to inspect all surfaces of a part will always look at all of those surfaces, and will therefore notice this type of “unusual defect”. It most definitely will not be able to classify it, never mind know its potential cause. But it will flag it to the attention of the human inspector, who can then investigate further.

So automated visual inspection systems are always best, right?  … Not so fast. Our human brain is pretty sophisticated; it has multiple capabilities. Let’s look at a few of them.


Let’s say you’re inspecting a part, and the visual defect guide stipulates “any black spot on the gray surface is considered a defect”. I’m giving you 20 parts with various gray shades – some of them will be darker and some of them will be lighter. Would you still be able to detect black marks on them? I’m pretty sure you could. Now give the same parts to an automated system. You will need to specify exactly what is black (geekly speaking, color code 0x000000 is considered black… but what about 0x090909? That also looks pretty dark to me!) versus what is dark gray.
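One common way around this (a sketch, not your integrator's actual algorithm) is to define "black" relative to each part's own background instead of as a fixed cutoff: a spot is defective when it is much darker than the part's median gray. The 0.5 ratio below is an invented illustrative tolerance.

```python
import statistics

def find_dark_spots(pixels, ratio=0.5):
    """Flag pixels darker than `ratio` times the part's median gray,
    so the same rule works on light-gray and dark-gray parts."""
    background = statistics.median(pixels)
    cutoff = background * ratio
    return [i for i, p in enumerate(pixels) if p < cutoff]

light_gray_part = [200, 201, 199, 5, 200]   # one black mark at index 3
dark_gray_part  = [90, 91, 89, 5, 90]       # same mark on a darker part

print(find_dark_spots(light_gray_part))  # [3]
print(find_dark_spots(dark_gray_part))   # [3]
```

A fixed cutoff tuned for the light parts would either miss marks on dark parts or flag the dark parts' entire surface; the relative rule finds the same mark on both.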

Now here’s another example of our super-adaptability: you’ve probably come across security questions when completing a transaction online, like “please prove you’re not a robot”:


Why is that? Well, character recognition is hard for a digital brain, whereas our human brain can accomplish wonderful things: completing a character that has been partially erased or twisted, filling in a word with missing characters… G33z, y0u c4n 3v3n r34d th1s! Talk about adaptable!

OK, so now you’re thinking… is it impossible to automate an inspection process? Not at all! But you can do a lot to limit the need for adaptability, thereby simplifying the automation process. One possible answer is to look at the process flow and perhaps change it. Swapping the order of the inspection and tumbling steps, for example, could be a strategic change: by tumbling the part first, its surface becomes more uniform, making it easier for the machine to inspect.


What is a normal feature and what is a defect? Humans score high at detecting patterns and flagging what is suspicious. 

A human might need only one part to detect a pattern, e.g. aligned holes. 


The machine, on the other hand, would probably flag these holes (and many others) as individual “suspicious black dots” unless it had learned otherwise. The automated visual system will thus need some training in order to know what is considered a normal pattern and what is not (is there a size threshold? a color threshold?). Here again, the concept of adaptability comes in. If you do not use enough samples to train the system, any pattern that is not EXACTLY like the one before it will be flagged by the machine as incorrect. Using many samples is a good way to go, but beware of overtraining with very different parts, as this may cause desensitization.

Let’s look at an Optical Character Recognition (OCR) example: you have taught the system that the pattern to look for is an 8, but that a B is also okay, because you know the left-hand side of the character sometimes has problems being punched correctly. Now let’s say the machine reads a 3… would that be acceptable? The left-hand side is different, but you have trained the system to be less picky on that side, because of known punching problems... That’s a good example of desensitization: the more varied the things you input into the system as “normal”, the less sensitive your system becomes. Bottom line: training the system can be tricky and may need to be done in partnership with the integrator.
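The desensitization trap can be shown with a toy match-within-tolerance rule. The bit patterns below are invented stand-ins for the characters' left edges (not real OCR encodings); the point is that widening what counts as "normal" for an 8 also lets a 3 slip through.

```python
EIGHT = "10101"   # invented toy encodings of each character's
BEE   = "00101"   # left edge, for illustration only
THREE = "00001"

def matches(sample, accepted, tolerance):
    """Accept `sample` if it is within `tolerance` bit flips of any
    pattern the system has been trained to consider normal."""
    def distance(a, b):
        return sum(x != y for x, y in zip(a, b))
    return any(distance(sample, ok) <= tolerance for ok in accepted)

strict = matches(THREE, [EIGHT], tolerance=0)        # rejected
lenient = matches(THREE, [EIGHT, BEE], tolerance=2)  # accepted!
print(strict, lenient)
```

With a strict template, the 3 is correctly rejected; once the system is loosened to also accept badly punched 8s (the B), the 3 falls inside the tolerance and passes as "normal".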



Now let’s get back to our human inspectors. They take a part and turn it around while constantly looking at it. They notice a small black dot on the part. What happens next? Chances are good that they’ll either blow on the part to confirm it’s just dust, or use a fingernail to try to verify its solidity. Sitting down with inspectors and observing the actual process (not the official one written down in the manual) is a good practice when trying to automate a process.


When it comes to repeatability, the digital brain scores higher. This is a key concept that comes up for pretty much any process being automated: the result will be repeatable, whether the part is inspected on a Monday morning or a Friday afternoon.

So what can YOU actually do to automate a visual inspection?

We have seen that the human is very adaptable, good at detecting patterns, and has other tools that can help achieve an accurate inspection. The machine, on the other hand, is reliable and repeatable… so how can we get the best results by combining the two?

Here are a few clues that might help:

  • Choose your integrator carefully: you will be working closely with them to fully understand your inspection process and its variables, train the system, train the inspectors to use the system, etc.
  • Sit down with inspectors and document the actual process. If “blowing on the part” is not written anywhere in the inspection guidelines, but inspectors do it every day because the parts are always dusty when they get them, then an air blower could be integrated into the automated system. 
  • If the parts’ normal surface appearance is highly variable, keep in mind that the machine won’t be as adaptable as you are, and consider reordering some process steps (see our tumbling example above).
  • If possible, define numerically the boundary between what is acceptable and what is not: the maximum defect length, the accepted color range… When you cannot define it numerically, you will have to use more examples to train the system.
  • Include your inspectors in the automation process: they know the real deal about normal parts, defect definitions, variables that will influence the appearance of the part, etc.
  • Train the system with parts from various batches, made on different days, in order to have multiple “normal” surface appearances… The system will thus be adjusted to take some variations into account. Keep in mind, however, that there’s a balance to be reached: you want an accurate system that can be flexible, but not too desensitized.  
  • Work hand-in-hand with the integrator, as a close collaboration will enable a faster delivery, plus more reliable results.
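The "define it numerically" advice from the list above can be made concrete with a small acceptance check. The limits here are invented examples, not real specifications; the value is that explicit numbers judge every part the same way, Monday morning or Friday afternoon.

```python
# Invented example limits -- your real defect guide defines these
MAX_DEFECT_LENGTH_MM = 0.5
ACCEPTED_GRAY_RANGE = (80, 220)   # 8-bit gray levels

def part_is_acceptable(defect_lengths_mm, mean_gray):
    """Accept a part only if every measured defect is short enough
    and the overall surface shade falls in the accepted range."""
    if any(length > MAX_DEFECT_LENGTH_MM for length in defect_lengths_mm):
        return False
    low, high = ACCEPTED_GRAY_RANGE
    return low <= mean_gray <= high

print(part_is_acceptable([0.2, 0.4], mean_gray=150))  # True
print(part_is_acceptable([0.2, 0.7], mean_gray=150))  # False: defect too long
```

When a boundary resists this kind of numeric definition (as with the green/khaki/pistachio question earlier), that is exactly where more training examples, and more integrator involvement, will be needed.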

Now that you have had a chance to understand the pros and cons of a visual system, you can think about how one might be integrated with your other robotic devices (force-torque sensors or grippers, for example) to really automate your system.


Audrey Boucher-Genesse
Audrey aims to demystify automation and robotics for manufacturers. She uses her experience as an application engineer in automated visual inspection to conduct training, write articles and give seminars to raise awareness on how automation can benefit a shop floor.