Why did the internet go crazy for a blue dress last year? What does it have to do with robot vision? We find out why color vision can be tricky for both humans and robots. We also give a simple answer to the question: Should I get a color sensor for my robot?
Last year, there was a huge online controversy about a particular blue dress. Did you see it? The dress was controversial because nobody could agree what color it was. Some people insisted that it was blue and black. Others were sure that it was gold and white. The internet went crazy. I myself experienced a few bizarre discussions, where most people in a group said the dress was blue while one or two people got very angry because "It's obviously gold!"
Who was right? In the end it turned out that the dress in the photo was, surprisingly, blue and orange. The real dress was definitely blue and black. Apparently, people saw different colors because their brains were color-correcting the photo differently. Which colors they saw depended on their unconscious assumptions about the ambient lighting in the photo.
The interesting thing about the dress is that it shows how subjective our perception of color is. Robot vision systems wouldn't make such mistakes… or would they?
Welcome to the complex world of color machine vision! In this article, we'll find out what color vision is and why it's not always better than black and white.
If I had written this article 20 years ago, I would probably have told you: color vision is very difficult for robotics. That's not true any more. In the past, color vision meant long processing times, costly equipment and expensive lighting setups. The technology has advanced a lot since then.
These days, color vision systems are similar in cost to monochrome (black and white) vision systems. Color still requires more processing than monochrome, but thanks to improved computing power this processing can be done very quickly, depending on the complexity of your program, of course.
With those barriers gone, does that mean color vision is better?
The answer, as you might have guessed from my introduction, is no.
Color vision is not inherently better than monochrome vision. Sometimes, color can cause unnecessary problems. Other times, color is vital for the task. In order to understand why color vision is not perfect, we need to understand how it works.
Most cameras for robot vision use CMOS or CCD sensors. These contain an array of small, light-sensitive cells. When light hits the sensor, each cell detects it with an intensity that depends on the brightness of the light. Monochrome sensors have only one type of sensing cell, which detects all wavelengths of light, but color sensors are filtered so that each cell admits only certain wavelengths. A mosaic pattern of color filters alternates between red, green, and blue. When green light hits the sensor, only the green-filtered sensing cells will detect it.
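To make the mosaic idea concrete, here is a minimal sketch (not the implementation of any real sensor) of how a common "RGGB" Bayer-style filter pattern samples a scene. Each cell sits behind one filter and records only that channel's intensity; the function names and the toy scene are illustrative assumptions.

```python
def bayer_filter_at(row, col):
    """Return which filter (R, G, or B) covers the cell at (row, col)
    in a repeating 2x2 RGGB mosaic."""
    if row % 2 == 0:
        return "R" if col % 2 == 0 else "G"
    return "G" if col % 2 == 0 else "B"

def sample_scene(scene):
    """Simulate raw sensor output: each cell keeps only the intensity
    matching its own filter color and discards the other two."""
    raw = []
    for r, scene_row in enumerate(scene):
        raw_row = []
        for c, (red, green, blue) in enumerate(scene_row):
            f = bayer_filter_at(r, c)
            raw_row.append({"R": red, "G": green, "B": blue}[f])
        raw.append(raw_row)
    return raw

# A uniformly green scene: only the green-filtered cells respond.
scene = [[(0, 200, 0)] * 4 for _ in range(4)]
raw = sample_scene(scene)
# First row alternates R, G, R, G -> [0, 200, 0, 200]
```

Real cameras then "demosaic" this raw pattern, interpolating the two missing channels at every pixel to reconstruct a full-color image.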
Your eyes work in a similar way. The retina at the back of your eyeball holds four types of photoreceptor cell, each sensitive to different wavelengths of light. The cone cells detect color: L-cone sensitivity peaks at a yellow-green color (though they are commonly called the "red cones"), M-cones peak at a lime green, and S-cones peak at a violet blue. Finally, the rod cells are more sensitive than the cones, with sensitivity peaking at a turquoise-like color. One big difference between human and computer vision is that our vision system is slightly biased towards the red-green part of the color spectrum: there are more "red" cones than green cones, and very few blue cones.
Color sensors are more flexible than monochrome sensors, but some are less sensitive because of the light lost when, say, red light hits a green-filtered cell. As a result, some color sensors produce darker images and need stronger lighting than monochrome sensors to produce an image of the same brightness.
All this means that both color vision sensors and your eyes see a slightly inaccurate color view of the world. Machine vision is more repeatable than human vision, but both are influenced by factors like ambient lighting, the age of the sensor or retina, incorrect assumptions in the processing, etc. Color vision is never as simple as saying "detect the green object" because, as Audrey explained in a previous post, there really is no such thing as "green."
Now we know why color vision is a bit complex. Even with its complications, however, it is still far more flexible than monochrome vision, and for some applications it is vital.
If color isn't necessary, then performing vision in monochrome is often more reliable. Quite a lot of image processing algorithms (such as edge detectors) convert color images to grayscale before running the algorithm for this reason.
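This grayscale conversion is usually a weighted average of the three channels. A minimal sketch, assuming the standard ITU-R BT.601 luminance weights (the source doesn't specify which weighting any particular system uses):

```python
def to_grayscale(pixel):
    """Convert one (R, G, B) pixel to a single gray level using
    BT.601 luma weights, which reflect the eye's greater
    sensitivity to green light."""
    r, g, b = pixel
    return round(0.299 * r + 0.587 * g + 0.114 * b)

# Pure green and pure blue at equal intensity map to very
# different gray levels:
green_level = to_grayscale((0, 255, 0))  # 150
blue_level = to_grayscale((0, 0, 255))   # 29
```

An edge detector running on the grayscale result then only has one channel to analyze, which is both faster and less sensitive to color noise.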
The easy answer is that if you're using collaborative robots, a color sensor is probably your best option. This is true even if you only need monochrome vision algorithms. Let me explain.
There are many situations where monochrome vision is sufficient for an automated process. The signal from a color vision sensor can be easily converted to a monochrome image to deal with these tasks. However, for other tasks, color will be necessary. Without the flexibility of a color vision sensor, these tasks would become almost impossible.
Therefore, a color sensor is usually the better investment, as it covers you for future uses of the sensor. Unless you need high-resolution barcode reading, where monochrome sensors often perform better, color vision sensors can't be beaten for their flexibility.
That's why we chose to use a color sensor in our Robotiq Camera for Universal Robots. You can always make a color image monochrome, but you can't turn a black and white image into a color image.
Are there any vision topics you would like us to cover? Have you had any success with color vision? Do you think that monochrome is better in some situations? Tell us in the comments below or join the discussion on LinkedIn, Twitter or Facebook.