Technical Article

Uncontrolled Lighting in Machine Vision for Infrared and Robotics

September 21, 2022 by Seth Price

Ambient light, reflections, and transparent objects can all cause difficulties in machine vision applications. Learn about these issues and some potential solutions to better ensure safety and quality.

The growth of machine vision and autonomous robots often runs in direct conflict with the competitive and regulatory demands of energy efficiency. While robots rely on constant ambient light to navigate around a warehouse, skylights and windows are added to the same warehouse to take advantage of natural lighting and save energy. The resulting light changes can confuse the robots, leading to quality and safety excursions.

This technical article addresses some of the common lighting problems encountered in machine vision applications and their potential solutions.

 


Figure 1. Machine vision finds a number of industrial uses but can experience difficulties with inconsistent lighting, reflections, and transparent parts. Image used courtesy of Cognex

 

Machine Vision Applications

Machine vision is used for navigation, quality assurance, picking, sorting, packaging, and numerous other applications. The field includes everything from high-resolution imaging for quality assurance purposes, to line detection for autonomous vehicle navigation, to simple “go/no go” signals that determine the placement of an item or assist in picking and sorting operations. These systems may detect color (as a red-green-blue, or RGB, value), reflected light, broken light (where a beam is interrupted), infrared, and other signals.
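
As a concrete illustration, here is a minimal sketch of a “go/no go” color check in Python with OpenCV. The color bounds and pixel-fraction threshold are illustrative assumptions, not values from any particular system.

    # Minimal go/no-go color check: does the frame contain enough pixels
    # inside an illustrative color range? (The bounds below are assumed,
    # not taken from any real system.)
    import cv2
    import numpy as np

    def part_present(frame_bgr: np.ndarray, min_fraction: float = 0.02) -> bool:
        """Return True if enough pixels fall inside the target color band."""
        # OpenCV stores images as BGR; these bounds describe a reddish part.
        lower = np.array([0, 0, 120], dtype=np.uint8)    # B, G, R lower bounds
        upper = np.array([80, 80, 255], dtype=np.uint8)  # B, G, R upper bounds
        mask = cv2.inRange(frame_bgr, lower, upper)
        fraction = np.count_nonzero(mask) / mask.size
        return fraction >= min_fraction

    # Example with a synthetic frame: a gray background with a red patch.
    frame = np.full((240, 320, 3), 128, dtype=np.uint8)
    frame[100:140, 150:200] = (20, 20, 200)  # BGR red patch
    print("go" if part_present(frame) else "no go")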

While there are numerous machine vision schemes, perhaps the most common use ambient light, supplied light, or infrared light. In ambient light configurations, only the natural light of the working environment is used for picture-taking and decision-making. Supplied light relies on additional light sources (often light-emitting diodes, or LEDs) to enhance image properties. Infrared radiates from some objects and is detected with an infrared detector; it can also be reflected off, transmitted through, or interrupted by an object between an infrared source and the detector.

For picking, packing, and machine tending applications, the machine vision system must be fine-tuned to identify key features, such as areas for gripping, obstructions, and defects. Consider the childhood game of pickup sticks, where an assortment of sticks is randomly strewn about and the players must not disturb any stick they are not actively picking up. Picking operations are similar; a bin filled with randomly-oriented parts can be a challenge. The machine vision system must determine the optimal path for moving a part from this bin, disturbing as few other parts as possible.

 


Figure 2. Optical sensors can be as complicated as a temperature-controlled camera, or as simple as a photovoltaic sensor that outputs a voltage based on the amount of light present. Image used courtesy of QueSera4710

 

Machine Vision Lighting

Vision itself, whether human or machine, is the detection of light reflected into a sensor and the interpretation of that light by a brain or processor. For a machine vision system to see an object, the lighting should be consistent. Returning to the pickup sticks game: imagine playing it under an unpredictable strobe light, in the dark, or in some similarly unfavorable situation.

Ambient lighting changes frequently. Some ambient light comes from uncovered windows and skylights, and the amount of light that comes through them depends on the weather and the season. Artificial ambient light, such as room lighting, can also vary. Shadows, flickering fluorescent lights, dirty light fixtures, and power fluctuations can all affect the light output and wreak havoc on a poorly-tuned machine vision system. 
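
One simple guard against this kind of drift is to monitor mean frame brightness against a calibration baseline. The sketch below assumes Python with OpenCV; the baseline and tolerance values are placeholders that a real system would establish during calibration.

    # Rough sketch: flag ambient-lighting drift by comparing mean frame
    # brightness to a calibration baseline. Values below are assumptions.
    import cv2
    import numpy as np

    BASELINE_GRAY = 110.0   # mean gray level recorded at calibration time
    TOLERANCE = 15.0        # allowed drift before flagging the lighting

    def lighting_ok(frame_bgr: np.ndarray) -> bool:
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        return abs(float(gray.mean()) - BASELINE_GRAY) <= TOLERANCE

    # Simulate a frame noticeably darker than the calibration baseline.
    dark_frame = np.full((240, 320, 3), 70, dtype=np.uint8)
    if not lighting_ok(dark_frame):
        print("Lighting has drifted; pause inspection or recalibrate.")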

Consider also part orientation in a bin during a picking operation. The light reflected from the parts may be blocked by the moving robotic arm, or the part may not be in a favorable orientation for picking. Training the machine vision system for every possible scenario with every possible part orientation is impossible, so the system must have a way to account for some variation.

 

Workpiece Properties Can Affect Machine Vision

Cameras are easy to fool. Highly-reflective objects, such as finely polished, flat metal parts, cause reflections of robotic arms, other parts, and other objects. A robotic gripper reaching for a reflective metal part may scare itself! It may see its own reflection and think it is about to collide with an unknown object.

A sharp reflection of light off a polished surface can create a bright spot, which will ‘wash out’ other colors or important features nearby. In the optics industry, an entire market segment is geared toward “anti-reflective” coatings for hunting scopes and binoculars, designed to keep lens glare from spooking wildlife.

While such reflections may be merely annoying to the human or animal eye, they can quickly saturate an optical sensor, affecting adjacent pixels. This type of reflection is especially harmful because the algorithm will typically try to self-adjust to the brightest and darkest spots to improve contrast. Because so much of machine vision uses contrast to determine depth, a falsely contrasted image can lead the robot to misjudge the depth of travel required, creating safety issues and potential product and machine damage. A quality control issue quickly becomes an additional downtime and maintenance expense.
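
A common defensive step is to mask out near-saturated pixels before any contrast-based processing runs. The following sketch, assuming Python with OpenCV and an 8-bit sensor, flags a simulated specular highlight; the saturation level and kernel size are illustrative.

    # Sketch: find "washed out" regions where specular reflections have
    # saturated the sensor, so downstream contrast logic can ignore them.
    import cv2
    import numpy as np

    def saturated_mask(frame_bgr: np.ndarray, level: int = 250) -> np.ndarray:
        """Return a binary mask of near-saturated pixels (8-bit assumed)."""
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        _, mask = cv2.threshold(gray, level, 255, cv2.THRESH_BINARY)
        # Dilate so the halo around the hot spot is excluded as well.
        return cv2.dilate(mask, np.ones((5, 5), np.uint8))

    frame = np.full((240, 320, 3), 90, dtype=np.uint8)
    frame[60:80, 100:130] = 255  # simulated specular highlight
    bad = saturated_mask(frame)
    print(f"{np.count_nonzero(bad)} pixels flagged as saturated")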

Transparent parts are also an issue. Suppose a robot system equipped with machine vision is checking glass cell plates during biomedical research. At some plate locations, the robot may see a reflection, and in others, it may not see the plate at all. This is similar to how light bounces off a pond at sunset; the colors, reflectivity, and transparency paint a pretty scene but can confuse the machine vision system.


 

Figure 3. Simple schematic of an active infrared detector. Image used courtesy of ScienceDirect

 

What About Infrared?

Infrared signals are simply electromagnetic waves at a different frequency, so all of the same problems appear with infrared. The added challenge is that materials behave differently under infrared light. For example, a thick plastic shopping bag transmits little visible light and appears opaque, in one color or another. Under infrared light, however, the bag absorbs and reflects very little of the radiation, so the infrared passes right through. Under infrared, the bag appears transparent, which means it may not appear at all to a poorly-tuned system.

Infrared has a few advantages, however. Systems properly calibrated for infrared reflections may be somewhat less susceptible to variations in visible light. Infrared systems are also typically active, meaning a dedicated infrared source is present, and that source tends to overpower any ambient infrared.

 

Techniques to Improve Machine Vision

To improve machine vision performance, the contrast of images can be increased through better cameras and optical sensors, or through better lighting conditions. Sensors are continuously improving, offering higher resolution and, with it, better edge detection and depth perception.
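
Contrast can also be improved in software. One widely used approach is local contrast enhancement, such as contrast-limited adaptive histogram equalization (CLAHE). The sketch below, assuming Python with OpenCV, applies CLAHE to a flat synthetic image; the clip limit and tile size are common starting points, not tuned values.

    # Sketch: local contrast enhancement with CLAHE, one common way to
    # recover edge detail from a flat, dim image before inspection.
    import cv2
    import numpy as np

    def enhance_contrast(frame_bgr: np.ndarray) -> np.ndarray:
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        # Clip limit and tile size here are typical defaults, not tuned values.
        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
        return clahe.apply(gray)

    # Low-contrast synthetic frame: a faint vertical edge on a dim background.
    frame = np.full((240, 320, 3), 60, dtype=np.uint8)
    frame[:, 160:] = 75
    before = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    print("contrast (std) before:", before.std(), "after:", enhance_contrast(frame).std())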

One method of producing better lighting conditions is to supply external light. With the addition of some ultra-bright LEDs, lighting becomes more consistent. Driven from a constant DC supply, LEDs output a narrow, specific band of wavelengths at a stable intensity, minimizing the effects of ambient lighting fluctuation.

The software that controls the decision-making abilities of the robots is also improving. More advanced algorithms require less “training” of the robot to see parts, navigation guides, and so on. Detection can also be made as simple as possible. Bins that align parts may add cost, but they allow the machine vision system to detect parts in the bin more readily. Instead of pickup sticks, the game becomes more like choosing a toothpick from a brand-new box, where all of them are aligned. It may also mean making navigation aids, such as tape along the floor, higher contrast so that they are easier to detect. Anything that makes detection easier by increasing contrast will reduce the effects of lighting fluctuations.
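
To see why contrast helps, consider this sketch of floor-tape detection using Canny edge detection and a probabilistic Hough transform (Python with OpenCV assumed). All parameter values are illustrative; the point is that higher-contrast tape produces stronger edges, which keeps detection working across a wider range of lighting.

    # Sketch: detecting a high-contrast floor tape line with Canny edges
    # and a probabilistic Hough transform. Parameters are illustrative.
    import cv2
    import numpy as np

    def find_tape_lines(frame_bgr: np.ndarray):
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 50, 150)
        # Higher-contrast tape yields stronger edges, so these thresholds
        # keep working even as the lighting fluctuates.
        return cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                               minLineLength=100, maxLineGap=10)

    # Synthetic floor: dark concrete with a bright vertical tape stripe.
    floor = np.full((240, 320, 3), 40, dtype=np.uint8)
    floor[:, 150:170] = 230
    lines = find_tape_lines(floor)
    print(f"detected {0 if lines is None else len(lines)} line segments")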

 

An Optical Sensor Experiment

Instrumentation students were tasked with determining how many sheets of paper were stacked over an optical sensor. The more pieces of paper, the less light passed through, and the smaller the voltage from the optical sensor. For this project, the students used LabVIEW software and set threshold values in the voltage that corresponded to the number of pages. Within one lab period, they had the data acquisition running, and the graphs showed a clean step change with each added page.

The next week, a problem emerged. No group’s software was recording the proper number of pages with 100% accuracy. What changed? The ambient lighting! Because each sensor sat in a slightly different place, the ambient lighting changed, and thus the voltage values reported by the sensor changed as well. Their solution? Use an output channel on the data acquisition board to power an LED. The bright LED drowned out most of the ambient light. After a simple recalibration with new voltage values, the system was much more robust.
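
The students’ thresholding logic is straightforward to re-create. The sketch below is a Python approximation of the idea (the class used LabVIEW), with made-up voltage boundaries standing in for their calibration data.

    # Re-creation of the students' thresholding idea: map a sensor voltage
    # to a page count via calibrated bands. The voltage boundaries below
    # are illustrative, not the class's actual data.
    import bisect

    # Boundaries between page counts, in volts, sorted ascending:
    # e.g., above 4.0 V = 0 pages, 3.2-4.0 V = 1 page, and so on.
    BOUNDARIES = [1.0, 1.7, 2.5, 3.2, 4.0]

    def pages_from_voltage(volts: float) -> int:
        """More paper -> less light -> lower voltage -> higher page count."""
        return len(BOUNDARIES) - bisect.bisect_right(BOUNDARIES, volts)

    for v in (4.5, 3.5, 2.0, 0.8):
        print(f"{v:.1f} V -> {pages_from_voltage(v)} page(s)")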


 

Figure 4. Keyence’s IV3 vision sensor with AI built in for stable operation, despite changes in lighting. Image used courtesy of Keyence

 

Sensor Solutions

At first glance, machine vision seems to be a complex and maybe even finicky field. How can an engineer accurately predict every possible part orientation, every possible lighting condition, and every optical property? The trick is that they don’t. As artificial intelligence (AI) and machine learning (ML) rapidly expand, these sensors “learn” how to detect subtle changes in lighting or orientation. Perhaps a CAD model with a source light can be programmed into the system, and the system sweeps through numerous source lighting orientations and creates its own library of possible scenarios. Oddities may be entered manually into the library as well. The future of machine vision will be driven both by higher-resolution, higher-contrast cameras and by improved AI and ML algorithms.
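
That scenario-library idea can be sketched in a few lines: rather than photographing every lighting condition, synthetic lighting variations of a reference part image can be generated for training. The sketch below uses NumPy, and the gain and bias ranges are illustrative assumptions.

    # Sketch of the "library of scenarios" idea: generate synthetic
    # lighting variations of a reference part image for training, instead
    # of photographing every possible condition. Ranges are illustrative.
    import numpy as np

    rng = np.random.default_rng(seed=0)

    def lighting_variants(image: np.ndarray, count: int = 10):
        """Yield copies of `image` with random brightness/contrast shifts."""
        for _ in range(count):
            gain = rng.uniform(0.6, 1.4)    # simulated contrast change
            bias = rng.uniform(-40, 40)     # simulated brightness change
            variant = image.astype(np.float32) * gain + bias
            yield np.clip(variant, 0, 255).astype(np.uint8)

    reference = np.full((64, 64), 128, dtype=np.uint8)  # placeholder part image
    library = list(lighting_variants(reference))
    print(f"generated {len(library)} synthetic lighting scenarios")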