
Error-proofing AI Algorithms

August 27, 2021 by Sara McCaslin

Humans are the root cause of errors in algorithms. To avoid errors in industrial algorithms for machine vision and other AI applications, control engineers must train, or “error proof,” their AI.

Glitches with machine vision cameras, uncalibrated sensors, or unpredictable shadows can lead to potentially costly and dangerous errors in industrial AI systems. However, researchers are developing error-proofing algorithms as well as simple measures that can be taken to reduce the probability of error.

 

Artificial intelligence has evolved well beyond the basic perceptron artificial neural network, but error remains an issue. Image used courtesy of Pixabay

 

Industrial Applications for Artificial Intelligence

There are numerous industrial applications for AI (artificial intelligence), such as robotics platforms, material handling, packaging, machine tending, assembly, inspection, and BAS (building automation systems). More specific examples include drone and swarm technology for sorting, moving, and transporting items, and detecting anomalies in production processes.

AI has become a critical but often overlooked aspect of industrial automation that relies on machine vision, robotic arms, remote sensing, and process control.

However, AI tools are not simple black boxes that reliably transform input data into accurate output. Sometimes the output is wrong, and that is a source of concern.

 

Types of Errors in Industrial Applications

Computers do what they are told to do, so a human is at the root of every error. It may be a design error, an algorithm error, an engineering error, or bad data, but a human is ultimately responsible.

This also applies to AI, including industrial AI. Errors within industrial AI can fall into one of two categories: algorithmic bias and machine bias.

Algorithmic bias involves errors that are both repeatable and systemic. Such errors can manifest in several different ways: inherent error in the algorithm’s logic, unanticipated use of the algorithm’s output, or issues with the data provided to the AI system.

Machine bias occurs when a limited data set is used to train the system, leading to erroneous output.

 

The Importance of Keeping AI Error Under Control

As an example of algorithmic bias, consider an AI-empowered machine vision system used for automated quality control. Such an application depends heavily on accurate measurements supplied to the AI, which determines whether a part falls within tolerance. If inaccurate measurements are provided to the AI, parts will be labeled incorrectly.

The algorithms behind the machine vision AI can be 100% correct, but bad data means bad output. Acceptable parts may be disposed of, while poor-quality parts may be sent on to customers. This leads to unnecessary costs and downtime as the source of the problem is tracked down.
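
To make this concrete, here is a minimal sketch of how a perfectly correct pass/fail rule still mislabels a good part when the measurement feeding it carries a small calibration bias. The dimensions, tolerance, and sensor offset below are entirely hypothetical:

```python
# A correct tolerance check still mislabels parts when the measurement
# feeding it carries a calibration error. All values are hypothetical.

NOMINAL_MM = 25.00      # target dimension
TOLERANCE_MM = 0.05     # +/- tolerance band

def within_tolerance(measured_mm: float) -> bool:
    """The decision logic itself is correct."""
    return abs(measured_mm - NOMINAL_MM) <= TOLERANCE_MM

true_size_mm = 25.03          # the part is actually acceptable
sensor_offset_mm = 0.04       # an uncalibrated sensor adds a systematic bias
reported_mm = true_size_mm + sensor_offset_mm

print(within_tolerance(true_size_mm))   # True  -> part really is in spec
print(within_tolerance(reported_mm))    # False -> a good part gets rejected
```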

 

FANUC’s iRVision 3DL uses lasers and AI to check the surface conditions of a part. Image used courtesy of FANUC

 

Some AI systems require training before use in a particular environment or application. In such cases, the training data provided to the system is extremely important. For example, if a system is trained only under well-lit conditions, it will struggle when it must perform in low light.
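
One simple way to catch that kind of gap is to audit the training metadata before training begins. The sketch below assumes each sample is tagged with the lighting condition under which it was captured; the field names and thresholds are hypothetical:

```python
from collections import Counter

# Hypothetical training metadata: each sample is tagged with the lighting
# condition under which the image was captured.
training_samples = [
    {"image": "part_0001.png", "lighting": "bright"},
    {"image": "part_0002.png", "lighting": "bright"},
    {"image": "part_0003.png", "lighting": "bright"},
    {"image": "part_0004.png", "lighting": "dim"},
]

REQUIRED_CONDITIONS = {"bright", "dim", "dark"}
MIN_SAMPLES_PER_CONDITION = 2

counts = Counter(sample["lighting"] for sample in training_samples)
for condition in sorted(REQUIRED_CONDITIONS):
    n = counts.get(condition, 0)
    if n < MIN_SAMPLES_PER_CONDITION:
        print(f"WARNING: only {n} training samples for '{condition}' lighting")
```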

 

Error-proofing AI (Training AI)

Because there is a human element behind AI and machine learning, they cannot be fully error-proofed. There are, however, ways to minimize error within AI systems. One example is CARRL (Certified Adversarial Robustness for Deep Reinforcement Learning), a deep-learning algorithm developed at MIT that helps autonomous systems maintain a healthy skepticism about the data they take in, accounting for factors such as measurement noise and adversarial attempts to confuse the system.
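
The underlying idea can be sketched roughly as follows: rather than trusting a single observation, evaluate each candidate action over a family of nearby observations and choose the action with the best worst-case value. This is only an illustration of the concept; CARRL itself computes certified bounds on a deep Q-network, and the q_values() function here is a hypothetical stand-in:

```python
import numpy as np

def q_values(observation: np.ndarray) -> np.ndarray:
    """Hypothetical value estimates for three candidate actions."""
    return np.array([
        -np.abs(observation).sum(),          # action 0
        -np.abs(observation - 1.0).sum(),    # action 1
        -np.abs(observation + 1.0).sum(),    # action 2
    ])

def robust_action(observation: np.ndarray, epsilon: float = 0.1,
                  n_samples: int = 64, seed: int = 0) -> int:
    """Pick the action whose worst-case value over nearby observations is best."""
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-epsilon, epsilon, size=(n_samples, observation.size))
    all_q = np.stack([q_values(observation + d) for d in noise])  # (n_samples, 3)
    worst_case = all_q.min(axis=0)       # worst value seen for each action
    return int(worst_case.argmax())      # best action under the worst case

print(robust_action(np.array([0.2, -0.1])))
```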

Carnegie Mellon has also been working on an error-bounding approach for deep learning models. Called RATT (Randomly Assign, Train and Track), it uses unlabeled and noisy training data to establish an upper bound on the true error risk, which can then be used to gauge how well an AI model will generalize to new input data. In addition, researchers at Princeton have been looking at algorithms that allow an AI system to learn effectively when errors are present in the training data.
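
The intuition behind tracking randomly labeled data can be illustrated with a toy sketch. This is not CMU's algorithm and does not reproduce its formal bound; the data and model below are synthetic and hypothetical. The gist: mix a small amount of deliberately mislabeled data into training and watch how well the model fits it, since a model that fits random labels too well is likely memorizing noise.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# "Clean" data with a real signal in the first feature.
clean_X = rng.normal(size=(200, 5))
clean_y = (clean_X[:, 0] > 0).astype(int)

# Unlabeled data assigned deliberately random labels.
unlabeled_X = rng.normal(size=(50, 5))
random_y = rng.integers(0, 2, size=50)

model = LogisticRegression().fit(
    np.vstack([clean_X, unlabeled_X]),
    np.concatenate([clean_y, random_y]),
)

# Error on the randomly labeled subset is the tracked quantity; an error near
# chance (0.5) suggests the model is not simply memorizing noise.
err_random = 1.0 - model.score(unlabeled_X, random_y)
print(f"error on randomly labeled subset: {err_random:.2f}")
```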

There are also standards in development that will impact error-proofing efforts. NIST (National Institute of Standards and Technology) is actively contributing to AI standards that include a focus on evaluating the trustworthiness of AI technology. NIST has also proposed an approach to reduce the risk of bias in AI systems.

The United States CISA (Cybersecurity and Infrastructure Security Agency) is already looking at standards for vetting AI algorithms and data collection, as revealed during a 2020 panel entitled “Genius Machines.” This effort, along with that of NIST, emphasizes accountability.

 

Addressing Error in AI Systems

While error-proofing the AI systems you are responsible for may not be possible because of the human element involved, there are certainly ways to minimize the possibility of errors.

If you suspect errors are coming from your AI system, do not automatically blame the algorithm; rather, study the errors and look for a pattern. For example, if an autonomous bin-picking robot is making errors, check whether the incorrectly sorted items have anything in common, or whether changes in the robot's environment (lighting, shadows, etc.) could be affecting its performance. There could also be an issue as simple as a dirty camera lens in the machine vision portion of the system, such as those found in autonomous mobile robots.
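
A lightweight way to look for such a pattern is simply to tally the conditions under which each error occurred. The log format below is hypothetical:

```python
from collections import Counter

# Hypothetical error log for a bin-picking cell: each entry records the
# conditions under which a mis-pick occurred. Grouping by condition often
# reveals a pattern before anyone needs to touch the algorithm itself.
error_log = [
    {"item": "bracket", "lighting": "backlit", "camera": "cam_2"},
    {"item": "washer",  "lighting": "backlit", "camera": "cam_2"},
    {"item": "bracket", "lighting": "normal",  "camera": "cam_1"},
    {"item": "washer",  "lighting": "backlit", "camera": "cam_2"},
]

for field in ("lighting", "camera", "item"):
    print(field, Counter(entry[field] for entry in error_log).most_common(2))
```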

 

Many machine vision cameras have built-in AI systems, such as the FLIR Firefly DL, but these systems can generate errors if not kept clean and configured correctly. Image used courtesy of FLIR

 

A hard rule of programming is that bad input will always result in bad output. The first step in minimizing the error generated by an industrial AI system is to ensure that its data is as accurate as possible, starting with the sensors. The sensors that provide input data to AI systems should be calibrated regularly.
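
As one illustration, a basic two-point calibration derived from known reference values can correct gain and offset errors before raw readings ever reach the AI. The reference points and raw readings below are made up:

```python
# Two-point calibration for a sensor feeding an AI system: readings taken
# against two known references give a gain and offset that are applied to
# every raw value before it reaches the model. All values are hypothetical.

REF_LOW, REF_HIGH = 0.0, 100.0          # known reference inputs
raw_at_low, raw_at_high = 2.1, 103.9    # what the sensor actually reported

gain = (REF_HIGH - REF_LOW) / (raw_at_high - raw_at_low)
offset = REF_LOW - gain * raw_at_low

def calibrated(raw_value: float) -> float:
    return gain * raw_value + offset

print(calibrated(2.1))    # ~0.0
print(calibrated(103.9))  # ~100.0
print(calibrated(53.0))   # corrected mid-range reading
```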

Tools within the AI system that allow users to set acceptable ranges for data should be configured only after careful consideration: set the range too strict, and the AI will not provide much value; too loose, and it generates far too many errors. And remember, these values can be adjusted.
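
A minimal sketch of such range checking might look like the following, with hypothetical field names and limits:

```python
# Range-checking input data before it reaches the model. The limits here are
# hypothetical and should be tuned: too tight and valid data gets rejected,
# too loose and bad readings slip through.

ACCEPTABLE_RANGES = {
    "temperature_c": (10.0, 45.0),
    "width_mm": (24.5, 25.5),
}

def validate(reading: dict) -> list[str]:
    """Return a list of out-of-range fields; an empty list means the reading is usable."""
    problems = []
    for field, (low, high) in ACCEPTABLE_RANGES.items():
        value = reading.get(field)
        if value is None or not (low <= value <= high):
            problems.append(field)
    return problems

print(validate({"temperature_c": 22.0, "width_mm": 25.1}))  # []
print(validate({"temperature_c": 80.0, "width_mm": 25.1}))  # ['temperature_c']
```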

Also, as alluded to earlier, keep any cameras clean. While machine vision systems are designed to be robust in various environmental conditions, that does not mean they will still perform well when the vision is compromised by a dirty lens. The same is true of other industrial sensors whose accuracy can be compromised by a build-up of scaling, exposure to corrosive environments, mechanical issues, or aging.

AI is widely used in the industrial sector for everything from process controls to quality inspections. And because of the human factor involved, these AI systems are also subject to error. Error-proofing algorithms are being developed, but these methods are not yet fully mature, nor have they been extensively tested in industrial applications. And while organizations such as NIST and CISA are working toward error-proofing standards, those standards are still in development. However, some simple measures can reduce the probability of error in your AI systems.