Vision inspection system measurement uncertainty and R&R


Thread Starter

Michael Batchelor

Hey gang,

I've had a customer ask me a question that I really don't know how to answer. We were asked to look at building a test instrument to detect part placement using a commercially available vision system. In theory it's a simple enough task: fix the camera on the end of a robot arm so you can articulate it to different areas of the piece under inspection, go to the inspection points, line up on a few key features of the assembly, then look for the edges of the part to see if they fall within the "box" of acceptable position.

However, the question arose about how to perform a gage R&R on the inspection system. My first thought was that it's a nonsensical question because there aren't any "units" the system is measuring. What if it's off by 4 pixels? What's the unit for a pixel? However, the question isn't a "trick" and it was asked in all seriousness. Honestly, the device would be a piece of gaging equipment determining the quality of a finished good. How would a metrology lab "calibrate" it? I don't know.

How should I think about this? I didn't anticipate attempting to use the camera as an optical CMM device trying to measure quantitative data about how far out the part may be if it isn't in the correct position, but even if it's just a go/no-go gage what's the acceptance criteria?

Anyone have any thoughts on what concept I'm overlooking?


Michael R. Batchelor

5 Day Hands on PLC Boot Camp for Allen Bradley
PLC-5, SLC-500, and ControlLogix

If you aren't satisfied, don't pay for it. Guaranteed. Period.

[email protected]

Industrial Informatics, Inc.
1013 Bankton Cir., Suite C
Charleston, SC 29406

843-329-0342 x111 Voice
843-412-2692 Cell
843-329-0343 FAX

Curt Wuollet

Hi Michael

Calibration is easy with a little forethought. Establish a hard point for robot calibration (a pin fit in a hole, that sort of thing) so you can establish an absolute physical position. At the camera focal length from that position, mount a gage block or other object of traceable size. Count the size in pixels and do the math. You can even make it self-calibrating. For reportable figures you need both the distance and the size, as the distance can change the apparent size even with telecentric lenses. You must also make sure the camera stays focused, as any circle-of-confusion issues can alter the pixel count.
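The "count the size in pixels and do the math" step can be sketched as follows. This is a minimal illustration, not anyone's actual implementation; the gage block size, pixel count, and offset are invented numbers.

```python
# Sketch of the pixels-to-units calibration described above (assumed numbers).
# A gage block of traceable size is imaged at a known standoff; counting its
# width in pixels gives a scale factor for converting pixel offsets to mm.

def pixel_scale_mm(block_width_mm: float, block_width_px: float) -> float:
    """Return mm per pixel, derived from a traceable gage block in the image."""
    return block_width_mm / block_width_px

def offset_mm(offset_px: float, scale_mm_per_px: float) -> float:
    """Convert a measured pixel offset into physical units."""
    return offset_px * scale_mm_per_px

# Example: a 25.0 mm block spans 500 pixels, so 0.05 mm/pixel.
# A part edge that is 4 pixels out of position is then off by 0.2 mm.
scale = pixel_scale_mm(25.0, 500.0)
print(offset_mm(4.0, scale))  # 0.2
```

This also answers the earlier "what's the unit for a pixel?" question: once the scale factor is established against a traceable artifact, a pixel count becomes a traceable length.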



Michael Griffin

Michael Batchelor asked: "if it's just a go/no-go gage what's the acceptance criteria?"

The acceptance criterion is whether it fails bad parts and accepts good ones. This is an "attribute GR&R" as opposed to a "variable data GR&R".

An "attribute" test is any test where the results are reduced to two or more classes before deciding "pass / fail". Attribute tests can involve more than just two classifications. For example, you may sort by colour into red / green / blue and decide that "blue" is "pass" while red and green are "fail".

A "variable data" test, on the other hand, is one where numerical limits are applied to a numerical measurement before deciding "pass / fail".
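The distinction between the two kinds of test can be made concrete with a short sketch. The limits and the colour-to-pass mapping below are invented examples, not anything from a real specification.

```python
# Sketch contrasting the two kinds of test described above (example values).

def variable_pass(measured_mm: float, nominal_mm: float, tol_mm: float) -> bool:
    """Variable data test: numeric limits applied to a numeric measurement."""
    return abs(measured_mm - nominal_mm) <= tol_mm

def attribute_pass(colour: str) -> bool:
    """Attribute test: results reduced to classes, then mapped to pass/fail."""
    return colour == "blue"   # red and green "fail", per the example above

print(variable_pass(10.12, 10.0, 0.15))  # True: within +/-0.15 mm of nominal
print(attribute_pass("green"))           # False
```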

Any system that can pass "good" parts and fail "bad" ones should be subjected to an appropriate GR&R. The point is to see if the system as a whole will do the job it is intended for.

With an attribute GR&R you feed the system a set of known tests with some (known) good and some (known) bad parts. There will be some standard statistical criteria determining how many tests must be conducted, and of these how many must be (known) "bad", how many must be (known) "good", and how many errors ("bad" parts "pass", or "good" parts "fail") are acceptable (possibly none). I suggest that you arrange a meeting with your customer's QA people (or whoever will be approving the GR&R) to work out the GR&R criteria for this case. In many industries, *your* customer's criteria will be dictated by *their* customer's criteria, and their customer will not approve shipment of product without passing GR&R.

You need to figure out how to create the "good" and "bad" test cases using a means that will produce predictable results. That is, you need to be able to prove by independent means that the "good" test parts are really good, and the "bad" test parts are really bad so that when the machine passes or fails parts you know whether it was supposed to pass or fail that part. An attribute GR&R usually requires a much larger number of tests than a variable data GR&R, so you will want to make sure you can conduct the test cases quickly and without a lot of manual intervention. I don't know the details of your application, so I can't offer any reasonable suggestions on how to do this.
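Scoring such a run can be sketched as below. The trial counts and the single wrongly rejected part are invented data; the real sample sizes and acceptable error rates come from the customer's QA (or their customer's) criteria, as noted above.

```python
# Sketch of scoring an attribute GR&R run against known-good / known-bad
# parts. The data and rates here are invented for illustration only.

def score_attribute_grr(results):
    """results: list of (known_good: bool, system_passed: bool) tuples.
    Returns (miss_rate, false_alarm_rate, effectiveness)."""
    bad = [r for r in results if not r[0]]
    good = [r for r in results if r[0]]
    misses = sum(1 for known, passed in bad if passed)             # bad part passed
    false_alarms = sum(1 for known, passed in good if not passed)  # good part failed
    correct = len(results) - misses - false_alarms
    return (misses / len(bad), false_alarms / len(good), correct / len(results))

# 30 known-bad trials all rejected; 50 known-good trials, one wrongly rejected.
runs = [(False, False)] * 30 + [(True, True)] * 49 + [(True, False)]
miss, fa, eff = score_attribute_grr(runs)
print(miss, fa, eff)  # 0.0 0.02 0.9875
```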

You said you are attempting to "detect part placement". You need to be clear what exactly this means (I assume this is in the customer specs). Attribute tests look simple at first glance, but they can be much more complex than variable tests. You need to determine how many different attributes your system is really testing for. This can get quite complex, which is why it is considered good practice to avoid attribute tests whenever it is reasonably possible to do a variable data test.

Michael Batchelor


That's an easy way to calibrate the camera pixel-to-physical distance ratio. We're already doing something similar on another machine where we use the camera to measure that part's real location and then drive to an offset to align the work tool. Works quite well, frankly.

But when I've used a camera in the past to inspect for part presence or alignment, we would establish a known point on the piece for camera placement, then do an edge and corner placement for the part compared to a stored reference picture. For that matter, the cameras do it themselves these days. (And quite well, too.)
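The go/no-go check being described, detected edge and corner points falling inside a tolerance box around their nominal positions, can be sketched as below. The coordinates and tolerances are assumed values in calibrated image units.

```python
# Sketch of a go/no-go placement check: detected feature points must fall
# inside a tolerance box around their nominal positions. Numbers are assumed.

def in_box(point, nominal, tol):
    """True if point is within +/-tol of nominal on both axes."""
    return all(abs(p - n) <= tol for p, n in zip(point, nominal))

def placement_ok(detected_points, nominal_points, tol):
    """Go/no-go: every detected feature must land in its tolerance box."""
    return all(in_box(p, n, tol)
               for p, n in zip(detected_points, nominal_points))

corners = [(10.1, 20.0), (110.2, 19.9)]
nominal = [(10.0, 20.0), (110.0, 20.0)]
print(placement_ok(corners, nominal, tol=0.5))   # True
print(placement_ok(corners, nominal, tol=0.05))  # False: first corner x is 0.1 out
```

An angle check (mentioned below) would follow the same pattern: compute the angle of the line through two detected corners and compare it to the nominal angle within a tolerance.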

Actually we'd also need something to measure angles. The corners and the edges need to be in the right place to "pass" placement criteria. However, the customer has no "manufacturing" specifications for measuring the part when it's placed, so there's no "acceptance criteria" that's defined. At least not yet. And where those edges and angles need to be would have to be determined by where we hold the camera, not merely by the physical dimensions of the part.

It started out as an innocent enough question, but as the conversation goes on this way, the idea of "proving" it can find an out of place part is going to add as much cost in engineering time as just designing the system in the first place. And it won't work any better in the end. We'll merely have "proof" that it "can" work.

C.G. Masi

To be successful, you'll need to know a lot more detail about your application than is in this post. Vision system design is all about three-dimensional geometry and calibration is all about first-order corrections. As you've noticed, even things that are simple in principle get very complex very fast.

Don't get discouraged, however, problems like the one you describe are solvable and have been solved many times -- and everything you need to know is available on the Internet.

Rather than try to guess at what you're really trying to do, then describe a solution in words, I'm going to tell you what you (or your customer) need to do.

The RIGHT way to approach a problem like this is to pick your favorite vision-equipment supplier and ask them for help. The number of supplier choices has dwindled in the past few years as the industry has consolidated.

The company that has the most experience doing vision-based metrology is Cognex. They'll most likely suggest an off-the-shelf product, tell you how to set it up, etc.

Another company to talk to is Edmund Optics. They're more involved with "roll your own" solutions. Their catalog has all the components you need, and their tech library includes white papers that teach all the necessary concepts to become your own vision-metrology expert. I'd start by exploring the tech library on their website; then, if you don't find the answer in there, call their sales support line.

C.G. Masi
Senior Editor
Control Engineering

Michael Griffin

In reply to Michael Batchelor - I'm having a few problems understanding your description of the application. I'm not sure if this involves measuring a finished part, checking placement of components in a fixture prior to assembly, checking for part presence or orientation, or something else.

However, when you say "the customer has no 'manufacturing' specifications for measuring the part when it's placed, so there's no 'acceptance criteria' that's defined", some big alarms should start going off. It sounds as if the customer (or someone) is putting the cart before the horse. The purpose and criteria of the test(s) have to be defined before a testing method can even be discussed. I think it will be very important that this is defined clearly in writing somewhere before you commit to building something.

Something very important to define is whether the tests or checks being conducted have any effect on the quality of the finished product. That is, if the test is intended to prevent defective material from being shipped to the customer, then there needs to be a clear paper trail between the product specifications and the test equipment capabilities.

I can think of some very valid reasons for not having any manufacturing specifications - say, for example, the product is still being designed. No one at the customer wants to just pick some numbers out of the air. On the other hand, you don't want to be the one left holding the bag when the equipment can't meet the requirements. It's the customer's product, and someone there has to make a commitment. (One common reason for late specs is that the product design isn't going well. The product designers may eventually give up and throw some impossible tolerances on the design just to meet the delivery deadline.)

If the check has no effect on finished good quality and is just intended as an in-process check to prevent the part from jamming up in a machine further down the line, then you have a very different situation. In this case, it is important to make sure the machine isn't being labelled as a "product quality" test, but rather as a "manufacturing productivity" check. This may sound like just an exercise in semantics, but it is an important distinction to make. If your customer's QA people know what they are doing, they will appreciate this difference in intent being made clear as well.

If this is a "product quality test", then you need to be able to do a GR&R (and possibly calibration) according to recognised methods. If the testing method doesn't lend itself to that, then the test method is not suited to the application.

If this is a "manufacturing productivity check", then you and your customer may have more freedom to define acceptable run-off criteria. In this case, the machine run-off could for example include a set number of both "good" and "bad" parts, and the criteria should be that "good" parts are accepted and "bad" parts are rejected, with an error rate below an agreed level. This still leaves you with a problem though - how do you get some "good" and "bad" parts to do the tests, if you don't know what "good" and "bad" are?
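A run-off acceptance check of this kind can be sketched as below. The trial counts and the agreed error rate are invented; one useful rule of thumb (the "rule of three") is that a zero-error run of n trials supports roughly a 3/n upper bound on the true error rate at about 95% confidence.

```python
# Sketch of a run-off check against an agreed error rate (numbers invented).

def runoff_ok(errors: int, trials: int, max_rate: float) -> bool:
    """Accept the run-off if the observed error rate is within the agreed limit."""
    return errors / trials <= max_rate

def rule_of_three_bound(trials: int) -> float:
    """Approximate 95% upper confidence bound on the error rate
    after a run with zero observed errors ("rule of three")."""
    return 3.0 / trials

print(runoff_ok(1, 100, 0.02))   # True: 1% observed vs 2% agreed
print(rule_of_three_bound(150))  # 0.02: 150 clean trials bound the rate near 2%
```

The rule-of-three figure is one way to decide how many "good" and "bad" parts a meaningful run-off needs: if the agreed error rate is 2%, a clean run shorter than about 150 trials proves very little.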

You might think that proving the testing method sounds like a lot of unproductive work, but if you can't prove that it works, why bother doing it? It's no good just saying "I think it works". If you can't prove it, then it's not worth doing.

Curt Wuollet

I agree, and there are a number of simpler ways, but calibration is calibration. As a practical matter, I would think that placing landmarks or monuments in the field of view and verifying that they appear where they should ought to be adequate to "prove" repeatability. Or to calibrate on the fly, if you know the camera orientation within reason, which you need to do anyway.