Calibrating sealed transducers using software


Thread Starter

Robert Kwiatkowski

What I am looking for is an answer or direction here. The situation is this:

1. We have installed in the field several pressure transducers that have a 4-20 mA output.

2. The sensors are sealed and cannot be adjusted internally.

3. When the calibration team does its periodic 3-year calibration, it checks the system output at the display; if the output is outside the 1% tolerance, the individual components are tested.

4. The transducers must stay within 1% or they are failed and must be replaced.

The system can calibrate the output at the display using software that adjusts the span and offset equation (Mx + B). Our calibration policy does not allow this. I understand that in certain instances using software to make adjustments may not be acceptable (safety or control applications), but I cannot find any real documentation on using software to calibrate a system.
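To make the Mx + B idea concrete, here is a minimal sketch of a two-point span/offset correction. The function names, the reference points, and the 0-100 psi range are assumptions for illustration, not the actual software's interface:

```python
# Minimal sketch of the Mx + B span/offset correction described above.
# Names and calibration points are illustrative assumptions.

def fit_span_offset(raw_low, raw_high, eng_low, eng_high):
    """Compute gain M and offset B from two injected reference points.

    raw_low/raw_high: loop current (mA) read back at the display input
    eng_low/eng_high: known engineering values of the injected source
    """
    m = (eng_high - eng_low) / (raw_high - raw_low)   # span (gain)
    b = eng_low - m * raw_low                          # offset
    return m, b

def to_engineering(raw_ma, m, b):
    """Apply the correction: displayed value = M * raw + B."""
    return m * raw_ma + b

# Example: a 0-100 psi transducer over 4-20 mA that has drifted slightly;
# the injected references pin the displayed value back on scale.
m, b = fit_span_offset(raw_low=4.05, raw_high=19.90, eng_low=0.0, eng_high=100.0)
print(to_engineering(12.0, m, b))   # mid-scale reading in psi
```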

I have to ask: if you treat the system as a whole from sensor to display, inject a calibrated source at the transducer, use the software to calibrate, and the output is an accurate, repeatable value, why would this not be acceptable?
 
The 'treat system as a whole' is sometimes called a loop calibration.

As you seem to indicate, for those who accept a loop calibration it doesn't matter if the analog output is slightly off and/or the analog input is slightly off as long as the final reading of an injected, calibrated source shows up as the correct number.

It would be prudent to check a value or two between zero and span to make sure that linearity stays within your 1%.
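A rough sketch of that kind of multi-point check is below; the test points, readings, and the 1% tolerance are illustrative assumptions only:

```python
# Sketch of a multi-point linearity check across the range.

TOLERANCE_PCT = 1.0   # allowed error, percent of span

def check_linearity(test_points, span):
    """test_points: list of (applied_value, displayed_value) pairs.
    Returns True if every point is within the tolerance of span."""
    worst = 0.0
    for applied, displayed in test_points:
        error_pct = abs(displayed - applied) / span * 100.0
        worst = max(worst, error_pct)
        print(f"applied {applied:7.2f}  displayed {displayed:7.2f}  "
              f"error {error_pct:4.2f}% of span")
    return worst <= TOLERANCE_PCT

# Zero, 25%, 50%, 75% and full-scale points on a 0-100 psi loop
points = [(0.0, 0.1), (25.0, 24.8), (50.0, 50.4), (75.0, 75.3), (100.0, 99.7)]
print("PASS" if check_linearity(points, span=100.0) else "FAIL")
```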

As to why your proposal is not acceptable? The reason is either:

- legitimate, like usage history showing that when the transmitters are out by more than 1% they become non-linear, rapidly degrade even further, or otherwise become problematic;

- or bureaucratic, like NIH (not invented here), "we've always done it that way," or Joe's brother having the calibration contract.
 

Robert Kwiatkowski

I can understand it if there is sufficient evidence that the transducers degrade after they have drifted, but then why do some of them have adjustments? 1% accuracy is pretty much standard for a transducer today; high-end units go down to 0.25% or better. Just because a transducer has drifted shouldn't mean it gets tossed; these things are not cheap.

I am proposing that the system be calibrated at points along the range to ensure it is within 1% and repeatable. I am also suggesting that a hard limit be set on the transducers, similar to the limiting factor applied to potentiometers, which by rule of thumb is 10% in most cases (a rough sketch of that limit follows below).
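As an illustration of that hard-limit idea, the sketch below rejects a transducer outright when its raw endpoints have drifted beyond the limit, and only otherwise allows a software correction. The 10% figure and the mA values are assumptions for the example, not a recommendation:

```python
# Sketch of a hard acceptance limit on raw transducer drift, applied
# before any software span/offset correction is allowed.
# The 10% rule-of-thumb limit and the nominal endpoints are assumptions.

HARD_LIMIT_PCT = 10.0   # maximum raw drift allowed, percent of span
NOMINAL_LOW_MA = 4.0    # expected raw output at zero pressure
NOMINAL_HIGH_MA = 20.0  # expected raw output at full scale
SPAN_MA = NOMINAL_HIGH_MA - NOMINAL_LOW_MA

def transducer_usable(raw_low_ma, raw_high_ma):
    """Reject the transducer if its raw endpoints have drifted more than
    the hard limit; otherwise it may be corrected in software."""
    low_drift_pct = abs(raw_low_ma - NOMINAL_LOW_MA) / SPAN_MA * 100.0
    high_drift_pct = abs(raw_high_ma - NOMINAL_HIGH_MA) / SPAN_MA * 100.0
    return max(low_drift_pct, high_drift_pct) <= HARD_LIMIT_PCT

print(transducer_usable(4.10, 19.85))   # small drift -> correct in software
print(transducer_usable(6.00, 21.50))   # beyond hard limit -> replace
```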

I believe it is more a case of "this is the way it has been done and will continue to be done." The problem with that approach is that as hardware gets better, this becomes a cost-savings issue.

Are there any standards out there or documentation on loop calibration?

I am trying to build a strong case that this should be studied to determine if it is feasible for our systems. What applications can we apply this to and where are the limits?
 