16 Bit Analog Signals

G

Thread Starter

GP

I have been discussing the topic of analog signal resolution with some co-workers and we have become stuck on a point. A true 16 bit analog signal will allow the controls engineer to evaluate a 4-20mA signal in 65,535 steps. Is this kind of accuracy necessary in any of the applications found in process or factory automation? If not today, how about next week, next year, or next decade? If so, what is the application?

Thanks for the input.
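For reference, a quick Python sketch of the step sizes involved (the 4-20 mA span and the bit depths are the only inputs; nothing here is vendor-specific):

```python
# Current step size of a 4-20 mA signal at common converter bit depths.
SPAN_MA = 20.0 - 4.0  # usable span: 16 mA

for bits in (8, 12, 16):
    codes = 2 ** bits                    # distinct output codes
    step_ua = SPAN_MA / codes * 1000.0   # microamps per code
    print(f"{bits:2d} bits: {codes:5d} codes, {step_ua:7.3f} uA/code")
```

At 16 bits each code is only about 0.244 uA wide, which is why several replies below question whether real-world wiring can resolve it.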
 
P

Peter Nachtwey

Our customers use 16 bit analog feedback all the time. Having fine resolution allows the use of the differentiator. Compared with 12 bits, that is 16 times more resolution, which can make a BIG difference in PID applications.
 
I have been an independent integrator for a number of years. It isn't that 16 bits is needed, it's that the next step down is 12 bits. I have on occasion had a device that would only give me 0-2 V, for example, and the brand of PLC that I had only offered a 0-5 V input. At 12 bits (0-4095), with the range reduced to roughly 0-1638 counts (2/5 of 4096), there starts to be a problem.
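A sketch of the arithmetic in that example, assuming an ideal 12-bit converter over the 0-5 V input (illustrative only):

```python
import math

full_scale_codes = 2 ** 12                   # 4096 codes over 0-5 V
usable_codes = full_scale_codes * 2.0 / 5.0  # codes covering the 0-2 V sensor

print(int(usable_codes))                     # 1638 usable codes
print(round(math.log2(usable_codes), 1))     # ~10.7 effective bits
```

So a nominally 12-bit input behaves like a 10.7-bit one over the sensor's actual range.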
 
J

James Fountas

Most applications I have worked on don't need 16 bit. Most could get by with 8 - 12 bit resolution.

There are some where 16 bit is useful.

1.) Orifice plates formerly used two dp cells when accuracy was required in the low range. One dp cell would be spanned to 10% of the range of the higher one. One 16 bit dp cell gives similar resolution at the low end of the range as two lower resolution dp cells.

2.) A weighing system for a tank where the tank itself is a significant portion of the weight. A small process tank could weigh 200-400#, but you are only interested in 100# of material at best. You can span a low resolution system to start at roughly the weight of the tank and end at the tank weight plus 100 or more pounds. Or you can avoid the spanning problem altogether and use a higher resolution sensor.

There are others, but they tend to be the exception not the rule (for me at least).
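The tare-weight case can be sketched as follows (the 300 lb tank and 100 lb material figures are hypothetical, chosen in the spirit of example 2):

```python
def codes_over_material(bits, tare_lb, material_lb):
    """Codes that actually land on the material's weight when the
    converter must span the full tank + material range (no re-spanning)."""
    return 2 ** bits * material_lb / (tare_lb + material_lb)

# 12-bit system spanning the whole 400 lb: only a quarter of the codes
# cover the 100 lb of interest.
print(round(codes_over_material(12, 300, 100)))   # 1024
# 16-bit system: fine resolution without re-spanning the transmitter.
print(round(codes_over_material(16, 300, 100)))   # 16384
```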

James Fountas
james@ fountas.net
 
T
James, your #2 example highlights a similar and very common application requiring high resolution: bulk bag filling, be it dog food, flour, fertilizer, or cement. The bags are filled from large hoppers using a loss-in-weight method. If you're filling 100# bags from an 8 ton hopper, you need 16 bit resolution. Since an 8 ton hopper (160 bags) isn't really all that big for high output operations, some bag filling equipment manufacturers use even higher resolutions.
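A rough sketch of the loss-in-weight arithmetic above (the 0.25 lb readability target is an assumption; the post does not state one):

```python
import math

hopper_lb = 8 * 2000     # 8 ton hopper on the load cells: 16,000 lb
readability_lb = 0.25    # assumed target readability per 100 lb bag

codes_needed = hopper_lb / readability_lb
bits_needed = math.ceil(math.log2(codes_needed))
print(int(codes_needed), bits_needed)   # 64000 codes -> 16 bits
```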

This is just one of hundreds of possible examples.

Additionally, the LSB of many A/D converters is garbage; i.e., a 12 bit A/D may only have 11 bits of useful data, or worse if there are range-matching issues.
 
J
For A/D and D/A for 4-20 mA I find that 12 bits or so is usually sufficient. 16 bits or more is usually found in the sensor-end of a transmitter, e.g. on the input of a temperature transmitter. Sensor inputs need higher resolution in order to get a good rangeability, i.e. the ability to select only a small range within the sensor limits.

Tomorrow? Well, then there will be no 4-20 mA, there will be only Fieldbus networking using 32 bit floating point. No more A/D and D/A for signal transmission.

Jonas
 
K
Don't confuse resolution with accuracy; they're not the same thing.

An actual 16-bit sensor could provide a huge dynamic range in applications that practically need only 12, 10 or 8 bit resolution, so perhaps a single sensor could be used for very different situations in a plant, leading to economies of scale, reduced parts count, etc. Probably not practical with existing technologies.

A 16-bit signal can be used to transmit very flexible scaled translations of a lower resolution sensor. This is commonly done, e.g., in plcs with 16-bit registers holding scaled & translated values from 10 or 12-bit sensed inputs.

I've worked with systems where one manufacturer used 10-bit resolution on inputs, and another used 8-bits for the same thing. The higher resolution was more aesthetically pleasing, and potentially could provide tighter control, but practically the 8-bit stuff was probably good enough in many cases.

Ken
 
A
Hi,

What you said may be generally true. 65,535 steps does seem a large value, especially when you look at a 1" control valve operating with pneumatic air pressure as a final control element. Even an 8 bit ADC may suffice for most needs.

Some nuclear and scientific applications do demand very high accuracy.

But before going down to 8 bits or 12, we can consider this theoretical example:

If you are looking at a 100 bar transmitter where you have to maintain the pressure at, say, 65 bar +/- 0.15 bar, then all you have is a 0.15 bar tolerance on system inaccuracy, which over the 65,535 steps translates to 98.3025 steps.

Let's say that the transmitter accuracy is 0.1%; that translates to a loss of 65.535 steps.

In other words, your ADC has to have a good accuracy of 98.3025 - 65.535 steps, or 32.7675 steps,

or roughly 0.05% for the system to have any meaning.
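The error budget above can be reproduced step by step:

```python
FULL_SCALE_BAR = 100.0
COUNTS = 65535.0   # steps in a 16-bit reading

tolerance_counts = 0.15 / FULL_SCALE_BAR * COUNTS   # control band in counts
transmitter_counts = 0.001 * COUNTS                 # 0.1% transmitter accuracy
adc_budget = tolerance_counts - transmitter_counts  # what is left for the ADC

print(round(tolerance_counts, 4))               # 98.3025
print(round(adc_budget, 4))                     # 32.7675
print(round(adc_budget / COUNTS * 100.0, 2))    # ~0.05 %
```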

Then look at the other end, i.e. the control action, and you may find yourself in a similar dilemma (more inaccuracy has to be accounted for).

In other words, higher accuracy does matter, as it provides you with more margin where you can afford to have it, i.e. on the input side (ADC) and the output side (DAC).

You have reduced the inaccuracy of the reading to a lower percentage by having an ADC with more resolution. Thus you have reduced total system error and increased your chances of better control.

Hope this helps.

Anand
 
M

Mark Lochhaas

I believe the answer is both ‘yes’ and ‘no’. The answer may be, from a practical sense, ‘no’. Think about the amperage resolution of the signal. At 16 bits the resolution is about 0.244 micro-amps per unit or step. I do not believe amperage can be controlled through typical industrial transducers, wiring, and termination accurately to this resolution.

The answer may be, from a functional sense, ‘yes’. Often sensors are not available in the range necessary. So, the sensor used may have a range several orders of magnitude larger than the necessary resolution. In these cases significant accuracy is compromised. As an example, suppose a sensor needs to detect a depth to a resolution of 0.001 inches, but the sensor range is 0 to 100 inches. The necessary resolution is 0.001% of full scale, finer even than 16 bit resolution (0.0015%). It is also economically impractical in many cases to use lab resolution sensors in an industrial setting.

Resolution and range can be a real challenge. In my experience 12 bit is almost always sufficient as long as range and resolution are not mismatched. 12 bit resolution is 4,096 steps, or 0.0244% of range. If this is not sufficient, then common mechanical principles can often be applied, usually to reduce the range. That is, perhaps a sensor with the necessary resolution, but not the range, is moved accurately through a fixed portion of the range where measurement is not necessary, and then the measurement is taken. A simple mechanical solution can prevent the very high expense of custom sensors, transducers, and sometimes more than 16 bit resolution.
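The range/resolution mismatch in the depth example works out as follows (ideal converter assumed):

```python
import math

span_in = 100.0        # sensor range, inches
resolution_in = 0.001  # required readability, inches

codes = span_in / resolution_in
print(int(codes), math.ceil(math.log2(codes)))   # 100000 codes -> 17 bits

# 16 bits falls just short of the requirement:
print(round(span_in / 2 ** 16, 4))               # 0.0015 inch per code
```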

Mark Lochhaas
[email protected]
 
J

Jonathan Reed

I work in the dairy industry in the UK, and accuracy of the level probes in milk silos is essential - management will be looking for exact milk throughputs and losses. I have been encountering problems with the error variation (1%) of Endress+Hauser level probes, meaning the dairy is apparently producing more milk than it has had delivered... Interesting. However, a 200,000 litre milk silo has less than one digital point per litre of milk, and I am looking for accuracy of one litre...
 
M

Michael Griffin

A better question would be whether that type of accuracy is *possible* in normal factory or process automation. One part in 2^16 of accuracy is 0.0015%. In a 4-20mA signal, this would be in the nano-amp range. I wouldn't want to make any promises about deriving meaningful results from signal changes of that magnitude.

12 bit A/D is more realistic. I don't expect this to change in the foreseeable future without improvements in sensors, signal conditioners, signal shielding, and analogue front ends.

A number of A/D systems in industrial hardware combine 12 bit resolution with poor accuracy due to a poor analogue front end. More resolution is often relatively cheap, and it is the analogue section which requires careful design and good components. In these cases much of the resolution is wasted in most applications because of poor accuracy.

************************
Michael Griffin
London, Ont. Canada
************************
 
R

Ranjan Acharya

Sixteen bits versus twelve bits or fourteen bits or even eight bits is only part of the problem you have to consider if you are handling analogue signals.

Is your initial sensor accurate?

Are you sampling at an appropriate rate to avoid problems such as aliasing?
Do you have something to correct for undersampling of a rapidly changing signal?

Analogue filtering versus digital filtering?

Et cetera.

As other replies have stated most of us get by just fine with 12-bit resolution. Most analogue signals I have sampled using "conventional" 4-20mA (or voltage ranges) are so jumpy that I have never seen a good reason to fork out extra brass for 16-bits of resolution. I am sure that there are sensitive applications where 16-bits of resolution are required.

As noted in the replies, once you switch to some sort of fieldbus then you end up with floating point data (IEEE 754 - with associated problems) or integer data (e.g., first word is the integer portion and second word is the fractional portion, or just fixed point) and so on.

RA
 
C
Well said, Michael

The wiring care and scope work needed to meaningfully use 16 bit resolution would surprise most people. Far too many people accept the reading they get, often a single conversion, as truth, even when the noise floor allows, at best, 8 or 10 bit dynamic range. And unless the signal is pure DC, the heavy filtering used often produces errors far greater than the difference between 12 and 16 bits. Then the effects of averaging or smoothing round off a few more bits. For perspective, 100 mV of noise and a 10 volt input range make 8 bit accuracy impossible. I have to chuckle when people want to measure signals to 0.01% accuracy on open wiring without ever having looked at the signal. It can be done, and sometimes people even get lucky, but I view any claim of better than about 1% accuracy with a high degree of skepticism. Doing waveform acquisition and high accuracy measurements with DAQ cards is a lesson in humility and patience. It's one of the many areas where ignorance is bliss.
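The 100 mV / 10 V figure can be sketched as a noise-limited dynamic range (an idealisation; real effective bits also depend on the noise spectrum and any averaging):

```python
import math

def noise_limited_bits(full_scale_v, noise_v):
    """Bits of dynamic range above a given noise floor."""
    return math.log2(full_scale_v / noise_v)

# 100 mV of noise on a 10 V input range:
print(round(noise_limited_bits(10.0, 0.1), 1))   # ~6.6 bits, well short of 8
```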

Regards

cww
 
B
on the other hand ...

With many industrial temperature sensors, the analogue-digital conversion is active over the maximum span of the transmitter - so for a 1000 degC maximum span, the 12 bits or whatever is applied to the full 1000 degC. Adjust the transmitter to give a lower span, and you are limiting the resolution available over that reduced range.

In the worst case, this can lead to a temperature signal that is quantised in steps of about 0.25 degC. And this plays merry hell with control loops using derivative (as the books say you can on temperature loops). With a span of say 0-100 degC, the increment between digital steps in this case is quite high, and gives a nice kick to the controller output. It also makes it hard to explain to operators why the temperature won't sit steady on 56.5 but jumps from 56.3 to 56.7.
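A sketch of the quantisation described above, assuming an ideal 12-bit conversion fixed over the transmitter's 1000 degC maximum span:

```python
step_c = 1000.0 / 2 ** 12     # degC per code over the full span
print(round(step_c, 3))       # ~0.244 degC per code

# A true temperature of 56.5 degC can only be reported on these steps:
reading = 56.5
quantised = round(reading / step_c) * step_c
print(round(quantised, 2))    # 56.4 - the value can never sit on 56.5
```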

There is a good explanation of this in Greg McMillan's "Tuning and Control Loop Performance". Luckily, I'd read this chapter the night before I encountered the problem. Just sometimes, life turns out OK.

Bruce.
 
W
This is a good time to set the number of decimal places in the display to "zero". (big grin)

Walt Boyes

---------SPITZER AND BOYES, LLC-------------
"Consulting from the engineer
to the distribution channel"
www.spitzerandboyes.com
[email protected]
21118 SE 278th Place
Maple Valley, WA 98038
253-709-5046 cell 425-432-8262 home office
fax:801-749-7142
--------------------------------------------
 
M

Michael Griffin

I am sure you are aware that resolution and accuracy are two different things, but this is a point which should always be kept in mind. Some of the newer low cost industrial hardware combine high resolution with low accuracy. It is usually cheaper to increase resolution than it is to improve accuracy. In most applications, having a more precise reading of an incorrect value does little good. The accuracy must be maintained all the way from the thing being sensed to the point at which the value is used.

There has been some discussion of using increased resolution to amplify an instrument range (e.g. use 1/4 of the nominal range as full scale). We must be careful when doing this, as this often amplifies the error as well as the reading.

In the above example, it is desired to use a level probe to measure 200,000 litres of milk to within 1 litre. In other words, we are trying to measure the level to a high degree of accuracy, and then equate level with volume. I am not familiar with the dairy industry so I am not sure how this problem is usually solved, but I can imagine so many potential sources of error (in the whole system, not just the instrument) that I could not say how to do this with a direct level measurement. I would be very interested if someone could explain to me how this is accomplished.


************************
Michael Griffin
London, Ont. Canada
************************
 
P
I think the physical shape of your silos will change between empty and full much more than the 1 litre resolution you are hoping to achieve. Have you tried girthing your silos?

I think you will have to measure something other than level to account for your throughputs and losses.

Good luck anyway.

Peter Green
[email protected]
 
> 12 bit resolution is 4,096 steps or 0.0244% of range. <

While 12 bits can have 4096 distinct values, i.e. 0-4095, there are only 4095 (2^12 - 1) steps in 12 bits, not 4096. Therefore, 12 bit resolution is one part in 4095.

Bill Mostia
=====================================================
William(Bill) L. Mostia, Jr. P.E.
Partner
exida.com
Worldwide Excellence in Dependable Automation
[email protected](b) [email protected](h)
www.exida.com 281-334-3169
These opinions are my own and are offered on the basis of Caveat Emptor.
 
R

Ranjan Acharya

I get to play with level sensors from time to time. Horrible things.

For accurate measurement I would be more inclined to measure in and out with a flow meter. They can be quite accurate, and I am sure you could find one even for a colloid/emulsion like milk. I just did a project with some beauties from Endress+Hauser for solvent flow, and we got down to a few grammes of accuracy on multi-kilogramme flows.

Perhaps you could keep the level sensor as a secondary verification to make sure that the milk is not disappearing.

Just an idea anyway - I do not do the measurement side, I just write the code to use the data.

RA
 
S

Steve Myres, PE

I agree. Furthermore, if it is differential product mass flow and inventory control that you want, measure the weight of the tank. Then the structural rigidity of the tank walls and other things remain irrelevant like they should be. I say always try to measure the parameter you really care about, rather than measure something else and make an inference.

In an industry where I sometimes work, there is an application for continuously pumping solution from a collecting sump up to a working reservoir, from whence it continuously spills back down to the lower tank. Since the work is done in the upper tank, customers sometimes like to monitor the function of the pump. Flow and pressure switches in the pump pressure line are common solutions. I say, if you wake up one morning and God has repealed the law that says fluid runs downhill, as long as the upper tank is full, do you care? No. So measure the parameter that really matters, which is fluid level in the upper tank. This way, you are not exposing yourself to the possibility of some confounding factor like a closed valve preventing flow when the pressure switch is happy, or a broken line starving the upper tank when the flow switch is happy.
 