Accuracy (was Global Warming)

Thread Starter

Bruce Durdle

From: Bruce Durdle <[email protected]>
To: "'[email protected]'" <[email protected]>
Subject: RE: SENSOR: Accuracy (was Global Warming)

(Now we're getting into some measurement-related stuff!)

Yes, the uncertainties of a single system made up of components must be added to give the overall uncertainty of the resulting measurement. But the addition is not linear - it is on an RMS basis. So if I have an RTD with an uncertainty of +/- 2 deg, and it is connected to an indicating gauge with an uncertainty of +/- 2 deg, the overall uncertainty of the reading is SQRT(2^2 + 2^2) = SQRT(8) = +/- 2.83 deg. This takes into account the possibility that a + error in one device is compensated for by a - error in the other.
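As a quick check of that arithmetic, here is a minimal Python sketch (the figures are just the ones from the example above, nothing more):

import math

# Independent uncertainties combine on an RMS (root-sum-of-squares) basis.
def combine_rms(*uncertainties):
    return math.sqrt(sum(u ** 2 for u in uncertainties))

# RTD at +/- 2 deg feeding an indicator at +/- 2 deg:
print(combine_rms(2.0, 2.0))   # 2.828..., not the 4.0 that linear addition would give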

When talking about a large number of devices, each accurate to the same specified uncertainty, the "accurate" value is taken (pretty well by definition) as the mean of the indicated values of each individual element. Taking the mean of a small sample from a large population gives a result which is more accurate than any of the individual readings - this is of course the basis of the SPC method. The uncertainty of the mean of readings from 5 RTDs will be the uncertainty of any individual value divided by SQRT(5).
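In the same spirit, a small sketch of the "uncertainty of the mean" rule (assuming, as above, that the five RTD errors are independent of each other):

import math

# Standard uncertainty of the mean of n independent readings,
# each with individual uncertainty u_single.
def uncertainty_of_mean(u_single, n):
    return u_single / math.sqrt(n)

print(uncertainty_of_mean(2.0, 5))   # about 0.89 deg for five RTDs each good to +/- 2 deg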

So in your example, if you take the mean of the readings of A and B, the result will be nearer the true temperature than either reading alone (and certainly not worse, as your argument would indicate).

Perhaps another question ... How much have temperature readings over the last 100 years been affected by changes in the definition of the
International Temperature Scale?

Bruce.
 
Erich Mertz
In the example that Bob Pawley gave, the only condition in which the uncertainties make any sense, i.e., in which they are accurate representations, would be if the bath that A and B measured were at 0 degrees. Another way of looking at it is that independent multiple measurements increase your knowledge and remove uncertainty.

Regards
Erich Mertz
[email protected]
 
Bob Pawley
I am afraid that there is a misunderstanding here. If the temperature reading is 23 degrees with an inaccuracy of +/- one degree, the actual temperature can be anywhere between 22 and 24 degrees. Just because there is one element reading 24 and another reading 22, one cannot assume that the actual temperature is 23. The actual temperature may very well be 22 degrees, 22.5 degrees or any point within the uncertainty.

The point is that within the devices' tolerance levels one cannot be certain of any reading. To assume that the actual temperature is the midpoint between the readings of two elements is merely an assumption.

Unfortunately a lot of assumptions are made on this subject. Including, occasionally, by me.

Bob Pawley
250-493-6146
 
Frederick Bloggs

With reference to the root thread of Global Warming, you need to reflect on this:

Typical temperature sensors used by climatologists have uncertainties down to a few millikelvins for temperatures less than 373K.

Climatologists currently measure to traceable NIST ITS-90 standards.

There must be some confusion between garden-variety industrial sensors (e.g., thermocouple and RTD) and those used by agencies monitoring
the deteriorating global climate.

FB
 
Michael Griffin

Since we are now discussing sensor accuracy, I'll join in the discussion. The key to the confusion caused by the above is the statement "one can not be certain of any reading". A statistical view does not increase the accuracy of any one reading. Rather, it decreases the uncertainty of knowing what the actual temperature is by looking at the aggregate of many independent readings.

This may not be readily apparent in the example above because your sample size is so small (two samples). However, consider this: the 23 degree average is more likely to be closer to the true value than selecting either one of the 22 or 24 degree individual readings. As the number of samples increases, the statistical uncertainty decreases.
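A small Monte Carlo sketch of that point, under the illustrative assumption (mine, not a datasheet fact) that the +/- 1 degree spec behaves like independent, zero-mean noise on each element:

import math
import random

TRUE_TEMP = 23.0
TRIALS = 100_000

single_sq = 0.0
mean_sq = 0.0
for _ in range(TRIALS):
    a = TRUE_TEMP + random.uniform(-1, 1)   # element A reading
    b = TRUE_TEMP + random.uniform(-1, 1)   # element B reading
    single_sq += (a - TRUE_TEMP) ** 2
    mean_sq += ((a + b) / 2 - TRUE_TEMP) ** 2

print(math.sqrt(single_sq / TRIALS))   # RMS error of one reading alone: about 0.58 deg
print(math.sqrt(mean_sq / TRIALS))     # RMS error of the two-reading average: about 0.41 deg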

I think you will find the +/- 1 degree specification you have cited is itself statistically derived. That is, quite probably (so to speak) +/- 1 degree represents +/- 3 standard deviations, 19 times out of 20. If that
isn't quite the calculation used, the correct one would (likely) be something similar.

************************
Michael Griffin
London, Ont. Canada
************************
 
Erich Mertz
To clarify, see my notes below. Let me say, however, that I welcome any comments if I have misunderstood the situation.

Regards

Erich Mertz
[email protected]

> ---------- Forwarded message ----------
> From: Bob Pawley <[email protected]>
>
> I am afraid that there is a misunderstanding here. If the temperature
> reading is 23 degree with an inaccuracy of +/- one degree the actual
> temperature can be anywhere between 22 and 24 degrees. Just because there is
> one element reading 24

This is element A and it indicates that a bath of unknown temperature is between 23 and 25 degrees since the measurement uncertainty of A is +/- 1
degree.

>and another reading 22

This is element B and it is measuring the same bath of unknown temperature. It is indicating that the bath is between 21 and 23 degrees. If the uncertainties have been reported correctly, the sensors can be in agreement only if the bath is at 23. My point is that this is an example in which one can see that multiple, independent measurements - and that is the key, they must be independent - reduce uncertainty.
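That consistency argument can be written out as a simple interval intersection (a sketch only, using the readings and the +/- 1 degree figure from the example):

def interval(reading, uncertainty):
    # Range of true temperatures consistent with one reading.
    return (reading - uncertainty, reading + uncertainty)

def intersect(first, second):
    low, high = max(first[0], second[0]), min(first[1], second[1])
    return (low, high) if low <= high else None   # None would mean the sensors contradict each other

a = interval(24.0, 1.0)   # element A: true value somewhere in [23, 25]
b = interval(22.0, 1.0)   # element B: true value somewhere in [21, 23]
print(intersect(a, b))    # (23.0, 23.0) - the only temperature both readings allow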

>one can not assume that the
> actual temperature is 23. The actual temperature may very well be 22
> degrees, 22.5 degrees or any point within the uncertainty.
>
> The point is - that within the devices tolerance levels one can not be
> certain of any reading. To assume that the actual temperature is the mid
> point between the reading of two elements is merely an assumption.
>
> Unfortunately a lot of assumptions are made on this subject. Including,
> occasionally, by me.
>
 
Bob Pawley
As far as global warming is concerned, I ask this of you: are these more accurate temperature systems in wide use now, and were they one hundred years ago?
 
Bob Pawley
As Samuel Clemens once wrote - "There are liars, damn liars and then there's statistics."

Most of the recent 'scientific' studies that have been done to warrant health warnings against certain lifestyle choices have been based on the study of statistics. Alcohol, milk, cholesterol (now known as both good and bad), etc., etc., etc.

Doom and gloom was prophesied based simply on statistical 'science'. Later, when real science, the act of observation and detection, came to the fore, all of these statistical warnings were discounted to one degree or another.

Let us hope that our world of industrial measurement is not ruled by the 'science' of statistics.

Bob Pawley
250-493-6146
 
Chris Jennings

I must say that your assumption that statistics is somehow at fault in the lifestyle choice studies is totally bogus.

Most studies that use statistics to determine if something is good/bad/indifferent are doomed to fail if all mitigating factors are not considered. For example, a study that concludes that people who drink red wine live longer, simply because the statistical average lifespan of people who drink red wine is higher than that of those who don't, has not reached a reasonable conclusion. It is more likely, for example, that people who drink wine are in a higher socio-economic class than those who don't, and that is the reason for the increased life expectancy.

To compare these studies with statistics used to determine the accuracy of temperature probes is like comparing chalk and cheese!

Let's not forget that the statistics don't lie; it's either the people interpreting them or the fact they haven't covered all the bases.

--
Chris Jennings Ph +61(0)351360417
Elect/Control Engineer Fax +61(0)351360540
Australian Paper Maryvale Mob +61(0)407320113
 
Let's try a different approach.

For the sake of simplicity, assume that the liquid has a purely homogeneous temperature.

(Note - Inaccuracy is not a construct employed by the manufacturer to explain differences in readings. Inaccuracy is a physical and manufacturing property of the elements - all elements - within the temperature monitoring system, and if you are knowledgeable about how the system is manufactured and assembled, then these inaccuracies can be, fairly 'accurately', quantified. Also, some people appear to assume that a +/- 1 degree accuracy means that the reading can only be exactly 1 degree out in either direction. If you were able to compare the sensors to a much more highly accurate standard, you would obtain readings from individual sensors at any point within the +/- 1 degree spread.)

Take two temperature elements, A and B, both with an accuracy stated as +/- 1 degree.

The temperature readings from both just happen to be 23 degrees.

How can you be sure exactly what the actual temperature is? You can be certain that the two elements chosen have the same inaccuracies, physically generated in an identical manner. What you cannot do is assume that you have just happened to pick up the two elements that are absolutely accurate. Do you now determine that the actual temp is 23 degrees simply because both readings agree? I don't think so. The actual temperature can still be anywhere between 22 and 24 degrees.

If you added a third element and it read 23.5 degrees, are you now sure that the actual temperature is the statistical average of the three? In fact, what you now have is another unknown. Instead of an uncertainty between 22 and 24, you have added another factor which increases the uncertainty from a 2 degree spread to a 2.5 degree spread. The only way this isn't true is if you discount the reading that lies outside the others. However, if you discount the reading of 23.5, you are then in danger of eliminating the one element of the three that may be offering the most accurate absolute reading.

Uncertainty is uncertainty is uncertainty. You cannot reduce uncertainty by averaging arithmetically, statistically or by any other mathematical projection. The only way to defeat uncertainty is to physically remove it.

Bob Pawley
250-493-6146
 
Peter Whalley

Hi Bob,

The 3 sensor approach is in fact used in practice because it does work. In critically controlled environments in museums you will often see 3 identical space temperature sensors mounted together on the wall. If sensors A, B and C read 23.1, 22.9 and 27 deg C respectively, we can say with a high degree of confidence (but not absolute certainty) that the temperature is close to 23 deg C and that sensor C is faulty, needs to be recalibrated, and can be safely ignored for the time being. Majority rules.
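A minimal Python sketch of that "majority rules" idea (the 1 degree disagreement threshold is my own illustrative assumption, not a museum standard):

import statistics

def vote(readings, tolerance=1.0):
    # Use the median as the working value and flag any sensor that
    # disagrees with it by more than the tolerance.
    median = statistics.median(readings)
    suspects = [r for r in readings if abs(r - median) > tolerance]
    return median, suspects

value, faulty = vote([23.1, 22.9, 27.0])
print(value)    # 23.1 - close to the true temperature
print(faulty)   # [27.0] - sensor C flagged for recalibration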

This is also used in triple redundant control systems again because in practice it works.

It is also the basis for smoothing noisy signals using an RC network. The RC network averages the readings and reduces the noise (uncertainty). It's the same process, but applied to a time series of data samples from a single noisy source rather than to samples from many sources.
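In software the same idea is a first-order (exponential) filter over a noisy time series; a sketch only, with the noise level and filter constant chosen purely for illustration:

import random

TRUE_TEMP = 23.0
ALPHA = 0.05          # filter constant, analogous to the RC time constant

filtered = TRUE_TEMP  # seed the filter with the nominal value
for _ in range(1000):
    noisy = TRUE_TEMP + random.uniform(-1, 1)   # one noisy sample
    filtered += ALPHA * (noisy - filtered)      # first-order low-pass update

print(filtered)   # stays very close to 23.0, with far less scatter than any single sample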

Look at it another way. Imagine you have 1000 perfect temperature readings, each being 23 deg C, and for each one you roll a die. If it comes up 1, subtract 2 degrees from the reading; if 2, subtract 1 deg; if 3 or 4, do nothing; if 5, add 1; and if 6, add 2. You now have a list of 1000 readings which have an introduced error of +/- 2 degrees. Now add them all together and divide by 1000. You can be (almost) certain the resultant average will be very, very close to 23 deg. Whilst it is theoretically possible that the average would come out as high as 25 deg C, this would require every roll of the die to come up as 6, and the odds of this are very, very low.

When you add together many readings, the errors do accumulate, it is true, but they do so on an RMS basis, not a linear basis, so in the above example the sum of the readings is 23,000 +/- 32. But then, to calculate the average (since it is the average temperature we were concerned with), we divide both the sum and the uncertainty by 1000. This gives 23 +/- 0.032. The great reduction in uncertainty comes from both the RMS combination and the division by a large number which has no uncertainty of its own.

You can build a simple spreadsheet that demonstrates this very easily.
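Or, instead of a spreadsheet, the same experiment as a short script (same die rules as above; a sketch, not a proof):

import random

TRUE_TEMP = 23.0
ERROR_FOR_ROLL = {1: -2, 2: -1, 3: 0, 4: 0, 5: 1, 6: 2}

# 1000 "perfect" readings, each corrupted by one die roll's worth of error.
readings = [TRUE_TEMP + ERROR_FOR_ROLL[random.randint(1, 6)] for _ in range(1000)]

print(sum(readings) / len(readings))   # typically within a few hundredths of a degree of 23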

Regards.

Peter Whalley
 
I accept the charge of ranting. Many have been so charged, before being proven right. I am honoured to be included in such distinguished company.

Peter - of course statistics work.

That is what statistics is all about. Assume a scenario and obtain the results for which you happen to be looking. You want an "average" temperature? Create an average. That doesn't mean the result is the actual temperature - but it makes everyone feel warm and cozy.

That is all that mathematical projections do. The fact that we accept these projections (assumptions) does not make the reading any more accurate; it merely calms the uninformed.

Bob Pawley
250-493-6146
 