Global Warming

I thought global warming estimates were based on changes in sea levels, plant and animal distributions, ice formations, and so on, rather than on temperature measurements.

However, temperature measurement with RTDs is an interesting subject in its own right. I have worked with industrial instrumentation for many years and never had to achieve an uncertainty better than about 0.5 degrees C. Recently I have had to do some homework on how to achieve an uncertainty of 0.03 degrees C in the 25 to 80 degrees C range. Do I think I can achieve that within a reasonable budget? ... Yes. Am I absolutely sure? ... No. Would I change the current plant calibration equipment and procedures (dry block/reference thermometer/associated procedures) for general work? Probably. So, for the experts: how should I go about it and minimise costs?

Vince Dooley
Probably the best measure of temperature fluctuation does not require a thermometer, RTD, or thermocouple, but rather the average latitude at which the polar icecaps start. This is a highly repeatable measurement based on the freezing point of water which has little margin for error (though it is possible to make highly accurate and repeatable temperature measurements by other means, typically by multi-point calibration and linearisation of sensors).

So the requirements for a scientific theory, and observations to support or reject it, are met. The (very reasonable) hypothesis is that carbon dioxide emissions cause a build-up of atmospheric carbon dioxide, tending to retain heat that would otherwise be radiated into space. The evidence for this can be provided by the recession of the icecaps (I don't actually know whether they are receding, but it can be fairly easily checked), supported by traditional temperature measurements using repeatable techniques (for example a simple mercury thermometer!).

There are other techniques for measuring longer term climatic variations, most notably dendrochronology and the analysis of earth or ice cores.


Tim Linnell
Ken Irving wrote:
>I'm not necessarily a "supporter of global warming", but I think you'd do well to check your assumptions of how the numbers and trends have been derived. I think it is naive to think that scientists are not clued in to this level of detail.<

I don't believe I am being naive at all. I have seen this lack of appreciation for the inherent uncertainties of temperature reading within even our industry, which should give us all pause.

Bob Pawley

George (Jim) Hebbard

The politics of global warming do not belong in this thread. However, the science turns on differences of opinion which cannot be reconciled without a clear knowledge of how the various sensors, or proxies for sensors, are calibrated. For example, one of the most popular proxies is tree growth ring width. I'm fairly confident that it responds to the average local CO2 in the atmosphere as well as to local moisture and temperature. How about the alternative, oxygen isotopes in ice bubbles!

To avoid a raft of e-mails diverting from the value of this wonderful list-server, consider the arguments going on elsewhere: "There WAS no medieval maximum nor a little ice age (LIA) around 1735" vs "Yes there was. We are now only warming up to the normal high temperatures seen before the LIA."


And let's drop this thread? :)

Peter Whalley

Hi Bob,

I think your premise, that the temperature measurements taken by meteorologists over the last hundred years use the same technology that we use, is fundamentally flawed. A range of measuring instruments would have been used, for the most part mercury thermometers read directly by people, so the accuracy or otherwise of temperature measurements made in factories is of no consequence.

The second thing is that when you average thousands or millions of measurements, the likely error in the average value decreases dramatically, provided the instruments and their individual measurements have independent random errors, which could well be the case. Merely saying that the accuracy of the individual measuring instruments is only +/- 1 deg is not enough to discredit the measurements. You would need to show that the errors were systematic (that is, all measurements were high or low), and if you could prove this then the systematic error could be removed in any case.
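This averaging point can be checked with a quick simulation; the following is an illustrative sketch only, assuming the individual errors really are independent and uniformly spread over +/- 1 degree:

```python
import random
import statistics

def mean_error(n_readings, n_trials=2000, true_temp=20.0, seed=1):
    """Average absolute error of the mean of n readings, each with an
    independent random error uniform in +/- 1 degree."""
    rng = random.Random(seed)
    trial_errors = []
    for _ in range(n_trials):
        readings = [true_temp + rng.uniform(-1.0, 1.0) for _ in range(n_readings)]
        trial_errors.append(abs(statistics.fmean(readings) - true_temp))
    return statistics.fmean(trial_errors)

# the error of the average shrinks roughly as 1/sqrt(n)
for n in (1, 10, 100, 1000):
    print(n, round(mean_error(n), 3))
```

With independent errors, the average of 100 readings is good to a few hundredths of a degree even though each individual reading is only good to +/- 1 degree; the caveat, as the post itself notes, is that systematic errors do not shrink this way.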

There is also a substantial difference between the accuracy obtained by connecting a sensor to a controller compared to a laboratory calibrated
integrated measuring instrument. Take a look at:


This laboratory claims to be able to calibrate measuring instruments to an accuracy of +/-0.003 degrees C over the range -38 degrees C to +200 degrees C.

You would need to talk to meteorologists rather than industrial controls people to understand the accuracy of meteorological temperature measurements.


Peter Whalley
Good luck Vince. Please let me know how you do.

At one point in my career I happened to spend thousands of project dollars "calibrating" RTDs to accuracies that would let us monitor a 0.5 degree change in absolute terms.

All we proved was that it could not be done, at least not with RTDs priced in a range we could afford. We had to fall back instead on measurement repeatability, the saving grace of RTDs.

Bob Pawley

Clark Southoff

I have watched the comments with interest.

In Canada, the size of one province alone, Alberta, is much larger than Norway. We are sometimes a victim of urban sprawl, and we take advantage of large resources widely scattered over vast regions. Some of our daily temperature variations can exceed 40 degrees C. And yet we're planning gas pipelines from the Arctic Ocean to Chicago.

The notion that government policy is based on undisputed facts is illusory. I do not think a government will last if it legislates that 20% of the population must freeze to death to meet Kyoto agreements! Jobs in Canada will migrate to the US or Mexico.

If we do not discuss, debate and examine our work for more efficient methods, the lights will go out.

The Kyoto Accords have been described as the most massive conspiracy to transfer 50% of the wealth of the G-8 countries to the third world, with no hope of actually reducing worldwide CO2 emissions.

The G-8 countries could install the best available technologies in third world countries to reap CO2 credits. The market for those credits could fund upgrades of utilities and industry in the G-8 countries. This all falls through if the US does not play along.

A different mind set can be explored. How about increasing the energy efficiency of existing processes and systems? There can be minor changes (tuning) that net energy savings of at least 1%. Have we examined our systems to see what 5 or 10% efficiency gains would cost?

An American cousin of mine, who lives in Dallas, Texas, was complaining of the high cost of energy for air conditioning, certainly a necessity when the temperature can jump above 40 degrees C. His cost to cool his house was the equivalent of my entire heating cost for the winter in Calgary, Alberta, Canada.

I suggested an obscure invention: it keeps the heat on one side and the cold on the other. It is called insulation. He insulated his house and his energy costs dropped by 65%. That was coal-fired electrical power.

As for industrial processes, we have seen the widespread adoption of variable frequency drives, with energy costs dropping by 30%.

Might I suggest that we continually challenge the "pseudo science" proffered in the unenlightened realms of some media outlets, AND examine with equal vigor our own assumptions about the efficiency of the processes and equipment that we control or specify.

Good discussion, keep it up.

Best regards
Clark Southoff
Technology Wranglers Inc.
Calgary, Alberta, Canada
It is well known that sea water can go below the established freezing point before ice is formed. Water temperature measurements of minus three degrees Celsius are rather common.

If you are looking for absolute accuracy when calibrating industrial temperature systems, one needs to understand what the freezing point means. A mere 12 ppm of contamination in distilled water can change the freezing point by an estimated 0.01 degrees C.

At the other end, the calibrator's altitude above sea level affects the boiling point. How accurate are the corrections for altitude? How accurate is the altitude measurement itself?

All this is to say that temperature monitoring, whether atmospheric or industrial, has a score of variables that make temperature readings uncertain. The degree of uncertainty appears to be more than +/- one degree C. All observations made, be they atmospheric or industrial, should carry that fact as a caveat. IMHO.

Bob Pawley
Political considerations aside, it turns out that the whole question of "global warming" is an interesting study in measurement technique, measurement error (and the correction techniques used to adjust for it), and the impact of human involvement in measurement and interpretation (not to mention political and social impacts on scientific technique). The ambiguities involved in determining whether our climate is getting warmer should sound a cautionary note for anyone attempting to make critical measurements in other complex applications.

There is an interesting website that brings forward a lot of information about how climatic measurements are made, and how they were made historically, and highlights some of the questionable research that has been published in this area. If you can get past the unabashed advocacy of the site (he's brutal when it comes to debunking poor research) and the horrible website design, there are some good references for anyone interested in pursuing this area further:


While I'm a believer in taking great care of our environment, I'm also a believer in being very careful about decision-making where massive
public expenditures are concerned. Every dollar spent in the wrong direction due to faulty science is a dollar that cannot be spent in the
right direction, and lay people must depend on those who are well-versed in measurement technology (like some of those reading this forum) to determine the quality of the data being deployed.

Ken Crater Inc.
[email protected]
Our industry also uses simple mercury thermometers, mostly as part of the equipment when calibrating RTDs.

Can anyone tell me the stated accuracy of these devices?

Bob Pawley

Higginbotham Ricky (External)

My car won't start if my battery is dead. I've tried it; it's a factual statement based on real-world observations. "Scientific" and repeatable (with no known discrepancies). So if I get into my car and it won't start, I know I have a dead battery (as opposed to having run out of gas), right?...

In order to prove a relationship you have to factor out external influences. It doesn't matter whether you are trying to prove the temperature is on average 0.5 deg. warmer or that my car won't start.

cart horse

Richard Higginbotham
speaking for me
Bob Pawley wrote:
> I don't believe I am being naive at all. I have seen this lack of
> appreciation for the inherent uncertainties of temperature reading within
> even our industry, which should give us all pause.

I wouldn't argue with that point, but to start from this one assumed source of uncertainty and conclude that concerns about global warming are therefore necessarily flawed is, to me, a stretch. Do you have any actual evidence that errors in RTD sensor data are such a key part of the argument?


Ken Irving <[email protected]>
Unfortunately, uncertainty of measurement does not average out. If you take two devices, both with an uncertainty of +/- 1 degree, and the readings are -1 on "A" and +1 on "B", you cannot say, with any accuracy, that the actual reading is 0 degrees.

Uncertainty is uncertainty and it is the same (or greater) with one reading or one million readings.

Making the uncertainty factor even worse than the simple specs of the temperature element are the uncertainty that all elements are calibrated exactly the same, the uncertainty that all of the electronics are equally accurate, and the uncertainty that all RTD elements or thermometers are manufactured exactly to spec.

They aren't, which is why all manufacturers hedge their specs with an accuracy similar to +/- 1 degree or greater.

Bob Pawley
All atmospheric and industrial temperature measurements outside of the laboratory are accomplished with RTDs, thermometers and less accurate devices such as thermocouples. RTDs, IMHO, are the most accurate of these devices.

All scientific observations and computer models that describe the real world heavily rely on these devices.

While wasting project money calibrating RTDs, I was using a highly accurate temperature monitor. Placing this monitor in different areas of the oil bath produced temperature variations of almost 0.5 degrees. This was a closed, properly insulated container using a fluid that is highly conductive, at least compared to air and water.

Even if the real-world measurement systems were as accurate (certain) as my monitoring device, the difference from point to point a few inches apart can be as great as the one degree temperature rise claimed by the GW supporters.

Industrial processes and the atmosphere are complex dynamic systems. Measuring these systems, for temperature in particular, is more of an art than a science. We techs, in the industry, are the ones who bring the tools to the artists - the process operators.

What we need to understand is how these tools work, in all of their peculiarities, in order to provide the best tools possible.

Bob Pawley

Peter Whalley

Hi Bob,

Actually, in a strictly logical sense you could be "certain" the actual temperature was 0 deg., as any other temperature would be outside the uncertainty range of either sensor A or B. That is, if the true temperature was +0.5 deg then the error in the reading from sensor A would be 1.5 deg, whereas you have stated that the error must be within +/-1 deg.

If you had 2 measurements as stated you would suspect that one or both of the sensors had in fact gone out of calibration and would recalibrate them.

If the 2 measurements were say +0.5 deg and -0.5 deg then the actual temperature would be 0.0 +/-0.5 deg. by the same logic.
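The interval logic above can be sketched in a few lines of Python (a minimal illustration, assuming each sensor honours its stated +/- 1 degree accuracy):

```python
def feasible_range(readings, accuracy=1.0):
    """Intersect the intervals [r - accuracy, r + accuracy] implied by
    each reading. Returns (low, high), or None when the readings are
    mutually inconsistent (i.e. a sensor must be out of calibration)."""
    low = max(r - accuracy for r in readings)
    high = min(r + accuracy for r in readings)
    return (low, high) if low <= high else None

print(feasible_range([-1.0, +1.0]))  # (0.0, 0.0): only 0 deg fits both
print(feasible_range([+0.5, -0.5]))  # (-0.5, 0.5): 0.0 +/- 0.5 deg
print(feasible_range([+1.5, -1.5]))  # None: recalibrate!
```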

If you had 100 measurements (of the same actual temperature but each taken using a different device of the stated accuracy) which were randomly distributed over the range +0.5 deg and -0.5 deg then it would be theoretically possible for the actual temperature to fall anywhere within the range but statistically much more likely that the temperature was very
close to 0 deg C.

When you average large numbers of readings it becomes an exercise in probability, so that what you should have is a statement like: "the temperature is 0.0 deg +/- 0.1 deg with a confidence level of 99%." That is, you are not absolutely certain of the error range, but you have a high level of confidence in it.

Of course, all this assumes that there are no systematic errors to contend with, and these may well be the major issues of contention.

BTW, the web site cited below suggests reading:

"....Traceable Temperatures by J V Nicholas and D R White, available from John Wiley & Sons Ltd, (it) comprehensively discusses the subject of
temperature calibration, the selection of equipment and measurement methods. Anyone concerned with temperature measurement should have a copy of this book..."

BTW2, the process of calibrating temperature sensors using buckets of ice etc. is not the only approach. Other alternatives worth considering are:

1. Calibration against a measuring instrument of known accuracy, i.e. buy or hire a hand-held precision temperature measuring instrument and have it laboratory calibrated. Then use this as your temperature standard when calibrating temperature sensors.

2. Buy factory calibrated 100 ohm platinum RTD sensors complete with 4-20 mA transducers. I have in the past priced these and could obtain units calibrated to +/-0.1 deg C at 23 deg C for about $450 Australian. Calibration after installation is then a matter of measuring the current output of the transducer, which can be done with a high degree of accuracy using a standard multimeter.

Both of these approaches are a lot easier than messing about with buckets of ice etc.
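For the second alternative, reading the sensor back in the field reduces to scaling the loop current. A sketch, assuming a transducer ranged 0 to 100 deg C over 4-20 mA (the span here is hypothetical; use the figures on your unit's calibration sheet):

```python
def ma_to_temp(current_ma, t_low=0.0, t_high=100.0):
    """Convert a 4-20 mA loop current to temperature for a linear
    transducer spanning t_low..t_high over 4..20 mA."""
    if not 3.8 <= current_ma <= 20.5:
        raise ValueError("current outside plausible loop range")
    return t_low + (current_ma - 4.0) * (t_high - t_low) / 16.0

print(ma_to_temp(12.0))  # mid-scale -> 50.0
```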


Peter Whalley.

Peter Whalley

Magenta Communications Pty Ltd

Melbourne, VIC, Australia

e-mail: peter*no-spam*
delete *no-spam* before sending
There are two parts to this really, of which I think the second may be enough to drag this thread back somewhere near on-topic.

Briefly, however, the first point is that if the assertion is that the scientific community do not understand errors in measurement, and are propagating misleading information on global warming on the basis of faulty measurements, then it is demonstrably false, and rather patronising. Measurement errors have been understood for hundreds of years, and perfectly good statistical methods known to most high school students (for example, the use of the standard deviation of multiple measurements) can be used to filter discernible trends out of data subject to random fluctuations within known boundaries. Indeed (back on topic!) these techniques are the bedrock of Statistical Process Control, in which process drift can be discerned before measurement equipment actually goes out of calibration.

(If I can risk an observation on the argument - I don't take sides on the actual issue of global warming since I have never reviewed the data - it would be that there are a great many statements of the "It's well known that...", "I'm told that...", and so on, from those arguing that there is no scientific basis for the observations tending to support the hypothesis, i.e. hand-waving anecdotal statements with no sound data backing them. So, frankly, if you want to claim the science backs your statements, then use science, not the same emotive and rhetorical tactics as the 'greens'!)

The second point relates to the absolute accuracy of industrial temperature measurements (a subject close to my heart, since I work for a temperature control company!). There are two principal factors determining the accuracy of measurements. The first is the resolution of the analogue input over the target range; this is most likely the limiting factor in the +/-1 degree figure quoted, as the RTD inputs in PLCs (for example) tend to be fairly limited, i.e. 12 bits or so covering a range of +/- 1000 degrees. However, it is perfectly possible to get arbitrarily high input precision (obviously, the smaller the input range in absolute terms, the better the accuracy for any given resolution on the input).

The second factor is the linearisation technique used to provide a value. Typically this is done using a polynomial, which introduces some errors as the fit is not uniformly perfect. However, it is again possible to use different techniques, or to overlay further linearisation polynomials to improve accuracy in different ranges. Calibration bath manufacturers, for example, are capable of extraordinarily high precision and repeatable accuracy in a given (limited) range.
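The resolution arithmetic behind that first factor is easy to check: a 12-bit input spread over +/- 1000 degrees resolves about half a degree per count, while the same converter over a 0 to 100 degree span resolves better than 0.025 degrees (ignoring noise, linearisation and sensor errors, which is of course optimistic):

```python
def lsb_degrees(span_degrees, bits):
    """Smallest temperature step resolvable by an ideal ADC of the
    given bit width spread across the given span."""
    return span_degrees / (2 ** bits)

print(round(lsb_degrees(2000.0, 12), 3))  # +/-1000 deg range: 0.488 deg/count
print(round(lsb_degrees(100.0, 12), 3))   # 0..100 deg range: 0.024 deg/count
```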

All opinions personal, as usual!


Tim Linnell (Eurotherm Controls)
If you have two elements, both with +/- 1 degree accuracy, and in free air "A" was giving you a reading of +1 degree, you could not be sure if the actual temperature was zero or +2 degrees, or anywhere in between.

If "B" also gave a reading of +1 degree, you would not know if the actual temperature was zero or +2 degrees.

In both cases you have an uncertainty of 2 degrees.

Say that "B" gave a reading of -1 degree. You cannot be certain if the actual temperature is -2 degrees or zero.

If you took both readings, "A" in the first instance and "B" in the second, your uncertainty has now increased to FOUR degrees of separation.

Inaccuracies, or uncertainties, are not averaged out. They are in fact ADDED together.

So the global warming people are probably dealing with inaccuracies of MANY times greater than the temperature rise they have estimated over the last

Bob Pawley

Lee Eng Lock

For high accuracy measurements over a limited range, e.g. the HVAC range of say 0 degC to 50 degC, one can easily use thermistors. The superstable ones from Yellow Springs Instruments of Ohio can be obtained with standard matching of +/-0.2 degC; we normally hand calibrate them to better than 0.03 degC using gallium cells, triple point of water cells and precision water baths such as those made by Hart Scientific. The linearisation curves are very accurate within the working range when one calibrates with three points.
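Three-point thermistor calibration of this kind is commonly done with a Steinhart-Hart fit, 1/T = A + B ln(R) + C (ln R)^3. A stdlib-only sketch; the resistance values below are illustrative figures for a generic 10 kohm NTC, not certified data for any particular sensor:

```python
import math

def steinhart_hart_fit(points):
    """Fit 1/T = A + B*ln(R) + C*ln(R)**3 exactly through three
    (R_ohm, T_degC) calibration points. Returns (A, B, C).
    Solves the 3x3 linear system by Gaussian elimination."""
    M, y = [], []
    for r, t_c in points:
        x = math.log(r)
        M.append([1.0, x, x ** 3])
        y.append(1.0 / (t_c + 273.15))
    # forward elimination with partial pivoting
    for i in range(3):
        p = max(range(i, 3), key=lambda k: abs(M[k][i]))
        M[i], M[p] = M[p], M[i]
        y[i], y[p] = y[p], y[i]
        for k in range(i + 1, 3):
            f = M[k][i] / M[i][i]
            for j in range(i, 3):
                M[k][j] -= f * M[i][j]
            y[k] -= f * y[i]
    coeffs = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):  # back substitution
        coeffs[i] = (y[i] - sum(M[i][j] * coeffs[j] for j in range(i + 1, 3))) / M[i][i]
    return tuple(coeffs)

def thermistor_temp(r_ohm, A, B, C):
    """Resistance (ohms) -> temperature (degC) via the fitted curve."""
    x = math.log(r_ohm)
    return 1.0 / (A + B * x + C * x ** 3) - 273.15

# illustrative three-point calibration data for a generic 10 kohm NTC
pts = [(33620.0, 0.0), (10000.0, 25.0), (3590.0, 50.0)]
A, B, C = steinhart_hart_fit(pts)
print(round(thermistor_temp(10000.0, A, B, C), 4))  # recovers 25.0 at a fit point
```

The fit passes exactly through the three calibration points, and within that working range the interpolation error of this form is typically well below the 0.03 degC budget quoted.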

Triple point cells like the Jarrett, in proper baths, can establish better than 0.001 degC accuracy at the triple point. The gallium cells are able to obtain 0.002 degC at 29.7646 degC, while the precision water baths are rated for uniformity across the entire volume and stability to better than 0.005 degC.

We use HP 3457A data acquisition systems with 7-1/2 digit voltmeters for HVAC use, sampling through instrument reed relays at a one-minute period. The drift of the superstable thermistors within their working range has been tested by Oak Ridge National Labs and others to better than 0.01 degC over 10 years.

You might want to check the websites of Yellow Springs, Isotech or Hart Scientific for their high accuracy sensors, displays and calibrators.

I don't think anyone has argued with that (last) point, but the only connection I've seen between "sensors are inaccurate" and "global warming is ..." is an array of pronouncements and assumptions offered as fact. You've ignored the argument that many (randomly) inaccurate readings can reduce the overall uncertainty when taken together. You claim that "all ... models ... rely on these devices", but have offered no convincing (to me) argument that their inherent uncertainties are simply ignored by the modelers or supporters (?) of global warming.

Ken Irving <[email protected]>

Vince Dooley

Thank you, Lee. You have given me a certain amount of reassurance and some useful information. I am currently doing some work where I hope to achieve an uncertainty better than 0.03 degC over the range 25 to about 90 degC. You may have seen my post asking for information. What I did some weeks ago was order one of Isotech's TTI series bench-top thermometers and a reference RTD. For QA checking during the test work I ordered one of their stirred baths, a water triple point cell and a gallium cell. Having been through the process of determining what equipment was required, I still had a few doubts.

When the tests are finished I will need to determine a fixed temperature for future work in the field. It will be determined to some extent by the test results, but I expect it to be in the 50 to 70 degC range. The thermistors seem like the obvious choice. Would you agree? If so, is there any thermistor transmitter that you would recommend for on-line work?

I feel uneasy not having a QA check above the gallium melt point cell temperature. I thought about using a laboratory standard barometer and boiling DI water but I estimate that at best it would have an uncertainty of around 0.04 to 0.05degC. Have you done any work with boiling water as a reference?
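On boiling water as a reference, the pressure sensitivity behind that 0.04 to 0.05 degC estimate can be reproduced from the Clausius-Clapeyron relation. A rough sketch only (constant latent heat assumed, which is fine for small departures from standard pressure; not a substitute for the proper ITS-90 treatment):

```python
import math

def boiling_point_c(pressure_hpa):
    """Boiling point of pure water vs barometric pressure, via the
    Clausius-Clapeyron relation with constant latent heat."""
    R = 8.314462   # gas constant, J/(mol K)
    L = 40660.0    # latent heat of vaporisation near 100 degC, J/mol
    T0 = 373.124   # boiling point at standard pressure, K (ITS-90)
    P0 = 1013.25   # standard pressure, hPa
    return 1.0 / (1.0 / T0 - (R / L) * math.log(pressure_hpa / P0)) - 273.15

print(round(boiling_point_c(1013.25), 3))                             # 99.974
print(round(boiling_point_c(1013.25) - boiling_point_c(1012.25), 4))  # ~0.028 degC per hPa
```

At roughly 0.028 degC per hPa, a barometer good to +/-1.5 hPa already contributes about 0.04 degC before water purity and immersion effects are counted, which is consistent with the estimate above.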

An interesting aside is that, having been through this exercise and having gained some understanding of the structure and fragility of RTDs and the possible errors, I will never again trust an RTD-based reference thermometer, irrespective of the traceability or currency of its certification, without having access to a water triple point cell to verify that it hasn't changed. I was also surprised at how relatively inexpensive it is to have the water triple point cell facility. I took the easy way out and bought one that is easy to use. A home-built cell might be a bit messy to use, but I believe Scientific American did an article on building a water triple point cell for $50. That's hard to beat for accuracy versus cost.

Thanks. Any info would be appreciated.

Vince Dooley