Error Calculation
By emvprasad on 7 April, 2005 - 12:27 pm

Hello friends,

Why is the accuracy of many instruments specified as a percentage of full scale, even though percentage of reading is superior?

You are requested to mail me at

With Regards,


By Walt Boyes on 8 April, 2005 - 12:46 am

Because one percent of full scale looks better than ten percent of reading.

Walt Boyes
Editor in Chief
CONTROL magazine
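
The point above can be made concrete with a short numerical sketch. The 0-100 kPa range and the 1% figure are illustrative assumptions, not from any particular datasheet:

```python
# Hypothetical 0-100 kPa transmitter with a "1% of full scale" accuracy claim.
# The absolute error band is fixed across the scale, so expressed as a
# percent of reading it grows as the reading falls toward the bottom.
full_scale = 100.0               # kPa (assumed range, illustration only)
abs_error = 0.01 * full_scale    # +/-1 kPa everywhere on the scale

for reading in (100.0, 50.0, 10.0):
    pct_of_reading = 100.0 * abs_error / reading
    print(f"reading {reading:5.1f} kPa -> +/-{abs_error:.1f} kPa "
          f"= +/-{pct_of_reading:.0f}% of reading")
```

At 10% of scale the same instrument is effectively a 10% of reading device, which is exactly why the full-scale figure "looks better" on paper.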

By Sergey Y. Yurish on 8 April, 2005 - 11:17 pm

It is a tradition coming from the era of analog measuring instruments.

Today it depends on the measurand and/or the measuring instrument. For example, for a wide-range frequency counter it is expedient to use a percent of reading, but for pressure sensors, % FS.

Sergey Y. Yurish,
Editor in Chief
Sensors & Transducers Magazine

By Jon Watson on 12 April, 2005 - 6:58 am

A good question but perhaps one with a long answer if answered in full.

There is more to understanding accuracy claims than simply why some are % of reading and some % fsd.
One answer is that all meters exhibit a characteristic response that is related to their technology.

There is no reason today why we should prize a linear relationship. However, not all devices exhibit linear behaviour, and before such sophisticated electronics were available, the limitations of the electronics or of the mechanical display devices imposed limits on accuracy claims.

Devices which tended to be approximately linear tended to have their accuracy expressed in terms of % of reading over the linear range. For others, where neither a linear response nor characterisation was possible, a % fsd is more usual.

Even with better electronics, the next hurdle was the cost of the more intensive calibration needed for more complex response modelling.

We should note that different instrument types will have a conventional expression of accuracy (and in some industries this is standardised). So long as all manufacturers follow the convention, we are able to compare different manufacturers of the same technology on a like-for-like basis. We should beware of those who deviate from convention; they may have something to hide, or they may have something better to offer.

It always pays to try to understand exactly what the convention is for a particular instrument, and what it means in terms of actual performance.

A turbine meter has a reasonably linear response from 10-20% of range up to maximum range. This is referred to as the linear range, and its accuracy is often described as a % of reading within it. However, it has a poor viscosity response.
With modern electronics, curve fitting and even multiple calibration curves become possible, and we now have helical-bladed turbines with a better viscosity response (and now that pulse interpolation is accepted by API, they are becoming more common in hydrocarbon applications).

A variable area meter is typically described by a % fsd accuracy; in more sophisticated designs, by a combination of % fsd and % of reading.
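
A combined specification of that kind is usually written as the sum of a reading term and a full-scale term. A hedged sketch, with both percentages and the 0-10 L/min range invented purely for illustration:

```python
# Hypothetical combined spec: +/-(1% of reading + 0.5% of full scale).
# Neither term alone describes the device; the full-scale term dominates
# at low readings, the reading term at high readings.
full_scale = 10.0   # assumed 0-10 L/min variable area meter

for reading in (10.0, 5.0, 1.0):
    err = 0.01 * reading + 0.005 * full_scale   # absolute error, L/min
    print(f"{reading:4.1f} L/min -> +/-{err:.3f} L/min "
          f"(+/-{100.0 * err / reading:.1f}% of reading)")
```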

Vortex meters are designed to produce strong, stable pulses over as wide a linear range as possible. One wonders whether these design parameters have changed as they should: modern electronics and microprocessors, plus the ability to automate calibration, mean that intensive calibration routines are no longer the high cost they once were.

The Eastech vortex was quoted as a 1% of reading meter. In fact, as an early vortex meter with simple hard-wired links on the amplifier board, they were not individually calibrated, i.e. all vortex meters of the same size had the same meter factor.
The 1% was a manufacturing tolerance.
This persisted even after new electronics were introduced (because no one asked the simple question "why?"), so the manufacturer continued to offer them as 1% meters. One user fitted a linearising amplifier, individually calibrated the meter, and found a 0.1% accuracy.

The comment above about "what looks better" is also valid.
A typical fiscal density meter has an accuracy of 0.0001 kg/m3.
When mass flow meters were offered for density measurement, the units chosen were g/cc, not lb/ft3 (from US manufacturers) nor the pre-existing convention of kg/m3 (the SI standard), simply because the performance looked good expressed in g/cc. A simple conversion revealed a significant difference in performance.
Then too, note that performance is often quoted at 20 degC and laboratory conditions.
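
That "simple conversion" is worth doing explicitly. Taking a mass flow meter density spec of 0.0001 g/cc as an illustrative figure, against the 0.0001 kg/m3 fiscal density meter quoted above:

```python
# 1 g/cc = 1000 kg/m3, so the same digits in g/cc hide a much larger
# error band once converted to SI units.
mass_meter_acc_g_cc = 0.0001                    # illustrative vendor figure
mass_meter_acc_kg_m3 = mass_meter_acc_g_cc * 1000.0   # -> 0.1 kg/m3

fiscal_acc_kg_m3 = 0.0001                       # fiscal density meter
ratio = mass_meter_acc_kg_m3 / fiscal_acc_kg_m3
print(f"mass flow meter: +/-{mass_meter_acc_kg_m3} kg/m3, "
      f"about {ratio:.0f}x the fiscal meter's error band")
```

Same numerals, three orders of magnitude apart in actual performance.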

For any meter it is important to understand what happens to that accuracy as the conditions change.
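
One common way this appears on a datasheet is an extra temperature-effect term on top of the base accuracy. The specific numbers and the form of the spec below are assumptions for illustration only:

```python
# Hypothetical spec: base accuracy of 0.1% FS quoted at 20 degC,
# plus an additional +/-0.02% FS per degC of deviation from 20 degC.
full_scale = 100.0
base_error = 0.001 * full_scale       # +/-0.1 units at reference conditions
temp_coeff = 0.0002 * full_scale      # +/-0.02 units per degC deviation
reference_temp = 20.0

for ambient in (20.0, 35.0, 50.0):
    total = base_error + temp_coeff * abs(ambient - reference_temp)
    print(f"{ambient:4.0f} degC -> +/-{total:.2f} units")
```

At 50 degC the installed error band here is seven times the headline lab figure, which is why the "at 20 degC and lab conditions" footnote matters.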

Accuracy, repeatability, linearity, and conventions etc are fundamentals we need to understand when evaluating any instrument.

The biggest problem any of us have is a lack of familiarity with a specific instrument and we then have to depend on standards and conventions to help us. We need to know what questions to ask. For the novel instrument, we need to ask and ask again.

Simply because they want to sell you an instrument that has a lower accuracy. It all depends upon how critical your measurements are and how accurate you want to be. If you want the best available at any given time, you have to pay for it: higher quality, higher price.

Check out the ISA terminology books to understand, first, upper and lower range values, instrument range, span, calibrated span, etc. If the calibrated span is only a small fraction of the total span and the instrument specifies an accuracy based upon the calibrated span, you will definitely get a superior instrument.
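
To illustrate why the basis of the span matters, a hedged sketch with invented numbers (a transmitter with a 0-1000 unit total span turned down to a 0-100 calibrated span):

```python
# The same "0.1%" claim yields very different absolute error bands
# depending on whether it is referenced to the total span or to the
# calibrated span the instrument is actually ranged for.
total_span = 1000.0       # instrument's full range, engineering units
calibrated_span = 100.0   # transmitter ranged 0-100 for this service
accuracy_pct = 0.1 / 100.0

error_vs_total = accuracy_pct * total_span         # +/-1.0 units
error_vs_calibrated = accuracy_pct * calibrated_span  # +/-0.1 units
print(f"0.1% of total span:      +/-{error_vs_total} units")
print(f"0.1% of calibrated span: +/-{error_vs_calibrated} units")
```

Ten times better performance from the same printed percentage, which is the point being made above.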