Volume: Advanced Theory
Chapter: Creative Commons Attribution License

Legal Code

Creative Commons Attribution 4.0 International Public License

By exercising the Licensed Rights (defined below), You accept and agree to be bound by the terms and conditions of this Creative Commons Attribution 4.0 International Public License ("Public License"). To the extent this Public License may be interpreted as a contract, You are granted the Licensed Rights in consideration of Your acceptance of these terms and conditions, and the Licensor grants You such rights in consideration of benefits the Licensor receives from making the Licensed Material available under these terms and conditions.

Section 1 – Definitions.

a. Adapted Material means material subject to Copyright and Similar Rights that is derived from or based upon the Licensed Material and in which the Licensed Material is translated, altered, arranged, transformed, or otherwise modified in a manner requiring permission under the Copyright and Similar Rights held by the Licensor. For purposes of this Public License, where the Licensed Material is a musical work, performance, or sound recording, Adapted Material is always produced where the Licensed Material is synched in timed relation with a moving image.

b. Adapter’s License means the license You apply to Your Copyright and Similar Rights in Your contributions to Adapted Material in accordance with the terms and conditions of this Public License.

c. Copyright and Similar Rights means copyright and/or similar rights closely related to copyright including, without limitation, performance, broadcast, sound recording, and Sui Generis Database Rights, without regard to how the rights are labeled or categorized. For purposes of this Public License, the rights specified in Section 2(b)(1)-(2) are not Copyright and Similar Rights.

d. Effective Technological Measures means those measures that, in the absence of proper authority, may not be circumvented under laws fulfilling obligations under Article 11 of the WIPO Copyright Treaty adopted on December 20, 1996, and/or similar international agreements.

e. Exceptions and Limitations means fair use, fair dealing, and/or any other exception or limitation to Copyright and Similar Rights that applies to Your use of the Licensed Material.

f. Licensed Material means the artistic or literary work, database, or other material to which the Licensor applied this Public License.

g. Licensed Rights means the rights granted to You subject to the terms and conditions of this Public License, which are limited to all Copyright and Similar Rights that apply to Your use of the Licensed Material and that the Licensor has authority to license.

h. Licensor means the individual(s) or entity(ies) granting rights under this Public License.

i. Share means to provide material to the public by any means or process that requires permission under the Licensed Rights, such as reproduction, public display, public performance, distribution, dissemination, communication, or importation, and to make material available to the public including in ways that members of the public may access the material from a place and at a time individually chosen by them.

j. Sui Generis Database Rights means rights other than copyright resulting from Directive 96/9/EC of the European Parliament and of the Council of 11 March 1996 on the legal protection of databases, as amended and/or succeeded, as well as other essentially equivalent rights anywhere in the world.

k. You means the individual or entity exercising the Licensed Rights under this Public License. Your has a corresponding meaning.

Section 2 – Scope.

a. License grant.

1. Subject to the terms and conditions of this Public License, the Licensor hereby grants You a worldwide, royalty-free, non-sublicensable, non-exclusive, irrevocable license to exercise the Licensed Rights in the Licensed Material to:

A. reproduce and Share the Licensed Material, in whole or in part; and

B. produce, reproduce, and Share Adapted Material.

2. Exceptions and Limitations. For the avoidance of doubt, where Exceptions and Limitations apply to Your use, this Public License does not apply, and You do not need to comply with its terms and conditions.

3. Term. The term of this Public License is specified in Section 6(a).

4. Media and formats; technical modifications allowed. The Licensor authorizes You to exercise the Licensed Rights in all media and formats whether now known or hereafter created, and to make technical modifications necessary to do so. The Licensor waives and/or agrees not to assert any right or authority to forbid You from making technical modifications necessary to exercise the Licensed Rights, including technical modifications necessary to circumvent Effective Technological Measures. For purposes of this Public License, simply making modifications authorized by this Section 2(a)(4) never produces Adapted Material.

5. Downstream recipients.

A. Offer from the Licensor – Licensed Material. Every recipient of the Licensed Material automatically receives an offer from the Licensor to exercise the Licensed Rights under the terms and conditions of this Public License.

B. No downstream restrictions. You may not offer or impose any additional or different terms or conditions on, or apply any Effective Technological Measures to, the Licensed Material if doing so restricts exercise of the Licensed Rights by any recipient of the Licensed Material.

6. No endorsement. Nothing in this Public License constitutes or may be construed as permission to assert or imply that You are, or that Your use of the Licensed Material is, connected with, or sponsored, endorsed, or granted official status by, the Licensor or others designated to receive attribution as provided in Section 3(a)(1)(A)(i).

b. Other rights.

1. Moral rights, such as the right of integrity, are not licensed under this Public License, nor are publicity, privacy, and/or other similar personality rights; however, to the extent possible, the Licensor waives and/or agrees not to assert any such rights held by the Licensor to the limited extent necessary to allow You to exercise the Licensed Rights, but not otherwise.

2. Patent and trademark rights are not licensed under this Public License.

3. To the extent possible, the Licensor waives any right to collect royalties from You for the exercise of the Licensed Rights, whether directly or through a collecting society under any voluntary or waivable statutory or compulsory licensing scheme. In all other cases the Licensor expressly reserves any right to collect such royalties.

Section 3 – License Conditions.

Your exercise of the Licensed Rights is expressly made subject to the following conditions.

a. Attribution.

1. If You Share the Licensed Material (including in modified form), You must:

A. retain the following if it is supplied by the Licensor with the Licensed Material:

i. identification of the creator(s) of the Licensed Material and any others designated to receive attribution, in any reasonable manner requested by the Licensor (including by pseudonym if designated);

ii. a copyright notice;

iii. a notice that refers to this Public License;

iv. a notice that refers to the disclaimer of warranties;

v. a URI or hyperlink to the Licensed Material to the extent reasonably practicable;

B. indicate if You modified the Licensed Material and retain an indication of any previous modifications; and

C. indicate the Licensed Material is licensed under this Public License, and include the text of, or the URI or hyperlink to, this Public License.

2. You may satisfy the conditions in Section 3(a)(1) in any reasonable manner based on the medium, means, and context in which You Share the Licensed Material. For example, it may be reasonable to satisfy the conditions by providing a URI or hyperlink to a resource that includes the required information.

3. If requested by the Licensor, You must remove any of the information required by Section 3(a)(1)(A) to the extent reasonably practicable.

4. If You Share Adapted Material You produce, the Adapter’s License You apply must not prevent recipients of the Adapted Material from complying with this Public License.

Section 4 – Sui Generis Database Rights.

Where the Licensed Rights include Sui Generis Database Rights that apply to Your use of the Licensed Material:

a. for the avoidance of doubt, Section 2(a)(1) grants You the right to extract, reuse, reproduce, and Share all or a substantial portion of the contents of the database;

b. if You include all or a substantial portion of the database contents in a database in which You have Sui Generis Database Rights, then the database in which You have Sui Generis Database Rights (but not its individual contents) is Adapted Material; and

c. You must comply with the conditions in Section 3(a) if You Share all or a substantial portion of the contents of the database.

For the avoidance of doubt, this Section 4 supplements and does not replace Your obligations under this Public License where the Licensed Rights include other Copyright and Similar Rights.

Section 5 – Disclaimer of Warranties and Limitation of Liability.

a. Unless otherwise separately undertaken by the Licensor, to the extent possible, the Licensor offers the Licensed Material as-is and as-available, and makes no representations or warranties of any kind concerning the Licensed Material, whether express, implied, statutory, or other. This includes, without limitation, warranties of title, merchantability, fitness for a particular purpose, non-infringement, absence of latent or other defects, accuracy, or the presence or absence of errors, whether or not known or discoverable. Where disclaimers of warranties are not allowed in full or in part, this disclaimer may not apply to You.

b. To the extent possible, in no event will the Licensor be liable to You on any legal theory (including, without limitation, negligence) or otherwise for any direct, special, indirect, incidental, consequential, punitive, exemplary, or other losses, costs, expenses, or damages arising out of this Public License or use of the Licensed Material, even if the Licensor has been advised of the possibility of such losses, costs, expenses, or damages. Where a limitation of liability is not allowed in full or in part, this limitation may not apply to You.

c. The disclaimer of warranties and limitation of liability provided above shall be interpreted in a manner that, to the extent possible, most closely approximates an absolute disclaimer and waiver of all liability.

Section 6 – Term and Termination.

a. This Public License applies for the term of the Copyright and Similar Rights licensed here. However, if You fail to comply with this Public License, then Your rights under this Public License terminate automatically.

b. Where Your right to use the Licensed Material has terminated under Section 6(a), it reinstates:

1. automatically as of the date the violation is cured, provided it is cured within 30 days of Your discovery of the violation; or

2. upon express reinstatement by the Licensor.

For the avoidance of doubt, this Section 6(b) does not affect any right the Licensor may have to seek remedies for Your violations of this Public License.

c. For the avoidance of doubt, the Licensor may also offer the Licensed Material under separate terms or conditions or stop distributing the Licensed Material at any time; however, doing so will not terminate this Public License.

d. Sections 1, 5, 6, 7, and 8 survive termination of this Public License.

Section 7 – Other Terms and Conditions.

a. The Licensor shall not be bound by any additional or different terms or conditions communicated by You unless expressly agreed.

b. Any arrangements, understandings, or agreements regarding the Licensed Material not stated herein are separate from and independent of the terms and conditions of this Public License.

Section 8 – Interpretation.

a. For the avoidance of doubt, this Public License does not, and shall not be interpreted to, reduce, limit, restrict, or impose conditions on any use of the Licensed Material that could lawfully be made without permission under this Public License.

b. To the extent possible, if any provision of this Public License is deemed unenforceable, it shall be automatically reformed to the minimum extent necessary to make it enforceable. If the provision cannot be reformed, it shall be severed from this Public License without affecting the enforceability of the remaining terms and conditions.

c. No term or condition of this Public License will be waived and no failure to comply consented to unless expressly agreed to by the Licensor.

d. Nothing in this Public License constitutes or may be interpreted as a limitation upon, or waiver of, any privileges and immunities that apply to the Licensor or You, including from the legal processes of any jurisdiction or authority.

Creative Commons is not a party to its public licenses. Notwithstanding, Creative Commons may elect to apply one of its public licenses to material it publishes and in those instances will be considered the “Licensor.” Except for the limited purpose of indicating that material is shared under a Creative Commons public license or as otherwise permitted by the Creative Commons policies published at creativecommons.org/policies, Creative Commons does not authorize the use of the trademark “Creative Commons” or any other trademark or logo of Creative Commons without its prior written consent including, without limitation, in connection with any unauthorized modifications to any of its public licenses or any other arrangements, understandings, or agreements concerning use of licensed material. For the avoidance of doubt, this paragraph does not form part of the public licenses.

Creative Commons may be contacted at creativecommons.org.


  1. Version numbers ending in odd digits are developmental (e.g. 0.7, 1.23, 4.5), with only the latest revision made accessible to the public. Version numbers ending in even digits (e.g. 0.6, 1.0, 2.14) are considered “public-release” and will be archived. Version numbers beginning with zero (e.g. 0.1, 0.2, etc.) represent early editions that were substantially incomplete.↩︎

  2. This includes selling copies of it, either electronic or print. Of course, you must include the Creative Commons license as part of the text you sell, which means anyone will be able to tell it is an open text and can probably figure out how to download an electronic copy off the Internet for free. The only way you’re going to make significant money selling this text is to add your own value to it, either in the form of expansions or bundled product (e.g. simulation software, learning exercises, etc.), which of course is perfectly fair – you must profit from your own labors. All my work does for you is give you a starting point.↩︎

  3. In mathematics, the term rigor refers to a meticulous attention to detail and insistence that each and every step within a chain of mathematical reasoning be thoroughly justified by deductive logic, not intuition or analogy.↩︎

  4. The book’s subtitle happens to be, Being a very-simplest introduction to those beautiful methods of reckoning which are generally called by the terrifying names of the differential calculus and the integral calculus. Not only did Thompson recognize the anti-pragmatic tone with which calculus is too often taught, but he also infused no small amount of humor in his work.↩︎

  5. Isaac Newton referred to derivatives as fluxions, and in Silvanus Thompson’s day they were known as differential coefficients.↩︎

  6. British units of measurement for velocity indicate this same process of division: the number of feet traveled divided by the time period in seconds yields a velocity in feet per second. There is nothing unique about metric units in this regard.↩︎

  7. Most likely a thermal mass flowmeter or a Coriolis flowmeter.↩︎

  8. Although we will measure time, and differentials of time, as positive quantities, the mass flowmeter should be configured to show a negative flow rate (\(W\)) when propane flows from the tank to the building. This way, the integrand (the product “inside” the integration symbol; \(W \> dt\)) will be a negative quantity, and thus the integral over a positive time interval (from 0 to \(x\)) will likewise be a negative quantity.↩︎
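
As a concrete illustration of this sign convention (flow rate value invented purely for example): if the flowmeter registered a constant \(W = -5\) pounds per minute while propane flowed to the building, the tank’s stored mass would be \(m_x = \int_0^x W \> dt + m_0 = -5x + m_0\), a quantity steadily decreasing from its initial value \(m_0\), as expected.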

  9. According to calculus convention, the differential \(dt\) represents the end of the integrand. It is safe to regard the long “S” symbol and the differential (\(dx\), \(dt\), etc.) as complementary grouping symbols declaring the beginning and end of the integrand. This tells us \(m_0\) is not part of the integrand, but rather comes after it. Using parentheses to explicitly declare the boundaries of the integrand, we may re-write the expression as \(m_x = (\int_0^x W \> dt) + m_0\)↩︎

  10. Recall from the previous section (“The Concept of Differentiation”) that velocity could be defined as the time-derivative of position: \(v = {dx \over dt}\). All we have done here is algebraically solve for changes in \(x\) by first multiplying both sides of the equation by \(dt\) to arrive at \(dx = v \> dt\). Next, we integrate both sides of the equation in order to “un-do” the differential (\(d\)) applied to \(x\): \(\int dx = \int v \> dt\). Since the accumulation (\(\int\)) of any differential (\(dx\)) yields a discrete change in that variable, we may substitute \(\Delta x\) for \(\int dx\) and get our final answer of \(\Delta x = \int v \> dt\).↩︎
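
A quick numerical check of this result, sketched in Python (the velocity profile and step size are arbitrary assumptions, not values from the text):

```python
# Approximate Delta_x = integral of v dt by summing the products v*dt.
import numpy as np

dt = 0.01                      # time step, seconds (arbitrary)
t = np.arange(0.0, 10.0, dt)   # ten seconds of samples
v = 3.0 + 0.5 * t              # hypothetical velocity profile, feet per second

delta_x = np.sum(v * dt)       # accumulate the differentials v*dt
print(delta_x)                 # ~55 feet; the exact integral 3t + 0.25t^2 equals 55 at t = 10
```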

  11. To be perfectly accurate, we must also include initial values for position and velocity. In other words, \(x = \int v \> dt + x_0\) and \(v = \int a \> dt + v_0\)↩︎

  12. For instance, at \(x=1\), the original function tells us that \(y\) will be equal to \(- {6 \over 7}\). If we plug this same value of 1 into \(x\) of the derivative function, the result \({dy \over dx} = -{40 \over 49}\) tells us the original function \(y = f(x)\) has a slope of \(-{40 \over 49}\) when \(x=1\).↩︎

  13. Unlike the recording shown from Cassier’s Magazine, which runs chronologically from right to left, modern chart recordings all run from left to right.↩︎

  14. Not only does a 5-minute rate calculation period miss a lot of detail, but it also results in a time delay of (up to) 5 minutes detecting a pipeline rupture.↩︎

  15. The technical term for a line passing through a pair of points on a curve is secant line.↩︎

  16. Please note that the pipeline pressure is not actually 340.0 PSI at a time of 1:37:30. This is simply a convenient coordinate to mark because of how it lines up with the divisions on the trend display. We choose coordinate points on the tangent line that are easy to discern visually, then calculate the tangent line’s slope using those coordinates.↩︎

  17. “Pseudocode” is a name given to any imaginary computer language used for the purpose of illustrating some procedure or concept without having to make reference to any particular (real) computer programming language. I could have just as well shown you the same algorithm using BASIC, C, or Java code, but pseudocode does just as well without the burden of introducing unfamiliar syntax to the reader.↩︎

  18. Another source of trouble for differentiation of live signals is when the signal originates from a digital sensor. Digital devices, by their very nature, break analog signals into a series of discrete amplitude steps. As a digital process transmitter encounters a steadily increasing or decreasing process variable, its output rises or falls in discrete “jumps” rather than continuously as a fully analog transmitter would. Now, each of these jumps is quite small, but since each one occurs almost instantly it still translates into an extremely large rate-of-change when detected by a differentiator sampling over small time increments or sampling continuously (as in the case of an analog differentiator circuit). This means the problem of false rates-of-change exists even in perfectly noiseless systems, when the detection device (and/or the information channel to the monitoring system) is digital rather than analog.↩︎
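
A minimal sketch of this effect (signal, quantization step, and sample rate all invented for illustration): even a perfectly noiseless ramp, once quantized, differentiates into large spikes.

```python
# Differentiate a smooth ramp versus a digitally quantized copy of it.
import numpy as np

dt = 0.001                            # sample interval, seconds
t = np.arange(0.0, 1.0, dt)
analog = 10.0 * t                     # smooth signal rising 10 units per second
digital = np.round(analog, 1)         # quantized into discrete 0.1-unit steps

rate_analog = np.diff(analog) / dt    # a steady 10.0 everywhere
rate_digital = np.diff(digital) / dt  # mostly 0.0, spiking to 100.0 at each step
print(rate_analog.max(), rate_digital.max())
```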

  19. Once again, we are looking for points where the tangent line happens to intersect with major divisions on the graph’s scale. This makes it relatively easy to calculate the line’s slope, since the pressure and distance values for those coordinates are easy to read.↩︎

  20. The Foxboro model 14 totalizer’s design was quite ingenious, since centrifugal force varies with the square of angular velocity. This had the effect of naturally performing the square-root characterization required of most pneumatic flow-measuring instruments due to the quadratic nature of most primary flow-sensing elements (e.g. orifice plates, venturi tubes, pitot tubes, etc.).↩︎

  21. Vehicles equipped with a trip odometer allow the driver to reset this integration constant to zero at will, thus allowing the tracking of mileage for individual trips instead of over the life of the automobile.↩︎

  22. As we lower the mass to ground level, height (\(x\)) goes from being a positive value to zero. This means each differential (infinitesimal change in value) for \(x\) will be negative, thus causing the integrand \(F \> dx\) to have a negative value and thus causing the integrated total (work) to be negative as well.↩︎

  23. While a longbow is really nothing more than a long and flexible stick with a straight string drawn across it, a compound bow is a sophisticated machine with multiple passes of string and cam-shaped pulleys providing the nonlinear force-draw relationship.↩︎

  24. One simple way to do this is to cover the entire integration area using nothing but rectangles and triangles, then measuring all the sketched shapes to totalize their areas.↩︎
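
In fact, covering one slice of the area with a rectangle topped by a triangle reproduces the familiar trapezoidal rule: a slice of width \(\Delta x\) spanning curve heights \(y_1\) and \(y_2\) has area \(y_1 \Delta x + {1 \over 2} (y_2 - y_1) \Delta x = {y_1 + y_2 \over 2} \Delta x\).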

  25. An interesting point to make here is the United States did get something right when they designed their monetary system of dollars and cents. This is essentially a metric system of measurement, with 100 cents per dollar. The founders of the USA wisely decided to avoid the utterly confusing denominations of the British, with their pounds, pence, farthings, shillings, etc. The denominations of penny, dime, dollar, and eagle ($10 gold coin) comprised a simple power-of-ten system for money. Credit goes to France for first adopting a metric system of general weights and measures as their national standard.↩︎

  26. A basic mathematical identity is that multiplication of any quantity by 1 does not change the value of that original quantity. If we multiply some quantity by a fraction having a physical value of 1, no matter how strange-looking that fraction may appear, the value of the original quantity will be left intact. The goal here is to judiciously choose a fraction with a physical value of 1 but with its units of measurement so arranged that we cancel out the original quantity’s unit(s) and replace them with the units we desire.↩︎
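
For example, to convert 10 feet into inches, we multiply by a fraction physically equal to 1: 10 ft \(\times\) (12 in / 1 ft) = 120 in. The fraction 12 in / 1 ft has a physical value of 1 (twelve inches is one foot), so the original quantity is unchanged while the unwanted unit of feet cancels, leaving inches.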

  27. Density figures taken or derived from tables in the CRC Handbook of Chemistry and Physics, 64th Edition. Most liquid densities taken from table on page F-3 and solid densities taken from table on page F-1. Some liquid densities taken from tables on pages E-27 through E-31. All temperatures at or near 20 \(^{o}\)C.↩︎

  28. The only exception to this rule being units of measurement for angles, over which there has not yet been full agreement whether the unit of the radian (and its solid counterpart, the steradian) is a base unit or a derived unit.↩︎

  29. The older name for the SI system was “MKS,” representing meters, kilograms, and seconds.↩︎

  30. I’m noting my sarcasm here, just in case you are immune to my odd sense of humor.↩︎

  31. Relativistic physics deals with phenomena arising as objects travel near the speed of light. Quantum physics deals with phenomena at the atomic level. Neither is germane to the vast majority of industrial instrument applications.↩︎

  32. A common definition of energy is the “ability to do work,” which is not always true. There are some forms of energy which may not be harnessed to do work, such as the thermal motion of molecules in an environment where all objects are at the same temperature. Energy that has the ability to do work is more specifically referred to as exergy. While energy is always conserved (i.e. never lost, never gained), exergy is a quantity that can never be gained but can be lost. The inevitable loss of exergy is closely related to the concept of entropy, where energy naturally diffuses into less useful (more homogeneous) forms over time. This important concept explains why no machine can ever be perfectly (\(100.\overline{0}\)%) efficient, among other things.↩︎

  33. A vector is a mathematical quantity possessing both a magnitude and a direction. Force (\(F\)), displacement (\(x\)), and velocity (\(v\)) are all vector quantities. Some physical quantities such as temperature (\(T\)), work (\(W\)), and energy (\(E\)) possess magnitude but no direction. We call these directionless quantities “scalar.” It would make no sense at all to speak of a temperature being “79 degrees Celsius due North” whereas it would make sense to speak of a force being “79 Newtons due North”. Physicists commonly use a little arrow symbol over the variable letter to represent that variable as a vector, when both magnitude and direction matter. Thus \(\vec{F}\) represents a force vector with both magnitude and direction specified, while plain \(F\) merely represents the magnitude of that force without a specified direction. A “dot-product” is one way in which vectors may be multiplied, and the result of a dot-product is always a scalar quantity.↩︎
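
For example, mechanical work is the dot-product of the force and displacement vectors: \(W = \vec{F} \cdot \vec{x} = |\vec{F}| |\vec{x}| \cos \theta\). A force of 79 Newtons due North acting through a displacement of 10 meters due North (\(\theta = 0\)) performs \((79)(10)(\cos 0) = 790\) joules of work – and that result, like all dot-products, is a directionless (scalar) quantity.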

  34. Note that this calculation will assume all the work of towing this load is being performed by a single wheel on the truck. Most likely this will not be the case, as most towing vehicles have multiple driving wheels (at least two). However, we will perform calculations for a single wheel in order to simplify the problem.↩︎

  35. Consider the example of applying torque to a stubbornly seized bolt using a wrench: the force applied to the wrench multiplied by the radius length from the bolt’s center to the perpendicular line of force yields torque, but absolutely no work is done on the bolt until the bolt begins to move (turn).↩︎

  36. In practice, we usually see heavy objects fall faster than light objects due to the resistance of air. Energy losses due to air friction nullify our assumption of constant total energy during free-fall. Energy lost due to air friction never translates to velocity, and so the heavier object ends up hitting the ground faster (and sooner) because it had much more energy than the light object did to start.↩︎

  37. Hooke’s Law may be written as \(F = kx\) without the negative sign, in which case the force (\(F\)) is the force applied on the spring from an external source. Here, the negative sign represents the spring’s reaction force to being displaced (the restoring force). A spring’s reaction force always opposes the direction of displacement: compress a spring, and it pushes back on you; stretch a spring, and it pulls back. A negative sign is the mathematically symbolic way of expressing the opposing direction of a vector.↩︎

  38. Technically, it is a pseudovector, because it does not exhibit all the same properties of a true vector, but this is a mathematical abstraction far beyond the scope of this book!↩︎

  39. A “flywheel” is a disk on a shaft, designed to maintain rotary motion in the absence of a motivating torque, for machines such as piston engines. The rotational kinetic energy stored by an engine’s flywheel gives the pistons the energy needed to compress the gas prior to the power stroke, during the times the other pistons are not producing power.↩︎

  40. Technically, mechanical advantage should be defined by the ratio of input motion to output motion, rather than being defined in terms of force. The reason for this is if friction happens to exist in the machine, it will cause a degradation of force but not of motion. Since “mechanical advantage” is supposed to represent the ideal ratio of the machine, it is always safest to define it in terms of motion where friction will not affect the calculation. For a frictionless machine, however, defining mechanical advantage in terms of force is perfectly legitimate, and in fact makes more intuitive sense, since a larger mechanical advantage always corresponds with force multiplication from input to output.↩︎
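
To make this distinction concrete (an invented illustration): a frictionless lever whose input end moves 8 inches for every 2 inches of output motion has a mechanical advantage of \(8 \div 2 = 4\), and being frictionless it will also multiply force by that same factor of 4 from input to output.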

  41. “Torque” is to rotational motion as “force” is to linear motion. Mathematically, torque (\(\tau\)) is defined as the cross-product of force acting on a radius (\(\vec{\tau} = \vec{r} \times \vec{F}\)).↩︎

  42. I am indebted to NASA for this and the rest of the black-and-white gear illustrations found in this section. All these illustrations were taken from NASA technical reports on gearing.↩︎

  43. Here, each gear is shown simply as a toothless wheel for the sake of simplicity. Truth be told, your humble author has difficulty drawing realistic gear teeth!↩︎

  44. An interesting feature of many flat-belt sheaves is a slight “crown” shape to the sheave, such that the diameter is slightly larger at the sheave’s center than it is at either side edge. The purpose of this crown is to help the belt center itself while in operation. As it turns out, a flat belt naturally tends to find the point at which it operates under maximum tension. If the belt happens to wander off-center, it will naturally find its way back to the center of the sheave as it rotates because that is where the tension reaches a maximum.↩︎

  45. In practice, not all of these 24 “speeds” are recommended, because some of the front/rear sprocket selections would place the chain at an extreme angle as it engaged with both sprockets. In the interest of extending chain life, it should run as “straight” on each sprocket as possible.↩︎

  46. Helium at room temperature is a close approximation of an ideal, monatomic gas, and is often used as an example for illustrating the relationship between temperature and molecular velocity.↩︎

  47. Kelvin is typically expressed without the customary “degree” label, unlike the three other temperature units: (degrees) Celsius, (degrees) Fahrenheit, and (degrees) Rankine.↩︎

  48. Animals process food by performing a very slow version of combustion, whereby the carbon and hydrogen atoms in the food join with oxygen atoms inhaled to produce water and carbon dioxide gas (plus energy). Although it may seem strange to rate the energy content of food by measuring how much heat it gives off when burnt, burning is just a faster method of energy extraction than the relatively slow processes of biological metabolism.↩︎

  49. Heat may be forced to flow from cold to hot by the use of a machine called a heat pump, but this direction of heat flow does not happen naturally, which is what the word “spontaneous” implies. In truth, the rule of heat flowing from high-temperature to cold-temperature applies to heat pumps as well, just in a way that is not obvious from first inspection. Mechanical heat pumps cause heat to be drawn from a cool object by placing an even cooler object (the evaporator) in direct contact with it. That heat is then transferred to a hot object by placing an even hotter object (the condenser) in direct contact with it. Heat is moved against the natural (spontaneous) direction of flow from the evaporator to the condenser by means of mechanical compression and expansion of a refrigerant fluid.↩︎

  50. In this context, we are using the word “radiation” in a very general sense, to mean thermal energy radiated away from the hot source via photons. This is quite different from nuclear radiation, which is what some may assume this term means upon first glance.↩︎

  51. Or in degrees Rankine, provided a suitably units-corrected value for the Stefan-Boltzmann constant were used.↩︎

  52. Jim Cahill of Emerson wrote in April 2010 (“Reducing Distillation Column Energy Usage” Emerson Process Expert weblog) about a report estimating distillation column energy usage to account for approximately 6% of the total energy used in the United States. This same report tallied the number of columns in US industry to be approximately 40,000 total, accounting for about 19% of all energy used in manufacturing processes!↩︎

  53. An important detail to note is that specific heat does not remain constant over wide temperature changes. This complicates calculations of heat required to change the temperature of a sample: instead of simply multiplying the temperature change by mass and specific heat (\(Q = mc \Delta T\) or \(Q = mc [T_2 - T_1]\)), we must integrate specific heat over the range of temperature (\(Q = m \int_{T_1}^{T_2} c \> dT\)), summing up infinitesimal products of specific heat and temperature change (\(c \> dT\)) over the range starting from temperature \(T_1\) through temperature \(T_2\) then multiplying by the mass to calculate total heat required. So, the specific heat values given for substances at 25 \(^{o}\)C only hold true for relatively small temperature changes deviating from 25 \(^{o}\)C. To accurately calculate heat transfer over a large temperature change, one must incorporate values of \(c\) for that substance at different temperatures along the expected range.↩︎
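
A minimal numerical sketch of this integration (the \(c(T)\) function here is hypothetical, not taken from any published table):

```python
# Compute Q = m * (integral of c dT) with a temperature-dependent specific heat.
from scipy.integrate import quad

def c(T):
    """Hypothetical specific heat [BTU/(lb degF)] drifting with temperature."""
    return 1.0 + 0.0002 * (T - 77.0)

m = 10.0                            # mass, lb
integral, _ = quad(c, 77.0, 212.0)  # sum the products c dT from T1 to T2
Q = m * integral
print(Q)                            # ~1368 BTU, versus 1350 BTU if c were constant at 1.0
```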

  54. In reality, the amount of heat actually absorbed by the pot will be less than this, because there will be heat losses from the warm pot to the surrounding (cooler) air. However, for the sake of simplicity, we will assume all the burner’s heat output goes into the pot and the water it holds.↩︎

  55. We will assume for the sake of this example that the container holding the water is of negligible mass, such as a Styrofoam cup. This way, we do not have to include the container’s mass or its specific heat into the calculation.↩︎

  56. An alternative way to set up the problem would be to calculate \(\Delta T\) for each term as \(T_{final} - T_{start}\), making the iron’s heat loss a negative quantity and the water’s heat gain a positive quantity, in which case we would have to set up the equation as a zero-sum balance, with \(Q_{iron} + Q_{water} = 0\). I find this approach less intuitive than simply saying the iron’s heat loss will be equal to the water’s heat gain, and setting up the equation as two positive values equal to each other.↩︎
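
Following the first approach, setting the iron’s heat loss equal to the water’s heat gain as two positive quantities, \(m_{iron} c_{iron} (T_{iron} - T_f) = m_{water} c_{water} (T_f - T_{water})\), then solving for the final temperature yields \(T_f = {m_{iron} c_{iron} T_{iron} + m_{water} c_{water} T_{water} \over m_{iron} c_{iron} + m_{water} c_{water}}\).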

  57. This is not far from the hypotheses of eighteenth-century science, where heat was thought to be an invisible fluid called caloric.↩︎

  58. A useful analogy for enthalpy is the maximum available balance of a bank account. Suppose you have a bank account with a minimum balance requirement of $32 to maintain that account. Your maximum available balance at any time would be the total amount of money in that account minus $32, or to phrase this differently your maximum available balance is the most money you may spend from this account while still keeping that account open. Enthalpy is much the same: the amount of thermal energy a sample may “spend” (i.e. lose) before its temperature reaches 32 degrees Fahrenheit.↩︎

  59. Appealing to the maximum available balance analogy, if we compared the maximum available balance in your bank account before and after a transaction, we could determine how much money was deposited or withdrawn from your account simply by subtracting those two values.↩︎

  60. Following the formula \(Q = mc \Delta T\), we may calculate the heat as (1)(1)(\(170-125\)) = 45 BTU. This is obviously the same result we obtained by subtracting enthalpy values for water at 170 \(^{o}\)F and 125 \(^{o}\)F.↩︎

  61. The word “latent” refers to something with potential that is not yet realized. Here, heat exchange takes place without there being any realized change in temperature. By contrast, heat resulting in a temperature change (\(Q = mc \Delta T\)) is called sensible heat.↩︎

  62. Latent heat of vaporization also varies with pressure, as different amounts of heat are required to vaporize a liquid depending on the pressure that liquid is subject to. Generally, increased pressure (increased boiling temperature) results in less latent heat of vaporization.↩︎

  63. The reason specific heat values are identical between metric and British units, while latent heat values are not, is because latent heat does not involve temperature change, and therefore there is one less unit conversion taking place between metric and British when translating latent heats. Specific heat in both metric and British units is defined in such a way that the three different units for heat, mass, and temperature all cancel each other out. With latent heat, we are only dealing with mass and heat, and so we have a proportional conversion of \(5 \over 9\) or \(9 \over 5\) left over, just the same as if we were converting between degrees Celsius and Fahrenheit alone.↩︎
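
To see that leftover factor explicitly: 1 BTU/lb \(\times\) (252 cal / 1 BTU) \(\times\) (1 lb / 453.6 g) = \({5 \over 9}\) cal/g. Water’s latent heat of vaporization illustrates the point: approximately 970 BTU/lb \(\times {5 \over 9} \approx\) 539 cal/g.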

  64. Styrofoam and plastic cups work as well, but paper exhibits the furthest separation between the boiling point of water and the burning point of the cup material, and it is usually thin enough to ensure good heat transfer from the outside (impinging flame) to the inside (water).↩︎

  65. This is a lot of fun to do while camping!↩︎

  66. This may be done in a vacuum jar, or by traveling to a region of high altitude where the ambient air pressure is less than at sea level.↩︎

  67. The mechanism of this influence may be understood by considering what it means to boil a liquid into a vapor. Molecules in a liquid reside close enough to each other that they cohere, whereas molecules in a vapor or gas are relatively far apart and act as independent objects. The process of boiling requires that the cohesion between liquid molecules be broken, so the molecules may drift apart. Increased pressure encourages cohesion in liquid form by helping to hold the molecules together, while decreased pressure encourages the separation of molecules into a vapor/gas.↩︎

  68. As mentioned previously, a useful analogy for enthalpy is the maximum available balance for a bank account with a $32 minimum balance requirement: that is, how much money may be spent from that account without closing it out.↩︎

  69. At first it may seem as though the enthalpy of steam is so easy to calculate it almost renders steam tables useless. If the specific heats of water and steam were constant, and the latent heat of vaporization for water likewise constant, this would be the case. However, neither of these values (\(c\) nor \(L\)) is constant; both change with pressure and temperature. Thus, steam tables end up being quite valuable to engineers, allowing them to quickly reference the heat content of steam across a broad range of pressures and temperatures without having to account for changing \(c\) and \(L\) values (performing integral calculus in the form of \(Q = m \int_{T_1}^{T_2} c \> dT\) for specific heat) in their heat calculations.↩︎

  70. This is not unlike calculating the voltage dropped across an electrical load by measuring the voltage at each of the load’s two terminals with respect to ground, then subtracting those two measured voltage values. In this analogy, electrical “ground” is the equivalent of water at freezing temperature: a common reference point for energy level.↩︎

  71. Applying the maximum available balance analogy to this scenario, it would be as if your bank account began with a maximum available balance of $1287 and then finished with a maximum available balance of $138 after an expenditure: the amount of money you spent is the different between the initial and final maximum available balances ($1287 \(-\) $138 = $1149).↩︎

  72. When H\(_{2}\)O is at its triple point, vapor (steam), liquid (water), and solid (ice) of water will co-exist in the same space. One way to visualize the triple point is to consider it the pressure at which the boiling and freezing temperatures of a substance become the same.↩︎

  73. Anywhere between the triple-point temperature and the critical temperature, to be exact.↩︎

  74. The triple point for any substance is the pressure at which the boiling and freezing temperatures become one and the same.↩︎

  75. The non-freedom of both pressure and temperature for a pure substance at its triple point means we may exploit different substances’ triple points as calibration standards for both pressure and temperature. Using suitable laboratory equipment and samples of sufficient purity, anyone in the world may force a substance to its triple point and calibrate pressure and/or temperature instruments against that sample.↩︎

  76. To be more precise, a propane tank acts like a Class II filled-bulb thermometer, with liquid and vapor coexisting in equilibrium.↩︎

  77. Steam boilers exhibit this same explosive tendency. The expansion ratio of water to steam is on the order of a thousand to one (1000:1), making steam boiler ruptures very violent even at relatively low operating pressures.↩︎

  78. Class IIA systems do suffer from elevation error where the indicator may read a higher or lower temperature than it should due to hydrostatic pressure exerted by the column of liquid inside the tube connecting the indicator to the sensing bulb. Class IIB systems do not suffer from this problem, as the gas inside the tube exerts no pressure over an elevation.↩︎

  79. Circulation pumps and a multitude of accessory devices are omitted from this diagram for the sake of simplicity.↩︎

  80. This is another example of an important thermodynamic concept: the distinction between heat and temperature. While the temperature of the pressurizer heating elements exceeds that of the reactor core, the total heat output of course does not. Typical comparative values for pressurizer power versus reactor core power are 1800 kW versus 3800 MW, respectively: a ratio exceeding three orders of magnitude. The pressurizer heating elements don’t have to dissipate much power (compared to the reactor core) because the pressurizer is not being cooled by a forced convection of water like the reactor core is.↩︎

  81. In this application, the heaters are the final control element for the reactor pressure control system.↩︎

  82. Since the relationship between saturated steam pressure and temperature does not follow a simple mathematical formula, it is more practical to consult published tables of pressure/temperature data for steam. A great many engineering manuals contain steam tables, and in fact entire books exist devoted to nothing but steam tables.↩︎

  83. An experiment illustrative of this point is to maintain an ice-water mixture in an open container, then to insert a sealed balloon containing liquid water into this mixture. The water inside the balloon will eventually equalize in temperature with the surrounding ice-water mix, but it will not itself freeze. Once the balloon’s water reaches 0 degrees Celsius, it stops losing heat to the surrounding ice-water mix, and therefore cannot make the phase change to solid form.↩︎

  84. The concept of pressure is also applicable to solid materials: applying either a compressive or tensile force to a solid object of given cross-sectional area generates a pressure within that object, also referred to as stress.↩︎

  85. To give some perspective on this, 1 pascal of pressure is equal to (only) 0.000145 pounds per square inch!↩︎

  86. There is actually a speed of propagation to this increase in pressure, and it is the speed of sound within that particular fluid. This makes sense, since sound waves are nothing more than rapidly-changing regions of pressure within a material.↩︎

  87. Interestingly, the amount of pressure generated by the weight of a fluid depends only on the height of that fluid column, not its cross-sectional area. Suppose we had a column of water the same height (144 feet) but in a tube having an area twice as large: 2 square inches instead of 1 square inch. Twice the area means twice the volume of water held in the tube, and therefore twice the weight (124.8 lbs). However, since this greater weight is distributed over a proportionately greater area at the bottom of the tube, the pressure there remains the same as before: 124.8 pounds \(\div\) 2 square inches = 62.4 pounds per square inch.↩︎
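
Expressed as a formula, the pressure at the bottom of a liquid column is the product of the liquid’s weight density and the column’s height: \(P = \gamma h\). For the 144-foot water column, \(P\) = (62.4 lb/ft\(^{3}\))(144 ft) = 8985.6 lb/ft\(^{2}\), and dividing by 144 square inches per square foot gives the same 62.4 PSI no matter what the column’s cross-sectional area happens to be.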

  88. Suppose a 1 square inch piston were set on the top of this tall fluid column, and a downward force of 20 lbs were applied to it. This would apply an additional 20 PSI pressure to the fluid molecules at all points within the column. The pressure at the bottom would be 82.4 PSI, and the pressure at the middle would be 51.2 PSI.↩︎

  89. Usually, this standard temperature is 4 degrees Celsius, the point of maximum density for water. However, sometimes the specific gravity of a liquid will be expressed in relation to the density of water at some other temperature. In some cases specific gravity is expressed for a liquid at one temperature compared to water at another temperature, usually in the form of a superscript such as 20/4 (liquid at 20 degrees Celsius compared to water at 4 degrees Celsius).↩︎

  90. For each of these calculations, specific gravity is defined as the ratio of the liquid’s density at 60 degrees Fahrenheit to the density of pure water, also at 60 degrees Fahrenheit.↩︎

  91. A colleague of mine told me once of working in an industrial facility with a very old steam boiler, where boiler steam pressure was actually indicated by tall mercury manometers reaching from floor to ceiling. Operations personnel had to climb a ladder to accurately read pressure indicated by these manometers!↩︎

  92. To give some perspective on just how little the liquid level changes in the well, consider a well-type manometer with a 1/4 inch (inside) diameter viewing tube and a 4-inch diameter circular well. The ratio of diameters for these two liquid columns is 16:1, which means their ratio of areas is 256:1. Thus, for every inch of liquid motion in the viewing tube, the liquid inside the well moves only \(1 \over 256\) of an inch. Unless the viewing tube is quite tall, the amount of error incurred by interpreting the tube’s liquid height directly as pressure will be minimal – quite likely less than what the human eye is able to discern on a ruler scale anyway. If the utmost accuracy is desired in a well manometer, however, we may compensate for the trifling motion of liquid in the well by building a custom ruler for the vertical tube – one with a \(256 \over 257\) reduced scale (so that \(256 \over 257\) of an inch of liquid motion in the tube reads as exactly 1 inch of liquid column) in the case of the 1/4 inch tube and 4 inch well dimensions.↩︎
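
A short check of the arithmetic behind that custom ruler: if the liquid in the tube rises a height \(h\), conservation of volume requires the well’s level to fall by \(h \over 256\), so the true height differential between the two surfaces is \(h + {h \over 256} = {257 \over 256} h\). Reading the tube alone therefore under-reports pressure by the factor \(256 \over 257\), which is precisely the compression needed in the ruler’s scale.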

  93. With few exceptions!↩︎

  94. The origin of this unit for pressure is the atmospheric pressure at sea level: 1 atmosphere, or 14.7 PSIA. The word “bar” derives from the Greek word for weight (báros), the same root found in “barometric,” in reference to Earth’s ambient atmospheric pressure.↩︎

  95. At sea level, where the absolute pressure is 14.7 PSIA. Atmospheric pressure will be different at different elevations above (or below) sea level.↩︎

  96. It should be noted that many different values exist for \(R\), depending on the units of measurement. For liters of volume, atmospheres of pressure, moles of substance, and Kelvin for temperature, \(R = 0.0821\). If one prefers to work with different units of measurement for volume, pressure, molecular quantity, and/or temperature, different values of \(R\) are available.↩︎
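
For example, with these units the Ideal Gas Law predicts the familiar molar volume of a gas at standard conditions: \(V = {nRT \over P} = {(1)(0.0821)(273) \over 1} \approx 22.4\) liters for one mole of gas at 1 atmosphere and 273 Kelvin.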

  97. The conservation law necessitating equal current at all points in a series electric circuit is the Law of Charge Conservation, which states that electric charges cannot be created or destroyed.↩︎

  98. Although not grammatically correct, this is a common use of the word in discussions of fluid dynamics. By definition, something that is “incompressible” cannot be compressed, but that is not how we are using the term here. We commonly use the term “incompressible” to refer to either a moving liquid (in which case the actual compressibility of the liquid is inconsequential) or a gas/vapor that does not happen to undergo substantial compression or expansion as it flows through a pipe. In other words, an “incompressible” flow is a moving fluid whose \(\rho\) does not substantially change, whether by actual impossibility or by circumstance.↩︎

  99. According to Ven Te Chow in Open Channel Hydraulics, who quotes from Hunter Rouse and Simon Ince’s work History of Hydraulics, Bernoulli’s equation was first formulated by the great mathematician Leonhard Euler and made popular by Julius Weisbach, not by Daniel Bernoulli himself.↩︎

  100. Surely you’ve heard the expression, “Apples and Oranges don’t add up.” Well, pounds per square inch and pounds per square foot don’t add up either! A general mathematical rule in physics is that any quantities added to or subtracted from each other must bear the exact same units. This rule does not hold for multiplication or division, which is why we see units canceling in those operations. With addition and subtraction, no unit cancellation occurs.↩︎

  101. It is entirely possible to perform all our calculations using inches and/or minutes as the primary units instead of feet and seconds. The only caveat is that all units throughout all terms of Bernoulli’s equation must be consistent. This means we would also have to express mass density in units of slugs per cubic inch, the acceleration of gravity in inches per second squared (or inches per minute squared), and velocity in units of inches per second (or inches per minute). The only real benefit of doing this is that pressure would remain in the more customary units of pounds per square inch. My personal preference is to do all calculations using units of feet and seconds, then convert pressures in units of PSF to units of PSI at the very end.↩︎
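
A minimal sketch of this workflow (all process values invented for illustration): carry every term of Bernoulli’s equation in units of feet and seconds, then convert PSF to PSI only at the very end.

```python
# Solve Bernoulli's equation for downstream pressure, feet/seconds throughout.
rho = 1.951        # mass density of water, slugs per cubic foot
g = 32.2           # acceleration of gravity, feet per second squared

z1, v1 = 0.0, 4.0              # upstream elevation (ft) and velocity (ft/s)
P1 = 30.0 * 144.0              # upstream pressure: 30 PSI expressed as PSF
z2, v2 = 10.0, 8.0             # downstream elevation (ft) and velocity (ft/s)

# z1*rho*g + (v1^2)*rho/2 + P1 = z2*rho*g + (v2^2)*rho/2 + P2
P2 = P1 + rho * g * (z1 - z2) + 0.5 * rho * (v1**2 - v2**2)
print(P2 / 144.0)              # convert PSF back to PSI at the end: ~25.3 PSI
```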

  102. A simple rule of thumb for pressure loss due to elevation gain is 1 PSI for every 2 vertical feet of water (1 PSI for every 27.68 inches, to be more exact).↩︎

  103. Technically, an eductor uses a liquid such as water to generate the vacuum, while an ejector uses a gas or a vapor such as steam.↩︎

  104. A piezometer tube is nothing more than a manometer (minus the well or the other half of the U-tube).↩︎

  105. For a moving fluid, potential energy is the sum of fluid height and static pressure.↩︎

  106. The form of Bernoulli’s equation with each term expressed in units of distance (e.g. \(z\) = [feet] ; \(v^2 \over 2g\) = [feet] ; \(P \over \gamma\) = [feet]) was chosen so that the piezometers’ liquid heights would directly correspond.↩︎

  107. This generally means to seek the lowest gross potential energy, but there are important exceptions where chemical reactions actually proceed in the opposite direction (with atoms seeking higher energy states and absorbing energy from the surrounding environment to achieve those higher states). A more general and consistent understanding of matter and energy interactions involves a more complex concept called entropy, and a related concept known as Gibbs Free Energy.↩︎

  108. This statement is not perfectly honest. When atoms join to form molecules, the subsequent release of energy is translated into an incredibly small loss of mass for the molecule, as described by Albert Einstein’s famous mass-energy equation \(E = mc^2\). However, this mass discrepancy is so small (typically less than one part per billion of the original mass!), we may safely ignore it for the purposes of understanding chemical reactions in industrial processes. This is what the humorous quote at the start of this chapter meant when it said “ignore the nuclear physicists at this point”.↩︎

  109. In order for a wave of light to be influenced at all by an object, that object must be at least the size of the wave’s length. To use an analogy with water waves: a wave arriving at a beach is visibly disturbed by a large rock, yet passes a small buoy essentially undisturbed.↩︎

  110. One line represents a single bond, which is one electron shared per bound atom. Two parallel lines represent a double bond, where each carbon atom shares two of its valence electrons with the neighboring atom. Three parallel lines represent a triple bond, where each atom shares three of its outer electrons with the neighboring atom.↩︎

  111. Incidentally, nitrogen atoms preferentially form exactly three bonds, and oxygen atoms exactly two bonds. The reason for this pattern lies in the particular arrangements of electrons orbiting each of these atoms, and their respective energy levels. For more information on this, see section 3.4.↩︎

  112. The amount of energy required to rearrange particles in the nucleus for even just a single atom is tremendous, lying well outside the energy ranges of chemical reactions. Such energy levels are the exclusive domain of nuclear reactions and high-energy radiation (subatomic particles traveling at high velocity). The extremely large energy “investment” required to alter an atom’s nucleus is why atomic identities are so stable. This is precisely why alchemists of antiquity utterly failed to turn lead into gold: no materials, processes, or techniques they had at their disposal were capable of the targeted energy necessary to dislodge three protons from a nucleus of lead (\(_{82}\)Pb) so that it would turn into a nucleus of gold (\(_{79}\)Au). That, and the fact the alchemists had no clue about atomic structure to begin with, made their endeavor fruitless.↩︎

  113. It used to be believed that these elements were completely inert: incapable of forming molecular bonds with other atoms. However, this is not precisely true, as some compounds are now known to incorporate noble elements.↩︎

  114. All isotopes of astatine (At) are radioactive with very short half-lives, making this element difficult to isolate and study.↩︎

  115. These orbitals just happen to be the 1s, 2p, 3d, and 4f orbitals, as viewed from left to right. In each case, the nucleus lies at the geometric center of each shape. In a real atom, all orbitals share the same center, which means any atom having more than two electrons (that’s all elements except for hydrogen and helium!) will have multiple orbitals around one nucleus. This set of four orbital visualizations shows what some orbitals would look like if viewed in isolation.↩︎

  116. Please understand that like all analogies, this one merely illustrates a complex concept in terms that are easier to recognize. Analogies do not explain why things work, but merely liken an abstract phenomenon to something more accessible to common experience.↩︎

  117. Truth be told, higher-order shells exist even in simple atoms like hydrogen, but are simply not occupied by that atom’s electron(s) unless they are “excited” into a higher energy state by an external input of energy.↩︎

  118. The letters s, p, d, and f refer to the words sharp, principal, diffuse, and fundamental, used to describe the appearance of spectral lines in the early days of atomic spectroscopy research. Higher-order subshells are labeled alphabetically after f: g, h, and i.↩︎

  119. The two electrons of any orbital have opposite spin values.↩︎

  120. The atomic number is the quantity of protons found in an atom’s nucleus, and may only be a whole number. Since any electrically balanced atom will have the same number of electrons as protons, we may look at the atomic number of an element as being the number of electrons in each atom of that element.↩︎

  121. Building on the amphitheater analogy for one atom of the element aluminum, we could say that there are two electrons occupying the “s” seating row on the first level, plus two electrons occupying the “s” seating row on the second level, plus six electrons occupying the “p” seating row on the second level, plus two electrons occupying the “s” seating row on the third level, plus one electron occupying the “p” seating row on the third level.↩︎

  122. Recall the definition of a “period” in the Periodic Table being a horizontal row, with each vertical column being called a “group”.↩︎

  123. Building on the amphitheater analogy once again for one atom of the element aluminum, we could say that all seats within levels 1 and 2 are occupied (just like an atom of neon), plus two electrons occupying the “s” seating row on the third level, plus one electron occupying the “p” seating row on the third level.↩︎

  124. This is the reason silicon-based photovoltaic solar cells are so inefficient, converting only a fraction of the incident light into electricity. The energy levels required to create an electron-hole pair at the P-N junction correspond to a narrow portion of the natural light spectrum. This means most of the photons striking a solar cell do not transfer their energy into electrical power because their individual energy levels are insufficient to create an electron-hole pair in the cell’s P-N junction. For photovoltaic cells to improve in efficiency, some way must be found to harness a broader spectrum of photon frequencies (light colors) than silicon P-N junctions can do, at least on their own.↩︎

  125. Solids and liquids tend to emit a broad spectrum of wavelengths when heated, in stark contrast to the distinct “lines” of color emitted by isolated atoms.↩︎

  126. To create these spectra, I used a computer program called Spectrum Explorer, or SPEX.↩︎

  127. Including wavelengths of 397 nm, 389 nm, and 384 nm.↩︎

  128. The wavelength of this light happens to lie within the visible range, at approximately 606 nm. Note the shell levels involved with this particular electron transition: between 2p\(^{10}\) and 5d\(^{5}\). Krypton in its ground (un-excited) state has a valence electron configuration of 4p\(^{6}\), which tells us the electron’s transition occurs between an inner shell of the Krypton atom and an excited shell (higher than the ground-state outer shell of the atom). The wavelength of this photon (606 nm) resulting from a shell 5 to shell 2 transition also suggests different energy levels for those shells of a Krypton atom compared to shells 5 and 2 of a hydrogen atom. Recall that the Balmer line corresponding to a transition from \(n=5\) to \(n=2\) of a hydrogen atom had a wavelength of 434 nm, corresponding to a higher photon energy than 606 nm and therefore a larger energy jump between those corresponding shells.↩︎
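
  As a quick check on this energy comparison, here is a minimal Python sketch (my own illustration, using standard physical constants) computing photon energy from wavelength via \(E = hc / \lambda\):

      # Photon energy E = h*c / wavelength: a quick check that the 434 nm
      # hydrogen Balmer line is more energetic than krypton's 606 nm line.
      h = 6.626e-34   # Planck's constant, joule-seconds
      c = 2.998e8     # speed of light, meters per second

      def photon_energy_eV(wavelength_nm):
          """Return photon energy in electron-volts for a wavelength in nanometers."""
          E_joules = h * c / (wavelength_nm * 1e-9)
          return E_joules / 1.602e-19   # convert joules to electron-volts

      print(photon_energy_eV(606))   # krypton line: approximately 2.05 eV
      print(photon_energy_eV(434))   # hydrogen Balmer line: approximately 2.86 eV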

  129. In fact, it is often easier to obtain an absorption spectrum of a sample than to create an emission spectrum, due to the relative simplicity of the absorption spectrometer test fixture. We don’t have to energize a sample to incandescence to obtain an absorption spectrum – all we must do is pass white light through enough of it to absorb the characteristic colors.↩︎

  130. One student described this to me as a “shadow” image of the hydrogen gas. The missing colors in the absorption spectrum are the shadows of hydrogen gas molecules blocking certain frequencies of the incident light from reaching the viewer.↩︎

  131. Truth be told, a “mole” is a count of 602,200,000,000,000,000,000,000 of literally any discrete entities. Moles do not represent mass, or volume, or length, or area, but rather a quantity of individual units. There is nothing wrong with measuring the number of eggs in the world using the unit of the mole, or the number of grains of sand in moles, or the number of bits in a collection of digital data. Think of “mole” as nothing more than a really big dozen, or more precisely, a really big half-dozen!↩︎

  132. Another way to define one mole is that it is the number of individual nucleons (i.e. protons and/or neutrons) needed to constitute one gram of mass. Since protons and neutrons account for the vast majority of an atom’s mass, we may essentially ignore the mass of an atom’s electrons when tabulating its mass and pay attention only to the nucleus. This is why one mole of hydrogen atoms, each atom having just one lone proton in its nucleus, will have a combined mass of one gram. By extension, one mole of Carbon-12 atoms, each atom with 6 protons and 6 neutrons, will have a combined mass of twelve grams.↩︎
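
  A minimal sketch of this nucleon-counting approximation, assuming (as the footnote does) that electron mass is negligible:

      # Rough molar mass from nucleon count alone: one mole of nucleons
      # has a mass of approximately one gram.
      def approx_molar_mass_grams(protons, neutrons):
          return protons + neutrons

      print(approx_molar_mass_grams(1, 0))   # hydrogen-1: about 1 gram per mole
      print(approx_molar_mass_grams(6, 6))   # carbon-12: about 12 grams per mole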

  133. Take the combustion of hydrogen and oxygen to form water, for example. We know we will need two H\(_{2}\) molecules for every one O\(_{2}\) molecule to produce two H\(_{2}\)O molecules. However, four hydrogen molecules combined with two oxygen molecules will make four water molecules just as well! Similarly, six hydrogen molecules combined with three oxygen molecules also perfectly balance, making six water molecules. So long as we consider all three molecular quantities to be unknown, we will never be able to solve for just one correct answer, because there is no one correct set of absolute quantities, only one correct set of ratios or proportions.↩︎
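
  The point about ratios may be demonstrated with a short sketch (my own illustration) that counts atoms on both sides of the scaled reaction \(n \times (2 \hbox{H}_2 + \hbox{O}_2 \rightarrow 2 \hbox{H}_2\hbox{O})\) for several multiples \(n\):

      # Atom-count check for n * (2 H2 + O2 -> 2 H2O): any positive
      # multiple n balances, showing that only the ratios are fixed.
      def balanced(n):
          h_in = (2 * n) * 2    # n*2 molecules of H2, 2 hydrogen atoms each
          o_in = n * 2          # n molecules of O2, 2 oxygen atoms each
          h_out = (2 * n) * 2   # n*2 molecules of H2O, 2 hydrogen atoms each
          o_out = (2 * n) * 1   # n*2 molecules of H2O, 1 oxygen atom each
          return h_in == h_out and o_in == o_out

      print(all(balanced(n) for n in (1, 2, 3, 10)))   # True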

  134. Note that you cannot have a molecule composed of 4.8 carbon atoms, 8.4 hydrogen atoms, and 2.2 oxygen atoms, since atoms exist in whole numbers only! This compositional formula merely shows us the relative proportions of each element in the complex mixture of molecules that make up sewage sludge.↩︎

  135. These assumptions are critically important to equating volumetric ratios with molar ratios. First, the compared substances must both be gases: the volume of one mole of steam is huge compared to the volume of one mole of liquid water. Next, we cannot assume temperatures and pressures will be the same after a reaction as before. This is especially true for our example here, where ethane and oxygen are burning to produce water vapor and carbon dioxide: clearly, the products will be at a greater temperature than the reactants!↩︎

  136. Looking at the unity-fraction problem, we see that “grams” (g) will cancel from top and bottom of the unity fraction, and “ethane” will cancel from the given quantity and from the bottom of the unity fraction. This leaves “kilograms” (kg) from the given quantity and “oxygen” from the top of the unity fraction as the only units remaining after cancellation, giving us the proper units for our answer: kilograms of oxygen.↩︎

  137. This notation is quite common in scientific and engineering literature, as a way to avoid having to typeset fractions in a text document. Instead of writing \(\hbox{kJ} \over \hbox{mol}\) which requires a fraction bar, we may write \(\hbox{kJ mol}^{-1}\) which is mathematically equivalent. Another common example of this notation is to express frequency in the unit of \(\hbox{s}^{-1}\) (per second) rather than the unit of Hertz (Hz). Perhaps the most compelling reason to use negative exponents in unit expressions, though, is sociological: scientific studies have shown the regular use of this unit notation makes you appear 37.5% smarter than you actually are. Questioning statistical results of scientific studies, on the other hand, reduces your apparent intelligence by over 63%! Now, aren’t you glad you took the time to read this footnote?↩︎

  138. Just how catalysts perform this trick is a subject of continuing research. Catalysts used in industrial process industries are usually selected based on the results of empirical tests rather than by theory, since a general theoretical understanding of catalysis is lacking at the present time. Indeed, the specific selection of catalysts for high-value chemical processes is often a patented feature of those processes, reflecting the investment of time, finances, and effort finding a suitable catalyst for optimizing each chemical reaction.↩︎

  139. If this were not true, one could construct an over-unity (“perpetual motion”) machine by initiating an endothermic reaction and then reversing that reaction (exothermic) using a catalyst in either or both portions of the cycle to reap a net energy release from the system. So trustworthy is the Law of Energy Conservation that we may safely invoke the impossibility of over-unity energy production as a disproof of any given hypothesis permitting it. In other words, if any hypothesis allows for an over-unity process (i.e. violates the Law of Energy Conservation), we may reject that hypothesis with confidence! This form of disproof goes by the name reductio ad absurdum (Latin: “reducing to an absurdity”).↩︎

  140. At first it may seem nonsensical for the carbon dioxide product of this reaction to have a negative energy, until you realize the zero values given to both the carbon and oxygen reactants are entirely arbitrary. Viewed in this light, the negative heat of formation for CO\(_{2}\) is nothing more than a relative expression of chemical potential energy in reference to the elements from which CO\(_{2}\) originated. Therefore, a negative \(\Delta H_f^{\circ}\) value for any molecule simply tells us that molecule has less energy (i.e. is more stable) than its constituent elements.↩︎

  141. We may also readily tell whether any given reaction will be exothermic or endothermic, based on the mathematical sign of this \(\Delta H\) value.↩︎

  142. Of course, it is not necessary to look up \(\Delta H_f^{\circ}\) for oxygen gas, as that is an element in its natural state at STP and therefore its standard heat of formation is defined to be zero. The heat of formation for carbon dioxide gas may be found from the preceding example, while the heat of formation for water may be found in the “Heats of Reaction and Activation Energy” subsection of this book. The only substance in this list of which the heat of formation is not defined as zero or given in this book is propane. Note that many thermochemical reference books will give heats of formation in units of kilocalories per mole rather than kilojoules per mole. The conversion factor between these is 1 calorie = 4.184 joules.↩︎
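
  As an illustration of the bookkeeping this footnote describes, here is a hedged sketch of a Hess’s-law calculation for propane combustion. The \(\Delta H_f^{\circ}\) values are approximate figures of the sort found in common thermochemical tables (liquid water assumed), not values quoted from this book:

      # Hess's law for C3H8 + 5 O2 -> 3 CO2 + 4 H2O, using approximate
      # standard heats of formation in kJ/mol (O2 defined as zero).
      dHf = {"C3H8": -103.8, "O2": 0.0, "CO2": -393.5, "H2O": -285.8}

      dH = (3 * dHf["CO2"] + 4 * dHf["H2O"]) - (dHf["C3H8"] + 5 * dHf["O2"])
      print(dH)   # approximately -2220 kJ/mol: negative, therefore exothermic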

  143. These names have their origin in the terms used to classify positive and negative electrodes immersed in a liquid solution. The positive electrode is called the “anode” while the negative electrode is called the “cathode.” An anion is an ion attracted to the anode. A cation is an ion attracted to the cathode. Since opposite electrical charges tend to attract, this means “anions” are negatively charged and “cations” are positively charged.↩︎

  144. Ionic compounds are formed when oppositely charged atomic ions bind together by mutual attraction. The distinguishing characteristic of an ionic compound is that it is a conductor of electricity in its pure, liquid state. That is, it readily separates into anions and cations all by itself. Even in its solid form, an ionic compound is already ionized, with its constituent atoms held together by an imbalance of electric charge. Being in a liquid state simply gives those atoms the physical mobility needed to dissociate.↩︎

  145. Covalent compounds are formed when electrically neutral atoms bind together by the mutual sharing of valence electrons. Such compounds are not good conductors of electricity in their pure, liquid states.↩︎

  146. Actually, the more common form of positive ion in water is hydronium: H\(_{3}\)O\(^{+}\), but we often simply refer to the positive half of an ionized water molecule as hydrogen (H\(^{+}\)).↩︎

  147. Free hydrogen ions (H\(^{+}\)) are rare in a liquid solution, and are more often found attached to whole water molecules to form a positive ion called hydronium (H\(_{3}\)O\(^{+}\)). However, process control professionals usually refer to these positive ions simply as “hydrogen” even though the truth is a bit more complicated.↩︎

  148. The letter “p” refers to “potential,” in reference to the logarithmic nature of the measurement. Other logarithmic measurements of concentration exist for molecular species, including pO\(_{2}\) and pCO\(_{2}\) (concentration of oxygen and carbon dioxide molecules in a liquid solution, respectively).↩︎

  149. Often, students assume that the 7 pH value of water is an arbitrary assignment, using water as a universal standard just like we use water as the standard for the Celsius temperature scale, viscosity units, specific gravity, etc. However, this is not the case here. Pure water at room temperature just happens to have a hydrogen ion molarity equivalent to a (nearly) round-number value of 7 pH.↩︎
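
  A one-line check of this claim, assuming the commonly cited room-temperature hydrogen ion molarity of \(10^{-7}\) moles per liter:

      # pH is the negative base-ten logarithm of hydrogen ion molarity.
      from math import log10

      def pH(hydrogen_ion_molarity):
          return -log10(hydrogen_ion_molarity)

      print(pH(1.0e-7))   # pure water at room temperature: 7.0
      print(pH(1.0e-3))   # an acidic solution: 3.0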

  150. If the electrolyte is considered strong, all or nearly all of its molecules will dissociate into ions. A weak electrolyte is one where only a mere portion of its molecules dissociate into ions.↩︎

  151. For “strong” acids, all or nearly all molecules dissociate into ions. For “weak” acids, just a portion of the molecules dissociate.↩︎

  152. For “strong” bases, all or nearly all molecules dissociate into ions. For “weak” bases, just a portion of the molecules dissociate.↩︎

  153. It should be noted that the solution never becomes electrically imbalanced with the addition of an acid or caustic. It is merely the balance of hydrogen to hydroxyl ions we are referring to here. The net electrical charge for the solution should still be zero after the addition of an acid or caustic, because while the balance of hydrogen to hydroxyl ions does change, that electrical charge imbalance is made up by the other ions resulting from the addition of the electrolyte (anions for acids, cations for caustics). The end result is still one negative ion for every positive ion (equal and opposite charge numbers) in the solution no matter what substance(s) we dissolve into it.↩︎

  154. Exceptions do exist for strong concentrations, where hydrogen ions may be present in solution yet unable to react because of being “crowded out” by other ions in the solution.↩︎

  155. A battery is an electrochemical device producing an electrical voltage as the result of a chemical reaction.↩︎

  156. I have yet to read a document of any kind written by an equipment manufacturer using electron flow notation, and this is after scrutinizing literally hundreds of documents looking for this exact detail! For the record, though, most technical documents do not bother to draw a direction for current at all, leaving it to the imagination of the reader instead. It is only when a direction must be drawn that one sees a strong preference in industry for conventional flow notation.↩︎

  157. If by chance I have missed anyone’s digital electronics textbook that does use electron flow, please accept my apologies. I can only speak of what I have seen myself.↩︎

  158. Although the unit of the “watt” is commonly used for electrical power, other units are valid as well. The British unit of horsepower is every bit as valid for expressing electrical power as “watts,” although this usage is less common. Likewise, the “watt” may be used to express measurements of non-electrical power as well, such as the mechanical power output of an engine. European automobile manufacturers, for example, rate the power output of their cars’ engines in kilowatts, as opposed to American automobile manufacturers who rate their engines in horsepower. This choice of units is strictly a cultural convention, since any valid unit for power may be applied to any form of energy rate.↩︎

  159. Except in the noteworthy case of superconductivity, a phenomenon occurring at extremely low temperatures.↩︎

  160. Except in the noteworthy case of superfluidity, another phenomenon occurring at extremely low temperatures.↩︎

  161. Interesting exceptions do exist to this rule, but only on very short time scales, such as in cases where we examine a transient (pulse) signal nanosecond by nanosecond, and/or when very high-frequency AC signals exist over comparatively long conductor lengths.↩︎

  162. Those exceptional cases mentioned in the previous footnote are possible only because electric charge may be temporarily stored and released by a property called capacitance. Even then, the law of charge conservation is not violated, because the stored charges re-emerge as current at later times. This is analogous to pouring water into a bucket: just because water enters the bucket without leaving it does not mean the water is magically disappearing. It is merely being stored, and can re-emerge at a later time.↩︎

  163. An ideal conductor has no resistance, and so there is no reason for a difference of potential to exist along a pathway where nothing stands in the way of charge motion. If ever a potential difference developed, charge carriers within the conductor would simply move to new locations and neutralize the potential.↩︎

  164. Again, interesting exceptions do exist to this rule on very short time scales, such as in cases where we examine a transient (pulse) signal nanosecond by nanosecond, and/or when very high-frequency AC signals exist over comparatively long conductor lengths.↩︎

  165. The exceptional cases mentioned in the previous footnote exist only because the electrical property of inductance allows potential energy to be stored in a magnetic field, manifesting as a voltage difference along the length of a conductor. Even then, the Law of Energy Conservation is not violated because the stored energy re-emerges at a later time.↩︎

  166. But not always! There do exist positive-ground systems, particularly in telephone circuits and in some early automobile electrical systems.↩︎

  167. Both in the British system of measurement and the SI metric system of measurement! The older metric system (called “CGS” for Centimeter-Gram-Second) had a special unit of measurement called the Gilbert for expressing magnetic field strength, with 1 Gilbert (Gb) equal to 0.7958 Amp-turns (At).↩︎

  168. The term “ferrous” simply refers to any substance containing the element iron. This includes steel, which is a combination of iron and carbon.↩︎

  169. The word “solenoid” may also be used to describe a wire coil with no armature, but the more common industrial use of the word refers to the complete arrangement of coil and movable armature.↩︎

  170. There is also a left-hand rule for fans of electron flow, but in this book I will default to conventional flow. For a more complete discussion on this matter, see section 4.2.1.↩︎

  171. The term “ferrous” refers to any substance containing the element iron. Steel is one such substance, being a combination of iron and carbon.↩︎

  172. Unlike the charge/hold/discharge capacitor circuit, this inductor demonstration circuit would not function quite as well in real life. Real inductors contain substantial amounts of electrical resistance (\(R\)) in addition to inductance (\(L\)), which means real inductors have an inherent capacity to dissipate their own store of energy. If a real inductor were placed in a circuit such as this, it would not maintain its store of energy indefinitely in the switch’s “neutral” position as a capacitor would. Realistically, the inductor’s energy would likely dissipate in a matter of milliseconds following the switch to the “neutral” position.↩︎

  173. It is also acceptable to refer to electrical voltages and/or currents that vary periodically over time even if their directions never alternate, as AC superimposed on DC.↩︎

  174. Charles Proteus Steinmetz, in his book Theoretical Elements of Electrical Engineering, refers to the voltage and current values of a reactive component being “wattless” in honor of the fact that they transfer zero net power to or from the circuit (page 41). The voltage and current values of resistive components, by contrast, constitute real power dissipated in the circuit.↩︎

  175. At first it may seem strange to apply Faraday’s Law here, because this formula is typically used to describe the amount of voltage produced by a coil of wire exposed to a changing magnetic field, not the amount of magnetic field produced by an applied voltage. However, the two are closely related because the inductor must produce a voltage drop in equilibrium with the applied voltage just like any other component, in accordance with Kirchhoff’s Voltage Law. In a simple circuit such as this where the voltage source directly connects to the inductor (barring any resistive losses in the connecting wires), the coil’s induced voltage drop must exactly equal the source’s applied voltage at all points in time, and so Faraday’s Law works just as well to describe the source’s applied voltage as it does to describe the coil’s induced voltage. This is the principle of self-induction.↩︎

  176. In this context, “constant” means an alternating voltage with a consistent peak value, not “constant” in the sense that a DC source is constant at all points in time.↩︎

  177. In this context, “constant” means an alternating voltage with a consistent peak value, not “constant” in the sense that a DC source is constant at all points in time.↩︎

  178. These power losses take the form of core losses due to magnetic hysteresis in the ferrous core material, and winding losses due to electrical resistance in the wire coils. Core losses may be minimized by reducing magnetic flux density (\(B\)), which requires a core with a larger cross-section to disperse the flux (\(\phi\)) over a wider area. Winding losses may be minimized by increasing wire gauge (i.e. thicker wire coils). In either case, these modifications make for a bulkier and more expensive transformer.↩︎

  179. Transformers, of course, utilize the principle of electromagnetic induction to generate a voltage at the secondary winding which may power a load. Ideally, 100 percent of the magnetic flux generated by the energized primary winding “links” or “couples” to the secondary winding. However, imperfections in the windings, core material, etc. conspire to prevent every bit of magnetic flux from coupling with the secondary winding, and so any magnetic flux from the primary winding that doesn’t transfer power to the secondary winding simply absorbs and releases energy like a plain inductor. This is called “leakage” inductance because the flux in question has found a path to “leak” around the secondary winding. Leakage inductance may be modeled in a transformer as a separate series-connected inductance connected to the primary winding. Like any inductance, it presents a reactance equal to \(X_L = 2 \pi f L\), and in a transformer serves to impede primary current.↩︎
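
  A minimal sketch of the reactance formula quoted above, with an arbitrary leakage inductance value chosen purely for illustration:

      # Inductive reactance X_L = 2*pi*f*L.
      from math import pi

      def inductive_reactance(f_hertz, L_henrys):
          return 2 * pi * f_hertz * L_henrys

      print(inductive_reactance(60, 0.010))   # 10 mH at 60 Hz: about 3.77 ohms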

  180. Although it is possible to express transformer impedance in the more familiar unit of Ohms (\(\Omega\)), percentage is greatly preferred for the simple reason that it applies identically to the primary and secondary sides of the transformer. Expressing transformer impedance in ohms would require a different value depending on whether the primary side or secondary side were being considered.↩︎

  181. The rather colorful term “bolted” refers to a short-circuit fault consisting of a large copper bus-bar physically attached to the transformer’s secondary terminal using bolts. In other words, a “bolted” fault is as close to a perfect short-circuit as you can get.↩︎

  182. A full circle contains 360 degrees, which is equal to \(2 \pi\) radians. One “radian” is defined as the angle encompassing an arc-segment of a circle’s circumference equal in length to its radius, hence the name “radian”. Since the circumference of a circle is \(2 \pi\) times as long as its radius, there are \(2 \pi\) radians’ worth of rotation in a circle. Thus, while the “degree” is an arbitrary unit of angle measurement, the “radian” is a more natural unit of measurement because it is defined by the circle’s own radius.↩︎
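
  A short sketch of the conversion implied here, based on \(360^{o} = 2 \pi\) radians:

      # Degree/radian conversion from 360 degrees = 2*pi radians.
      from math import pi

      def deg_to_rad(degrees):
          return degrees * pi / 180.0

      print(deg_to_rad(360))   # 6.283..., i.e. 2*pi radians
      print(180 / pi)          # one radian is about 57.3 degrees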

  183. The definition of an imaginary number is the square root of a negative quantity. \(\sqrt{-1}\) is the simplest case, and is symbolized by mathematicians as \(i\) and by electrical engineers as \(j\).↩︎

  184. The term “unit vector” simply refers to a vector with a length of 1 (“unity”).↩︎

  185. Although \(A\) truly should represent a waveform’s peak value, and \(\theta\) should be expressed in units of radians to be mathematically correct, it is more common in electrical engineering to express \(A\) in RMS (root-mean-square) units and \(\theta\) in degrees. For example, a 120 volt RMS sine wave voltage at a phase angle of 30 degrees will be written by an engineer as \(120e^{j30}\) even though the true phase angle of this voltage is \(\pi \over 6\) radians and the actual peak value is 169.7 volts.↩︎
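
  The following sketch (my own illustration) unpacks the engineer’s \(120e^{j30}\) notation using Python’s complex-number support, showing both the rectangular form and the true peak value:

      # An RMS phasor of 120 volts at 30 degrees, and its peak equivalent.
      import cmath, math

      rms, angle_deg = 120.0, 30.0
      phasor = cmath.rect(rms, math.radians(angle_deg))

      print(phasor)              # (103.92...+60j) volts RMS, rectangular form
      print(rms * math.sqrt(2))  # 169.7 volts: the actual peak value
      print(math.radians(30))    # 0.5236 radians, i.e. pi/6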

  186. The fact that this graph shows the vertical (imaginary) projections of both phasors rather than the horizontal (real) projections is irrelevant to phase shift. Either way, the voltage waveform of source B will still lead the voltage waveform of source A by 60\(^{o}\).↩︎

  187. One way to think of this is to imagine an AC voltage-measuring instrument having red and black test leads just like a regular voltmeter. To measure \(V_{BA}\) you would connect the red test lead to the first point (B) and the black test lead to the second point (A).↩︎

  188. The necessity of a shared frequency is easily understood if one considers a case of two sine waves at different frequencies: their respective phasors would spin at different speeds. Given two phasors spinning at different speeds, the angle separating those two phasors would be constantly changing. It is only when two phasors spin around at precisely the same speed that we can sensibly talk about there being a fixed angular displacement between them. Fortunately this is the usual case in AC circuit analysis, where all voltages and currents share the same frequency.↩︎

  189. An important detail is that our phasometer must always spin counter-clockwise in order to maintain proper phasor convention. We can ensure this will happen by including a pair of shading coils (small copper rings wrapped around one corner of each magnetic pole) in the stator structure. For a more detailed discussion of shading coils, refer to the section on AC induction motors.↩︎


  190. This, of course, assumes the generator powering the system is also a two-pole machine like the phasometer. If the generator has more poles, the shaft speed will not match the phasometer’s rotor speed even though the phasometer will still faithfully represent the generator’s cosine wave rotation.↩︎

  191. Automobile mechanics may be familiar with a tool called a timing light, consisting of a strobe light connected to the engine in such a way that the light flashes every time the #1 cylinder spark plug fires. By viewing the marks etched into the engine’s crankshaft with this strobe light, the mechanic is able to check the ignition timing of the engine.↩︎

  192. Recall from calculus that the derivative of the function \(e^x\) with respect to \(x\) is simply \(e^x\). That is, the value of an exponential function’s slope is equal to the value of the original exponential function! If the exponent contains any constants multiplied by the independent variable, those constants become multiplying coefficients after differentiation. Thus, the derivative of \(e^{kx}\) with respect to \(x\) is simply \(ke^{kx}\). Likewise, the derivative of \(e^{j \omega t}\) with respect to \(t\) is \(j \omega e^{j \omega t}\).↩︎
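
  A one-line symbolic confirmation of this rule, using the sympy library (my own illustration; symbol names are arbitrary):

      # d/dt of e^{j*omega*t} is j*omega * e^{j*omega*t}.
      import sympy as sp

      t = sp.symbols('t', real=True)
      omega = sp.symbols('omega', positive=True)
      print(sp.diff(sp.exp(sp.I * omega * t), t))   # I*omega*exp(I*omega*t)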

  193. Note also one of the interesting properties of the imaginary operator: \({1 \over j} = -j\). The proof of this is quite simple: \({1 \over j} = {j \over j^2} = {j \over -1} = -j\).↩︎

  194. Note that we begin this analysis with an exponential expression of the current waveform rather than the voltage waveform as we did at the beginning of the capacitor analysis. It is possible to begin with voltage as a function of time and use calculus to determine current through the inductor, but unfortunately that would necessitate integration rather than differentiation. Differentiation is a simpler process, which is why this approach was chosen. If \(e^{j \omega t} = L {dI \over dt}\) then \(e^{j \omega t} \> dt = L \> dI\). Integrating both sides of the equation yields \(\int e^{j \omega t} \> dt = L \int dI\). Solving for \(I\) yields \(e^{j \omega t} \over j \omega L\) plus a constant of integration representing a DC component of current that may or may not be zero depending on where the impressed voltage sinusoid begins in time. Solving for \(Z = V / I\) finally gives the result we’re looking for: \(j \omega L\). Ugly, no?↩︎
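
  For readers who wish to verify this integration without pencil and paper, here is a sketch using the sympy symbolic math library (my own illustration; the DC constant of integration is taken as zero):

      # Symbolic check: integrating e^{j*omega*t} gives e^{j*omega*t}/(j*omega),
      # so Z = V/I works out to j*omega*L.
      import sympy as sp

      t = sp.symbols('t', real=True)
      omega, L = sp.symbols('omega L', positive=True)

      V = sp.exp(sp.I * omega * t)          # impressed voltage
      I_current = sp.integrate(V, t) / L    # I = (1/L) * integral of V dt
      print(sp.simplify(V / I_current))     # I*L*omega, i.e. j*omega*L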

  195. A “unit” phasor is one having a length of 1.↩︎

  196. The fact that these impedance phasor quantities have fixed angles in AC circuits where the voltage and current phasors are in constant motion is not a contradiction. Since impedance represents the relationship between voltage and current for a component (\(Z = V / I\)), this fixed angle represents a relative phase shift between voltage and current. In other words, the fixed angle of an impedance phasor tells us the voltage and current waveforms will always remain that much out of step with each other despite the fact that the voltage and current phasors themselves are continuously rotating at the system frequency (\(\omega\)).↩︎

  197. With one notable exception: Joule’s Law (\(P = IV\), \(P = V^2 / Z\), \(P = I^2 Z\)) for calculating power does not apply in AC circuits because power is not a phasor quantity like voltage and current.↩︎

  198. Assuming a two-pole generator, where each period of the sinusoidal waveform corresponds exactly to one revolution of the generator shaft.↩︎

  199. When dividing two phasors in polar form, the arithmetic is as follows: divide the numerator’s magnitude by the denominator’s magnitude, then subtract the denominator’s angle from the numerator’s angle. The result in this case is 5 milliamps (5 volts divided by 1000 ohms) at an angle of 0 degrees (0 minus 0).↩︎

  200. The same arithmetic applies to this quotient as well: the current’s magnitude is 5 volts divided by 1000 ohms, while the current’s phase angle is 60 degrees minus a negative 90 degrees (150 degrees).↩︎
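
  Both of these divisions may be verified with Python’s complex-number arithmetic; a minimal sketch (my own illustration):

      # Polar phasor division: magnitudes divide, angles subtract.
      import cmath, math

      def polar_div(mag1, deg1, mag2, deg2):
          q = cmath.rect(mag1, math.radians(deg1)) / cmath.rect(mag2, math.radians(deg2))
          mag, ang = cmath.polar(q)
          return mag, math.degrees(ang)

      print(polar_div(5, 0, 1000, 0))      # (0.005, 0.0): 5 mA at 0 degrees
      print(polar_div(5, 60, 1000, -90))   # (0.005, 150.0): 5 mA at 150 degrees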

  201. \(\sigma\) is equal to the reciprocal of the signal’s time constant \(\tau\). In other words, \(\sigma = 1 / \tau\).↩︎

  202. One value of \(\omega\) not shown in this three-panel graphic comparison is a negative frequency. This is actually not as profound as it may seem at first. All a negative value of \(\omega\) will do is ensure that the phasor will rotate in the opposite direction (clockwise, instead of counter-clockwise as phasor rotation is conventionally defined). The real portion of the sinusoid will be identical, tracing the same cosine-wave plot over time. Only the imaginary portion of the sinusoid will be different, as \(j \sin(-\theta) = -j \sin \theta\).↩︎

  203. The expression used here to represent voltage is simply \(e^{st}\). I could have used a more complete expression such as \(Ae^{st}\) (where \(A\) is the initial amplitude of the signal), but as it so happens this amplitude is irrelevant because there will be an “\(A\)” term in both the numerator and denominator of the impedance quotient. Therefore, \(A\) cancels out and is of no consequence.↩︎

  204. What we are really doing here is applying a problem-solving technique I like to call limiting cases. This is where we simplify the analysis of some system by considering scenarios where the mathematical quantities are easy to compute.↩︎

  205. Of course, the mathematical plotting software cannot show a pole of truly infinite height, and so the pole has been truncated. This is why it appears to have a “flat” top.↩︎

  206. My first pole-zero plot using the ePiX C++ mathematical visualization library took several hours to get it just right. Subsequent plots went a lot faster, of course, but they still require substantial amounts of time to adjust for a useful and aesthetically pleasing appearance.↩︎

  207. A powerful mathematical technique known as a Laplace Transform does this very thing: translate any differential equation describing a physical system into functions of \(s\), which may then be analyzed in terms of transfer functions and pole-zero plots.↩︎

  208. As before, this counter-intuitive condition is possible only because the capacitor in this circuit has the ability to store energy. If the capacitor is charged by some previous input signal event and then allowed to discharge through the resistor, it becomes possible for this circuit to develop an output voltage even with short-circuited input terminals.↩︎

  209. The two solutions for \(\omega\) (one at +1 radian per second and the other at \(-1\) radian per second) merely indicate the circuit is able to oscillate “forward” as well as “backward”. In other words, it is able to oscillate sinusoidally where the positive peak occurs at time \(t = 0\) (+1 rad/sec) as well as oscillate sinusoidally where the negative peak occurs at time \(t = 0\) (\(-1\) rad/sec). We will find that solutions for \(s\) in general are symmetrical about the real axis, meaning if there is any solution for \(s\) requiring an imaginary number value, there will be two of them: one with a positive imaginary value and the other with a negative imaginary value.↩︎

  210. The only way to obtain a purely imaginary root for this polynomial is for the “\(b\)” coefficient to be equal to zero. For our example circuit, it means either \(R\) or \(C\) would have to be zero, which is impossible if both of those components are present and functioning. Thus, our RLC filter circuit will have either real poles or complex poles.↩︎
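
  To illustrate this point numerically, here is a sketch using the quadratic formula on \(s^2 + bs + c\) (the coefficient values are arbitrary illustrations, not taken from the book’s circuit):

      # Roots of s^2 + b*s + c: only b = 0 yields purely imaginary roots.
      import cmath

      def roots(b, c):
          disc = cmath.sqrt(b**2 - 4*c)
          return (-b + disc) / 2, (-b - disc) / 2

      print(roots(0, 4))   # (2j, -2j): purely imaginary, sustained oscillation
      print(roots(2, 4))   # -1 +/- 1.73j: complex, decaying oscillation
      print(roots(5, 4))   # (-1, -4): purely real, no oscillation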

  211. Or, one might argue there are two repeated poles, one at \(s = -1 + j0\) and another at \(s = -1 - j0\).↩︎

  212. The center of the pole farthest from the plot’s origin actually lies outside the plotted area, which is why that pole appears to be vertically sliced. This plot’s domain was limited to the same values (\(\pm 2\)) as previous plots for the sake of visual continuity, the compromise here being an incomplete mapping of one pole.↩︎

  213. Low-pass filter circuits are typically used to “smooth” the ripple from the output of a rectifier. The greater the frequency of this ripple voltage, the easier it is to filter from the DC (which has a frequency of zero). All other factors being equal, a low-pass filter attenuates higher-frequency components to a greater extent than lower-frequency components.↩︎

  214. Here, the term “balanced” refers to a condition where all phase voltages and currents are symmetrically equal. Unbalanced conditions can and do exist in real polyphase power systems, but the degree of imbalance is usually quite small except in cases of component faults.↩︎

  215. You may recall from basic physics that while force and displacement are both vector quantities (having direction as well as magnitude), work and energy are not. Since power is nothing more than the rate of work over time, and neither work nor time are vector quantities, power is not a vector quantity either. This is closely analogous to voltage, current, and power in polyphase electrical networks, where both voltage and current are phasor quantities (having phase angle “direction” as well as magnitude) but where power merely has magnitude. We call such “directionless” quantities scalar. Scalar arithmetic is simple, with quantities adding and subtracting directly rather than trigonometrically.↩︎

  216. We end up with the same final result if we substitute line quantities in a wye-connected system, too. Instead of \(V_{line} = V_{phase}\) and \(I_{phase} = {I_{line} \over \sqrt{3}}\) in the delta connection we have \(I_{line} = I_{phase}\) and \(V_{phase} = {V_{line} \over \sqrt{3}}\) in the wye connection. The end-result is still \(P_{total} = (\sqrt{3}) (I_{line})(V_{line})\) based on line quantities.↩︎
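
  A minimal sketch of this line-quantity power formula, assuming a balanced system and unity power factor (the voltage and current values are arbitrary examples):

      # Total power in a balanced three-phase system, from line quantities.
      from math import sqrt

      def three_phase_power(v_line, i_line):
          return sqrt(3) * v_line * i_line

      print(three_phase_power(480, 100))   # about 83,100 watts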

  217. A colorful term for this odd voltage is bastard voltage.↩︎

  218. If you are having difficulty seeing the A-B-C or A-C-B rotations of the positive and negative sequences, you may be incorrectly visualizing them. Remember that the phasors (arrows) themselves are rotating about the center point, and you (the observer) are stationary. If you imagine yourself standing where the tip of each “A” phasor now points, then imagine all the phasor arrows rotating counter-clockwise, you will see each phasor tip pass by your vantage point in the correct order.↩︎

  219. It is good to remember that each of the symmetrical components is perfectly balanced (i.e. the “b” and “c” phasors each have exactly the same magnitude as the “a” phasor in each sequential set), and as such each of the phasors for each symmetrical set will have exactly the same magnitude. It is common to denote the calculated phasors simply as \(V_1\), \(V_2\), and \(V_0\) rather than \(V_{a1}\), \(V_{a2}\), and \(V_{a0}\), the “a” phasor implied as the representative of each symmetrical component.↩︎

  220. A “shorthand” notation commonly seen in symmetrical component analysis is the use of a unit phasor called \(a\), equal to \(1 \angle 120^o\). Multiplying any phasor quantity by \(a\) shifts that phasor’s phase angle by +120 degrees while leaving its magnitude unaffected. Multiplying any phasor quantity by \(a^2\) shifts that phasor’s phase angle by +240 degrees while leaving its magnitude unaffected. An example of this “\(a\)” notation is seen in the following formula for calculating the positive sequence voltage phasor: \(V_{a1} = {1 \over 3} (V_a + a V_b + a^2 V_c)\)↩︎
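
  The “\(a\)” operator translates naturally into complex-number code. Here is a sketch of the quoted positive-sequence formula, checked against a perfectly balanced A-B-C set (the 120 volt magnitude is an arbitrary example):

      # Positive sequence V_a1 = (1/3)(V_a + a*V_b + a^2*V_c),
      # where a = 1 at +120 degrees.
      import cmath, math

      a = cmath.rect(1, math.radians(120))

      def positive_sequence(Va, Vb, Vc):
          return (Va + a * Vb + a * a * Vc) / 3

      Va = cmath.rect(120, math.radians(0))
      Vb = cmath.rect(120, math.radians(-120))
      Vc = cmath.rect(120, math.radians(120))
      print(positive_sequence(Va, Vb, Vc))   # approximately (120+0j)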

  221. The battery-and-switch test circuit shown here is not just hypothetical, but may actually be used to test the polarity of an unmarked transformer. Simply connect a DC voltmeter to the secondary winding while pressing and releasing the pushbutton switch: the voltmeter’s polarity indicated while the button is pressed will indicate the relative phasing of the two windings. Note that the voltmeter’s polarity will reverse when the pushbutton switch is released and the magnetic field collapses in the transformer coil, so be sure to pay attention to the voltmeter’s indication only during the instant the switch closes!↩︎

  222. An autotransformer is any transformer configuration where the primary and secondary windings are electrically connected to each other, rather than galvanically isolated as is typical.↩︎

  223. This use of the term is entirely different from the same term’s use in the electric power industry, where a “transmission line” is a set of conductors used to send large amounts of electrical energy over long distances.↩︎

  224. The signal generator was set to a frequency of approximately 240 kHz with a Thévenin resistance of 118 ohms to closely match the cable’s characteristic impedance of 120 ohms. The signal amplitude was just over 6 volts peak-to-peak.↩︎

  225. The termination shown here is imperfect, as evidenced by the irregular amplitude of the square wave. The cable used for this experiment was a length of twin-lead speaker cable, with a characteristic impedance of approximately 120 ohms. I used a 120 ohm (\(\pm\) 5%) resistor to terminate the cable, which apparently was not close enough to eliminate all reflections.↩︎

  226. A “polar” molecule is one where the constituent atoms are bound together in such a way that there is a definite electrical polarity from one end of the molecule to the other. Water (H\(_{2}\)O) is an example of a polar molecule: the positively charged hydrogen atoms are bound to the negatively charged oxygen atom in a “V” shape, so the molecule as a whole has a positive side and a negative side which allows the molecule to be influenced by external electric fields. Carbon dioxide (CO\(_{2}\)) is an example of a non-polar molecule whose constituent atoms lie in a straight line with no apparent electrical poles. Interestingly, microwave ovens exploit the fact of water molecules’ polarization by subjecting food containing water to a strong oscillating electric field (microwave energy in the gigahertz frequency range) which causes the water molecules to rotate as they continuously orient themselves to the changing field polarity. This oscillatory rotation manifests itself as heat within the food.↩︎

  227. An older term used by radio pioneers to describe antennas is radiator, which I personally find very descriptive. The word “antenna” does an admirable job describing the physical appearance of the structure – like antennas on an insect – but the word “radiator” actually describes its function, which is a far more useful principle for our purposes.↩︎

  228. In practice, the ideal length of a dipole antenna turns out to be just a bit shorter than theoretical, due to lumped-capacitive effects at the wire ends. Thus, a resonant 30 MHz half-wave dipole antenna should actually be about 4.75 meters in length rather than exactly 5 meters in length.↩︎
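
  A sketch of the arithmetic behind these figures; the 5% shortening factor is simply the ratio implied by the footnote’s numbers (4.75 m out of 5 m), not a precise design rule:

      # Half-wave dipole length at 30 MHz: lambda/2, then ~5% shorter
      # to account for end effects.
      c = 3.0e8   # speed of light, meters per second

      f = 30.0e6                           # 30 MHz
      half_wave = (c / f) / 2              # 5.0 meters, theoretical
      print(half_wave, half_wave * 0.95)   # 5.0  4.75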

  229. For more information on conducting “thought experiments,” refer to the subsection of this book titled “Using Thought Experiments” (34.3.4).↩︎

  230. Many interesting points may be drawn from these two illustrations. Regarding the strip chart recording instrument itself, it is worth noting the ornate design of the metal frame (quite typical of machinery design from that era), the attractive glass dome used to shield the chart and mechanism from the environment, and the intricate mechanism used to drive the strip chart and move the pen. Unlike a circular chart, the length of a strip chart is limited only by the diameter of the paper roll, and may be made long enough to record many days’ worth of pressure measurements. The label seen on the front of this instrument (“Edson’s Recording and Alarm Gauge”) tells us this instrument has the ability to alert a human operator of abnormal conditions, and a close inspection of the mechanism reveals a bell on the top which presumably rings under alarm conditions. Regarding the strip chart record, note the “compressed” scale, whereby successive divisions of the vertical scale become closer in spacing, reflecting some inherent nonlinearity of the pressure-sensing mechanism.↩︎

  231. These might be float-driven switches, where each switch is mechanically actuated by the buoyancy of a hollow metal float resting on the surface of the water. Another technology uses metal electrodes inserted into the water from above, sensing water level by electrical conductivity: when the water level reaches the probe’s tip, an electrical circuit is closed. For more information on liquid level switches, refer to section 9.6.↩︎

  232. D.A. Strobhar, writing in The Instrument Engineers’ Handbook on the subject of alarm management, keenly observes that alarms are the only form of instrument “whose sole purpose is to alter the operator’s behavior.” Other instrument devices work to control the process, but only alarms work to control the human operator.↩︎

  233. When a complex machine or process with many shutdown sensors automatically shuts down, it may be difficult to discern after the fact which shutdown device was responsible. For instance, imagine an engine-powered generator automatically shutting down because one of the generator’s “trip” sensors detected an under-voltage condition. Once the engine shuts down, though, multiple trip sensors will show abnormal conditions simply because the engine is not running anymore. The oil pressure sensor is one example of this: once the engine shuts down, there will no longer be any oil pressure, thus causing that alarm to activate. The under-voltage alarm falls into this category as well: once the engine shuts down, the generator will no longer be turning and therefore its output voltage must be zero. The problem for any human operator encountering the shut-down engine is that he or she cannot tell which of these alarms was the initiating cause of the shutdown versus which of these alarms simply activated after the fact once the engine shut off. An annunciator panel showing both an under-voltage and a low oil pressure light does not tell us which event happened first to shut down the generator. A “first-event” (sometimes called a “first-out”) annunciator, however, shows which trip sensor was the first to activate, thus revealing the initiating cause of the event.↩︎

  234. A fun and informative essay to read on this subject is Mortimer Adler’s How to Mark a Book, widely disseminated on the Internet. In it, Adler argues persuasively for the habit of annotating the books you read, and gives some practical tips for doing so. He says reading a book should be a sort of conversation with the author where the flow of information is not just from the author to you, but also from you to yourself as you question, consider, and even argue the author’s points.↩︎

  235. Sometimes P&ID stands for Piping and Instrument Diagram. Either way, it means the same thing.↩︎

  236. It should be noted that the “zooming in” of scope in a P&ID does not necessarily mean the scope of other areas of the process must be “zoomed out.” In fact, it is rather typical in a P&ID that the entire process system is shown in finer detail than in a PFD, but not all on one page. In other words, while a PFD may depict a process in its entirety on one piece of paper, a comprehensive P&ID will typically span multiple pieces of paper, each one detailing a section of the process system.↩︎

  237. Compressor “surge” is a violent and potentially self-destructive action experienced by a centrifugal compressor if the pressure differential across it becomes too high and the flow rate through it becomes too low. Surging may be prevented by opening up a “recycle” valve from the compressor’s discharge line to the suction line, ensuring adequate flow through the compressor while simultaneously unloading the high pressure differential across it.↩︎

  238. Functional diagrams are sometimes referred to as SAMA diagrams in honor of the organization responsible for their standardization, the Scientific Apparatus Makers Association. This organization has been succeeded by the Measurement, Control, and Automation Association (MCAA), thus obsoleting the “SAMA” acronym.↩︎

  239. Exceptions do exist to this rule. For example, in a cascade or feedforward loop where multiple transmitters feed into one or more controllers, each transmitter is identified by the type of process variable it senses, and each controller’s identifying tag follows suit.↩︎

  240. EBAA Iron Sales, Inc. published a two-page report in 1994 (“Connections” FL-01 2-94) summarizing the history of flange “pound” ratings from the ASME/ANSI B16 standards.↩︎

  241. For example, 1/8 inch NPT pipe fittings have a thread pitch of 27 threads per inch. 1/4 inch and 3/8 inch NPT fittings are 18 threads per inch, 1/2 inch and 3/4 inch NPT fittings are 14 threads per inch, and 1 inch through 2 inch NPT fittings are 11.5 threads per inch.↩︎

  242. Impulse lines are alternatively called gauge lines or sensing lines.↩︎

  243. This happens to be a Swagelok brass instrument tube fitting being installed on a 3/8 inch copper tube.↩︎

  244. So are Gyrolok, Hoke, and a host of others. It is not my intent to advertise for different manufacturers in this textbook, but merely to point out some of the more common brands an industrial instrument technician might encounter on the job.↩︎

  245. It should be noted that the fitting nuts became seized onto the tube due to the tube’s swelling. The tube fittings may not have leaked during the test, but their constituent components are now damaged and should never be placed into service again.↩︎

  246. No one wants to become known as the person who “messed up” someone else’s neat wiring job!↩︎

  247. An occupational hazard for technicians performing work on screw terminations is carpal tunnel syndrome, where repetitive wrist motion (such as the motions required to loosen and tighten screw terminals) damages portions of the wrist where tendons pass.↩︎

  248. An exception is when the screw is equipped with a square washer underneath the head, designed to compress the end of a stranded wire with no shear forces. Many industrial instruments have termination points like this, for the express purpose of convenient termination to either solid or stranded wire ends.↩︎

  249. This is similar to people referring to adhesive bandages as “Band-Aids” or tongue-and-groove joint pliers as “Channelocks,” because those particular brands have become popular enough to represent an entire class.↩︎

  250. The principle at work here is the strength of the field generated by the noise-broadcasting conductor: electric field strength (involved with capacitive coupling) is directly proportional to voltage, while magnetic field strength (involved with inductive coupling) is directly proportional to current.↩︎

  251. Incidentally, cable shielding likewise guards against strong electric fields within the cable from capacitively coupling with conductors outside the cable. This means we may elect to shield “noisy” power cables instead of (or in addition to) shielding low-level signal cables. Either way, good shielding will prevent capacitive coupling between conductors on either side of a shield.↩︎

  252. This is not to say magnetic fields cannot induce common-mode noise voltage: on the contrary, magnetic fields are capable of inducing voltage in any electrically-conductive loop. For this reason, both differential and ground-referenced signals are susceptible to interference by magnetic fields.↩︎

  253. An example of this is the UTP (Unshielded, Twisted Pair) cabling used for Ethernet digital networks, where four pairs of wires having different twist rates are enclosed within the same cable sheath.↩︎

  254. This use of the term is entirely different from the same term’s use in the electric power industry, where a “transmission line” is a set of conductors used to send large amounts of electrical energy over long distances.↩︎

  255. A student of mine once noted that he has been doing this out of habit whenever he has a conversation with anyone in a racquetball court. All the hard surfaces (floor, walls) in a racquetball court create severe echoes, forcing players to speak slower in order to avoid confusion from the echoes.↩︎

  256. The characteristic, or “surge,” impedance of a cable is a function of its conductor geometry (wire diameter and spacing) and dielectric value of the insulation between the conductors. Any time a signal reaches an abrupt change in impedance, some (or all) of its energy is reflected in the reverse direction. This is why reflections happen at the unterminated end of a cable: an “open” is an infinite impedance, which is a huge shift from the finite impedance “seen” by the signal as it travels along the cable. This also means any sudden change in cable geometry such as a crimp, nick, twist, or sharp bend is capable of reflecting part of the signal. Thus, high-speed digital data cables must be installed more carefully than low-frequency or DC analog signal cables.↩︎
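
  To quantify how much of a signal reflects at an impedance change, the standard reflection coefficient formula (not quoted in this footnote, but consistent with it) may be sketched as follows:

      # Reflection coefficient at a load: gamma = (ZL - Z0) / (ZL + Z0).
      def reflection_coefficient(ZL, Z0):
          return (ZL - Z0) / (ZL + Z0)

      print(reflection_coefficient(120.0, 120.0))   # matched: 0.0, no reflection
      print(reflection_coefficient(1e12, 120.0))    # near-open: ~1.0, full reflection
      print(reflection_coefficient(0.0, 120.0))     # short: -1.0, inverted reflection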

  257. Smoke signals are an ancient form of light-based communication!↩︎

  258. These are thin plastic or sheet metal tubes with mirrored internal surfaces, extending from a collector dome (made of glass or plastic) outside the dwelling to a diffusion lens inside the dwelling.↩︎

  259. Technicians working with optical fiber typically carry pressurized cans of dust-blowing air or other gas to clean connectors and sockets prior to joining the two.↩︎

  260. Chief of which is the potential to get optical fibers embedded in the body, where such transparent “slivers” are nearly impossible to find and extract.↩︎

  261. A “photon” is a quantity of light energy represented as a particle, along the same scale as an electron. It isn’t entirely fair to characterize light as either consisting of waves or as consisting of particles, because light tends to manifest properties of both. Actually, this may be said of any sub-atomic particle (such as an electron) as well: under certain conditions these particles act like clumps of matter, and under different conditions they tend to act as waves of electromagnetic energy. This particle-wave duality lies at the heart of quantum physics, and continues to be something of a philosophical mystery simply because the behavior defies the macroscopic constructs we are accustomed to using when modeling the natural world.↩︎

  262. Fluorescence is the phenomenon of a substance emitting a long-wavelength (low-energy) photon when “excited” by a short-wavelength (high-energy) photon. Perhaps the most familiar example of fluorescence is when certain materials emit visible light when exposed to ultraviolet light which is invisible to the human eye. The example of fluorescence discussed here with dissolved oxygen sensing happens to use two different colors (wavelengths) of visible light, but the basic principle is the same.↩︎

  263. Impurities such as metals and water are held to values less than 1 part per billion (ppb) in modern optical fiber-grade glass.↩︎

  264. The “index of refraction” (\(n\)) for any substance is the ratio of the speed of light through a vacuum (\(c\)) compared to the speed of light through that substance (\(v\)): \(n = {c \over v}\). For all substances this value will be greater than one (i.e. the speed of light will always be greatest through a vacuum, at 299792458 meters per second or 186282.4 miles per second). Thus, the refractive index for an optically transparent substance is analogous to the reciprocal of the velocity factor of an electrical transmission line, where the permittivity and permeability of the cable materials act to slow down the propagation of electric and magnetic fields through the cable.↩︎
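
  A minimal sketch of this definition; the fiber core index of 1.468 is an assumed typical value for silica glass, not a figure from this book:

      # Speed of light through a medium: v = c / n.
      c = 299792458.0   # meters per second, in a vacuum

      def light_speed(n):
          return c / n

      print(light_speed(1.0))     # vacuum: 299792458 m/s
      print(light_speed(1.468))   # silica fiber core (assumed n): ~2.04e8 m/s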

  265. All of these sizes refer to glass fibers. Plastic-based optical fibers are also manufactured, with much larger core diameters to offset the much greater optical losses through plastic compared to through ultra-pure glass. A typical plastic optical fiber (POF) standard is specified at a core diameter of 980 microns and a cladding diameter of 1000 microns (1 millimeter)!↩︎

  266. A common core size for “multi-mode” optical fiber is 50 microns, or 50 micro-meters. If a wavelength of 1310 nanometers (1.31 microns) is used, the core’s diameter will be \(50 \over 1.31\) or over 38 times the wavelength.↩︎

  267. The most straightforward way to make an optical fiber single-mode is to manufacture it with a skinnier core. However, the same result may be achieved by increasing the wavelength of the light used! Remember that what makes a single-mode optical fiber only have one mode is the diameter of its core relative to the wavelength of the light. For any optical fiber there is a cutoff wavelength above which it will operate as single-mode and below which it will operate as multi-mode. However, there are practical limits to how long of a wavelength we can make the light before we run into other problems, and so single-mode optical fiber is made for standard light wavelengths by manufacturing the cable with an exceptionally small core diameter.↩︎

  268. Typically a few inches for multi-mode fiber.↩︎

  269. Not just light lost along the length of the fiber, but also at each connector on the fiber, since placing the test fiber within the optical path between the light source and optical power meter necessarily introduces another pair of connectors where light may be lost.↩︎

  270. Since distance along any path is simply the product of speed and time (\(x = vt\)), and the speed of light through an optical fiber is a well-defined quantity (\(v = {c \over n}\) where \(n\) is the core’s index of refraction), the distance between the OTDR and the flaw is trivial to calculate.↩︎
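
  A hedged sketch of this calculation. Note that an OTDR measures the round-trip echo time, so the time is halved here; that factor of 2 is my addition for the sake of a correct sketch, and the refractive index is an assumed typical value:

      # Distance to a flaw from an OTDR echo: x = v * (t/2), with v = c/n.
      c = 299792458.0

      def flaw_distance_meters(echo_time_s, n=1.468):   # n = assumed core index
          v = c / n                      # light speed in the fiber core
          return v * (echo_time_s / 2)   # halve the round-trip echo time

      print(flaw_distance_meters(10e-6))   # 10 microsecond echo: about 1021 meters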

  271. Mistaken interpretation of switch status remains one of the most resilient misconceptions for students first learning this topic. It seems that a great many students prefer to think of a switch’s drawn status as its status at the present moment (e.g. when the process is running as expected). I believe the heart of this misconception is the meaning of the word “normal,” which to most people’s minds refers to “the way things typically are.”↩︎

  272. In this discussion I am deliberately omitting the detail of deadband for process switches, for the sake of simplicity.↩︎

  273. This curious label is used to describe switch contacts lacking their own built-in power source, as opposed to a switch contained in a device that also provides power to drive the switch circuit. Dry contacts may be mechanical in nature, or they may be electronic (e.g. transistor). By contrast, a “wet” contact is one already connected to an internal power source, ready to drive a load with no other external components needed.↩︎

  274. To be honest, one could use an NPN transistor to source current or a PNP to sink, but it would require the transistor be used in the common-collector configuration which does not allow for saturation. The engineers designing these proximity switches strive for complete saturation of the transistor, in order to achieve minimum “on” resistance, and that requires a common-emitter configuration.↩︎

  275. If the trip setting of a pressure switch is below atmospheric pressure, then it will be “actuated” at atmospheric pressure and in its “normal” status only when the pressure falls below that trip point (i.e. a vacuum).↩︎

  276. “Ferrous” simply means any iron-containing substance.↩︎

  277. The reason for this opposition is rooted in the roles of primary and secondary coils as power load and source, respectively. The voltage across each coil is a function of Faraday’s Law of Electromagnetic Induction: \(V = N{d \phi \over dt}\). However, since the primary coil acts as a load (drawing power from the 120 VAC source) and the secondary coil acts as a source (sending power to the probes), the directions of current through the two coils will be opposite despite their common voltage polarities. The secondary coil’s opposite current direction causes an opposing magnetic force in that section of the core, reducing the magnetic flux there. In a normal power transformer, this reduction in magnetic flux caused by secondary current is also felt by the primary coil (since there is only one magnetic “path” in a power transformer’s core), which then causes the primary coil to draw more current and re-establish the core flux at its original magnitude. With the inductive relay, however, the opposing magnetic force created by the secondary coil simply forces more of the primary coil’s magnetic flux to bypass to the alternate route: through the armature.↩︎

  278. The B/W Controls model 5200 solid-state relay, for example, uses only 8 volts AC at the probe tips.↩︎

  279. If the trip setting of a temperature switch is below ambient temperature, then it will be “actuated” at ambient temperature and in its “normal” status only when the temperature falls below that trip point (i.e. colder than ambient).↩︎

  280. A plug valve is very much like a ball valve, the difference being the shape of the rotating element. Rather than a spherical ball, the plug valve uses a truncated cone as the rotary element, a slot cut through the cone serving as the passageway for fluid. The conical shape of a plug valve’s rotating element allows it to wedge tightly into the “closed” (shut) position for exceptional sealing.↩︎

  281. While it would be technically possible to use water instead of oil in a hydraulic power system, oil enjoys some distinct advantages. First, oil is a lubricating substance and is non-corrosive, unlike water. Second, oil has a wider operating temperature range than water, which tends to both freeze and boil more readily.↩︎

  282. Note also how identical reservoir symbols may be placed at different locations in the diagram even though they represent the exact same reservoir. This is analogous to “ground” symbols in electronic schematic diagrams, every ground symbol representing a common connection to the same zero-potential point.↩︎

  283. Close-coupled hydraulic systems with variable-displacement pumps and/or motors may achieve high efficiency, but they are the exception rather than the rule. One such system I have seen was used to couple a diesel engine to the drive axle of a large commercial truck, using a variable-displacement pump as a continuously-variable transmission to keep the diesel engine in its optimum speed range. The system was so efficient, it did not require a cooler for the hydraulic oil!↩︎

  284. Many kinds of hydraulic oils are flammable, so this is not a perfectly true statement. However, fire-resistant fluids such as Skydrol (introduced to the aviation industry for safety) are commercially available.↩︎

  285. Certain types of plastic pipe such as PVC should never be used in compressed air systems because it becomes brittle and liable to fracture over time. If you are considering the use of plastic for a high-pressure compressed air system, be sure the type of plastic is engineered for air pressure service!↩︎

  286. One could argue that enough fluid pressure could override the solenoid’s energized state as well, so why choose to have the fluid pressure act in the direction of helping the return spring? The answer to this (very good) question is that the solenoid’s energized force greatly exceeds that of the return spring. This is obvious on first inspection, as the solenoid must be stronger than the return spring or else the solenoid valve would never actuate! Furthermore, the solenoid’s force must be significantly stronger than the spring’s, or else the valve would open rather slowly. Fast valve action demands a solenoid force that greatly exceeds spring force. Since the spring is the weaker of the two forces, it makes perfect sense to use the valve in such a way that the process pressure helps the spring: the solenoid’s force has the best chance of overcoming the force on the plug produced by process pressure, so those two forces should be placed in opposition, while the return spring’s force should work with (not against) the process pressure.↩︎

  287. In hydraulics, it is common to use the letter “T” to represent the tank or reservoir return connection rather than the letter “E” for exhaust, which is why the supply and vent lines on this valve are labeled “P” and “T”, respectively.↩︎

  288. The letters “IAS” refer to instrument air supply.↩︎

  289. This solenoid valve arrangement would be designated 1oo2 from the perspective of starting the turbine, since only one out of the two solenoids needs to trip in order to initiate the turbine start-up.↩︎

  290. If you examine this diagram closely, you will notice an error in it: it shows the top and bottom of the piston actuator connected together by air tubing, which if implemented in real life would prevent air pressure from imparting any force to the valve stem at all! Connecting the top and bottom of the actuator together would ensure the piston always sees zero differential pressure, and thus would never develop a resultant force. The output tube of PY-590 should only connect to the bottom of the piston actuator, not to the bottom and the top. A more minor error in this diagram snippet is the labeling of SOV-590A: it actually reads “SOV-59DA” if you look closely enough! My first inclination when sampling this real P&ID for inclusion in the book was to correct the errors, but I think an important lesson may be taught by leaving them in: documentation errors are a realistic challenge you will contend with on the job as an instrumentation professional!↩︎

  291. To view a flip-book animation of this sequence, turn to Appendix [animation_blinking_lights].↩︎

  292. To view a flip-book animation of this same sequence, turn to Appendix [animation_3phase_motor].↩︎

  293. A helpful analogy for this effect is to imagine a sailboat traveling directly downwind, its motive force provided by a sail oriented perpendicular to the direction of travel. It should be obvious that in this configuration the sailboat cannot travel faster than the wind. What is less obvious is the fact that the sailboat can’t even travel as fast as the wind, its top speed in this configuration being slightly less than the wind speed. If the sailboat somehow did manage to travel exactly at the wind’s speed, the sail would go slack because there would be no relative motion between the sail and the wind, and therefore the sail would cease to provide any motive force. Thus, the sailboat must “slip” or “lag” behind the wind speed just enough to fill the sails with enough force to overcome water friction and maintain speed.↩︎

  294. As a vivid illustration of this concept, I once worked at an aluminum foundry where an AC induction motor stator assembly was used to electromagnetically spin molten aluminum inside the mold as it cooled from molten to solid state. Even though aluminum is a non-magnetic material, it was still spun by the stator’s rotating magnetic field due to electromagnetic induction and Lenz’s Law.↩︎

  295. Two magnetic poles in the stator per phase, which is the lowest number possible because each phase naturally produces both a “north” and a “south” pole when energized. In the case of a three-phase induction or synchronous motor, this means a total of six magnetic stator poles.↩︎

  296. Doubling the number of magnetic poles increases the number of AC power cycles required for the rotating magnetic field to complete one full revolution. This effect is not unlike doubling the number of light bulbs in a chaser light array of fixed length, making it seem as though the light sequence is moving slower because there are more bulbs to blink along the same distance.↩︎

  297. As mentioned previously, the rotor can never fully achieve synchronous speed, because if it did there would be zero relative motion between the rotating magnetic field and the rotating rotor, and thus no induction of currents in the rotor bars to create the induced magnetic fields necessary to produce a reaction torque. Thus, the rotor must “slip” behind the speed of the rotating magnetic field in order to produce a torque, which is why the full-load speed of an induction motor is always just a bit slower than the synchronous speed of the rotating magnetic field (e.g. a 4-pole motor with a synchronous speed of 1800 RPM will rotate at approximately 1750 RPM).↩︎
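
The synchronous speed formula \(N = {120 f \over P}\) (RPM, with \(f\) in Hz and \(P\) the total number of poles) makes the footnote’s example easy to check numerically. This short Python sketch is ours, for illustration:

```python
def synchronous_speed_rpm(line_freq_hz, poles):
    """Synchronous speed of the rotating magnetic field: N = 120 f / P."""
    return 120.0 * line_freq_hz / poles

def slip_percent(sync_rpm, rotor_rpm):
    """How far the rotor lags the rotating field, as a percentage."""
    return 100.0 * (sync_rpm - rotor_rpm) / sync_rpm

n_sync = synchronous_speed_rpm(60, 4)       # 1800 RPM for a 4-pole motor
print(n_sync, slip_percent(n_sync, 1750))   # 1800.0  2.78 (percent slip)
```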

  298. In this mode, the machine is called an induction alternator rather than an induction motor.↩︎

  299. Faraday’s Law of Electromagnetic Induction describes the voltage induced in a wire coil of \(N\) turns as proportional to the rate of change of the magnetic flux: \(V = N {d \phi \over dt}\). The greater the difference in speed between the rotor and the rotating magnetic field, the greater \({d \phi \over dt}\), inducing greater voltages in the rotor and thus greater currents in the rotor.↩︎

  300. This principle is not difficult to visualize if you consider the phase sequence as a repeating pattern of letters, such as ABCABCABC. Obviously, the reverse of this sequence would be CBACBACBA, which is nothing more than the original sequence with letters A and C transposed. However, you will find that transposing any two letters of the original sequence transforms it into the opposite order: for example, transposing letters A and B turns the sequence ABCABCABC into BACBACBAC, which is the same order as the sequence CBACBACBA.↩︎
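
This letter-transposition claim is easy to verify mechanically. The short Python sketch below (ours, purely illustrative) swaps every possible pair of phases and confirms the resulting repeating pattern always matches the reversed sequence at some offset:

```python
from itertools import combinations

forward = "ABC"
reverse = forward[::-1]  # "CBA"

def swapped(seq, i, j):
    """Return seq with the characters at positions i and j transposed."""
    s = list(seq)
    s[i], s[j] = s[j], s[i]
    return "".join(s)

# Every single transposition yields a repeating pattern identical
# (after some offset) to the reversed phase sequence:
for i, j in combinations(range(3), 2):
    pattern = swapped(forward, i, j) * 3
    print(swapped(forward, i, j), pattern in reverse * 4)  # always True
```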

  301. I once encountered a washing machine induction motor with an “open” fault in the start winding. When energized, this motor remained still and hummed because it had no second phase to give its magnetic field a rotation. However, if you used your hand to give the motor a spin in either direction, the motor would accelerate to full speed in that direction!↩︎

  302. In this example, the direction of rotation is counter-clockwise. The shaded poles are oriented counter-clockwise of center, which means their delayed magnetic fields create an “appearance” of rotation in that direction: the magnetic field achieves its peak strength first at the pole centers, and then later (delayed) at the shaded poles, as though there were an actual magnet rotating in that direction.↩︎

  303. A convenient source of small shaded-pole motors is your nearest home improvement or hardware store, where they likely sell replacement electric motors for bathroom fans. Of course, you may also find such motors inside of a variety of discarded electric appliances as well. Being rather rugged devices, it is quite common to find the shaded-pole motor inside of an electrical appliance in perfect condition even though other parts of that appliance may have failed with age. In fact, the shaded-pole motor shown in the preceding photograph was salvaged from a “water-pic” electric toothbrush, the motor used to drive a small water pump (which in this case had mechanically failed) delivering water to the head of the toothbrush.↩︎

  304. This is not to say overload heaters cannot fail open, because they can and will under extraordinary circumstances. However, opening like a fuse is not the design function of an overload heater.↩︎

  305. For more complete coverage of protective relays, refer to section 25.7.↩︎

  306. One way to help clarify the function of a protective relay is to envision circuit protection without one. Household and low-current industrial circuit breakers are constructed to have their own internal current-sensing elements (either thermal or magnetic) to force the circuit breaker open automatically when current exceeds a pre-set limit. With protective relays, the circuit breaker instead has a “trip coil” which will cause the breaker to trip when energized. The breaker then relies entirely on the (external) protective relay to tell it when to trip. By relegating the function of event detection to a sophisticated, external relay, the circuit breaker may act much “smarter” in protecting against a wider variety of faults and abnormal conditions than if it relied entirely on its own internal overcurrent-sensing mechanism.↩︎

  307. Potential transformers are also known as voltage transformers, abbreviated VT.↩︎

  308. This bucket was still under construction at the time the photograph was taken. As such, none of the motor leads have been connected, which is why there are no power conductors exiting the bottom of the bucket. Instead, all you see are three terminals ready to accept heavy-gauge motor leads.↩︎

  309. An unfortunately common tendency among novices is to sketch slash marks through relay contact symbols in order to show when they happen to be closed. This is a very bad habit, and should be discouraged at all times! Diagonal lines drawn through a contact symbol are supposed to denote that the contact is normally-closed, not that it is presently closed: they show that a switch contact will be in the closed (conducting) state when it is at rest. What we actually need is a different kind of symbol to show when a contact is closed during any arbitrary condition we may imagine. When someone uses this same symbology to denote a contact that happens to be closed during some condition, it needlessly confuses the concepts of closed versus normally-closed.↩︎

  310. If the diode were connected the other way, it would pass current whenever the proximity switch turned on, shorting past the relay coil and most likely damaging the proximity switch in doing so!↩︎

  311. A “contactor” is nothing more than a very large electromechanical relay, and itself is a form of interposing device. Its purpose is to make and break three-phase AC power to a heavy load (e.g. an electric motor) at the command of a much smaller electrical signal, in this case a 120 volt AC signal sent to the coil of the contactor.↩︎

  312. There are such things as soft PLCs, which consist of special-purpose software running on an ordinary personal computer (PC) with some common operating system. Soft PLCs enjoy the high speed and immense memory capacity of modern personal computers, but do not possess the same ruggedness either in hardware construction or in operating system design. Their applications should be limited to non-critical controls where neither main process production nor safety would be jeopardized by a control system failure.↩︎

  313. I/O “channels” are often referred to as “points” in industry lingo. Thus, a “32-point input card” refers to an input circuit with 32 separate channels through which to receive signals from on/off switches and sensors.↩︎

  314. By “control wire,” I mean the single conductor connecting the I/O card channel to the field device, as opposed to conductors directly common with either the positive or negative lead of the voltage source. If you focus your attention on this one wire, noting the direction of conventional-flow current through it, the task of determining whether a device is sourcing or sinking current becomes much simpler.↩︎

  315. Some modern PLCs such as the Koyo “CLICK” are also discrete-only. Analog I/O and processing is significantly more complex to engineer and more expensive to manufacture than discrete control, and so low-end PLCs are more likely to lack analog capability.↩︎

  316. A “de facto” standard is one arising naturally out of legacy rather than by a premeditated agreement between parties. Modbus and Profibus networks are considered “de facto” standards because those networks were designed, built, and marketed by pioneering firms prior to their acceptance as standards for others to conform to. In Latin, de facto means “from the fact,” which in this case refers to the fact of pre-existence: a standard agreed upon to conform to something already in popular use. By contrast, a standard intentionally agreed upon before its physical realization is a de jure standard (Latin for “from the law”). FOUNDATION Fieldbus is an example of a de jure standard, where a committee arrives at a consensus for a network design and specifications prior to that network being built and marketed by any firm.↩︎

  317. It should be noted that in some situations the programming software will fail to color the contacts properly, especially if their status changes too quickly for the software communication link to keep up, and/or if the bit(s) change state multiple times within one scan of the program. However, for simple programs and situations, this rule holds true and is a great help to beginning programmers as they learn the relationship between real-world conditions and conditions within the PLC’s “virtual” world.↩︎

  318. The electrical wiring shown in this diagram is incomplete, with the “Common” terminal shown unconnected for simplicity’s sake.↩︎

  319. For a PLC program contact, the shading represents virtual “conductivity.” For a PLC program coil, the shading represents a set (1) bit.↩︎

  320. It is worth noting the legitimacy of referencing virtual contacts to output bits (e.g. contact Y5), and not just to input bits. A “virtual contact” inside a PLC program is nothing more than an instruction to the PLC’s processor to read the status of a bit in memory. It matters not whether that bit is associated with a physical input channel, a physical output channel, or some abstract bit in the PLC’s memory. It would, however, be wrong to associate a virtual coil with an input bit, as coil instructions write bit values to memory, and input bits are supposed to be controlled solely by the energization states of their physical input channels.↩︎

  321. The most modern Allen-Bradley PLCs have all but done away with fixed-location I/O addressing, opting instead for tag name based I/O addressing. However, enough legacy Allen-Bradley PLC systems still exist in industry to warrant coverage of these addressing conventions.↩︎

  322. Also called the data table, this map shows the addressing of memory areas reserved for programs entered by the user. Other areas of memory exist within the SLC 500 processor, but these other areas are inaccessible to the technician writing PLC programs.↩︎

  323. This is not to say one cannot specify a particular bit in an otherwise whole word. In fact, this is one of the powerful advantages of Allen-Bradley’s addressing scheme: it gives you the ability to precisely specify portions of data, even if that data is not generally intended to be portioned into smaller pieces!↩︎

  324. Programmers familiar with languages such as C and C++ might refer to an Allen-Bradley “element” as a data structure, each type with a set configuration of words and/or bits.↩︎

  325. Referencing the Allen-Bradley engineering literature, we see that the accumulator word may alternatively be addressed by number rather than by mnemonic, T4:2.2 (word 2 being the accumulator word in the timer data structure), and that the “done” bit may be alternatively addressed as T4:2.0/13 (bit number 13 in word 0 of the timer’s data structure). The mnemonics provided by Allen-Bradley are certainly less confusing than referencing word and bit numbers for particular aspects of a timer’s function!↩︎

  326. Some systems such as the Texas Instruments 505 series used “X” labels to indicate discrete input channels and “Y” labels to indicate discrete output channels (e.g. input X9 and output Y14). This same labeling convention is still used by Koyo in its DirectLogic and “CLICK” PLC models. Siemens continues a similar tradition of I/O addressing by using the letter “I” to indicate discrete inputs and the letter “Q” to indicate discrete outputs (e.g. input channel I0.5 and output Q4.1).↩︎

  327. This particular program and editor is for the Koyo “CLICK” series of micro-PLCs.↩︎

  328. If this were a legacy Allen-Bradley PLC system using absolute addressing, we would be forced to address the three sensor inputs as I:1/0, I:1/1, and I:1/2 (slot 1, channels 0 through 2), and the indicator lamp output as O:2/0 (slot 2, channel 0). If this were a newer Logix5000 Allen-Bradley PLC, the default tag names would be Local:1:I.Data.0, Local:1:I.Data.1, and Local:1:I.Data.2 for the three inputs, and Local:2:O.Data.0 for the output. However, in either system we have the ability to assign symbolic addresses so we have a way to reference the I/O channels without having to rely on these cumbersome labels. The programs shown in this book exclusively use tag names rather than absolute addresses, since this is the more modern programming convention.↩︎

  329. The most likely reason why one out of two flame sensors might not detect the presence of a flame is some form of misalignment or fouling of the flame sensor. In fact, this is a good reason for using a 2-out-of-3 flame detection system rather than a simplex (1-out-of-1) detector scheme: to make the system more tolerant of occasional sensor problems without compromising burner safety.↩︎

  330. The particular input and output channels chosen for this example are completely arbitrary. There is no particular reason to choose input channels 6 and 7, or output channel 2, as I have shown in the wiring diagram. Any available I/O channels will suffice.↩︎

  331. While it is possible to wire the overload contact to one of the PLC’s discrete input channels and then program a virtual overload contact in series with the output coil to stop the motor in the event of a thermal overload, this strategy would rely on the PLC to perform a safety function which is probably better performed by hard-wired circuitry.↩︎

  332. A very common misconception among students first learning PLC Ladder Diagram programming is to always associate contacts with PLC inputs and coils with PLC outputs; given that association, it seems strange to have a contact bear the same label as an output. However, this is a false association. In reality, contacts and coils are read and write instructions, respectively, and thus it is possible to have the PLC read one of its own output bits as a part of some logic function. What would be truly strange is to label a coil with an input bit address or tag name, since the PLC is not electrically capable of setting the real-world energization status of any input channels.↩︎

  333. In an effort to alleviate this confusion, the Allen-Bradley corporation (Rockwell) uses the terms examine if closed (XIC) and examine if open (XIO) to describe “normally open” and “normally closed” virtual contacts, respectively, in their Ladder Diagram programming. The idea here is that a virtual contact drawn as a normally-open symbol will be “examined” (declared “true”) by the PLC’s processor if its corresponding input channel is energized (powered by a real-life contact in the closed state). Conversely, a virtual contact drawn as a normally-closed symbol (with a slash mark through the middle) will be “examined” by the PLC’s processor if its corresponding input channel is de-energized (if the real-life contact sending power to that terminal is in the open state). In my experience, I have found this nomenclature to be even more confusing to students than simply calling these virtual contacts “normally open” and “normally closed” like other PLC manufacturers do. The foundational concept for students to grasp here is that the virtual contact is not a direct representation of the real-life electrical switch contact – rather, it is a read instruction for the bit set by power coming from the real-life electrical switch contact.↩︎

  334. Referred to as “Latch” and “Unlatch” coils by Allen-Bradley.↩︎

  335. This represents the IEC 61131-3 standard, where each variable within an instruction may be “connected” to its own arbitrary tag name. Other programming conventions may differ somewhat. The Allen-Bradley Logix5000 series of controllers is one of those that differs, following a convention reminiscent of structure element addressing in the C programming language: each counter is given a tag name, and variables in each counter are addressed as elements within that structure. For example, a Logix5000 counter instruction might be named parts_count, with the accumulated count value (equivalent to the IEC’s “current value”) addressed as parts_count.ACC (each element within the counter specified as a suffix to the counter’s tag name).↩︎

  336. The “enable out” (ENO) signal on the timer instruction serves to indicate the instruction’s status: it activates when the enable input (EN) activates and de-activates when either the enable input de-activates or the instruction generates an error condition (as determined by the PLC manufacturer’s internal programming). The ENO output signal serves no useful purpose in this particular program, but it is available if there were any need for other rungs of the program to be “aware” of the run-time timer’s status.↩︎

  337. The enable (EN) input signals specified in the IEC 61131-3 programming standard make retentive off-delay timers possible (by de-activating the enable input while maintaining the “IN” input in an inactive state), but bear in mind that most PLC implementations of timers do not have separate EN and IN inputs. This means (for most PLC timer instructions) the only input available to activate the timer is the “IN” input, in which case it is impossible to create a retentive off-delay timer (since such a timer’s elapsed time value would be immediately re-set to zero each time the input re-activates).↩︎

  338. Perhaps two pumps performing the same pumping function, one serving as a backup to the other. Alternating motor control ensures the two motors’ run times are matched as closely as possible.↩︎

  339. The operation of the drum is not unlike that of an old player piano, where a strip of paper punched with holes caused hammers in the piano to automatically strike their respective strings as the strip was moved along at a set speed, thus playing a pre-programmed song.↩︎

  340. Perhaps the most practical way to give production personnel access to these bits without having them learn and use PLC programming software is to program an HMI panel to write to those memory areas of the PLC. This way, the operators may edit the sequence at any time simply by pressing “buttons” on the screen of the HMI panel, and the PLC need not have its program altered in any “hard” way by a technician or engineer.↩︎

  341. In this particular example, the mask value is FFFF hexadecimal, which means all 1’s in a 16-bit field. This mask value tells the sequencer instruction to regard all bits of each B3 word that is read. By contrast, if the mask were set to a value of 000F hexadecimal instead, the sequencer would only pay attention to the four least-significant bits of each B3 word that is read, while ignoring the 12 more-significant bits of each 16-bit word. The mask allows the SQO instruction to write to only selected bits of the destination word, rather than always writing all 16 bits of the indexed word to the destination word.↩︎

  342. An older term for an operator interface panel was the “Man-Machine Interface” or “MMI.” However, this fell out of favor due to its sexist tone.↩︎

  343. If the HMI is based on a personal computer platform (e.g. Rockwell RSView, Wonderware, FIX/Intellution software), it may even be equipped with a hard disk drive for enormous amounts of historical data storage.↩︎

  344. This particular trainer was partially constructed from recycled materials – the wooden platform, light switches, and power cord – to minimize cost.↩︎

  345. Not all industrial measurement and control signals are “live zero” like the 3-15 PSI and 4-20 mA standards. 0 to 10 volts DC is a common “dead zero” signal standard, although far more common in environmental (building heating and cooling) control systems than industrial control systems. I once encountered an old analog control system using \(-10\) volts to +10 volts as its analog signal range, which meant 0 volts represented a 50% signal! A failed signal path in such a system could have been very misleading indeed, as a 50% signal value is not suspicious in the least.↩︎

  346. This is a temperature sensing element consisting of two different metal wires joined together, which generate a small voltage proportional to temperature. The correspondence between junction temperature and DC millivoltage is very well established by scientific testing, and so we may use this principle to sense process temperature.↩︎

  347. We could have just as easily chosen 100 percent for \(x\) and 20 milliamps for \(y\), for it would have yielded the same result of \(b = 4\).↩︎

  348. A common misconception for people learning to apply the slope-intercept formula to linear instrument ranges is that they tend to assume \(b\) will always be equal to the lower-range value (LRV) of the instrument’s output range. For example, given a transmitter with a 4-20 mA output range, the false assumption is that \(b = 4\). This does happen to be true only if the instrument possesses a “dead-zero” input range, but it will not be true for instruments with a live-zero input range, such as in this case where the temperature input range is 50 to 140 degrees.↩︎

  349. The “Source” and “Dest” parameters shown in this instruction box refer to special addresses in the PLC’s memory where the input (ADC count) and output (scaled flowrate) values will be found. You need not concern yourself with the meanings of I:4.2 and N7:15, because these addresses are unimportant to the task of deriving a scaling formula.↩︎
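
Deriving such a scaling formula is a two-point slope-intercept problem. Here is a brief Python sketch using the 50 to 140 degree, 4-20 mA ranges from the preceding footnote; the function is ours, for illustration:

```python
def linear_scale(x, x_range, y_range):
    """Map x from an input range to an output range using y = m*x + b."""
    (x1, x2), (y1, y2) = x_range, y_range
    m = (y2 - y1) / (x2 - x1)   # slope: output span over input span
    b = y1 - m * x1             # intercept: NOT simply the output LRV
    return m * x + b

# 50-140 deg input, 4-20 mA output (a live-zero input range):
m = (20 - 4) / (140 - 50)   # 0.1778 mA per degree
b = 4 - m * 50              # -4.889 mA, not 4 mA
print(m, b)
print(linear_scale(95, (50, 140), (4, 20)))  # mid-range input -> 12.0 mA
```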

  350. Some of my students have referred to such a circuit as a smart load, since it functions as a load but nevertheless exerts control over the circuit current.↩︎

  351. Of course, a 1 ohm resistor would drop 4 mV at 4 mA loop current, and drop 20 mV at 20 mA loop current. These small voltage values necessitate a highly accurate DC voltmeter for field measurement!↩︎

  352. In the following illustrated examples, the transmitter is assumed to be a pressure transmitter with a calibrated range of 0 to 750 inches of water column, 4-20 mA. The controller’s PV (process variable) display is appropriately ranged to display 0 to 750 as well.↩︎

  353. Note the staggered layout of the tube fittings, intended to improve access to each one. Remember that the technician used a 9/16 inch wrench to loosen and tighten the tube fitting nuts, so it was important to have working room between fittings in which to maneuver a wrench.↩︎

  354. The numbers are difficult to see here, because the entire panel has been painted in a thick coat of grey paint. This particular panel was stripped of all pneumatic instruments and outfitted with electronic instruments, so the rows of bulkhead fittings no longer serve any purpose other than to remind us of legacy technology. I must wonder if some day in the future I will include a photograph of an empty terminal strip in another chapter of this book, as I explain how wired “legacy” instruments have all but been replaced by wireless (radio) instruments! Let the ghosts of the past speak to you, dear reader, testifying to the endless march of technological evolution.↩︎

  355. In ISA parlance, this would be a “WT” instrument, “W” signifying weight and “T” signifying transmitter.↩︎

  356. Compressed air is a valuable commodity because much energy is required to compress and distribute high-pressure air. Every pneumatic instrument’s nozzle is essentially a “leak” in the compressed air system, and the combined effect of many operating pneumatic instruments is that the air compressor(s) must continually run to meet demand.↩︎

  357. A more precise way to express gain as a ratio of changes is to use the “derivative” notation of calculus: \(d\hbox{Output} \over d\hbox{Input}\)↩︎

  358. An “order of magnitude” is nothing more than a ten-fold change. Do you want to sound like you’re really smart and impress those around you? Just start comparing ordinary differences in terms of orders of magnitude. “Hey dude, that last snowboarder’s jump was an order of magnitude higher than the one before!” “Whoa, that’s some big air . . .” Just don’t make the mistake of using decibels in the same way (“Whoa dude, that last jump was at least 10 dB higher than the one before!”) – you don’t want people to think you’re a nerd.↩︎

  359. In order for negative feedback to hold the input differential at zero volts, we must also assume the opamp has enough power supply voltage and output current capability to achieve this balance. No amplifier can output more voltage than its power supply gives it, nor can it output more current than its active components can conduct.↩︎

  360. In physics, the word moment refers to the product of force times lever length (the “moment arm”). This is alternatively known as torque. Thus, we could classify this pneumatic mechanism as a torque-balance system, since the two bellows’ forces are converted into torques (about the pivot point) which then cancel even though the forces themselves are unequal.↩︎

  361. An important feature of motion-balance mechanisms is that the bellows function as calibrated spring elements in addition to being force generators. Force-balance systems move so slightly that the spring characteristics of the bellows are irrelevant – not so with motion-balance mechanisms! In fact, some motion-balance mechanisms actually place coil springs inside of brass bellows to more precisely fix the elastic properties of the assembly.↩︎

  362. In my teaching experience, students try hard to find simplistic ways to distinguish force-balance from motion-balance systems. For example, many will try to associate fulcra with force-balance, assuming all motion-balance systems lack pivot points (which is not true!). Another example is to associate pivoting links with motion-balance mechanisms, which is likewise untrue. The problem with these efforts is that they are usually based on analysis of just a few different pneumatic mechanisms, making it easy to over-generalize. The truth of the matter is that a wide variety of pneumatic designs exist, defying easy categorization. My advice to you is the same as my advice to my students: you are going to have to think your way through the analysis of these mechanisms rather than memorize simple rules. Perform “thought experiments” whereby you imagine the effects of an increasing or decreasing input signal and then “see” for yourself whether the mechanism balances force with force or motion with motion, keeping in mind the simplifying assumption of an absolutely constant baffle/nozzle gap.↩︎

  363. This negating action is a hallmark of force-balance systems. When the system has reached a point of equilibrium, the components will have returned to (very nearly) their original positions. With motion-balance systems, this is not the case: one component moves, and then another component moves in response to keep the baffle/nozzle detector at a near-constant gap, but the components definitely do not return to their original positions or orientations.↩︎

  364. A good problem-solving technique to apply here is limiting cases, where we imagine the effects of extreme changes. In this case, we may imagine what would happen if the nozzle were moved all the way to the baffle’s axis, as a limiting case of moving closer to this axis. With the nozzle in this position, no amount of baffle rotation would cause the nozzle to move away, because there is no lateral motion at the axis. Only at some radius away from the axis will there be any tangential motion for the nozzle to detect and back away from, which is why the gain of the mechanism may be altered by changing the nozzle’s location with respect to the baffle’s axis.↩︎

  365. “Ferrous” simply means any substance containing the element iron.↩︎

  366. Recall the mathematical relationship between force, pressure, and area: \(F = PA\). If we desire a greater pressure (\(P\)) to generate the same force (\(F\)) as before, we must decrease the area (\(A\)) upon which that pressure acts.↩︎

  367. It is quite easy to dislodge these small-section, large-diameter O-rings from their respective grooves during re-assembly of the unit. Be very careful when inserting the module back into the housing!↩︎

  368. Having said this, pneumatic instruments can be remarkably rugged devices. I once worked on a field-mounted pneumatic controller attached to the same support structure as a severely cavitating control valve. The vibrations of the control valve transferred to the controller through the support, causing the baffle to hammer repeatedly against the nozzle until the nozzle’s tip had been worn down to a flattened shape. Remarkably, the only indication of this problem was the fact the controller was having some difficulty maintaining setpoint. Other than that, it seemed to operate adequately! I doubt any electronic device would have fared as well, unless completely “potted” in epoxy.↩︎

  369. The technical term for the “speed limit” of any data communications channel is bandwidth, usually expressed as a frequency (in Hertz).↩︎

  370. HART communications occur at a rate of 1200 bits per second, and it is this slow by design: this slow data rate avoids signal reflection problems that would occur in unterminated cables at higher speeds. For more insight into how and why this works, refer to the “transmission lines” section 5.10. An example of a “slow” process variable suitable for HART digital monitoring or control is the temperature of a large building or machine, where the sheer mass of the object makes temperature changes slow by nature, and therefore does not require a fast digital data channel to report that temperature.↩︎

  371. The host system in this case is an Emerson DeltaV DCS, and the device manager software is Emerson AMS.↩︎

  372. This concept is not unlike HART, where audio-tone AC signals are superimposed on DC signal cables, so that digital data may be communicated along with DC signal and power.↩︎

  373. In the early days of personal computers, many microprocessor chips lacked floating-point processing capability. As a result, floating-point calculations had to be implemented in software, with programmed algorithms instructing the microprocessor how to do floating-point arithmetic. Later, floating-point processor chips were added alongside the regular microprocessor to implement these algorithms in hardware rather than emulating them in software, resulting in increased processing speed. After that, these floating-point circuits were simply added to the internal architecture of microprocessor chips as a standard feature. Even now, however, computer programmers understand that floating-point math requires more processor cycles than integer math, and should be avoided in applications where speed is essential and floating-point representation is not. In applications demanding a small microprocessor chip and optimum speed (e.g. embedded systems), fixed-point notation is best for representing numbers containing fractional quantities.↩︎

  374. Note how the place-weights shown for the exponent field do not seem to allow for negative values. There is no negative place-weight in the most significant position as one might expect, to allow negative exponents to be represented. Instead, the IEEE standard specifies a bias value of 127 which is subtracted from the unsigned value of the exponent field. For example, in a single-precision IEEE floating-point number, an exponent field of 11001101 represents a power of 78 (since 11001101 = 205, the exponent’s actual value is 205 \(-\) 127 = 78).↩︎
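
This bit-field arithmetic may be demonstrated with Python’s standard struct module, which lets us view the raw bits of a single-precision float (a sketch for illustration):

```python
import struct

def decode_float32(x):
    """Split a single-precision float into sign, exponent, and fraction."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    sign = bits >> 31
    stored = (bits >> 23) & 0xFF   # biased exponent field
    fraction = bits & 0x7FFFFF
    return sign, stored, stored - 127, fraction

print(decode_float32(1.0))   # (0, 127, 0, 0): stored 127 means a power of 0
print(decode_float32(2.0))   # (0, 128, 1, 0): stored 128 means a power of 1
```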

  375. This motor may be “interlocked” to prevent start-up if certain conditions are not met, thereby automatically prohibiting the operator’s instruction to start.↩︎

  376. It is also possible to “simulate” fractional resolution using an integer number, by having the HMI insert a decimal point in the numerical display. For instance, one could use a 16-bit signed integer having a numerical range of \(-32768\) to +32767 to represent motor temperature by programming the HMI to insert a decimal point between the hundreds’ and the tens’ place. This would give the motor temperature tag a (displayed) numerical range of \(-327.68\) degrees to +327.67 degrees, and a (displayed) resolution of \(\pm\)0.01 degree. This is basically the concept of a fixed-point number, where a fixed decimal point demarcates whole digits (or bits) from fractional digits (or bits).↩︎
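
A minimal sketch of that display technique, assuming two implied decimal places:

```python
def display_fixed_point(raw_int, decimal_places=2):
    """Render a signed integer as if it carried an implied decimal point."""
    return raw_int / 10**decimal_places

print(display_fixed_point(23517))    # 235.17
print(display_fixed_point(-32768))   # -327.68 (most negative displayable)
print(display_fixed_point(32767))    # 327.67  (most positive displayable)
```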

  377. Morse code is an example of a self-compressing code, already optimized in terms of minimum bit count. Fixed-field codes such as Baudot and the more modern ASCII tend to waste bandwidth, and may be “compressed” by removing redundant bits.↩︎

  378. For example, the Baudot code 11101 meant either “Q” or “1” depending on whether the last shift character was “letters” or “figures,” respectively. The code 01010 meant either “R” or “4”. The code 00001 meant either “T” or a “5”. This overloading of codes is equivalent to using the “shift” key on a computer keyboard to switch between numbers and symbols (e.g. “5” versus “%”, or “8” versus “*”). The use of a “shift” key on a keyboard allows single keys on the keyboard to represent multiple characters.↩︎

  379. Including the digital source code for this textbook!↩︎

  380. To illustrate, the first 128 Unicode characters (0000 through 007F hexadecimal) are identical to ASCII’s 128 characters (00 through 7F hexadecimal).↩︎

  381. The origin of this word has to do with the way many ADC circuits are designed, using binary counters. In the tracking ADC design, for instance, an up-down binary counter “tracks” the varying analog input voltage signal. The binary output of this counter is fed to a DAC (digital-to-analog converter) sending an analog voltage to a comparator circuit, comparing the digital counter’s equivalent value to the value of the measured analog input. If one is greater than the other, the up-down counter is instructed to either count up or count down as necessary to equalize the two values. Thus, the up-down counter repeatedly steps up or down as needed to keep pace with the value of that analog voltage, its digital output literally “counting” steps along a fixed scale representing the full analog measurement range of the ADC circuit.↩︎
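
Here is a toy software model of that tracking behavior, purely for illustration (a real tracking ADC does this with a hardware comparator and DAC):

```python
def tracking_adc(samples, counts=4096):
    """Toy model of a tracking ADC's up-down counter.

    Each sample period a comparator result steps the counter up or
    down one count, so the digital value chases the analog input.
    Samples are normalized 0.0 to 1.0 of full scale.
    """
    counter, outputs = 0, []
    for sample in samples:
        target = int(sample * (counts - 1))
        if counter < target:
            counter += 1   # DAC output below input: count up
        elif counter > target:
            counter -= 1   # DAC output above input: count down
        outputs.append(counter)
    return outputs

# The counter slews one count per sample toward a step change:
outs = tracking_adc([0.0] + [0.01] * 60)
print(outs[:5], outs[-1])   # [0, 1, 2, 3, 4] 40
```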

  382. Whether or not the actual ADC will round down depends on how it is designed. Some ADCs round down, others “bobble” equally between the two nearest digital values, and others yet “bobble” proportionately between the two nearest values. No matter how you round in your calculation of count value, you will never be more than 1 count off from the real ADC’s value.↩︎

  383. A less-commonly-used synonym for aliasing is folding.↩︎

  384. A mechanical demonstration of aliasing may be seen by using a stroboscope to “freeze” the motion of a rotating object. If the frequency of a flashing strobe light is set to exactly match the rotational speed of the object (e.g. 30 Hz flashing = 1800 RPM rotation), the object will appear to stand still because your eyes only see the object when it is at the exact same position every flash. This is equivalent to sampling a sinusoidal signal exactly once per cycle: the signal appears to be constant (DC) because the sine wave gets sampled at identical points along its amplitude each time. If the strobe light’s frequency is set slightly slower than the object’s rotational speed, the object will appear to slowly rotate in the forward direction because each successive flash reveals the object to be in a slightly further angle of rotation than it was before. This is equivalent to sampling a sinusoidal signal at a rate slightly slower than the signal’s frequency: the result appears to be a sinusoidal wave, but at a much slower frequency.↩︎
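
The apparent (aliased) frequency may be predicted by “folding” the signal frequency about the nearest integer multiple of the sampling (or flashing) rate. A small Python sketch of the strobe example, with illustrative numbers; this simple scalar model predicts only the apparent rate, not the direction of apparent rotation:

```python
def apparent_frequency(signal_hz, sample_hz):
    """Frequency a sampled (or strobed) periodic signal appears to have.

    The observed frequency folds to the distance between the signal
    frequency and the nearest integer multiple of the sample rate.
    """
    nearest_multiple = round(signal_hz / sample_hz) * sample_hz
    return abs(signal_hz - nearest_multiple)

print(apparent_frequency(30.0, 30.0))   # 0.0: the object appears frozen
print(apparent_frequency(30.0, 29.5))   # 0.5: it appears to slowly rotate
```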

  385. Remember that an ADC has a finite number of “counts” to divide its received analog signal into. A 12-bit ADC, for example, has a count range of 0 to 4095. Used to digitize an analog signal spanning the full range of 0 to 5 VDC, this means each count will be “worth” 1.22 millivolts. This is the minimum amount of signal voltage that a 12-bit, 0-5 VDC converter is able to resolve: the smallest increment of signal it is able to uniquely respond to. 1.22 mV represents 0.037% of 3.3 volts, which means this ADC may “resolve” down to the very respectable fraction 0.037% of the solar panel’s 33 volt range. If we were to use the same ADC range to directly measure the shunt resistor’s voltage drop (0 to 0.54 VDC), however, it would only be able to resolve down to 0.226% of the 0 to 5.4 amp range, which is much poorer resolution.↩︎
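
Reproducing the footnote’s arithmetic in Python (assuming, as the stated percentages imply, a 10:1 divider so the 33 volt panel range appears as a 3.3 volt signal at the ADC):

```python
adc_bits = 12
full_scale_v = 5.0
lsb_v = full_scale_v / 2**adc_bits       # 0.00122 V = 1.22 mV per count

# Panel voltage, divided down so 33 V appears as 3.3 V at the ADC:
print(100 * lsb_v / 3.3)    # ~0.037 percent of the 33 V panel range
# Shunt voltage (0 to 0.54 V) measured directly on the 0-5 V range:
print(100 * lsb_v / 0.54)   # ~0.226 percent of the 0-5.4 A range
```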

  386. The relationship of temperature to \(V_{signal}\) in this sensor circuit will not be precisely linear, especially if \(R_{fixed}\) is not tremendously larger than \(R_{RTD}\).↩︎
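
To see this nonlinearity, compute the divider output \(V_{signal} = V_{source} {R_{RTD} \over R_{fixed} + R_{RTD}}\) for equal steps of RTD resistance. The source voltage and resistor values in this Python sketch are illustrative assumptions:

```python
V_SOURCE = 5.0     # excitation voltage (assumed)
R_FIXED = 1000.0   # series resistor (assumed)

def v_signal(r_rtd):
    """Voltage-divider output for an RTD of resistance r_rtd (ohms)."""
    return V_SOURCE * r_rtd / (R_FIXED + r_rtd)

# Equal 10-ohm steps of RTD resistance yield slightly unequal steps
# of signal voltage, revealing the nonlinearity:
for r in (100.0, 110.0, 120.0, 130.0):
    print(r, round(v_signal(r), 5))
```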

  387. To be fair, there is such a thing as a time-multiplexed analog system for industrial data communication (I’ve actually worked on one such system, used to measure voltages on electrolytic “pots” in the aluminum industry, communicating the voltages across hundreds of individual pots to a central control computer).↩︎

  388. There is, of course, the issue of reliability. Communicating thousands of process data points over a single cable may very well represent a dramatic cost savings in terms of wire, junction boxes, and electrical conduit. However, it also means you will lose all those thousands of data points if that one cable becomes severed! Even with digital technology, there may be reason to under-utilize the bandwidth of a signal cable.↩︎

  389. A common technique for high-speed parallel data communication over short distances (e.g. on a printed circuit board) is differential signaling, where each bit requires its own dedicated pair of conductors. A 16-bit parallel digital signal communicated this way would require 32 conductors between devices!↩︎

  390. I do not expect any reader of this book to have firsthand knowledge of what a “telegraph” is, but I suspect some will have never heard of one until this point. Basically, a telegraph was a primitive electrical communication system stretching between cities using a keyswitch at the transmitting end to transmit on-and-off pulses and a “sounder” to make those pulses audible on the receiving end. Trained human operators worked these systems, one at the transmitting end (encoding English-written messages into a series of pulses) and one at the receiving end (translating those pulses into English letters).↩︎

  391. A test message sent in 1924 between two teletype machines achieved a speed of 1920 characters per minute (32 characters per second), sending the sentence fragments “THE WESTERN ELECTRIC COMPANY”, “FRESHEST EGGS AT BOTTOM MARKET PRICES”, and “SHE IS HIS SISTER”.↩︎

  392. “Asynchronous” refers to the transmitting and receiving devices not having to be in perfect synchronization in order for data transfer to occur. Every industrial data communications standard I have ever seen is asynchronous rather than synchronous. In synchronous serial networks, a common “clock” signal maintains transmitting and receiving devices in a constant state of synchronization, so that data packets do not have to be preceded by “start” bits or followed by “stop” bits. Synchronous data communication networks are therefore more efficient (not having to include “extra” bits in the data stream) but also more complex. Most long-distance, heavy traffic digital networks (such as the “backbone” networks used for the Internet) are synchronous for this reason.↩︎

  393. Later versions of teletype systems employed audio tones instead of discrete electrical pulses so that many different channels of communication could be funneled along one telegraph line, each channel having its own unique audio tone frequency which could be filtered from other channels’ tones.↩︎

  394. This simply refers to the fact that the signal never settles at 0 volts.↩︎

  395. This is most definitely not the case with NRZ encoding. To see the difference for yourself, imagine a continuous string of either “0” or “1” bits transmitted in NRZ encoding: it would be nothing but a straight-line DC signal. In Manchester encoding, it is impossible to have a straight-line DC signal for an indefinite length of time. Manchester signals must oscillate at a minimum frequency equal to the clock speed, thereby guaranteeing all receiving devices the ability to detect that clock speed and thereby synchronize themselves with it.↩︎
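
For illustration, here is a short Python encoding of both schemes, using the IEEE 802.3 Manchester convention in which a “0” is sent as high-then-low and a “1” as low-then-high (the older G. E. Thomas convention is inverted); the function names are ours:

```python
def manchester_encode(bits):
    """Two half-cells per bit: '0' -> high, low; '1' -> low, high.

    Every bit produces a mid-bit transition, so even a run of
    identical bits keeps the signal oscillating at the clock rate.
    """
    half_cells = {"0": [1, 0], "1": [0, 1]}
    out = []
    for b in bits:
        out.extend(half_cells[b])
    return out

def nrz_encode(bits):
    """NRZ holds each bit's level for the whole bit period."""
    return [int(b) for b in bits]

print(nrz_encode("11111111"))          # a flat DC line: no transitions
print(manchester_encode("11111111"))   # oscillates once per bit
```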

  396. It is relatively easy to build an apparatus that makes HART tone signals audible: simply connect a small audio speaker to the low-impedance side of an audio transformer (8 ohms) and then connect the high-impedance side of that transformer (typically 1000 ohms) to the HART signal source through a coupling capacitor (a few microfarads is sufficient). When HART communications are taking place, you can hear the FSK tones reproduced by the speaker, which sound something like the noises made by a fax machine as it communicates over a telephone line.↩︎

  397. This is one of the advantages of Manchester encoding: it is a “self-clocking” signal.↩︎

  398. This is likely why “bit rate” and “baud rate” became intermingled in digital networking parlance: the earliest serial data networks requiring speed configuration were NRZ in nature, where “bps” and “baud” are one and the same.↩︎

  399. For Manchester encoding, “worst-case” is a sequence of identical bit states, such as 111111111111, where the signal must make an extra (down) transition in order to be “ready” for each meaningful (up) transition representing the next “1” state.↩︎

  400. An equivalent program for Microsoft Windows is Hyperterminal. A legacy application, available for both Microsoft Windows and UNIX operating systems, is the serial communications program called kermit.↩︎

  401. This is standard in EIA/TIA-232 communications.↩︎

  402. It should take only a moment or two of reflection to realize that such a parity check cannot detect an even number of corruptions, since flipping the states of any two or any four or any six (or even all eight!) bits will not alter the evenness/oddness of the bit count. So, parity is admittedly an imperfect error-detection scheme. However, it is certainly better than no error detection at all!↩︎
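
A brief Python sketch makes the limitation concrete: flipping any one bit changes the parity, but flipping any two bits restores it:

```python
def even_parity_bit(data_bits):
    """Parity bit chosen to make the total number of 1s even."""
    return sum(data_bits) % 2

word = [0, 1, 1, 0, 1, 0, 0, 0]     # three 1s
p = even_parity_bit(word)           # parity bit = 1

corrupted_one = [1] + word[1:]      # one bit flipped
print(even_parity_bit(corrupted_one) == p)   # False: error detected

corrupted_two = [1, 0] + word[2:]   # two bits flipped
print(even_parity_bit(corrupted_two) == p)   # True: corruption missed!
```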

  403. The “XOFF” code tells the transmitting device to halt its serial data stream to give the receiving device a chance to “catch up.” In data terminal applications, the XOFF command may be issued by pressing the key combination <Ctrl><S>. This will “freeze” the stream of text data sent to the terminal by the host computer. The key combination <Ctrl><Q> sends the “XON” code, enabling the host computer to resume data transmission to the terminal.↩︎

  404. I once encountered this very type of failure on the job, where a copper-to-fiber adapter on a personal computer’s Ethernet port jammed the entire network by constantly spewing a meaningless stream of data. Fortunately, indicator lights on all the channels of the communications equipment clearly showed where the offending device was on the network, allowing us to take it out of service for replacement.↩︎

  405. An additional layer sometimes added to the OSI model is layer 8, representing either the human user of the network system or the physical process interfacing with the network system. If the purpose of this model is to describe all the functioning portions of a communications link in the context of a system used for some practical purpose, layer 8 represents an essential part of that system and should not be ignored.↩︎

  406. If you are thinking the acronym should be “IOS” instead of “ISO,” you are thinking in terms of English. “ISO” is a non-English acronym!↩︎

  407. It should be noted here that some network standards incorporating the name “Modbus” actually do specify lower-level concerns. Modbus Plus is a layer 2 standard, for example.↩︎

  408. The designation of “RS-232” has been used for so many years that it still persists in modern writing and manufacturers’ documentation, despite the official status of the EIA/TIA label. The same is true for EIA/TIA-422 and EIA/TIA-485, which were formerly known as RS-422 and RS-485, respectively.↩︎

  409. “Daisy-chain” networks formed of more than two devices communicating via EIA/TIA-232 signals have been built, but they are rarely encountered, especially in industrial control applications.↩︎

  410. Often (incorrectly) called a “DB-9” connector.↩︎

  411. The way hardware-based flow control works in the EIA/TIA-232 standard involves two lines labeled RTS (“Request To Send”) and CTS (“Clear To Send”) connecting the two devices together on a point-to-point serial network in addition to the TD (“Transmitted Data”) and RD (“Received Data”) and signal ground lines. Like the TD and RD terminals which must be “crossed over” between devices such that the TD of one device connects to the RD of the other device and vice-versa, the RTS and CTS terminals of the two devices must be similarly crossed. The RTS is an output line while the CTS is an input, on both devices. When a device is able to receive data, it activates its RTS output line to request data. A device is not permitted to transmit data on its TD line until it is cleared to send data by an active state on its CTS input line.↩︎

  412. Also known by the unwieldy acronym DCTE (Data Circuit Terminating Equipment). Just think of “DTE” devices as being at the very end (“terminal”) of the line, whereas “DCE” devices are somewhere in the middle, helping to exchange serial data between DTE devices.↩︎

  413. In fact, the concept is not unique to digital systems at all. Try talking to someone using a telephone handset held upside-down, with the speaker near your mouth and the microphone near your ear, and you will immediately understand the necessity of having “transmit” and “receive” channels swapped from one end of a network to the other!↩︎

  414. Once I experimented with the fastest data rate I could “push” an EIA/TIA-232 network to, using a “flat” (untwisted, unshielded pair) cable less than ten feet long, and it was 192 kbps with occasional data corruptions. Park, Mackay, and Wright, in their book Practical Data Communications for Instrumentation and Control, document cable lengths as long as 20 meters at 115 kbps for EIA/TIA-232, and 50 meters (over 150 feet!) at 19.2 kbps: over three times better than the advertised EIA/TIA-232 standard.↩︎

  415. Former labels for EIA/TIA-422 and EIA/TIA-485 were RS-422 and RS-485, respectively. These older labels persist even today, to the extent that some people will not recognize what you are referring to if you say “EIA/TIA-422” or “EIA/TIA-485.”↩︎

  416. 1200 meters is the figure commonly cited in technical literature. However, Park, Mackay, and Wright, in their book Practical Data Communications for Instrumentation and Control, document EIA/TIA-422 and EIA/TIA-485 networks operating with cable lengths up to 5 km (over 16000 feet!) at data rates of 1200 bps. Undoubtedly, such systems were installed with care, using high-quality cable and good wiring practices to minimize cable capacitance and noise.↩︎

  417. In fact, a great many EIA/TIA-485 networks in industry operate “unterminated” with no problems at all.↩︎

  418. For detailed explanation of how and why this is necessary, refer to section 5.10.↩︎

  419. Actually two terminating resistors in parallel, since one will be at each end of the cable! The actual DC biasing network will be more complicated as well if more than one device has its own set of internal bias resistors.↩︎

  420. These very same problems may arise in FOUNDATION Fieldbus networks, for the exact same reason: the cabling is passive (for increased reliability). This makes FOUNDATION Fieldbus instrument systems challenging to properly install for most applications (except in really simple cases where the cable route is straightforward), which in my mind is its single greatest weakness at the time of this writing (2009). I strongly suspect Ethernet’s history will repeat itself in FOUNDATION Fieldbus at some later date: a system of reliable “hub” devices will be introduced so that these problems may be averted, and installations made much simpler.↩︎

  421. There are practical limits as to how many hubs may be “daisy-chained” together in this manner, just as there are practical limits to how long a twisted-pair cable may be (up to 100 meters). If too many hubs are cascaded, the inevitable time delays caused by the process of repeating those electrical impulses will cause problems in the network. Also, I have neglected to specify the use of crossover cables to connect hubs to other hubs – this is a topic to be covered later in this book!↩︎

  422. With only half the available wire pairs used in a standard 10 Mbps or 100 Mbps Ethernet cable, this opens the possibility of routing two Ethernet channels over a single four-pair UTP cable and RJ-45 connector. Although this is non-standard wiring, it may be a useful way to “squeeze” more use out of existing cables in certain applications. In fact, “splitter” devices are sold to allow two RJ-45-tipped cables to be plugged into a single RJ-45 socket such that one four-pair cable will then support two Ethernet pathways.↩︎

  423. This means modern Ethernet is capable of full-duplex communication between two devices, whereas the original coaxial-based Ethernet was only capable of half-duplex communication.↩︎

  424. Even the cost difference is negligible. It should be noted, though, that switches may exhibit unintended behavior if a cable is unplugged from one of the ports and re-plugged into a different port. Since switches internally map ports to device addresses, swapping a device from one port to another will “confuse” the switch until it re-initializes the port identities. Re-initialization may be forced by cycling power to the switch, if the switch does not do so on its own.↩︎

  425. When packets travel between different kinds of networks, the “gateway” devices at those transition points may need to fragment large IP packets into smaller IP packets and then re-assemble those fragments at the other end. This fragmentation and reassembly is a function of Internet Protocol, but it happens at the packet level. The task of portioning a large data block into packet-sized pieces at the very start and then reassembling those packets into a facsimile of the original data at the very end, however, is beyond the scope of IP.↩︎

  426. In fact, this is precisely the state of affairs if you use a dial-up telephone connection to link your personal computer with the Internet. If you use dial-up, your PC may not use Ethernet at all to make the connection to your telephone provider’s network, but rather it might use EIA/TIA-232 or USB to a modem (modulator/demodulator) device, which turns those bits into modulated waveforms transmittable over a voice-quality analog telephone line.↩︎

  427. The “ping” command is technically defined as an “Echo Request” command, which is part of the Internet Control Message Protocol (ICMP) suite.↩︎

  428. Prior to ICANN’s formation in 1999, the Internet Assigned Numbers Authority, or IANA, was responsible for these functions. This effort was headed by a man named Jon Postel, who died in 1998.↩︎

  429. The term “loopback” refers to an old trick used by network technicians to diagnose suspect serial port connections on a computer. Using a short piece of copper wire (or even a paperclip) to “jumper” pins 2 and 3 on an EIA/TIA-232 serial port, any serial data transmitted (out of pin 3) would be immediately received (in pin 2), allowing the serial data to “loop back” to the computer where it could be read. This simple test, if passed, would prove the computer’s low-level communication software and hardware was working properly and that any networking problems must lie elsewhere.↩︎

  430. Also called “netmasks” or simply “masks.”↩︎

  431. These are real test cases I performed between two computers connected on a 10 Mbps Ethernet network. The error messages are those generated by the ping utility when communication was attempted between mis-matched computers.↩︎

  432. According to Douglas Giancoli’s Physics for Scientists and Engineers textbook, the mass of the Earth is \(5.98 \times 10^{24}\) kg, or \(5.98 \times 10^{27}\) grams. Dividing \(2^{128}\) (the number of unique IPv6 addresses) by the Earth’s mass in grams yields the number of available IPv6 addresses per gram of Earth mass. Furthermore, if we assume a grain of sand has a mass of about 1 milligram, and that the Earth is modeled as a very large collection of sand grains (not quite the truth, but good enough for a dramatic illustration!), we arrive at 57 million IPv6 addresses per grain of sand on Earth.↩︎
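
  The arithmetic behind this illustration, worked briefly: \(2^{128} \approx 3.40 \times 10^{38}\) total addresses, and \({3.40 \times 10^{38} \over 5.98 \times 10^{27}} \approx 5.7 \times 10^{10}\) addresses per gram of Earth mass. At 1 milligram (\(10^{-3}\) grams) per grain of sand, this comes to \(5.7 \times 10^{7}\), or about 57 million, addresses per grain.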

  433. The fully-written loopback address is actually 0000:0000:0000:0000:0000:0000:0000:0001.↩︎

  434. While it is possible to use non-contiguous subnet mask values, the practice is frowned upon by most system administrators.↩︎

  435. Indeed, subnet masks for IPv4 can be specified in this manner as well, not just IPv6 subnet masks.↩︎

  436. The “ping” command is often used to test the response of a single IP node on a network, by issuing the command followed by the IP address of interest (e.g. ping 192.168.35.70). By contrast, a “broadcast” ping request attempts to contact a range of IP addresses within a subnet. For example, if we wished to ping all the IP addresses beginning with 192.168.35, we would issue the command with all 1’s in the last octet of the IP address field (e.g. ping 192.168.35.255).↩︎

  437. In UNIX-based operating systems the program used to access the command line is often called terminal or xterm. In Microsoft Windows systems it is simply called cmd.↩︎

  438. Both IPv4 and IPv6 reserve eight bits for this purpose.↩︎

  439. In this particular case, I typed netstat -an to specify all (a) ports with numerical (n) IP addresses and port numbers shown.↩︎

  440. A Device Description (DD) file is analogous to a “driver” file used to instruct a personal computer how to communicate with a printer, scanner, or any other complex peripheral device. In this case, the file instructs the HART configuration computer on how it should access parameters inside the field instrument’s microcontroller. Without an appropriate DD file loaded on the configuration computer, many of the field instrument’s parameters may be inaccessible.↩︎

  441. A “DD” file, or Device Descriptor file, is akin to a driver file used in a personal computer to allow it to communicate data with some peripheral device such as a printer. DD files basically tell the HART communicator how it needs to access specific data points within the HART field instrument.↩︎

  442. Every byte (8 bits) of actual HART data is sent as an asynchronous serial frame with a start bit, parity bit, and stop bit, so that 11 bits’ worth of time are necessary to communicate 8 bits of real data. These “byte frames” are then packaged into larger message units called HART telegrams (similar to Ethernet data frames) which include bits for synchronizing receiving devices, specifying device addresses, specifying the length of the data payload, communicating device status, etc.↩︎
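
  To put this framing overhead in perspective, assuming HART’s standard signaling rate of 1200 bits per second (a figure not stated in this footnote, but standard for HART’s FSK modulation): only \({8 \over 11} \times 1200 \approx 870\) bits per second of that rate carries actual data, even before the additional telegram bits for synchronization, addressing, and status are accounted for.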

  443. The HART standard specifies “master” devices in a HART network transmit AC voltage signals, while “slave” devices transmit AC current signals.↩︎

  444. Truth be told, HART instruments configured to operate in burst mode are still able to respond to queries from a master device, just not as often. Between bursts, the HART slave device waits a short time to allow any master devices on the network to poll. When polled, the slave device will respond as it normally would, then resume its bursts of unpolled data once again. This means that normal master/slave communication with a HART instrument set for burst mode will occur at a slower pace than if the instrument is set for normal mode.↩︎

  445. These Modbus data frames may be communicated directly in serial form, or “wrapped” in TCP segments and IP packets and Ethernet frames, or otherwise contained in any form of packet-based protocol as needed to transport the data from one device to another. Thus, Modbus does not “care” how the data is communicated, just what the data means for the end-device.↩︎

  446. Recall that each ASCII character requires 7 bits to encode. This impacts nearly every portion of the Modbus data frame. Slave address and function code portions, for example, require 14 bits each in ASCII but only 8 bits each in RTU. The data portion of a Modbus ASCII frame requires one ASCII character (7 bits) to represent each hexadecimal symbol that in turn represents just 4 bits of actual data. The data portion of a Modbus RTU frame, by contrast, codes the data bits directly (i.e. 8 bits of data appear as 8 bits within that portion of the frame). Additionally, RTU data frames use quiet periods (pauses) as delimiters, while ASCII data frames use three ASCII characters in total to mark the start and stop of each frame, at a “cost” of 21 additional bits. These additional delimiting bits do serve a practical purpose, though: they format each Modbus ASCII data frame as its own line on the screen of a terminal program.↩︎
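
  As a worked illustration of this efficiency difference (my own example, not drawn from the Modbus standard): conveying two bytes (16 bits) of data requires four hexadecimal characters in ASCII mode, or \(4 \times 7 = 28\) bits, versus just 16 bits in RTU mode, and this is before any address, function code, checksum, or delimiting bits are added.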

  447. This C-language code is typed and saved as a plain-text file on the computer, and then a compiler program is run to convert this “source” code into an “executable” file that the computer may then run. The compiler I use on my Linux-based systems is gcc (the GNU C Compiler). If I save my Modbus program source code to a file named tony_modbus.c, then the command-line instruction I will need to issue to my computer instructing GCC to compile this source code will be gcc tony_modbus.c -lmodbus. The argument -lmodbus tells GCC to “link” my code to the code of the pre-installed libmodbus library in order to compile a working executable file. By default, GCC outputs the executable as a file named a.out. If I wish to rename the executable something more meaningful, I may either do so manually after compilation, or invoke the “outfile” option of gcc and specify the desired executable filename (e.g. gcc -o tony.exe tony_modbus.c -lmodbus). Once compiled, the executable file may be run and the results of the Modbus query viewed on the computer’s display.↩︎
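
  For readers curious what such a source file might contain, the following is a minimal sketch of a libmodbus program using the library’s TCP interface. The IP address, port, and register addresses here are hypothetical placeholders, and a serial Modbus/RTU connection would use modbus_new_rtu instead of modbus_new_tcp:

      #include <stdio.h>
      #include <stdint.h>
      #include <errno.h>
      #include <modbus.h>   /* header provided by the libmodbus library */

      int main(void)
      {
          uint16_t registers[5];   /* buffer for five 16-bit holding registers */
          modbus_t *ctx;
          int rc, i;

          /* Create a Modbus/TCP context aimed at a hypothetical slave device */
          ctx = modbus_new_tcp("192.168.1.10", 502);
          if (ctx == NULL || modbus_connect(ctx) == -1) {
              fprintf(stderr, "Connection failed: %s\n", modbus_strerror(errno));
              modbus_free(ctx);
              return 1;
          }

          /* Read five holding registers, starting at register address 0 */
          rc = modbus_read_registers(ctx, 0, 5, registers);
          if (rc == -1)
              fprintf(stderr, "Read failed: %s\n", modbus_strerror(errno));
          else
              for (i = 0; i < rc; i++)
                  printf("Register %d = %d\n", i, registers[i]);

          modbus_close(ctx);
          modbus_free(ctx);
          return 0;
      }

  Saved as tony_modbus.c, this compiles exactly as described above: gcc tony_modbus.c -lmodbus.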

  448. Even for devices where the register size is less than two bytes (e.g. Modicon M84 and 484 model controllers have 10 bits within each register), data is still addressed as two bytes’ worth per register, with the leading bits simply set to zero to act as placeholders.↩︎

  449. Each FF terminator resistor is actually a series resistor/capacitor network. The capacitor blocks direct current, so that the 100 \(\Omega\) resistor does not impose a DC load on the system. The substantial current that would be drawn by a 100 ohm resistor across a 24 VDC source if not blocked by a series capacitor (24 V / 100 ohms = 240 mA) would not only waste power (nearly 6 watts per resistor!) but would also cause an unnecessary degradation of supply voltage at the field device terminals due to voltage drop along the length of the segment cable’s conductors.↩︎
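
  The power figure cited here follows directly from Joule’s law: \(P = {V^2 \over R} = {(24 \hbox{ V})^2 \over 100 \ \Omega} = 5.76\) watts per resistor, were the capacitor not present to block DC.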

  450. Be sure to check the specifications of the host system H1 interface card, because many are equipped with internal terminating resistors given the expectation that the host system will connect to one far end of the trunk!↩︎

  451. You should consult an NEC code book regarding specific limitations of ITC wiring. Some of the main points include limiting individual ITC cable lengths to a maximum of 50 feet, and mechanically securing the cable at intervals not to exceed 6 feet.↩︎

  452. Provided the metal enclosure’s door is left in the closed position at all times! Keying a radio transmitter near such a coupling device while the enclosure door is open invites trouble.↩︎

  453. Perusing documentation on an assortment of Emerson/Rosemount FF products, I found the following data: model 752 indicator = 17.5 mA, model 848L logic = 22 mA, model 848T temperature = 22 mA maximum, model 3244MV temperature = 17.5 mA typical, model DVC6000f valve positioner = 18 mA maximum, model 5500 guided-wave radar level = 21 mA, model 3095MV flow (differential pressure) = 17 mA approximate.↩︎

  454. I have successfully built several “demonstration” FF systems using cables of questionable quality, including lamp (“zip”) cord, with no termination resistors whatsoever! If the distances involved are short, just about any cable type or condition will suffice. When planning any real Fieldbus installation, however, you should never attempt to save money by purchasing lesser-grade cable. The problems you will likely encounter as a consequence of using sub-standard cable will more than offset the initial cost saved by its purchase.↩︎

  455. Total device current draw, spur length versus number, intrinsic safety voltage and current limitations, etc.↩︎

  456. At the time of this writing (2009), the ISA has yet to standardize new methods of FF documentation in the style of loop sheets and P&IDs. This is one of those circumstances where technology has outpaced convention.↩︎

  457. While many industrial control systems have been built using networks that are not strictly deterministic (e.g. Ethernet), generally good control behavior will result if the network latency time is arbitrarily short. Lack of “hard” determinism is more of a problem in safety shutdown systems where the system must respond within a certain amount of time in order to be effective in its safety function. An industrial example of a safety system requiring “hard” determinism is compressor surge control. An automotive example requiring “hard” determinism is anti-lock brake control.↩︎

  458. By “sequencing,” I mean the execution of all antecedent control functions prior to “downstream” functions requiring the processed data. If in a chain of function blocks we have some blocks lagging in their execution, other blocks relying on the output signals of those lagging blocks will be functioning on “old” data. This effectively adds dead time to the control system as a whole. The more antecedent blocks in the chain that lag in time behind the needs of their consequent blocks, the more dead time will be present in the entire system. To illustrate, if block A feeds data into block B which feeds data into block C, but the blocks are executed in reverse order (C, then B, then A) on the same period, a lag time of three whole execution periods will be manifest by the A-B-C algorithm.↩︎

  459. The engineers there are not without a sense of humor, choosing for their manufacturer code the same model number as the venerable model 1151 differential pressure transmitter, perhaps the most popular Rosemount industrial instrument in the company’s history!↩︎

  460. In addition to the main LAS, there may be “backup” LAS devices standing ready to take over in the event the main LAS fails for any reason. These are Link Master devices configured to act as redundant Link Active Schedulers should the need arise. However, at any given time there will be only one LAS. In the event of an LAS device failure, the Link Master device with the lowest-number address will “step up” to become the new LAS.↩︎

  461. The Source/Sink VCR is the preferred method for communicating trend data, but trends may be communicated via any of the three VCR types. All other factors being equal, acyclic communication (either Source/Sink or Client/Server) of trend data occupies less network bandwidth than cyclic communication (Publisher/Subscriber).↩︎

  462. Some FF devices capable of performing advanced function block algorithms for certain process control schemes may have the raw computational power to be an LAS, but the manufacturer has decided not to make them Link Master capable simply to allow their computational power to be devoted to the function block processing rather than split between function block tasks and LAS tasks.↩︎

  463. “Reset windup,” also known as “integral windup,” is what happens when any loop controller possessing reset (integral) action senses a difference between PV and SP that it cannot eliminate. The reset action over time will drive the controller’s output to saturation. If the source of the problem is a control valve that cannot attain the desired position, the controller will “wind up” or “wind down” in a futile attempt to drive the valve to a position it cannot go. In an FF system where the final control element provides “back calculation” feedback to the PID algorithm, the controller will not attempt to drive the valve farther than it is able to respond.↩︎

  464. This is not an unreasonable loop execution time for a gas pressure control system. However, liquid pressure control is notoriously fast-acting, and will experience less than ideal response with a controller dead time of one second.↩︎

  465. For example, sub-statuses for a “Bad” status include out of service, device failure, sensor failure, and non-specific. Sub-statuses for an “Uncertain” status include last usable value (LUV), sensor conversion not accurate, engineering unit range violation, sub-normal, and non-specific.↩︎

  466. The great pioneer of mechanical computing technology, Charles Babbage, commented in his book Passages from the Life of a Philosopher in 1864 that not one but two members of the British parliament asked him whether his computer (which he called the Difference Engine) could output correct answers given incorrect data. His reaction was both frank and hilarious: “I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.”↩︎

  467. One of the tasks of the Fieldbus Foundation is to maintain approved listings of FF devices in current manufacture. The concept is that whenever a manufacturer introduces a new FF device, it must be approved by the Fieldbus Foundation in order to receive the Fieldbus “badge” (a logo with a stylized letter “F”). Approved devices are cataloged by the Fieldbus Foundation, complete with their DD file sets. This process of approval is necessary for operational compatibility (called interoperability) between FF devices of different manufacture. Without some form of centralized standardization and approval, different manufacturers would invariably produce devices mutually incompatible with each other.↩︎

  468. On the Emerson DeltaV system, most options are available as drop-down menu selections following a right-mouse-button click on the appropriate icon.↩︎

  469. Animated graphics on the Emerson DeltaV control system prominently feature an anthropomorphized globe valve named Duncan. There’s nothing like a computer programmer with a sense of humor . . .↩︎

  470. Fieldbus transmitters often have multiple channels of measurement data to select from. For example, the multi-variable Rosemount 3095MV transmitter assigns channel 1 as differential pressure, channel 2 as static pressure, channel 3 as process temperature, channel 4 as sensor temperature, and channel 5 as calculated mass flow. Setting the Channel parameter properly in the AI block is therefore critical for linking it to the proper measurement variable.↩︎

  471. If I were king for a day, I would change the labels “direct” and “indirect” to “raw” and “scaled”, respectively. Alternatively, I would abandon the “direct” option altogether, because even when this option is chosen the OUT_Scale range still exists and may contain “scaled” values even though these are ignored in “direct” mode!↩︎

  472. It is important to note that you must correctly calculate the corresponding XD_Scale and OUT_Scale parameter values in order for this to work. The Fieldbus instrument does not calculate the parameters for you, because it does not “know” how many PSI correspond to how many feet of liquid level in the tank. These values must be calculated by some knowledgeable human technician or engineer and then entered into the instrument’s AI block, after which the instrument will execute the specified scaling as a purely mathematical function.↩︎

  473. When configuring the XD_Scale high and low range values, be sure to maintain consistency with the transducer block’s Primary_Value_Range parameter unit. Errors may result from mis-matched measurement units between the transducer block’s measurement channel and the analog input block’s XD_Scale parameter.↩︎

  474. An alternative method of shield grounding is to directly connect it to earth ground at one end, and then capacitively couple it to ground at other points along the segment length. The capacitor(s) provide an AC path to ground for “bleeding off” any induced AC noise without providing a DC path which would cause a ground loop.↩︎

  475. Bear in mind the tolerable level for noise will vary with signal voltage level as well. All other factors being equal, a strong signal is less affected by the presence of noise than a weak signal (i.e. the signal-to-noise ratio, or SNR, is crucial).↩︎

  476. It is impossible to “lock in” (trigger) non-periodic waveforms on an analog oscilloscope, and so most network communications will appear as an incomprehensible blur when viewed on this kind of test instrument. Digital oscilloscopes have the ability to “capture” and display momentary pulse streams, making it possible to “freeze” any portion of a network signal for visual analysis.↩︎

  477. For a more detailed discussion of antennas and their electrical characteristics, refer to section 5.11 beginning on page .↩︎

  478. Due to the “end effect” of lumped capacitance at the tip of the antenna, a real quarter-wave antenna needs to be slightly shorter than an exact quarter of the wavelength. This holds true for dipoles and other antenna designs as well.↩︎

  479. It is interesting to note that although the “Bel” is a metric unit, it is seldom if ever used without the metric prefix “deci” (\(1 \over 10\)). One could express powers in microbels, megabels, or any other metric prefix desired, but it is never done in industry: only the decibel is used.↩︎

  480. The dominant mode of energy dissipation in an RF cable is dielectric heating, where the AC electric field between the cable conductors excites the molecules of the conductor insulation. This energy loss manifests as heat, which explains why there is less RF energy present at the load end of the cable than is input at the source end of the cable.↩︎

  481. In fact, logarithms are one of the simplest examples of a transform function, converting one type of mathematical problem into another type. Other examples of mathematical transform functions used in engineering include the Fourier transform (converting a time-domain function into a frequency-domain function) and the Laplace transform (converting a differential equation into an algebraic equation).↩︎

  482. This is precisely how a microwave oven works: water molecules are polar (that is to say, the electrical charges of the hydrogen and oxygen atoms are not symmetrical, and therefore each water molecule has one side that is more positive and an opposite side that is more negative), and therefore vibrate when subjected to electromagnetic fields. In a microwave oven, RF energy in the gigahertz frequency range is aimed at pieces of food, causing the water molecules within the food to heat up, thus indirectly heating the rest of the food. This is a practical example of an RF system where losses are not only expected, but are actually a design objective! The food represents a load to the RF energy, the goal being complete dissipation of all incident RF energy with no leakage outside the oven. In RF cable design, however, dissipative power losses are something to be avoided, the goal being complete delivery of RF power to the far end of the cable.↩︎

  483. One should not think that the outer edges of the shaded radiation patterns represents some “hard” boundary beyond which no radiation is emitted (or detected). In reality, the radiation patterns extend out to infinity (assuming otherwise empty space surrounding the antenna). Instead, the size of each shaded area simply represents how effective the antenna is in that direction compared to other directions. In the case of the vertical whip and dipole antennas, for instance, the radiation patterns show us that these antennas have zero effectiveness along the vertical (\(Y\)) axis centerline. To express this in anthropomorphic terms, these antenna designs are “deaf and mute” in those directions where the radiation pattern is sketched having zero radius from the antenna center.↩︎

  484. Or – applying the principle of reciprocity – antenna gain is really nothing more than a way to express how sensitive a receiving antenna is compared to a truly omnidirectional antenna.↩︎

  485. Actual signal power is typically expressed as a decibel ratio to a reference power of either 1 milliwatt (dBm) or 1 watt (dBW). Thus, 250 mW of RF power may be expressed as \(10 \log {250 \over 1}\) = 23.98 dBm or as \(10 \log {0.25 \over 1}\) = \(-6.02\) dBW. Power expressed in units of dBm will always be 30 dB greater (\(1 \times 10^3\) greater) than power expressed in dBW.↩︎

  486. Noise power may be calculated using the formula \(P_n = kTB\), where \(P_n\) is the noise power in watts, \(k\) is Boltzmann’s constant (\(1.38 \times 10^{-23}\) J/K), \(T\) is the absolute temperature in Kelvin, and \(B\) is the bandwidth of the noise in Hertz. Noise power is usually expressed in units of dBm rather than watts, because typical noise power values for ambient temperatures on Earth are so incredibly small.↩︎
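
  As a worked example using typical terrestrial values: at an ambient temperature of 290 K and a bandwidth of 1 Hz, \(P_n = (1.38 \times 10^{-23})(290)(1) = 4.00 \times 10^{-21}\) watts, which is \(10 \log {4.00 \times 10^{-21} \over 10^{-3}} \approx -174\) dBm: the familiar “thermal noise floor” figure quoted in radio engineering.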

  487. The inverse square law applies to any form of radiation that spreads from a point-source. In any such scenario, the intensity of the radiation received by an object from the point-source diminishes with the square of the distance from that source, simply because the rest of the radiated energy misses that target and goes elsewhere in space. This is why the path loss formula begins with a \(-20\) multiplier rather than \(-10\) as is customary for decibel calculations: given the fact that the inverse square law tells us path loss is proportional to the square of distance (\(D^2\)), there is a “hidden” second power in the formula. Following the logarithmic identity that exponents may be moved to the front of the logarithm function as multipliers, this means what would normally be a \(-10\) multiplier turns into \(-20\) and we are left with \(D\) rather than \(D^2\) in the fraction.↩︎
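
  For reference, one common way of writing the free-space path loss consistent with this explanation is \(L_p = -20 \log \left({4 \pi D \over \lambda}\right)\), where \(D\) is the distance between antennas and \(\lambda\) is the wavelength, both in the same units of length. Note how the \(-20\) multiplier and the first power of \(D\) appear just as described above. (This standard form is offered here only for reference; the formula in the body of this text may differ in its choice of units.)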

  488. “Margin” is the professionally accepted term to express extra allowance provided to compensate for unknowns. A more colorful phrase often used in the field to describe the same thing is fudge factor.↩︎

  489. I am indebted to Eric McCollum, Kei Hao, Shankar V. Achanta, Jeremy Blair, and David Kechalo for presenting this form of diagram in a technical paper presented at the 45th annual Western Protective Relay Conference in Spokane, Washington in October of 2018. I do not know if these authors are responsible for the invention of this form of graph, but it was certainly the first time I encountered one like it, and it so clearly showed all the fundamental quantities of an RF link budget that I had to include something similar in my book!↩︎

  490. The physics of Fresnel zones is highly non-intuitive, rooted in the wave-nature of electromagnetic radiation. It should be plain to see, though, that Fresnel zones cannot describe the actual electromagnetic field pattern between two antennas, because we know waves tend to spread out over space while Fresnel zones converge at each end. Likewise, Fresnel zones vary in size according to the distance between two antennas, which we know radiation field patterns do not. It is more accurate to think of Fresnel zones as keep-clear areas necessary for reliable communication between two or more antennas rather than actual field patterns.↩︎

  491. Some obvious connecting paths between field devices have been omitted from this illustration if the path length exceeds a certain maximum distance. As you can see, the instruments in the far-left cluster must rely on data packet relaying by instruments closer to the gateway, since they themselves are too far away from the gateway to directly communicate.↩︎

  492. Another exciting technological development paralleling the implementation of WirelessHART in industry is that of energy-harvesting devices to generate DC electricity from nearby energy sources such as vibrating machines (mechanical motion), hot pipes (thermal differences), photovoltaic (solar) panels, and even small wind generators. Combined with rechargeable batteries to sustain instrument operation during times those energy sources are not producing, energy-harvesters promise great extension of battery life for wireless instruments of all types.↩︎

  493. The model 1420 gateway has been superseded by the Smart Wireless Gateway, also manufactured by Emerson.↩︎

  494. Device variables are addressed at the network gateway level by the device’s HART tag (long tag, not short tag) and internal device variable name. Thus, the primary variable (PV) of temperature transmitter TEMP2 is specified as TEMP2.PV using a period symbol (.) as the delimiting character between the device name and the internal variable name.↩︎

  495. This is an example of a first-generation Rosemount WirelessHART field instrument, back when the standard radio band was 900 MHz instead of 2.4 GHz. This explains why the antenna is longer than contemporary WirelessHART instruments.↩︎

  496. Each gateway device can of course have backup gateways with the same Network ID, just waiting to take over if the primary gateway fails. The point of the Network ID is that it identifies a single network with only one active gateway.↩︎

  497. However, it is actually quite rare to find an instrument where a change to the zero adjustment affects the instrument’s span.↩︎

  498. Various digital damping algorithms exist, but it may take as simple a form as successive averaging of buffered signal values coming out of a first-in-first-out (“FIFO”) shift register.↩︎
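
  A moving-average filter of this sort is simple enough to sketch in a few lines of C. The buffer depth of eight samples is chosen arbitrarily here, and this illustrates only the general concept, not any particular manufacturer’s algorithm:

      #define N 8                  /* depth of the FIFO buffer */

      static double fifo[N];       /* buffered signal values (initially zero) */
      static int    next = 0;      /* index of the oldest (next-overwritten) value */

      /* Push a new raw sample into the FIFO and return the average of the
         last N samples: a simple moving-average damping filter.  Outputs
         ramp up from zero until the buffer fills with real samples. */
      double damped(double raw)
      {
          double sum = 0.0;
          int i;

          fifo[next] = raw;        /* overwrite the oldest buffered value */
          next = (next + 1) % N;

          for (i = 0; i < N; i++)
              sum += fifo[i];
          return sum / N;
      }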

  499. Most popularly, using the HART digital-over-analog hybrid communication standard.↩︎

  500. Although such adjustments made on a digital transmitter tend to be easier to perform than repeated zero-and-span adjustments on analog transmitters, due to the inevitable “interaction” between analog zero and span adjustments which requires repeated checking and re-adjustment during the calibration period.↩︎

  501. A 4% calibration error caused by sensor aging is enormous for any modern digital transmitter, and should be understood as an exaggeration presented only for the sake of illustrating how sensor error affects overall calibration in a smart transmitter. A more realistic amount of sensor error due to aging would be expressed in small fractions of a percent.↩︎

  502. HART is a hybrid analog/digital communication protocol used by a great many field instruments, allowing maintenance personnel to access and edit digital parameters inside the instrument using a computer-based interface. Hand-held HART communicators exist for this purpose, as does HART software designed to run on a personal computer. HART modems also exist to connect personal computers to HART-compatible field instruments.↩︎

  503. The NIST broadcasts audio transmissions of “Coordinated Universal Time” (UTC) on the shortwave radio frequencies 5 MHz, 10 MHz, 15 MHz, 20 MHz, and 25 MHz. Announcements of time, in English, occur at the top of every minute.↩︎

  504. In the case of pressure transmitters, re-trimming may be necessary if the device is ever re-mounted in a different orientation. Changing the physical orientation of a pressure transmitter alters the direction in which gravity tugs on the sensing element, causing it to respond as though a constant bias pressure were applied to it. This bias is often on the order of an inch of water column (or less), and usually consequential only for low-pressure applications such as furnace draft pressure.↩︎

  505. A noteworthy exception is the case of digital instruments, which output digital rather than analog signals. In this case, there is no need to compare the digital output signal against a standard, as digital numbers are not liable to calibration drift. However, the calibration of a digital instrument still requires comparison against a trusted standard in order to validate an analog quantity. For example, a digital pressure transmitter must still have its input calibration values validated by a pressure standard, even if the transmitter’s digital output signal cannot drift or be misinterpreted.↩︎

  506. Modern “smart” electronic pressure transmitters typically boast turndown ratios exceeding 100:1, with some having turndown ratios of 200:1 or more! Large turndown ratios are good because they allow users of instrumentation to maintain a smaller quantity of new transmitters in stock, since transmitters with large turndown ratios are more versatile (i.e. applicable to a wider variety of spans) than transmitters with small turndown ratios.↩︎

  507. According to Emerson product datasheet PS-00374, revision L, June 2009.↩︎

  508. According to the book Philosophy in Practice (second edition) published by Fluke, the initial expense of their Josephson Array in 1992 was $85000, with another $25000 budgeted for start-up costs. The annual operating cost of the array is approximately $10000, mostly due to the cost of the liquid helium refrigerant necessary to keep the Josephson junction array at a superconducting temperature. This consumable cost does not include the salary of the personnel needed to maintain the system, either. Presumably, a metrology lab of this caliber would employ several engineers and scientists to maintain all standards in top condition and to perform continuing metrological research.↩︎

  509. This brings to mind a good joke. Once there was a man who walked by an antique store every day on his way to work and noticed all the wall clocks on display at this store always perfectly matched in time. One day he happened to see the store owner and complimented him on the consistent accuracy of his display clocks, noting how he used the owner’s clocks as a standard to set his own wristwatch on his way to work. He then asked the owner how he kept all the clocks so perfectly set. The owner explained he set the clocks to the sound of the steam whistle at the local factory, which always blew precisely at noon. The store owner then asked the man what he did for a living. The man replied, “I operate the steam whistle at the factory.”↩︎

  510. This, of course, assumes the potentiometer has a sufficiently fine adjustment capability that we may adjust the millivoltage signal to any desired precision. If we were forced to use a coarse potentiometer – incapable of being adjusted to the precise amount of millivoltage we desired – then the accuracy of our calibration would also be limited by our inability to precisely control the applied voltage.↩︎

  511. The Celsius scale used to be called the Centigrade scale, which literally means “100 steps.” I personally prefer the name “Centigrade” to the name “Celsius” because the former actually describes something about the unit of measurement while the latter is a surname. In the same vein, I also prefer the older label “Cycles Per Second” (cps) to “Hertz” as the unit of measurement for frequency. You may have noticed by now that the instrumentation world does not yield to my opinions, much to my chagrin.↩︎

  512. Three, if you count the triple point, but this requires more sophisticated testing apparatus to establish than either the freezing or boiling points.↩︎

  513. Pressure does have some influence on the freezing point of most substances as well, but not nearly to the degree it has on the boiling point. For a comparison between the pressure-dependence of freezing versus boiling points, consult a phase diagram for the substance in question, and observe the slopes of the solid-liquid phase line and liquid-vapor phase line. A nearly-vertical solid-liquid phase line shows a weak pressure dependence, while the liquid-vapor phase lines are typically much closer to horizontal.↩︎

  514. For each of these examples, the assumptions of a 100% pure sample and an airless testing environment are made. Impurities in the initial sample, and/or those resulting from chemical reactions with air at elevated temperatures, may introduce serious errors.↩︎

  515. A “black body” is an idealized object having an emissivity value of exactly one (1). In other words, a black body is a perfect radiator of thermal energy. Interestingly, a blind hole drilled into any object at sufficient depth acts as a black body, and is sometimes referred to as a cavity radiator.↩︎

  516. For example, a solution with a pH value of 4.7 has a concentration of \(10^{-4.7}\) moles of active hydrogen ions per liter. For more information on “moles” and solution concentration, see section 3.7 beginning on page .↩︎
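
  Working the cited example through: \(10^{-4.7} \approx 2.0 \times 10^{-5}\) moles of active hydrogen ions per liter.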

  517. A clean and healthy pH probe should stabilize within about 30 seconds of being inserted in a buffer solution.↩︎

  518. Carbon dioxide gas in ambient air will cause carbonic acid to form in an aqueous solution. This has an especially rapid effect on high-pH (alkaline) buffers.↩︎

  519. It is assumed that the concentration of oxygen in ambient air is a stable enough quantity to serve as a calibration standard for most industrial applications. It is certainly an accessible standard!↩︎

  520. If you are having difficulty understanding this concept, imagine a simple U-tube manometer where one of the tubes is opaque, and therefore one of the two liquid columns cannot be seen. In order to be able to measure pressure just by looking at one liquid column height, we would have to make a custom scale where every inch of height registered as two inches of water column pressure, because for each inch of height change in the liquid column we can see, the liquid column we can’t see also changes by an inch. A scale custom-made for a well-type manometer is just the same concept, only without such dramatic skewing of scales.↩︎

  521. As of this writing, 2008.↩︎

  522. For a simple demonstration of metal fatigue and metal “flow,” simply take a metal paper clip and repeatedly bend it back and forth until you feel the metal wire weaken. Gentle force applied to the paper clip will cause it to deform in such a way that it returns to its original shape when the force is removed. Greater force, however, will exceed the paper clip’s elastic limit, causing permanent deformation and also altering the spring characteristics of the clip.↩︎

  523. In the following diagram, both the sensing diaphragm and the stationary metal surfaces are shown colored blue, to distinguish these electrical elements from the other structural components of the device.↩︎

  524. A chop saw is admittedly not a tool of finesse, and it did a fair job of mangling this unfortunate differential capacitance cell. A bandsaw was tried at first, but made virtually no progress in cutting the hard stainless steel of the capsule assembly. The chop saw’s abrasive wheel created a lot of heat, discoloring the metal and turning the silicone fill fluid into a crystalline mass which had to be carefully chipped out by hand using an ice pick so as to not damage the thin metal sensing diaphragm. Keep these labors in mind, dear reader, as you enjoy this textbook!↩︎

  525. Not only did applied torque of the four capsule bolts affect measurement accuracy in the older 1151 model design, but changes in temperature resulting in changing bolt tension also had a detrimental impact on accuracy. Most modern differential pressure transmitter designs strive to isolate the sensing diaphragm assembly from flange bolt stress for these reasons.↩︎

  526. For example, a doubling of force results in a frequency increase by a factor of 1.414 (precisely, \(\sqrt{2}\)). A four-fold increase in pressure would be necessary to double the string’s resonant frequency. This particular form of nonlinearity, where diminishing returns are realized as the applied stimulus increases, yields excellent rangeability. In other words, the instrument is inherently more sensitive to changes in pressure at the low end of its sensing range, and “de-sensitizes” itself toward the high end of its sensing range.↩︎

  527. This is an example of a micro-electro-mechanical system, or MEMS.↩︎

  528. Based on the design of Foxboro’s popular model 13A pneumatic “DP cell” differential pressure transmitter.↩︎

  529. Very loosely based on the design of Foxboro’s now-obsolete E13 electronic “DP cell” differential pressure transmitter.↩︎

  530. One instrument technician I know referred to the Foxboro E13 differential pressure transmitter as “pig iron” after having to hoist it by hand to the top of a distillation column.↩︎

  531. As far as I have been able to determine, the labels “D/P” and “DP cell” were originally trademarks of the Foxboro Company. Those particular transmitter models became so popular that the term “DP cell” came to be applied to nearly all makes and models of differential pressure transmitter, much like the trademark “Vise-Grip” is often used to describe any self-locking pliers, or “Band-Aid” is often used to describe any form of self-adhesive bandage.↩︎

  532. One transmitter manufacturer I am aware of (ABB/Bailey) actually does use the “+” and “\(-\)” labels to denote high- and low-pressure ports rather than the more customary “H” and “L” labels found on other manufacturers’ DP products.↩︎

  533. Perfect common-mode rejection is impossible for differential pressure instruments just as it is impossible for electronic voltage-measuring instruments, but in either case the effect is usually minimal. For differential pressure transmitters, the effect of common-mode pressure on the instrument’s output signal is sometimes referred to as the line pressure effect or static pressure effect, typically stated as a percentage of the instrument’s upper range limit per unit of common-mode pressure.↩︎

  534. The electrical circuit shown on the right uses a pair of series-connected resistors to divide the source voltage into two parts, 5 volts and 95 volts. The pneumatic circuit shown on the left uses a pair of series-connected hand valves to divide the source pressure into two parts, 5 PSI and 95 PSI.↩︎

  535. Also called impulse tubes, gauge tubes, or sensing tubes.↩︎

  536. Truth be told, most process variables are inferred rather than directly measured. Even pressure, which is being used here to infer measurements such as liquid level and fluid flow, is itself inferred from some other variable inside the DP instrument (e.g. capacitance, strain gauge resistance, resonant frequency)!↩︎

  537. We simply assume Earth’s gravitational acceleration (\(g\)) to be constant as well.↩︎

  538. To return the transmitter to live service, simply reverse these steps: close the bleed valve, open the low-pressure block valve, close the equalizing valve, and finally open the high-pressure block valve.↩︎

  539. The standard 3-valve manifold, for instance, does not provide a bleed valve – only block and equalizing valves.↩︎

  540. This concept will be immediately familiar to anyone who has ever had to “bleed” air bubbles out of an automobile brake system. With air bubbles in the system, the brake pedal has a “spongy” feel when depressed, and much pedal motion is required to achieve adequate braking force. After bleeding all air out of the brake fluid tubes, the pedal motion feels much more “solid” than before, with minimal motion required to achieve adequate braking force. Imagine the brake pedal being the isolating diaphragm, and the brake pads being the pressure sensing element inside the instrument. If enough gas bubbles exist in the tubes, the brake pedal might stop against the floor when fully pressed, preventing full force from ever reaching the brake pads. Likewise, if the isolating diaphragm hits a hard motion limit due to gas bubbles in the fill fluid, the sensing element will not experience full process pressure.↩︎

  541. So long as the isolating diaphragm is “slack” (i.e. has no appreciable tautness or resistance to movement), the pressure of the fill fluid inside the capillary tube will be equal to the pressure of whatever fluid is within the process vessel. If any pressure imbalance were to develop between the process and fill fluids, the isolating diaphragm would immediately shift position away from the higher-pressure fluid and toward the lower-pressure fluid until equal pressures were re-established. In real practice, isolating diaphragms do indeed have some stiffness opposing motion, and therefore do not perfectly transfer pressure from the process fluid to the fill fluid. However, this pressure difference is usually negligible.↩︎

  542. Like all instrument diaphragms, this one is sensitive to damage from contact with sharp objects. If the diaphragm ever becomes nicked, dented, or creased, it will tend to exhibit hysteresis in its motion, causing calibration errors for the instrument. For this reason, isolating diaphragms are often protected from contact by a plastic plug when the instrument is shipped from the manufacturer. This plug must be removed from the instrument before placing it into service.↩︎

  543. Anyone familiar with “bleeding” air bubbles out of automotive hydraulic brake systems will understand this concept. In order for the pedal-operated hydraulic brakes in an automobile to function as designed, the hydraulic system must be gas-free. Incompressible liquid transfers pressure without loss of motion, whereas compressible gas bubbles will “give” in to pressure and result in lost brake pad motion for any given brake pedal motion. Thus, an hydraulic brake system with air bubbles in it will have a “spongy” feel at the brake pedal, and may not give full braking force when needed.↩︎

  544. Most pressure instrument manufacturers offer a range of fill fluids for different applications. Not only is temperature a consideration in the selection of the right fill fluid, but also potential contamination of or reaction with the process if the isolating diaphragm ever suffers a leak!↩︎

  545. Truth be told, this is a requirement for all pressure transmitter fill fluids even when isolating diaphragms are in place to prevent mixing of process and fill fluids, because no diaphragm is 100% guaranteed to seal forever. This means every pressure transmitter must be chosen for the application in mind, since modern DP transmitters all use fill fluid in their internal sensors, whether or not the impulse lines are also filled with a non-reactive fluid.↩︎

  546. In fact, after you become accustomed to the regular “popping” and “hissing” sounds of steam traps blowing down, you can interpret the blow-down frequency as a crude ambient temperature thermometer! Steam traps seldom blow down during warm weather, but their “popping” is much more regular (one every minute or less) when ambient temperatures drop well below the freezing point of water.↩︎

  547. “Cryogenic” simply refers to a condition of extremely low temperature required to condense a gas into liquid. Such liquids will flash into vapor if raised to room temperature, and so it is quite easy to make impulse lines self-purging in such cases.↩︎

  548. At least in the case of a liquid-filled impulse line generating its own hydrostatic pressure, that pressure is constant and may be compensated by “zero-shifting” the range of the pressure instrument. An impulse line that generates random surges of pressure cannot be compensated at all!↩︎

  549. Although this fluid would not normally contact pure oxygen in the process, it could if the isolating diaphragm inside the transmitter were to ever leak.↩︎

  550. Liquids are considered “miscible” if they may be mixed in any proportion to each other to form a solution. Immiscible liquids refuse to mix thoroughly, and therefore tend to separate.↩︎

  551. A spring-loaded cable float only works with liquid level measurement, while a retracting float will measure liquids and solids with equal ease. The reason for this limitation is simple: a float that always contacts the material surface is likely to become buried if the material in question is a solid (powder or granules), which must be fed into the vessel from above.↩︎

  552. We may prove this mathematically by algebraic substitution. Given that the total mass (\(m\)) of any liquid sample is equal to the product of that liquid’s mass density and its sample volume (\(m = \rho V\)), that volume (\(V\)) for any vessel of constant cross-sectional area (\(A\)) is given by the expression \(V = Ah\), and that hydrostatic pressure is equal to \(P = \rho g h\), we may combine these three equations to arrive at \(m = {AP \over g}\). This final equation demonstrates how the total mass of liquid stored in a vessel (\(m\)) of constant cross-sectional area (\(A\)) is directly proportional to pressure (\(P\)), and independent of density (\(\rho\)).↩︎

  553. Or alternatively, zero depression.↩︎

  554. There is some disagreement among instrumentation professionals as to the definitions of these two terms. According to Béla G. Lipták’s Instrument Engineers’ Handbook, Process Measurement and Analysis (Fourth Edition, page 67), “suppressed zero range” refers to the transmitter being located below the 0% level (the LRV being a positive pressure value), while “suppression,” “suppressed range,” and “suppressed span” mean exactly the opposite (LRV is a negative value). The Yokogawa Corporation defines “suppression” as a condition where the LRV is a positive pressure (“Autolevel” Application Note), as does Michael MacBeth in his CANDU Instrumentation & Control course (lesson 1, module 4, page 12), Foxboro’s technical notes on bubble tube installations (pages 4 through 7), and Rosemount’s product manual for their 1151 Alphaline pressure transmitter (page 3-7). Interestingly, the Rosemount document defines “zero range suppression” as synonymous with “suppression,” which disagrees with Lipták’s distinction. My advice: draw a picture if you want the other person to clearly understand what you mean!↩︎

  555. As you are about to see, the calibration of an elevated transmitter depends on us knowing how much hydrostatic pressure (or vacuum, in this case) is generated within the tube connecting the transmitter to the process vessel. If liquid were to ever escape from this tube, the hydrostatic pressure would be unpredictable, and so would be the accuracy of our transmitter as a level-measuring instrument. A remote seal diaphragm guarantees no fill fluid will be lost if and when the process vessel goes empty.↩︎

  556. The sea water’s positive pressure at the remote seal diaphragm adds to the negative pressure already generated by the downward length of the capillary tube’s fill fluid (\(-2.43\) PSI), which explains why the transmitter only “sees” 2.46 PSI of pressure at the 100% full mark.↩︎

  557. Sometimes this is done out of habit, other times because instrument technicians do not know the capabilities of new technology.↩︎

  558. This is due to limited transmitter resolution. Imagine an application where the elevation head was 10 PSI (maximum) yet the vapor space pressure was 200 PSI. The majority of each transmitter’s working range would be “consumed” measuring gas pressure, with hydrostatic head being a mere 5% of the measurement range. This would make precise measurement of liquid level very difficult, akin to trying to measure the sound intensity of a whisper in a noisy room.↩︎

  559. Assuming the liquid level is equal to or greater than \(x\). Otherwise, the pressure difference between \(P_{bottom}\) and \(P_{middle}\) will depend on liquid density and liquid height. However, this condition is easy to check: the level computer simply checks to see if \(P_{middle}\) and \(P_{top}\) are unequal. If so, then the computer knows the liquid level exceeds \(x\) and it is safe to calculate density. If not, and \(P_{middle}\) registers the same as \(P_{top}\), the computer knows those two transmitters are both registering gas pressure only, and it knows to stop calculating density.↩︎

  560. The details of this math depend entirely on the shape of the tank. For vertical cylinders – the most common shape for vented storage tanks – volume and height are related by the simple formula \(V = \pi r^2 h\) where \(r\) is the radius of the tank’s circular base. Other tank shapes and orientations may require much more sophisticated formulae to calculate stored volume from height. See section 26.3 beginning on page , for more details on this subject.↩︎
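
  As a minimal sketch of the vertical-cylinder case only (the function name and example dimensions here are illustrative, not part of any standard):

      #include <stdio.h>

      #define PI 3.141592653589793

      /* Stored liquid volume in a vertical cylindrical tank of radius r,
         given a measured liquid height h.  Any consistent length unit may
         be used; the returned volume is in that unit cubed. */
      double cylinder_volume(double r, double h)
      {
          return PI * r * r * h;
      }

      int main(void)
      {
          /* Example: a tank 10 feet in radius holding 12 feet of liquid */
          printf("Volume = %.1f cubic feet\n", cylinder_volume(10.0, 12.0));
          return 0;
      }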

  561. Here I will calculate all hydrostatic pressures in units of inches water column. This is relatively easy because we have been given the specific gravities of each liquid, which make it easy to translate actual liquid column height into column heights of pure water.↩︎

  562. Remember that a differential pressure instrument cannot “tell the difference” between a positive pressure applied to the low side, an equal vacuum applied to the high side, or an equivalent difference of two positive pressures with the low side’s pressure exceeding the high side’s pressure. Simulating the exact process pressures experienced in the field to a transmitter on a workbench would be exceedingly complicated, so we “cheat” by simplifying the calibration setup and applying the equivalent difference of pressure only to the “low” side.↩︎

  563. This is not unlike the experience of feeling lighter when you are standing in a pool of water just deep enough to submerge most of your body with your feet touching the bottom. This reduction of apparent weight is due to the buoyant force of the water upward on your body, equal to the weight of water that your body displaces.↩︎

  564. So-called for its ability to “knock out” (separate and collect) condensible vapors from the gas stream. This particular photograph was taken at a natural gas compression facility, where it is very important the gas to be compressed is dry (since liquids are essentially incompressible). Sending even relatively small amounts of liquid into a compressor may cause the compressor to catastrophically fail!↩︎

  565. To anyone familiar with the front suspension of a 1960’s vintage Chevrolet truck, or the suspension of the original Volkswagen “Beetle” car, the concept of a torsion bar should be familiar. These vehicles used straight, spring-steel rods to provide suspension force instead of the more customary coil springs used in modern vehicles. However, even the familiar coil spring is an example of torsional forces at work: a coil spring is nothing more than a torsion bar bent in a coil shape. As a coil spring is stretched or compressed, torsional forces develop along the circumferential length of the spring coil, which is what makes the spring “try” to maintain a fixed height.↩︎

  566. This illustration is simplified, omitting such details as access holes into the cage, block valves between the cage and process vessel, and any other pipes or instruments attached to the process vessel. Also, the position-sensing mechanism normally located at the far left of the assembly is absent from this drawing.↩︎

  567. The general term for this form of measurement is time domain reflectometry.↩︎

  568. My own experience with this trend is within the oil refining industry, where legacy displacer instruments (typically Fisher brand “Level-Trol” units) are being replaced with new guided-wave radar transmitters, both for vapor-liquid and liquid-liquid interface applications.↩︎

  569. The speed of sound through any substance is a function of both the substance’s density and its bulk modulus (i.e. the compressibility of a substance). Mathematically, \(c = \sqrt{B \over \rho}\) where \(c\) is the sonic velocity, \(B\) is the bulk modulus, and \(\rho\) is the mass density. Water and air provide an excellent illustration of this principle: the speed of sound through water happens to be much faster than the speed of sound through air despite the vastly greater mass density of water, only because of the even greater disparity in bulk modulus between water and air.↩︎
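
  Putting rough textbook figures to this comparison: for water, \(B \approx 2.2 \times 10^9\) Pa and \(\rho \approx 1000\) kg/m\(^3\), giving \(c = \sqrt{2.2 \times 10^9 \over 1000} \approx 1480\) m/s; for air at room conditions, \(B \approx 1.42 \times 10^5\) Pa and \(\rho \approx 1.2\) kg/m\(^3\), giving \(c \approx 344\) m/s. Water’s bulk modulus exceeds air’s by a factor of roughly 15000, far outweighing its roughly 800-fold greater density.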

  570. In the industrial instrumentation world, the word “transducer” usually has a very specific meaning: a device used to process or convert standardized instrumentation signals, such as 4-20 mA converted into 3-15 PSI, etc. In the general scientific world, however, the word “transducer” describes any device converting one form of energy into another. It is this latter definition of the word that I am using when I describe an ultrasonic “transducer” – a device used to convert electrical energy into ultrasonic sound waves, and vice-versa.↩︎

  571. “Radar” is an acronym: RAdio Detection And Ranging. First used as a method for detecting enemy ships and aircraft at long distances over the ocean in World War II, this technology is used for detecting the presence, distance, and/or speed of objects in a wide variety of applications.↩︎

  572. In fact, it is a common retrofit practice to install a guided-wave radar level transmitter in the exact same cage that once housed a displacement-style level transmitter.↩︎

  573. In actuality, both radio waves and light waves are electromagnetic in nature. The only difference between the two is frequency: while the radio waves used in radar systems are classified as “microwaves” with frequencies in the gigahertz (GHz) region, visible light waves range in the hundreds of terahertz (THz)!↩︎

  574. This formula assumes lossless conditions: that none of the wave’s energy is converted to heat while traveling through the dielectric. For many situations, this is true enough to assume.↩︎

  575. Or if the chemical composition of the gas or vapor changes dramatically.↩︎

  576. The pressure and temperature factors in this formula come from the Ideal Gas Law (\(PV = nRT\)), manipulating that equation to express molecular gas density in terms of pressure and temperature (\(\rho = {n \over V} = {P \over RT}\)). The fraction \({P T_{ref} \over P_{ref} T}\) expresses a ratio of molecular densities: \(\rho \over \rho_{ref}\).↩︎

  577. Dielectric permittivity is one of the factors determining the speed of any electromagnetic wave through a substance, but not the only one. The material’s magnetic permeability is another factor, but it is far more common to encounter interfaces of gas-liquid or liquid-liquid where differences in permittivity rather than differences in permeability constitute the major reason for differences in radio wave velocity.↩︎

  578. Rosemount’s “Replacing Displacers with Guided Wave Radar” technical note states that the difference in dielectric constant between the upper and lower liquids must be at least 10.↩︎

  579. \(R = 0.5285\) for the 1/40 interface; \(R = 0.02944\) for the 40/80 interface; and \(R = 0.6382\) for the 1/80 interface, all based on the formula \(R = {\left({\sqrt{\epsilon_{r}} - 1}\right)^2 \over \left(\sqrt{\epsilon_{r}} + 1 \right)^2}\) where \(\epsilon_{r}\) is the ratio of the two relative permittivity values at each interface (e.g. \(\epsilon_{r} = 40\) for the 1/40 interface, \(\epsilon_{r} = 2\) for the 40/80 interface).↩︎
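
These three values may be checked numerically. The sketch below uses the equivalent two-permittivity form of the same formula:

```python
from math import sqrt

def reflection(eps1, eps2):
    """Fraction of incident power reflected at an interface between two
    dielectrics with relative permittivities eps1 and eps2."""
    return ((sqrt(eps2) - sqrt(eps1)) / (sqrt(eps2) + sqrt(eps1))) ** 2

print(reflection(1, 40))    # 0.5285 for the 1/40 interface
print(reflection(40, 80))   # 0.02944 for the 40/80 interface
print(reflection(1, 80))    # 0.6382 for the 1/80 interface
```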

  580. It should be noted that the dielectric constant of the lowest medium (the liquid in a simple, non-interface, level measurement application) is irrelevant for calibration purposes. All we are concerned with is the propagation time of the signal to and from the level of interest, nothing below it.↩︎

  581. For vented-tank level measurement applications where air is the only substance above the point of interest, the relative permittivity is so close to a value of 1 that there is little need for further consideration on this point. Where the permittivity of fluids becomes a problem for radar is in high-pressure (non-air) gas applications and liquid-liquid interface applications, especially where the upper substance composition is subject to change.↩︎

  582. Probe mounting style will also influence the lower transition zone, in the case of flexible probes anchored to the bottom of the process vessel.↩︎

  583. An approximate analogy for understanding the nature of this pulse may be performed using a length of rope. Lay a long piece of rope in a straight line on the ground, then pick up one end and quickly move it in a tight circle using a “flip” motion of your wrist. You should be able to see the torsional pulse travel down the length of the rope until it either dies out from dissipation or it reaches the rope’s end. As with the torsional pulse in a magnetostrictive waveguide, this pulse in the rope is mechanical in nature: a movement of the rod’s (rope’s) molecules. As a mechanical wave, it may be properly understood as a form of sound.↩︎

  584. This “dampener” is the mechanical equivalent of a termination resistor in an electrical transmission line: it makes the traveling wave “think” the waveguide is infinitely long, preventing any reflected pulses. For more information on electrical transmission lines and termination resistors, see section 5.10 beginning on page .↩︎

  585. This particular transmitter happens to be one of the “M-Series” models manufactured by MTS.↩︎

  586. One reference gives the speed of sound in a magnetostrictive level instrument as 2850 meters per second. Rounding this up to \(3 \times 10^3\) m/s, we find that the speed of sound in the magnetostrictive waveguide is at least five orders of magnitude slower than the speed of light in a vacuum (approximately \(3 \times 10^8\) m/s). This relative slowness of wave propagation is a good thing for our purposes here, as it gives more time for the electronic timing circuit to count, yielding a more precise measurement of distance traveled by the wave. This fact grants superior resolution of measurement to magnetostrictive level sensors over radar-based and laser-based level sensors. Open-air ultrasonic level instruments deal with propagation speeds even slower than this (principally because the bulk moduli of gases and vapors are far less than that of a solid metal rod), which at first might seem to give these level sensors the advantage in precision. However, open-air level sensors experience far greater propagation velocity variations caused by changes in pressure and temperature than magnetostrictive sensors. Unlike the speed of sound in gases or liquids, the speed of sound in a solid metal rod is very stable over a large range of process temperatures, and practically constant for a large range of process pressures. Another factor adding to the calibration stability of magnetostrictive instruments is that the composition of the medium never changes. With instruments measuring time-of-flight through process fluids, the chemical composition of those fluids often affects the wave velocity. In a magnetostrictive instrument, the waves are always traveling through the same material – the metal of the waveguide bar – and thus are not subject to variation with process changes.↩︎

  587. Regardless of the vessel’s shape or internal structure, the measurement provided by a weight-sensing system is based on the true mass of the stored material. Unlike height-based level measurement technologies (float, ultrasonic, radar, etc.), no characterization will ever be necessary to convert a measurement of height into a measurement of mass.↩︎

  588. If we happened to know, somehow, that the vessel’s weight was in fact equally shared by all supports, it would be sufficient to simply measure stress at one support to infer total vessel weight. In such an installation, assuming three supports, the total vessel weight would be the stress at any one support multiplied by three.↩︎

  589. The particular “micro-brewery” process shown here is at the Pike Place Market in downtown Seattle, Washington. Three load cells measure the weight of a hopper filled with ingredients prior to brewing in the “mash tun” vessel.↩︎

  590. One practical solution to this problem is to shut down the source of vibration (e.g. agitator motor, pump, etc.) for a long enough time to take a sample weight measurement, then run the machine again between measurements. So long as intermittent weight measurement is adequate for the needs of the process, the interference of machine vibration may be dealt with in this manner.↩︎

  591. Beta particles are not orbital electrons, but rather the product of elementary particle decay in an atom’s nucleus. These electrons are spontaneously generated and subsequently ejected from the nucleus of the atom.↩︎

  592. The half-life of a radioactive substance is the amount of time it takes for one-half of the original quantity to experience radioactive decay. To illustrate, a 10-gram quantity consisting of 100% Cobalt-60 atoms will only contain 5 grams of Cobalt-60 after 5.3 years, and then only 2.5 grams of Cobalt-60 after another 5.3 years (10.6 years from the start), and so on. The actual mass of the sample does not change significantly over this time period because the Cobalt atoms have decayed into atoms of Nickel, which still have the same atomic mass value. However, the intensity of the gamma radiation emitted by the sample decreases over time, proportional to the percentage of Cobalt remaining therein.↩︎
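
The decay follows a simple exponential law, which may be sketched as follows (using the half-life value given above):

```python
def mass_remaining(initial_mass, t_years, half_life_years=5.3):
    """Mass of the original isotope remaining after time t, given its
    half-life: N = N0 * 0.5 ** (t / t_half)."""
    return initial_mass * 0.5 ** (t_years / half_life_years)

print(mass_remaining(10, 5.3))    # 5.0 grams of Cobalt-60 remain
print(mass_remaining(10, 10.6))   # 2.5 grams
print(mass_remaining(10, 21.2))   # 0.625 grams after four half-lives
```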

  593. So much of the incident power is lost as the radar signal partially reflects off the gas-liquid interface, then the liquid-liquid interface, then again through the gas-liquid interface on its return trip to the instrument that every care must be taken to ensure optimum received signal strength. While twin-lead probes have been applied in liquid-liquid interface measurement service, the coaxial probe design is still the best for maintaining radar signal integrity.↩︎

  594. Even this advantage is not always true. It is possible to build self-powered thermocouple temperature indicators, where an analog meter movement is driven by the electrical energy a thermocouple sensing junction outputs. Here, no external electrical power source is required! However, the accuracy of self-powered thermocouple systems is poor, as is the ability to measure small temperature ranges.↩︎

  595. “Swamping” is the term given to the overshadowing of one effect by another. Here, the normal resistance of the thermistor greatly overshadows (“swamps”) any wire resistance in the circuit, such that wire resistance becomes negligible.↩︎

  596. Remember that an ideal voltmeter has infinite input impedance, and modern semiconductor-amplified voltmeters have impedances of several mega-ohms or more.↩︎

  597. Note that the middle wire resistance is of no effect because it does not carry the RTD’s current. The amount of current entering or exiting an operational amplifier is assumed to be zero for all practical purposes.↩︎

  598. These errors will result only if the paralleled wires carry current. If the two wires you paralleled happen to join the transmitter’s sensing terminal to the RTD (the one carrying no current), no errors will result. However, many RTD transmitters do not document which of the terminals sense (carry no current) versus which of them excite (carry current to the RTD), and so there is a probability of getting it wrong if you simply guess. Given that there is no real benefit to having paralleled wires connecting the transmitter’s sensing terminal to the RTD, my advice is to either use all four wires and configure the transmitter for 4-wire mode, or don’t use the fourth wire at all.↩︎

  599. By “first principles,” I mean the basic laws of electric circuits. In this case, the most important law to apply is Kirchhoff’s Voltage Law: the algebraic sum of voltages in any loop must be equal to zero.↩︎

  600. The colors in this table apply only to the United States and Canada. A stunning diversity of colors has been “standardized” for each thermocouple type per nationality. The British and Czechs use their own color code, as do the Dutch and Germans. France has its own unique color code as well. Just for fun, an “international” color code also exists which doesn’t match any of the others. There are other deviations as well: the wire colors for type R and S thermocouples, for example, are standardized for extension-grade wire but not for thermocouple-grade wire.↩︎

  601. By “oxidizing,” what is meant is any atmosphere containing sufficient oxygen molecules or molecules of a similar element such as chlorine or fluorine.↩︎

  602. “Reducing” refers to atmospheres rich in elements that readily oxidize. Practically any fuel gas (hydrogen, methane, etc.) will create a reducing atmosphere in sufficient concentration.↩︎

  603. It should be noted that no amount of engineering or design is able to completely prevent people from doing the wrong thing. I have seen this style of thermocouple plug forcibly mated the wrong way to a socket. The amount of insertion force necessary to make the plug fit backward into the socket was quite extraordinary, yet apparently this was not enough of a clue to give this wayward individual pause.↩︎

  604. Grounded thermocouples often have thermal time constant values less than half those of comparable ungrounded thermocouples. Exposed-tip thermocouples are faster still, usually by an even larger margin than that separating grounded-tip from ungrounded designs.↩︎

  605. Early texts on thermocouple use describe multiple techniques for automatic compensation of the reference (“cold”) junction. One design placed a mercury bulb thermometer at the reference junction, with a loop of thin platinum wire dipped into the mercury. As junction temperature rose, the mercury column would rise and short past a greater length of the platinum wire loop, causing its resistance to decrease which in turn would electrically bias the measurement circuit to offset the effects of the reference junction’s voltage. Another design used a bi-metallic spring to offset the pointer of the meter movement, so that changes in temperature at the indicating instrument (where the reference junction was located) would result in the analog meter’s needle becoming offset from its normal “zero” point, thus compensating for the offset in voltage created by the reference junction.↩︎

  606. For any two-phase mixture of any single substance (in this case, H\(_{2}\)O) the temperature of that mixture will be a strict function of pressure, the mixture possessing only one thermodynamic degree of freedom. Any addition or removal of heat from the ice/water mix results in a phase change (e.g. either more ice melts to become water, or more water freezes to become ice) rather than a temperature change. If even more precision is desired, a triple point cell may be used to fix the reference junction’s temperature, since any substance at its triple point will possess zero degrees of thermodynamic freedom (i.e. neither its pressure nor temperature can change).↩︎

  607. Please note that “cold junction” is just a synonymous label for “reference junction.” In fact the “cold” reference junction may very well be at a warmer temperature than the so-called “hot” measurement junction! Nothing prevents anyone from using a thermocouple to measure temperatures below the freezing point of water.↩︎

  608. A junction of copper and constantan just happens to be a type T thermocouple junction.↩︎

  609. No coloring standard exists in the United States for platinum thermocouple-grade wire (e.g. types R, S, etc.).↩︎

  610. The colors I list here are for thermocouples in the United States.↩︎

  611. The effect will be exactly the same for an instrument with software compensation rather than hardware compensation. With software compensation, there is no literal \(V_{rjc}\) voltage source, but the equivalent millivolt value is digitally added to the zero input measured at the thermocouple connection terminals, resulting in the same effect of measuring ambient temperature.↩︎

  612. For those readers familiar with digital logic gate circuits, this resistor fulfills the same function as a pullup or pulldown resistor on the input of a digital gate: providing a stable logic state in the event of a floating input condition.↩︎

  613. This is a good application of fail-safe design, where we choose the transmitter’s failure mode based on the safest outcome. For example, if our temperature transmitter were being used to sense the temperature of a furnace where excessive temperature was more dangerous than insufficient temperature, we would want to configure it for “high” burnout. This way if the thermocouple fails open, the transmitter will report a dangerous (but false) measurement of furnace temperature to the controller, which in turn will automatically act to decrease the furnace’s actual temperature (i.e. the safer condition.)↩︎

  614. Although Seebeck discovered thermo-electricity in 1822, the technique of measuring temperature by sensing the voltage produced at a dissimilar-metal junction was delayed in practical development until 1886 when rugged and accurate electrical meters became available for industrial use.↩︎

  615. Anyone who has ever used a magnifying glass (a concentrating lens) to concentrate sunlight knows how this works. If you were to use a magnifying glass to concentrate sunlight onto a thermocouple-type sensor, you could (at least in principle) infer the temperature of the sun in this manner.↩︎

  616. Later versions of the Radiamatic (dubbed the Radiamatic II) were more than just a bare thermopile and optical concentrator, containing electronic circuitry to output a linearized 4-20 mA signal representing target temperature.↩︎

  617. Comparing temperature ratios versus thermopile millivoltage ratios assumes linear thermocouple behavior, which we know is not exactly true. Even if the thermopile focal point temperatures precisely followed the ratios predicted by the Stefan-Boltzmann law, we would still expect some inconsistencies due to the non-linearities of thermocouple voltages. There will also be variations from predicted values due to shifts in radiated light frequencies, changes in emissivity factor, thermal losses within the sensing head, and other factors that refuse to remain constant over wide ranges of received radiation intensity. The lesson here is to not expect perfect agreement with theory!↩︎

  618. An important caveat to this rule is that it holds only so long as the target object completely fills the sensor’s field of view (FOV). The reason for this caveat will become clear at the conclusion of the explanation.↩︎

  619. The field of view (a circle where the viewing “cone” intercepts the flat surface of the object) increases linearly in diameter with increases in distance between the sensor and the object. However, since the area of a circle is proportional to the square of its diameter (\(A = {\pi D^2 \over 4}\) or \(A = \pi r^2\)), we may say that the viewing area increases with the square of the distance between the sensor and object.↩︎
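
A quick numerical illustration of this square-law relationship (the viewing cone angle assumed below is arbitrary):

```python
from math import pi, tan, radians

def viewing_area(distance, half_angle_deg=2.5):
    """Area of the circular field of view at a given distance, for a
    conical viewing angle (the half-angle here is an arbitrary assumption)."""
    diameter = 2 * distance * tan(radians(half_angle_deg))
    return pi * diameter ** 2 / 4

print(viewing_area(1.0))   # area at one meter
print(viewing_area(2.0))   # twice the distance: exactly four times the area
```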

  620. In general, it is better to install a thermowell in a pipe rather than in a vessel because the greater fluid turbulence of flow in a pipe expedites heat transfer by convection as well as helps to clean solid fouling off of the thermowell’s surface.↩︎

  621. The air gap acts as a thermal resistance while the mass of the element itself acts as a thermal capacitance. Thus, the inclusion of an air gap forms a thermal “RC time constant” delay network secondary to the thermal delay incurred by the thermowell. This adds another “order” of lag to the system, not just an increase in its thermal time constant. Generally speaking, multiple orders of lag are detrimental to process control because they increase phase shift in a feedback loop and may lead to oscillation.↩︎

  622. Analytical (chemical composition) measurement is undeniably more complex and diverse than flow measurement, but analytical measurement encompasses a great many specific measurement types. Considered as a single process variable, flow measurement is probably the most complex.↩︎

  623. Sometimes referred to as a plug of fluid.↩︎

  624. What really matters in Newton’s Second Law equation is the resultant force causing the acceleration. This is the vector sum of all forces acting on the mass. Likewise, what really matters in this scenario is the resultant pressure acting on the fluid plug, and this resultant pressure is the difference of pressure between one face of the plug and the other, since those two pressures impart two forces on the fluid mass in direct opposition to each other.↩︎

  625. Think of a piezometer tube as nothing more than a manometer tube: the greater the fluid pressure at the bottom of the tube, the higher the liquid will rise inside the tube.↩︎

  626. This is a very sound assumption for liquids, and a fair assumption for gases when pressure changes through the venturi tube are modest.↩︎

  627. One of the simplifying assumptions we make in this derivation is that friction plays no significant role in the fluid’s behavior as it moves through the venturi tube. In truth, no industrial fluid flow is totally frictionless (especially through more primitive flow elements such as orifice plates), and so our “theoretical” equations must be adjusted a bit to match real life.↩︎

  628. To see a graphical relationship between fluid acceleration and fluid pressures in a venturi tube, examine the illustration found in section [Fluid acceleration in a venturi] beginning on page .↩︎

  629. This re-write is solidly grounded in the rules of algebra. We know that \(\sqrt{a} \sqrt{b} = \sqrt{ab}\), which is what allows us to do the re-write.↩︎

  630. For positive numbers only!↩︎

  631. With so many modern instruments being capable of digitally implementing this square-root function, one must be careful to ensure it is only done once in the loop. I have personally witnessed flow-measurement installations where both the pressure transmitter and the indicating device were configured for square-root characterization. This essentially performed a fourth root characterization on the signal, which is just as bad as no characterization at all! Like anything else technical, the key to successful implementation is a correct understanding of how the system is supposed to work. Simply memorizing that “the instrument must be set up with square-root to measure flow” and blindly applying that mantra is a recipe for failure.↩︎
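
To see why double characterization fails, consider a differential pressure signal at 25% of range, which corresponds to 50% flow; a second square root inflates that figure to roughly 71%. A minimal sketch:

```python
from math import sqrt

dp = 0.25                       # differential pressure at 25% of range

flow_correct = sqrt(dp)         # 0.50  -- one square root: correct (50% flow)
flow_double = sqrt(sqrt(dp))    # 0.707 -- two square roots: a fourth root, wrong!
flow_none = dp                  # 0.25  -- no square root: also wrong

print(flow_correct, flow_double, flow_none)
```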

  632. Despite the impressive craftsmanship and engineering that went into the design of pneumatic square root extractors, their obsolescence is mourned by no one. These devices were notoriously difficult to set up and calibrate accurately, especially as they aged.↩︎

  633. L.K. Spink, in his book Principles and Practice of Flow Meter Engineering, notes that drain holes intended to pass solid objects may be useless in small pipe sizes, where the hole is so small it will probably become plugged with solid debris and cease to provide benefit. In such installations he recommends re-orienting the pipe vertically instead of horizontally. This allows solids to pass through the main bore of the orifice without “damming” on the upstream side of the orifice plate. I would add the suggestion to consider a different primary element entirely, such as a venturi tube. The small size of the line will limit the cost of such an element, and the performance is likely to be far better than an orifice plate anyway.↩︎

  634. To read more about the concept of Reynolds number, refer to section [Reynolds number] beginning on page .↩︎

  635. One significant source of error for customer-drilled tap holes is the interior finish of the holes. Even a small “burr” of metal left where the hole penetrates the inner surface of the pipe wall will cause substantial flow measurement errors!↩︎

  636. What this means is that a “pipe tap” installation is actually measuring permanent pressure loss, which also happens to scale with the square of flow rate because the primary mechanism for energy loss in turbulent flow conditions is the translation of linear velocity to angular (swirling) velocity in the form of eddies. This kinetic energy is eventually dissipated in the form of heat as the eddies eventually succumb to viscosity.↩︎

  637. One installation error seen in this photograph is a green plastic impulse tube with a bend extending above the upper flange tap. Any elevated portion of the impulse tube system will tend to collect gas bubbles over time, possibly causing measurement errors. A better installation would ensure the impulse tubes never extend above the flange tap they connect to on the liquid-bearing pipe.↩︎

  638. If an orifice plate is a “donut,” the V-cone is a “donut hole.”↩︎

  639. A “slurry” is a suspension of solid particles within a liquid. Mud is a common example of a slurry.↩︎

  640. This phenomenon may be observed when watching the flow of water through a turn in a river, especially if the river is fast-moving. Water level at the far (outside) bank of the turn will be higher than the water level at the near (inside) bank of the turn, due to radial acceleration of the water and the pressure difference that acceleration generates. In fact, that difference in water height may even be used to estimate the river’s flow rate!↩︎

  641. The fact that a pipe elbow generates small differential pressure is an accuracy concern because other sources of pressure become larger by comparison. Noise generated by fluid turbulence in the elbow, for example, becomes a significant portion of the pressure sensed by the transmitter when the differential pressure is so low (i.e. the signal-to-noise ratio becomes smaller). Errors caused by differences in elbow tap elevation and different impulse line fill fluids, for example, become more significant as well.↩︎

  642. This is not always the case, as primary elements are often found on throttled process lines. In such cases where a control valve normally throttles the flow rate, any energy dissipated by the orifice plate is simply less energy that the valve would otherwise be required to dissipate. Therefore, the presence or absence of an orifice plate has no net impact on energy dissipation when used on a process flow throttled by a control valve.↩︎

  643. This is not to be confused with micro-turbulence in the fluid, which cannot be eliminated at high Reynolds number values. In fact, “fully-developed turbulent flow” is desirable for head-based meter elements such as orifice plates because it means the flow profile will be relatively flat (even velocities across the pipe’s diameter) and frictional forces (viscosity) will be negligible. What we are trying to avoid are large-scale turbulent effects such as eddies, swirl, and asymmetrical flow profiles, which compromise the ability of most flowmeters to accurately measure flow rate.↩︎

  644. L.K. Spink mentions in his book Principles and Practice of Flow Meter Engineering that certain tests have shown flow measurement errors induced from severe disturbances as far as 60 to 100 pipe diameters upstream of the primary flow element. The April 2000 update of API standard 14.3 (for custody-transfer measurement of natural gas using orifice plates) calls for upward of 145 pipe diameters of straight-length pipe upstream of the orifice plate!↩︎

  645. Flow elements with low beta ratio values tolerate greater disturbance in the flow pattern because they accelerate the flowstream to a greater degree. This may be best visualized by a thought experiment where we imagine an orifice plate with a very large beta ratio (i.e. one where the bore size is nearly as large as the pipe diameter): such an orifice plate would hardly accelerate the fluid at all, which would mean a mis-shapen flow profile entering the bore would probably remain mis-shapen exiting it. The acceleration imparted to a flowstream by a low-beta element tends to overshadow any asymmetries in the flow profile. However, there are disadvantages to using low-beta elements, one of them being increased permanent pressure loss which may translate to increased operating costs due to energy loss.↩︎

  646. Beauty is truly in the eye of the beholder. While a piping designer might see straight-run lengths of pipe in awkward locations – necessitating more pipe and/or more bends elsewhere in the system to accommodate – as wasteful and ugly, the instrument engineer sees it as a thing of beauty.↩︎

  647. Richard W. Miller, in his outstanding book Flow Measurement Engineering Handbook, states that venturi tubes may come within 1 to 3 percent of ideal, while a square-edged orifice plate may perform as poorly as only 60 percent of theoretical!↩︎

  648. Specified in Part 2 of the AGA Report #3, section 2.6.5, page 22. A major reason for this is von Kármán vortex shedding caused by the gas having to flow around the width of the thermowell. The “street” of vortices shed by the thermowell will cause serious pressure fluctuations at the orifice plate unless mitigated by a flow conditioner, or by locating the thermowell downstream so that the vortices do not reach the orifice.↩︎

  649. This is especially true in the gas exploration industry, where natural gas coming out of the well is laden with mineral debris.↩︎

  650. Liquids can and do compress, the measurement of their “compressibility” being what is called the bulk modulus. However, this compressibility is too slight to be of any consequence in most flow measurement applications. A notable exception is the metering of diesel fuel through a high-pressure injection pump, where liquid pressures range in the tens of thousands of PSI, and the compressibility of the liquid diesel fuel may affect the precise timing of individual injections into the engine cylinders.↩︎

  651. “Swamping” is a term commonly used in electrical engineering, where a bad effect is overshadowed by some other effect much larger in magnitude, to the point where the undesirable effect is negligible in comparison.↩︎

  652. This includes elaborate oil-bath systems where the laminar flow element is submerged in a temperature-controlled oil bath, the purpose of which is to hold temperature inside the laminar element constant despite sudden changes in the measured fluid’s temperature.↩︎

  653. If we know that the plummet’s weight will remain constant, its drag area will remain constant, and that the force generated by the pressure drop will always be in equilibrium with the plummet’s weight for any steady flow rate, then the relationship \(F = P A\) dictates a constant pressure. Thus, we may classify the rotameter as a constant-pressure, variable-area flowmeter. This stands in contrast to devices such as orifice plates, which are variable-pressure, constant-area.↩︎

  654. Orifice plates are variable-pressure, constant-area flowmeters. Rotameters are constant-pressure, variable-area flowmeters. Weirs are variable-pressure, variable-area flowmeters. As one might expect, the mathematical functions describing each of these flowmeter types are unique!↩︎

  655. It is also possible to operate a Parshall flume in fully submerged mode, where liquid level must be measured at both the upstream and throat sections of the flume. Correction factors must be applied to these equations if the flume is submerged.↩︎

  656. These figures are reported in Béla Lipták’s excellent reference book Instrument Engineers’ Handbook – Process Measurement and Analysis Volume I (Fourth Edition). To be fair to closed-pipe elements such as orifice plates and venturi tubes, much improvement in the classic 3:1 rangeability limitation has been achieved through the use of microprocessor-based differential pressure sensors. Lipták reports rangeabilities for orifice plates as great as 10:1 through the use of such modern differential pressure instruments. However, even this pales in comparison to the rangeability of a typical weir or flume, which Lipták reports to be 75:1 for “most devices” in this category.↩︎

  657. “Custody transfer” refers to measurement applications where a product is exchanging ownership. In other words, someone is selling, and someone else is buying, quantities of fluid as part of a business transaction. It is not difficult to understand why accuracy is important in such applications, as both parties have a vested interest in a fair exchange. Government institutions also have a stake in accurate metering, as taxes are typically levied on the sale of commodity fluids such as natural gas.↩︎

  658. It is important to note that the vortex-shedding phenomenon ceases altogether if the Reynolds number is too low. Laminar flow produces no vortices, but rather streamline flow around any object placed in its way.↩︎

  659. Note that if flow rate is to be expressed in units of gallons per minute as is customary, the equation must contain a factor for minutes-to-seconds conversion: \(f = {kQ \over 60}\)↩︎

  660. This \(k\) factor is empirically determined for each flowmeter by the manufacturer using water as the test fluid (a factory “wet-calibration”), to ensure optimum accuracy.↩︎

  661. In a practical sense, only liquid flows are measurable using this technique. Gases must be super-heated into a plasma state before they are able to conduct electricity, and so electromagnetic flowmeters cannot be used with most industrial gas flowstreams.↩︎

  662. This is an application of the transitive property in mathematics: if two quantities are both equal to a common third quantity, they must also be equal to each other. This property applies to proportionalities as well as equalities: if two quantities are proportional to a common third quantity, they must also be proportional to each other.↩︎

  663. The colloquial term in the United States for this sort of thing is fudge factor.↩︎

  664. The obvious solution to this problem – relocating the pipes to give more clearance between flowmeters – would be quite expensive given the large pipe sizes involved. A “compromise” solution is to tilt the magnetic flowtubes as far as possible without the electrodes touching the adjacent flowtube. Horizontal electrode installation is ideal for horizontal pipes, but an angled installation will be better than a vertical installation.↩︎

  665. As always, check the manufacturer’s literature for specific requirements, as variations do exist for different models and sizes of magtube.↩︎

  666. Even electrically non-conducting solid matter is tolerated well by magnetic flowmeters, since the conducting liquid surrounding the solids still provides continuity from one electrode to the other.↩︎

  667. Braided conductors do a better job of shunting radio-frequency currents, because at very high frequencies the skin effect makes the surface area of a conductor a greater factor in its conductivity than its cross-sectional area.↩︎

  668. For example, in a condition of no liquid flow through the tube, the electrodes will intercept no voltage at all when the magnetic excitation is 60 Hz AC. When liquid moves slowly in the forward direction through the tube, a low-amplitude 60 Hz millivoltage signal will be detected at the electrodes. When liquid moves rapidly in the forward direction through the tube, the induced 60 Hz AC millivoltage will be greater in amplitude. Any liquid motion in the reverse direction induces a proportional 60 Hz AC voltage signal whose phase is 180\(^{o}\) shifted from the excitation signal driving the magnetic coils of the flowtube.↩︎

  669. We know this because the largest electrical noise sources in industry are electric motors, transformers, and other power devices operating on the exact same frequency (60 Hz in the United States, 50 Hz in Europe) as the flowtube coils.↩︎

  670. In the industrial instrumentation world, the word “transducer” usually has a very specific meaning: a device used to process or convert standardized instrumentation signals, such as 4-20 mA converted into 3-15 PSI, etc. In the general scientific world, however, the word “transducer” describes any device converting one form of energy into another. It is this latter definition of the word that I am using when I describe an ultrasonic “transducer” – a device used to convert electrical energy into ultrasonic sound waves, and vice-versa.↩︎

  671. This phenomenon is analogous to paddling a canoe across the width of a river, with the canoe bow angled upstream versus angled downstream. Angled upstream, the canoeist must overcome the velocity of the river and therefore takes longer to reach the other side. Angled downstream, the river’s velocity aids the canoeist’s efforts and therefore the trip takes less time.↩︎

  672. If you would like to prove this to yourself, you may do so by substituting path length (\(L\)), fluid velocity (\(v\)), and sound velocity (\(c\)) for the times in the flow formula. Use \(t_{up} = {L \over {c-v}}\) and \(t_{down} = {L \over {c+v}}\) as your substitutions, then algebraically reduce the flow formula until you find that all the \(c\) terms cancel. Your final result should be \(Q = {2kv \over L}\).↩︎
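
Readers who prefer to let a computer do the algebra may check the cancellation symbolically. This is a sketch using the sympy library; the flow formula shown in the comment is the one assumed by this exercise:

```python
import sympy as sp

L, v, c, k = sp.symbols('L v c k', positive=True)

t_up = L / (c - v)       # pulse traveling against the fluid flow
t_down = L / (c + v)     # pulse traveling with the fluid flow

# Flow formula assumed for this exercise: Q = k * (t_up - t_down) / (t_up * t_down)
Q = k * (t_up - t_down) / (t_up * t_down)

print(sp.simplify(Q))    # 2*k*v/L -- every c term cancels, as claimed
```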

  673. An instrument called a gas chromatograph is able to provide live measurement of gas composition, with a computer calculating the average speed of sound for the gas given the known types and percentages of each molecular compound comprising the gas mixture. It just so happens that gas composition analysis by chromatograph is something typically done for custody transfer flow measurement of natural gas anyway, for the primary purpose of calculating the gas’s heating value as a fuel, and therefore no additional investment of instrumentation is necessary to calculate the gas’s speed of sound in this application.↩︎

  674. See page 10 of Friedrich Hofmann’s Fundamentals of Ultrasonic Flow Measurement for industrial applications paper.↩︎

  675. Most notably, the problem of achieving good acoustic coupling with the pipe wall so that signal transmission to the fluid and signal reception back to the sensor may be optimized. Also, clamp-on ultrasonic flowmeters carry the potential for sound waves to “ring around the pipe” instead of traveling through the fluid, because the sound waves must travel through the full thickness of the pipe walls in order to enter and exit the fluid stream.↩︎

  676. Recall from algebra that we may perform any arithmetic operation we wish to any equation, so long as we apply that operation equally to both sides of the equation. Dividing one equation by another equation obeys this principle, because both sides of the second equation are equal. In other words, we could divide both sides of the first equation by \(P_A V_A\) (although that would not give us the solution we are looking for), but dividing the left side by \(P_A V_A\) and the right side by \(nR T_A\) is really doing the same thing, since \(nR T_A\) is identical in value to \(P_A V_A\).↩︎

  677. Division by \(t\) does not alter the equation at all, since we are essentially multiplying the left-hand side by \(t \over t\) which is multiplication by 1. This is why we did not have to apply \(t\) to the right-hand side of the equation.↩︎

  678. The wonderful thing about standards is that there are so many to choose from!↩︎

  679. In some applications, such as the custody transfer of natural gas, we are interested in something even more abstract: heating value. However, in order to calculate the gross heating value of a fuel gas stream, we must begin with an accurate mass flow measurement – volumetric flow is not really helpful.↩︎

  680. A “mole” is equal to a value of \(6.022 \times 10^{23}\) entities. Therefore, one mole of carbon atoms is 602,200,000,000,000,000,000,000 carbon atoms. For a more detailed examination of this subject, refer to section 3.7 beginning on page .↩︎

  681. I am purposely ignoring the fact that naturally occurring carbon has an average atomic mass of 12.011, and naturally occurring oxygen has an atomic mass of 15.9994.↩︎

  682. The British unit of the “pound” is technically a measure of force or weight and not mass. The proper unit of mass measurement in the British system is the “slug.” However, for better or worse, the “slug” is rarely used, and so engineers have gotten into the habit of using “pound” as a mass measurement. In order to distinguish the use of “pound” to represent mass (an intrinsic property of matter) as opposed to the use of “pound” to represent weight (an incidental property of matter), the former is abbreviated lbm (literally, “pounds mass”). In Earth gravity, “lbm” and “lb” are synonymous. However, the standard Newtonian equation relating force, mass, and acceleration (\(F = ma\)) does not work when “lbm” is the unit used for mass and “lb” is used for force (it does when “slug” is used for mass and “lb” is used for force, though!). A weird unit of force invented to legitimize “pound” as an expression of mass is the poundal (“pdl”): one “poundal” of force is the amount required to accelerate one “pound” of mass (lbm) at a rate of one foot per second squared. By this definition, a one-pound mass (1 lbm) in Earth gravity weighs approximately 32.2 poundals!↩︎

  683. One could argue that orifice plates and other pressure-based flowmeters respond primarily to mass flow rather than volumetric flow, since their operation is based on the pressure created by accelerating a mass. However, fluid density does affect the relationship between mass flow rate and differential pressure (note how the density term \(\rho\) appears in the mass flow equation \(W = k\sqrt{\rho (P_1 - P_2)}\), where it would not if differential pressure were a strict function of mass flow rate and nothing else), and so the raw output of these instruments must still be “compensated” by pressure and temperature measurements.↩︎
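
A sketch of this density dependence, using the mass flow equation quoted above (the coefficient and process figures are arbitrary placeholders):

```python
from math import sqrt

def mass_flow(k, rho, dp):
    """Pressure-based mass flow: W = k * sqrt(rho * (P1 - P2)).
    A change in fluid density changes W even if the DP stays the same."""
    return k * sqrt(rho * dp)

k = 10.0     # hypothetical flowmeter coefficient
dp = 100.0   # differential pressure, held constant for comparison

print(mass_flow(k, 52.0, dp))   # fluid density of 52 lb/ft^3
print(mass_flow(k, 48.0, dp))   # same DP, lighter fluid: less mass flow
```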

  684. The impeller-turbine and twin-turbine mass flowmeter types are examples of mechanical true-mass flow technologies. Both work on the principle of fluid inertia. In the case of the impeller-turbine flowmeter, an impeller driven by a constant-speed electric motor imparts a “spin” to a moving fluid, which then impinges on a stationary turbine wheel to generate a measurable torque. The greater the mass flow rate, the greater the impulse force imparted to the turbine wheel. In the twin-turbine mass flowmeter, two rotating turbine wheels with different blade pitches are coupled together by a flexible coupling. As each turbine wheel attempts to spin at its own speed, the inertia of the fluid causes a differential torque to develop between the two wheels. The more mass flow rate, the greater the angular displacement (offset) between the two wheels.↩︎

  685. In fact, this density-measuring function of Coriolis flowmeters is so precise that they often find use primarily as density meters, and only secondarily as flowmeters!↩︎

  686. An interesting experiment to perform consists of holding a water hose in a U-shape and gently swinging the hose back and forth like a pendulum, then flowing water through that same hose while you continue to swing it. The hose will begin to undulate, its twisting motion becoming visually apparent.↩︎

  687. This is an example of a vector cross-product where all three vectors are perpendicular to each other, and the directions follow the right-hand rule.↩︎

  688. The Coriolis force generated by a flowing fire hose as firefighters work to point it in a different direction can be quite significant, owing to the high mass flow rate of the water as it flows through the hose and out the nozzle!↩︎

  689. For those readers with an automotive bent, this is the same principle applied in opposed-cylinder engines (e.g. Porsche “boxer” air-cooled 6-cylinder engine, Volkswagen air-cooled 4-cylinder engine, BMW air-cooled motorcycle twin engine, Citroen 2CV 2-cylinder engine, Subaru 4- and 6-cylinder opposed engines, etc.). Opposite piston pairs are always 180\(^{o}\) out of phase for the purpose of maintaining mechanical balance: both moving away from the crankshaft or both moving toward the crankshaft, at any given time.↩︎

  690. An alternative to splitting the flow is to plumb the tubes in series so they must share the exact same flow rate, like series-connected resistors sharing the exact same amount of electrical current.↩︎

  691. The force coil is powered by an electronic amplifier circuit, which receives feedback from the sensor coils. Like any amplifier circuit given positive (regenerative) feedback, it will begin to oscillate at a frequency determined by the feedback network. In this case, the feedback “network” consists of the force coil, tubes, and sensor coils. The tubes, having both resilience and mass, naturally possess their own resonant frequency. This mechanical resonance dominates the feedback characteristic of the amplifier loop, causing the amplifier circuit to oscillate at that same frequency.↩︎

  692. This usually takes the form of a simple analog oscillator circuit, using the tubes and sensors as feedback elements. It is not unlike a crystal oscillator circuit where the mechanical resonance of a quartz crystal stabilizes the circuit’s frequency at one value. The feedback system naturally finds and maintains resonance, just as a crystal oscillator circuit naturally finds and maintains the resonant frequency of the quartz crystal when provided with ample regenerative (positive) feedback. As fluid density inside the tubes changes, the tubes’ mass changes accordingly, thus altering the resonant frequency of the system. The analog nature of this mechanical oscillator explains why some early versions of Coriolis flowmeters sometimes required a minor shake or tap to the flowtubes to start their oscillation!↩︎
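
The underlying relationship is that of any resonant spring-mass system. A sketch, with arbitrary stiffness and mass figures standing in for a real flowtube:

```python
from math import pi, sqrt

def resonant_frequency(stiffness, mass):
    """Natural frequency of a spring-mass system: f = (1 / (2*pi)) * sqrt(k / m).
    Denser fluid means more tube mass, and therefore a lower frequency."""
    return (1 / (2 * pi)) * sqrt(stiffness / mass)

k_tube = 5.0e6   # hypothetical tube stiffness, N/m
print(resonant_frequency(k_tube, 1.00))   # tube filled with a lighter fluid
print(resonant_frequency(k_tube, 1.05))   # denser fluid: frequency drops
```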

  693. If you consider each tube as a container with a fixed volume capacity, a change in fluid density (e.g. pounds per cubic foot) must result in a change in mass for each tube.↩︎

  694. An important caveat is that the RTD sensing tube temperature in a Coriolis flowmeter really measures the tubes’ outside skin temperature, which may not be exactly the same as the temperature of the fluid inside the tube. If the ambient temperature near the flowmeter differs substantially from the fluid’s temperature, the tube skin temperature reading may not be accurate enough for the flowmeter to double as a fluid temperature transmitter.↩︎

  695. Significant technological progress has been made on mixed-phase Coriolis flow measurement, to the point where this may no longer be a serious consideration in the future.↩︎

  696. For example, the specific heat of water is 1.00 kcal / kg \(\cdot\) \(^{o}\)C, meaning that the addition of 1000 calories of heat energy is required to raise the temperature of 1 kilogram of water by 1 degree Celsius, or that we must remove 1000 calories of heat energy to cool that same quantity of water by 1 degree Celsius. Ethyl alcohol, by contrast, has a specific heat value of only 0.58 kcal / kg \(\cdot\) \(^{o}\)C, meaning it is almost twice as easy to warm up or cool down as water (little more than half the energy required to heat or cool water needs to be transferred to heat or cool the same mass quantity of ethyl alcohol by the same amount of temperature).↩︎
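
A minimal sketch comparing the two specific heat values just quoted:

```python
def heat_required(mass_kg, specific_heat, delta_T):
    """Heat energy (kcal) to change a mass's temperature: Q = m * c * delta_T,
    with specific heat c expressed in kcal / (kg * degC)."""
    return mass_kg * specific_heat * delta_T

# Warming one kilogram of each liquid by one degree Celsius:
print(heat_required(1, 1.00, 1))   # water:         1.00 kcal
print(heat_required(1, 0.58, 1))   # ethyl alcohol: 0.58 kcal
```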

  697. In a laminar flowstream, individual molecules do not cross paths, but rather travel in parallel lines. This means only those molecules traveling near the wall of a tube will be exposed to the temperature of the wall. The lack of “mixing” in a laminar flowstream means molecules traveling in the inner portions of the stream never contact the tube wall, and therefore never serve to transfer heat directly to or from the wall. At best, those inner-path molecules transfer heat by conduction with adjacent molecules, which is a less efficient transfer mechanism than convection.↩︎

  698. The proper mass flow rate value corresponding to these two measurements would be 45.0 lb/h.↩︎

  699. While this may seem like a very informal definition of differential, it is actually rooted in a field of mathematics called nonstandard analysis, and closely compares with the conceptual notions envisioned by calculus’ founders.↩︎

  700. To be precise, the equation describing the function of this analog differentiator circuit is: \(V_{out} = -RC {dV_{in} \over dt}\). The negative sign is an artifact of the circuit design – being essentially an inverting amplifier with negative gain – and not an essential element of the math.↩︎
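
A discrete-time sketch of the same operation, approximating the derivative with a finite difference (the R and C values are arbitrary):

```python
R = 100e3    # ohms (arbitrary)
C = 1e-6     # farads (arbitrary)
dt = 0.001   # seconds between samples

v_in = [0.0, 0.1, 0.2, 0.3, 0.4]   # a ramp rising at 100 volts per second

# V_out = -R*C * dV_in/dt, with the derivative approximated by a finite difference
v_out = [-R * C * (v_in[i + 1] - v_in[i]) / dt for i in range(len(v_in) - 1)]
print(v_out)   # each element is ~ -10 volts for this ramp
```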

  701. This is not always the case, as primary elements are often found on throttled process lines. In such cases where a control valve normally throttles the flow rate, any energy dissipated by the orifice plate is simply less energy that the valve would otherwise be required to dissipate. Therefore, the presence or absence of an orifice plate has no net impact on energy dissipation when used on a process flow throttled by a control valve, and therefore does not affect cost over time due to energy loss.↩︎

  702. Truth be told, free hydrogen ions are extremely rare in an aqueous solution. You are far more likely to find them bound to normal water molecules to form positive hydronium ions (H\(_{3}\)O\(^{+}\)). For simplicity’s sake, though, professional literature often refers to these positive ions as “hydrogen” ions and even represents them symbolically as H\(^{+}\).↩︎

  703. Ionic compounds are formed when oppositely charged atomic ions bind together by mutual attraction. The distinguishing characteristic of an ionic compound is that it is a conductor of electricity in its pure, liquid state. That is, it readily separates into anions and cations all by itself. Even in its solid form, an ionic compound is already ionized, with its constituent atoms held together by an imbalance of electric charge. Being in a liquid state simply gives those atoms the physical mobility needed to dissociate.↩︎

  704. Covalent compounds are formed when electrically neutral atoms bind together by the mutual sharing of valence electrons. Such compounds are not good conductors of electricity in their pure, liquid states.↩︎

  705. It should be noted that the relationship between conductivity and electrolyte concentration in a solution is typically non-linear. Not only does the electrical conductivity of a solution not follow a set proportion to concentration, but even the slope of the relationship may change from positive to negative over a wide range of concentrations. This fact makes conductivity measurement in liquid solutions useful for concentration analysis only over limited ranges.↩︎

  706. The use of alternating current forces the ions to switch directions of travel many times per second, thus reducing the chance they have of bonding to the metal electrodes.↩︎

  707. There will be very little if any fouling on these electrodes anyway because they carry no current, and thus provide no reason for ions to migrate toward them.↩︎

  708. Toroidal conductivity sensors may suffer calibration errors if the fouling is so bad that the hole becomes choked off with sludge, but this is an extreme condition. These sensors are far more tolerant to fouling than any form of contact-type (electrode) conductivity cell.↩︎

  709. Note that this is opposite the behavior of a direct-contact conductivity cell, which produces less voltage as the liquid becomes more conductive.↩︎

  710. Truth be told, the color of a hydrangea blossom is only indirectly determined by soil pH. Soil pH affects the plant’s uptake of aluminum, which is the direct cause of color change. Interestingly, the pH-color relationship of a hydrangea plant is exactly opposite that of common laboratory litmus paper: red litmus paper indicates an acidic solution while blue litmus paper indicates an alkaline solution; whereas red hydrangea blossoms indicate alkaline soil while blue (or violet) hydrangea blossoms indicate acidic soil.↩︎

  711. Flavin, classified as an anthocyanin, is the pigment in red cabbage responsible for the pH-indicating behavior. This same pigment also changes color according to soil pH while the cabbage plant is growing, much like a hydrangea. Unlike that of hydrangeas, the coloring of a red cabbage is more akin to litmus paper, with red indicating acidic soil.↩︎

  712. Of course, ions possess no agency and therefore cannot literally “attempt” anything. What is happening here is the normal process of diffusion whereby the random motions of individual molecules tend to evenly distribute those molecules throughout a space. If a membrane divides two solutions of differing ionic concentration, ions from the more concentrated region will, over time, migrate to the region of lower concentration until the two concentrations are equal to each other. Truth be told, ions are continually migrating in both directions through the porous membrane at all times, but the rate of migration from the high-concentration solution to the low-concentration solution is greater than the other direction simply because there are more ions present to migrate that way. After the two solutions have become equal in ionic concentration, the random migration still proceeds in both directions, but now the rates in either direction are equal and therefore there is zero net migration.↩︎

  713. This is apparent from a mathematical perspective by examination of the Nernst equation: if the concentrations are equal (i.e. \(C_1 = C_2\)), then the ratio of \(C_1 \over C_2\) will be equal to 1. Since the logarithm of 1 is zero, this predicts zero voltage generated across the membrane. From a chemical perspective, this corresponds to the condition where random ion migration through the porous membrane is equal in both directions. In this condition, the Nernst potentials generated by the randomly-migrating ions are equal in magnitude and opposite in direction (polarity), and therefore the membrane generates zero overall voltage.↩︎
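
A sketch of this calculation using the standard form of the Nernst equation, assuming a temperature of 25 degrees Celsius and a monovalent ion:

```python
from math import log

R = 8.314     # gas constant, J / (mol*K)
F = 96485.0   # Faraday constant, C / mol

def nernst(C1, C2, T=298.15, n=1):
    """Nernst potential across a membrane separating ion concentrations
    C1 and C2: V = (R*T / (n*F)) * ln(C1 / C2)."""
    return (R * T / (n * F)) * log(C1 / C2)

print(nernst(0.1, 0.01))    # ten-fold ratio: ~0.0592 volts at 25 degrees C
print(nernst(0.05, 0.05))   # equal concentrations: 0.0 volts, as stated above
```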

  714. It is a proven fact that sodium ions in relatively high concentration (compared to hydrogen ions) will also cause a Nernst potential across the glass of a pH electrode, as will certain other ion species such as potassium, lithium, and silver. This effect is commonly referred to as sodium error, and it is usually only seen at high pH values where the hydrogen ion concentration is extremely low. Like any other analytical technology, pH measurement is subject to “interference” from species unrelated to the substance of interest.↩︎

  715. Remember that voltage is always measured between two points!↩︎

  716. Hydrogen ion concentration being practically the same as hydrogen ion activity for dilute solutions. In highly concentrated solutions, hydrogen ion concentration may exceed hydrogen ion activity because the ions may begin to interact with each other and with other ion species rather than act as independent entities. The ratio of activity to concentration is called the activity coefficient of the ion in that solution.↩︎

  717. The mathematical sign of probe voltage is arbitrary. It depends entirely on whether we consider the reference (buffer) solution’s hydrogen ion activity to be \(C_1\) or \(C_2\) in the equation. Whichever way we choose to calculate this voltage, though, the polarity will be opposite for acidic pH values as compared to alkaline pH values.↩︎

  718. Glass is a very good insulator of electricity. With a thin layer of glass being an essential part of the sensor circuit, the typical impedance of that circuit will lie in the range of hundreds of mega-ohms!↩︎

  719. Operational amplifier circuits with field-effect transistor inputs may easily achieve input impedances in the tera-ohm range (\(1 \times 10^{12} \> \Omega\)).↩︎

  720. With all modern pH instruments being digital in design, this calibration process usually entails pressing a pushbutton on the faceplate of the instrument to “tell” it when the probe has stabilized in the buffer solution. Clean and healthy pH probes typically stabilize to the buffer solution’s pH value within 30 seconds of immersion.↩︎

  721. A more obvious test would be to directly measure the pH probe assembly’s voltage while immersed in 7.0 pH buffer solution. However, most portable voltmeters lack sufficient input impedance to perform this measurement, and so it is easier to calibrate the pH instrument in 7.0 pH buffer and then check its zero-voltage pH value to determine where the isopotential point lies.↩︎

  722. This effect is particularly striking when paper-strip chromatography is used to analyze the composition of ink. It is really quite amazing to see how many different colors are contained in plain “black” ink!↩︎

  723. Gas chromatographs are commonly used for industrial analysis on liquid sample streams, by using a heater at the inlet of the chromatograph to vaporize the liquid sample prior to analysis. In such applications the column and sample valve(s) must be maintained in a heated condition as well so that the sample does not condense back into liquid form during the analysis.↩︎

  724. Stationary phase material used in many hydrocarbon GC’s looks much like oily sand.↩︎

  725. This is not to say that one cannot use a selective sensor as a chromatograph detector. It’s just that selectivity between different process compounds is not a necessary requirement for a chromatograph detector.↩︎

  726. It should be noted that the choice of carrier for any chromatography system, be it manual or automated, is not completely arbitrary. There are some limitations to which carrier fluids may be used, depending on the expected composition of the sample (e.g. you would not want to use a carrier that reacted chemically with any species in the sample so as to alter the sample’s composition!). However, the range of choices afforded to the person designing the chromatograph system lends a unique flexibility to this type of chemical analysis.↩︎

  727. A “solute” being one of the sample species dissolved within the carrier gas.↩︎

  728. In fact, FID sensors are sometimes referred to as carbon counters, since their response is almost directly proportional to the number of carbon atoms passing through the flame.↩︎

  729. See section [thermal mass flowmeter specific heat] beginning on page . The greater the specific heat value of a gas, the more heat energy it can carry away from a hot object through convection, all other factors being equal.↩︎

  730. It is not uncommon to find chromatographs used in processes to measure the concentration of a single chemical species, even though the device is capable of measuring the concentrations of multiple species within that process stream. In those cases, chromatography is (or was at the time of installation) the most practical analytical technique to use for quantitative detection of that substance. Why else use an inherently multi-variable analyzer when you could have used a single-variable technology that was simpler? By analogy, it is possible to use a Coriolis flowmeter to measure nothing but fluid density, even though such a device is fully capable of measuring fluid density and mass flow rate and temperature.↩︎

  731. Additionally, the data collected by this GC is used to improve the flow-measurement accuracy of their AGA3 honed-run orifice meters. By measuring the concentrations of different compounds in the natural gas, the GC tabulates an average density for the gas, which is then sent to the flow computer to achieve better flow-measuring accuracy than would be possible without this compensating measurement.↩︎

  732. Laboratory chromatographs may take even longer to complete their analyses.↩︎

  733. Whereas most liquids decrease in viscosity as temperature rises, gases increase in viscosity as they get hotter. In other words, a gas becomes “thicker” as it heats up, thus slowing down its progress through a chromatograph column. Since the flow regime through a chromatograph column is most definitely laminar and not turbulent, viscosity has a great effect on flow rate.↩︎

  734. Since the degree of separation between species is roughly proportional to the species’ retention time, the slowest species (4, 5, and 6 in this case) do not need to go through two columns to be adequately separated. Only the fastest species need more retention time (through an additional column) to separate adequately from one another.↩︎

  735. In physics, a “blackbody” is a perfect emitter of electromagnetic radiation (photons) as it is heated. The intensity of light emitted as a function of wavelength (\(\lambda\)) and temperature (\(T\)) is \(I = {{2 \pi h c^2 \lambda^{-5}} \over {e^{hc / \lambda k T} - 1}}\).↩︎
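
As a rough numerical check of this formula, here is a minimal Python sketch (constant and function names are my own) evaluating the blackbody intensity at a given wavelength and temperature:

```python
import math

# Physical constants in SI units
h = 6.626e-34   # Planck's constant, J*s
c = 2.998e8     # speed of light, m/s
k = 1.381e-23   # Boltzmann's constant, J/K

def blackbody_intensity(wavelength_m, temp_K):
    """Evaluate I = 2*pi*h*c^2*lambda^-5 / (exp(hc/(lambda*k*T)) - 1)."""
    return (2 * math.pi * h * c**2 * wavelength_m**-5 /
            (math.exp(h * c / (wavelength_m * k * temp_K)) - 1))

# A 5800 K source (roughly the sun's surface) emits strongly at 500 nm;
# a 300 K object emits essentially nothing at that wavelength:
print(blackbody_intensity(500e-9, 5800))
print(blackbody_intensity(500e-9, 300))
```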

  736. Molecules typically have much more complex interactions with light than individual atoms. The optical signatures of atoms are principally defined by electron states, light absorbed when electrons are boosted into higher-energy states and light emitted when electrons fall into lower-energy states. Molecules, on the other hand, can absorb and release energy in the inter-atomic bonds as well as in the states of individual electrons. Since molecules have more degrees of freedom with respect to optical interactions, their optical signatures tend to be much broader. This is why molecular absorption spectra consist of broad bands of wavelengths (each band comprised of many discrete lines), while atomic absorption spectra consist of relatively few lines.↩︎

  737. These photons have wavelengths longer than 700 nm, and so have energy values too low to boost electrons into higher levels. However, the attractive bonds between atoms in a molecule may be subject to the energy of these infrared photons, and so may dissipate the photons’ energy and thereby attenuate a beam of infrared light.↩︎

  738. In an absorption spectrum diagram, a non-absorbing substance results in a straight line at the 100% mark. Compounds absorbing specific wavelengths of light will produce low “dips” in the graph at those wavelength values, showing how less light (of those wavelengths) is able to pass un-absorbed through the sample to be detected at the other end. By contrast emission spectra are usually plotted with the characteristic wavelengths shown as high “peaks” in a graph that normally resides at 0%.↩︎

  739. Wavenumber, being the reciprocal of wavelength in centimeters, may be thought of in terms of frequency: the greater the wavenumber, the higher the frequency of the light wave (the smaller its wavelength). In order to convert wavenumber into wavelength (in microns), reciprocate the wavenumber value and multiply by \(10^4\). For example, a wavenumber of 2000 cm\(^{-1}\) is equivalent to a wavelength of 5 microns. In order to convert wavenumber into wavelength (in nanometers), reciprocate the wavenumber value and multiply by \(10^7\). For example, a wavenumber of 4000 cm\(^{-1}\) is equivalent to a wavelength of 2500 nm.↩︎
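
Both conversions reduce to a single line of arithmetic; a minimal Python sketch (function names are mine):

```python
def wavenumber_to_microns(nu_cm):
    """Convert wavenumber (cm^-1) to wavelength in microns."""
    return (1.0 / nu_cm) * 1e4

def wavenumber_to_nm(nu_cm):
    """Convert wavenumber (cm^-1) to wavelength in nanometers."""
    return (1.0 / nu_cm) * 1e7

print(wavenumber_to_microns(2000))  # 5.0 microns
print(wavenumber_to_nm(4000))       # 2500.0 nm
```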

  740. One such analyzer I saw in industry had a path length of a quarter-mile (1320 feet), to better measure extremely low concentrations of a gas! The gas in question was ambient air inside of a large shelter housing a chemical process. The analyzer was mounted on one side of the shelter, aiming a beam of laser light all the way to the opposite wall of the shelter 660 feet away, where a reflector was mounted. The laser beam’s path length was therefore twice the length of the shelter, or 1320 feet.↩︎

  741. You may use an old compact disk (CD) as a simple reflection and refraction grating. Holding the CD with the reflective (shiny) surface angled toward you, light reflected from a bright source such as a lamp (avoid using the sun, as you can easily damage your eyes viewing reflected sunlight!) will split into its constituent colors by reflection off the CD’s surface. Lines in the plastic of the CD perform the dispersion of wavelengths. You will likely have to experiment with the angle you hold the CD, pointing it more perpendicular to the lamp’s direction and more angled to your eyes, before you see the image of the lamp “smeared” as a colorful spectrum. To use the CD as a diffraction grating, you will have to carefully peel the reflective aluminum foil off the front side of the disk. Use a sharp tool to scribe the disk’s front surface from center to outer edge (tracing a radius line), then use sticky tape to carefully peel the scribed foil off the plastic disk. When you are finished removing all the foil, you may look through the transparent plastic and see spectra from light sources on the other side. Once again, experimentation is in order to find the optimum viewing angle, and be sure to avoid looking at the sun this way!↩︎

  742. One might wonder why the sun does not produce a line-type emission spectrum of all its constituent elements, instead of the continuous spectrum it actually produces. The answer to this question is that emission spectra are produced only when the “excited” atoms are in relative isolation from each other, such as is the case in a low-pressure gas. In solids, liquids, and high-pressure gases, the close proximity of the atoms to each other creates many different opportunities for electrons to “jump” to lower energy levels. With all those different alternatives, the electrons emit a whole range of different wavelength photons as they seek lower energy levels, not just the few wavelengths associated with the limited energy levels offered by an isolated atom. We see the same effect on Earth when we heat metals: the electrons in a solid or liquid metal sample have so many different optional energy levels to “fall” to, they end up emitting a broad spectrum of wavelengths instead of just a few. In this way, a molten metal is a good approximation of a blackbody photon source.↩︎

  743. These details were taken from pages 93-94 of Instrumentation and Control in the German Chemical Industry, a fascinating book detailing the state-of-the-art in process instrumentation in German chemical manufacturing facilities following the war.↩︎

  744. There will still be a span shift resulting from degradation of the light source, but this is inevitable. At least with this design, the zero-shift problem is eliminated.↩︎

  745. In analytical literature, you may read of some detectors having a catholic response. This is just a fancy way of saying the detector responds to a wide variety of things. The thermopiles shown in this NDIR instrument could be considered to have a catholic response to incident light. The word “catholic” in this context simply means “universal,” referring to the detector’s non-selectivity. Do not be dismayed if you encounter arcane terms such as “catholic” as you learn more about analytical instruments – the author is probably just trying to impress you with his or her vocabulary!↩︎

  746. Recall that the absorption of light by an atom or a molecule causes the photon’s energy to dissipate. An absorbed photon’s energy is transferred to the molecule, often resulting in increased motion (kinetic energy), which as we know is the very definition of temperature. Increased temperature in a gas of confined volume and fixed molecular quantity must result in an increased pressure according to the Ideal Gas Law (\(PV = nRT\)).↩︎

  747. The flow sensor is similar in design to thermal mass flow sensors discussed in the flow measurement chapter. See section 22.7.2 beginning on page for more information.↩︎

  748. And hopefully after all this filtering we still have some (unfiltered) wavelengths unique to the gas of interest we seek to measure. Otherwise, there will be no wavelengths of light remaining to be absorbed by our gas of interest inside the sample cell, which means we will have no means of spectroscopically measuring its concentration!↩︎

  749. Real GFC analyzers also have a chopper wheel preceding the filter wheel to create a pulsating light beam. This causes the detector signal to pulsate as well, allowing the analyzer to electronically filter out sensor “drift” just as in the dual-beam NDIR analyzer design. The chopper wheel has been eliminated from this diagram (and from the discussion) for simplicity. If it were not for the chopper wheel, the GFC analyzer would be prone to measurement errors caused by detector drift.↩︎

  750. As previously mentioned, real GFC analyzers have a chopper wheel preceding the filter wheel to make the light beam pulse in addition to changing its spectral composition. This chopper wheel generates multiple light pulses per rotation of the filter wheel. Thus, the signal output by the detector is actually an amplitude-modulated waveform, with the “carrier” frequency being the chopper wheel’s pulsing and the slower “modulating” frequency being the filter wheel’s rotation cycle. Hopefully by now you see why I decided to omit the chopper wheel “for simplicity.”↩︎

  751. The term “laser” is actually an acronym, standing for Light Amplification by Stimulated Emission of Radiation.↩︎

  752. It is this coherence of laser light that enables the beam to remain highly focused, unlike light from other sources which tends to spread.↩︎

  753. Such mirrors are partially silvered to let some light through while reflecting the rest of the light.↩︎

  754. A term often applied to this phenomenon of a QCL’s frequency is chirp. A “chirp” refers to a burst of signal frequencies either increasing or decreasing along some range.↩︎

  755. Blood, urine, semen, and various bodily proteins are known to fluoresce in the visible spectrum, making fluorescence a useful tool for crime-scene investigations. It’s also useful when purchasing a new house, to check for pet droppings in the carpet. Such analysis is not for the faint of heart.↩︎

  756. There is another way that light from the UV lamp could conceivably “take a corner” and reach the detector, and that is if the gas sample happens to contain dust or condensation droplets that would scatter the light. However, since gas samples are always dried and filtered prior to entering the sample chamber, this possibility is eliminated.↩︎

  757. If one were to install an optical filter in front of the photomultiplier tube designed to block fluorescent light emitted by hydrocarbon molecules, this filter would also block the light emitted by fluorescing SO\(_{2}\) molecules thereby defeating the very purpose of the analyzer: measuring SO\(_{2}\) concentration by optical fluorescence!↩︎

  758. Combustion is primarily a reaction between carbon and/or hydrogen atoms in fuel, and oxygen atoms in air. However, about 78% of the air (by volume) is nitrogen, and only about 20.9% is oxygen, which means a lot of nitrogen gets pulled in with the oxygen during combustion. Some of these nitrogen atoms combine with oxygen atoms under the high temperature of combustion to form various oxides of nitrogen.↩︎

  759. The measures used to mitigate nitric oxide emissions are the same measures used to mitigate the other oxides of nitrogen: reduce combustion temperature, and/or reduce the NO\(_{x}\) compounds to elemental nitrogen by mixing the combustion exhaust gases with ammonia (NH\(_{3}\)) in the presence of a catalyst. So here we have a case where we really don’t care to distinguish NO from NO\(_{x}\): we want to measure it all.↩︎

  760. This particular interference compound is especially problematic if we are using the analyzer to control the NO\(_{x}\) concentration in the exhaust of a combustion process, and the manipulated variable for the NO\(_{x}\) control loop is pure ammonia injected into the exhaust. Un-reacted ammonia (commonly called ammonia slip in the industry) sampled by the analyzer will be falsely interpreted as NO\(_{x}\), rendering the measurement meaningless, and therefore making control virtually impossible.↩︎

  761. In-situ pH probes are manufactured for high-pressure applications, but they suffer short lifespans (due to the accelerated erosion of the measurement glass) and decreased sensitivity (due to the extra thickness of the measurement glass) and are substantially more expensive than pH probes designed for atmospheric pressure conditions.↩︎

  762. Pressure control is important in gas analysis because changes in sample gas pressure will result in different gas densities, thereby directly affecting how many molecules of the gas of interest will be present and therefore detectable inside the analyzer.↩︎

  763. Temperature control is important for similar reasons: the gas species of interest may become more reactive as temperature changes, thereby resulting in a stronger indication even when concentration remains constant.↩︎

  764. It is important to thoroughly filter the gas input to an analyzer so that contaminants do not foul the sensing element(s). This is rather obvious in the case of optical analyzers, where the light to be analyzed must pass through a transparent window of some kind, and that window must be kept clean of dust, condensation, and any other substances that could interfere with the transmission of light.↩︎

  765. Some types of plastic sample tubes are permeable to gases, and so represent potential contamination points when the concentrations of interest are in the range of parts per million (ppm) or parts per billion (ppb). In such critical applications, only metal sample tubes (stainless steel, typically) are appropriate.↩︎

  766. The “other” gas in the mixture besides the gas or gases of interest.↩︎

  767. Interestingly, there is a documented case of an NDIR “Luft” analyzer being used as a safety monitor for carbon monoxide, ranged 0 to 0.1% (0-1000 ppm), at one of I.G. Farbenindustrie’s chemical plants in Germany during the 1940’s. This was definitely not a portable analyzer, but rather stationary-mounted in a process unit where high concentrations of carbon monoxide gas existed in the pipes and reaction vessels. The relatively fast response and high selectivity of the NDIR technology made it an ideal match for the application, considering the other (more primitive) methods of carbon monoxide gas detection which could be “fooled” by hydrogen, methane, and other gases.↩︎

  768. Some water treatment facilities use powerful ultraviolet lamps to disinfect water without the use of chemicals. Some potable (drinking) water treatment plants use ozone gas (O\(_{3}\)) as a disinfectant, which is generated on-site from atmospheric oxygen. A disadvantage to both chlorine-free approaches for drinking water is that neither one provides lasting disinfection throughout the distribution and storage system to the same degree that chlorine does.↩︎

  769. The “spectrum analyzer” display often seen on high-quality audio reproduction equipment such as stereo equalizers and amplifiers is an example of the Fourier Transform applied to music. This exact same technology may be applied to the analysis of a machine’s vibration to indicate sources of vibration, since different components of a machine tend to generate vibratory waves of differing frequencies.↩︎
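
To make the idea concrete, here is a minimal NumPy sketch (the signal itself is hypothetical) showing how a Fourier transform separates two vibration components of different frequency:

```python
import numpy as np

fs = 1000                    # sampling rate, Hz
t = np.arange(0, 1, 1 / fs)  # one second of samples

# Hypothetical machine vibration: a 60 Hz rotor component plus a smaller
# 180 Hz component from some other part of the machine.
signal = 1.0 * np.sin(2 * np.pi * 60 * t) + 0.3 * np.sin(2 * np.pi * 180 * t)

spectrum = np.abs(np.fft.rfft(signal)) / (len(t) / 2)  # normalized amplitudes
freqs = np.fft.rfftfreq(len(t), 1 / fs)

for f, a in zip(freqs, spectrum):
    if a > 0.1:
        print(f"{f:.0f} Hz: amplitude {a:.2f}")
# Prints peaks at 60 Hz (~1.00) and 180 Hz (~0.30)
```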

  770. This rule makes intuitive sense as well: if a sine or cosine wave increases frequency while maintaining a constant peak-to-peak amplitude, the rate of its rise and fall must increase as well, since the higher frequency represents less time (shorter period) for the wave to travel the same amplitude. Since the derivative is the rate of change of the waveform, this means the derivative of a waveform must increase with that waveform’s frequency.↩︎

  771. Recall that the derivative of the sinusoidal function \(\sin \omega t\) is equal to \(\omega \cos \omega t\), and that the second derivative of \(\sin \omega t\) is equal to \(-\omega^2 \sin \omega t\). With each differentiation, the constant of angular velocity (\(\omega\)) is applied as a multiplier to the entire function.↩︎

  772. There is an additional term missing in this Fourier series, and that is the “DC” or “bias” term \(A_0\). Many non-sinusoidal waveforms having peak values centered about zero on a graph or oscilloscope display actually have average values that are non-zero, and the \(A_0\) term accounts for this. However, this is usually not relevant in discussions of machine vibration, which is why I have opted to present the simplified Fourier series here.↩︎
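
For reference, one common way of writing the series with the bias term included (the exact notation used in the body text may differ) is:

\[x(t) = A_0 + \sum_{n=1}^{\infty} A_n \sin \left( n \omega t + \varphi_n \right)\]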

  773. We have no way of knowing this from the Fourier spectrum plot, since that only shows us amplitude (height) and frequency (position on the x-axis).↩︎

  774. Machines with reciprocating components, such as pistons, cam followers, poppet valves, and such are notorious for generating vibration signatures which are anything but sinusoidal even under normal operating circumstances!↩︎

  775. From the perspective of measurement, it would be ideal to affix a velocimeter or accelerometer sensor directly to the rotating element of the machine, but this leads to the problem of electrically connecting the (now rotating!) sensor to stationary analysis equipment. Unless the velocity or acceleration sensor is wireless, the only practical mounting location is on the stationary frame of the machine.↩︎

  776. Single-line electrical diagrams are similar to Process Flow Diagrams (PFDs) used in industrial instrumentation, concentrating on the process flows more than the monitoring and control equipment. It is important to note that single-line diagrams are not the same as electrical schematics: in a single-line diagram, each line represents a set of power conductors (typically three or four conductors if the power system is 3-phase, which most large-scale AC power systems are). For this reason, we must interpret a single-line diagram much more like a pipeline system than an electrical circuit, in that the electrical power flows in one direction at any given time through these single lines, never making a complete loop as current actually does in real life and as shown in an electrical schematic diagram.↩︎

  777. In the electrical power industry, the color red universally represents an energized (closed breaker) condition while the color green represents a de-energized (open breaker) condition.↩︎

  778. For example, a potential transformer (PT) constructed to step 13.8 kilovolts down to 120 volts for safe monitoring of that line voltage must have a turns ratio equivalent to 13800:120, or 115:1.↩︎

  779. For example, a current transformer (CT) constructed to step 400 amps down to 5 amps for safe monitoring of that line current must have a turns ratio equivalent to 400:5, or 80:1. This means the single “turn” of the power conductor through the center of the CT is flanked by exactly 80 turns of wire wrapped around the toroidal iron core of the CT.↩︎
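
The ratio arithmetic in this footnote and the previous one is identical in form; a trivial Python sketch (function name is mine):

```python
def equivalent_turns_ratio(primary, secondary):
    """Turns ratio of an instrument transformer from its nameplate rating."""
    return primary / secondary

print(equivalent_turns_ratio(13800, 120))  # PT example: 115.0, i.e. 115:1
print(equivalent_turns_ratio(400, 5))      # CT example: 80.0, i.e. 80:1
```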

  780. To review, the power factor of an AC circuit is the cosine of the phase angle between total (source) voltage and total (source) current. Power factor represents how much of the line current goes toward doing useful work. Reactive loads do not transform electrical energy into work, but rather alternately store and release electrical energy. Current at a purely reactive load, therefore, is not as useful as current at a purely resistive load. However, reactive current still “occupies” ampacity on a power line, and so the existence of a low power factor means the system is not delivering as much power as it could.↩︎

  781. This legacy technology is called Power Line Carrier, or PLC which is unfortunately confusing because it has nothing to do with Programmable Logic Controllers (also abbreviated PLC). The concept is not unlike the HART analog-digital hybrid system used to communicate digital information to process transmitters over 4-20 mA analog signal lines, except in the case of power-line carrier systems the signal frequencies are much higher and the challenge of safely coupling these signals to high-voltage power line conductors is much greater.↩︎

  782. To review, impedance is the sum total opposition to electric current in a circuit, consisting of resistance and/or reactance. Impedance is measured in ohms, and so a distance relay (21) is set to “pick up” a fault in a power line if the measured impedance of that line falls below a threshold value based on the length of that line.↩︎

  783. The difference between an instantaneous overcurrent (50) function and a time-overcurrent (51) function is the amount of time delay between the detection of an overcurrent event and the relay’s command to trip the circuit breaker. Any detected level of line current in excess of the instantaneous overcurrent “pickup” threshold will immediately issue a trip command, while the level of line current in excess of the time-overcurrent “pickup” threshold will determine the amount of time delay before the issuance of a trip command.↩︎

  784. Directional relays are useful for protecting electrical generators susceptible to acting as a motor and drawing power from the network rather than delivering power to the network. Generators driven by wind turbines are an example of this class: even a relatively small amount of power flowing in reverse direction (from the grid to the generator, “motoring” the generator) is undesirable, and so it is wise to isolate a “motoring” generator based on a much lower current than what would be considered unacceptable in the generating direction. A regular 50 or 51 overcurrent relay cannot discriminate between the two directions of power flow, but a 67 overcurrent relay can.↩︎

  785. These mechanisms are similar in principle to the trigger, spring, and hammer of a firearm: the mechanical energy necessary to ignite the primer of a cartridge comes from a spring that has been “charged” either by manual operation or by the action of the gun during the last firing cycle. This spring energy is released by a sensitive sear mechanism driven by the finger-operated trigger, requiring very little energy to operate. In a similar manner, the operating springs of large circuit breakers are “charged” by an electric motor whenever a relaxed state is detected. That mechanical energy is then released by a relatively sensitive mechanism driven by an electric solenoid, allowing a small electrical signal to rapidly operate the large contact mechanism.↩︎

  786. The sole purpose of transforming voltage and current levels in a power grid is to minimize power losses due to the electrical resistance of the conductors. Recall from basic DC electrical theory that the amount of power dissipated by a current-carrying resistance is \(P = I^2 R\). This means doubling the current through a resistive conductor will increase that conductor’s power dissipation four-fold, all other factors being equal. Metal wire is expensive, especially when thousands of miles of it must be run to form a power grid. In the interest of reducing this expense, transformers are used to maintain long-distance power line voltages high and currents low, permitting the use of smaller-gauge conductors to carry that current.↩︎
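
To see the magnitude of this effect, consider a short Python sketch comparing the same delivered power at two hypothetical line voltages (the power, voltage, and resistance figures are invented for illustration):

```python
def line_loss(power_W, line_voltage_V, line_resistance_ohms):
    """I^2 * R loss in a line delivering a given power at a given voltage."""
    current = power_W / line_voltage_V
    return current**2 * line_resistance_ohms

# Delivering 10 MW through conductors totaling 5 ohms of resistance:
print(line_loss(10e6, 13.8e3, 5))  # ~2.6 MW lost at 13.8 kV
print(line_loss(10e6, 230e3, 5))   # ~9.5 kW lost at 230 kV
```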

  787. Thomas Edison’s original DC-based power grid was limited in radius to the size of a city, because all components operated at one voltage level (about 110 VDC). Large copper busbars served as distribution lines from coal-fired generating stations to points throughout the city, the sheer mass of these copper bars necessitating their installation in underground trenches rather than as overhead lines. Voltage losses from the generating station to points at the furthest reaches of the DC grid were significant, meaning customers at the “end of the line” had to tolerate dimmer lamps than customers located nearer the generating station.↩︎

  788. The source for this historical illustration is Cassier’s Magazine, which was an engineering periodical published in the late 1800’s and early 1900’s out of London, England. The Smithsonian Institution maintains online archives of Cassier’s spanning many years, and it is a treasure-trove for those interested in the history of mechanical, electrical, chemical, and civil engineering.↩︎

  789. Other options may exist for some grids. For example, large-scale industrial customers may be requested to curtail their power consumption at certain times in order to offset a deficit in supply. An example of this might be an aluminum smelter (which uses hundreds of megawatts of electricity to reduce alumina powder to molten aluminum metal) operating as a sheddable load while the same grid employs a nuclear fission power plant as one of its sources. If the nuclear generator’s reactor happens to “scram” (shut down for any reason), that reactor’s power output will drop off the grid immediately, which may constitute hundreds of megawatts of lost generation. In such an event, the grid dispatch system may issue a “load shed” command to the aluminum smelter to drop a substantial portion of its consumption, as it may not be practical to immediately bring that much extra power on-line from some other source.↩︎

  790. This phenomenon is just one more application of the Law of Energy Conservation, which states energy cannot be created or destroyed, but must be accounted for in all processes. Every joule of energy delivered to the load in this example circuit must be supplied by the generator, which in turn draws (at least) one joule of energy from the prime mover (e.g. engine, turbine). Since the power “grid” shown in this diagram has no means of storing energy for future use, the load’s demand must be instantaneously met by the generator, and in turn by the prime mover. Thus, sudden changes in load resistance result in instantaneous changes in power drawn from the prime mover, all in accordance with the Law of Energy Conservation.↩︎

  791. The standard frequency for a power grid is typically 50 Hz or 60 Hz, depending on which part of the world you are in. North American power grids typically operate at 60 Hz, while 50 Hz is more common in Europe.↩︎

  792. A common analogy for this is two children swinging on adjacent swings in a playground. Imagine the distance between the children being the amount of voltage difference between the two generators at any given point in time, with the amplitude of each child’s swing representing the peak voltage of each generator and the pace of each child’s oscillation being the frequency of each generator. When two children are swinging in perfect synchronization, the distance between them remains minimal at all times. When they swing 180\(^{o}\) out of phase with each other, the distance between them varies from minimal to maximal at a pace equal to the difference in their individual swinging rates.↩︎

  793. This “coupling” is not perfectly rigid, but does allow for some degree of phase difference between the generator and the grid. A more accurate analogy would be to say the generators act as if their shafts were linked by a flexible coupling.↩︎

  794. It should be noted that a grid-connected AC generator can in fact be over-sped with sufficient mechanical power input, but only if it “slips a pole” and falls out of synchronization as a result. Such an event can be catastrophic to the offending generator unless it is immediately disconnected from the grid to avoid damage from overcurrent.↩︎

  795. In this example, three current transformers, or CTs, are shown stepping down the bus line current to levels safely measured by panel-mounted ammeters. Current transformers typically step down line current to a nominal value of 5 amps to drive meters, relays, and other monitoring instruments.↩︎

  796. In the United States, the term “low voltage” with reference to power circuits usually refers to circuits operating at 600 volts or less.↩︎

  797. For an equitable size comparison between the two different types of circuit breaker, consider the fact that the insulators on this gas-quenched circuit breaker are approximately the same physical height as the insulators on the previously-shown oil-tank circuit breaker.↩︎

  798. While pure SF\(_{6}\) gas is benign, it should be noted that one of the potential chemical byproducts of arcing in an SF\(_{6}\)-quenched circuit breaker is hydrofluoric acid (HF) which is extremely toxic. HF is formed when SF\(_{6}\) gas arcs in the presence of water vapor (H\(_{2}\)O), the latter being nearly impossible to completely eliminate from the interior chambers of the circuit breaker. This means any maintenance work on an SF\(_{6}\)-quenched circuit breaker must take this chemical hazard into consideration.↩︎

  799. This particular circuit breaker, like most live-tank circuit breakers, interrupts just one phase (i.e. one “pole”) of a three-phase bus. Portions of the second and third live-tank SF\(_{6}\) breakers comprising the full three-phase breaker array for this bus may be seen near the left-hand edge of the photograph.↩︎

  800. A “toroid” is shaped like a donut: a circular object with a hole through the center.↩︎

  801. This raises an interesting possibility: if the power conductor were to be wrapped around the toroidal core of the CT so that it passes through the center twice instead of once, the current step-down ratio will be cut in half. For example, a 100:5 CT with the power conductor wrapped around so it passes through the center twice will exhibit an actual current ratio of only 50:5. If wrapped so that it passed through the CT’s center three times, the ratio would be reduced to 33.33:5. This useful “trick” may be used in applications where a lesser CT ratio cannot be found, and one must make do with whatever CT happens to be available. If you choose to do this, however, beware that the current-measuring capacity of the CT will be correspondingly reduced. Each extra turn of the power conductor adds to the magnetic flux experienced by the CT’s core for any given amount of line current, making it possible to magnetically saturate the core if the line current exceeds the reduced value (e.g. 50 amps for the home-made 50:5 CT where the line passes twice through the center of a 100:5 CT).↩︎
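
The effective ratio follows directly from the number of passes through the core; a minimal Python sketch (function name is mine):

```python
def effective_ct_ratio(nameplate_primary, nameplate_secondary, passes):
    """Effective CT ratio when the power conductor passes through the
    toroid's center 'passes' times instead of once."""
    return (nameplate_primary / passes, nameplate_secondary)

print(effective_ct_ratio(100, 5, 2))  # (50.0, 5)     i.e. 50:5
print(effective_ct_ratio(100, 5, 3))  # (33.33..., 5) i.e. 33.33:5
```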

  802. High-voltage devices situate their connection terminals at the ends of long insulators, to provide a large air gap between the conductors and the grounded metal chassis of the device. The point at which the long insulator (with a conductor inside of it) penetrates the housing of the device is called the bushing.↩︎

  803. The battery-and-switch test circuit shown here is not just hypothetical, but may actually be used to test the polarity of an unmarked transformer. Simply connect a DC voltmeter to the secondary winding while pressing and releasing the pushbutton switch: the voltmeter’s polarity indicated while the button is pressed will indicate the relative phasing of the two windings. Note that the voltmeter’s polarity will reverse when the pushbutton switch is released and the magnetic field collapses in the transformer coil, so be sure to pay attention to the voltmeter’s indication only during the time of switch closure! This is an application where an analog voltmeter may actually be superior to a digital voltmeter, since the instantaneous movement of a mechanical needle (pointer) is easier to visually interpret than the sign of a digital number display.↩︎

  804. The amount of magnetic force \(H\) applied to the transformer’s core is a direct function of winding current. If the DC test source is capable of pushing significant amounts of current through the transformer, it may leave the core in a partially magnetized state which will then affect its performance when powered by AC. A relatively “weak” source such as a 9 volt “transistor” battery helps ensure this will not happen as a result of the polarity test.↩︎

  805. The IEEE standard C57.12.00-2010 (“IEEE Standard for General Requirements for Liquid-Immersed Distribution, Power, and Regulating Transformers”) states that single-phase transformers having power ratings of 200 kVA and below and high-voltage winding ratings of 8.66 kV and below must have additive polarity, and that all other types of power transformers must have subtractive polarity.↩︎

  806. This particular transformer happens to be a tap-changing unit, designed to provide a number of ratio increments useful for adjusting voltages in a power distribution system. Its typical primary voltage is 115 kV and its typical secondary voltage is 12.5 kV. If the secondary voltage happens to sag due to heavy-load conditions, the transformer’s tap setting may be manually adjusted to output a slightly greater secondary voltage (i.e. a lesser step-down ratio). This is how electric power distribution utilities manage to keep voltages to customers relatively stable despite ongoing changes in load conditions.↩︎

  807. The hazards of an open-circuited CT can be spectacular. I have spoken with power electricians who have personally witnessed huge arcs develop across the opened terminals in a CT circuit! This safety tip is not one to be lightly regarded.↩︎

  808. For example, in an application where the maximum fault current is expected to be 40,000 amps, we would choose a CT with a ratio of at least 2000:5 to drive the protective relay, because 40,000 amps is twenty times this CT’s primary current rating of 2000 amps. We could also select a CT with a larger ratio such as 3000:5. The point is to have the CT be able to faithfully transform any reasonable fault current into a proportionately lower value for the protective relay(s) to sense.↩︎

  809. An illustrative example to consider is the venerable Westinghouse model CO-11 overcurrent relay, exhibiting a burden of 1.07 volt-amps at a CT secondary current of 5 amps with a 5-amp tap setting. By contrast, an SEL-551 digital overcurrent relay exhibits only 0.16 volt-amps of burden at the same CT current of 5 amps: nearly seven times less burden than the electromechanical relay. The reason for this stark disparity in burden values is the design of each relay: the electromechanical relay demands power from the CT to spin an aluminum disk against the restraining forces of a spring and a drag magnet, while the electronic relay receives operating power from a separate source (station power) and only requires that the CT drive the input of an analog-to-digital converter (ADC) circuit.↩︎

  810. Iron and iron alloys (“ferrous”) reach a point of maximum magnetization where all the magnetic “domains” in a sample are oriented in the same direction, leaving no more left to orient. Once a sample of ferrous material has thus “saturated”, it is of no further benefit to the establishment of a magnetic field. Increases in magnetic force will still produce additional lines of magnetic flux, but not at the rate experienced when the material was not saturated. In other words, a magnetically saturated inductor or transformer core essentially behaves like an air-core inductor or transformer for all additional current values beyond full saturation.↩︎

  811. In the electric power industry this is commonly referred to as a “rat/sat” test.↩︎

  812. If you think carefully about this, you realize that the number of turns of wire in either CT must be identical, because there is only one “turn” of wire passing through the center of either CT. In order to achieve a 2000:5 ratio, you must have 400 turns of wire wrapped around the toroidal ferrous core per the 1 “turn” of wire passing through the center of that core.↩︎

  813. Calculations based on the specific resistance of copper at 20 \(^{o}\)C place 10 AWG wire at 0.9989 ohms per 1000 feet. \(R = {\rho l \over A}\)↩︎
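
That figure may be reproduced from the formula using customary American wire units; a minimal sketch (the resistivity and cross-sectional area values are standard published figures, rounded):

```python
RHO_COPPER = 10.37   # specific resistance of copper at 20 deg C, ohm-cmil/ft
AREA_10AWG = 10380   # cross-sectional area of 10 AWG wire, circular mils

def wire_resistance(length_ft, area_cmil, rho=RHO_COPPER):
    """R = rho * l / A"""
    return rho * length_ft / area_cmil

print(wire_resistance(1000, AREA_10AWG))  # ~0.999 ohms per 1000 feet
```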

  814. What this means is that the relay will permit the circuit breaker to remain in its closed state indefinitely so long as the current is at or below 100% of its rated value. If the current ever exceeds the 100% limit, the protective relay begins to measure the length of time for the overcurrent event, commanding the circuit breaker to trip open after a certain amount of time inversely proportional to the degree of overcurrent. A 300% overcurrent condition, for example, will cause the circuit breaker to trip in a shorter amount of time than a 200% overcurrent condition.↩︎
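
One widely published model of this inverse time/current relationship is the IEEE C37.112 curve family; the following Python sketch uses the “moderately inverse” constants from that standard, purely as an illustration (any given relay may implement a different curve):

```python
def trip_time(multiple_of_pickup, time_dial=1.0,
              A=0.0515, B=0.1140, p=0.02):
    """IEEE C37.112 'moderately inverse' time-overcurrent curve:
    t = TD * (A / (M^p - 1) + B), where M is measured current as a
    multiple of the pickup setting (valid for M > 1)."""
    M = multiple_of_pickup
    return time_dial * (A / (M**p - 1) + B)

print(trip_time(2.0))  # ~3.8 seconds at 200% of pickup
print(trip_time(3.0))  # ~2.4 seconds at 300% of pickup (faster trip)
```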

  815. In many legacy electromechanical protective relays, the trip contact is designed to latch in the closed position even after the event prompting the breaker trip has passed. A special “seal-in” circuit with its own coil and contact provides this latching action, the purpose of which is to ensure the relay will continuously command the breaker to trip for as long as it takes for the breaker to reach the tripped condition. Only the 52a auxiliary contact inside the circuit breaker can interrupt a latched trip circuit, and that will only happen when the breaker achieves a tripped state.↩︎

  816. It should be noted that some microprocessor-based protective relays may operate on AC as well as DC power, and at supply voltages other than the 125 VDC standard.↩︎

  817. In protective relay circuit diagrams, it is conventional to show relay coils as “zig-zag” symbols rather than as actual coils of wire as is customary in electronic schematics. Those familiar with “ladder” style electrical wiring diagrams may recognize this as the symbol for a solenoid coil. Once again, we see here the context-dependence of symbols and diagram types: a component type may have multiple symbols depending on which type of diagram it’s represented in, while a common symbol may have different meanings in different diagrams.↩︎

  818. Note that this General Electric relay provides pickup tap settings well in excess of 5 amps, which is the nominal full-load rating of most current transformers. CTs rated for protective relay applications are fully capable of exceeding their normal full-load capacity for short time periods, which is a necessary feature due to the extreme nature of fault current conditions. It is not uncommon for fault currents in a power system to exceed full-load current conditions by a factor of 20!↩︎

  819. Geometrically, at least three points are required to define the shape of any curve, just as two points are the minimum for defining a line. However, since the curvature of a relay’s timing function is fixed by the construction of its components and therefore not liable to drift over time, it is common within the protective relay field to check the curve at just two points to ensure the adjustments are correct. The drag magnet is the principal adjustment for the timing of an electromechanical 51 relay.↩︎

  820. If you examine the induction disk from a 51 relay, you will note that the disk’s radius is not constant, and that there is actually a “step” along the circumference of the disk where its radius transitions from minimum to maximum. The amount of disk material exposed to the stator coil’s magnetic field to generate operating torque therefore changes with rotation angle, providing a nonlinear function altering the shape of the relay’s timing curve.↩︎

  821. In practice, perfect cancellation of currents is nearly impossible due to mismatched CTs and other imperfections, and so a small amount of current typically passes through the differential relay’s operating coil even under normal circumstances. The pickup value of this relay must be set such that this small amount of current does not unnecessarily trip the relay.↩︎

  822. Transformers exhibit inrush current for reasons different than capacitors (reactance) or motors (counter-EMF). Residual magnetism in a transformer core from the last time it was energized biases that core toward saturation in one direction. If the applied power happens to match that direction, and have sufficient magnitude, the transformer core will saturate on power-up which results in abnormally high current for multiple cycles until the core’s magnetic domains normalize.↩︎

  823. Restraint coils are sometimes labeled as “RC” and other times labeled as “R”. It should be noted that the principle of a “restraining element” within a protective relay is not unique to differential (87) relays. Other relay types, notably distance (21) relays, also employ restraint coils or other mechanisms to prevent the relay from tripping under specific circumstances.↩︎

  824. It should be mentioned that an external fault generating currents high enough to saturate one or more of the CTs used in the differential protection system may cause the differential current system to falsely trip, due to saturation causing the affected CT(s) to no longer faithfully represent line current to the relay.↩︎

  825. Power-line carrier, or PLC as it is known in the electric power industry, consists of data communications conveyed over the power line conductors themselves. This usually takes the form of a high-frequency AC signal (in the hundreds of kilohertz range) which is then modulated with the data of interest, similar to radio communication except that the RF signals travel along power lines rather than through empty space as electromagnetic waves. Power-line carrier systems are generally less reliable than fiber optic networks, because the presence of faults on the protected line may compromise the pilot communication.↩︎

  826. Schweitzer Engineering Laboratories manufactures a differential current relay specifically designed for line protection called the model 387L. It is billed as a “zero settings” relay because there are no parameters to configure. Simply set up a pair of 387L’s (one at each end of the line), each one connected to matched CTs monitoring current through all three line conductors, and then link the relays together via a pair of fiber optic cables, and it’s ready to work.↩︎

  827. There is a potential problem arising from CT secondaries in Wye when those CTs are measuring currents on the Wye-connected side of a power transformer, and that is the problem of zero sequence currents. A “zero sequence” set of currents is equivalent to in-phase currents flowing through all three lines of a three-phase power system, lacking the normal 120 degree phase shift from each other. The mathematical foundations of this concept are beyond the immediate scope of this section (for more information, refer to section 5.8.4 on “Symmetrical Components” beginning on page ), but suffice to say zero-sequence currents are found in certain fault conditions as well as circuits containing “triplen” harmonics (i.e. harmonic frequencies that are some multiple of 3\(\times\) the fundamental, e.g. 180 Hz, 360 Hz, 540 Hz for a 60 Hz power system). Zero-sequence currents flow through the neutral conductor in a 4-wire Wye-connected system, but circle through the phase elements of a Delta-connected system. This means a Wye-Delta connected transformer where a fourth conductor attaches to the center of the Wye winding set may experience line currents on the Wye side that are not seen in the line conductors of the Delta side, and may therefore cause a differential current relay to operate. This is another reason why connecting CTs differently than the power transformer windings they sense (i.e. Delta-connected CTs on a power transformer’s Wye side) is a good idea: any zero-sequence currents within the power transformer’s Wye-connected winding will circulate harmlessly through the Delta-connected CT secondaries and never enter the 87 relay. For digitally compensated 87 relay installations where all CTs are Wye-connected, the relay must also be configured to mathematically cancel out any zero-sequence currents on the Wye-connected side of the power transformer.↩︎

  828. Note the reversal of polarity for the voltage drop across each line resistance in the DC example diagram. A shunt resistor intentionally placed in series with the generator current could fulfill that same directional-sensing role.↩︎

  829. In the electric power industry, the probability that protective relays and associated equipment will reliably interrupt power in the event of a fault is called dependability.↩︎

  830. In the electric power industry, the probability that protective relays and associated equipment will not interrupt power unnecessarily is called security. As one might guess, dependability and security are two competing interests in the design of any protection scheme, the challenge being how to strike a reasonable balance between the two.↩︎

  831. This solution works best for measuring the flow rate of gases, not liquids, since the manometer obviously must use a liquid of its own to indicate pressure, and mixing or other interference between the process liquid and the manometer liquid could be problematic.↩︎

  832. There is no theoretical limit to the number of points in a digital computer’s characterizer function given sufficient processing power and memory. There is, however, a limit to the patience of the human programmer who must encode all the necessary \(x,y\) data points defining this function. Most of the piecewise characterizing functions I have seen available in digital instrumentation systems provide 10 to 20 (\(x,y\)) coordinate points to define the function. Using fewer than 10 coordinate points risks excessive interpolation errors, and using more than 20 would just be tedious to configure.↩︎
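
The interpolation between configured points is simple; a minimal NumPy sketch using a hypothetical 10-point characterizer function:

```python
import numpy as np

# Hypothetical (x, y) coordinate points, e.g. percent input vs percent output:
x_points = [0, 10, 20, 30, 40, 50, 60, 70, 80, 100]
y_points = [0,  2,  6, 13, 24, 38, 55, 73, 88, 100]

def characterize(x):
    """Linearly interpolate between the configured coordinate points."""
    return np.interp(x, x_points, y_points)

print(characterize(35))  # 18.5 -- halfway between the (30, 13) and (40, 24) points
```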

  833. The configuration software is Emerson’s AMS, running on an engineering workstation in a DeltaV control system network. The radar level transmitter is a Rosemount model 3301 (guided-wave) unit.↩︎

  834. To be honest, there are some valve body designs that work far better in on/off service (e.g. ball valves and plug valves) while other designs do a better job at throttling (e.g. double-ported globe valves). Many valve designs, however, may be pressed into either type of service merely by attaching the appropriate actuator.↩︎

  835. The standard preparatory technique is called lapping. To “lap” a valve plug and seat assembly, an abrasive paste known as lapping compound is applied to the valve plug(s) and seat(s) at the areas of mutual contact when the valve is disassembled. The valve mechanism is reassembled, and the stem is then rotated in a cyclic motion such that the plug(s) grind into the seat(s), creating a matched fit. The precision of this fit may be checked by disassembling the valve, cleaning off all remaining lapping compound, applying a metal-staining compound such as Prussian blue, then reassembling. The stem is rotated once more such that the plug(s) will rub against the seat(s), wearing through the applied stain. Upon disassembly, the worn stain may be inspected to reveal the extent of metal-to-metal contact between the plug(s) and the seat(s). If the contact area is deemed insufficient, the lapping process may be repeated.↩︎

  836. Of course, gate valves also offer obstructionless flow when wide-open, but their poor throttling characteristics give most rotary valve designs the overall advantage.↩︎

  837. Some packing materials, most notably Teflon and graphite, tend to be self-lubricating.↩︎

  838. Based on friction values shown on page 131 of Fisher’s Control Valve Handbook (Third Edition), Teflon packing friction is typically 5 to 10 times less than graphite packing for the same stem size!↩︎

  839. Graphite packing is usable in services ranging from cryogenic temperatures to 1200 degrees Fahrenheit, as opposed to Teflon which is typically rated between \(-40\) \(^{o}\)F and 450 \(^{o}\)F.↩︎

  840. Asbestos fibers have the ability to permanently lodge in the air sacs of human lungs, leading to long-term health problems if those fibers are inhaled.↩︎

  841. Bellows have a limited service life, which means an eventual rupture is likely. This is why a conventional packing assembly is always included in a bellows-equipped bonnet.↩︎

  842. Data in this table taken from Fisher’s Control Valve Handbook.↩︎

  843. The greater pressure rating of a piston actuator comes from the fact that the only “soft” component (the sealing ring) has far less surface area exposed to the high pressure than a rolling diaphragm. This results in significantly less stress on the elastic ring than there would be on an elastic diaphragm exposed to the same pressure. There really is no limit to the stroke length of a piston actuator as there is with the stroke length of a diaphragm actuator. It is possible to build a piston actuator miles long, but such a feat would be impossible for a diaphragm actuator, where the diaphragm must stretch (or roll) the entire stroke length.↩︎

  844. Exceptions exist for valves designed to fail in place, where a valve may be engineered to “lock” in position through the action of an external device whether the valve itself is air-to-open or air-to-close.↩︎

  845. Note that reverse indication is not the same thing as reverse action for a loop controller. Reverse indication simply means the output display shows 100% valve position at 4 mA output, and 0% valve position at 20 mA output. Reverse action means the output decreases when the input (process variable) increases.↩︎

  846. 3 PSI could mean fully closed and 15 PSI fully open, or vice-versa, depending on what form of actuator is coupled to what form of valve body. A direct-acting actuator coupled to a direct-acting valve body will be open at low pressure and closed at high pressure (increasing pressure pushing the valve stem toward the body, closing off the valve trim), resulting in air-to-close action. Reversing either actuator or valve type (e.g. reverse actuator with direct valve or direct actuator with reverse valve) will result in air-to-open action.↩︎

  847. The volume booster design shown here is loosely based on the Fisher model 2625 volume boosting relay.↩︎

  848. One way to minimize dynamic forces on a globe valve plug is to use a double-ported plug design, or to use a balanced plug on a cage-guided globe valve. A disadvantage to both these valve plug designs, though, is greater difficulty achieving tight shut-off.↩︎

  849. The technical term for this type of control system is cascade, where one controller’s output becomes the setpoint for a different controller. In the case of a valve positioner, the positioner receives a valve stem position setpoint from the main process controller. We could say that the main process controller in this case is the primary or master controller, while the valve positioner is the secondary or slave controller.↩︎

  850. This is not to say valve positioners have no need for external volume boosters, just that the actuating air flow capacity of a typical positioner greatly exceeds the air flow capacity of a typical I/P transducer.↩︎

  851. In an earlier chapter of this book, force- and motion-balance pneumatic mechanisms were likened to “tug-of-war” contestants versus ballroom dancers, respectively. Force-balance mechanisms pit force against force to achieve mechanical balance, like two teams competing in a tug-of-war where opposing forces are perfectly balanced and no motion takes place. Motion-balance mechanisms match one motion with another motion to achieve mechanical balance, like two ballroom dancers moving across a dance floor while maintaining a constant distance between each other. All valve positioner mechanisms require motion on the part of the valve stem, and so it is natural to assume all valve positioner mechanisms will be motion-balance because unlike a tug-of-war something is definitely moving. However, if we examine the simple force-balance positioner mechanism closely we will see that only the valve stem moves in this mechanism, while nothing else does. To apply the tug-of-war analogy to this application, it is as if one team of contestants pulls on a stiff rope while the other team pulls on an elastic rope, the two ropes tied together in a knot at the center. In order to achieve a perfect balance of forces so the knot won’t move to one side or the other, the team holding the elastic rope must stretch their rope further in order to balance an increased force from the team holding the stiff rope. The fact that one team is moving does not negate the fact that balance between the two teams is still a matter of force against force. To illustrate this point more vividly, we may ask the question: if the elastic rope is replaced by one that is even more elastic than before, will it advantage one team of contestants over the other? The answer to this question is no, as the two teams will still be equally matched if they were equally matched before. The only difference now is that the team holding the elastic rope will have to stretch the rope further than before to apply the same force as before. In a true motion-balance system, a greater motion imparted by one portion of the mechanism must be matched by a greater motion in the other portion of the mechanism.↩︎

  852. Recall from basic physics that friction force always opposes the direction of motion. Thus, when the valve is opening, friction works against the actuator’s air pressure (assuming an air-to-open valve), requiring additional air pressure to maintain motion. When the valve is closing, though, packing friction works in the same direction as the actuator’s air pressure “helping” the valve stay more open than it should. This is why the positioner must maintain less actuator air pressure for any given position while moving closed than while the valve moves open. The difference in air pressure moving open versus moving closed at any given stem position is proportional to twice the dynamic packing friction. Stated mathematically, \(F_{packing} = {1 \over 2} (P_{opening} - P_{closing}) A\).↩︎
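
Plugging hypothetical numbers into this formula shows how opening-versus-closing pressure records reveal packing friction:

```python
def packing_friction(p_opening_psi, p_closing_psi, diaphragm_area_in2):
    """F_packing = (1/2) * (P_opening - P_closing) * A"""
    return 0.5 * (p_opening_psi - p_closing_psi) * diaphragm_area_in2

# Hypothetical positioner data: 9.3 PSI moving open versus 8.7 PSI moving
# closed at the same stem position, acting on a 100 square inch diaphragm:
print(packing_friction(9.3, 8.7, 100))  # 30 pounds of packing friction
```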

  853. Prior to the advent of motor-actuated valves, practically all shutoff valves in industrial facilities were manually operated. While this is an inconvenience for operations personnel, it did carry one advantage: the human operators tasked with closing these valves by hand could feel how each valve seated. The amount of effort and the onset of closing torque sensed while turning the valve handle shut gave operators tactile feedback on the condition of each valve seat. Motor-powered valve actuators eliminated the need for this routine manual labor, but also eliminated the routine collection of this valuable diagnostic information. Modern electric valve actuators now provide the best of both worlds: convenient and fast valve operation with accurate self-diagnostic assessment of valve seating.↩︎

  854. I have searched in vain for standardized names to categorize different forms of control valve sequencing. The names “complementary,” “exclusive,” and “progressive” are my own invention. If I have missed someone else’s categorization of split-ranging in my research, I sincerely apologize.↩︎

  855. In mathematics, a “complement” is a value whose sum with another quantity always results in a fixed total. Complementary angles, for instance, always add to 90\(^{o}\) (a right angle).↩︎

  856. Also known as a mixing valve or a diverting valve, depending on how it is applied to process service.↩︎

  857. Although the HART standard does support “multidrop” mode where multiple devices exist on the same current loop, this mode is digital-only with no analog signal support. Not only do many host systems not support HART multidrop mode, but the relatively slow data communication rate of HART makes this choice unwise for most process control applications. If analog control of multiple HART valve positioner devices from the same 4-20 mA signal is desired, the address conflict problem may be resolved through the use of one or more isolator devices, allowing all devices to share the same analog current signal but isolating each other from HART signals.↩︎

  858. To review, Fieldbus is an all-digital industrial control protocol, where instruments connect to a control system and to each other by means of a single network cable. Signals are routed not by specific wire connections, but rather by software entities called function blocks whereby the engineer or technician programs the instruments and control system what to do with those signals. The function blocks shown in this example would typically be accessed through the graphic display of a DCS in a real Fieldbus system, lines drawn between the blocks instructing the system where each of the instrument signals need to go.↩︎

  859. Both controllers should be equipped with provisions for reset windup control (or have no integral action at all), such that the output signal values are predictable enough that they behave as a synchronized pair rather than as two separate controllers.↩︎

  860. Valve noise may be severe in some cases, especially in certain gas flow applications. An important performance metric for control valves is noise production expressed in decibels (dB).↩︎

  861. In case you were wondering, it is appropriate to express energy loss per unit volume in the same units of measurement as pressure. For a more detailed discussion of dimensional analysis, see section 2.11.13 beginning on page where Bernoulli’s equation is examined and you will see how the units of \({1 \over 2} \rho v^2\) and \(P\) are actually the same.↩︎

  862. In a case of minimal throttling, almost none of the fluid’s kinetic energy is lost to turbulence, but rather passes right through the valve unrestricted.↩︎

  863. The specification of certain British units of measurement for flow and pressure drop means that there is more to \(C_v\) than just \(\sqrt{2 A^2 \over k \rho_{water}}\). \(C_v\) also incorporates a factor necessary to account for the arbitrary choice of units.↩︎

  864. Such factors include fluid compressibility, viscosity, specific heat, and vapor pressure, to name a few. Not only will modern valve sizing software more accurately predict valve sizes for particular applications than these simple formulae, but this software may also provide estimations of noise levels produced by the valve.↩︎
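
A minimal sketch of the basic liquid-service sizing formula such simple formulae typically take, \(C_v = Q \sqrt{G_f / \Delta P}\), with \(Q\) in US gallons per minute, \(\Delta P\) in PSI, and \(G_f\) being specific gravity. The function name and example figures are my own illustrative choices:

```python
import math

def required_cv(q_gpm, dp_psi, sg=1.0):
    """Basic liquid-service valve sizing: Cv = Q * sqrt(SG / dP).

    q_gpm  -- volumetric flow rate in US gallons per minute
    dp_psi -- pressure drop across the valve in PSI
    sg     -- specific gravity of the liquid (water = 1.0)
    """
    return q_gpm * math.sqrt(sg / dp_psi)

# Example: 300 GPM of water with 16 PSI dropped across the valve
print(required_cv(300, 16))  # 75.0
```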

  865. This is a good example of a general problem-solving strategy in action: making some dramatic change to the scenario and then reasoning the consequences of that change to better understand general principles. For those readers who may be unfamiliar with American terminology, a fire hydrant is a large hand valve installed at intervals along public roadways, allowing connection of fire hoses to an underground water supply pipe in the event of an emergency fire. These valves are quite large, and would be comically oversized for any purpose if installed inside a person’s house.↩︎

  866. This is particularly true when one considers the piping changes usually necessary to accommodate a valve size change. Undersized valves installed in a pipe often require reducer fittings to “narrow” the full-bore size of the pipe down to the flange size of the control valve body. Upon replacement of the under-sized valve, these reducers must be removed to accommodate the larger valve body. The piping itself may need to be cut and re-welded to match the flange-to-flange dimensions of the new (larger) control valve. All of this requires time, labor, and material investment. If a large valve body with reduced-port trim were initially installed, however, most of this time, labor, and expense could be avoided when the time comes to replace the reduced-port trim with full-port trim.↩︎

  867. Reduced-port cage-guided trim may also take the form of a cage, plug, and seat of reduced diameter, with flanges attached in such a way that this smaller trim still fits inside the larger valve body. The example illustrated here, with a full-diameter cage having narrower ports on it, is just one way of achieving reduced flow capacity in a cage-guided design but certainly not the only way.↩︎

  868. The ISA Handbook of Control Valves cites this equation as being valid for conditions where the valve’s downstream pressure (\(P_2\)) is equal to or greater than one-half the upstream pressure (\(P_1\)), with both pressures expressed in absolute units. In other words, \(P_2 \geq 0.5P_1\) or \(P_1 \leq 2P_2\). An upstream:downstream pressure ratio in excess of 2:1 usually means flow through a valve will become choked.↩︎
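
Expressed as a trivial sketch (a hypothetical helper function merely encoding the cited condition):

```python
def flow_likely_choked(p1_abs, p2_abs):
    """Return True when the upstream:downstream pressure ratio exceeds
    2:1 (both pressures in absolute units), violating the condition
    P2 >= 0.5 * P1 under which the basic sizing equation is valid."""
    return p2_abs < 0.5 * p1_abs

print(flow_likely_choked(100.0, 45.0))  # True: expect choked flow
print(flow_likely_choked(100.0, 60.0))  # False: basic equation applies
```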

  869. Source for \(C_d\) factors: page 590 of Béla Lipták’s Instrument Engineers’ Handbook, Process Control (Volume II), Third Edition.↩︎

  870. For those readers with an electronics background, the concept of “characteristic curves” for a control valve is exactly the same as that of characteristic curves for transistors. Instead of plotting the amount of current a bipolar transistor will pass through its collector terminal (\(I_C\)) given varying amounts of collector-emitter voltage drop (\(V_{CE}\)), we are plotting the rate of water flow through the valve (\(Q\)) given varying amounts of supply pressure (\(\Delta P\)).↩︎

  871. Once again, the exact same concept applied in transistor circuit analysis finds application here in control valve behavior! The load line for a transistor circuit describes the amount of voltage available to the transistor under different current conditions, just like the load line here describes the amount of pressure available to the valve under different flow conditions.↩︎

  872. Load line plots are a graphical method of solving nonlinear, simultaneous equations. Since each curve represents a set of solutions to a particular equation, the intersection of two curves represents values uniquely satisfying both equations at the same time.↩︎
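
A minimal numerical sketch of this graphical method, assuming a valve curve of the form \(Q = C_v \sqrt{\Delta P}\) and a parabolic load line. The figures loosely echo the narrow-pipe example discussed in these notes (20 PSI of supply head exhausted at 75 GPM), while the \(C_v\) of 18 is my own illustrative choice:

```python
import math

def operating_point(cv, p_supply, k_pipe, tol=1e-6):
    """Locate the flow rate (GPM) where the valve curve Q = Cv*sqrt(dP)
    intersects the load line dP = P_supply - k_pipe*Q**2, by bisection."""
    def excess(q):  # positive when the valve could pass more than q
        dp = max(p_supply - k_pipe * q**2, 0.0)
        return cv * math.sqrt(dp) - q

    lo, hi = 0.0, math.sqrt(p_supply / k_pipe)  # hi: flow where dP hits zero
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if excess(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

k = 20 / 75**2  # pipe coefficient: all 20 PSI lost to friction at 75 GPM
print(round(operating_point(18.0, 20.0, k), 1))  # 54.9 GPM
```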

  873. The precise determination of this curve is based on a model of the narrow pipe as a flow-restricting element, similar in behavior to an orifice, or to a control valve with a fixed stem position. Since pressure is dropped along the pipe’s length as a function of turbulence (velocity), the load “line” curves for the exact reason the valve’s own characteristic plots are curved: the relationship between fluid velocity and turbulent pressure loss is naturally non-linear.↩︎

  874. Not only is the response of the valve altered by this degradation of upstream pressure, but we can also see from the load line that a certain maximum flow rate has been asserted by the narrow pipe which did not previously exist: 75 GPM. Even if we unbolted the control valve from the pipe and let water gush freely into the atmosphere, the flow rate would saturate at only 75 GPM because that is the amount of flow where all 20 PSI of hydrostatic “head” is lost to friction in the pipe. Contrast this against the close-coupled scenario, where the load line was vertical on the graph, implying no theoretical limit to flow at all! With an absolutely constant upstream pressure, the only limit on flow rate was the maximum \(C_v\) of the valve (analogous to a perfect electrical voltage source with zero internal resistance, capable of sourcing any amount of current to a load).↩︎

  875. The amount of fluid pressure output by any pump tends to vary with the fluid flow rate through the pump as well as the pump speed. This is especially true for centrifugal pumps, the most common pump design in process industries. Generally speaking, the discharge (output) pressure of a pump rises as flow rate decreases, and falls as flow rate increases. Variations in system fluid pressure caused by the pump constitute one more variable for control valves to contend with.↩︎

  876. Even then, achieving the ideal maximum flow rate may be impossible. Our previous 100% flow rate for the valve was 80.5 GPM, but this goal has been rendered impossible by the narrow pipe, which according to the load line limits flow to an absolute maximum of 75 GPM (even with an infinitely large control valve).↩︎

  877. Note that the equal percentage formula given here can never achieve a \(C_v\) value of zero, regardless of stem position. This is untrue for real control valves, which of course achieve \(C_v = 0\) when the stem is in the fully closed position. Therefore, the equal percentage formula shown here cannot be precisely trusted at small stem position values.↩︎
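
One commonly published form of the equal-percentage characteristic (possibly differing in detail from the exact formula given in the text) makes the point clear:

\[C_v = C_{v(max)} R^{x - 1}\]

where \(x\) is fractional stem position (0 to 1) and \(R\) is the valve’s rangeability. At \(x = 0\) this formula yields \(C_v = {C_{v(max)} \over R}\): small, but decidedly non-zero (one-fiftieth of maximum for \(R = 50\)).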

  878. Data for the three graphs were derived from actual \(C_v\) factors published in Fisher’s ED, EAD, and EDR sliding-stem control valve product bulletin (51.1:ED). I did not copy the exact data, however; I “normalized” the data so all three valves would have the exact same full-open \(C_v\) rating of 50.↩︎

  879. Astute readers will also note how the stem diameter of the left-hand (linear) plug is significantly greater than the stem diameter of the right-hand (equal-percentage) plug. This has nothing to do with characterization, and is simply an irrelevant difference between the two plugs. The truth of the matter is, dear reader, that these are the only two valve plugs I had on hand suitable for illustrating the difference between linear and equal-percentage trim. One just happened to have a thicker stem than the other.↩︎

  880. Such applications are typically found when the purpose of the control valve is to regulate process fluid pressure. Consider, for example, a control valve regulating upstream gas pressure in a vessel by venting gas from that vessel to atmosphere. In such an application, the valve’s upstream pressure (\(P_1\)) will be nearly constant due to the control loop’s action, and the valve’s downstream pressure (\(P_2\)) will be constant due to it being atmospheric pressure.↩︎

  881. An example of such a process is temperature control through a heat exchanger where the controlled fluid flow regime happens to transition from laminar to turbulent as the control valve opens further: at low stem positions (nearly shut) where the flow is laminar and heat transfer is impeded, large changes in flow rate may be necessary to effect modest changes in temperature; at high stem positions (nearly open) where the flow is turbulent and heat transfer is efficient, only small changes in flow rate are necessary to create modest changes in temperature. In such an application a quick-opening installed characteristic may actually yield more consistent behavior than a linear installed characteristic.↩︎

  882. Bellows seals are theoretically frictionless, but in practice bellows seals are almost always combined with standard packing to prevent catastrophic blow-out in the event of the bellows rupturing, and so the theoretical advantage of low friction is never realized.↩︎

  883. Other measures of a control valve’s mechanical status, such as flow capacity, flow characterization, and seat shut-off, cannot be inferred from measurements of actuator force and stem position.↩︎

  884. It should be noted that vapor pressure is a strong function of temperature. The warmer a liquid is, the more vapor pressure it will exhibit and thus the more prone it will be to flashing within a control valve.↩︎

  885. The Control Valve Sourcebook – Power & Severe Service on page 6-3 and the ISA Handbook of Control Valves on page 211 both suggest that the mechanism for choking in liquid service may be related to the speed of sound just as it is for choked flow in gas services. Normally, liquids have higher sonic velocities than gases due to their far greater bulk moduli (incompressibility). This makes choking due to sonic velocity very unlikely in liquid flowstreams. However, when a liquid flashes into vapor, the speed of sound for that two-phase mixture of liquid and vapor will be much less than it is for the liquid itself, opening up the possibility of sonic velocity choking.↩︎

  886. A colleague of mine humorously refers to these valve trim samples as “shock and awe,” because they so dramatically reveal the damaging nature of certain process fluid services.↩︎

  887. Regulating fluid flow by using a throttling valve along with a constant-speed pump is analogous to regulating an automobile’s speed by applying varying force to the brake pedal while holding the accelerator pedal at its full-power position!↩︎

  888. AC drives also vary the amount of voltage applied to the motor along with frequency, but this is of secondary importance to the varying of frequency to control speed.↩︎

  889. This includes using an AC induction motor as a servo for precise positioning control!↩︎

  890. This equivalence was mathematically proven by Jean Baptiste Joseph Fourier (1768-1830), and is known as a Fourier series.↩︎
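
A minimal numerical sketch of this equivalence, summing odd-harmonic sine waves to approximate a square wave (the sample point and term counts are arbitrary choices of mine):

```python
import math

def square_wave_approx(t, f_fund, n_terms):
    """Fourier-series approximation of a unit square wave:
    (4/pi) * sum of sin(2*pi*n*f*t)/n over odd harmonics n."""
    total = 0.0
    for n in range(1, 2 * n_terms, 2):  # n = 1, 3, 5, ...
        total += math.sin(2 * math.pi * n * f_fund * t) / n
    return (4 / math.pi) * total

# Sampled at the quarter-cycle of a 60 Hz wave, the sum approaches +1
# (the square wave's flat top) as more harmonics are included:
for terms in (1, 5, 50):
    print(terms, round(square_wave_approx(1 / 240, 60, terms), 3))
```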

  891. The difference between the synchronous speed and the rotor’s actual speed is called the motor’s slip speed.↩︎

  892. Multi-speed motors do exist, with selectable pole configurations. An example of this is an electric motor with extra sets of stator windings, which may be connected to form a 4-pole configuration for high speed, and an 8-pole configuration for low speed. If the normal full-load “high” speed for this motor is 1740 RPM, the normal full-load “low” speed will be approximately half that, or 870 RPM. Given a fixed line frequency, this motor will only have these two speeds to choose from.↩︎
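
The standard relationship between synchronous speed, line frequency, and the number of poles bears these figures out:

\[N_{sync} = {120 f \over p}\]

With \(f\) = 60 Hz, a 4-pole configuration yields 1800 RPM and an 8-pole configuration 900 RPM; the quoted full-load speeds of 1740 RPM and 870 RPM fall short of these synchronous values on account of slip.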

  893. Note the reverse-connected diodes across the source and drain terminals of each power transistor. These diodes serve to protect the transistors against damage from reverse voltage drop, but they also permit the motor to “back feed” power to the DC bus (acting as a generator) when the motor’s speed exceeds that of the rotating magnetic field, which may happen when the drive commands the motor to slow down. This leads to interesting possibilities, such as regenerative braking, with the addition of some more components.↩︎

  894. The VFD achieves variable output voltage using the same technique used to create variable output frequency: rapid pulse-width-modulation of the DC bus voltage through the output transistors. When lower output voltage is necessary, the duty cycle of the pulses is reduced throughout the cycle (i.e. transistors are turned on for shorter periods of time) to generate a lower average voltage for the synthesized sine wave.↩︎
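
A minimal sketch of sine-triangle PWM, one common way such synthesis is implemented (the function, the 2 kHz carrier, and the sample figures are illustrative assumptions, not any vendor’s algorithm):

```python
import math

def gate_on(t, f_out, v_ratio, f_carrier=2000.0):
    """Gate the output transistors 'on' whenever a sine-wave reference
    exceeds a high-frequency triangle carrier. Scaling the reference
    amplitude (v_ratio, 0..1) scales the pulse widths, and thus the
    average synthesized voltage, without altering output frequency."""
    reference = v_ratio * math.sin(2 * math.pi * f_out * t)
    phase = (t * f_carrier) % 1.0        # carrier phase, 0..1
    carrier = 4 * abs(phase - 0.5) - 1   # triangle wave spanning -1..+1
    return reference > carrier

# Average duty cycle across the positive half-cycle of a 60 Hz output,
# for two different voltage commands -- higher v_ratio, wider pulses:
for v in (0.8, 0.4):
    samples = [gate_on(t / 12000, 60.0, v) for t in range(100)]
    print(v, sum(samples) / len(samples))
```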

  895. For more precise control of AC motor speed (especially at low speeds where slip speed becomes a greater percentage of actual speed), speed sensors may indeed be necessary.↩︎

  896. This equivalence was mathematically proven by Jean Baptiste Joseph Fourier (1768-1830), and is known as a Fourier series.↩︎

  897. One such application is machine motion control, where one part of the machine always needs to slow down while another part is accelerating. Another application is coupling the drive motors of two conveyor belts together, where one conveyor always lifts the load uphill and the other conveyor always lowers the load downhill.↩︎

  898. This is accomplished in very different ways for DC versus AC motors. To dynamically brake a DC motor, the field winding must be kept energized while a high-power load resistor is connected to the armature. As the motor turns, the armature will push current through the resistor, generating a braking torque as it does. One way to dynamically brake an AC motor is to inject a small DC current through the stator windings, causing large braking currents to be induced in the rotor. Another way is to regeneratively brake into a resistive load.↩︎

  899. In Europe, the fundamental power line frequency is 50 Hz rather than 60 Hz. Also noteworthy is the fact that since the distortion caused by motor drives is typically symmetrical above and below the center-line of the AC waveform, the only significant harmonics will be odd and not even. In a 60 Hz system, the odd harmonics will include 180 Hz (3rd), 300 Hz (5th), 420 Hz (7th), and higher. For a 50 Hz system, the corresponding harmonic frequencies are 150 Hz, 250 Hz, 350 Hz, etc.↩︎

  900. Harmonic voltages and currents whose frequencies are multiples of three of the fundamental (e.g. 3rd, 6th, 9th, 12th, 15th harmonics). The reason these particular harmonics are noteworthy in three-phase systems is due to their relative phase shifts. Whereas the fundamental phase shift angle between different phase elements of a three-phase electrical system is 120\(^{o}\), the phase shift between triplen harmonics is zero. Thus, triplen harmonics are directly additive in three-phase systems.↩︎
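
To see why, consider the \(n\)th harmonic riding on phases displaced by 120\(^{o}\): its phase displacement is likewise multiplied by \(n\). For the triplens, this works out to a whole number of cycles, i.e. no displacement at all:

\[3 \times 120^{o} = 360^{o} \equiv 0^{o} \qquad 6 \times 120^{o} = 720^{o} \equiv 0^{o}\]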

  901. As you may recall, any sufficiently long set of conductors will act as a transmission line for high-frequency pulse signals. An unterminated (or poorly-terminated) transmission line will reflect pulse signals reaching its ends. In the case of a motor drive circuit, these reflected pulses may constructively interfere to produce nodes of high voltage or high current, causing premature wiring failure. Output line reactors help minimize these effects by filtering out high-frequency pulse signals from reaching the long motor power conductors.↩︎

  902. To be precise, this form of on/off control is known as differential gap because there are two setpoints with a gap in between. While on/off control is possible with a single setpoint (FCE on when below setpoint and off when above), it is usually not practical due to the frequent cycling of the final control element.↩︎

  903. In electronics, the unit of decibels is commonly used to express gains. Thankfully, the world of process control was spared the introduction of decibels as a unit of measurement for controller gain. The last thing we need is a third way to express the degree of proportional action in a controller!↩︎

  904. One could argue that the presence of loads actually justifies a control system, for if there were no loads, there would be nothing to compensate for, and therefore no need for an automatic control system at all! In the total absence of loads, a manually-set final control element would be enough to hold most process variables at setpoint.↩︎

  905. An older term for this mode of control is floating, which I happen to think is particularly descriptive. With a “floating” controller, the final control element continually “floats” to whatever value it must in order to completely eliminate offset.↩︎

  906. At least the old-fashioned mechanical odometers would. Modern cars use a pulse detector on the driveshaft which cannot tell the difference between forward and reverse, and therefore their odometers always increment. Shades of the movie Ferris Bueller’s Day Off.↩︎

  907. The equation for a proportional + integral controller is often written without the bias term (\(b\)), because the presence of integral action makes it unnecessary. In fact, if we let the integral term completely replace the bias term, we may consider the integral term to be a self-resetting bias. This, in fact, is the meaning of the word “reset” in the context of PID controller action: the “reset” term of the controller acts to eliminate offset by continuously adjusting (resetting) the bias as necessary.↩︎
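
In symbols (my transcription of the common ideal PI form):

\[m = K_p e + {K_p \over \tau_i} \int e \> dt\]

The integral term occupies the position formerly held by the bias \(b\), continuously “resetting” itself to whatever value is needed to drive the error toward zero.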

  908. Since integration is fundamentally a process of multiplication followed by addition, the units of measurement are always the product (multiplication) of the function’s variables. In the case of reset (integral) control, we are multiplying controller error (the difference between PV and SP, usually expressed in percent) by time (usually expressed in minutes or seconds). Therefore the result will be an “error-time” product. In order for an integral controller to self-recover following windup, the error must switch signs and the error-time product accumulate to a sufficient value to cancel out the error-time product accumulated during the windup period.↩︎

  909. An example of such an application is where the output of a loop controller may be “de-selected” or otherwise “over-ridden” by some other control function. This sort of control strategy is often used in energy-conserving controls, where multiple controllers monitoring different process variables selectively command a single FCE.↩︎

  910. It should not be assumed that such spikes are always undesirable. In processes characterized by long lag times, such a response may be quite helpful in overcoming that lag for the purpose of rapidly achieving new setpoint values. Slave (secondary) controllers in cascaded systems – where the controller receives its setpoint signal from the output of another (primary, or master) controller – may similarly benefit from derivative action calculated on error instead of just PV. As usual, the specific needs of the application dictate the ideal controller configuration.↩︎

  911. This is the meaning of the vertical-pointing arrowheads shown on the trend graph: momentary saturation of the output all the way up to 100%.↩︎

  912. This is a good example of how integral controller action represents the history of the PV \(-\) SP error. The continued offset of integral action from its starting point “remembers” the area accumulated under the rectangular “step” between PV and SP. This offset will go away only if a negative error appears having the same percent-minute product (area) as the positive error step.↩︎

  913. This is the meaning of the vertical-pointing arrowheads shown on the trend graph: momentary saturation of the output all the way up to 100% (or down to 0%).↩︎

  914. In this example, I have omitted the constant of integration (\(C\)) to keep things simple. The actual integral is as such: \(\int \sin x \> dx = - \cos x + C = \sin (x - 90^o) + C\). This constant value is essential to explaining why the integral response does not immediately “step” like the derivative response does at the beginning of the PV sine wavelet.↩︎

  915. An example of a case where it is better for gain (\(K_p\)) to influence all three control modes is when a technician re-ranges a transmitter to have a larger or smaller span than before, and must re-tune the controller to maintain the same loop gain as before. If the controller’s PID equation takes the parallel form, the technician must adjust the P, I, and D tuning parameters proportionately. If the controller’s PID equation uses \(K_p\) as a factor in all three modes, the technician need only adjust \(K_p\) to re-stabilize the loop.↩︎

  916. This becomes especially apparent when using derivative action with low values of \(\tau_i\) (aggressive integral action). The error-multiplying term \({\tau_d \over \tau_i} + 1\) may become quite large if \(\tau_i\) is small, even with modest \(\tau_d\) values.↩︎

  917. Being a motion-balance mechanism, these bellows must act as spring elements in order to produce consistent pressure/motion behavior. Some pneumatic controllers employ coil springs inside the brass bellows assembly to provide the necessary “stiffness” and repeatability.↩︎

  918. Practical integral action also requires the elimination of the bias spring and adjustment, which formerly provided a constant downward force on the left-hand side of the beam to give the output signal the positive offset necessary to avoid saturation at 0 PSI. Not only is a bias adjustment completely unnecessary with the addition of integral action, but it would actually cause problems by making the integral action “think” an error existed between PV and SP when there was none.↩︎

  919. These restrictor valves are designed to encourage laminar air flow, making the relationship between volumetric flow rate and differential pressure drop linear rather than quadratic as it is for large control valves. Thus, a doubling of pressure drop across the restrictor valve results in a doubling of flow rate into (or out of) the reset bellows, and a consequent doubling of integration rate. This is precisely what we desire and expect from a controller with integral action.↩︎

  920. In case you are wondering, this controller happens to be reverse-acting instead of direct. This is of no consequence to the feature of external reset.↩︎

  921. The reason for this is the low component count compared to a comparable digital control circuit. For any given technology, a simpler device will tend to be more reliable than a complex device if only due to there being fewer components to fail. This also suggests a third advantage of analog controllers over digital controllers, and that is the possibility of easily designing and constructing your own for some custom application such as a hobby project. A digital controller is not outside the reach of a serious hobbyist to design and build, but it is definitely more challenging due to the requirement of programming expertise in addition to electronic hardware expertise.↩︎

  922. It is noteworthy that analog control systems are completely immune from “cyber-attacks” (malicious attempts to foil the integrity of a control system by remote access), due to the simple fact that their algorithms are fixed by physical laws and properties of electronic components rather than by code which may be edited. This new threat constitutes an inherent weakness of digital technology, and has spurred some thinkers in the field to reconsider analog controls for the most critical applications.↩︎

  923. The real problem with digital controller speed is that the time delay between successive “scans” translates into dead time for the control loop. Dead time is the single greatest impediment to feedback control.↩︎

  924. This circuit configuration is called “inverting” because the mathematical sign of the output is always opposite that of the input. This sign inversion is not an intentional circuit feature, but rather a consequence of the input signal facing the opamp’s inverting input. Non-inverting multiplier circuits also exist, but are more complicated when built to achieve multiplication factors less than one.↩︎
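
For reference, the familiar gain relation for this inverting configuration (standard opamp theory, not specific to any particular controller circuit):

\[V_{out} = - \left( {R_f \over R_i} \right) V_{in}\]

where \(R_f\) is the feedback resistance and \(R_i\) the input resistance; multiplication factors less than one simply require \(R_f < R_i\).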

  925. This inversion of function caused by the swapping of input and feedback components in an operational amplifier circuit points to a fundamental principle of negative feedback networks: namely, that placing a mathematical element within the feedback loop causes the amplifier to exhibit the inverse of that element’s intrinsic function. This is why voltage dividers placed within the feedback loop cause an opamp to have a multiplicative gain (division \(\rightarrow\) multiplication). A circuit element exhibiting a logarithmic response, when placed within a negative feedback loop, will cause the amplifier to exhibit an exponential response (logarithm \(\rightarrow\) exponent). Here, an element having a time-differentiating response, when placed inside the feedback loop, causes the amplifier to time-integrate (differentiation \(\rightarrow\) integration). Since the opamp’s output voltage must assume any value possible to maintain (nearly) zero differential voltage at the input terminals, placing a mathematical function in the feedback loop forces the output to assume the inverse of that function in order to “cancel out” its effects and achieve balance at the input terminals.↩︎

  926. If this is not apparent, imagine a scenario where the +1.7 volt input existed for precisely one second’s worth of time. However much the output voltage ramps in that amount of time must therefore be its rate of change in volts per second (assuming a linear ramp). Since we know the area accumulated under a constant value of 1.7 (high) over a time of 1 second (wide) must be 1.7 volt-seconds, and \(\tau_i\) is equal to 3.807 seconds, the integrator circuit’s output voltage must ramp 0.447 volts during that interval of time. If the input voltage is positive and we know this is an inverting opamp circuit, the direction of the output voltage’s ramping must be negative, thus a ramping rate of \(-\)0.447 volts per second.↩︎

  927. The two input terminals shown, Input\(_{(+)}\) and Input\(_{(-)}\), are used as PV and SP signal inputs, the correlation of each depending on whether one desires direct or reverse controller action.↩︎

  928. This particular design has integral and derivative time value limits of 10 seconds, maximum. These relatively “quick” tuning values are the result of having to use non-polarized capacitors in the integrator and differentiator stages. The practical limits of cost and size restrict the maximum value of on-board capacitance to around 10 \(\mu\)F each.↩︎

  929. An interesting example of engineering tradition is found in electronic PID controller designs. While it is not too terribly difficult to build an analog electronic controller implementing either the parallel or ideal PID equation (just a few more parts are needed), it is quite challenging to do the same in a pneumatic mechanism. When analog electronic controllers were first introduced to industry, they were often destined to replace old pneumatic controllers. In order to ease the transition from pneumatic to electronic control, manufacturers built their new electronic controllers to behave exactly the same as the old pneumatic controllers they would be replacing. The same legacy followed the advent of digital electronic controllers: many digital controllers were programmed to behave in the same manner as the old pneumatic controllers, for the sake of operational familiarity, not because it was easier to design a digital controller that way.↩︎

  930. Although the SPEC 200 system – like most analog electronic control systems – is considered “mature” (Foxboro officially declared the SPEC 200 and SPEC 200 Micro systems as such in March 2007), working installations may still be found at the time of this writing (2010). A report published by the Electric Power Research Institute (see References at the end of this chapter) in 2001 documents a SPEC 200 analog control system installed in a nuclear power plant in the United States as recently as 1992, and another as recently as 2001 in a Korean nuclear power plant.↩︎

  931. Foxboro provided the option of a self-contained, panel-mounted SPEC 200 controller unit with all electronics contained in a single module, but the split architecture of the display/nest areas was preferred for large installations where many dozens of loops (especially cascade, feedforward, ratio, and other multi-component control strategies) would be serviced by the same system.↩︎

  932. I once encountered an engineer who joked that the number “200” in “SPEC 200” represented the number of years the system was designed to continuously operate. At another facility, I encountered instrument technicians who were a bit afraid of a SPEC 200 system running a section of their plant: the system had never suffered a failure of any kind since it was installed decades ago, and as a result no one in the shop had any experience troubleshooting it. As it turns out, the entire facility was eventually shut down and sold, with the SPEC 200 nest running faithfully until the day its power was turned off! The functioning SPEC 200 controllers shown in the photograph were in continuous use at British Columbia Institute of Technology at the time of the photograph, taken in December of 2014.↩︎

  933. Thanks to the explosion of network growth accompanying personal computers in the workplace, Ethernet is ubiquitous. The relatively high speed and low cost of Ethernet communications equipment makes it an attractive network standard over which a great many high-level industrial protocols communicate.↩︎

  934. An aspect common to many PLC implementations of PID control is the use of the “parallel” PID algorithm instead of the superior “ISA” or “non-interacting” algorithm. The choice of algorithm may have a profound effect on tuning, and on tuning procedures, especially when tuning parameters must be re-adjusted to accommodate changes in transmitter range.↩︎
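
A minimal sketch contrasting the two equations (variable names are my own; e_int stands for the accumulated time-integral of error and e_deriv for its time-derivative):

```python
def pid_parallel(e, e_int, e_deriv, kp, ki, kd):
    """'Parallel' form: three independent coefficients. A transmitter
    re-range alters loop gain, so P, I, and D must all be re-adjusted."""
    return kp * e + ki * e_int + kd * e_deriv

def pid_ideal(e, e_int, e_deriv, kp, tau_i, tau_d):
    """'Ideal' (ISA, non-interacting) form: kp multiplies every action,
    so a transmitter re-range may be compensated by kp alone."""
    return kp * (e + e_int / tau_i + tau_d * e_deriv)
```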

  935. Modern DDC systems of the type used for building automation (heating, cooling, security, etc.) almost always consist of networked control nodes, each node tasked with monitoring and control of a limited area. The same may be said for modern PLC technology, which not only exhibits advanced networking capability (fieldbus I/O networks, Ethernet, Modbus, wireless communications), but is often also capable of redundancy in both processing and I/O. As technology becomes more sophisticated, the distinction between a DDC (or a networked PLC system) and a DCS becomes more ambiguous.↩︎

  936. An example of such a self-check is scheduled switching of the networks: if the system has been operating on network cable “A” for the past four hours, it might switch to cable “B” for the next four hours, then back again after another four hours to continually ensure both cables are functioning properly.↩︎

  937. To be fair, the Yokogawa Electric Corporation of Japan introduced their CENTUM distributed control system the same year as Honeywell. Unfortunately, while I have personal experience maintaining and using the Honeywell TDC2000 system, I have zero personal experience with the Yokogawa CENTUM system, and neither have I been able to obtain technical documentation for the original incarnation of this DCS (Yokogawa’s latest DCS offering goes by the same name). Consequently, I can do little in this chapter but mention its existence, despite the fact that it deserves just as much recognition as the Honeywell TDC2000 system.↩︎

  938. Just to give some perspective, the original TDC2000 system used whole-board processors rather than microprocessor chips, and magnetic core memory rather than static or dynamic RAM circuits! Communication between controller nodes and operator stations occurred over thick coaxial cables, implementing master/slave arbitration with a separate device (a “Hiway Traffic Director” or HTD) coordinating all communications between nodes. Like Bob Metcalfe’s original version of Ethernet, these coaxial cables were terminated at their end-points by termination resistors, with coaxial “tee” connectors providing branch points for multiple nodes to connect along the network.↩︎

  939. I know of a major industrial manufacturing facility (which shall remain nameless) where a PLC vendor promised the same technical capability as a full DCS at approximately one-tenth the installed cost. Several years and several tens of thousands of man-hours later, the sad realization was this “bargain” did not live up to its promise, and the decision was made to remove the PLCs and go with a complete DCS from another manufacturer. Caveat emptor!↩︎

  940. Although it is customary for the host system to be configured as the Link Active Scheduler (LAS) device to schedule and coordinate all fieldbus device communications, this is not absolutely necessary. Any suitable field instrument may also serve as the LAS, which means a host system is not even necessary except to provide DC power to the instruments, and serve as a point of interface for human operators, engineers, and technicians.↩︎

  941. With the PID function block programmed in the flow transmitter, there will be twice as many scheduled communication events per macrocycle as there would be if the function block were programmed into the valve positioner. This is evident from the number of signal lines connecting circled block(s) to circled block(s) in the above illustration.↩︎

  942. The only reason I say “may” instead of “will” is because some modern digital controllers are designed to automatically switch to manual-mode operation in the event of a sensor or transmitter signal loss. Any controller not “smart” enough to shed its operating mode to manual in the event of PV signal loss will react dramatically when that PV signal dies, and this is not a good thing for an operating loop!↩︎

  943. I once had the misfortune of working on an analog PID controller for a chlorine-based wastewater disinfection system that lacked output tracking. The chlorine sensor on this system would occasionally fail due to sample system plugging by algae in the wastewater. When this happened, the PV signal would fail low (indicating abnormally low levels of chlorine gas dissolved in the wastewater) even though the actual dissolved chlorine gas concentration was adequate. The controller, thinking the PV was well below SP, would ramp the chlorine gas control valve further and further open over time, as integral action attempted to reduce the error between PV and SP. The error never went away, of course, because the chlorine sensor was plugged with algae and simply could not detect the actual chlorine gas concentration in the wastewater. By the time I arrived to address the “low chlorine” alarm, the controller output was already wound up to 100%. After I cleaned the sensor and the PV value jumped up to some outrageously high level, the controller would take a long time to “wind down” its output because its integral action was very slow. I could not use manual mode to “unwind” the output signal, because this controller lacked the feature of output tracking. My “work-around” solution to this problem was to re-tune the integral term of the controller to some really fast time constant, watch the output “wind down” in fast-motion until it reached a reasonable value, then adjust the integral time constant back to its previous value for continued automatic operation.↩︎

  944. Boiler steam drum water level control, for example, is a process where the setpoint really should be left at a 50% value at all times, even if there may be legitimate reasons for occasionally switching the controller into manual mode.↩︎

  945. It is very important to note that soft alarms are not a replacement for hard alarms. There is much wisdom in maintaining both hard and soft alarms for a process, so there will be redundant, non-interactive levels of alarming. Hard and soft alarms should complement each other in any critical process.↩︎

  946. Some PID controllers limit manual-mode output values as well, so be sure to check the manufacturer’s documentation for output limiting on your particular PID controller!↩︎

  947. I have used a typesetting convention to help make my pseudocode easier for human beings to read: all formal commands appear in bold-faced blue type, while all comments appear in italicized red type. All other text appears as normal-faced black type. One should remember that the computer running any program cares not for how the text is typeset: all it cares is that the commands are properly used (i.e. no “grammatical” or “syntactical” errors).↩︎

  948. It should be noted that this is precisely what happens when you change the gain in a pneumatic or an analog electronic controller, since all analog PID controllers implement the “position” equation. Although the choice between “position” and “velocity” algorithms in a digital controller is arbitrary, it is much easier to build an analog mechanism or circuit implementing the position algorithm than it is to build an analog “velocity” controller.↩︎
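
A minimal sketch of the two algorithm styles, reduced to PI action for brevity (class names mine). Note how a gain change affects the position form’s output immediately whenever error is non-zero, whereas the velocity form only applies the new gain to subsequent changes:

```python
class PositionPI:
    """Position algorithm: the full output is re-computed every scan,
    so changing kp steps the output at once if error is non-zero."""
    def __init__(self, kp, ki, bias=50.0):
        self.kp, self.ki, self.bias = kp, ki, bias
        self.e_int = 0.0

    def scan(self, e, dt):
        self.e_int += e * dt
        return self.bias + self.kp * e + self.ki * self.e_int

class VelocityPI:
    """Velocity algorithm: only the change in output is computed each
    scan, so a mid-stream gain change causes no output bump."""
    def __init__(self, kp, ki, out=50.0):
        self.kp, self.ki = kp, ki
        self.out, self.e_prev = out, 0.0

    def scan(self, e, dt):
        self.out += self.kp * (e - self.e_prev) + self.ki * e * dt
        self.e_prev = e
        return self.out
```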

  949. We call this an adaptive gain control system.↩︎

  950. Many instrument manufacturers sell simple, single-loop controllers for reasonable prices, comparable to the price of a college textbook. You need to get one that accepts 1-5 VDC input signals and generates 4-20 mA output signals, and has a “manual” mode of operation in addition to automatic – these features are very important! Avoid controllers that can only accept thermocouple inputs, and/or only have time-proportioning (PWM) outputs.↩︎

  951. To illustrate, self-regulating processes require significant integral action from a controller in order to avoid large offsets between PV and SP, with minimal proportional action and no derivative action. Integrating processes, in contrast, may be successfully controlled primarily on proportional action, with minimal integral action to eliminate offset. Runaway processes absolutely require derivative action for dynamic stability, but derivative action alone is not enough: some integral action will be necessary to eliminate offset. Even if knowledge of a process’s dominant characteristic does not give enough information for us to quantify P, I, or D values, it will tell us which tuning constant will be most important for achieving stability.↩︎

  952. Recall that wind-up is what happens when integral action “demands” more from a process than the process can deliver. If integral action is too aggressive for a process (i.e. fast integral controller action in a process with slow time lags), the output will ramp too quickly, causing the process variable to overshoot setpoint which then causes integral action to wind the other direction. As with proportional action, too much integral action will cause a self-regulating process to oscillate.↩︎

  953. In a proportional-only controller, the output is a function of error (PV \(-\) SP) and bias. When PV = SP, bias alone determines the output value (valve position). However, in a controller with integral action, the zero-offset output value is determined by how long and how far the PV has previously strayed from SP. In other words, there is no fixed bias value anymore. Thus, the output of a controller with integral action will not return to its previous value once the new SP is reached. In a purely integrating process, this means the PV will not reach stability at the new setpoint, but will continue to rise until all the “winding up” of integral action is un-done.↩︎

  954. When a nucleus of uranium or plutonium undergoes fission (“splits”), it releases more neutrons capable of splitting additional uranium or plutonium nuclei. The ratio of new nuclei “split” versus old nuclei “split” is the multiplication factor. If this factor has a value of one (1), the chain reaction will sustain at a constant power level, with each new generation of atoms “split” equal to the number of atoms “split” in the previous generation. If this multiplication factor exceeds unity, the rate of fission will increase over time. If the factor is less than one, the rate of fission will decrease over time. Like an inverted pendulum, the chain reaction has a tendency to “fall” toward infinite activity or toward no activity, depending on the value of its multiplication factor.↩︎

  955. The mechanism by which this occurs varies with the reactor design, and is too detailed to warrant a full explanation here. In pressurized light-water reactors – the dominant design in the United States of America – this action occurs due to the water’s ability to moderate (slow down) the velocity of neutrons. Slow neutrons have a greater probability of being “captured” by fissile nuclei than fast neutrons, and so the water’s moderating ability will have a direct effect on the reactor core’s multiplication factor. As a light-water reactor core increases temperature, the water becomes less dense and therefore less effective at moderating (slowing down) fast neutrons emitted by “splitting” nuclei. These fast(er) neutrons then “miss” the nuclei of atoms they would have otherwise split, effectively reducing the reactor’s multiplication factor without any need for regulatory control rod motion. The reactor’s power level therefore self-stabilizes as it warms, rather than “running away” to dangerously high levels, and may thus be classified as a self-regulating process.↩︎

  956. Discounting, of course, the intentional discharge of nuclear weapons, whose sole design purpose is to self-destruct in a “runaway” chain reaction.↩︎

  957. The general definition of gain is the ratio of output change over input change (\(\Delta \hbox{Out} \over \Delta \hbox{In}\)). Here, you may have noticed we calculate process gain by dividing the process variable change (7.5%) by the controller output change (10%). If this seems “inverted” to you because we placed the output change value in the denominator of the fraction instead of the numerator, you need to keep in mind the perspective of our gain measurement. We are not calculating the gain of the controller, but rather the gain of the process. Since the output of the controller is the “input” to the process, it is entirely appropriate to refer to the 10% manual step-change as the change of input when calculating process gain.↩︎
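
The arithmetic, written out:

\[K_{process} = {\Delta \hbox{Out} \over \Delta \hbox{In}} = {7.5\% \over 10\%} = 0.75\]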

  958. While this is true of analog-signal transmitters, it is not necessarily true of digital-signal transmitters such as Fieldbus or wireless (digital radio). The reason for this distinction is that in a digital-signal transmitter, the reported process variable value is scaled in engineering units rather than percent. Applied to this case, if the flow transmitter gets re-ranged from 0-200 LPM to 0-150 LPM, the controller sees no change in process gain because a change of 10 LPM is still reported as a change of 10 LPM regardless of the transmitter’s range.↩︎

  959. For more information on different PID equations, refer to Section 29.10 beginning on page .↩︎

  960. It is also possible to configure many instruments to deliberately slow their response to changing input conditions. This feature is called damping, and it is covered in more detail in section 18.4 beginning on page .↩︎

  961. Assuming a constant discharge valve position. If someone alters the hand valve’s position, the relationship between incoming flow rate and final liquid level changes.↩︎

  962. We will assume here the heating element reaches its final temperature immediately upon the application of power, with no lag time of its own.↩︎

  963. Given the presence of water in the potato which turns to steam at 212 \(^{o}\)F, things are just a bit more complicated than this, but let’s ignore the effects of water in the potato for now!↩︎

  964. The amount of time the potato’s temperature will continue to rise following the down-step in heating element power is equal to the time it takes for the oven’s air temperature to equal the potato’s temperature. The reason the potato’s temperature keeps rising after the heating element turns off is because the air inside the oven is (for a short time) still hotter than the potato, and therefore the potato continues to absorb thermal energy from the air for a time following power-off.↩︎

  965. The so-called Barkhausen criterion for oscillation in a feedback system is that the total loop gain is at least unity (1) and the total loop phase shift is 360\(^{o}\).↩︎

  966. The conditions necessary for self-sustaining oscillations to occur is a total phase shift of 360\(^{o}\) and a total loop gain of 1. Merely having positive feedback or having a total gain of 1 or more will not guarantee self-sustaining oscillations; both conditions must simultaneously exist. As a measure of how close any feedback system is to this critical confluence of conditions, we may quantify a system’s phase margin (how many degrees of phase shift the system is away from 360\(^{o}\) while at a loop gain of 1) and/or a system’s gain margin (how many decibels of gain the system is away from 0 dB while at a phase shift of 360\(^{o}\)). The less phase or gain margin a feedback system has, the closer it is to a condition of instability.↩︎

  967. At maximum phase shift, the gain of any first-order RC network is zero. Both phase shift and attenuation in an RC lag network are frequency-dependent: as frequency increases, phase shift grows larger (from 0\(^{o}\) to a maximum of \(-90^{o}\)) and the output signal grows weaker. At its theoretical maximum phase shift of exactly \(-90^{o}\), the output signal would be reduced to nothing!↩︎
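
For reference, the standard phase and gain expressions for a single RC lag network (textbook relations, not derived here):

\[\theta = - \tan^{-1} (2 \pi f R C) \qquad A_V = {1 \over \sqrt{1 + (2 \pi f R C)^2}}\]

As frequency increases without bound, \(\theta\) approaches \(-90^{o}\) while \(A_V\) approaches zero, which is why the worst-case phase shift carries no gain.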

  968. In its pure, theoretical form at least. In practice, even a single-lag circuit may oscillate given enough gain due to the unavoidable presence of parasitic capacitances and inductances in the wiring and components causing multiple orders of lag (and even some dead time). By the same token, even a “pure” first-order process will oscillate given enough controller gain due to unavoidable lags and dead times in the field instrumentation (especially the control valve). The point I am trying to make here is that there is more to the question of stability (or instability) than loop gain.↩︎

  969. Truth be told, the same principle holds for purely integrating processes as well. A purely integrating process always exhibits a phase shift of \(-90^{o}\) at any frequency, because that is the nature of integration in calculus. A purely first-order lag process will exhibit a phase shift anywhere from 0\(^{o}\) to \(-90^{o}\) depending on frequency, but never more lagging than \(-90^{o}\), which is not enough to turn negative feedback into positive feedback. In either case, so long as we don’t have process noise to deal with, we can increase the controller’s gain all the way to eleven. If that last sentence (a joke) does not make sense to you, be sure to watch the 1984 movie This is Spinal Tap as soon as possible. Seriously, I have used controller gains as high as 50 on low-noise, first-order processes such as furnace temperature control. With such high gain in the controller, response to setpoint and load changes is quite swift, and integral action is almost unnecessary because the offset is naturally so small.↩︎

  970. A sophisticated way of saying this is that a dead-time function has no phase margin, only gain margin. All that is needed in a feedback system with dead time is sufficient gain to make the system oscillate.↩︎

  971. Sometimes referred to as the Barkhausen criterion.↩︎

  972. An interesting analogy is that of a narcoleptic human operator manually controlling a process with a lot of dead time. If we imagine this person helplessly falling asleep at periodic intervals, then waking up to re-check the process variable and make another valve adjustment before falling asleep again, we see that the dead time of the process disappears from the perspective of the operator. The operator never realizes the process even has dead time, because they don’t remain awake long enough to notice. So long as the poor operator’s narcolepsy occurs at just the right intervals (i.e. not too short so as to notice dead time, and not too long so as to miss important changes in setpoint or load), good control of the process is possible.↩︎

  973. A 10% hysteresis value means that the signal must be changed by 10% following a reversal of direction before any movement is seen from the valve stem.↩︎

  974. Some integral controllers are equipped with a useful feature called integral deadband or reset deadband. This is a special PID function inhibiting integration whenever the process variable comes close enough to setpoint, the “deadband” value specifying how close the PV must come to SP before integration stops. If this deadband value is set equal to or wider than the error caused by the valve’s stiction, the controller will stop its integral-driven cycling. The trade-off, of course, is that the controller will no longer work to eliminate all error, but rather will be content with an error equal to or less than the specified deadband.↩︎
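
In sketch form (a hypothetical helper, not any particular vendor’s implementation):

```python
def integrate_with_deadband(e_int, e, dt, deadband):
    """Accumulate integral action only while the error magnitude exceeds
    the deadband, leaving any error within the deadband uncorrected.

    e_int    -- running time-integral of error (e.g. percent-seconds)
    e        -- current error (PV - SP), in percent
    deadband -- half-width of the no-integration zone, in percent
    """
    if abs(e) > deadband:
        e_int += e * dt
    return e_int
```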

  975. An alternate solution is to install a positioner on the control valve, which acts as a secondary (cascaded) controller seeking to equalize stem position with the loop controller’s output signal at all times. However, this just “shifts” the problem from one controller to another. I have seen examples of control valves with severe packing friction which will self-oscillate their own positioners (i.e. the positioner will “hunt” back and forth for the correct valve stem position given a constant signal from the loop controller)! If valve stem friction can be minimized, it should be minimized.↩︎

  976. Note that this is truly the gain of the controller, not the proportional band. If you were to enter a proportional band value one-half that necessary to sustain oscillations, the controller would (obviously) oscillate completely out of control!↩︎

  977. Either minutes per repeat or seconds per repeat. If the controller’s integral rate is expressed in units of repeats per minute (or second), the formula would be \(K_i = {1.2 \over P_u}\).↩︎
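
For convenience, the classic closed-loop (“ultimate”) tuning rules gathered into one sketch; the function is my own, while the coefficients are Ziegler and Nichols’ published values:

```python
def ziegler_nichols_closed_loop(k_u, p_u):
    """Ziegler-Nichols 'ultimate' tuning rules.

    k_u -- ultimate gain (gain producing steady oscillations)
    p_u -- ultimate period of those oscillations (minutes)
    Returns (gain, integral time in minutes/repeat, derivative time).
    """
    return {
        "P":   (0.5 * k_u,  None,      None),
        "PI":  (0.45 * k_u, p_u / 1.2, None),
        "PID": (0.6 * k_u,  p_u / 2,   p_u / 8),
    }
```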

  978. Imagine informing the lead operations manager or a unit supervisor in a chemical processing facility you wish to over-tune the temperature controller in the main reaction furnace or the pressure controller in one of the larger distillation columns until it nearly oscillates out of control, and that doing so may necessitate hours of unstable operation before you find the perfect gain setting. Consider yourself fortunate if your declaration of intent does not result in security personnel escorting you out of the control room.↩︎

  979. Unfortunately, Ziegler and Nichols chose to refer to dead time by the word lag in their paper. In modern technical parlance, “lag” refers to a first-order inverse-exponential function, which is fundamentally different from dead time.↩︎

  980. Right away, we see a weakness in the Ziegler-Nichols open-loop method: it makes absolutely no distinction between self-regulating and integrating process types. We know this is problematic from the analysis of each process type in sections 30.1.1 and 30.1.2.↩︎

  981. Ziegler and Nichols’ approach was to define a normalized reaction rate called the unit reaction rate, equal in value to \(R \over \Delta m\). I opt to explicitly include \(\Delta m\) in all the tuning parameter equations in order to avoid the possibility of confusing reaction rate with unit reaction rate.↩︎

  982. This is very important: no degree of controller “tuning” will fix a poor control valve, noisy transmitter, or ill-designed process. If your open-loop tests reveal significant process problems, you must remedy them before attempting to tune the controller.↩︎

  983. It is important to know which PID equation your controller implements in order to adjust just one action (P, I, or D) of the controller without affecting the others. Most PID controllers, for example, implement either the “Ideal” or “Series” equations, where the gain value (\(K_p\)) multiplies every action in the controller including integral and derivative. If you happen to be tuning such a controller for integral-dominant control, you cannot set the gain to zero (in order to minimize proportional action) because this will nullify integral action too! Instead, you must set \(K_p\) to some value small enough that the proportional action is minimal while allowing integral action to function.↩︎

  984. Recall that an open-loop response test consists of placing the loop controller in manual mode, introducing a step-change to the controller output (manipulated variable), and analyzing the time-domain response of the process variable as it reacts to that perturbation.↩︎

  985. For reverse-acting controllers, I am ignoring the obvious 180\(^{o}\) phase shift necessary for negative feedback control when I say “no phase shift” between PV and output waveforms. I am also ignoring dead time resulting from the scanning of the PID algorithm in the digital controller. For some controllers, this scan time may be significant enough to see on a trend!↩︎

  986. The term “porpoise” comes from the movements of a porpoise swimming rapidly toward the water’s surface as it chases along the bow of a moving ship. In order to generate speed, the animal undulates its body up and down to powerfully drive forward with its horizontal tail, tracing a sinusoidal path on its way up to breaching the surface of the water.↩︎

  987. You could try reducing the controller’s gain as a first step, but if the controller implements the Ideal or Series algorithm, reduction in gain will also reduce derivative action, which may mask an over-tuned derivative problem.↩︎

  988. The astute observer will note the presence of some limiting (saturation) in the output waveform, as it attempts to go below zero percent. Normally, this is unacceptable while determining the ultimate gain of a process, but here it was impossible to make the process oscillate at consistent amplitude without saturating on the output signal. The gain of this process falls off quite a bit at the ultimate frequency, such that a high controller gain is necessary to sustain oscillations, causing the output waveform to have a large amplitude.↩︎

  989. We would have to be very careful with the addition of damping, since any oscillations it could create may not appear on the trend. Remember that the insertion of damping (low-pass filtering) in the PV signal is essentially an act of “lying” to the controller: telling the controller something that differs from the real, measured signal. If our PV trend shows us this damped signal and not the “raw” signal from the transmitter, it is possible for the process to oscillate and the PV trend to be deceptively stable!↩︎

  990. Many instrument manufacturers sell simple, single-loop controllers for reasonable prices, comparable to the price of a college textbook. You need to get one that accepts 1-5 VDC input signals and generates 4-20 mA output signals, and has a “manual” mode of operation in addition to automatic – these features are very important! Avoid controllers that can only accept thermocouple inputs, and/or only have time-proportioning (PWM) outputs. Additionally, I strongly recommend you take the time to experimentally learn the actions of proportional, integral, and derivative as outlined in section 29.16 beginning on page before you embark on any PID tuning exercises.↩︎

  991. Among these different controllers were a Distech ESP-410 building (HVAC) controller and a small PLC programmed with a custom PID control algorithm. In fact, a Desktop Process is ideal for courses where students create their own control algorithms in PLC or data acquisition hardware. The significance of controller scan rate becomes very easy to comprehend when controlling a process like this with such a short time constant. The contrast between a DDC controller with a 500 millisecond scan rate and a PLC with a 50 millisecond scan rate, for example, is marked.↩︎

  992. In honor of the system’s ability to slowly “ramp” temperature up or down at a specified rate, then “soak” the metal at a constant temperature for set periods of time. Many single-loop process controllers have the ability to perform ramp-and-soak setpoint scheduling without the need of an external “supervisory” computer.↩︎

  993. I once attended a meeting of industry representatives where one person talked at length about a highly automated lumber mill where logs were cut into lumber not only according to minimum waste, but also according to the real-time market value of different board types and stored inventory. The joke was, if the market value of wooden toothpicks suddenly spiked up, the control system would shred every log into toothpicks in an attempt to maximize profit!↩︎

  994. Interestingly, servo motor control is one application where analog loop controllers have historically been favored over digital loop controllers, simply for their superior speed. An opamp-based P, PI, or PID controller is lightning-fast because it has no need to digitize any analog process variables (analog-to-digital conversion) nor does it require time for a clock to sequence step-by-step through a written program as a microprocessor does. Servomechanism processes are inherently fast-responding, and so the controller(s) used to control servos must be faster yet.↩︎

  995. At one specific current level, the motor will develop just enough torque to hold the platform’s weight, at which point the acceleration will be zero. Any amount of current above this value will cause an upward acceleration, while any amount of current below this value will cause a downward acceleration.↩︎

  996. The conversion from hydrocarbon and steam to hydrogen and carbon dioxide is typically a two-stage process: the first (reforming) stage produces hydrogen gas and carbon monoxide, while a second (water-gas-shift) stage adds more steam to convert the carbon monoxide into carbon dioxide with more hydrogen liberated. The reforming reaction is strongly endothermic, while the water-gas-shift reaction is mildly exothermic.↩︎

  997. Steam has a formula weight of 18 amu per molecule, with two hydrogen atoms (1 amu each) and one oxygen atom (16 amu). Methane has a formula weight of 16 amu per molecule, with one carbon atom (12 amu) and four hydrogen atoms (1 amu each). If we wish to have a molecular ratio of 2:1, steam-to-methane, this makes a formula weight ratio of 36:16, or 9:4.↩︎
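
  A quick arithmetic check of this ratio, sketched in Python with the formula weights given above:

```python
# Check of the 2:1 steam-to-methane ratio arithmetic (figures from above).
M_STEAM = 2 * 1 + 16     # H2O: two hydrogens (1 amu each) plus one oxygen (16 amu)
M_METHANE = 12 + 4 * 1   # CH4: one carbon (12 amu) plus four hydrogens

mass_ratio = (2 * M_STEAM) / (1 * M_METHANE)  # 2 moles steam per 1 mole methane
print(mass_ratio)  # 2.25, i.e. 36:16 or 9:4 by mass
```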

  998. It is quite common for industrial control systems to operate at ratios a little bit “skewed” from what is stoichiometrically ideal due to imperfect reaction efficiencies. Given the fact that no chemical reaction ever goes to 100% completion, a decision must be made as to which form of incompleteness is worse. In a steam-hydrocarbon reforming system, we must ask ourselves which is worse: excess (unreacted) steam at the outlet, or excess (unreacted) hydrocarbon at the outlet. Excess hydrocarbon content will “coke” the catalyst and heater tubes, which is very bad for the process over time. Excess steam merely results in a bit more operating energy loss, with no degradation to equipment life. The choice, then, is clear: it is better to operate this process “hydrocarbon-lean” (more steam than ideal) than “hydrocarbon-rich” (less steam than ideal).↩︎

  999. This mixing of superheated steam and cold water happens in a specially-designed device called a desuperheater. The basic concept is that the water will absorb heat from the superheated steam, turning that injected water completely into steam and also reducing the temperature of the superheated steam. The result is a greater volume of steam than before, at a reduced temperature. So long as some amount of superheat remains, the de-superheated steam will still be “dry” (above its condensing temperature). The desuperheater control merely adds the appropriate amount of water until it achieves the desired superheat value.↩︎

  1000. This statement is true only for self-regulating processes. Integrating and “runaway” processes require control systems to achieve stability even in the complete absence of any loads. However, since self-regulation typifies the vast majority of industrial processes, we may conclude that the fundamental purpose of most control systems is to counteract the effects of loads.↩︎

  1001. The load variables I keep mentioning that influence a car’s speed constitute an incomplete list at best. Many other variables come into play, such as fuel quality, engine tuning, and tire pressure, just to name a few. In order for a purely feedforward (i.e. no feedback monitoring of the process variable) control system to work, every single load variable must be accurately monitored and factored into the system’s output signal. This is impractical or impossible for a great many applications, which is why we usually find feedforward control used in conjunction with feedback control, rather than feedforward control used alone.↩︎

  1002. In fact, the only pure feedforward control strategies I have ever seen have been in cases where the process variable was nearly impossible to measure and could only be inferred from other variables.↩︎

  1003. If the liquid level drops too low, there will be insufficient retention time in the vessel for the fluids to mix before they exit the product line at the bottom.↩︎

  1004. The device or computer function performing the summation is shown in the P&ID as a bubble with “FY” as the label. The letter “F” denotes Flow, while the letter “Y” denotes a signal relay or transducer.↩︎

  1005. Incidentally, this is a good example of an integrating mass-balance process, where the rate of process variable change over time is proportional to the imbalance of flow rates in and out of the process. Stated another way, total accumulated (or lost) mass in a mass-balance system such as this is the time-integral of the difference between incoming and outgoing mass flow rates: \(\Delta m = \int_0^T (W_{in} - W_{out}) \> dt\).↩︎
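
  A minimal numerical sketch of this integral, assuming constant (purely illustrative) flow rates:

```python
# Euler approximation of delta_m = integral of (W_in - W_out) dt,
# using assumed constant mass flow rates for illustration.
dt = 1.0                  # time step, seconds
W_in, W_out = 10.0, 8.0   # incoming and outgoing mass flow, kg/s

delta_m = 0.0
for _ in range(60):               # integrate over one minute
    delta_m += (W_in - W_out) * dt

print(delta_m)  # 120.0 kg accumulated from a sustained 2 kg/s imbalance
```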

  1006. Residence time or Retention time is the average amount of time each liquid molecule spends inside the vessel. It is an important variable in chemical reaction processes, where adequate time must be given to the reactant molecules in order to ensure a complete reaction. It is also important for non-reactive mixing processes such as paint and food manufacturing, to ensure the ingredients are thoroughly mixed together and not stratified. For any given flow rate through a vessel, the residence time is directly proportional to the volume of liquid contained in that vessel: double the captive volume, and you double the residence time. For any given captive volume, the residence time is inversely proportional to the flow rate through the vessel: double the flow rate through the vessel, and you halve the residence time. In some mixing systems where residence time is critical to the thorough mixing of liquids, vessel level control may be coupled to measured flow rate, such that an increase in flow rate results in an increased level setpoint, thus maintaining a constant residence time despite changes in production rate.↩︎
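
  These proportionalities reduce to the simple relation \(t = V / Q\). A sketch follows, with hypothetical vessel and flow figures; the level-setpoint function assumes a linear vessel characteristic and is only an illustration of the coupling scheme described above:

```python
# Residence time t = V / Q, plus a hypothetical level-setpoint scheme that
# holds residence time constant as flow changes (vessel assumed linear).
def residence_time(volume_gal, flow_gpm):
    return volume_gal / flow_gpm          # minutes

def level_setpoint(flow_gpm, t_desired_min, gal_per_percent):
    return (flow_gpm * t_desired_min) / gal_per_percent   # percent of span

print(residence_time(500.0, 50.0))        # 10.0 min
print(residence_time(1000.0, 50.0))       # double the volume: 20.0 min
print(residence_time(500.0, 100.0))       # double the flow: 5.0 min
print(level_setpoint(100.0, 10.0, 12.5))  # 80.0 percent level setpoint
```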

  1007. Energy demand is an example of what is called an inferred variable: a physical quantity that we cannot measure directly but instead calculate from measurements made of other variables.↩︎

  1008. Most control systems’ feedforward function blocks are designed in such a way that both the feedback and the feedforward signal paths are disabled when the controller is placed into manual mode, in order to give the human operator 100% control over the final element (valve) in that mode. For the purpose of “tuning” the feedforward gain/bias function block, one must disable only the feedback control, so that the feedforward action is still able to respond to load changes. If simply switching the feedback controller to manual mode is not an option (which it usually is not), one may achieve the equivalent result by setting the gain value of the feedback controller to zero and ensuring the PID equation is not the “parallel” type. If the PID equation is parallel, you will need to set all three terms (P, I, and D) to their minimum settings.↩︎

  1009. This is why it was recommended to leave the feedback controller’s output at or near 50%. The goal is to have the feedforward action adjusted such that the feedback controller’s output is “neutral,” and has room to swing either direction if needed to provide necessary trim to the process.↩︎

  1010. Tuning this gain/bias block is done with the pH controller in manual mode with its output at 50%. The gain value is adjusted such that step-changes in flocculant feed rate have little long-term effect on pH. The bias value is adjusted until the pH approaches setpoint (even with the pH controller in manual mode).↩︎

  1011. This “thought experiment” assumes no compensating action on the part of the feedback pH controller for the sake of simplicity. However, even if we include the pH controller’s efforts, the problem does not go away. As pH rises due to the premature addition of extra lime, the controller will try to reduce the lime feed rate. This will initially reduce the degree to which pH deviates from setpoint, but then the reverse problem will occur when the increased flocculant enters the vessel 55 seconds later. Now, the pH will drop below setpoint, and the feedback controller will have to ramp up lime addition (to the amount it was before the additional lime reached the vessel) to achieve setpoint.↩︎

  1012. Let me know if you are ever able to invent such a thing. I’ll even pay your transportation costs to Stockholm, Sweden so you can collect your Nobel prize. Of course, I will demand to see the prize before buying tickets for your travel, but with your time-travel device that should not be a problem for you.↩︎

  1013. For a more detailed discussion of lag times and their meaning, see section 30.1.5 beginning on page .↩︎

  1014. Knowing this allows us to avoid measuring the incoming cold oil temperature and just measure incoming cold oil flow rate as the feedforward variable. If the incoming oil’s temperature were known to vary substantially over time, we would be forced to measure it as well as flow rate, combining the two variables together to calculate the energy demand and use this inferred variable as the feedforward variable.↩︎
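
  One way to picture this inferred-variable calculation is as a simple heat-rate formula. The function below is only a sketch; all names and numbers are assumed for illustration:

```python
# Inferred energy demand, Q = W * c * (T_setpoint - T_in). All values here
# are illustrative assumptions, not figures from the text.
def energy_demand(flow_kg_s, c_p_kJ_per_kgK, T_setpoint_C, T_in_C):
    return flow_kg_s * c_p_kJ_per_kgK * (T_setpoint_C - T_in_C)  # kJ/s

# With a steady inlet temperature, only flow need be measured as the
# feedforward variable; otherwise both flow and T_in must be measured:
print(energy_demand(5.0, 2.0, 80.0, 20.0))  # 600.0 kJ/s of heating demand
```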

  1015. Transport delay (dead time) in heat exchanger systems can be a thorny problem to overcome, as it tends to change with flow rate! For reasons of simplicity in our illustration, we will treat this process as if it only possessed lag times, not dead times.↩︎

  1016. Technically, two cascaded lag times are not the same as one large lag time, no matter the time constant values. Two first-order lags in series with one another create a second-order lag, which is a different effect. Imperfect as the added-lag solution is, however, it is still better than nothing at all!↩︎

  1017. I generally suggest keeping such limit values inaccessible to low-level operations personnel. This is especially true in cases such as this where the presence of a high temperature setpoint limit is intended for the longevity of the equipment. There is a strong tendency in manufacturing environments to “push the limits” of production beyond values considered safe or expedient by the engineers who designed the equipment. Limits are there for a reason, and should not be altered except by people with full understanding of and full responsibility over the consequences!↩︎

  1018. Only the coolant flow control instruments and piping are shown in this diagram, for simplicity. In a real P&ID, there would be many more pipes, valves, and other apparatus shown surrounding this process vessel.↩︎

  1019. In order to understand how this works, I advise you try a “thought experiment” for each function block network whereby you arbitrarily assign three different numerical values for A, B, and C, then see for yourself which of those three values becomes the output value.↩︎

  1020. In FOUNDATION Fieldbus, each and every signal path not only carries the signal value, but also a “status” flag declaring it to be “Good,” “Bad,” or “Uncertain.” This status value gets propagated down the entire chain of connected function blocks, to alert dependent blocks of a possible signal integrity problem if one were to occur.↩︎

  1021. This principle holds true even for systems with no function blocks “voting” between the redundant transmitters. Perhaps the installation consists of two transmitters with remote indications for a human operator to view. If the two displays substantially disagree, which one should the operator trust? A set of three indicators would be much better, providing the operator with enough information to make an intelligent decision on which display(s) to trust.↩︎

  1022. In most applications this takes the form of an AC induction motor receiving power from a Variable Frequency Drive or VFD. Since the rotational speed of an induction motor is a function of frequency, the VFD achieves motor speed control by electronically converting the fixed-frequency line power into variable-frequency power to drive the motor.↩︎
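
  The well-known synchronous-speed formula shows why frequency control equates to speed control. A sketch (slip makes the actual rotor speed slightly less than synchronous):

```python
# Synchronous speed of an AC induction motor: N = 120 * f / P (RPM).
def synchronous_speed_rpm(freq_hz, poles):
    return 120.0 * freq_hz / poles

print(synchronous_speed_rpm(60.0, 4))  # 1800 RPM at full line frequency
print(synchronous_speed_rpm(30.0, 4))  # 900 RPM when the VFD halves frequency
```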

  1023. Some differential pressure transmitter manufacturers, such as Bailey, apply the same convention to denote the actions of a DP transmitter’s two pressure ports: using a “+” label to represent direct action (i.e. increasing pressure at this port drives the output signal up) and a “\(-\)” symbol to represent reverse action (i.e. increasing pressure at this port drives the output signal down).↩︎

  1024. For that matter, it is impossible to eliminate all danger from life in general. Everything you do (or don’t do) involves some level of risk. The question really should be, “how much risk is there in a given action, and how much risk am I willing to tolerate?” To illustrate, there does exist a non-zero probability that something you will read in this book is so shocking it will cause you to suffer a heart attack. However, the odds of you walking away from this book and never reading it again over concern of epiphany-induced cardiac arrest are just as slim.↩︎

  1025. Also humorously referred to as the “belt and suspenders” school of engineering.↩︎

  1026. Frangible roofs are a common design applied to liquid storage tanks harboring the potential for overpressure, such as sulfuric acid storage tanks which may generate accumulations of explosive hydrogen gas. Having the roof seam rupture from overpressure is a far less destructive event than having a side seam or floor seam rupture and consequently spill large volumes of acid. This technique of mitigating overpressure risk does not work to reduce pressure in the system, but it does reduce the risk of damage caused by overpressure in the system.↩︎

  1027. Chemical corrosiveness, biohazardous substances, poisonous materials, and radiation are all examples of other types of industrial hazards not covered by the label “hazardous” in this context. This is not to understate the danger of these other hazards, but merely to focus our attention on the specific hazard of explosions and how to build instrument systems that will not trigger explosions due to electrical spark.↩︎

  1028. Article 506 is a new addition to the NEC as of 2008. Prior to that, the only “zone”-based categories were those specified in Article 505.↩︎

  1029. The final authority on Class and Division definitions is the National Electrical Code itself. The definitions presented here, especially with regard to Divisions, may not be precise enough for many applications. Article 500 of the NEC is quite specific for each Class and Division combination, and should be referred to for detailed information in any particular application.↩︎

  1030. Once again, the final authority on this is the National Electrical Code, in this case Article 505. My descriptions of Zones and Divisions are for general information only, and may not be specific or detailed enough for many applications.↩︎

  1031. Traditionally, the three elements of a “fire triangle” were fuel, oxidizer, and ignition source. However, this model fails to account for fuels not requiring oxygen as well as cases where a chemical inhibitor prevents a self-sustaining reaction even in the presence of fuel, oxidizer, and ignition source.↩︎

  1032. To illustrate this concept in a different context, consider my own personal history of automobiles. For many years I drove an ugly and inexpensive truck which I joked had “intrinsic theft protection:” it was so ugly, no one would ever want to steal it. Due to this “intrinsic” property of my vehicle, I had no need to invest in an alarm system or any other protective measure to deter theft. Similarly, the components of an intrinsically safe system need not be located in explosion-proof or purged enclosures because the intrinsic energy limitation of the system is protection enough.↩︎

  1033. Real passive barriers often use redundant zener diodes connected in parallel to ensure protection against excessive voltage even in the event of a zener diode failing open.↩︎

  1034. Of course, transformers cannot be used to pass DC signals of any kind, which is why chopper/converter circuits are used before and after the signal transformer to convert each DC current signal into a form of chopped (AC) signal that can be fed through the transformer. This way, the information carried by each 4-20 mA DC current signal passes through the barrier, but electrical fault current cannot.↩︎

  1035. To be honest, the coin could also land on its edge, which is a third possibility. However, that third possibility is so remote as to be negligible in the presence of the other two. Strictly speaking, \(P(\hbox{``heads''}) + P(\hbox{``tails''}) + P(\hbox{``edge''}) = 1\).↩︎

  1036. In his excellent book, Reliability Theory and Practice, Igor Bazovsky describes the relationship between true probability (\(P\)) calculated from ideal values and estimated probability (\(\hat P\)) calculated from experimental trials as a limit function: \(P = \lim_{N \to \infty} \hat P\), where \(N\) is the number of trials.↩︎
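
  A simple Monte Carlo demonstration of this limit, using a simulated fair coin:

```python
# Estimated probability converges on true probability as trials increase.
import random

def p_hat(n_trials):
    heads = sum(1 for _ in range(n_trials) if random.random() < 0.5)
    return heads / n_trials

for n in (10, 1_000, 100_000):
    print(n, p_hat(n))  # estimates wander at first, then converge toward P = 0.5
```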

  1037. Most people can recall instances where a weather forecast proved to be completely false: a prediction for rainfall resulting in a completely dry day, or vice-versa. In such cases, one is tempted to blame the weather service for poor forecasting, but in reality it has more to do with the nature of probability, specifically the meaninglessness of probability calculations in predicting singular events.↩︎

  1038. Here, “essential” means the system will fail if any of these identified components fails. Thus, Lusser’s Law implies a logical “AND” relationship between the components’ reliability values and the overall system reliability.↩︎

  1039. According to Bazovsky (pp. 275-276), the first reliability principle adopted by the design team was that the system could be no more reliable than its least-reliable (weakest) component. While this is technically true, the mistake was to assume that the system would be as reliable as its weakest component (i.e. the “chain” would be exactly as strong as its weakest link). This proved to be too optimistic, as the system would still fail due to the failure of “stronger” components even when the “weaker” components happened to survive. After noting the influence of “stronger” components’ unreliabilities on overall system reliability, engineers somehow reached the bizarre conclusion that system reliability was equal to the mathematical average of the components’ reliabilities. Not surprisingly, this proved even less accurate than the “weakest link” principle. Finally, the designers were assisted by the mathematician Erich Pieruschka, who helped formulate Lusser’s Law.↩︎

  1040. Here we have an example where dependability and security are lumped together into one “reliability” quantity.↩︎

  1041. An easy way to remember what each of these terms mean in the context of a protective system is to associate \(D\) (Dependability) with a dangerous scenario and \(S\) (Security) with a safe scenario: \(D\) expresses what the system or component will do when a dangerous condition presents itself to the protective system and it needs to act; \(S\) expresses what the system or component will do when conditions are safe and there is no need to act.↩︎

  1042. Since most high-quality industrial devices and systems are repairable for most faults, MTBF and MTTF are interchangeable terms.↩︎

  1043. This does not mean the amount of time for all components to fail, but rather the amount of time to log a total number of failures equal to the total number of components tested. Some components in the batch may fail more than once within the MTBF time, while others might never fail at all within that same time.↩︎

  1044. The typically large values we see for MTBF and MTTF can be misleading, as they represent a theoretical time based on the failure rate seen over relatively short testing times where all components are “young.” In reality, the wear-out time of a component will be less than its MTBF. In the case of these control valves, they would likely all “die” of old age and wear long before reaching an age of 66.667 years!↩︎

  1045. One could even imagine some theoretical component immune to wear-out, but still having finite values for failure rate and MTBF. Remember, \(\lambda_{useful}\) and MTBF refer to chance failures, not the normal failures associated with age and extended use.↩︎

  1046. Preventive maintenance is not the only example of such a dynamic. Modern society is filled with monetarily expensive programs and institutions existing for the ultimate purpose of avoiding greater costs, monetary and otherwise. Public education, health care, and national militaries are just a few that come to my mind. Not only is it a challenge to continue justifying the expense of a well-functioning cost-avoidance program, but it is also a challenge to detect and remove unnecessary expenses (waste) within that program. To extend the preventive maintenance example, an appeal by maintenance personnel to continue (or further) the maintenance budget may happen to be legitimate, but a certain degree of self-interest will always be present in the argument. Just because preventive maintenance is actually necessary to avoid greater expense due to failure, does not mean all preventive maintenance demands are economically justified! Proper funding of any such program depends on the financiers being fair in their judgment and the executors being honest in their requests. So long as both parties are human, this territory will remain contentious.↩︎

  1047. Sustained vibrations can do really strange things to equipment. It is not uncommon to see threaded fasteners undone slowly over time by vibrations, as well as cracks forming in what appear to be extremely strong supporting elements such as beams, pipes, etc. Vibration is almost never good for mechanical (or electrical!) equipment, so it should be eliminated wherever reliability is a concern.↩︎

  1048. On an anecdotal note, a friend of mine once destroyed his car’s engine, having never performed an oil or filter change on it since the day he purchased it. His poor car expired after only 70000 miles of driving – a mere fraction of its normal service life with regular maintenance. Given the type of car it was, he could have easily expected 200000 miles of service between engine rebuilds had he performed the recommended maintenance on it.↩︎

  1049. Another friend of mine used to work as a traffic signal technician in a major American city. Since the light bulbs they replaced still had some service life remaining, they decided to donate the bulbs to a charity organization where the used bulbs would be freely given to low-income citizens. Incidentally, this same friend also instructed me on the proper method of inserting a new bulb into a socket: twisting the bulb just enough to maintain some spring tension on the base, rather than twisting the bulb until it will not turn farther (as most people do). Maintaining some natural spring tension on the metal leaf within the socket helps extend the socket’s useful life as well!↩︎

  1050. Many components do not exhibit any relationship between load and lifespan. An electronic PID controller, for example, will last just as long controlling an “easy” self-regulating process as it will controlling a “difficult” unstable (“runaway”) process. The same might not be said for the other components of those loops, however! If the control valve in the self-regulating process rarely changes position, but the control valve in the runaway process continually moves in an effort to stabilize it at setpoint, the less active control valve will most likely enjoy a longer service life.↩︎

  1051. This redundancy module has its own MTBF value, and so by including it in the system we are adding one more component that can fail. However, the MTBF rate of a simple diode network greatly exceeds that of an entire AC-to-DC power supply, and so we find ourselves at a greater level of reliability using this diode redundancy module than if we did not (and only had one power supply).↩︎

  1052. Of course, this assumes good communication and proper planning between all parties involved. It is not uncommon for piping engineers and instrument engineers to mis-communicate during the crucial stages of process vessel design, so that the vessel turns out not to be configured as needed for redundant instruments.↩︎

  1053. If a swirling fluid inside the vessel encounters a stationary baffle, it will tend to “pile up” on one side of that baffle, causing the liquid level to actually be greater in that region of the vessel than anywhere else inside the vessel. Any transmitter placed within this region will register a greater level, regardless of the measurement technology used.↩︎

  1054. The father of a certain friend of mine has operated a used automobile business for many years. One of the tasks given to this friend when he was a young man, growing up helping his father in his business, was to regularly drive some of the cars on the lot which had not been driven for some time. If an automobile is left un-operated for many weeks, there is a marked tendency for batteries to fail and tires to lose their air pressure, among other things. The salespeople at this used car business jokingly referred to this as lot rot, and the only preventive measure was to routinely drive the cars so they would not “rot” in stagnation. Machines, like people, suffer if subjected to a lack of physical activity.↩︎

  1055. A simple “memory trick” I use to correctly distinguish between relief and safety valves is to remember that a safety valve has snap action (both words beginning with the letter “s”).↩︎

  1056. To illustrate, consider a (vertical) cylindrical storage tank 15 feet tall and 20 feet in diameter, with an internal gas pressure of 8 inches water column. The total force exerted radially on the walls of this tank from this very modest internal pressure would be in excess of 39000 pounds! The force exerted by the same pressure on the tank’s circular lid would exceed 13000 pounds (6.5 tons)!↩︎
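
  These force figures are easy to verify. A sketch, assuming the common conversion factor of 27.68 inches of water column per psi:

```python
# Force = pressure * area, for the tank described above.
import math

P_psi = 8.0 / 27.68   # 8 inches WC expressed in psi (~0.289 psi)
d_in = 20.0 * 12.0    # 20 ft diameter, in inches
h_in = 15.0 * 12.0    # 15 ft height, in inches

lid_area = math.pi * (d_in / 2.0) ** 2   # circular lid, square inches
wall_area = math.pi * d_in * h_in        # cylindrical wall, square inches

print(P_psi * lid_area)    # just over 13000 pounds on the lid
print(P_psi * wall_area)   # just over 39000 pounds radially on the walls
```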

  1057. Think: a safety valve has snap action!↩︎

  1058. This photograph courtesy of the National Transportation Safety Board’s report of the 1999 petroleum pipeline rupture in Bellingham, Washington. Improper setting of this relief valve pilot played a role in the pipeline rupture, the result of which was nearly a quarter-million gallons of gasoline spilling into a creek and subsequently igniting. One of the lessons to take from this event is the importance of proper instrument maintenance and configuration, and how such technical details concerning industrial components may have consequences reaching far beyond the industrial facility where those components are located.↩︎

  1059. Many synonyms exist to describe the action of a safety system needlessly shutting down a process. The term “nuisance trip” is often (aptly) used to describe such events. Another (more charitable) label is “fail-to-safe,” meaning the failure brings the process to a safe condition, as opposed to a dangerous condition.↩︎

  1060. Of course, there do exist industrial facilities operating at a financial loss for the greater public benefit (e.g. certain waste processing operations), but these are the exception rather than the rule. It is obviously the point of a business to turn a profit, and so the vast majority of industries simply cannot sustain a philosophy of safety at any cost. One could argue that a “paranoid” safety system even at a waste processing plant is unsustainable, because too many “false trips” result in inefficient processing of the waste, posing a greater public health threat the longer it remains unprocessed.↩︎

  1061. As drawn, these valves happen to be ball-design, the first actuated by an electric motor and the second actuated by a pneumatic piston. As is often the case with redundant instruments, an effort is made to diversify the technology applied to the redundant elements in order to minimize the probability of common-cause failures. If both block valves were electrically actuated, a failure of the electric power supply would disable both valves. If both block valves were pneumatically actuated, a failure of the compressed air supply would disable both valves. The use of one electric valve and one pneumatic valve grants greater independence of operation to the double-block valve system.↩︎

  1062. For what it’s worth, the ISA safety standard 84 defines this notation as “MooN,” but I have seen sufficient examples of the contrary (“NooM”) to question the authority of either label.↩︎

  1063. For a general introduction to process switches, refer to chapter 9 beginning on page .↩︎

  1064. Of course, the presence of some variation in a transmitter’s output over time is no guarantee of proper operation. Some failures may cause a transmitter to output a randomly “walking” signal when in fact it is not registering the process at all. However, being able to measure the continuous output of a process transmitter provides the instrument technician with far more data than is available with a discrete process switch. A safety transmitter’s output signal may be correlated against the output signal of another transmitter measuring the same process variable, perhaps even the transmitter used in the regulatory control loop. If two transmitters measuring the same process variable agree closely with one another over time, chances are extremely good that both are functioning properly.↩︎

  1065. It should be noted that the use of a single orifice plate and of common (parallel-connected) impulse lines represents a point of common-cause failure. A blockage at one or more of the orifice plate ports, or a closure of a manual block valve, would disable all three transmitters. As such, this might not be the best method of achieving high flow-measurement reliability.↩︎

  1066. The best way to prove to yourself the median-selecting abilities of both function block networks is to perform a series of “thought experiments” where you declare three arbitrary transmitter signal values, then follow through the selection functions until you reach the output. For any three signal values you might choose, the result should always be the same: the median signal value is the one chosen by the voter.↩︎
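
  The same thought experiment can be automated. This sketch mimics a median-select network built from nothing but high-select (max) and low-select (min) function blocks:

```python
# Median-select voter: the maximum of the three pairwise minimums is
# always the median of three input signals.
def median_select(a, b, c):
    return max(min(a, b), min(b, c), min(a, c))

print(median_select(30.0, 50.0, 90.0))   # 50.0
print(median_select(90.0, 30.0, 50.0))   # 50.0 -- order does not matter
print(median_select(50.0, 50.0, 120.0))  # 50.0 -- a failed-high signal is outvoted
```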

  1067. MTBF stands for Mean Time Between Failure, and represents the reliability of a large collection of components or systems. For any large batch of identical components or systems constantly subjected to ordinary stresses, MTBF is the theoretical length of time it will take for 63.2% of them to fail based on ordinary failure rates within the lifetime of those components or systems. Thus, MTBF may be thought of as the “time constant” (\(\tau\)) for failure within a batch of identical components or systems.↩︎
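
  The “time constant” analogy follows from the exponential survival function for a constant failure rate. A sketch (the MTBF value is purely illustrative):

```python
# With a constant failure rate, surviving fraction = exp(-t / MTBF),
# so the failed fraction reaches 63.2% at t = MTBF.
import math

MTBF = 10.0  # years, an assumed value
for t in (0.5 * MTBF, MTBF, 2.0 * MTBF):
    print(t, 1.0 - math.exp(-t / MTBF))  # 0.393..., 0.632..., 0.864...
```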

  1068. This is assuming, of course, that there are no air leaks anywhere in the actuator, tubing, or solenoid which would cause the trapped pressure to decrease over time.↩︎

  1069. Of course, if there is opportunity to fully stroke the safety valve to the point of process shutdown without undue interruption to production, this is the superior way of performing valve proof tests. Such “test-to-shutdown” proof testing may be scheduled at a time convenient to operations personnel, such as at the beginning of a planned process shutdown.↩︎

  1070. Probability is a quantitative measure of a particular outcome’s likelihood. A probability value of 1, or 100%, means the outcome in question is certain to happen. A probability value of 0 (0%) means the outcome is impossible. A probability value of 0.3 (30%) means it will happen an average of three times out of ten.↩︎

  1071. Lusser’s Law of Reliability states that the total reliability of a system dependent on the function of several independent components is the mathematical product of those components’ individual reliabilities. For example, a system with three essential components, each of those components having an individual reliability value of 70%, will exhibit a reliability of only 34.3% because \(0.7 \times 0.7 \times 0.7 = 0.343\). This is why a safety function may utilize a pressure transmitter rated for use in SIL-3 applications, but exhibit a much lower total SIL rating due to the use of an ordinary final control element.↩︎
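
  The product rule is trivial to compute. This sketch reproduces the example above; the second test case uses assumed reliability values for illustration:

```python
# Lusser's Law: series reliability is the product of component reliabilities.
def system_reliability(reliabilities):
    product = 1.0
    for r in reliabilities:
        product *= r
    return product

print(system_reliability([0.7, 0.7, 0.7]))  # 0.343, as computed above
print(system_reliability([0.999, 0.90]))    # an excellent transmitter paired with
                                            # a mediocre valve: ~0.899 overall
```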

  1072. Yes, maintenance and operations personnel alike are often tempted to bypass the purge time of a burner management system out of impatience and a desire to resume production. I have personally witnessed this in action, performed by an electrician with a screwdriver and a “jumper” wire, overriding the timing function of a flame safety system during a troubleshooting exercise simply to get the job done faster. The electrician’s rationale was that since the burner system was having problems lighting, and had been repeatedly purged in prior attempts, the purge cycle did not have to be full-length in subsequent attempts. I asked him if he would feel comfortable repeating those same words in court as part of the investigation of why the furnace exploded. He didn’t think this was funny.↩︎

  1073. Boiling-water reactors (BWR), the other major design type in the United States, output saturated steam at the top rather than heated water. Control rods enter a BWR from the bottom of the pressure vessel, rather than from the top as is standard for PWRs.↩︎

  1074. Other means of reactor shutdown exist, such as the purposeful injection of “neutron poisons” into the coolant system which act as neutron-absorbing control rods on a molecular level. The insertion of “scram” rods into the reactor, though, is by far the fastest method for quenching the chain-reaction.↩︎

  1075. This appears courtesy of the Nuclear Regulatory Commission’s special inquiry group report following the accident at Three Mile Island, on page 159.↩︎

  1076. The term isotope refers to differences in atomic mass for any chemical element. For example, the most common isotope of the element carbon (C) has six neutrons and six protons within each carbon atom nucleus, giving that isotope an atomic mass of twelve (\(^{12}\)C). A carbon atom having two more neutrons in its nucleus would be an example of the isotope \(^{14}\)C, which just happens to be radioactive: the nucleus is unstable, and will over time decay, emitting energy and particles and in the process changing into another element.↩︎

  1077. It is noteworthy that \(^{238}\)U can be converted into a different, fissile element called plutonium through the process of neutron bombardment. Likewise, naturally-occurring thorium 232 (\(^{232}\)Th) may be converted into \(^{233}\)U which is fissile. However, converting non-fissile uranium into fissile plutonium, or converting non-fissile thorium into fissile uranium, requires intense neutron bombardment at a scale only seen within the core of a nuclear reactor running on some other fuel such as \(^{235}\)U, which makes \(^{235}\)U the critical ingredient for any independent nuclear program.↩︎

  1078. Power reactors using “heavy” water as the moderator (such as the Canadian “CANDU” design) are in fact able to use uranium at natural \(^{235}\)U concentration levels as fuel, but most of the power reactors in the world do not employ this design.↩︎

  1079. The formula weight for UF\(_{6}\) containing fissile \(^{235}\)U is 349 grams per mole, while the formula weight for UF\(_{6}\) containing non-fissile \(^{238}\)U is only slightly higher: 352 grams per mole. Thus, the difference in mass between the two molecules is less than one percent.↩︎
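
  Checking the less-than-one-percent figure, with atomic masses rounded to whole amu as above:

```python
# Relative mass difference between the two UF6 molecules.
M_F = 19
M_light = 235 + 6 * M_F   # 349 g/mol with fissile U-235
M_heavy = 238 + 6 * M_F   # 352 g/mol with non-fissile U-238

print((M_heavy - M_light) / M_light)  # ~0.0086, i.e. less than one percent
```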

  1080. By some estimates, gas centrifuge enrichment is 40 to 50 times more energy efficient than gaseous diffusion enrichment.↩︎

  1081. A typical gas centrifuge’s mass flow rating is on the order of milligrams per second. At their very low (vacuum) operating pressures, a typical centrifuge rotor will hold only a few grams of gas at any moment in time.↩︎

  1082. Three major factors influence the efficiency of a gas centrifuge: rotor wall speed, rotor length, and gas temperature. Of these, rotor wall speed is the most influential. Higher speeds separate isotopes more effectively, because higher wall speeds result in greater amounts of radial acceleration, which increases the amount of centrifugal force experienced by the gas molecules. Longer rotors also separate isotopes more effectively because they provide more opportunity for the counter-flowing gas streams to separate lighter molecules toward the center and heavier molecules toward the wall. Higher temperatures reduce separation efficiency, because gas molecules at higher temperatures are more mobile and therefore diffuse (i.e. mix together) at higher rates. Therefore, the optimum gas centrifuge design will be long, spin as fast as possible, and operate as cool as possible.↩︎

  1083. To give you an idea of just how long some gas centrifuge rotors are, the units built for the US Department of Energy facility in Ohio used rotors 40 feet in length!↩︎

  1084. This means the hollow casing exists in a state of vacuum, with no air or other gases present. This is done in order to help thermally insulate the rotor from ambient conditions, as well as avoid generating heat from air friction against the rotor’s outside surface. Remember, elevated temperatures cause the gas to diffuse at a faster rate, which in turn causes the gas to randomly mix and therefore not separate into light and heavy isotopes as intended.↩︎

  1085. The term zero-day in the digital security world refers to vulnerabilities that are unknown to the manufacturer of the software, as opposed to known vulnerabilities that have been on record with the manufacturer for some time. The fact that Stuxnet 1.x employed no less than four zero-day Windows exploits strongly suggests it was developed by an agency with highly sophisticated resources. In other words, Stuxnet 1.x wasn’t made by amateurs. This is literally world-class hacking in action!↩︎

  1086. Consider what forms of sabotage striking employees might be willing to do in order to gain leverage at the bargaining table.↩︎

  1087. Before you laugh at the idea of losing one’s own body, consider something as plausible as a fingerprint scanner programmed to accept the image of all fingers on one hand, and then that user suffering an injury to one of the fingers on that hand either obscuring the fingerprint or destroying the finger entirely.↩︎

  1088. For the curious, iptables is an administration-level utility application for Linux operating systems, used to edit the ACL rulebase of the operating system’s built-in software firewall. Each line of text in these examples is a command that may be typed manually at the command-line interface of the operating system, or more commonly written to a script file to be automatically read and executed upon start-up of the computer. The -A option instructs iptables to Append a new rule to the ACL. These rules are organized into groups called “chains” which are given names such as INPUT and OUTPUT. While the specific format of ACL rules are unique to each firewall, they share many common features.↩︎
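
  A few illustrative rules in the same format (assumptions of my own, not the rulebase discussed in the main text), each appending to the INPUT chain:

```
# Drop inbound packets claiming a private source address (likely spoofed
# if arriving from the internet):
iptables -A INPUT -s 192.168.0.0/16 -j DROP

# Accept inbound packets belonging to connections we initiated:
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# Drop everything else inbound (rules are evaluated top-down):
iptables -A INPUT -j DROP
```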

  1089. No device connected directly to the internet should bear an IP address within any of these three ranges, and therefore any data packets received from devices with such an address are immediately suspect.↩︎

  1090. If a TCP-capable device receives too many SYN (“synchronize”) messages in rapid succession, it may lock up and refuse to accept any others.↩︎

  1091. These external computers are called clients, and in this network could include the office workstations as well as workstation PCs at corporate headquarters and the regional manager’s office.↩︎

  1092. Data Historians have existed in Distributed Control Systems (DCSs) for many years, and in fact pre-date DMZs. Their purpose during those halcyon days prior to network security concerns was to provide operations and maintenance personnel with long-term data useful for running the process and diagnosing a range of problems. DCS controllers are typically limited in memory, and simply cannot archive the vast quantities of process data that a general-purpose computer can. Their function in modern times as part of an industrial control system DMZ is simply an extension of their original purpose.↩︎

  1093. Like all tools, VPN must be used with care. What follows is a cautionary tale. A controls engineer was hired to do PLC programming at an industrial facility, and the technical staff there insisted he connect his portable computer to the facility’s PLC network via a VPN so that he could work via the internet. This limited his need to be on-site by ensuring he could securely upload, edit, and download code to PLC systems from any location. After completing the job and traveling to a different client to do more PLC programming work, this engineer accidentally logged into the old client’s VPN and placed one of their operating PLCs in Stop mode, causing a loss of control on a major process there, hundreds of miles away from where he was. Apart from the lesson of carefully checking login parameters when initiating a VPN connection, this example shows just how vulnerable some industrial control systems are and how over-confident some people are in tools such as VPN to protect their digital assets! Just because a VPN promises secure communication does not mean it is therefore safe to allow low-level access to control system components along public networks.↩︎

  1094. An example of this strategy in action is an internet-connected personal computer system I once commissioned, running the Linux operating system from a DVD-ROM optical disk rather than a magnetic hard drive. The system would access the optical disk upon start-up to load the operating system kernel into its RAM memory, and then access the disk as needed for application executable files, shared library files, and other data. The principal use of this system was web browsing, and my intent was to make the computer as “hacker-proof” as I possibly could. Since the operating system files were stored on a read-only optical disk, it was impossible for an attacker to modify that data without having physical access to the machine. In order to thwart attacks on the data stored in the machine’s RAM memory, I configured the system to automatically shut down and re-start every day at an hour when no one would be using it. Every time the computer re-booted, its memory would be a tabula rasa (“clean slate”). Of course, this meant no one could permanently store downloaded files or other data on this machine from the internet, but from a security perspective that was the very point.↩︎

  1095. Consider the very realistic scenario of logging in as administrator (or “root” in Unix systems) and then opening an email message which happens to carry an attached file infected with malware. Any file executed by a user is by default run at that user’s level of privilege because the operating system assumes that is the user’s intent.↩︎

  1096. Telnet is a legacy software utility used to remotely access command-line computer operating systems. Inherently unsecure, telnet exchanges login credentials (user name and password) unencrypted over the network connection. A modern replacement for telnet is SSH (Secure SHell).↩︎

  1097. I am reminded of an example from the world of “smart” mobile telephones, commonly equipped with accelerometer sensors for detecting physical orientation. Accelerometers detect the force of acceleration and of gravity, and are useful for a variety of convenient “apps” having nothing to do with telephony. Smart phone manufacturers include such sensors in their mobile devices and link those sensors to the phone’s operating system because doing so permits innovative applications, which in turn makes the product more desirable to application developers and ultimately consumers. It was discovered, though, that the signals generated by these accelerometers could be used to detect “keystrokes” made by the user, the sensors picking up vibrations made as the user taps their finger against the glass touch-screen of the smart phone. With the right signal processing, the accelerometers’ signals could be combined in such a way to identify which characters the user was tapping on the virtual keyboard, and thereby eavesdrop on their text-based communications!↩︎

  1098. An example of this is where a piece of obsolete industrial software runs on the computer’s operating system, for example a data acquisition program or data-analysis program made by a company that no longer exists. If this specialized software was written to run on a particular operating system, and no others, future versions of that operating system might not permit proper function of that specialized software. I have seen such cases in industry, where industrial facilities continue to run obsolete (unsupported) operating systems in order to keep running some specialized industrial software (e.g. PLC programming editors), which is needed to operate or maintain some specialized piece of control hardware which itself is obsolete but still functions adequately for the task. In order to upgrade to a modern operating system on that computer (e.g. an obsolete version of Microsoft Windows), one must upgrade the specialized software (e.g. the PLC programming editor software), which in turn would mean upgrading the control hardware (e.g. the PLCs themselves). All of this requires time and money, much more than just what is required to upgrade the operating system software itself.↩︎

  1099. As a case in point, there are still a great many industrial computers running Microsoft Windows XP at the time of this writing (2016), even though this operating system is no longer supported by Microsoft. This means no more Service Pack upgrades from Microsoft, security patches, or even research on vulnerabilities for this obsolete operating system. All users of Windows XP are “on their own” with regard to cyber-attacks.↩︎

  1100. This raises a potential problem from the perspective of outside technical support, since such support often entails contracted or manufacturer-employed personnel entering the site and using their work computers to perform system configuration tasks. For any organization implementing a strong security access policy, this point will need to be negotiated into every service contract to ensure all the necessary pieces of hardware and software exist “in-house” for the service personnel to use while on the job.↩︎

  1101. With \(R_2\) dropping zero voltage, test point B is now essentially common to the node at the top of the bridge circuit. With test point A already common with the lower terminal of \(R_1\) and now test point B common to the upper terminal of \(R_1\), \(V_{out}\) is exactly the same as \(V_{R1}\).↩︎

  1102. As before, the limiting case of a thermistor fault causes test points A and B to become synonymous with the terminals of one of the remaining resistors, in this case \(R_3\). Since point A is already common with the upper terminal of \(R_3\) and the shorted fault has now made point B common with the lower terminal of \(R_3\), \(V_{out}\) must be exactly the same as \(V_{R3}\).↩︎

  1103. Other possible tests include inspecting the LED status light on that PLC output card channel (a light indicates the HMI and PLC program are working correctly, and that the problem could lie within the output card or beyond to the motor) or measuring voltage at the drive output (voltage there indicates the problem must lie with the motor or the cable to the motor rather than further back).↩︎

  1104. As a child, I often watched episodes of the American science-fiction television show Star Trek, in which the characters made frequent use of a diagnostic tool called a tricorder. Week after week the protagonists of this show would avoid trouble and solve problems using this nifty device. The sonic screwdriver was a similar tool in the British science-fiction television show Doctor Who. Little did I realize while growing up that my career would make just as frequent use of another diagnostic tool: the electrical multimeter.↩︎

  1105. I honestly considered naming this section “Stupid Multimeter Tricks,” but changed my mind when I realized how confusing this could be for some of my readers not familiar with colloquial American English.↩︎

  1106. I have personally measured “phantom” voltages in excess of 100 volts AC, in systems where the source voltage was 120 volts AC.↩︎

  1107. Before there was such an accessory available, I used a 20 k\(\Omega\) high-power resistor network connected in parallel with my DMM’s input terminals, which I fabricated myself. It was ugly and cumbersome, but it worked well. When I made this, I took great care in selecting resistors with power ratings high enough that accidental contact with a truly “live” AC power source (up to 600 volts) would not cause damage to them. A pre-manufactured device such as the Fluke SV225, however, is a much better option.↩︎

  1108. These are AC voltages having frequencies that are integer-multiples of the fundamental powerline frequency. In the United States, where 60 Hz is standard, harmonic frequencies would be whole-number multiples of 60: 120 Hz, 180 Hz, 240 Hz, 300 Hz, etc.↩︎

  1109. There is a design reason for this. Most digital multimeters are designed to be used on semiconductor circuits, where the minimum “turn-on” voltage of a silicon PN junction is approximately 500 to 700 millivolts. The diode-check function must output more than that, in order to force a PN junction into forward conduction. However, it is useful to be able to check ohmic resistance in a circuit without activating any PN junctions, and so the resistance measurement function typically uses test voltages less than 500 millivolts.↩︎

  1110. Since we get to choose whatever \(k\) value we need to make this an equality, we don’t have to keep \(k\) inside the radicand, and so you will usually see the equation written as it is shown in the last step with \(k\) outside the radicand.↩︎

  1111. In engineering, this goes by the romantic name of swamping. We say that the overshadowing effect “swamps” out all others because of its vastly superior magnitude, and so it is safe (not to mention simpler!) to ignore the smaller effect(s). The most elegant cases of “swamping” are when an engineer intentionally designs a system so the desired effect is many times greater than the undesired effect(s), thereby forcing the system to behave more like the ideal. This application of swamping is prevalent in electrical engineering, where resistors are often added to circuits for the purpose of overshadowing the effects of stray (undesirable) resistance in wiring and components.↩︎

  1112. To be sure, there are some gifted lecturers in the world. However, rather than rely on a human being’s live performance, it is better to capture the brilliance of an excellent presentation in static form where it may be peer-reviewed and edited to perfection, then placed into the hands of an unlimited number of students in perpetuity. In other words, if you think you’re great at explaining things, do us all a favor and translate that brilliance into a format capable of reaching more people!↩︎

  1113. It would be arrogant of me to suggest my book is the best source of information for your students. Have them research information on instrumentation from other textbooks, from manufacturers’ literature, from whitepapers, from reference manuals, from encyclopedia sets, or whatever source(s) you deem most appropriate. If you possess knowledge that your students need to know that isn’t readily found in any book, publish it for everyone’s benefit!↩︎

  1114. And multimedia resources, too! With all the advances in multimedia presentations, there is no reason why an instructor cannot build a library of videos, computer simulations, and other engaging resources to present facts and concepts to students outside of class time.↩︎

  1115. Any instructor who can be replaced with a book or a video should be replaced by a book or a video!↩︎

  1116. Of course, we had to have plenty of instruments to install in this loop system, and industrial instruments are not cheap. My point is that the infrastructure of control panel, trunk cabling, field wiring, terminal blocks, etc. was very low-cost. If an Instrumentation program already has an array of field instruments for students to work with in a lab setting, it will not cost much at all to integrate these instruments into a realistic multi-loop system as opposed to having students work with individual instruments on the benchtop or installed in dedicated “trainer” modules.↩︎

  1117. When I built my first fully-fledged educational loop system in 2006 at Bellingham Technical College in Washington state (I built a crude prototype in 2003), I opted for Cooper B-Line metal strut because it seemed the natural choice for the application. It wasn’t until 2009 when I needed to expand and upgrade the loop system to accommodate more students that I happened to come up with the idea of using pallet racking as the framework material. Used pallet racking is plentiful, and very inexpensive compared to building a comparable structure out of metal strut. As these photographs show, I still used Cooper B-Line strut for some portions, but the bulk of the framework is simply pallet racking adapted for this unconventional application.↩︎

  1118. One of the reasons diagnostic skill is so highly prized in industry is because so few people are actually good at it. This is a classic case of supply and demand establishing the value of a commodity. Demand for technicians who know how to troubleshoot will always be high, because technology will always break. Supply, however, is short because the skill is difficult to teach. This combination elevates the value of diagnostic skill to a very high level.↩︎

  1119. Yes, I have actually heard people make this claim!↩︎

  1120. The infamous “divide and conquer” strategy of troubleshooting where the technician works to divide the system into halves, isolating which half the problem is in, is but one particular procedure: merely one tool in the diagnostician’s toolbox, and does not constitute the whole of diagnostic method.↩︎

  1121. Other things could be at fault. An “open” test lead on the multimeter for example could account for both the zero-current measurement and the zero-voltage measurement. This scientific concept eludes many people: it is far easier to disprove an hypothesis than it is to prove one. To quote Albert Einstein, “No amount of experimentation can ever prove me right; a single experiment can prove me wrong.”↩︎

  1122. Jammed turbine wheel in flowmeter, failed pickup coil in flowmeter, open wire in cable FT-112 or pair 1 of cable 3 (assuming the flow controller’s display was not configured to register below 0% in an open-loop condition), etc.↩︎

  1123. I must confess to having a lot of fun here. Sometimes I even try to describe the problem incorrectly. For instance, if the problem is a huge damping constant, I might tell the student that the instrument simply does not respond, because that is what it looks like if you do not take the time to watch it respond very slowly.↩︎

  1124. The instructor may opt to step away from the group at this time and allow the student to proceed unsupervised for some time before returning to observe.↩︎

  1125. I distinctly remember a time during my first assignment as an industrial instrument technician that I had to troubleshoot a problem in a loop where the transmitter was an oxygen analyzer. I had no idea how this particular analyzer functioned, but I realized from the loop documentation that it measured oxygen concentration and output a signal corresponding to the percentage concentration (0 to 21 percent) of O\(_{2}\). By subjecting the analyzer to known concentrations of oxygen (ambient air for 21%, inert gas for 0%) I was able to determine the analyzer was responding quite well, and that the problem was somewhere else in the system. If the analyzer had failed my simple calibration test, I would have known there was something wrong with it, which would have led me to either get help from other technicians working at that facility or simply replace the analyzer with a new unit and try to learn about and repair the old unit in the shop. In other words, my ignorance of the transmitter’s specific workings did not prevent me from diagnosing the loop in general.↩︎

  1126. Anyone can (eventually) find a fault if they check every detail of the system. Randomly probing wire connections or aimlessly searching through a digital instrument’s configuration is not troubleshooting. I have seen technicians waste incredible amounts of time on the job randomly searching for faults, when they could have proceeded much more efficiently by taking a few multimeter measurements and/or stimulating the system in ways revealing what and where the problem is. One of your tasks as a technical educator is to discourage this bad habit by refusing to tolerate random behavior during a troubleshooting exercise!↩︎

  1127. It should be noted that some incentive ought to be built in to the mastery exams, or else students will tend to not study for them (knowing they can always retest with no penalty). This incentive may take the form of time (e.g. mastery re-takes compete for time needed to complete other coursework) and/or take the form of a percentage score awarded on each student’s first attempt on that exam.↩︎

  1128. This latter concept is called the mesh hypothesis: that learning is enhanced when one’s learning style meshes well with instruction given in that style.↩︎

  1129. You cannot pass my original work to anyone else under different terms or conditions than the Attribution license. That is called sublicensing, and the Attribution license forbids it. In fact, any re-distribution of my original work must come with a notice of the Attribution license, so anyone receiving the book through you knows their rights.↩︎