Reliability of PC Automation

(Originally posted Tue. 1/20/98)
Are we to say that "office grade" PCs are junk? Or that it is impossible to find a quality unit? A PC is designed to meet certain environmental and operational conditions. Since an office-grade PC may not be designed to operate in as wide a range of conditions as an industrial unit, it will be more vulnerable to misapplication. In my experience with both industrial and office-grade PCs, the component most likely to fail is the hard disk drive. The industrial PCs I have experience with use the same drives found in their "weaker" cousins, the difference being anti-vibration mounts. Since the drives are the same, their demonstrated reliability should be the same, provided they are applied properly. Industrial PCs are more expensive because effort has gone into designing them to be, well, industrial. To say that industrial units are more reliable because they are industrial, or that office-grade units are not because they are office grade, is not correct. Judge each by what goes into it, and apply each by the same rule.

A designer should analyze the needs of the application and consider the total cost of ownership. For example, we have $5000 industrial units and $1600 office units. The office units offer an order of magnitude greater CPU speed, additional RAM, and more standard options. The industrial units feature ease of maintenance and an order of magnitude greater resistance to environmental fluctuations. However, any environmental conditioning needed for the office-grade unit must be included to produce a fair comparison. The decision to use one type over the other should be approached with the same process as any other design decision.

Todd Wright - end user.
 

Michael Griffin

(Originally posted Tue. 1/20/98)
The problem is that most of the people promoting "PC automation"
begin by claiming that PCs are cheap (although they are a lot more expensive than most of the PLCs that I use). They also like to say that you can buy cheap hardware at the nearest computer store. So it should be no surprise that when people take this literally, PCs get a bad name for themselves. Note here that I am *not* referring to industrial computers when speaking of
PCs, no matter how much they may resemble PCs from a *software* point of view.

It can be pretty hard to justify the cost of using a real industrial computer to someone who doesn't understand the issues when there are so many promoters of cheap hardware. But the people promoting PCs are not the PC manufacturers, they are the software vendors, or people selling their software expertise. When they set a target price for a PC system, they
obviously would like to keep as much of that for themselves as possible.

I'm not saying that top quality desktop PCs have no place on the factory floor. I'm just saying that I believe that their application is limited. I do use them, but only for special applications.

I find it interesting that you sell mainly to OEMs. I've seen quite a bit of equipment controlled by STD bus computers. All except two of these were OEM machines which were produced in fairly large numbers. All of the VME computers I've seen have been used by robot manufacturers (or other similar equipment). The "cheap PC" systems that I've heard about (not including MMI systems) seem to have been built mainly as one-off jobs by consultants. Does anyone know of a good reason for this seeming split in the
market?

*******************
Michael Griffin
London, Ont. Canada
[email protected]
*******************
 

Andrew Ashton

(Originally posted Tue. 1/20/98)
Well let's try to sort this out with a quick reminder of
some of the basic concepts of reliability.

MTBF = Mean Time Between Failure - defined for repairable items
MTBF is the reciprocal of the sum of the failure rates for all the component parts of the system.
(The equivalent concept for non-repairable items is MTTF = Mean Time To Failure)

"MTB 'getting the darn thing back on line'" is called MTTR (Mean Time To Repair).

Availability = 1 - Unavailability

Unavailability (U) - defined as that portion of an
equipment's life for which it is out of service
(And this may be not only for failure, but for upgrades, system maintenance etc.)

U = MTTR/(MTBF + MTTR)

So to minimize Unavailability you must maximize MTBF and minimize MTTR.
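As an illustration, these definitions are easy to turn into a quick calculation. The sketch below (in Python, with entirely hypothetical failure rates and repair time) sums per-component failure rates to get MTBF and then derives availability:

```python
# Availability from MTBF and MTTR, using the definitions above.
# All failure rates and the repair time are hypothetical examples.

failure_rates = {              # failures per hour, per component
    "hard_disk":    1 / 300_000,
    "power_supply": 1 / 500_000,
    "motherboard":  1 / 1_000_000,
}

# MTBF is the reciprocal of the sum of the component failure rates.
mtbf = 1 / sum(failure_rates.values())   # hours

mttr = 4.0                               # hours to diagnose, swap, restart

unavailability = mttr / (mtbf + mttr)    # U = MTTR / (MTBF + MTTR)
availability = 1 - unavailability

print(f"MTBF = {mtbf:.0f} h, availability = {availability:.6f}")
```

Note that halving MTTR (say, by holding a warm spare) cuts unavailability nearly in half, which is why the MTTR measures discussed here matter as much as the MTBF ones.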

*Ways of increasing MTBF*
Factors such as temperature, hermetic sealing, number of gates, environmental conditions and number of functional pins (and hence number of components) are key criteria. The incorporation of VLSI and SMD technology and CMOS circuitry has reduced power consumption and heat generation
whilst reducing pin counts.

*Ways of decreasing MTTR*
- Level of self-diagnostics incorporated
- Modularity
- Availability of spares
- Availability of competent personnel to diagnose and replace faulty unit

Even after equipment selection (choose equipment bearing the marks of appropriate testing bodies: UL / IEC / CSA ...), the system integrator or end user can do much to maximize MTBF by ensuring a satisfactory environment (power, heat, dirt, moisture ingress, vapours, accessibility, surge protection, grounding, decoupling comms via optocouplers, ...)

There are lots of things that you can do about MTTR
- Use the diagnostics that the manufacturer has incorporated (module health etc.)
- Build in self-diagnostics / diagnostic aids (test programs, SCADA overview of hardware etc.)
- Hold an economically appropriate level of spares (what does it cost if this system is down for an hour, a day etc.) preferably warm - i.e. powered up and themselves monitored for failure
- Accessible and understandable up-to-date project
documentation
- Regular backups (data and application!!) with a formal grandfather-father-son rotation
- Formal disaster practice - can you restore from the backups?
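The grandfather-father-son rotation mentioned above can be made concrete with a small scheduling function. This is only an illustrative sketch; the labels and calendar rules (monthly tape on the last day of the month, weekly tape on Fridays) are assumptions, not a prescription:

```python
from datetime import date, timedelta

def gfs_tape_label(d: date) -> str:
    """Which tape to use on day `d` under a simple grandfather-
    father-son rotation:
      - last day of the month -> monthly "grandfather" tape, kept long-term
      - Friday                -> weekly "father" tape, 4-tape rotation
      - any other day         -> daily "son" tape, reused every week
    """
    if (d + timedelta(days=1)).month != d.month:   # last day of month
        return f"grandfather-{d:%Y-%m}"
    if d.weekday() == 4:                           # Friday
        week_slot = ((d.day - 1) // 7) % 4 + 1
        return f"father-{week_slot}"
    return f"son-{d:%a}"
```

For example, a mid-week backup lands on a daily "son" tape that is reused the following week, while the end of the month goes to a "grandfather" tape that is never overwritten.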
Best Regards

Andrew Ashton
Managing Director

ProLoCon
Control - Automation - Petrochemical - Pharmaceutical

ProLoCon (Pty) Ltd
South Africa
Intl Tel +27-11-465-7861
Intl Fax +27-11-465-8455
URL http://www.prolocon.co.za
 

George Robertson

(Originally posted Tue. 1/20/98)
Well said. Also, we typically don't put the PC part of a control system in the middle of the process; typically it's in a control room environment. I find it interesting that Honeywell thought the Dell to be good enough, yet members of this forum seem to fear mass-market PCs.

George Robertson
Saulsbury E & C
[email protected]

> Are we to say that "office grade" PC's are junk? Or is it impossible to
> find a quality unit? A PC is designed to meet certain environmental and
> operational conditions. Since an office grade PC may not be designed to
> operate in as wide a range of conditions as an industrial unit, it will

snip
 
(Originally posted Tue. 1/20/98)
Just to add a bit of confusion: one of my clients (entirely on their own, without my input, I hasten to add) decided to replace an expensive T-xx computer used for MMI on a ship-loader with a London Drugs special (Packard Bell, if you must know). Since the environment is a high-vibration area and quite dusty, including sulphur and other corrosives, I thought to myself "this will not last long," but kept my mouth shut. I'm glad I did, because the computer, although lying on its side in the kick space (with boot marks on the case), is still operating fine after nearly three years!
What does that prove?

Hugo
 

Hevelton Araujo Junior

(Originally posted Tue. 1/20/98)
If my understanding is correct, what you are saying is that no matter what hardware you use, you still have the "software" problem to deal with. I tend to agree with that, since I've experienced lots of problems using Windows NT, although I would not consider it the worst (I had more problems using IBM's AIX). Watching this discussion for a while, a question came to mind that I would like to post for comments. Here in Brazil we are watching some end users request that we ("we" being systems integrators) evaluate the possibility of using Windows NT on a DEC Alpha machine to run critical applications. Their idea is to have a very reliable system, so they are willing to pay the (many) extra dollars for the Alpha box; but since they will be using Windows NT, won't they have the same "software" problems as if they were using a PC? And for the cost of an Alpha you can buy three or four high-end PCs (yes, at least here the difference is that large), have them running in "hot" standby mode, and save the extra cost of hardware (especially hardware maintenance).
If anyone has any experience with Windows NT and Alpha machines, I would appreciate any comments.
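The "hot" standby idea above reduces, at its simplest, to a heartbeat check: the standby machine promotes itself when the primary stops reporting in. The sketch below illustrates only that core logic; the timeout value is arbitrary, and the fencing a real changeover needs (so a recovered primary and the standby are never both active) is deliberately left out:

```python
import time

class StandbyMonitor:
    """Minimal hot-standby logic: this standby node promotes itself
    to active if no heartbeat from the primary arrives within
    `timeout` seconds."""

    def __init__(self, timeout: float = 5.0, clock=time.monotonic):
        self.timeout = timeout
        self.clock = clock            # injectable clock, for testing
        self.last_heartbeat = self.clock()
        self.active = False

    def heartbeat(self) -> None:
        """Call whenever a heartbeat from the primary is received."""
        self.last_heartbeat = self.clock()

    def check(self) -> bool:
        """Return True once this standby should take over."""
        if not self.active and self.clock() - self.last_heartbeat > self.timeout:
            self.active = True
        return self.active
```

Whether three such boxes are cheaper to keep running than one Alpha is, of course, exactly the cost question raised above.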

Hevelton Araujo Junior
IHM Engenharia e Sistemas de Automação LTDA
[email protected]
 

Randy Sweeney

(Originally posted Wed. 1/21/98)
I would assume the STD/VME versus PC market split is historical.

STD and VME provided the first real-time power in a standard package; this replaced the proprietary SBCs and multi-board systems of the '70s and early '80s. The PC, on the other hand, has just reached sufficient speed to supplant the VMEs which formed the core of high-capability systems.

We have ultrahigh-speed packaging equipment with PC-based control cores which replace previous VME and PLC systems... interestingly, the control is hosted in the MMI PC. This is a little uncomfortable even to a PC enthusiast like me!

Seems to work ok though...

Randy Sweeney
Philip Morris R&D
 
(Originally posted Wed. 1/21/98)
I have been following this thread for some time. I believe this discussion started with the September 8, 1997 article by Joseph Garber entitled "The PLC versus the PC."

I believe that the ultimate goal we all are trying to achieve is to arrive at a system design that is appropriate and cost effective for the
application and client we are dealing with. With this in mind, the big question that keeps bugging me is, WHY?

Years ago, computer control started with I/O coming into a central computer which ran the control algorithms, generated alarms, etc. Later, distributed systems and PLCs relieved the central computer of this load for more efficient and reliable operation. Without going into a long history, it seems that all PC control has done is to go back to the old central-computer type of control. If you deal with the typical MIS type of people in your organization, their usual solution is a bigger, faster computer. The only advance is that PCs are now orders of magnitude faster and more powerful than the old central computers. But so are the PLCs and local processor units of DCS systems. MMI and SCADA software packages that run on PCs now have the capacity for all of the sophisticated control, sequencing, PID loops, etc.

So again, WHY? Putting aside the discussion of industrially hardened PCs versus desktop types, why this step back? What advance in the arena of
control systems is being made here? I am sure that there are applications where this approach may be suitable and cost effective, but I do not
understand how, in the main, one can prefer a general purpose type machine such as the PC in an application that would best be served by a machine
more closely aligned to the application.

I think that with the speed and power available, we may be losing a sense of the direction we are going. Being a greybeard, I have been around long
enough to see most of the evolution of computers, PLCs, DCSs, and especially the PC. I, for one, do not see the advantages of PC control. To me, it represents a step back.

In my career I have made many mistakes, but usually learned something. I therefore wait for the slings and arrows of outraged foes and special
interests, but will always ask, WHY.

Jim Lang
 
(Originally posted Wed. 1/21/98)
The same approach can be applied to the software. For whatever reason, I seem to be very fortunate regarding my experiences with PCs in factory automation. As stated in my last correspondence, we have many PCs in operation in my plant. The majority of these are running Windows 3.1 and HMI software. However, I do have a line which uses a PC running Windows NT 4.0 to actually control a machine. The same PC also runs Wonderware. I purposely chose a mediocre platform regarding horsepower (32 MB RAM, 133 MHz Pentium), and I did not experience any difficulties whatsoever installing NT. The machine requires discrete I/O operations, several PID loops for an oven, and open-loop variable frequency drive control. For comparison, I have several other machines of the same type controlled with PLCs. The software and OS have performed without a flaw. The only quirks I experienced occurred during development. While I had both the HMI and control software open in development and runtime modes, and was making runtime edits to both, I did have four instances of the "BSOD" (blue screen of death). This was a one-time, one-day occurrence. After limiting the editing to one application at a time, I have never had a fault since. The scan time is superior to the PLC we are using, and the update of the HMI is more than sufficient. Please note that the line above has been operating 24/7 since 08/97, and during days for two months previously.
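For readers unfamiliar with what "several PID loops for an oven" amounts to in software, a positional-form PID update run once per scan looks roughly like the sketch below. The gains, limits, and anti-windup choice are hypothetical, not the configuration described above:

```python
class PID:
    """Minimal positional PID controller for a slow thermal loop,
    called once per fixed scan interval `dt` (seconds).
    Gains and output limits here are purely illustrative."""

    def __init__(self, kp, ki, kd, dt, out_min=0.0, out_max=100.0):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        """One scan: return the new output (e.g. % heater power)."""
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        out = (self.kp * error + self.ki * self.integral
               + self.kd * derivative)
        # Clamp the output; back the integral off when saturated
        # so it does not wind up while the heater is pinned.
        if out > self.out_max or out < self.out_min:
            self.integral -= error * self.dt
            out = min(max(out, self.out_min), self.out_max)
        return out
```

Whether this runs in a PLC or an NT box, the arithmetic per scan is the same; the debate in this thread is about everything around it.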

To my knowledge, the Windows 3.1 lines have performed without software/OS problems also. As I have previously stated, the only events necessitating a reboot were hardware failures, or an operator roaming outside of the HMI environment (which was addressed). My experience directs attention to detail concerning the actual kernel and drivers used by any software running on the PC. This is not to say OS-related bugs don't exist, only that I have not been affected by them. The most disturbing problem I encountered was "vaporware." This problem reared its ugly head in the decision-analysis phase of the system design. In order to implement a successful system, I would recommend using evaluation tools. Examples include visiting reference sites, phone references, demos, etc. Take some time to develop a test system on a spare PC. Most of the vendors I have dealt with are more or less willing to provide a consignment package. Some would even claim that they would install the system for free, removing it at their cost if unsatisfactory.

Finally, I would like to comment on any relative value or savings. Two areas to consider would be hardware costs and software costs. The hardware
comparison should be fairly simple to make, if proper analysis methods are employed. Concerning software, I tend to analogize things to "efficiency" or how much effort will be necessary to provide a fully integrated system. This effort will be different for every developer. However, I have been exposed to enough systems to know what I don't want. For instance, I don't want to implement my own serial communications routine when there are systems out there which provide canned drivers. The application will drive
what is required in both areas; I just wouldn't limit myself without reason. I am fortunate enough to be allowed the freedom to evaluate newer
techniques and technologies. So far, PC control has worked for me.

Todd Wright - end user.
 

Johnson Lukose

(Originally posted Wed. 1/21/98)
It proves that you only need a 'London Drugs special' to run a critical operation. I am of the opinion that this industrial-computer business is overblown. It is far better to get a 'London Drugs special' with a proper hard disk backup!! Anything requiring more reliability is the realm of PLC, DCS, TMR, etc.

thanks.

Can be reached at;
=S= (M) Sdn. Bhd., Malaysia
Tel : +60 (0)3 7051150
Fax : +60 (0)3 7051170
 

Johnson Lukose

(Originally posted Wed. 1/21/98)
DEC builds good machines. I worked with DEC machines before, some years ago I must admit. You can say "plug and forget." You are right; software is the weakest link, especially in these days of given hardware reliability. If you face any problem, it will be NT playing havoc. NT is not going to have OpenVMS in sight for eons when it comes to rock-solid operating system reliability and recovery, if you ever need it.

The reality is that the users have the money, and common sense says the one with the money to spend is always RIGHT!! You will be up against the wall in this matter. You are going to have a hell of a time convincing them otherwise. The propaganda of PC + Win95/NT has created a market perception of proportions even this list does not realise. It will make everyone a winner if you agree with the users and take the contract. They get the system they want and you get the project you need.

thanks.
 

Tony Robinson

(Originally posted Wed. 1/21/98)
Proves they are lucky at dice and should go directly to Vegas...
Penny wise, pound foolish... I have seen the same case, but there have been others where the cheapie system went down, and took a little more down with it.

Tony
 

Ramer-1, Carl

(Originally posted Wed. 1/21/98)
Just two cents worth I'd like to add to Andrew Ashton's excellent posting on designing in system reliability (local buzzwords) by increasing MTBF and decreasing MTTR.

Since you're most likely NOT using a prototype in an integration project, you can also take advantage of predictive maintenance techniques and replace known failure items just before their expected demise. You can schedule your downtime for more opportune times as well. The actual improvement in MTBF and MTTR is not great, but operations run more smoothly.

Carl Ramer, Sr. Engineer
Controls & Protective Systems Design
EG&G Florida
Kennedy Space Center
 

Cindy Hollenbeck

(Originally posted Wed. 1/21/98)
To all re: this subject -

Extremely reliable PC hardware is definitely available, and not always at twice the cost of standard PCs. The I/O interface cards for a lot of the I/O buses/systems are also well equipped (watchdog timers, etc.).
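A watchdog timer of the kind those cards provide is kicked periodically by the control program; if the kicks stop, the hardware forces outputs to a safe state. The pattern can be approximated in software as below, purely as an illustration (the timeout and fail-safe action are placeholders, and a real hardware watchdog acts independently of the OS, which is the whole point):

```python
import threading

class Watchdog:
    """Software approximation of a watchdog timer: if kick() is not
    called again within `timeout` seconds, run `on_expire` (a real
    I/O-card watchdog would drop outputs in hardware instead)."""

    def __init__(self, timeout: float, on_expire):
        self.timeout = timeout
        self.on_expire = on_expire
        self._timer = None

    def kick(self) -> None:
        """Reset the countdown; the control loop calls this every scan."""
        if self._timer is not None:
            self._timer.cancel()
        self._timer = threading.Timer(self.timeout, self.on_expire)
        self._timer.daemon = True
        self._timer.start()

    def stop(self) -> None:
        if self._timer is not None:
            self._timer.cancel()
```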

> You have a good point about the hardware. But, what about the
> operating systems and software? Most of the application software
> vendors are pushing Windows (apparently Windows is now the leader in
> control operating systems). My direct experience with Windows (2 NT,
> 2Win95), is that it is the worst of any I have used.

Operating systems do NOT do control; they are only a platform. It falls directly onto the provider of the control software to ensure that what they sell will work for the applications they are marketing their product into (e.g., if triple redundancy is required and you can't do it, don't say you can with a PC solution!).

Good control software should not fail, nor be dependent on some other company's O/S code. There are a number of control software vendors
who provide products based on this policy. The vendors who take the easy way out and write to WinNT or Win95 are doing an injustice to the PC control industry - IF they advertise that they have a deterministic, real-time, reliable system that can be used in virtually any control application.

If you're planting a garden, use hand tools - if you're plowing a field, you need a tractor!


Best Regards,
Cindy Hollenbeck
email: [email protected]
http://www.softplc.com
281/852-5366, fax 281/852-3869
 

Randy Sweeney

(Originally posted Wed. 1/21/98)
We have NT running on both Alphas and PC's... the Alpha is an excellent machine and makes a VERY strong database server... unfortunately most industrial software will not run on it (Wintel only!).

Make sure that the application software you want will run on Alpha (native-- not the slower Intel compatibility mode) and make sure that the
software supplier is committed to maintaining the Alpha port - few are.


Randy Sweeney
Philip Morris R&D
 

Christopher Wells

(Originally posted Wed. 1/21/98)
James Lang [[email protected]] wrote:

>I have been following this thread for some time. I believe this
>discussion started with the September 8, 1997 article by Joseph
>Garber intitled "The PLC versus the PC."
>
>I believe that the ultimate goal we all are trying to achieve is to
>arrive at a system design that is appropriate and cost effective for
>the application and client we are dealing with. With this in mind,
>the big question that keeps bugging me is, WHY?

[Wells, Christopher D]
<snip> Jim & others
Jim & others: I was in a design group responsible for PLCs in the '80s, and now I work on embedded designs for power distribution. We have some large volume on our smart meters, and here a dedicated proprietary design does make sense on many fronts. This is our expertise and focus, so we can hone the design. However, it is very expensive to embed designs, again on many fronts.

My involvement is with communications, that is, getting all of these meters to give up their information to an energy monitoring/management and data acquisition system. At the system level this expense becomes overwhelming. Designing operating systems, software, and hardware is too expensive to do on your own. That is where the COTS ("Commercial Off The Shelf") terminology comes to mind. We need to leverage other people's efforts, and that is where the PC environment looks so attractive. Look at Grayhill's open line control platform (Grayhill.com): the whole marketing thrust is based on this concept.

My latest project is to create a LAN/WAN interface for our meter products, and I am struggling with all of these PC reliability issues. I will use one of the leading RTOSs and have looked a lot at off-the-shelf single-board computers. The hope is that I can use a wide variety of PC/104 boards and all the standard communication ports, with their software drivers already finished, for future development, and not have to design them myself.


>Years ago, computer control started with I/O coming into a
>central computer which ran the control algorithms, generated alarms,
>etc. Later distributed systems and PLCs relieved the central
>computer of this load for more efficient and reliable operation.
>Without going into a long history, it seems that all PC control has
>done is to go back to the old central computer type control.

[Wells, Christopher D]
I disagree - take a look at the way client and server applications are being distributed over LANs and WANs - for example look at HP's
Vantera product up on their web site. (interestingly though they use their own HW platform down at the lowest level - 68331 but with COTS RTOS from WindRiver)

<clip>
 
(Originally posted Wed. 1/21/98)
I don't think the advance is on the technology end of things. I think the advance is in the familiarity of the equipment. There are thousands of new
graduates, IS/IT programmers, supervisors, users, operators, etc. who feel perfectly comfortable with a PC but would be afraid to touch anything when standing in front of a PLC or a DCS. I've watched several competent VB/C/C++ programmers get frustrated with the "foreign" style of ladder logic programming. On the bright side: just because a technology is inferior at the moment does not mean it will always stay that way. As long as the PC control companies have competition and are making sales, they will enhance their products. (And more quickly than the PLC companies enhance theirs, since the PC control people do not have a large installed base for which they have to provide an upgrade path.) It's funny how history repeats itself. I seem to remember control engineers questioning the reliability and appropriateness of PLCs when they were first making inroads into process control.

Carl Lemp

James Lang wrote:

> With this in mind, the big question that keeps bugging me is, WHY?
>
> So again, WHY? Putting aside the discussion of industrial hardened PCs
> versus desk top types, why this step back? What advance in the arena of
> control systems is being made here? I am sure that there are applications
> where this approach may be suitable and cost effective, but I do not
> understand how, in the main, one can prefer a general purpose type machine
> such as the PC in an application that would best be served by a machine more
> closely aligned to the application.
 
(Originally posted Wed. 1/21/98)
Our wastewater treatment plant uses dual DEC servers running OpenVMS to run our HMI package, and in the five years they have been online I do not recall the operating system crashing once. We have two plants (four servers) running 24 hours a day. When was the last time Win95/NT ran more than a month without a problem? We are working on a desktop software package to interface NT with VMS so our clients out there who love NT/Win95 can read live data.
 

Barry C. Ezell

(Originally posted Wed. 1/21/98)
Why is it that when one requests information on actual reliability or coverage, customers cannot get real information? I would like to see the data to help me decide on the best system.

Barry

Barry C. Ezell
[email protected]
[email protected]
(804) 975-3525
11 Tennis Dr
Charlottesville, Va 22901
 

A. V. Pawlowski

(Originally posted Wed. 1/21/98)
It appears that as many people are having good luck running PC's with MS OS's as those who are having bad luck. I don't plan to retire to a desert island so I hope (and trust) my luck with these systems will improve.

I might add that in the 20-30 crashes I have experienced since starting to use Windows seriously, I have had only one BSOD. All of the others have been application lock-ups followed by computer lock-ups. Usually the windows fail to update and close first, then the Start menu stays open, and then the mouse cursor freezes.

My guess has been that this indicates memory fragmentation, but I have been advised that incompatible video/graphics cards and drivers (followed closely by Ethernet cards), while otherwise appearing to work fine, can be a significant source of such problems. I will be especially careful with their use in the future.
 