# Reliability of PC Automation

#### Jerry Sanders

(Originally posted Fri. 1/9/98)
I want to get into this whole automation field, but a chief question concerns me (as I'm sure it concerns all of you):

How reliable IS a PC-automated system??? Especially running under Windows 95 and the like? I realize that PCs are probably less reliable than PLCs, but if others are putting their $4-5 million (or more) factories in the hands of these PCs, I would think that reliability would cross their minds more than once, and something convinced them that they are worth the effort. I would like to know from all of you what you think about PC reliability. Please back up your opinions with specific examples, not just intuitive feelings (which is all I have right now), or what you think it probably is. If PCs have been failing you, let me know. If they have been running for 5 solid years with nary a problem, let me know that, too. This is my single biggest concern and I must get that out of the way before I continue.

Thanks in advance,
Dave Sanders

#### Dan Hollenbeck

(Originally posted Monday 1/12/98)
Dave,

> How reliable IS a PC-automated system???

It can be VERY reliable AND VERY un-reliable. It depends on a lot of factors.

> Especially running under Windows 95 and the like?

It would be crazy to automate moving factory machinery with Win 95 and the like - just as it would be crazy for a NASCAR racer to race a Ford Pinto in the Indy 500! The point is that a general-purpose PC (hardware and software) is MUCH less reliable than a PLC. However, if you take special-purpose industrial computers and software, they can be MORE reliable than PLCs (and less expensive too)! The NASCAR is a special-purpose race car compared to the Ford Pinto!

GENERAL

Reliability is the "probability of success of each component in a system" - both software and hardware. Increasing reliability comes from decreasing the complexity and increasing the reliability of each component.

HARDWARE FACTORS

There are 30 vendors that can sell you industrial computer hardware that BEATS the temp, vibration, and EMF specs of PLC vendors, for less cost too. Always use spike and surge protection. Don't use a hard disk.
For $80 you can get a solid-state Flash disk that works the same. Purchase an assembled, tested system instead of components.

SOFTWARE FACTORS
Choose software vendors carefully. Make sure they have years of industry experience and rigorous testing of product. GET reference sites.

Minimize runtime user interaction where possible. Unplug the monitor, keyboard, mouse, etc. Remove the floppy disk and CD-ROM drive, and disable the reset and power switches.

By this time, you can NOT really call this "box" a computer, can you? No, it is a CPU and some memory. Sounds a lot like a PLC, no???

Keep control software design simple. Minimize the risk of other vendors' software affecting control software execution. (This removes vendor finger-pointing later when there is a problem.)

OTHER FACTORS

Remember, just because something can be done with a PC does not mean it SHOULD be done for factory automation. I don't see many NASCARs with Ford Pinto logos.

Regards, Dan

Daniel Hollenbeck
SoftPLC Corporation
http://www.softplc.com
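
Dan's GENERAL point - reliability as the "probability of success of each component" - is the series-reliability rule, and his hardware and software advice amounts to removing components or replacing them with more reliable ones. A quick sketch, with illustrative (assumed, not measured) figures:

```python
# Series reliability: a system works only if every component works, so
# the overall probability of success is the product of the parts.
def series_reliability(components):
    r = 1.0
    for ri in components:
        r *= ri
    return r

# Hypothetical one-year survival probabilities (illustrative only).
desktop = [0.90, 0.95, 0.93, 0.90]   # hard disk, power supply, fan, OS/software
hardened = [0.99, 0.99, 0.98]        # flash disk, industrial supply, minimal software

print(round(series_reliability(desktop), 3))    # 0.716
print(round(series_reliability(hardened), 3))   # 0.96
```

Dropping the hard disk and fan entirely does more for the total than improving any single remaining part.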

#### Michael Griffin

(Originally posted Monday 1/12/98)
We use PLCs for machine control, but we do use a number of PCs for production test equipment. We used the PCs because we needed to do the sort
of data acquisition, number crunching and data logging that was simply not feasible for a PLC. I'm speaking here of actual PCs, not industrial
computers. We have quite a few STD bus computers on OEM equipment that we really don't have any problems with, but I consider them to be a separate class of device, not PCs.

Most of the PCs we have are running DOS-based programs - 'C' (with National Instruments LabWindows to do the screens) or QuickBasic. Most of the applications were written by ourselves, with some of the QuickBasic stuff written by other companies. Lock-ups, crashes, etc. due to application or operating system issues are simply not a problem. This is a factor of careful program design and implementation, plus a lot more testing and validation than we would put into a PLC program (this *is* test equipment after all).

We've had one piece of OEM test equipment we bought about a year ago which runs a Visual Basic program under Windows 95. There were a number of
application program bugs (a similar system being installed this week also seems to have a lot of bugs), but no real complaints that we've been able to trace to Windows 95 itself.

We have one vision system operating on a PC with Windows NT for six months now. We've had no operating system problems that have been brought to my attention, and this machine runs 24 hours a day, seven days a week. I might note though that all the vision processing and control is done on
special hardware, and the PC just displays the results and stores the data. A PC just isn't powerful enough to do the real work.

I might note, though, that none of these PCs happens to be used in a way where safety is an issue. This was not by design, but just the way these processes happen to work.

Our main problem seems to be the reliability and difficulty of using the PC hardware itself. We have hundreds of PLCs, and no matter how badly
they are abused, they seldom fail - and a "failure" is normally not the CPU itself, it's the built-in I/O (which you don't have on a PC). Replacing them is easy. You just pull a new one out of the box, mount it in the panel, reload the program, and away you go.

The PCs seem to be a different story. We decided to use good quality office-type PCs since we believed that we could replace them or get them repaired readily. Our factory environment is quite benign. The plant is clean and the ambient temperature ranges from 20 degrees in the winter to a maximum of 35 degrees on the worst days in the hottest summers. The PCs are normally mounted in EEMAC 12 rated enclosures (NEMA 12 if you're an American) with filtered positive-pressure ventilation (where required), and are not exposed to any shock or vibration. We have hard drive failures, monitor failures, keyboard failures, power supply problems, etc. I haven't collected any hard statistics, but I would expect to have trouble with a PC at least once during its service life, while it's very unusual for us to ever have any hardware (not including I/O) problems with a PLC.

Other than reliability problems, the biggest problems I have with PCs are their cost, their size, and their short product life cycles.

The type of manufacturing we do involves a lot of relatively small machines located close together. For reasons of production line flexibility, each machine has its own control system. This means we use a lot of small (and therefore cheap) PLCs. A normal PC is much more expensive than a small PLC (and even more so when you add in the cost of a run-time package for the PC control software). A PC is also a lot larger than a PLC, and everyone wants the control panel to be as small as possible (or smaller).

The other big problem is that the life cycle for a model of PC seems to be typically about six months, while our industrial equipment will run for a decade or more. This has caused us many headaches already, when we would buy a new PC and suddenly find a CPU fan where the DSP board is supposed to go. To put this in some sort of perspective, look at the multitude of buses (XT, AT, Micro Channel, EISA, VESA, PCI) the "common IBM compatible" PC has gone through in its short life, never mind what you would find in S-100, Apple II, Macintosh, Sun, etc. All these hardware and software changes are wonderful if you are running CAD software on your desk, but they can be a real problem when it comes to maintaining equipment.

It also seems to take a lot more engineering effort to use a PC than it would a PLC. Complain all you like about proprietary PLCs, but when you
buy one, you expect it to work without having to put any real thought into it. When you buy a PC, you *expect* to have problems just getting all the
different elements to work together and sorting out the IRQs and DMA channels.

Having said all this, we do use PCs in the factory. We use them in applications (some test equipment) where, due to their peculiar characteristics, they happen to be the best choice and we are willing to live with their drawbacks. But for the bulk of our applications, small PLCs are the best design choice. I really don't expect this to change much in the foreseeable future. After all, I don't strap a PC to my wrist just because I *could* use one to replace my digital watch.

What I would not be surprised to see though is industrially hardened PC type systems replacing proprietary CNC controllers in many applications.
There is a good size, cost, and functionality match in that market (we also have several CNC controllers). It's probably no coincidence that the examples usually given for "PC based control" seem to be in industries with the need for a lot of CNC control.

*******************
Michael Griffin
[email protected]
*******************

#### Kevin Wixom

(Originally posted Monday 1/12/98)
Jerry,

I just took my brand new (3 months old) Pentium PC back into the shop to be repaired yesterday... again. It will be down for at least 2 weeks. Since I backed up critical data, I can get back on line again. Even if I had a backup machine on standby, it would've taken at least 4-8 hours to get back online.

The question is: is the down time and re-setup time worth the dollars you've saved by purchasing cheaper computers? At home, it IS. At your factory??? I'd bet NOT.

The last software development group I ran estimated 2-6 severe crashes per week requiring at least 1 hour to get back up to speed (using W95 and W95-based development tools). This is NOT even counting bad power supplies, hard disk crashes, etc. This translates into approximately 100 hours/year/developer... which is probably about $10,000 or so. Including the PC cost, that probably gives you about $15,000 to spend on better hardware to be "even". Needless to say, we began purchasing better hardware.
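
Kevin's arithmetic can be reproduced in a few lines. The crash rate, recovery time, and the ~100 hours/year figure are from his post; the $100/hour loaded developer rate is an assumed figure chosen only to show how the numbers land near his estimate:

```python
# Reproducing Kevin's back-of-the-envelope downtime estimate.
crashes_per_week = 2      # his low end
hours_per_crash = 1
weeks_per_year = 50
rate_per_hour = 100       # assumed loaded rate, not stated in the post

lost_hours = crashes_per_week * hours_per_crash * weeks_per_year
downtime_cost = lost_hours * rate_per_hour
print(lost_hours)         # 100 hours/year/developer
print(downtime_cost)      # 10000 dollars, roughly his figure
```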

If the data is not critical, and you can afford the downtime, buy PCs. But, if you need to keep a factory running.... I wouldn't.

Kevin Wixom

#### Michael Whitwam

(Originally posted Tuesday 1/13/98)
Yo,

No way is my experience anywhere near as tragic. In-house, I have 2 NT servers, 3 NT workstations, and 1 notebook running NT Workstation. I have yet to crash the OS. I have crashed VB, VC++, VJ++, Access, etc., but never the OS.

As far as hardware goes, buy decent HW with the correct drivers available, and always soak-test the PC before delivering to site. Using this process, I have had 1 on-site failure in the last 2 years: a hard disk failed within the first month.

Having said all that, I don't use PCs for control, only MMI, management info etc. I use PLCs for the control end of things. They are so cheap today, and have outstanding MTBF figures. Personally, I have never had a PLC CPU fail in all my 20 years in the game.

Michael Whitwam

#### Armin Steinhoff

(Originally posted Tuesday 1/13/98)
We have several PC-controlled systems running in the field. The oldest system has now been running for 6 years without any problems ... really zero problems. It's a simple 19" industrial PC running QNX 4.1 and is controlling a complex material-flow system. The PC works like a PLC from Monday to Friday without switching off the power, so the PC electronics and the disk (only for system start) are always working under optimal conditions ... maybe that is one key point for the success.

Best regards

Armin Steinhoff <[email protected]>
STEINHOFF Automations-& Feldbus-Systeme
+49-6431-529366 FAX +49-6431-57454
http://www.DACHS.net

#### Randy Sweeney

(Originally posted Tuesday 1/13/98)
>If the data is not critical, and you can afford the downtime, buy PCs.
>But, if you need to keep a factory running.... I wouldn't.

>Kevin Wixom

This is scare mongering of the highest order. A PC in a software development environment is entirely different from a PC in a more stable
factory floor application. Software developers (by their very nature) live on and push the edge.

We have hundreds of PC's running Wonderware InTouch (on various WfW 3.11, Win95, and NT 4.0 OS platforms) and our experience has been VERY positive. Availability is excellent and MTBF seems to be about 3 years (7day/24 hr/365 day/yr service). Hardware failures are mainly disk drives, video boards, power supplies - all easily replaced.

Software failures occur mainly when the system software or support drivers are modified. This is done off line prior to installation to reduce the
chance for affecting production.

We use only high end Dells and Compaqs (not their loss leader home units) and we take care to protect the machines from electrical noise,
temp/humidity/chemicals and vibration. We also maintain backup disk images for reliable download into replacement hardware when it is (infrequently) needed.

PC's and reliability are not mutually exclusive terms.

Randy Sweeney
Philip Morris R&D

#### George Robertson

(Originally posted Monday 1/12/98)
Kevin,

In my experience, software development groups crash PCs a whole lot more often than do users. This is due to the nature of the work. If you had a factory floor PC going down that often, you would have the worst luck I've heard of in this industry.

-George Robertson
Saulsbury E & C.

#### Dave Gee

(Originally posted Tuesday 1/13/98)
First, don't even think about Win95. Use Windows NT for the non real-time portions of your application.

Reliability is one of the biggest issues to consider when implementing a control system, whether PC based or not. This is the basis for two of the five rules of PC Based Control:

-- Your control system must survive a hard disk failure. The hard disk (with its moving parts) is the highest failure rate component of a PC. If
your control system will fail just because your disk crashes, then you have a major reliability problem.

-- Your control system must survive the Blue Screen Of Death (BSOD). If you are using a PC and not using Windows NT, then you are giving up most of the data connectivity and diagnostic benefits that the PC system offers. Windows NT, however, is NOT a real-time operating system and is not meant for use in control systems that could endanger people or property. If a Windows NT failure causes your control system to fail, then
you don't have a control system.

> I would like to know from all of you what you think about PC
> reliability. Please back up your opinions with specific examples, not
> just intuitive feelings (which is all I have right now), or what you
> think it probably is. If PCs have been failing you, let me know. If
> they have been running for 5 solid years with nary a problem, let me
> know that, too. This is my single biggest concern and I must get that
> out of the way before I continue.

There are two big considerations here:

(1) If something happens to Windows NT or to your hard disk, you need the control system to keep running.

(2) The vast majority of failures on the floor are mechanical or electromechanical
in nature and do not involve the control system. When these events happen, a PC based control system provides you with a range of tools that will get the machine back into full automatic mode in the shortest possible time. In one installation, the plant has reported average down time reduced from 20 minutes to two minutes.

In short, your control system must survive the common PC failure modes. Given that, the total system reliability benefits of the PC are dramatic.

#### Kevin Wixom

(Originally posted Tuesday 1/13/98)
Scare mongering?? It certainly wasn't meant to be. The original email request asked for objective examples - this is just one of many scenarios that do exist in the real world. Software development IS much different from a stable factory environment... that was part of the point. It may not apply to his specific application at all, but he needs to make that decision - we can't do it for him in a 2-minute email.

No one would develop software on the same machine(s) that are used on the factory floor... would they???

The real scare would be to put the wrong device in the wrong place, think it will work acceptably, and then get fired when it doesn't, or when it causes some catastrophic problem or loss of data.

You need to evaluate the application and select the right tools given your priorities and budget.

Kevin Wixom

#### Don Lavery

(Originally posted Wednesday 1/14/98)
Randy:

I thought I had been following the discussion of PC's vs. PLC's rather well. At least until I read your response. Now I'm confused!

>>Kevin Wixom wrote:
>
> >If the data is not critical, and you can afford the downtime, buy PCs.
> >But, if you need to keep a factory running.... I wouldn't.

> You wrote:
>
> This is scare mongering of the highest order. A PC in a software
> development environment is entirely different from a PC in a more stable
> factory floor application

Scare mongering? Let's examine your claims and decide who is scare mongering. First of all, "...more stable..."? What would cause a PC to be
less stable in a software developer's hands? The hardware? C'mon, I wasn't born yesterday! The software? Okay, I'll buy that. But isn't that
the _purpose_ of software development?

>Software developers (by their very nature) live
> on and push the edge.

Are you trying to say that software developers push hardware to the edge of failure? If we are talking OS software, I think not! They might more
reasonably be expected to push hardware to the limit in terms of capacity (memory, storage, etc.), but to imply that they can cause hardware to fail seems a mite far fetched.

> We have hundreds of PC's running Wonderware InTouch (on various WfW 3.11,
> Win95, and NT 4.0 OS platforms) and our experience has been VERY positive.
> Availability is excellent and MTBF seems to be about 3 years (7day/24hr/365
>day/yr service).

Let me see, now. Three years MTBF for PC's and you say that this is a "VERY positive" experience. Gracious! What was your MTBF for the PLC's you were using? It must have been substantially less than three years.

> Hardware failures are mainly disk drives, video
> boards, power supplies - all easily replaced.

Are you listening to what you are saying? "Hardware failures...disk drives, video boards, power supplies..." Sure, they may be easily
replaced, but that's not the point, is it? I'd like to know how often you were replacing memory and power supplies in your PLC's. 'Fess up, now.
Was it more often than the replacement rate for your PC's?

> Software failures occur mainly when the system software or support drivers
> are modified. This is done off line prior to installation to reduce the
> chance for affecting production.

You're getting better at hiding the truth, Randy. Even if you don't tell us how many times you've experienced PC software failures, at least you're honest enough to admit that it happens. So tell me, how many times did you experience software failure with your PLC's? Probably once or twice, IF that much.

> We use only high end Dells and Compaqs (not their loss leader home units)

Hold the phone! I thought we were supposed to be comparing PC's with PLC's. No wonder I'm so confused. We're actually comparing "high end"
PC's to "home units". Ohhhh. Now I see!

> and we take care to protect the machines from electrical noise,
> temp/humidity/chemicals and vibration.

Yes, yes, yes!!! This is one thing that PLC manufacturers are notoriously bad about! Well, I think they are, aren't they?

> We also maintain backup disk images for reliable download into replacement
> hardware when it is (infrequently) needed.

This is (seriously, now) an excellent procedure - applicable to both PC's AND PLC's.

> PC's and reliability are not mutually exclusive terms.

Well, now! I'm glad we've cleared up all the confusion. I feel so much better now! PC's are what I'm going to recommend from now on. Yes, sir!

Randy, were you REALLY serious or just being humorously sarcastic? If you were being serious, I hope I have not offended you with my tongue-in-cheek reply. It's just that when I read what you wrote, your points were destroying every premise you were trying to create. Sorry.

Don Lavery
Lavery Controls

#### Michael Griffin

(Originally posted Wednesday 1/14/98)
Is this one failure per PC every 3 years, or do you mean one failure every 3 years in total? If the former, then with hundreds of PCs in use, you
must have more than one failure per week! I must obviously be misinterpreting either your statistics or your application, as that degree
of reliability in the sort of environment I am used to would be considered truly awful.

I do use PCs in test equipment, but the limited number used keeps the amount of repair required down to a manageable level. How do you handle repair with that many PCs in service?

*******************
Michael Griffin
[email protected]
*******************

#### Woodard, Ken

(Originally posted Wednesday 1/14/98)
Within the Olin Chlor Alkali Products Division we have close to 40 PCs controlling process systems that are 7x24 operations. The oldest commercial installation is in Brazil, where the PCs are of the desktop 386-16 variety, operating with OMNX 3 software on QNX 2.2 for over seven years with no problems. This is in a complex chemical plant operation with corrosive chemical environments, and the computers are located with the humans. The field-mounted I/O system is Keithley MetraByte hardware.

The second oldest is the same process operating in Charleston, TN for over 6 years: desktop 386-20's in the control room, field-mounted OPTO-22 PAMUX I/O. The list goes on: a three-fuel boiler control, an SO2 generation plant, and the newest, a 250,000 ton/year membrane cell Chlor Alkali plant with four hot-backed-up Pentium 166 PCs located in separate processing areas, with over 1700 hardware I/O and close to 10,000 tags, all Ethernet real-time deterministic networked control using OMNX 4 software on QNX 4.2.

We have over 10 years of experience with PC direct process control. It can be reliable. It can be dependable. It is proven in our OMNX software. It wasn't easy, it required a lot of learning, and if you are contemplating developing it, it isn't for the faint of heart.

#### Tim Philipp

I have used PCs for machine control for the past 3 years.

The control system that I have used is the Automation Intelligence (owned by Pacific Scientific) SERCOS communications controller ISA card. The hard real-time operating system is iRMX, which includes a DOS shell that can run a standard-mode Windows 3.1 based MMI. I have also networked these machines together using NetBIOS.

SERCOS is a multi-axis servo control system.

AI's software product, AML, is one of the most advanced high-level motion languages that I have ever seen. It is object-oriented and event-driven. It is capable of full machine control as well as high-speed motion control.

So with one PC performing motion control, machine (PLC) control, and the operator interface, I have found PCs very reliable. I prefer this setup over the same machine implemented with three computers (motion controller, PLC, MMI).

#### George Robertson

(Originally posted Wednesday 1/14/98)
Don, I really have to weigh in on this one.

You said to Randy:
> Randy:
>
> I thought I had been following the discussion of PC's vs. PLC's rather
> well. At least until I read your response. Now I'm confused!
>
> >>Kevin Wixom wrote:
> >
> > >If the data is not critical, and you can afford the downtime, buy PCs.
> > >But, if you need to keep a factory running.... I wouldn't.
>
> > You wrote:
> >
> > This is scare mongering of the highest order. A PC in a software
> > development environment is entirely different from a PC in a more stable
> > factory floor application
>
> Scare mongering? Let's examine your claims and decide who is scare
> mongering. First of all, "...more stable..."? What would cause a PC to be
> less stable in a software developer's hands? The hardware? C'mon, I
> wasn't born yesterday! The software? Okay, I'll buy that. But isn't that
> the _purpose_ of software development?
>
> >Software developers (by their very nature) live
> > on and push the edge.
>
> Are you trying to say that software developers push hardware to the edge of
> failure? If we are talking OS software, I think not! They might more
> reasonably be expected to push hardware to the limit in terms of capacity
> (memory, storage, etc.), but to imply that they can cause hardware to fail
> seems a mite far fetched.

They don't cause hardware to fail; rather, they are more prone to do the things which are known to cause software problems in the course of development, even to the point of discovering those problems. Products in service typically do not do this.

> > We have hundreds of PC's running Wonderware InTouch (on various WfW 3.11,
> > Win95, and NT 4.0 OS platforms) and our experience has been VERY positive.
> > Availability is excellent and MTBF seems to be about 3 years
> > (7day/24hr/365 day/yr service).
>
> Let me see, now. Three years MTBF for PC's and you say that this is a
> "VERY positive" experience. Gracious! What was your MTBF for the PLC's
> you were using? It must have been substantially less than three years.

Probably so. How many systems stay in place for more than three years without modification or upgrade?

> > Hardware failures are mainly disk drives, video
> > boards, power supplies - all easily replaced.
>
> Are you listening to what you are saying? "Hardware failures...disk
> drives, video boards, power supplies..." Sure, they may be easily
<Snip>
After this, your message became emotional (Humorous?) and, while fun to read, unworthy of comment.

-George Robertson
Saulsbury E & C
[email protected].

The opinions expressed here are not necessarily those of my employer, but they should be.

#### Bob Colburn

(Originally posted Wednesday 1/14/98)
Hello All,

I wanted to make a brief comment on using PCs in automation. The term PC is commonly used to describe a desktop unit with video cards, keyboards, monitors, rotating media, and other peripherals. There are several manufacturers that make industrial versions of the PC. These products are designed to work in harsh environments that the desktop is not intended for. Most of these do not have a monitor or keyboard, and if there is a hard drive, it is a solid-state device. Perhaps a better way to define PC automation is to say that the operating system running on the hardware is one that also runs on your desktop (DOS, WinNT, etc.) and that the hardware supports a PC backplane (ISA, PC104, etc.). Mr. Sulu, raise shields!

Bob Colburn
Grayhill Inc.

#### Malcolm J. Clements

(Originally posted Thurs. 1/15/98)
As an end user of a number of types of control systems, I have been an interested bystander in this discussion. As of yesterday, however, we have had a practical demonstration of the 'reliability' of PC automation. We have a network of PLCs with a batch management system running on the PCs. The PC network has two file servers and two external hard drives, one a mirror of the other. Yesterday the system crashed (we believe one of the external drives failed). Following the handbook, which the system builder wrote for our setup, we have tried to restore the system using the second hard drive. Guess what: it won't do it. All we can get is the C: drive. In the meantime we are trying to locate a new external hard drive whilst seeing if our MIS group or the system builder can come up with an alternative solution. Despite ringing round the country, we are finding that the drive we have is no longer made, and we are having great difficulty identifying a suitable replacement.

We also have a DCS system on site which has suffered an equal number of hardware failures. However, when part of the DCS system fails, it normally continues to operate, and the component can be changed on-line. When a component does fail, it can easily be identified, and the DCS supplier will know, if the original part number is no longer available, whether there is something else that is compatible.

So what's my point?

Well, it seems from my experience that all systems have similar components and therefore similar failure rates. However, the effects of a component failure and the backup available vary dramatically. MTBF figures by themselves, I'm afraid, tell me nothing. Of far more significance is MTB 'getting the darn thing back on line'.

Malcolm J. Clements

#### Meir C. Saggie

(Originally posted Thurs. 1/15/98)
A minor note - that thing is called MTTR - Mean Time To Repair.
From MTBF and MTTR one calculates "availability": the percentage of the time the system is "available" to perform its mission.

> significance is MTB 'getting the darn thing back on line'.
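
Plugging illustrative numbers into the MTBF/MTTR relationship shows Malcolm's point: with the same MTBF, the length of the repair dominates how much of the time the plant is actually running. The repair times below are made up for illustration:

```python
# Availability = MTBF / (MTBF + MTTR): the fraction of time the system
# is able to perform its mission.
def availability(mtbf_hours, mttr_hours):
    return mtbf_hours / (mtbf_hours + mttr_hours)

year = 365 * 24  # one year of continuous service, in hours

# Same 1-year MTBF, very different repair stories:
quick_swap = availability(year, 1)        # spare on the shelf, 1 hour to swap
long_hunt = availability(year, 14 * 24)   # two weeks hunting an obsolete drive

print(round(quick_swap, 4))   # 0.9999
print(round(long_hunt, 4))    # 0.9631
```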

#### Erich Mertz

(Originally posted Fri. 1/16/98)
This discussion amazes me. I sell industrial "WINTEL" systems to OEMs. These systems typically cost twice as much as normal "desktop" PCs. Features include watchdog timers, broad temperature capability, high MTBFs, product longevity, and availability.

These features cost money up front but provide lower total lifetime cost to the typical customer.

Who is my competition? The desktop PC. The guys who are whining about short life, product failure and lack of available replacement parts are the ones who have invested in cheap PC's and want them to perform as well as products that cost twice as much.

What else is new?

Erich Mertz
[email protected]
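
The watchdog timers Erich mentions work on a simple pattern: the control task must "pet" the watchdog within a deadline, and a missed deadline triggers a recovery action. On industrial boards this is a hardware timer that resets the CPU; the sketch below only imitates the pattern in software, with made-up timeouts:

```python
# A software imitation of a hardware watchdog timer. A healthy control
# loop pets the watchdog in time; a simulated hang lets it expire and
# the recovery action fires.
import threading
import time

class Watchdog:
    def __init__(self, timeout_s, on_timeout):
        self.timeout_s = timeout_s
        self.on_timeout = on_timeout
        self._timer = None

    def pet(self):
        # Cancel the pending timeout and start a fresh one.
        if self._timer is not None:
            self._timer.cancel()
        self._timer = threading.Timer(self.timeout_s, self.on_timeout)
        self._timer.daemon = True
        self._timer.start()

    def stop(self):
        if self._timer is not None:
            self._timer.cancel()

tripped = []
dog = Watchdog(0.2, lambda: tripped.append(True))

dog.pet()
for _ in range(3):          # healthy control loop: pets arrive in time
    time.sleep(0.05)
    dog.pet()

time.sleep(0.6)             # simulated hang: no pets, watchdog fires
dog.stop()
print(tripped)              # the recovery action ran exactly once
```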

#### A. V. Pawlowski

(Originally posted Tue. 1/20/98)

You have a good point about the hardware. But what about the operating systems and software? Most of the application software vendors are pushing Windows (apparently Windows is now the leader in control operating systems). My direct experience with Windows (2 NT, 2 Win95) is that it is the worst of any I have used.

----------
Erich Mertz <[email protected]> wrote:

>This discussion amazes me. I sell industrial "WINTEL" systems to OEM's.
>These systems typically cost twice as much as the normal "desktop"
>pc's. Features include watchdog timers, broad temperature capability,
>high MTBF's etc, and product longevity and availability. ...<clip>