Reliability of PC Automation

(Originally posted Mon. 1/26/98)
> Don Lavery wrote:
> Was the questioning due to the fact that PLC's were prone to software
> crashes and hardware failures, or was it pure reluctance to use
> something new and different?

> PC's are NOT noted for working every time,
> even right out of the box?

> It seems to me that those who are currently
> reluctant to implement PC's in a control environment have a pretty
> solid foundation on which to base their opinions.

> > Carl Lemp <[email protected]> wrote:
> > It's funny how history repeats itself. I seem to
> > remember control engineers questioning the reliability and
> > appropriateness of PLCs.

I can confirm this. I come with a background of 20 years with Siemens, in Germany and in India, right from the days before PLCs were born (when they were essentially LCs, i.e. wired-"programmable" modules). The first system I implemented for automation of a complete cement plant had 14
logic centers, but no PLCs! The clients had not much confidence in PLCs. (We had even less! Not because we suspected the robustness of the
electronics, but their functioning in our Indian ambiance!)

The logic centers were constructed modularly, entirely with contactor logic, but suitable for direct replacement with PLCs at a later date
(remove the contactor baseplate, reconnect the terminal wires to the PLC-mounted baseplate). We did implement the PLCs three years later at the same plant.

The first PLC installations (why, even variable-frequency drives) required backup systems to be parallel-wired!

This was not entirely due to experience of the new electronics failing; it was more a mindset.

I now run several companies engaged in designing bus-linked modules, PLCs (CAN bus) and applications concentrating on drives, industrial
controls and BMS. We have implemented several systems with bus-linked modules (up to 120 nodes in some cases), entirely orchestrated for signal exchange and logging by the central PC. The programs were originally on the DOS platform. The PCs work 24 hours a day, 365 days a year. NULL PROBLEMO! We have, at worst, one breakdown call per year, and this is normally due to a bus disconnection.
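For what it's worth, the kind of central scan a PC performs in such a system can be sketched in a few lines. This is only an illustrative sketch; `read_node` is a hypothetical stand-in for whatever driver actually talks to the bus adapter:

```python
import time

def read_node(node_id):
    # Hypothetical stand-in: a real system would query the CAN adapter here.
    return {"node": node_id, "value": 42}

def poll_and_log(node_ids, log):
    """One scan cycle: read every node on the bus and append the readings,
    timestamped, to the central log."""
    for node_id in node_ids:
        reading = read_node(node_id)
        reading["timestamp"] = time.time()
        log.append(reading)
    return log
```

Run continuously, a loop like this is essentially all the "orchestration" a simple signal-exchange-and-logging node network needs.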

Today we offer Windows-based systems. We have also developed intelligent buffer (controller) interfaces to the buses.

Our personal experience is -

1. PLCs are indeed far more robust than commercial PCs. I would not include industrial grade PCs in this comparison.

2. The intelligent buffers developed to link the nodes to the PCs were a result of power considerations (a UPS for a PC costs more than a 24/12 V battery-backup system).

3. PC failures at the hard-disk level have been almost negligible, even though we would normally have placed this as the most failure-prone
area (moving mechanism).

4. Windows OSes (and beyond) add a large amount of code, and demand superior hardware to run efficiently, but they reduce the MTBF for the very same reason. We have had more crashes in Windows-based systems than in DOS-based ones. Clearly, the newer systems are better looking, more powerful, more salable, but also more failure-prone! I am sure this will change too, given time. Given that the newer OSes need more efficient (and less lazy!) programmers, the time needed to master each level of new hardware and software is becoming worse than keeping up with the Joneses!

5. Observation 4 above is NOT a mindset; it is very real.

6. PC programming does permit the use of a variety of tricks (undocumented or otherwise), but PLCs are not beyond these (different methods to
achieve a more efficient end). CANopen poses us enough challenges in carrying out multi-CPU dialogue, much as a PC-level program would in
building tabular compilations of field data and a graphical display of the same simultaneously.

7. Given the tasks of controlling and monitoring, every solution, be it a PC or a PLC, is as good as another, unless the systems are so expensive that they need several decades of longevity without upgrades. The sole criterion should be that the solution clearly meets the need.

I am sure that the above views are debatable, and look forward to more views on the subject.

Best regards to all


From: ICON microcircuits & Software Technologies pvt ltd
12, First Street, Nandanam Extension, Madras-600035, INDIA
Ph: +91-44-4321857
Fax :+91-44-4335578
EMail : [email protected]

Michael Whitwam

(Originally posted Mon. 1/26/98)
Yes, you probably would, but you still get more power per $$, and you get a more widely supported platform.

At 12:37 22/01/98 -0500, "Michael Whitwam <[email protected]>" wrote:
>>I think that the sales of DEC Alpha speak for themselves. A decent
>>modern PC is every bit as good as the DEC. If you want power, go
>>multiprocessor. <clip>

Hevelton Araujo Junior <[email protected]> replied:
>Won't you raise the price to around the Alpha range once you start
>adding processors ? (I'm not being sarcastic, I really don't know)

A. V. Pawlowski

(Originally posted Mon. 1/26/98)
> I would like to see all PLC's come with ethernet or at least 2 comm ports capable of 115 kilobaud serial comms. PC's have had both of these luxuries for years. No wonder they are becoming more popular. <

At any particular point in time, I think you could get higher speed serial ports on PLC's than you could find built-in to PC's. I think Ethernet was supported by PC's earlier than PLC's, but not by much. Of course, the difference is cost for the feature and whether it comes as a built-in item. As far as I know, only Apple includes an Ethernet port built-in to their motherboard and, although most PC's come with them, they are plug-in, separate cost items.
(Originally posted Mon. 1/26/98)
(Quoting Bill Sturm - [email protected])

> I think that one of the reasons for the trend back to centralized control is
> that people are collecting and monitoring much more
> data than in the past.

This is my main reason for it, anyway.

> Many PLC's have very slow networking
> facilities. This makes the PLC to PC interface much more difficult.

19.2 kbaud RS-485 doesn't cut it for me.

> One way to solve this problem is to do the control in the PC, this
> way you can have one tag database and very fast screen
> updates and data acquisition.
> I am not saying that this is the best way, however. I would prefer
> to stay with a more distributed system with many small processors.

I agree distributed would be better, because I have some control functions that require sub-millisecond response times, which aren't compatible with the way the dumb-I/O-only network is handled. I would like to be able to send small, Java-like control applets to my distributed I/O for higher-speed local processing.

> Some of the new PLC's are starting to have faster networking, such
> as ethernet, that makes it easier and more economical to connect with
> a host computer. No more 19.2 kb multi-drop links or $1000.00
> interface cards. I would like to see all PLC's come with ethernet or
> at least 2 comm ports capable of 115 kilobaud serial comms.

I second that!

How about USB ports? They should certainly be inexpensive to add.

Rufus V. Smith
[email protected]

Hevelton Araujo Junior

(Originally posted Tues. 1/27/98)
Agree with you on that. From the discussions here, and from some more studying on my own, I believe that sticking with PC's (vs. Alpha) is better. High-end PC's have very stable hardware these days, and as for software, well, I guess we just have to strip the system down to its minimum, leaving NO room for the operator to mess with the system (out with internet, screen-savers, games, etc.), remove any possibility for operators to get things back onto the system (floppy, CD), and find a way to protect our networks.


Hevelton Araujo Junior

Raghu Krishnaswamy

(Originally posted Tues. 1/27/98)
Use of PC's for control applications might be illegal (in certain cases where the potential for death or injury exists). Surprised? OSHA (Occupational Safety and Health Administration) requires any new system
to be qualified, and in order for the system to be qualified, an MTBF figure is required. I seriously doubt one can get a published MTBF figure from Microsoft for Windows NT. At least, I am not aware of one.

Again this is just one interpretation of the OSHA rules, and I would love to hear different interpretations.

I am running an HMI on a Pentium PC under NT 3.51 to monitor (not control) a process. The system has performed reliably, with a few hang-ups here and
there. We never had any problem with MS-DOS, on which we were running the previous HMI. Can one automatically conclude that DOS is superior to NT?
Probably not. NT is going through its evolution, as DOS did. In order for NT to be accepted by engineers, Microsoft should adapt to the world of engineers. They need to accept the fact that engineers are different from accountants, and that developing and marketing a product for engineers is a different ball game altogether.

Raghu Krishnaswamy
Senior Project Engineer
Westinghouse Electric
Commercial Nuclear Fuel Division
Columbia South Carolina
(Originally posted Tues. 1/27/98)
I just finished a job with a desktop PC running a pharmaceutical batch process. I used Taylor Waltz with Taylor Process Windows. We are using
BECKHOFF DeviceNet I/O with the SS driver card. We are running 12 serial ports. On the serial ports we are talking to 8 Total Control 6" colour
QuickPanels. We are talking to 3 other PLC's for communication and control. We have a parallel-port ZIP drive. We have 256 MB of memory. We run the control kernel, log a large number of variables, and then plot the variables for each batch. This is all done on the same desktop
box. I installed NT 4.0 with Service Pack 3. I am not a system administrator. It has been running for 4 months now and we do not have
the blue-screen problem.

The blue-screen problem I have seen usually occurs only on memory-deficient machines; I mean machines with under 128 MB of memory. Our
application never requires over 44 MB according to the NT Task Manager, but NT does some funny things with under 128 MB.

I believe NT is stable enough for control with proper installation. I might add that I have seen some really strange reliability problems
on AB PLC-5's. I have done a lot of those also. PLC-5's still fault and quit on division by zero. I would not call that fault tolerant. I have
had remote racks quit communicating with some, but not all, of the analog cards in a rack.
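The division-by-zero fault is a good example of the difference in failure behavior: the PLC processor halts outright, while PC-side code can trap the condition and degrade gracefully. A minimal sketch (the scaling function and fallback value are hypothetical illustrations, not any vendor's API):

```python
def scale_reading(raw_counts, span_counts, engineering_span):
    """Scale a raw analog value to engineering units without letting a
    zero span fault the whole controller."""
    if span_counts == 0:
        # Return a safe default instead of halting, which is what a
        # faulted PLC-5 processor would effectively do.
        return 0.0
    return raw_counts * engineering_span / span_counts
```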

My experience with Taylor Waltz, NT, Process Windows, QuickPanels and a desktop machine is that the combination seems as reliable as the PLC-5's I have had to work with. The desktop solution was much cheaper to purchase, with much friendlier software.

There is my two bits worth on control with a PC and NT. I did it and it works.

Owen Day

Armin Steinhoff

(Originally posted Tues. 1/27/98)
[email protected] wrote:
>How about USB ports? They should certainly be inexpensive to add.

Yes, that's right for the hardware ... but have you read the USB specification? It contains a lot of 'technology prose' about the USB protocol, which is really not easy to implement. It is a lot of work to realize, so it can't be inexpensive :-( .

BTW, is USB used more in the field than FireWire? For which bus system are more devices available today?

Armin Steinhoff

Michael Whitwam

(Originally posted Tues. 1/27/98)
I think you have hit the nail on the head here. When last did QNX add a new scanner driver, or support for a 32-bit sound card?

Stick to tried and tested hardware, and I am sure that NT will provide you with many happy customers. Experiment with new-fangled add-ons in your own office or test facility, not on the customer's mission-critical systems.

Michael Whitwam
[email protected]
> A new piece of hardware with a poorly written kernel-mode device driver is
> another matter. It is hard for the OS to protect against this.
> QNX guards against this by running all device drivers as user-level
> processes. I have seen many reports that NT has decent soft real-time
> performance, at least on a Pentium II. But many of these reports caution
> that a poor device driver could disable interrupts for a long time
> and screw up its response times.

A. V. Pawlowski

(Originally posted Tues. 1/27/98)
It has been pointed out to me that I was wrong in my comment below, and PC's have indeed commonly supported both Ethernet and high-speed serial ports (>57.6K) since the mid-to-late 1980's, i.e. many more years than PLC's. I should have checked my facts before I opened my mouth. I apologize to Bill Sturm and anyone else who may have been upset by my post.

BTW, I believe both PLC's and PC's have their place in today's control systems. For me, the choice is application-dependent. I also think that some PLC manufacturers are charging exorbitant prices for Ethernet capability.

I wrote:
At any particular point in time, I think you could get higher speed serial ports on PLC's than you could find built-in to PC's. I think Ethernet was supported by PC's earlier than PLC's, but not by much. Of course, the difference is cost for the feature and whether it comes as a built-in item. As far as I know, only Apple includes an Ethernet port built-in to their motherboard and, although most PC's come with them, they are plug-in, separate cost items.
(Originally posted Tues. 1/27/98)
Actually, the chips for implementing USB are relatively inexpensive. The problem (from my reading of the specs) is that USB was designed with PC's and their peripheral devices in mind, not remote sensing, etc. It would be fairly easy to create a PC USB interface, but putting one into, say, a photoelectric sensor would be very difficult. I'll suggest a few reasons:

1) The physical size of the chip set: the chip set I last looked at (Intel's?) was two or three fairly large devices, plus interface hardware.
2) The data-exchange protocol is designed for sending data to a printer or getting data from an optical scanner, i.e. non-deterministic,
with large data packets and a limited number of nodes per network.
3) The standard connector is huge and not practical for plant applications.

Just my two-cents' worth...

Tom Kirby
Richmond Automation Design, Inc.
804-262-6421 FAX
[email protected]


Michael Griffin

(Originally posted Wed. 1/28/98)
At 07:28 24/01/98 -0000, you wrote:
>I have seen a poorly written RLL (and STL too) crash a Siemens S5 PLC....
>happens all the time if you don't keep your variable addressing straight
>(especially mixed type variables of different length/structure) using
>Step5. Normally this happens after download and the little run light on the
>PC goes out!

Crashing from addressing variables on an S5? This is certainly a new one on me, unless you are referring to faulting the processor by attempting,
for example, to write to a Data Word that doesn't exist. In that case, though, the processor does not crash; it detects the error in your program and shuts itself down in a controlled stop.

I don't use Siemens' "Step 5" software. I use someone else's programming software, so perhaps the software I use simply doesn't let me make the types of mistakes you are talking about. What sort of variable addressing are you talking about? Load and Transfer instructions automatically adjust to byte or word size, while the software I use simply won't let me enter an incorrect function block parameter size.

I've done quite a bit of S5 programming, and I'm not sure what it is you are describing. Could you explain what you mean a little further?

Michael Griffin
London, Ont. Canada
[email protected]
(Originally posted Wed. 1/28/98)
The problem was with Step 5 combined with the S5 processor: no local type/map checking in the programmer, and no memory-map checking in the PLC.

You can co-locate structures on top of each other, and Step 5 does not warn you or provide error checking. Once downloaded, the program could crash the PLC if the data types and variable contents resulted in invalid words for a
particular operation.

This was several years ago and Siemens has fixed the problems since then...

Randy Sweeney

Johnson Lukose

(Originally posted Mon. 1/26/98)
>What you say is true. However my experience is that PLCs originally of
>American origin, tend to be far more idiot proof than their European
>counterparts. Have you ever managed to crash Modicon 984?

And similarly robust was the Telemecanique Serie 7.
(Originally posted Tues. 2/03/98)
The discussion of the relative reliability of different platforms would be much better resolved by quantitative data than by opinion and
anecdote. Isn't there anyone who knows how many operating hours they have on a PC running NT, and how many failures, and who would be willing to share that data with this mailing list? Would anyone be willing to share their reliability experience on other platforms (such as PLCs)?

Once we have this data, and if we have it from several sources, I personally will be glad to do the MTBF and confidence-limit calculations as
a contribution to the discussion posted on this list. If we can get further data on what the failure modes were and whether recovery was
possible, then we can deal with the question of under what circumstances each platform is usable.
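For anyone who wants to try the arithmetic themselves: treating failures as a Poisson process, the point estimate is MTBF = T/r for T operating hours and r failures, and a one-sided lower confidence bound for a time-truncated test is 2T / chi-square(confidence; 2r + 2). A sketch using only the Python standard library; the Wilson-Hilferty formula used here is an approximation to the chi-square quantile, not an exact value:

```python
from statistics import NormalDist

def chi2_quantile(p, k):
    """Wilson-Hilferty approximation to the chi-square quantile with k
    degrees of freedom."""
    z = NormalDist().inv_cdf(p)
    return k * (1.0 - 2.0 / (9.0 * k) + z * (2.0 / (9.0 * k)) ** 0.5) ** 3

def mtbf_point_estimate(total_hours, failures):
    """Simple point estimate: total operating hours per failure."""
    return total_hours / failures

def mtbf_lower_bound(total_hours, failures, confidence=0.90):
    """One-sided lower confidence bound on MTBF for a time-truncated test."""
    dof = 2 * failures + 2
    return 2.0 * total_hours / chi2_quantile(confidence, dof)
```

For example, three failures in three years of round-the-clock operation (26,280 h) give a point estimate of 8,760 h, but a 90% lower bound of only about 3,900 h; the confidence limits matter as much as the headline MTBF.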
A 4-5 million dollar factory is peanuts. Think 4-5 billion. I spent 30 years in process control instrumentation. I never used a PC; distributed systems and PLCs, plenty. It is impossible to program a fault-proof lengthy piece of software. Remember that missing comma at NASA some years ago. Twelve of us went to Philadelphia to work with one of the world's top distributed systems, for a nuclear power plant: it failed. We then went to Phoenix to work with the same system: OK. Believe me, the authorities in Philadelphia were not newbies, and soon there were plenty of them: nyet, comrade!
Yes, the first loop in an industrial system is expensive because of the minimum requirements. As the plant grows into the millions/billions, the overall automation cost stays at around 5%.
Let's talk seriously. Take a blank PC: no DOS, nothing. Install DOS, an operating system, complete error messaging, and the same for all kinds of I/O: that is re-inventing the wheel with rope and nails. Yes, there are systems of that kind on the market. The maths that come with them are university or textbook maths; do not bet your head on them. Numerical maths are unsure friends unless you are an expert (I know a great deal about numerical approximation of functions and about what runs inside computers; I also use scientific software packages). Discouraged? No.
Now, the best piece of software that you might add on top of Microsoft is Microsoft-dependent. It is a monopoly of nuts and bolts just thrown together.

Examples of Microsoft stupidities: Excel is the math tool of Windows, yet when you write a math page in Excel it is impossible to use the character font that is in Word. So for a rich math page I use Publicon on top of Excel; nicely enough, a double click reopens Publicon. Excel does not accept implied multiplication, so 3x must be entered as 3*x. 3x-5y equals -5y+3x, but Excel does not accept -5y+3x: an idiotic space before -5y is required. The idiosyncrasies are endless with Microsoft. Error... error... show me where! I like Excel, but it is not faultless.
Another example of software incompatibility: in my approximations I am a great user of the Thiele approximation (it works where polynomial approximation works, and also works where polynomial approximation does not). The last convergent may be negative, .../-C)))))), which Excel digests but another scientific package does not; there it must be written .../(-C))))))).
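To make the Thiele example concrete: such an approximation is a continued fraction evaluated from the innermost term outward, and a negative last convergent is no problem for the arithmetic, only for some packages' formula parsers. A small sketch with made-up coefficients (not the author's data):

```python
def eval_thiele(x, x_points, rhos):
    """Evaluate the continued fraction
    rhos[0] + (x - x_points[0]) / (rhos[1] + (x - x_points[1]) / (...))
    from the innermost term outward."""
    value = rhos[-1]  # the last convergent; may be negative (the .../-C case)
    for i in range(len(rhos) - 2, -1, -1):
        value = rhos[i] + (x - x_points[i]) / value
    return value
```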
All that is to say that PCs are not designed for plant automation. The loop structure, copied from analog systems, is extremely complex. Millions of man-hours just cannot be imitated overnight. PCs are suitable for data acquisition and plant optimization, but not for closing loops. For each loop I would use individual (probably digital) modules, or a multi-loop system.
For multi-loop systems, I incline toward Foxboro. It will do a lot of logic too. Some years ago, for logic from small to large sizes, there was Reliance. Fifteen years ago there was an eight-loop module, redundant, extensible, powerful beyond imagination, and so simple to use: too advanced, and not on the market anymore. A system like Foxboro is the continuation of 60 years of practice, fully compatible with whatever may be adjoined to it.
I started in the profession using relays; then PLCs appeared: they could not do this, could not do that. I left the profession in 1995. I had no math coming with the PLC. One particular client had installed three-leg RTDs terminating on three terminals, but on a two-leg bridge, so I told him he would be off by about 3 °C. I supplied a small correcting polynomial, but there was no math facility to run it; the client had to wait for a three-leg bridge card, if there was enough demand. In 1992 a fellow worker of mine tore his hair out trying to get the derivative term to work on a PLC (a great name). If you need derivative action, that's because you need it to work. There was no way with that system, because PID is a three-term controller: it's like multiplying the diameter by 3 or by 4 instead of by pi!!!
You see, there are at least two kinds of maths: maths that work, and college maths. The same philosophy applies to process control: proven methods versus imagination. Whichever you select, make sure you square off the limits and the bugs.
[email protected]
Yes, many factories run on PCs. But they run only part of the year, so downtime at any time is no problem.

Christopher Blaszczykowski

Let's create some pictures:

1. PC/PLC combination
2. PC only

1. PC/PLC combination

Consider a manufacturing environment with several production lines. First of all, you have to consider the fact that any hardware combination is, and always will be, more reliable than any software. THAT'S A FACT! With a PLC you have a choice of redundancy combinations, which increase both stability and reliability, as well as the option of running in manual, semi-automatic and automatic modes. Maintenance is much easier. Even if the PLC and PC fail, you can still run manually, especially if you provide the safety of single devices controlling single processes. Example: a PID loop running on hardware, with communication to the PLC, but capable of running independently. In semi-automatic mode, all connected devices depend on the PLC program, and all variables can be set from the PLC ladder logic. In automatic mode you have the security of stable programming and a "pretty graphical interface", but the core of the program still runs from the PLC!

Another factor is the ability to secure the program for quick retrieval from EPROM. In this situation, even if one of the elements fails, you can still run production and have time to correct problems without losing too much production time, which can otherwise run into millions. This also allows safe, uninterrupted PC maintenance, a very important factor, especially if you store a lot of data. Lack of such maintenance can cause total loss of valuable engineering resources for R&D, production, business and process. Also very important is securing resources on the network. By allowing other network servers read-only access, you can prevent any unwanted access to those resources, for the same reasons as above. I know this is an insufficient explanation, but that is it for now; later I will explain it in more detail so you can have the full picture.

2. PC only

Now consider the same or a similar operation based only on PCs.

A. Under no circumstances can you perform the kind of multi-level operation on a PC that you can with PLCs. While a PLC architecture allows you to create a fairly complex chain of CPUs communicating at different levels and controlling defined functions, a PC is limited by the number of boards you can insert into the motherboard. In many cases the slots are limited, or there is a problem finding a PC with more than 2 or 3 slots. So how many slots can you use for I/O?

B. In the majority of cases, industrial PCs are obsolete, and it is hard to find either operating systems or parts for them. Besides, they cost too much.

Now, let's assume that you successfully implement a PC-only operation, and a failure occurs. You are stuck! You may lose valuable data, and production stops until you fix the problem, which may take considerable time. I don't think I have to explain the consequences. Another problem: someone introduces a virus somewhere in the network environment. I don't think that requires any more explanation!

Let's talk about the networking environment. If every production line is networked through PLCs, in the majority of cases it is low-maintenance, and it is hard to interrupt the entire process except by a loss of electricity. In my long practice I have never heard of a virus-infected PLC. In the case of PCs, anything can happen. Let's make another assumption: all of the PCs running the lines are networked. You then have several PCs which can cause problems. In the simple case, only one line is out for several hours; in the worst case, the entire networked factory can be out for several days until all of the elements are fixed.

Then there is the matter of network resources, a major problem, especially if engineering servers and production servers are connected to a business server. In my practice I was forced to limit and block resources at each level to prevent the business section from accessing and abusing production and engineering resources. Not only that: the majority of viruses come from exactly the business section! In some cases you can find around 90 viruses or virus sources. Imagine the effect of that on production. With PLCs it can be prevented much more easily and quickly. It is a necessity to block all resources, and to limit them to read-only where necessary! Otherwise you are in trouble.

Christopher Blaszczykowski
[email protected]
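The PID loop described in the PC/PLC picture above, one that takes its setpoint from the PLC but keeps running if communications fail, can be sketched roughly as follows. The function names and gains are hypothetical illustrations, not any vendor's API:

```python
def select_setpoint(plc_ok, plc_setpoint, local_setpoint):
    """Use the PLC's setpoint while communications are healthy;
    hold the local setpoint otherwise, so the loop keeps running."""
    return plc_setpoint if plc_ok else local_setpoint

def pi_step(setpoint, measurement, integral, kp=2.0, ki=0.1):
    """One scan of a simple PI loop executing on the local hardware.
    Returns the controller output and the updated integral term."""
    error = setpoint - measurement
    integral += error
    return kp * error + ki * integral, integral
```

The point of the design is that the loop hardware never depends on the PLC or PC being alive; the supervisory layers only adjust its setpoint.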
I guess that you are already getting the gist from the other respondents that it really depends on the application and how it is applied.

The main rules of thumb that I can suggest are:

Shy away from the Win9x OSes; they are more for office/home use. Windows 2000 (NT) seems to be fairly stable, but is still not deterministic or real-time. VenturCom seems to have an application that is used by most of the big SoftPLC manufacturers.

Use good-quality PC's. I have successfully used DELLs for many years. Beware of some big-name suppliers that use bespoke hardware and bus systems - YOU KNOW WHO YOU ARE. Make sure that the PC uses standard parts that you can easily get hold of. Try to stay away from hard-disk storage; use flash disks, etc.

Weigh up the risks. If your a&*e is on the line, stick to the old faithful PLC. In my 14 years at this game I have never had a PLC CPU fail. I/O, yes, but with good MTBFs nonetheless.

I have had SoftPLC's that needed weekly reboots, and NT SCADA packages that "freeze", and all of them sat on a standard OS (from you know who).

But to be more positive, I am currently investigating the use of (well, 30+) European SoftPLC's (that can have a PLC slot card put in them) as the main controllers in a very large automation project in the UK.

Yariv Blumkine

Hi guys,
SoHaR (our name is derived from a contraction of Software and Hardware Reliability) is a company dedicated to the analysis and improvement of reliability and availability in critical systems (nuclear reactors, airborne systems, etc.).
I came across your group's debate regarding reliability in the automation industry, and was wondering what the major concerns and problems are when you address reliability and availability issues.
I'm not trying to sell you anything (yet), but to understand your pains and debates when facing these issues.
Maybe later, we could help.
I would very much appreciate responses to [email protected]

Thank you,
Yariv Blumkine