Reliability of PC Automation
By Jerry Sanders on 12 April, 2000 - 4:32 pm

(Originally posted Fri. 1/9/98)
I want to get into this whole automation field, but a chief question concerns me (as I'm sure it concerns all of you):

How reliable IS a PC-automated system??? Especially running under Windows 95 and the like? I realize that PCs are probably less reliable than PLCs, but if others are putting their $4-5 million (or more) factories in the hands of these PCs, I would think that reliability
would cross their minds more than once, and something convinced them that they are worth the effort.

I would like to know from all of you what you think about PC reliability. Please back up your opinions with specific examples, not just intuitive feelings (which is all I have right now), or what you think it probably is. If PCs have been failing you, let me know. If they have been running for 5 solid years with nary a problem, let me know that, too. This is my single biggest concern and I must get that out of the way before I continue.

Thanks in advance,
Dave Sanders

By Dan Hollenbeck on 12 April, 2000 - 4:35 pm

(Originally posted Monday 1/12/98)
Dave,

> How reliable IS a PC-automated system???

It can be VERY reliable AND VERY unreliable. It depends on a lot of factors.

> Especially running under Windows 95 and the like?

It would be crazy to automate a moving factory machine with Win 95 and the like.

Just as it would be crazy for a NASCAR racer to race a Ford Pinto in the Indy 500!

The point is that a general purpose PC (hardware and software) is MUCH less reliable than a PLC. However, if you take special purpose industrial computers and software, they can be MORE reliable than PLCs (and less expensive too)! The NASCAR racer is a special-purpose car compared to the Ford Pinto!

GENERAL
Reliability is the "probability of success of each component in a system" - both software and hardware.

Increasing system reliability comes from decreasing the complexity and increasing the reliability of each component.
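A minimal sketch of that idea in Python (my illustration, not from the original post): if every component must work for the system to work and failures are independent, system reliability is the product of the component reliabilities, so both fewer components and better components help.

# Series-system reliability as the product of component reliabilities.
# Assumes every component must work and failures are independent;
# the numbers below are made up for illustration.
def system_reliability(component_reliabilities):
    r = 1.0
    for p in component_reliabilities:
        r *= p
    return r

print(system_reliability([0.99] * 10))  # ten 99% parts -> ~0.904
print(system_reliability([0.999] * 5))  # five 99.9% parts -> ~0.995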

HARDWARE FACTORS
There are 30 vendors that can sell you industrial computer hardware that BEATS the temp, vibration, and EMF specs of the PLC vendors, for less cost too. Always use spike and surge protection. Don't use a hard disk; for $80 you can get a solid state Flash disk that works the same. Purchase an assembled, tested system instead of components.

SOFTWARE FACTORS
Choose software vendors carefully. Make sure they have years of industry experience and rigorous testing of the product. GET reference sites.

Minimize runtime user interaction where possible. Unplug the monitor, keyboard, mouse, etc.; remove the floppy and CD-ROM drives, and disable the reset and power switches.

By this time, you can NOT really call this "box" a computer, can you? No, it is a CPU and some memory. Sounds a lot like a PLC, no???

Keep control software design simple. Minimize the risk of other vendors' software affecting control software execution. (This removes vendor finger pointing later when there is a problem.)

OTHER FACTORS

Remember, just because something can be done with a PC does not mean it SHOULD be done for factory automation. I don't see many NASCARs with Ford Pinto logos.

Regards, Dan

Daniel Hollenbeck
SoftPLC Corporation
http://www.softplc.com

By Michael Griffin on 12 April, 2000 - 4:38 pm

(Originally posted Monday 1/12/98)
We use PLCs for machine control, but we do use a number of PCs for production test equipment. We used the PCs because we needed to do the sort of data acquisition, number crunching and data logging that was simply not feasible for a PLC. I'm speaking here of actual PCs, not industrial computers. We have quite a few STD bus computers on OEM equipment that we really don't have any problems with, but I consider them to be a separate class of device, not PCs.

Most of the PCs we have are running DOS based programs - 'C' (with National Instruments LabWindows to do the screens) or QuickBasic. We wrote most of the applications ourselves, with some of the QuickBasic stuff written by other companies. Lock-ups, crashes, etc. due to application or operating system issues are simply not a problem. This is a factor of careful program design and implementation, plus a lot more testing and validation than we would put into a PLC program (this *is* test equipment after all).

We've had one piece of OEM test equipment we bought about a year ago which runs a Visual Basic program under Windows 95. There were a number of
application program bugs (a similar system being installed this week also seems to have a lot of bugs), but no real complaints that we've been able to trace to Windows 95 itself.

We have had one vision system operating on a PC with Windows NT for six months now. We've had no operating system problems that have been brought to my attention, and this machine runs 24 hours a day, seven days a week. I might note though that all the vision processing and control is done on special hardware, and the PC just displays the results and stores the data. A PC just isn't powerful enough to do the real work.

I might note though, that none of these PCs happens to be used in a way where safety is an issue. This was not by design but just the way these processes happen to work.

Our main problem seems to be the reliability and difficulty of using the PC hardware itself. We have hundreds of PLCs, and no matter how badly
they are abused, they seldom fail - and a "failure" is normally not the CPU itself, it's the built-in I/O (which you don't have on a PC). Replacing them is easy. You just pull a new one out of the box, mount it in the panel, reload the program, and away you go.

The PCs seem to be a different story. We decided to use good quality office type PCs since we believed that we could replace them or get them repaired readily. Our factory environment is quite benign. The plant is clean and the ambient temperature ranges from 20 degrees in the winter to a maximum of 35 degrees on the worst days in the hottest summers. The PCs are normally mounted in EEMAC 12 rated enclosures (NEMA 12 if you're an American) with filtered positive pressure ventilation (where required), and are not exposed to any shock or vibration. We have hard drive failures, monitor failures, keyboard failures, power supply problems, etc. I haven't collected any hard statistics, but I would expect to have trouble with a PC at least once during its service life, while it's very unusual for us to ever have any hardware (not including I/O) problems with a PLC.

Other than reliability problems, the biggest problems I have with PCs are their cost, their size, and their short product life cycles.

The type of manufacturing we do involves a lot of relatively small machines located close together. For reasons of production line flexibility, each machine has its own control system. This means we use a lot of small (and therefore cheap) PLCs. A normal PC is much more expensive than a small PLC (and even more so when you add in the cost of a run-time package for the PC control software). A PC is also a lot larger than a PLC, and everyone wants the control panel to be as small as possible (or smaller).

The other big problem is that the life cycle for a model of PC seems to be typically about six months, while our industrial equipment will run for a decade or more. This has caused us many headaches already when we would buy a new PC and suddenly find a CPU fan where the DSP board is supposed to go. To put this in some sort of perspective, look at the multitude of busses (XT, AT, Microchannel, EISA, VESA, PCI) the "common IBM compatible" PC has gone through in its short life, never mind what you would find in S-100, Apple II, Macintosh, Sun, etc. All these hardware and software changes are wonderful if you are running CAD software on your desk, but they can be a real problem when it comes to maintaining equipment.

It also seems to take a lot more engineering effort to use a PC than it would a PLC. Complain all you like about proprietary PLCs, but when you
buy one, you expect it to work without having to put any real thought into it. When you buy a PC, you *expect* to have problems just getting all the
different elements to work together and sorting out the IRQs and DMA channels.

Having said all this, we do use PCs in the factory. We use them in applications (some test equipment) where due to their peculiar characteristics they happen to be the best choice and we are willing to live with their drawbacks. But for the bulk of our applications, small PLCs are the best design choice. I really don't expect this to change much in the foreseeable future. After all, I don't strap a PC to my wrist just because I *could* use one to replace my digital watch.

What I would not be surprised to see though is industrially hardened PC type systems replacing proprietary CNC controllers in many applications.
There is a good size, cost, and functionality match in that market (we also have several CNC controllers). It's probably no coincidence that the examples usually given for "PC based control" seem to be in industries with the need for a lot of CNC control.


*******************
Michael Griffin
London, Ont. Canada
mgriffin@wwdc.com
*******************

By Kevin Wixom on 12 April, 2000 - 4:40 pm

(Originally posted Monday 1/12/98)
Jerry,

I just took my brand new (3 months old) Pentium PC back into the shop to be repaired yesterday... again. It will be down for at least 2 weeks. Since I backed up critical data, I can get back online again. Even with a backup machine on standby, it would've taken at least 4-8 hours to get back online.

The question is, is the downtime and re-setup time worth the dollars you've saved by purchasing cheaper computers? At home, it IS. At your factory??? I'd bet NOT.

The last software development group I ran estimated 2-6 severe crashes per week requiring at least 1 hour to get back up to speed (using W95 and W95 based development tools). This is NOT even counting bad power supplies, hard disk crashes, etc. This translates into approximately 100 hours/year/developer... which is probably about $10,000 or so. Including the PC cost, that probably gives you about $15,000 to spend on better hardware to be "even". Needless to say, we began purchasing better hardware.
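To make that arithmetic explicit, here is a minimal Python sketch; the $100/hour loaded developer cost is an assumption, since the post gives only the dollar totals.

# Rough downtime cost using Kevin's figures.
crashes_per_week = 2        # low end of "2-6 severe crashes per week"
hours_per_crash = 1.0       # "at least 1 hour to get back up to speed"
weeks_per_year = 50
cost_per_hour = 100.0       # assumed loaded developer cost (not in the post)

lost_hours = crashes_per_week * hours_per_crash * weeks_per_year
print(lost_hours)                  # 100 hours/year/developer
print(lost_hours * cost_per_hour)  # ~$10,000/year, matching the estimate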

If the data is not critical, and you can afford the downtime, buy PCs. But, if you need to keep a factory running.... I wouldn't.

Kevin Wixom

By Michael Whitwam on 12 April, 2000 - 4:43 pm

(Originally posted Tuesday 1/13/98)
Yo,

No way is my experience anywhere near as tragic. In-house, I have 2 NT servers, 3 NT workstations, and 1 notebook running NT Workstation. I have yet to crash the OS. I have crashed VB, VC++, VJ++, Access etc., but never the OS.

As far as hardware goes, buy decent HW with the correct drivers available, and always soak test the PC before delivering to site. Using this process, I have had one on-site failure in the last 2 years: a hard disk that failed within the first month.

Having said all that, I don't use PCs for control, only MMI, management info etc. I use PLCs for the control end of things. They are so cheap today, and have outstanding MTBF figures. Personally, I have never had a PLC CPU fail in all my 20 years in the game.

Michael Whitwam

By Randy Sweeney on 12 April, 2000 - 4:58 pm

(Originally posted Tuesday 1/13/98)
>If the data is not critical, and you can afford the downtime, buy PCs.
>But, if you need to keep a factory running.... I wouldn't.

>Kevin Wixom

This is scare mongering of the highest order. A PC in a software development environment is entirely different from a PC in a more stable
factory floor application. Software developers (by their very nature) live on and push the edge.

We have hundreds of PC's running Wonderware InTouch (on various WfW 3.11, Win95, and NT 4.0 OS platforms) and our experience has been VERY positive. Availability is excellent and MTBF seems to be about 3 years (24 hr/day, 7 days/week, 365 days/yr service). Hardware failures are mainly disk drives, video boards, power supplies - all easily replaced.

Software failures occur mainly when the system software or support drivers are modified. This is done off line prior to installation to reduce the
chance for affecting production.

We use only high end Dells and Compaqs (not their loss leader home units) and we take care to protect the machines from electrical noise,
temp/humidity/chemicals and vibration. We also maintain backup disk images for reliable download into replacement hardware when it is (infrequently) needed.

PC's and reliability are not mutually exclusive terms.

Randy Sweeney
Philip Morris R&D

By Kevin Wixom on 12 April, 2000 - 5:07 pm

(Originally posted Tuesday 1/13/98)
Scare mongering?? It certainly wasn't meant to be. The original email request asked for objective examples - this is just one of many scenarios that do exist in the real world. Software development IS much different from a stable factory environment.... that was part of the point. It may not apply to his specific application at all, but he needs to make that decision - we can't do it for him in a 2 minute email.

No one would develop software on the same machine(s) that are used on the factory floor... would they???

The real scare would be to put the wrong device in the wrong place, think it will work acceptably, and then get fired when it doesn't, or causes some catastrophic problem or loss of data.

You need to evaluate the application and select the right tools given your priorities and budget.

Kevin Wixom

By Don Lavery on 12 April, 2000 - 5:09 pm

(Originally posted Wednesday 1/14/98)
Randy:

I thought I had been following the discussion of PC's vs. PLC's rather well. At least until I read your response. Now I'm confused!

>>Kevin Wixom wrote:
>
> >If the data is not critical, and you can afford the downtime, buy PCs.
> >But, if you need to keep a factory running.... I wouldn't.

> You wrote:
>
> This is scare mongering of the highest order. A PC in a software
> development environment is entirely different from a PC in a more stable
> factory floor application

Scare mongering? Let's examine your claims and decide who is scare mongering. First of all, "...more stable..."? What would cause a PC to be
less stable in a software developer's hands? The hardware? C'mon, I wasn't born yesterday! The software? Okay, I'll buy that. But isn't that
the _purpose_ of software development?

>Software developers (by their very nature) live
> on and push the edge.

Are you trying to say that software developers push hardware to the edge of failure? If we are talking OS software, I think not! They might more
reasonably be expected to push hardware to the limit in terms of capacity (memory, storage, etc.), but to imply that they can cause hardware to fail seems a mite far fetched.

> We have hundreds of PC's running Wonderware InTouch (on various WfW 3.11,
> Win95, and NT 4.0 OS platforms) and our experience has been VERY positive.
> Availability is excellent and MTBF seems to be about 3 years (7day/24hr/365
>day/yr service).

Let me see, now. Three years MTBF for PC's and you say that this is a "VERY positive" experience. Gracious! What was your MTBF for the PLC's you were using? It must have been substantially less than three years.

> Hardware failures are mainly disk drives, video
> boards, power supplies - all easily replaced.

Are you listening to what you are saying? "Hardware failures...disk drives, video boards, power supplies..." Sure, they may be easily
replaced, but that's not the point, is it? I'd like to know how often you were replacing memory and power supplies in your PLC's. 'Fess up, now.
Was it more often than the replacement rate for your PC's?

> Software failures occur mainly when the system software or support drivers
> are modified. This is done off line prior to installation to reduce the
> chance for affecting production.

You're getting better at hiding the truth, Randy. Even if you don't tell us how many times you've experienced PC software failures, at least you're honest enough to admit that it happens. So tell me, how many times did you experience software failure with your PLC's? Probably once or twice, IF that much.

> We use only high end Dells and Compaqs (not their loss leader home units)

Hold the phone! I thought we were supposed to be comparing PC's with PLC's. No wonder I'm so confused. We're actually comparing "high end"
PC's to "home units". Ohhhh. Now I see!

> and we take care to protect the machines from electrical noise,
> temp/humidity/chemicals and vibration.

Yes, yes, yes!!! This is one thing that PLC manufacturers are notoriously bad about! Well, I think they are, aren't they?

>We also maintain backup disk images
> for reliable download into replacement hardware when it is (infrequently)
> needed.

This is (seriously, now) an excellent procedure - applicable to both PC's AND PLC's.

> PC's and reliability are not mutually exclusive terms.

Well, now! I'm glad we've cleared up all the confusion. I feel so much better now! PC's are what I'm going to recommend from now on. Yes, sir!

Randy, were you REALLY serious or just being humorously sarcastic? If you were being serious, I hope I have not offended you with my tongue-in-cheek reply. It's just that when I read what you wrote, your points were destroying every premise you were trying to create. Sorry.

Don Lavery
Lavery Controls

By George Robertson on 13 April, 2000 - 8:52 am

(Originally posted Wednesday 1/14/98)
Don, I really have to weigh in on this one.

You said to Randy:
> Randy:
>
> I thought I had been following the discussion of PC's vs. PLC's rather
> well. At least until I read your response. Now I'm confused!
>
> >>Kevin Wixom wrote:
> >
> > >If the data is not critical, and you can afford the downtime, buy PCs.
> > >But, if you need to keep a factory running.... I wouldn't.
>
> > You wrote:
> >
> > This is scare mongering of the highest order. A PC in a software
> > development environment is entirely different from a PC in a more stable
> > factory floor application
>
> Scare mongering? Let's examine your claims and decide who is scare
> mongering. First of all, "...more stable..."? What would cause a PC to be
> less stable in a software developer's hands? The hardware? C'mon, I
> wasn't born yesterday! The software? Okay, I'll buy that. But isn't that
> the _purpose_ of software development?
>
> >Software developers (by their very nature) live
> > on and push the edge.
>
> Are you trying to say that software developers push hardware to the edge of
> failure? If we are talking OS software, I think not! They might more
> reasonably be expected to push hardware to the limit in terms of capacity
> (memory, storage, etc.), but to imply that they can cause hardware to fail
> seems a mite far fetched.

They don't cause hardware to fail; rather, they are more prone to do the things which are known to cause software problems in the course of development, even to the point of discovering them. The products in service typically do not do this.

> > We have hundreds of PC's running Wonderware InTouch (on various WfW 3.11,
> > Win95, and NT 4.0 OS platforms) and our experience has been VERY positive.
> > Availability is excellent and MTBF seems to be about 3 years
> > (7day/24hr/365 day/yr service).
>
> Let me see, now. Three years MTBF for PC's and you say that this is a
> "VERY positive" experience. Gracious! What was your MTBF for the PLC's
> you were using? It must have been substantially less than three years.

Probably so. How many systems stay in place for more than three years without modification or upgrade?

> > Hardware failures are mainly disk drives, video
> > boards, power supplies - all easily replaced.
>
> Are you listening to what you are saying? "Hardware failures...disk
> drives, video boards, power supplies..." Sure, they may be easily
<Snip>
After this, your message became emotional (Humorous?) and, while fun to read, unworthy of comment.

-George Robertson
Saulsbury E & C
ggrobertson@mindspring.com.

The opinions expressed here are not necessarily those of my employer, but they should be.

By Michael Griffin on 12 April, 2000 - 5:11 pm

(Originally posted Wednesday 1/14/98)
Is this one failure per PC every 3 years, or do you mean one failure every 3 years in total? If the former, then with hundreds of PCs in use, you
must have more than one failure per week! I must obviously be misinterpreting either your statistics or your application, as that degree
of reliability in the sort of environment I am used to would be considered truly awful.
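For the record, the arithmetic behind that claim looks like this (a sketch; 300 units is an assumed stand-in for "hundreds"):

# Expected fleet failure rate given a per-unit MTBF, assuming failures
# are spread evenly over time. 300 units stands in for "hundreds".
units = 300
mtbf_years = 3.0
weeks_per_year = 52

print(units / (mtbf_years * weeks_per_year))  # ~1.9 failures per week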

I do use PCs in test equipment, but the limited number used keeps the amount of repair required down to a manageable level. How do you handle repair with that many PCs in service?

*******************
Michael Griffin
London, Ont. Canada
mgriffin@wwdc.com
*******************

By Carl Ramer on 14 April, 2000 - 9:45 am

(Originally posted Thursday 1/22/98)
Well, I guess I'll throw my two cents' worth in on this. It's more venting and opinion than anything else, but having been subjected to Windows NT for a couple of months, I'm totally underwhelmed!

Before I got disgusted and started logging the blue screens, there were at least half a dozen others. This machine is connected to a network.

Date Time Activity/Application

12/20/97 1105 VB application
12/23/97 0745 Installing Citect
1/5/98 0749 Switching Servers
1/5/98 1148 Screen saver
1/14/98 0820 Screen saver
1/19/98 0930 Microstation 95
1/20/98 0905 Netscape 3.01 @Thomas Register URL
1/21/98 1030 Microstation 95

Now am I planning on recommending WinNT for critical functions? Sure, and I also have some ocean front property for sale in Phoenix, Arizona.

Carl Ramer, Sr. Engineer
Controls & Protective Systems Design
EG&G Florida
Kennedy Space Center

p.s. If the weather holds, Space Shuttle Endeavour launches tonight.

By Dan Hollenbeck on 14 April, 2000 - 10:42 am

(Originally posted Thursday 1/22/98)
Hi Carl,

I am sorry to hear about your problems with Win NT. Sure glad I did not have to live through what you did.

Here is what I learned from your misfortune.

Win NT might work for control if:

12/20/97 ONLY run the control kernel on that box, nothing else.
12/23/97 Don't run or install SCADA on the control box.
01/05/98 Don't change network configuration.
01/05/98 Uninstall screen saver.
01/14/98 Really make sure the screen saver is removed.
01/19/98 Don't touch or look at the control box.
01/20/98 Don't treat the control box like it is a computer.
Unplug the monitor, keyboard, and mouse.
01/21/98 Lock the computer up in a control cabinet.

This sure sounds like a traditional PLC to me. However, if I need to do all this to get Win NT to work, what is the point of using it? ;-)

Regards, Dan

By Stephen Fullerton on 20 December, 2001 - 11:40 am

Look, it's real simple on NT if you're getting blue screens.
1> Either there is a component not functioning properly.
2> NT doesn't like one of the components in the machine <like not on the NT Hardware Compatibility List> or software apps...
3> Or you have a bad install. Try looking at service packs, reinstalling, or even going back one or two versions <or forward, depending on your current version>.
4> Also try checking out your Event Viewer for System and Apps as to what is failing or throwing errors...
5> And if all else fails, why don't you NASA guys crank the Commodore 64s back up.... LOL

Sorry man, but NT isn't that hard and the information on it is EXTREMELY easy to look up. RTFM, geez o' pete, bra.

By George Robertson on 12 April, 2000 - 4:59 pm

(Originally posted Monday 1/12/98)
Kevin,

In my experience, software development groups crash PCs a whole lot more often than do users. This is due to the nature of the work. If you had a factory floor PC going down that often, you would have the worst luck I've heard of in this industry.

-George Robertson
Saulsbury E & C.

By Armin Steinhoff on 12 April, 2000 - 4:45 pm

(Originally posted Tuesday 1/13/98)
We have several PC controlled systems running in the field. The oldest system has now been running for 6 years without any problems ... really zero problems. It's a simple 19" industrial PC running QNX 4.1 and is controlling a complex material flow system. The PC works like a PLC from Monday to Friday without switching off the power, so the PC electronics and the disk (used only for system start) are always working under optimal conditions ... maybe that is one key point for the success.

Best regards

Armin Steinhoff <Armin@Steinhoff.de>
STEINHOFF Automations-& Feldbus-Systeme
+49-6431-529366 FAX +49-6431-57454
http://www.DACHS.net

By Woodard, Ken on 12 April, 2000 - 5:13 pm

(Originally posted Wednesday 1/14/98)
Within the Olin Chlor Alkali Products Division we have close to 40 PC's controlling process systems that are 7x24 operations. The oldest commercial installation is in Brazil, where the PC's are of the desktop 386-16 variety, operating with OMNX 3 software on QNX 2.2 for over seven years with no problems. This is in a complex chemical plant operation with corrosive chemical environments, and the computers are located with the humans. The field mounted I/O system is Keithley MetraByte hardware.

The second oldest is the same process operating in Charleston, TN for over 6 years: desktop 386-20's in the control room, field mounted OPTO-22 PAMUX I/O. The list goes on: three-fuel boiler control, an SO2 generation plant, and the newest is a 250,000 ton/year membrane cell Chlor Alkali plant with four hot-backed-up Pentium 166 PC's located in separate processing areas, with over 1700 hardware I/O and close to 10,000 tags, all Ethernet real-time deterministic networked control using OMNX 4 software on QNX 4.2.

We have had PC direct process control experience for over 10 years. It can be reliable. It can be dependable. It is proven in our OMNX software. It wasn't easy, it requires a lot of learning, and if you are contemplating developing it, it isn't for the faint of heart.

(Originally posted Tuesday 1/13/98)
First, don't even think about Win95. Use Windows NT for the non real-time portions of your application.

Reliability is one of the biggest issues to consider when implementing a control system, whether PC based or not. This is the basis for two of the five rules of PC Based Control:

-- Your control system must survive a hard disk failure. The hard disk (with its moving parts) is the highest failure rate component of a PC. If
your control system will fail just because your disk crashes, then you have a major reliability problem.

-- Your control system must survive the Blue Screen Of Death (BSOD). If you are using a PC and not using Windows NT, then you are giving up most of the data connectivity and diagnostic benefits that the PC system offers. Windows NT, however, is NOT a real-time operating system and is not meant for use in control systems that could endanger people or property. If a Windows NT failure causes your control system to fail, then
you don't have a control system.

>I would like to know from all of you what you think about PC
>reliability. Please back up your opinions with specific examples, not
>just intuitive feelings (which is all I have right now), or what you
>think it probably is. If PCs have been failing you, let me know. If
>they have been running for 5 solid years with nary a problem, let me
>know that, too. This is my single biggest concern and I must get that
>out of the way before I continue.

There are two big considerations here:

(1) If something happens to Windows NT or to your hard disk, you need the control system to keep running.

(2) The vast majority of failures on the floor are mechanical or electromechanical
in nature and do not involve the control system. When these events happen, a PC based control system provides you with a range of tools that will get the machine back into full automatic mode in the shortest possible time. In one installation, the plant has reported average down time reduced from 20 minutes to two minutes.
In short, your control system must survive the common PC failure modes. Given that, the total system reliability benefits of the PC are dramatic.

By Tim Philipp on 12 April, 2000 - 5:19 pm

I have used PCs for machine control for the past 3 years.

The control system that I have used is the Automation Intelligence (owned by Pacific Scientific) SERCOS communications controller ISA card. The hard real-time operating system is iRMX, which includes a DOS shell that can run a standard mode Windows 3.1 based MMI. I have also networked these machines together using NetBIOS.

SERCOS is a multi servo axis control system.

AI's software product, AML, is one of the most advanced high-level motion languages that I have ever seen. It is object oriented and event driven. It is capable of full machine control as well as high-speed motion control.

So with one PC performing motion control, machine (PLC) control and operator interface, I have found PCs very reliable. I prefer this setup over the same machine implemented with three computers (motion controller, PLC, MMI).

By Bob Colburn on 13 April, 2000 - 8:54 am

(Originally posted Wednesday 1/14/98)
Hello All,

I wanted to make a brief comment on using PCs in automation. The term PC is commonly used to describe a desktop unit with video cards, keyboards, monitors, rotating media, and other peripherals. There are several manufacturers that make industrial versions of the PC. These products are designed to work in harsh environments that the desktop is not intended for. Most of these do not have a monitor or keyboard, and if there is a hard drive, it is a solid state device. Perhaps a better way to define PC Automation is to say that the operating system that is running on the hardware is one that also runs on your desktop (DOS, WinNT, etc) and supports a PC backplane (ISA, PC104, etc). Mr Sulu, raise shields!

Bob Colburn
Grayhill Inc.

By Tom C Wiesen on 14 April, 2000 - 3:38 pm

(Originally posted Monday 1/26/98)
I would rather refer to what people call 'PCs' as Intel x86 based controllers when I am talking about Industrial control applications.

By Malcolm J. Clements on 13 April, 2000 - 8:55 am

(Originally posted Thurs. 1/15/98)
As an end user of a number of types of control systems, I have been an interested bystander in this discussion. As of yesterday, however, we have had a practical demonstration of the 'reliability' of PC Automation. We have a network of PLCs with a batch management system running on the PCs. The PC network has two file servers and two external hard drives, one a mirror of the other. Yesterday the system crashed (we believe one of the external drives failed). Following the handbook, which the system builder wrote for our setup, we have tried to restore the system using the second hard drive. Guess what - it won't work; all we can get is the C: drive. In the meantime we are trying to locate a new external hard drive whilst seeing if our MIS group or the system builder can come up with an alternative solution. Despite ringing round the country, we are finding that the drive we have is no longer made and we are having great difficulty identifying a suitable replacement.

We also have a DCS system on site which has suffered an equal number of hardware failures. However, when part of the DCS system fails it normally continues to operate and the component can be changed on-line. When a component does fail it can easily be identified, and the DCS supplier will know, if the original part number is not available, whether there is something else that is compatible.

So what's my point?

Well, it seems from my experience that all systems have similar components and therefore similar failure rates. However, the effects of a component failure, and the backup available, vary dramatically. MTBF figures by themselves, I'm afraid, tell me nothing. Of far more significance is MTB 'getting the darn thing back on line'.


Malcolm J. Clements

By Meir C. Saggie on 13 April, 2000 - 8:59 am

(Originally posted Thurs. 1/15/98)
A minor note - that thing is called MTTR - Mean Time To Repair.
Out of MTBF and MTTR one calculates "availability" = what percentage of the time is the system "available" to perform its mission.

> significance is MTB 'getting the darn thing back on line'.

By Andrew Ashton on 13 April, 2000 - 12:22 pm

(Originally posted Tues. 1/20/98)
Well, let's try to sort this out with a quick reminder of
some of the basic concepts of reliability.

MTBF = Mean Time Between Failure - defined for repairable items
MTBF is the reciprocal of the sum of the failure rates for all the component parts of the system.
(The equivalent concept for non-repairable items is MTTF = Mean Time To Failure)

"MTB 'getting the darn thing back on line'" is called MTTR (Mean Time To Repair).

Availability = 1 - Unavailability

Unavailability (U) - defined as that portion of an
equipment's life for which it is out of service
(And this may be not only for failure, but for upgrades, system maintenance etc.)

U = MTTR/(MTBF + MTTR)

So to minimize Unavailability you must maximize MTBF and minimize MTTR.
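A minimal numeric sketch of those formulas (the MTBF and repair times are assumed example figures):

# Availability from MTBF and MTTR, per the formulas above.
def availability(mtbf_hours, mttr_hours):
    unavailability = mttr_hours / (mtbf_hours + mttr_hours)
    return 1.0 - unavailability

print(availability(26280, 4))    # 3-year MTBF, 4 h repair: ~0.99985
print(availability(26280, 336))  # same MTBF, 2-week repair: ~0.9874,
                                 # i.e. over 110 hours down per year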

*Ways of increasing MTBF*
Factors such as temperature, hermetic sealing, number of gates, environmental conditions and number of functional pins (and hence number of components) are key criteria. The incorporation of VLSI and SMD technology and CMOS circuitry has reduced power consumption and heat generation
whilst reducing pin counts.

*Ways of decreasing MTTR*
- Level of self-diagnostics incorporated
- Modularity
- Availability of spares
- Availability of competent personnel to diagnose and replace faulty unit

Even after equipment selection (select equipment with Appropriate testing bodies' marks - UL / IEC / CSA ...) the system integrator or end user can do much to maximize MTBF by ensuring a satisfactory environment (power, heat, dirt, moisture ingress, vapours, accessibility, surge protection, grounding, decoupling comms via optocouplers, ...)

There are lots of things that you can do about MTTR
- Use the diagnostics that the manufacturer has incorporated (module health etc.)
- Build in self-diagnostics / diagnostic aids (test programs, SCADA overview of hardware etc.)
- Hold an economically appropriate level of spares (what does it cost if this system is down for an hour, a day etc.) preferably warm - i.e. powered up and themselves monitored for failure
- Accessible and understandable up-to-date project
documentation
- Regular backups (data and application!!) with a formal grandfather-father-son scheme
- Formal disaster practice - can you restore from the backups?
Best Regards

Andrew Ashton
Managing Director

ProLoCon
Control - Automation - Petrochemical - Pharmaceutical

ProLoCon (Pty) Ltd
South Africa
Intl Tel +27-11-465-7861
Intl Fax +27-11-465-8455
URL http://www.prolocon.co.za

By Ramer-1, Carl on 13 April, 2000 - 3:42 pm

(Originally posted Wed. 1/21/98)
Just two cents worth I'd like to add to Andrew Ashton's excellent posting on designing in system reliability (local buzzwords) by increasing MTBF and decreasing MTTR.

Since you're most likely NOT using a prototype in an integration project, you can also take advantage of predictive maintenance techniques and replace known failure items just before their expected demise. You can schedule your downtime for more opportune times as well. The actual improvement in MTBF and MTTR is not too great, but operations run more smoothly.

Carl Ramer, Sr. Engineer
Controls & Protective Systems Design
EG&G Florida
Kennedy Space Center

Many of the facts and figures mentioned are used in the design of a system to meet the customer's reliability requirements. However, the long-term reliability of any system is determined more accurately by the implementation of systems designed to maintain the design reliability than by the reliability figure itself. For instance, during a FAT or PAT, there should be sections that cover system support roles, the implementation of maintenance, spares, change control, obsolescence prevention, training, and eventual replacement at the end of active service life. The availability of support services such as cooling and electricity supply should also be covered when considering the above list.

By Erich Mertz on 13 April, 2000 - 9:00 am

(Originally posted Fri. 1/16/98)
This discussion amazes me. I sell industrial "WINTEL" systems to OEM's. These systems typically cost twice as much as the normal "desktop" pc's. Features include watchdog timers, broad temperature capability, high MTBF's, etc., and product longevity and availability.

These features cost money up front but provide lower total lifetime cost to the typical customer.

Who is my competition? The desktop PC. The guys who are whining about short life, product failure and lack of available replacement parts are the ones who have invested in cheap PC's and want them to perform as well as products that cost twice as much.

What else is new?

Erich Mertz
mertz@intac.com

By A. V. Pawlowski on 13 April, 2000 - 9:05 am

(Originally posted Tue. 1/20/98)

You have a good point about the hardware. But what about the operating systems and software? Most of the application software vendors are pushing Windows (apparently Windows is now the leader in control operating systems). My direct experience with Windows (2 NT, 2 Win95) is that it is the worst of any I have used.

----------
Erich Mertz <mertz@intac.com> wrote:

>This discussion amazes me. I sell industrial "WINTEL" systems to OEM's.
>These systems typically cost twice as much as the normal "desktop"
>pc's. Features include watchdog timers, broad temperature capability,
>high MTBF's etc, and product longevity and availability. ...<clip>

By Hevelton Araujo Junior on 13 April, 2000 - 1:42 pm

(Originally posted Tue. 1/20/98)
If my understanding is correct, what you are saying is that no matter what hardware you use, you still have the "software" problem to deal with. I tend to agree with that, since I've experienced lots of problems using Windows NT, although I would not consider it the "worst" (I had more problems using IBM's AIX).

Watching this discussion for a while, a question came to mind that I would like to post for comments. Here in Brazil we are watching some end users request that we ("we" are systems integrators) evaluate the possibility of using Windows NT on a DEC Alpha machine to run critical applications. Their idea is to have a very reliable system, so they are willing to pay the (many) extra dollars for the Alpha box; but since they will be using Windows NT, won't they have the same "software" problems as if they were using a PC?? And for the cost of an Alpha, you can buy three or four high-end PC's (yes, at least here it is that much of a difference), have them running in "hot" stand-by mode and save the extra cost of hardware (especially hardware maintenance).
If anyone has any experience with Windows NT and Alpha machines, I would appreciate any comments.

Hevelton Araujo Junior
IHM Engenharia e Sistemas de Automação LTDA
hevelton@task.com.br

By Johnson Lukose on 13 April, 2000 - 3:36 pm

(Originally posted Wed. 1/21/98)
DEC builds good machines. I worked with DEC machines before, some years ago I must admit. You can say "plug and forget". You are right; software is the weakest link, especially in these days of given hardware reliability. If you face any problem, it will be NT playing havoc. NT is not going to have OpenVMS in sight for eons when it comes to rock-sturdy operating system reliability and recovery if you ever need it.

The reality is the users have the money, and common sense says the one with the money to spend is always RIGHT!! You will be up against the wall in this matter. You are going to have a hell of a time convincing them otherwise. The propaganda of PC + W95/NT has created a market perception of proportions even this list does not realise. It will make everyone a winner if you agree with the users and take the contract. They get the systems they want and you get the project you need.

thanks.

By Dan Brock on 14 April, 2000 - 9:28 am

(Originally posted Wed. 1/21/98)
Our wastewater treatment plant uses dual DEC servers running OpenVMS to run our HMI package, and in the five years they have been online I do not recall the operating system crashing once. We have two plants (four servers) running 24 hours a day. When was the last time Win95/NT ran more than a month without a problem? We are working on a desktop software package to interface NT with VMS so our clients out there that love NT/Win95 can read live data.

By Carl Lemp on 14 April, 2000 - 9:55 am

(Originally posted Thurs. 1/22/98)
Dan,
I agree that the older operating systems are definitely more stable than the newer ones, but let's compare things on an equal basis. My question is: when was the last time one of the VMS machines was installed, tuned, operated, and maintained without a knowledgeable system administrator? If we want a fair comparison we should try one of two things.

1. Set up the VMS machine so that the users can install and uninstall any applications or operating system patches they want. Let the users download every "cool" program they see on the internet and try it out for a few days before they delete it off the machine. Let the users change operating system parameters as they see fit. Give the users access to the on/off switch so they can turn the thing off every time they get tired of waiting and assume it must be locked up. If we do these things then we will have a reasonable approximation of the typical environment of a Win95 machine.


2. Have a knowledgeable system administrator set up a Win95/NT machine. The system administrator will then install all the applications, test the new applications in a non-public account, release the application for all users after the conflicts have been resolved. Finally, the sys admin will then check on the machine on a periodic basis and make any adjustments needed to resolve user complaints. If we do these things then we will have a reasonable approximation of the typical environment of a VMS machine.

Does anybody have any experience with the relative reliability of VMS network servers vs. Win NT network servers? At least the environment and the use pattern of these would be the same.

Carl Lemp

(Originally posted Thurs. 1/22/98)
Well, Carl and David (servoboy) are right. We seem to be
comparing apples and oranges. His point is well taken in that not too many operators know enough to cause "problems" on a VMS operating system. But if they did know enough to change configurations and mess with TPU on the wrong files...

The whole conversation should boil down to comparing systems of the same relative cost and complexity.

One problem we just recently discovered was another twist on the reliability issue. Is the Ethernet connecting all your process systems directly connected to your PC systems? When we switched to fast Ethernet between plants, the vendor (nameless on purpose) of this equipment did not tell us that some of their own Ethernet devices were incompatible with it. This crashed PC's for some unknown (to us) reason. We purchased new network interface cards for the PC's and the blue screens went away.

(Originally posted Fri. 1/23/98)
I have seen poorly written RLL (and STL too) crash a Siemens S5 PLC.... it happens all the time if you don't keep your variable addressing straight (especially mixed type variables of different length/structure) using Step5. Normally this happens after download, and the little run light on the PLC goes out!

The S5/Step5 system (which I believe is/was the world's #1 installed PLC!) has no type checking so it is easy to corrupt your memory if you are not careful.

PLC's are not idiot proof... just idiot resistant (and some more than others)

Randy Sweeney
Philip Morris R&D

By Michael Whitwam on 14 April, 2000 - 3:05 pm

(Originally posted Mon. 1/26/98)
What you say is true. However, my experience is that PLCs of American origin tend to be far more idiot proof than their European counterparts. Have you ever managed to crash a Modicon 984?

To be fair, of course, it has to be said that the European guys have more feature-rich software.

By Johnson Lukose on 18 April, 2000 - 3:08 pm

(Originally posted Mon. 1/26/98)
>What you say is true. However my experience is that PLCs originally of
>American origin, tend to be far more idiot proof than their European
>counterparts. Have you ever managed to crash Modicon 984?

And similarly robust was the Telemecanique Serie 7.

By Myron Hecht on 18 April, 2000 - 3:11 pm

(Originally posted Tues. 2/03/98)
The discussion of the relative reliability of different platforms could be much better resolved by quantitative data rather than opinion and anecdotes. Isn't there anyone who knows how many operating hours they have on a PC running NT, and how many failures, who would be willing to share the data with this mailing list? Would anyone be willing to share their reliability experience on other platforms (such as PLCs)?

Once we have this data, and if we have it from several sources, I personally will be glad to do the MTBF and confidence limit calculations as
a contribution to the discussion posted on this list. If we can get further data on what the failure modes were and whether a recovery was
possible, then we can deal with the issues of under what circumstances the platform is usable.

By Michael Griffin on 18 April, 2000 - 12:59 pm

(Originally posted Wed. 1/28/98)
At 07:28 24/01/98 -0000, you wrote:
>I have seen a poorly written RLL (and STL too) crash a Siemens S5 PLC....
>happens all the time if you don't keep your variable addressing straight
>(especially mixed type variables of different length/structure) using
>Step5. Normally this happens after download and the little run light on the
>PLC goes out!

Crashing from addressing variables on an S5? This is certainly a new one on me, unless you are referring to faulting the processor by attempting, for example, to write to a Data Word that doesn't exist. In this case though, the processor does not crash; it detects the error in your program and shuts itself down in a controlled stop.

I don't use Siemens' "Step 5" software. I use someone else's programming software, so perhaps the software I use simply doesn't let me make the types of mistakes you are talking about. What sort of variable addressing are you talking about? Load and Transfer instructions automatically adjust to byte or word size, while the software I use simply won't let me enter an incorrect function block parameter size.

I've done quite a bit of S5 programming, and I'm not sure what it is you are describing. Could you explain what you mean a little further?


*******************
Michael Griffin
London, Ont. Canada
mgriffin@wwdc.com
*******************

(Originally posted Wed. 1/28/98)
The problem was with Step 5 combined with the S5 processor - no local type/map checking in the programmer and no memory map checking in the PLC.

You can co-locate structures on top of each other, and Step 5 does not warn you or provide error checking. Once downloaded, the program could crash the PLC if the data types and variable contents resulted in invalid words for a particular operation.
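As a loose analogy (illustrative only, not Step 5 semantics; the layout and values are invented), here is what co-located data looks like in Python: two records share one buffer, and writing the second silently corrupts the first.

import struct

# Two "records" co-located in one 8-byte buffer, loosely analogous to
# overlapping S5 data blocks. Layout and values are invented.
buf = bytearray(8)
struct.pack_into(">hh", buf, 0, 1200, 350)  # record A: two 16-bit words
struct.pack_into(">f", buf, 2, 72.5)        # record B: a float overlapping A
print(struct.unpack_from(">hh", buf, 0))    # (1200, 17041): A's second
                                            # word is now garbage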

This was several years ago and Siemens has fixed the problems since then...


Randy Sweeney

By Bill Sturm on 14 April, 2000 - 2:34 pm

(Originally posted Mon. 1/26/98)
Carl Lemp wrote:

> 1. Set up the VMS machine so that the users can install and uninstall any
> applications or operating system patches they want. Let the users download
> every "cool" program they see on the internet and try it out for a few days
> before they delete it off the machine. Let the users change operating
> systems parameters as they see fit. Give the users access to the on/off
> switch so they can turn the thing off every time they get tired of waiting
> and ssume it must be locked up. If we do these things then we will have a
> reasonable approximation of the typical environment of a Win95 machine.

This ability for people to mess with their PC's is a serious flaw. You could potentially design a stable system using a typical desktop OS, and the end user could turn it into garbage by installing some new whiz-bang software or hardware.

This is one of the curses of open systems: the end user can buy who knows what and try to install it into his control system - sound cards, modems...

Now, any good OS should be able to protect the system against a rogue application; isn't that what protected mode and hardware memory management are supposed to do?

A new piece of hardware with a poorly written kernel mode device driver is another matter. It is hard for the OS to protect against this. I believe QNX guards against this by running all device drivers as user level processes. I have seen many reports that NT has decent soft real-time performance, at least on a Pentium II. But many of these reports caution that a poor device driver could disable interrupts for a long time and screw up its response times.


--
Bill Sturm

By Michael Whitwam on 18 April, 2000 - 12:49 pm

(Originally posted Tues. 1/27/98)
I think you have hit the nail on the head here. When last did QNX add a new scanner driver, or support for a 32 bit sound card?

Stick to tried and tested hardware, and I am sure that NT will provide you with many happy customers. Experiment with new-fangled add-ons in your own office or test facility, not on the customer's mission critical systems.


Michael Whitwam
whitwam@global.co.za
http://www.wisetech.co.za
----------
>
> A new piece of hardware with a poorly written kernel mode device driver is
> another matter. It is hard for the OS to protect against this. I believe
> QNX guards against this by running all device drivers as user level
> processes. I have seen many reports that NT has decent soft real-time
> performance, at least on a Pentium II. But many of these reports caution
> that a poor device driver could disable interrupts for a long time
> and screw up it's response times.

By Davis Gentry on 14 April, 2000 - 9:41 am

(Originally posted Thurs. 1/22/98)
"Brock, Dan" <DBROCK@CSDOC.ORG> wrote:
> When was the last time Win95/NT ran more than a month without a
> problem?

[Gentry, Davis] If you get (write) good software, hardware compatible with its environment, and know what you are doing when you set it up, then an NT 4.0 PC which crashes once a month on a production box would surprise the hell out of me. I would be disappointed if it crashed
once a *year*. If you miss *any* one of the three criteria above, then yes, it will crash on you. Frequently. One difference with the Wintel
boxes is that everybody and his grandmother thinks that they can program it. And they do. And it is not always stable. What a shock. How many amateur programmers out there are there who try to write applications on VMS? Or HP-UX? And from the other end of that equation, how many
of you have ever seen poorly written RLL causing havoc in a PLC?

> Johnson Lukose [SMTP:jluqaz@PC.JARING.MY] wrote:
>
> > The reality is the users have the money, and common sense says
> >the one with the money to spend is always RIGHT!! You will be up
> >against the wall in this matter. You are going to have a hell of a
> >time to convince them otherwise. The propoganda of PC + W95 / NT
> >has created a market perception of propotions even this list does
> >not realise.
> >It will make everyone a winner if you agree with the users and take
> >the contract. They get the systems they want and you get the project
> >you need.


[Gentry, Davis] I agree with Mr. Lukose, but he is missing one point. What do you do with your data? Many (most?) of the users today who want to look at manufacturing data (in any form) are using a wintel PC on their desk. And they are running Microsoft Office on it. And if you generate your data on a platform which is compatible with MS (whatever our feelings may be about MS, its tactics, and its products) you will find it *much* easier to get the data to the customer in a timely, efficient, and inexpensive manner. When your customer's accountants are getting the data they want, and the IEs are getting the data they want, and the process engineers are getting the data they want, and the executives are getting the pretty pictures on their desktops that they want, then everyone is happy. *That* is the true advantage to the PC. That is where your cost savings are, if that data is appropriately utilized. Be sure to help your customer *use* the data. They may not be used to having data quickly and easily available. Show
them the advantages.

Come on guys. The PC is just another tool. It may or may not be applicable to your problem. Analyse your problem, decide the best tools for the job, decide the cheapest tools for the job, and present your customer with the options, the pros, and the cons of each option. And if you don't know enough about PCs to program them and/or set them up correctly for a factory setting, then sell something else. But you should probably learn, because their market share is not going *down* any time soon.

Davis Gentry
White Oak Semiconductor
Sandston, Virginia

By A. V. Pawlowski on 14 April, 2000 - 10:13 am

(Originally posted Thurs. 1/22/98)
If you are putting together, or just have, a non-custom (configured commercial product based) SCADA system running on Win95/NT and it runs for a year with normal operator interaction, please name its makeup. Seriously, I am interested. If you have a setup that works well, I would like to know what it is.

By John Lindsey on 14 April, 2000 - 1:33 pm

(Originally posted Fri. 1/23/98)
YES -
A Compaq 233 running CiTect software (although with only 500 points), a COMx driver on a Digiboard, Modbus protocol over MDS radios at 4800 baud, and through a repeater to 34 remotes.

All items OUT OF THE BOX, WinNT Service Pack 1 installed, 3,700,000 analog reads and counting without the blues....

Purely SCADA in a water plant / well field system.

No Netscape, No Pac-Man, No Screen Savers, No monkeying with the I/O map while on-line.

The key seems to be that you don't release a system until it's right and all possible operator actions, alarm events, and process variable ranges are proven; you don't do anything unnecessary on that machine; and you don't let your operators monkey with the kernel.

and there are surely others.

John Lindsey
Niles Radio Communications

By Michael Whitwam on 14 April, 2000 - 9:56 am

(Originally posted Thurs. 1/22/98)
I think that the sales of DEC Alpha speak for themselves. A decent modern PC is every bit as good as the DEC. If you want power, go multiprocessor.

At the risk of sounding like a stuck record, let me repeat: if correctly set up, NT is very robust. Did anyone ever ask a beginner to install VMS on a VAX? No, so why do it with NT?

Further, the end user software is also a major factor. If a system is left alone, it will probably do just fine. I have one customer that has been running InTouch on W95 since W95 came out. The system is an operator interface, running 24 hours a day. So far we have not had a single failure. (And W95 s*cks at robustness.)

*** The reason in my opinion, is that the client has no PC literate maintenance staff, so nobody hacks! ***

Michael Whitwam
http://www.wisetech.co.za

By Hevelton Araujo Junior on 14 April, 2000 - 10:43 am

(Originally posted Thurs. 1/22/98)
>I think that the sales of DEC Alpha speak for themselves. A decent modern
>PC is every bit as good as the DEC. If you want power, go multiprocessor.


Won't you raise the price to around the Alpha range once you start adding processors? (I'm not being sarcastic, I really don't know.)

>At the risk of sounding like a stuck record, let me repeat. If correctly
>setup, NT is very robust. Did anyone ever ask a beginner to install VMS on
>a VAX. No, so why do it with NT?

Couldn't agree with you more on that. The problem is that the "hardware compatibility list" for VMS is one line long and works. The NT list, when you use PC's, is the size of a book, and is not always right.


Hevelton Araujo Junior
IHM Engenharia e Sistemas de Automação LTDA
<hevelton@task.com.br>

By Michael Whitwam on 14 April, 2000 - 4:06 pm

(Originally posted Mon. 1/26/98)
Yes, you probably would, but you still get more power per $$, and you get a more widely supported platform.

At 12:37 22/01/98 -0500, "Michael Whitwam <whitwam@global.co.za>" wrote:
>>
>>I think that the sales of DEC Alpha speak for themselves. A decent
>>modern PC is every bit as good as the DEC. If you want power, go
>>multiprocessor. <clip>

Hevelton Araujo Junior <hevelton@task.com.br> replied:
>
>Won't you raise the price to around the Alpha range once you start
>adding processors ? (I'm not being sarcastic, I really don't know)

By Hevelton Araujo Junior on 14 April, 2000 - 4:19 pm

(Originally posted Tues. 1/27/98)
Agree with you on that. From the discussions here, and from some more studying on my own, I believe that sticking with PC's (vs. Alpha) is better. High-end PC's have very stable hardware these days, and software, well, I guess we just have to strip the system down to its minimum, leaving NO room for the operator to mess with the system (out with internet, screen-savers, games, etc.), take out any possibility for operators to get things back onto the system (floppy, CD), and find a way to protect our networks.

Regards,

Hevelton Araujo Junior

By Todd Wright on 13 April, 2000 - 3:07 pm

(Originally posted Wed. 1/21/98)
The same approach can be applied to the software. For whatever reason, I seem to be very fortunate regarding my experiences with PC's in factory
automation. As stated in my last correspondence, we have many PC's in operation in my plant. The majority of these are running Windows 3.1 and
HMI software. However, I do have a line which utilizes a PC running Windows NT 4.0 to actually control a machine. The same PC also runs Wonderware. I deliberately chose a mediocre platform in terms of horsepower (32 MB RAM, 133 MHz Pentium), and I did not experience any difficulties whatsoever installing NT. The machine requires discrete I/O operations, several PID loops for an oven, and open-loop variable frequency drive control. For comparison, I have several other machines of the same type controlled with PLC's. The software and OS have performed without a flaw. The only quirks I experienced occurred during development. While I had both the HMI and control software open in development and runtime modes, and was making runtime edits to both, I did have 4 instances of the "BSOD" (blue screen of death). This was a one-time, one-day occurrence. After limiting the editing to one application at a time, I have never had a fault since. The scan time is superior to the PLC we are utilizing, and the update of the HMI is more than sufficient. Please note that the line above has been operating 24/7 since 08/97, and during days for two months previously.

To my knowledge, the Windows 3.1 lines have performed without software/OS problems also. As I have previously stated, the only events necessitating reboot were hardware failures, or when an operator had roamed outside of the HMI environment (which was addressed). My experience directs attention to the actual kernel and drivers used in any software running on the PC. This is not to say OS-related bugs don't exist, only that I have not been "affected" by them. The most disturbing problem I encountered was "vaporware". This problem reared its ugly head in the decision analysis phase of the system design. In order to implement a successful system, I would recommend using evaluation tools: visit reference sites, phone references, try demos, etc. Take some time to develop a test system on a spare PC. Most of the vendors I have dealt with are more or less willing to provide a consignment package. Some would even claim that they would install the system for free, removing it at their cost if unsatisfactory.

Finally, I would like to comment on any relative value or savings. Two areas to consider would be hardware costs and software costs. The hardware
comparison should be fairly simple to make, if proper analysis methods are employed. Concerning software, I tend to analogize things to "efficiency" or how much effort will be necessary to provide a fully integrated system. This effort will be different for every developer. However, I have been exposed to enough systems to know what I don't want. For instance, I don't want to implement my own serial communications routine when there are systems out there which provide canned drivers. The application will drive
what is required in both areas; I just wouldn't limit myself without reason. I am fortunate enough to be allowed the freedom to evaluate newer
techniques and technologies. So far, PC control has worked for me.

Todd Wright - end user.

By Cindy Hollenbeck on 13 April, 2000 - 5:04 pm

(Originally posted Wed. 1/21/98)
To all re: this subject -

Extremely reliable PC hardware is definitely available, and not always at twice the cost of standard PC's. The I/O interface cards for a lot of the I/O buses/systems are also well-equipped (watchdog timers, etc.).
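
To illustrate the watchdog idea: the control program "pets" the card every scan, and if the PC hangs, the pets stop and the card forces its outputs to a safe state. A minimal sketch (Python as pseudocode; the card class here is a stand-in, not any vendor's actual API):

import time

class WatchdogCard:
    # stand-in for a watchdog-equipped I/O card (hypothetical API)
    def __init__(self, timeout_s):
        self.timeout_s = timeout_s
        self.last_pet = time.monotonic()
    def pet(self):
        # real hardware: write a magic value to the watchdog register
        self.last_pet = time.monotonic()
    def expired(self):
        # real hardware does this check itself, in silicon, and
        # drops the outputs to a safe state when it trips
        return time.monotonic() - self.last_pet > self.timeout_s

def scan_cycle():
    pass  # read inputs, solve logic, write outputs

card = WatchdogCard(timeout_s=0.5)
for _ in range(20):      # the control program's main loop
    scan_cycle()
    card.pet()           # only pet after a healthy scan
    time.sleep(0.05)

The point is that the safety decision lives on the card, not in the OS.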

> You have a good point about the hardware. But, what about the
> operating systems and software? Most of the application software
> vendors are pushing Windows (apparently Windows is now the leader in
> control operating systems). My direct experience with Windows (2 NT,
> 2Win95), is that it is the worst of any I have used.

Operating systems do NOT do control; they are only a platform. It falls directly onto the provider of the control software to ensure that what they sell will work for the applications they are marketing their product into (e.g.: if triple redundancy is required, and you can't do it, don't say you can with a PC solution!).

Good control software should not fail, nor be dependent on some other company's O/S code. There are a number of control software vendors
who provide products based on this policy. The vendors who take the easy way out and write to WinNT or Win95 are doing an injustice to the PC control industry - IF they advertise that they have a deterministic, real-time, reliable system that can be used in virtually any control application.

If you're planting a garden, use hand tools - if you're plowing a field, you need a tractor!


Best Regards,
Cindy Hollenbeck
email: cindy@softplc.com
http://www.softplc.com
281/852-5366, fax 281/852-3869

By Barry C. Ezell on 14 April, 2000 - 9:36 am

(Originally posted Wed. 1/21/98)
Why is it that when one requests information on actual reliability or coverage, customers cannot get real information? I would like to see the data to help me decide on the best system.

Barry

Barry C. Ezell
bce4k@virginia.edu
bcezell@aol.com
(804) 975-3525
11 Tennis Dr
Charlottesville, Va 22901

By A. V. Pawlowski on 14 April, 2000 - 9:59 am

(Originally posted Thurs. 1/22/98)
It should be sunny when I finish work and go out to my car in the evening too. Your note is just a little off the mark.

It is reasonable to expect products to be used as intended, but not all control situations need absolute reliability and it would be silly to
have to reinvent every wheel yourself for every situation. Many control situations can be, and are, satisfied through the use of PC products.

My comment was based on the fact that many people are pushing Windows as the latest and greatest and my personal experience so far indicates
otherwise. I wanted to see if I was the only one or just having bad luck.

----------
On Tuesday, January 20, 1998, Cindy Hollenbeck <cindy@softplc.com> wrote:

.........Good control software should not fail, nor be dependent on
some other company's O/S code. There are a number of control software
vendors who provide products based on this policy. The vendors who take
the easy way out and write to WinNT or Win95 are doing an injustice to
the PC control industry - IF they advertise that they have a
deterministic, real-time, reliable system that can be used in
virtually any control application.

If you're planting a garden, use hand tools - if you're plowing a
field, you need a tractor!.................

By Todd Wright on 13 April, 2000 - 9:07 am

(Originally posted Tue. 1/20/98)
Are we to say that "office grade" PC's are junk? Or is it impossible to find a quality unit? A PC is designed to meet certain environmental and
operational conditions. Since an office grade PC may not be designed to operate in as wide a range of conditions as an industrial unit, it will be more vulnerable to misapplication. In my experience with both industrial and office grade PC's, the component most likely to fail is the hard disk drive. The industrial PC's I have experience with use the same drives found in their "weaker" cousins, the difference being anti-vibration mounts. Since the drives are the same, the demonstrated reliability should be the same, provided they are applied properly. Industrial PC's are more expensive because effort has gone into designing them to be, well, industrial. To say that industrial units are more reliable because they are industrial, or that office grade units are not because they are office grade, is not correct. Judge each by what goes into it, and apply each by the same rule. A designer should analyze the needs of the application and consider the total cost of ownership. For example, we have $5000 industrial units and $1600 office units. The office units offer an order of magnitude greater CPU speed, additional RAM, and more standard options. The industrial units feature ease of maintenance and an order of magnitude greater resistance to environmental fluctuations. However, any environmental conditioning needed for the office grade unit must be considered to produce a fair comparison. The decision to use one type over the other should be approached with the same process as any other design decision.
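
To make the "fair comparison" arithmetic concrete, a toy total-cost-of-ownership sketch in Python (all figures are illustrative, not quotes):

# unit price alone is misleading; condition the office unit first
industrial = {"unit": 5000, "conditioning": 0, "spares": 800}
office = {"unit": 1600, "conditioning": 2500, "spares": 400}

for name, costs in (("industrial", industrial), ("office", office)):
    print(name, "total:", sum(costs.values()))

Once the enclosure and cooling for the office unit are priced in, the gap narrows considerably.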

Todd Wright - end user.

By George Robertson on 13 April, 2000 - 1:40 pm

(Originally posted Tue. 1/20/98)
Well said. Also, we typically don't put the PC part of a control system in the middle of the process. Typically, it's in a control room environment. I find it interesting that Honeywell thought the Dell to be good enough, yet members of this forum seem to fear mass-market PCs.

George Robertson
Saulsbury E & C
ggrobertson@mindspring.com

> Are we to say that "office grade" PC's are junk? Or is it impossible to
> find a quality unit? A PC is designed to meet certain environmental and
> operational conditions. Since an office grade PC may not be designed to
> operate in as wide a range of conditions as an industrial unit, it will

snip

By H. Ahrens on 13 April, 2000 - 1:41 pm

(Originally posted Tue. 1/20/98)
Just to add a bit of confusion: one of my clients (entirely on their own, without my input, I hasten to add) decided to replace an expensive T-xx computer used for MMI on a ship-loader with a London Drugs special (Packard-Bell, if you must know). Since the environment is a high-vibration area and quite dusty, including sulphur and other corrosives, I thought to myself "this will not last long", but kept my mouth shut. I'm glad I did, because the computer, although lying on its side in the kick space (with boot marks on the case), is still operating fine after nearly three years!
What does that prove?

Hugo

By Johnson Lukose on 13 April, 2000 - 3:35 pm

(Originally posted Wed. 1/21/98)
It proves that you only need a 'London Drugs special' to run a critical operation. I am of the opinion that this industrial computer bit is overblown. It is far better to get a 'London Drugs special' with a proper hard disk backup!! Anything requiring more reliability will be the realm of PLC, DCS, TMR, etc.

thanks.

Can be reached at;
=S= (M) Sdn. Bhd., Malaysia
Tel : +60 (0)3 7051150
Fax : +60 (0)3 7051170

By Tony Robinson on 13 April, 2000 - 3:40 pm

(Originally posted Wed. 1/21/98)
Proves they are lucky at dice and should go directly to Vegas... Penny wise, pound foolish... I have seen the same case, but there have been others where the cheap system went down, and took a little more down with it.

Tony

By A. V. Pawlowski on 14 April, 2000 - 9:38 am

(Originally posted Wed. 1/21/98)
It appears that as many people are having good luck running PC's with MS OS's as those who are having bad luck. I don't plan to retire to a desert island so I hope (and trust) my luck with these systems will improve.

I might add that in the 20-30 crashes I have experienced since starting to use Windows seriously, I have only had one BSOD. All of the others have been application lock-ups followed by computer lock-ups. Usually, the windows fail to update and close first. Then the start menu stays open. And then the mouse cursor freezes.

My guess has been that this indicates memory fragmentation, but I have been advised that incompatible video/graphics cards/drivers (followed closely by Ethernet cards), while otherwise appearing to work fine, can be a significant source of such problems. I will be especially careful with their use in the future.

By Randy Sweeney on 13 April, 2000 - 1:44 pm

(Originally posted Wed. 1/21/98)
I would assume the STD/VME versus PC market split is historical.

The STD/VME provided the first real time power in a standard package.... this replaced the proprietary SBC's and multiboard systems of the 70's and early 80's. The PC on the other hand just reached sufficient speed to supplant the VME's which formed the core of high capability systems.

We have ultrahigh speed packaging equipment with PC based control cores which replace previous VME and PLC... interestingly... the control is
hosted in the MMI PC. This is a little uncomfortable even to a PC enthusiast like me!

Seems to work ok though...

Randy Sweeney
Philip Morris R&D

By Randy Sweeney on 13 April, 2000 - 5:06 pm

(Originally posted Wed. 1/21/98)
We have NT running on both Alphas and PC's... the Alpha is an excellent machine and makes a VERY strong database server... unfortunately most industrial software will not run on it (Wintel only!).

Make sure that the application software you want will run on Alpha (native-- not the slower Intel compatibility mode) and make sure that the
software supplier is committed to maintaining the Alpha port - few are.


Randy Sweeney
Philip Morris R&D

By Michael Griffin on 13 April, 2000 - 12:16 pm

(Originally posted Tue. 1/20/98)
The problem is that most of the people promoting "PC automation"
begin by claiming that PCs are cheap (although they are a lot more expensive than most of the PLCs that I use). They also like to say that you can buy cheap hardware at the nearest computer store. So it should be no surprise that when people take this literally, PCs get a bad name for themselves. Note here that I am *not* referring to industrial computers when speaking of
PCs, no matter how much they may resemble PCs from a *software* point of view.

It can be pretty hard to justify the cost of using a real industrial computer to someone who doesn't understand the issues when there are so many promoters of cheap hardware. But the people promoting PCs are not the PC manufacturers, they are the software vendors, or people selling their software expertise. When they set a target price for a PC system, they
obviously would like to keep as much of that for themselves as possible.

I'm not saying that top quality desktop PCs have no place on the factory floor. I'm just saying that I believe that their application is limited. I do use them, but only for special applications.

I find it interesting that you sell mainly to OEMs. I've seen quite a bit of equipment controlled by STD bus computers. All except two of these were OEM machines which were produced in fairly large numbers. All of the VME computers I've seen have been used by robot manufacturers (or other similar equipment). The "cheap PC" systems that I've heard about (not including MMI systems) seem to have been built mainly as one-off jobs by consultants. Does anyone know of a good reason for this seeming split in the
market?

*******************
Michael Griffin
London, Ont. Canada
mgriffin@wwdc.com
*******************

By James Lang on 13 April, 2000 - 3:05 pm

(Originally posted Wed. 1/21/98)
I have been following this thread for some time. I believe this discussion started with the September 8, 1997 article by Joseph Garber entitled "The PLC versus the PC."

I believe that the ultimate goal we all are trying to achieve is to arrive at a system design that is appropriate and cost effective for the
application and client we are dealing with. With this in mind, the big question that keeps bugging me is, WHY?

Years ago, computer control started with I/O coming into a central computer which ran the control algorithms, generated alarms, etc. Later, distributed systems and PLCs relieved the central computer of this load for more efficient and reliable operation. Without going into a long history, it seems that all PC control has done is to go back to the old central-computer type of control. If you deal with the typical MIS type people in your organization, their usual solution is a bigger, faster computer. The only advance is that PCs are now orders of magnitude faster and more powerful than the old central computers. But so are the PLCs and local processor units of DCS systems. MMI and SCADA software packages that run on PCs now have the capacity for all of the sophisticated control, sequencing, PID loops, etc.

So again, WHY? Putting aside the discussion of industrial hardened PCs versus desk top types, why this step back? What advance in the arena of
control systems is being made here? I am sure that there are applications where this approach may be suitable and cost effective, but I do not
understand how, in the main, one can prefer a general purpose type machine such as the PC in an application that would best be served by a machine
more closely aligned to the application.

I think that with the speed and power available, we may be losing a sense of the direction we are going. Being a greybeard, I have been around long enough to see most of the evolution of computers, PLC's, DCS's, and especially the PC. I, for one, do not see the advantages of PC control. To me, it represents a step back.

In my career I have made many mistakes, but usually learned something. I therefore wait for the slings and arrows of outraged foes and special
interests, but will always ask, WHY.

Jim Lang

By Christopher Wells on 14 April, 2000 - 9:23 am

(Originally posted Wed. 1/21/98)
James Lang [JLang@BRWNCALD.COM] wrote:

>I have been following this thread for some time. I believe this
>discussion started with the September 8, 1997 article by Joseph
>Garber entitled "The PLC versus the PC."
>
>I believe that the ultimate goal we all are trying to achieve is to
>arrive at a system design that is appropriate and cost effective for
>the application and client we are dealing with. With this in mind,
>the big question that keeps bugging me is, WHY?

[Wells, Christopher D]
<snip> Jim & others
I have been in a design group responsible for PLCs in the 80's and now I work on embedded designs for power distribution. We have some
large volume on our smart meters, and here a dedicated proprietary design does make sense on many fronts. This is our expertise and focus, so we can hone the design. However, it is very expensive to embed designs, again on many fronts.

My involvement is with communications, that is getting all of these meters to give up their info to an energy monitoring/management - data acquisition system. At the system level this expense becomes overwhelming. Designing operating systems, software & hardware is too expensive to do it on your own . That is where the COTS - "Commercial Off The Shelf" terminology comes to mind. We need to leverage off of other peoples efforts - that is where the PC environment
looks so attractive. Look at Grayhill's open line control platform (Grayhill.com) - the whole marketing thrust is based on this concept.

My latest project is to create a LAN/WAN interface for our meter products and I am struggling with all of these PC reliability issues. I will use one of the leading RTOSs and have looked a lot at off the shelf single board computers. The hope is that I can use a wide variety of PC104 boards and all the standard communication ports with their software drivers already finished for future development, and not
have to design them myself.


>Years ago, computer control started with I/O coming into a
>central computer which ran the control algorithms, generated alarms,
>etc. Later distributed systems and PLCs relieved the central
>computer of this load for more efficient and reliable operation.
>Without going into a long history, it seems that all PC control has
>done is to go back to the old central computer type control.

[Wells, Christopher D]
I disagree - take a look at the way client and server applications are being distributed over LANs and WANs - for example look at HP's
Vantera product up on their web site. (interestingly though they use their own HW platform down at the lowest level - 68331 but with COTS RTOS from WindRiver)

<clip>

By Carl Lemp on 14 April, 2000 - 9:26 am

(Originally posted Wed. 1/21/98)
I don't think the advance is on the technology end of things. I think the advance is in the familiarity of the equipment. There are thousands of new
graduates, IS/IT programmers, supervisors, users, operators, etc. that feel perfectly comfortable with a PC but would be afraid to touch anything when standing in front of a PLC or a DCS. I've watched several competent VB/C/C++ programmers get frustrated with the "foreign" style of ladder logic programming. On the bright side: Just because a technology is inferior at the moment does not mean it will always stay that way. As long as the PC control companies have competition and are making sales, they will enhance the
products. (And more quickly than the PLC companies enhance theirs, since the PC control people do not have a large installed base for which they have to provide an upgrade path.) It's funny how history repeats itself. I seem to remember control engineers questioning the reliability and appropriateness of PLCs when they were first making inroads into process control.

Carl Lemp

James Lang wrote:

> With this in mind, the big question that keeps bugging me is, WHY?
>
> So again, WHY? Putting aside the discussion of industrial hardened PCs
> versus desk top types, why this step back? What advance in the arena of
> control systems is being made here? I am sure that there are applications
> where this approach may be suitable and cost effective, but I do not
> understand how, in the main, one can prefer a general purpose type machine
> such as the PC in an application that would best be served by a machine more
> closely aligned to the application.

By Don Lavery on 14 April, 2000 - 10:51 am

(Originally posted Fri. 1/23/98)
Carl:

Was the questioning due to the fact that PLC's were prone to software crashes and hardware failures, or was it pure reluctance to use something new and different? Just curious, as I was not involved with the industry at the time.

Aside from one forum participant's response that software crashes are due mainly to amateur/inexperienced programmers or programmers too lazy to keep their skills up to date, it seems to me that those who are currently
reluctant to implement PC's in a control environment have a pretty solid foundation on which to base their opinions. I'm not inclined to believe that even most O/S crashes are the result of some unauthorized or inexperienced twiddling. Otherwise, why would PC manufacturers spend so much time and money on personnel for consumer helplines? PC's are NOT noted for working every time, even right out of the box. The ideal of Plug and Play falls so far short of real life that even a PC helpline technician
I once talked to sarcastically referred to it as Plug and Pray. There are even websites that deal with, get this - UNDOCUMENTED tips and tricks for
Win95/NT! If I couldn't expect anything better from PLC manufacturers, then I guess that I, too, would be greatly concerned about PLC reliability
and appropriateness. I wonder, sometimes, whether the competitive race among PC manufacturers for bigger/more bells and whistles has relegated system/hardware reliability to a lower priority than profitability. Remember the Intel Pentium chip? The one with the floating-point error that was discovered, but the marketing continued because end users like me who do not need massive number crunching would never be seriously affected by the bug? Thanks, but I think that I prefer PLC's.

Don Lavery
Lavery Controls
dlave@carol.net

By Carl Lemp on 14 April, 2000 - 1:38 pm

(Originally posted Mon. 1/26/98)
Don Lavery wrote:

> Was the questioning due to the fact that PLC's were prone to software
> crashes and hardware failures, or was it pure reluctance to use something
> new and different?

The arguments at the plant I worked at were specifically about the reliability of the PLC. The engineer doing the questioning was used to working with a DCS and didn't trust a PLC in a process (as opposed to a machine) control
application. However, I think these arguments against new technologies usually start out with some basis in fact (Early PLC's were not nearly as reliable as the ones being sold now.) but the "pure reluctance to use something new and
different" lingers on long after the technical issues have been solved.

Don Lavery wrote:

> Thanks, but I think that I prefer PLC's.

I also prefer the PLC...for the time being. However, I applaud those with the courage to take the risks of installing and debugging bleeding edge technology. If it weren't for them, new technologies would never become reliable enough for the rest of us and I would be spending my time tracing wires in relay cabinets and tracing tubes in pneumatic control cabinets instead of tracing ladder logic on a laptop PC.

By R. Suresh on 14 April, 2000 - 3:43 pm

(Originally posted Mon. 1/26/98)
> Don Lavery wrote:
> Was the questioning due to the fact that PLC's were prone to software
> crashes and hardware failures, or was it pure reluctance to use
> something new and different?
>
> PC's are NOT noted for working every time,
> even right out of the box.
>
> It seems to me that those who are currently
> reluctant to implement PC's in a control environment have a pretty
> solid foundation on which to base their opinions.

> Carl Lemp <clemp@coqui.net> wrote:
> > It's funny how history repeats itself. I seem to
> > remember control engineers questioning the reliability and
> > appropriateness of PLCs


I can confirm this. I come from a background of 20 years with Siemens, in Germany and in India, right from the days before PLCs were born (when they were essentially LCs - i.e. wired-"programmable" modules). The first system I implemented for automation of a complete cement plant had 14 logic centers, but no PLCs! The clients had not much confidence in PLCs. (We had even less! Not because we suspected the robustness of the electronics, but its functioning in our Indian ambient conditions!)

The logic centers were constructed modularly, entirely with contactor logic, but suitable for direct replacement with PLCs at a later date (remove the contactor baseplate, reconnect the terminal wires to the PLC-mounted base plate). We did implement the PLCs 3 years later at the same plant.

The first PLC locations (why, even variable frequency drives) required backup systems to be parallel-wired!

This was not entirely due to experience of the new electronics failing, but more a mindset.

I now run several companies engaged in designing bus-linked modules, PLCs (CAN bus) and applications concentrating on drives, industrial controls and BMS. We have implemented several systems with bus-linked modules (up to 120 nodes in some cases), entirely orchestrated for signal exchange and logging by the central PC. The programs were originally on the DOS platform. The PCs work 24 hours a day, 365 days a year. NULL PROBLEMO! We have at worst 1 breakdown call per year, and this is normally due to a bus disconnection.

Today we offer Windows based systems. We have also developed buffer intelligent (controller) interfaces to the buses.

Our personal experience is -

1. PLCs are indeed far more robust than commercial PCs. I would not include industrial grade PCs in this comparison.

2. Intelligent buffers developed to link the nodes to the PCs were a result of power considerations (a UPS for a PC costs more than a 24/12 V battery backup system).

3. PC failures at the hard disk level have been almost negligible, even though we would normally have placed this as the most failure-prone area (moving mechanism).

4. Windows OSes (and beyond) add a large amount of code, and demand superior hardware to make the OS function efficiently, but reduce the MTBF for the very same reason. We have had more crashes in Windows-based systems than in DOS-based ones. Clearly, the systems are better looking, more powerful, more salable - but also more failure-prone! I am sure this would change too, given time. Given that more efficient (and less lazy!) programmers are needed for the newer OSes, the time needed to master each level of new hardware and software is getting worse than keeping up with the Joneses!

5. The observation in 4 above is NOT a mindset, but very real.

6. PC programming does permit the use of a variety of tricks (undocumented or otherwise), but the PLCs are not beyond these (different methods to achieve a more efficient end). CANopen poses us enough challenges in carrying out multi-CPU dialog, as would probably a PC-level program building simultaneous tabular compilations of field data and graphical displays of the same.

7. Given the tasks of controlling and monitoring, every solution, be it a PC or a PLC, is as good as the other, unless the systems are so expensive that they need a longevity of several decades without upgrades. The sole criterion is that the solution clearly meets the need.

I am sure that the above views are debatable, and look forward to more views on the subject.

Best regards to all
Suresh

========================================================

From: ICON microcircuits & Software Technologies pvt ltd
12, First Street, Nandanam Extension, Madras-600035, INDIA
Ph: +91-44-4321857
Fax :+91-44-4335578
EMail : rajaram@md2.vsnl.net.in

By Bill Sturm on 14 April, 2000 - 3:37 pm

(Originally posted Mon. 1/26/98)
James Lang wrote:

> Years ago, computer control started with I/O coming into a central computer
> which ran the control algorithms, generated alarms, etc. Later distributed
> systems and PLCs relieved the central computer of this load for more
> efficient and reliable operation. Without going into a long history, it
> seems that all PC control has done is to go back to the old central
> computer type control.

I think that one of the reasons for the trend back to centralized control is that people are collecting and monitoring much more data than in the past. Many PLC's have very slow networking facilities. This makes the PLC-to-PC interface much more difficult. I just spent many hours trying to get a few hundred points between a SLC 5/03 and a PC based MMI with a reasonable update speed. You have to regroup and shuffle memory inside the PLC to get contiguous blocks, and it takes several reads to get all of the different data types (at least with A-B).
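
The workaround on the PC side is to batch the polling: group tags so each driver transaction reads one contiguous block instead of one point. A minimal sketch of the grouping step in Python (the (file, element) tag scheme and the (file, start, length) read plan are illustrative, not any particular driver's API):

# group (file, element) tags into contiguous runs so each run
# costs one driver transaction instead of one read per tag
def plan_reads(tags):
    plans = []
    for file_no in sorted({f for f, _ in tags}):
        elems = sorted(e for f, e in tags if f == file_no)
        start = prev = elems[0]
        for e in elems[1:]:
            if e == prev + 1:      # still contiguous
                prev = e
                continue
            plans.append((file_no, start, prev - start + 1))
            start = prev = e
        plans.append((file_no, start, prev - start + 1))
    return plans                   # (file, start, length) per read

# example: three transactions instead of six single-point reads
print(plan_reads([('N7', 0), ('N7', 1), ('N7', 2),
                  ('F8', 0), ('F8', 1), ('N7', 10)]))

Shuffling the PLC data tables so the hot tags land in those contiguous runs is exactly the tedious part described above.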

One way to solve this problem is to do the control in the PC; this way you can have one tag database and very fast screen updates and data acquisition.

I am not saying that this is the best way, however. I would prefer to stay with a more distributed system with many small processors.
Some of the new PLC's are starting to have faster networking, such as ethernet, that makes it easier and more economical to connect with a host computer. No more 19.2 kb multi-drop links or $1000.00 interface cards. I would like to see all PLC's come with ethernet or at least 2 comm ports capable of 115 kilobaud serial comms. PC's
have had both of these luxuries for years. No wonder they are becoming more popular.

--
Bill Sturm
bsturm@gatecom.com

By A. V. Pawlowski on 14 April, 2000 - 4:08 pm

(Originally posted Mon. 1/26/98)
> I would like to see all PLC's come with ethernet or at least 2 comm ports capable of 115 kilobaud serial comms. PC's have had both of these luxuries for years. No wonder they are becoming more popular. <

At any particular point in time, I think you could get higher speed serial ports on PLC's than you could find built-in to PC's. I think Ethernet was supported by PC's earlier than PLC's, but not by much. Of course, the difference is cost for the feature and whether it comes as a built-in item. As far as I know, only Apple includes an Ethernet port built-in to their motherboard and, although most PC's come with them, they are plug-in, separate cost items.

By A. V. Pawlowski on 18 April, 2000 - 12:51 pm

(Originally posted Tues. 1/27/98)
It has been pointed out to me that I was wrong in my comment below; PC's have indeed commonly supported both Ethernet and high-speed serial ports (>57.6K) since the mid-to-late 1980's, i.e. many more years than PLC's. I should have checked my facts before I opened my mouth. I apologize to Bill Sturm and anyone else who may have been upset over my post.

BTW, I believe both PLC's and PC's have their place in today's control systems. The choice for me is application dependent. I also think that some PLC manufacturers are charging exorbitant prices for Ethernet capability.


I wrote:
At any particular point in time, I think you could get higher speed serial ports on PLC's than you could find built-in to PC's. I think Ethernet was supported by PC's earlier than PLC's, but not by much. Of course, the difference is cost for the feature and whether it comes as a built-in item. As far as I know, only Apple includes an Ethernet port built-in to their motherboard and, although most PC's come with them, they are plug-in, separate cost items.

(Originally posted Mon. 1/26/98)
(Quoting Bill Sturm - bsturm@gatecom.com)

> I think that one of the reasons for the trend back to centralized control is
> that people are collecting and monitoring much more
> data than in the past.

This is my main reason for it, anyway.

> Many PLC's have very slow networking
> facilities. This makes the PLC to PC interface much more difficult.

19.2 kbaud RS-485 doesn't cut it for me.

> One way to solve this problem is to do the control in the PC, this
> way you have can have on tag database and very fast screen
> updates and data acquisition.
> I am not saying that this is the best way, however. I would prefer
> to stay with a more distributed system with many small processors.

I agree distributed would be better, because I have some control functions that require sub-millisecond response times, which isn't compatible with the way the dumb-I/O-only network is handled. I would like to be able to send small, Java-like control applets to my distributed I/O for higher-speed local processing.

> Some of the new PLC's are starting to have faster networking, such
> as ethernet, that makes it easier and more economical to connect with
> a host computer. No more 19.2 kb multi-drop links or $1000.00
> interface cards. I would like to see all PLC's come with ethernet or
> at least 2 comm ports capable of 115 kilobaud serial comms.

I second that!

How about USB ports? They should certainly be inexpensive to add.

Rufus V. Smith
RufusVS@aol.com

By Armin Steinhoff on 18 April, 2000 - 12:46 pm

(Originally posted Tues. 1/27/98)
RufusVS@aol.com wrote:
>How about USB ports? They should certainly be inexpensive to add.

Yes, that's right for the hardware ... but have you read the USB specification? It contains a lot of 'technology prose' about the USB protocol, which is really not easy to implement. It is a lot of work to realize, so it can't be inexpensive :-(

BTW, is USB used more in the field than FireWire? For which bus system are more devices available today?

Armin Steinhoff


http://www.DACHS.net

(Originally posted Tues. 1/27/98)
Actually the chips for implementing USB are relatively inexpensive. The problem is (from my reading of the specs) that USB was designed with PC's and their peripheral devices in mind, not remote sensing, etc. It would be fairly easy to create a PC USB interface, but putting that into, say, a photoelectric sensor, would be very difficult. I'll suggest a few reasons:

1) the physical size of the chip set (the chip set I last looked at (Intel?) was two or three fairly large devices, plus interface hardware);
2) the data exchange protocol is designed for sending data to a printer or getting data from an optical scanner, i.e. non-deterministic, with large data packets and a limited number of nodes per network;
3) the standard connector is huge and not practical for plant applications.

Just my two-cents' worth...

Tom Kirby
Richmond Automation Design, Inc.
804-262-6029
804-262-6421 FAX
tkirby@ricauto.com
www.ricauto.com

(Originally posted Fri. 1/23/98)
I've been... casually... following this thread of "PC Reliability", which is driving me a little crazy. The reason being: to me this sounds like the old PC<->MAC, Win<->OS/2, BSD<->Linux conversations. However, because I have so much at stake here, I'd like to intervene and ask a question.

In what way are PC's supposedly unreliable? I.e. the hardware can obviously be unreliable due to 2 aspects.
1) Poor assembly
2) Poor engineering
Now, assuming someone is serious about getting their hardware, we can rule out #1. If the assembly is poor, don't use it.
As for #2, the PC concept has been around since the early 80's - the hardware is not perfect, but not very far from it.
The software can also have 3 main aspects which can be "bad"
1) The BIOS code is bad
2) The OS is bad
3) The PC based programming is bad
well, just like #2 for hardware, the BIOS is good if your hardware is good. There is no difference whether a device is a PLC or a PC: if its basic programming is wrong, it is bad. If not, it is good (in that respect).
2) The OS. What OS are we all yelling about here? Other than Windows, there is DOS (which has had MANY years of testing and proves to be VERY reliable, from that which I see). There is Linux and similar systems, which are even far beyond DOS in reliability. And then there are the specially made micro OS's, which are GUARANTEED to be reliable by their manufacturers...
3) The PC based programming is bad? Well, if the implementor is bad, what can you expect? This (since I _AM_ a programmer here) I am not understanding.

Now, maybe I'm misinterpreting something. Maybe people are talking about master stations here, and I'm just clueless and out of whack. But I'm
developing a device on the PC architecture right now - I haven't had a single problem yet (except that the current board I use has no co-processor, which isn't exactly a big problem). The hardware and software have been working nothing less than excellent. So, can someone PLEASE give me the spark if I am doing/assuming something wrong here? I don't want to invest huge sums of money only to find out that I missed something very basic.

By George Robertson on 14 April, 2000 - 3:00 pm

(Originally posted Mon. 1/26/98)
OK, You asked for it:

> I've been ... casually .. following this thread of "PC Reliability" which is
> driving me a little crazy. Reasons being - to me this sounds like the old
> PC<->MAC, Win<->OS/2, BSD<->Linux conversations.. However, because
> I have so much at stake here, I'd like to intervene and ask a question.
>
> In what way are PC's supposedly unreliable? I.e. the hardware can
> obviously be unreliable due to 2 aspects.
> 1) Poor assembly
Sometimes, though not so common.
> 2) Poor engineering

In some cases, particularly with regard to "true" PC compatibility, whatever that is.

> now, assuming someone is serious about getting their hardware, we can
> rule out #1. If the assembly is poor, don't use it.

How do you know whether it's poor?

> As for #2, the PC concept has been around since early 80's - the hardware
> is not perfect, but not very far from it.
> The software can also have 3 main aspects which can be "bad"
> 1) The BIOS code is bad

Bad, or just different. It is difficult to develop code that runs on everyone's BIOS. Unless you use the same BIOS that the developer
used, you will be "beta" testing. I know it shouldn't be so, but them's the facts.

> 2) The OS is bad

Definitely. If you have a bug free one, let me know.

> 3) The PC based programming is bad
> well, just like #2 for hardware, the BIOS is good if your hardware is good.
See above
> There is no difference whether a device is a PLC or a PC: if its basic
> programming is wrong, it is bad. If not, it is good (in that respect).

Big difference. A PC OS is a non-deterministic, interrupt-driven collection of code for allocating PC resources (definition). A PLC OS is a deterministic, probably NOT interrupt-driven, very limited engine that does a very specific task, and is probably completely testable.
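
To make the contrast concrete, here is a minimal sketch of the classic PLC scan cycle (Python used as pseudocode, not any vendor's firmware): one fixed loop, no interrupts, so its timing and behaviour can be characterized completely.

import time

def read_inputs():
    # sample all field inputs into an input image table
    return {"start_pb": True, "stop_pb": False}

def solve_logic(inputs, state):
    # ladder equivalent: motor = (start OR motor) AND NOT stop
    state["motor"] = (inputs["start_pb"] or state["motor"]) \
        and not inputs["stop_pb"]
    return state

def write_outputs(state):
    pass  # copy the output image to the field

state = {"motor": False}
SCAN_MS = 10
for _ in range(100):   # fixed scan: read, solve, write, repeat
    t0 = time.monotonic()
    state = solve_logic(read_inputs(), state)
    write_outputs(state)
    # idle away the rest of the scan to keep the cycle time constant -
    # note that on a desktop OS even this sleep is not guaranteed,
    # which is exactly the determinism problem described above
    time.sleep(max(0.0, SCAN_MS / 1000 - (time.monotonic() - t0)))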

> 2) The OS. What OS are we all yelling about here? Other than Windows,
> there is DOS (which has had MANY years of testing and proves to be
> VERY reliable, from that which I see) There is Linux and similar systems,
> which are even far beyond DOS in reliability. And then there are the
> specially made micro OS's , which are GUARANTEED to be reliable by
> their manufacturers...

What do the manufacturers do if they fail? What's the guarantee?

> 3) The PC based programming is bad? Well, if the implementor is bad,
> what can you expect? This (since I _AM_ a programmer here) I am not
> understanding.

With most of the complex systems, you've hit the nail on the head. Modern programming is such a hodge-podge of DLLs and objects that it's hard to see who's to blame. If you want something really tight, you have to write it in assembler, and totally pre-empt the OS. Which is basically what's going on in a PLC.


> Now, maybe I'm misinterpreting something. Maybe people are talking about
> master stations here, and I'm just clueless and out of whack. But I'm
> developing a device on the PC architecture right now - I haven't had a single
> problem yet (except that the current board I use has no co-processor, which
> isn't exactly a big problem). The hardware and software have been working
> nothing less than excellent.
> So, can someone PLEASE give me the spark if I am doing/assuming something
> wrong here? I don't want to invest huge sums of money only to find out that I
> missed something very basic.

Just test your package as completely as possible, insist that your customers run your code on the same hardware, with the same OS, and don't run anything else on the same box (I'm not kidding here, if this is for process control), and you'll be golden.

-George Robertson
Saulsbury E & C
Getting grayer, and perhaps a bit jaded. (realistic?)

K.I.S.S. Hmmm, is a PLC OS simpler than NT?

George Robertson

(Originally posted Mon. 1/26/98)
Off the top of my head, my quick list of reasons PC control is preferable to PLC's would include:

1) Hardware cost (CPU/Display/Hard Drive/Network Cards)
2) Choice of development languages for Control and/or Data processing
3) Simpler support for custom and semicustom boards
4) Simpler software upgrades (i.e. Modem downloads)
5) Mainframe connectivity
6) Multiple hardware suppliers
7) Simulation of system without physical hardware

Admittedly, my applications contain a lot of data processing into and out of the controlled system, and the need for extensive audit trails and report
generation, all of which is more PC-like in processing, along with the need to control and coordinate switches/cams/motors/indicators etc.

I still have small control needs best met by little PLC's.

And frankly, I would still be nervous putting a PC in a system where failure is truly hazardous.


Rufus V. Smith
RufusVS@aol.com

By Raghu Krishnaswamy on 14 April, 2000 - 4:25 pm

(Originally posted Tues. 1/27/98)
Use of PC's for control applications might be illegal (in certain cases where the potential for death or injury exists). Surprised? OSHA (Occupational Safety and Health Administration) requires any new system to be qualified, and in order for the system to be qualified, an MTBF figure is required. I seriously doubt one can get a published MTBF figure from Microsoft for Windows NT. At least, I am not aware of one.

Again this is just one interpretation of the OSHA rules, and I would love to hear different interpretations.

I am running an HMI on a Pentium PC under NT 3.51 to monitor (not control) a process. The system has performed reliably, with a few hang-ups here and there. We never had any problem with MS-DOS, on which we were running the previous HMI. Can one automatically conclude that DOS is superior to NT? Probably not. NT is going through its evolution process like DOS did. In order for NT to be accepted by engineers, Microsoft should adapt to the world of engineers. They need to accept the fact that engineers are different from accountants, and that developing and marketing a product to engineers is a different ball game altogether.

Raghu Krishnaswamy
Senior Project Engineer
Westinghouse Electric
Commercial Nuclear Fuel Division
Columbia South Carolina

(Originally posted Tues. 1/27/98)
I just finished a job on a desktop PC running a pharmaceutical batch process. I used Taylor Waltz with Taylor Process Windows. We are using BECKHOFF DeviceNet I/O with the SS driver card. We are running 12 serial ports. On the serial ports we are talking to 8 Total Control 6" colour QuickPanels. We are talking to 3 other PLC's for communication and control. We have a parallel port ZIP drive. We have 256 MB of memory. We run the control kernel, log a large number of variables, and then plot the variables out for each batch. This is all done on the same desktop box. I installed NT 4.0 with Service Pack 3. I am not a system administrator. It has been running for 4 months now and we do not have the blue screen problem.

The blue screen problem I have seen usually only occurs on memory-deficient machines - I mean machines with under 128 MB of memory. Our application never requires over 44 MB according to the NT Task Manager, but NT does some funny things with under 128 MB.

I believe NT is stable enough for control with proper installation. I might add that I have seen some really strange reliability problems on AB PLC-5's. I have done a lot of those also. PLC-5's still fault and quit on division by zero; I would not call that fault-tolerant. I have had remote racks quit communicating with some analog cards in a rack, but not all.

My experience with Taylor Waltz, NT, Process Windows, QuickPanels and a desktop machine is that it seems as reliable as the PLC-5's I have had to work with. The desktop solution was a much cheaper purchase, and the software much friendlier.

There is my two bits worth on control with a PC and NT. I did it and it works.

Owen Day
Engineer

By jm Giraud on 25 June, 2000 - 4:01 pm

Opinion:
A 4-5 million dollar factory is peanuts; think 4-5 billion. I spent 30 years in process control instrumentation. I never used PCs; distributed systems and PLCs, plenty. It is impossible to program a fault-proof, lengthy piece of software. Remember that missing comma at NASA some years ago. Twelve of us went to Philadelphia, exercising one of the world's top distributed systems for a nuclear power plant: it failed. We then went to Phoenix, exercising the same system: OK. Believe me, the authorities in Philadelphia were not newbies, and soon there were plenty of them. Nyet, Camarad!

Yes, the first loop in an industrial system is expensive because of the minimum requirements. As the plant grows into the millions/billions, the overall automation cost stays around 5%.

Let's talk seriously. Take a blank PC, no DOS, nothing. Install DOS, an operating system, complete error messaging, and the same for all kinds of I/O: that is reshaping the wheel with rope and nail. Yes, there are systems of that kind on the market. The maths that come with them are university or textbook maths; do not bet your head on them. Numerical maths are unsure friends unless you are an expert (I know a great deal about numerical approximation of functions and about what is actually running in computers; I also use a scientific software package). Discouraged? No.

Now, the best piece of software that you might add to Microsoft is Microsoft-dependent. It is a monopoly of nuts and bolts just thrown in. Examples of Microsoft stupidities: Excel is the math tool of Windows, yet when you write a math page in Excel it is impossible to use the character font that is in Word. So for a rich math page I use Publicon on top of Excel; nicely enough, a double click reopens Publicon. Excel does not accept implied multiplication, so 3x must be entered as 3*x. 3x-5y equals -5y+3x, but Excel does not accept -5y+3x; an idiotic space before -5y is required. The idiosyncrasies are endless with Microsoft. Error... Error... F..k, show me! Excel I like, but it is not faultless. Another example of software incompatibility: in my approximations I am a great user of the Thiele approximation (it works where polynomial approximation works, and works where polynomial approximation does not work). The last convergent may be negative, .../-C)))))), which Excel digests but another scientific package does not; there it must be written .../(-C))))))).

All that to say that PCs are not designed for plant automation. Loop structure, copied from analog systems, is extremely complex; millions of man-hours just cannot be imaged overnight. PCs are suitable for data acquisition and plant optimization, but not for closing loops. For each loop I would use individual modules (probably numerical) or a multi-loop system. Among multi-loop systems, I incline toward Foxboro; it will do a lot of logic too. Some years ago, for logic (from small to large size), there was Reliance. Fifteen years ago there was an 8-loop module - redundant, extensible, powerful beyond imagination, so simple to use: too advanced, and not on the market anymore. A system like Foxboro is the continuation of 60 years of practice, fully compatible with whatever may be adjoined.

I started the profession using relays; then PLCs appeared that could not do this and could not do that. I left the profession in 1995. I had no math coming with the PLC. One particular client had installed three-leg RTDs terminating on three terminals, but a two-leg bridge, so I told him he would be about 3 °C off. I supplied a small piece of correcting polynomial, but there was no math facility to run it; the client had to wait for a three-wire bridge card, if there was ever enough demand. In 1992 a fellow worker of mine lost his hair trying to get the derivative term to work on a PLC (a great name). If you need derivative action, that's because you need it to work; there was no way with that system, because it is a three-term one. It's like multiplying a diameter by 3 or by 4 instead of pi!

You see, there are at least two kinds of maths: maths that work, and college maths. The same philosophy applies to process control: proven systems, and imagination. Whichever one is selected, make sure you square away the limits and the bugs.

jmgiraud@infoteck.dr.qc.ca

Yes, many factories run on PCs. But they run only part of the year, so downtime at any time is no problem.

By Dan Berger on 27 June, 2001 - 12:48 am

From a hardware POV, PCs can be quite reliable if environmental conditions are fine (temperature, moisture, dust, vibration/shock, clean UPS power, ...), but operating systems, especially MS Windows, are not reliable.
The major problem is that in some domains customers demand more and more Windows-based process control systems, although other OSes (Unix-based, OS-9, OS/2, ...) are much more robust. One should admit the fact that ANY Windows-based system will crash sooner or later, and will also show apparently random instabilities leading to unpredictable behaviour. With fairly redundant client-server architectures used as HMIs it is (probably???) possible to guarantee acceptable system operability, but there is a price to pay for it.
Also, PCs can't replace PLCs; a Soft-PLC is NOT a PLC CPU equivalent, and the same goes for PC-bus "PLC CPU" boards. There are models which use a separate PSU, so the "PLC CPU" board works even if the PC power is off - but if so, finally, why should one get such a board instead of a regular, real PLC CPU?
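
To put a number on the redundancy argument (illustrative figures only, not measurements): if a single HMI node is available 99% of the time, N independent nodes in parallel are available 1-(1-0.99)^N of the time - assuming the failures really are independent, which crashes caused by a common OS rarely are.

# availability of N redundant nodes in parallel (toy numbers)
def parallel_availability(a_single, n):
    return 1 - (1 - a_single) ** n

for n in (1, 2, 3):
    print(n, "node(s):", round(parallel_availability(0.99, n), 6))

That common-mode caveat is why the guarantee above rates only a "probably???".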

By Christopher Blaszczykowski on 24 April, 2001 - 6:34 pm

Let's create some pictures:
1. PC/PLC combination
2. PC only

1. PC/PLC combination

Manufacturing environment with several production lines. First of all, you have to consider the fact that any hardware combination is, and always will be, more reliable than any software. THAT'S A FACT! With a PLC you have a choice of redundancy combinations, which increases both stability and reliability, as well as the option of running in manual, semi-automatic and automatic mode. Maintenance is much easier. Even if the PLC and PC fail, you can still run manually, especially if you provide the safety of single devices controlling single processes. Example: a PID loop running on hardware with communication to the PLC, but capable of running independently (see the sketch at the end of this post). In semi-automatic mode all connected devices depend on the PLC program, and all variables can be set from the PLC ladder logic. In automatic mode you have the security of stable programming and a "pretty graphical interface", but the core of the program still runs in the PLC!

Another factor is the ability to secure the program for quick retrieval from EPROM. In this situation, even if one element fails you can still run production and have time to correct problems without losing too much production time, which can otherwise run into millions. This also allows safe, uninterrupted PC maintenance - a very important factor, especially if you store a lot of data. Lack of such maintenance can cause total loss of valuable engineering resources for R&D, production, business, and process. Also very important is securing resources on the network. By allowing only read-only access for other network servers, you will be able to prevent any unwanted access to resources, for the same reasons as above. I know this is an insufficient explanation - but that is for now. Later I will explain it in more detail so you can have a full picture.

2. PC only

Taking all of the above into consideration, picture the same or similar operation based only on PCs.

A. Under no circumstances can you perform multi-level operation on a PC the way you can with PLCs. Where a PLC architecture allows you to create a pretty complex chain of CPUs communicating at different levels and controlling defined functions, a PC is limited to the number of boards you can insert into the motherboard. In many cases either you have limited slots or there is a problem finding a PC with more than 2 or 3 free slots. So how many slots can you use for I/O?

B. In the majority of cases industrial PC's become obsolete, and it is hard to find either operating systems or parts for them. Besides, they cost too much.

Now, let's assume that you successfully implement a PC-only operation, and a failure occurs! You are stuck! You may lose valuable data, and production stops until you fix the problem, which may take considerable time. I don't think I have to explain the consequences. Another problem - someone introduces a virus somewhere in the network environment - I don't think that requires any more explanation!

Let's talk about the networking environment. If every production line is networked through PLC's, in the majority of cases it is low maintenance, and it is hard to interrupt the entire process except by loss of electricity. In my long practice I have never heard of a virus-infected PLC. In the case of PC's, anything can happen. Let's make another assumption: all of the PC's running the lines are networked. You then have several PC's which can cause problems. In the simple case only one line is out for several hours; in the worst case the entire networked factory can be out for several days until all of the elements are fixed. The use of network resources is a major problem, especially if engineering servers and production servers are connected to the business server. In my practice I was forced to limit and block resources at each level to prevent access to, and abuse of, (production and engineering) resources by the business section. The majority of viruses come from exactly that business section! In some cases you can find around 90 viruses or virus sources. Imagine the effect of that on production. With a PLC this can be prevented much more easily and much more quickly. It is a necessity to block all of the resources, and limit them to read-only if necessary! Otherwise you are in trouble.

Christopher Blaszczykowski
cblash@netnitco.net
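
P.S. As a concrete picture of the fallback idea above - a PID loop that keeps regulating even when the PLC link drops - here is a minimal sketch (Python as pseudocode; the setpoint and process-value stand-ins are assumptions, not a real driver API):

def pid_step(sp, pv, state, kp, ki, kd, dt):
    # one increment of a textbook PID; state carries the
    # integral term and the last error for the derivative
    err = sp - pv
    state["i"] += err * dt
    d = (err - state["last"]) / dt
    state["last"] = err
    return kp * err + ki * state["i"] + kd * d

state = {"i": 0.0, "last": 0.0}
sp = 50.0                  # last known setpoint
for _ in range(100):
    new_sp = None          # stand-in for a read from the PLC link
    if new_sp is not None: # link up: track the PLC's setpoint
        sp = new_sp
    pv = 48.0              # stand-in for the measured process value
    out = pid_step(sp, pv, state, kp=2.0, ki=0.5, kd=0.1, dt=0.1)
    # write `out` to the actuator here; the loop keeps running on
    # the last good setpoint even while the PLC link is down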

By Chris Hale on 22 June, 2001 - 6:00 am

I guess that you are already getting the gist from the other respondents that it really depends on the application and how it is applied.

The main rules of thumb that I can suggest are:

Shy away from the Win9x OSs; they are more for office/home use. NT/2000 seems to be fairly stable, but is still not deterministic or real-time. VenturCom seem to have a real-time extension that is used by most of the big SoftPLC manufacturers.

Use good quality PC's. I have successfully used DELLs for many years. Beware of some big name suppliers that use bespoke type hardware and bus systems - YOU KNOW WHO YOU ARE. Make sure that the PC uses standard parts that you can easily get hold of. Try to stay away from hard-disk storage, use flash disks, etc.

Weigh up the risks. If your a&*e is on the line, stick to the old faithful PLC. In my 14 years at this game I have never had a PLC CPU fail. I/O, yes, but with good MTBFs nonetheless.

I have had SoftPLCs that needed weekly reboots and NT SCADA packages that "freeze", and all were sitting on a standard OS (from you know who).

But to be more positive, I am currently investigating using European SoftPLCs (well, 30+ of them, each of which can take a PLC slot card) as the main controllers in a very large automation project in the UK.

By Dan Berger on 24 June, 2001 - 2:57 pm

Are preliminary conclusions available?
Concerning Soft-PLCs, there are some solutions using dedicated processor cards which keep working even if the PC OS is rebooted.
Concerning purely software-based solutions, I really wonder whether PC OSs are reliable enough. In my experience, no MS Windows product allows long-term runtime (more than several months of continuous duty) without crashes.
Some specialized OSs are better suited for industrial applications (e.g. QNX).
It would be interesting to hear any comments about Soft-PLC experiences.

By Ben Kelly on 17 June, 2003 - 1:55 pm

I work for a very famous PC manufacturer which switched to PC-based controls 3-4 years ago.

Without naming vendors, we used SW based on flowcharting coupled with a CAN based fieldbus running on an NT platform.

Due to reliability problems with the servers (75% failed in 3 years), bus stability problems, and application bugs, as well as good old NT problems, the company is backtracking to PLCs for all future lines.

To give it its due, the PC-based system proved easy to integrate into the enterprise using DCOM, OPC, etc. But with virtually all PLC suppliers now providing OPC servers, this advantage is lost.

On the other hand, we have some turnkey equipment running on a UNIX platform which has not failed within the same 3 years; this is the only PC-based scenario I would recommend.

By Yariv Blumkine on 22 June, 2001 - 6:35 pm

Hi guys,
SoHaR (Our name is derived from a contraction of Software and Hardware Reliability)
is a company dedicated to analysis and improvement of Reliability and Availability in critical systems. (nuclear reactors, airborne systems etc.)
I came across your group debate regarding reliability in the automation industry, and was wondering what the major concerns and problems are when you address reliability and availability issues.
I'm not trying to sell you anything (yet), but to understand your pains and debates when facing these issues.
Maybe later we could help.
I would very much appreciate responses to Yariv@sohar.com

Thank you,
Yariv Blumkine

By Curt Wuollet on 27 June, 2001 - 2:01 pm

I agree with most of what you are saying, but a lot depends on the application and how you define PCs. For example, an industrial SBC running embedded Linux in a DIN-rail case with appropriate memory, I/O and storage arguably _IS_ a PLC. Most PLCs are simply a small purpose-built computer running an executive built along the same lines as other embedded systems, except more generalized in nature to allow programming by the user. A SOC such as the MachZ from ZFLinux Systems has even more fail-safe features and embedded adaptations than what the PLC vendors are using. It would be fairly easy to build a drop-in replacement for many micros, at least. I fully intend to do just that if I can generate funding. At the same time, PLCs are moving more towards the PC as they address demands for non-logic functionality and reasonable communications capability. Between them is a gray area that they will both occupy in the near future.
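
As one concrete example of the kind of fail-safe feature mentioned above: embedded Linux boards typically expose a hardware watchdog through /dev/watchdog, and if the control process stops feeding it, the hardware resets the board. A minimal sketch, assuming the standard Linux watchdog driver interface (the scan-cycle function is a hypothetical placeholder for the cell's control logic):

    import time

    def run_one_scan_cycle():
        """Hypothetical placeholder for one pass of the cell's control logic."""
        time.sleep(0.05)

    # Opening /dev/watchdog arms the board's hardware timer; writing any byte
    # "feeds" it. If this process hangs or dies, the timer expires and the
    # hardware resets the board -- the recovery path needs no working software.
    with open("/dev/watchdog", "wb", buffering=0) as wd:
        while True:
            run_one_scan_cycle()
            wd.write(b"\0")    # feed only after a healthy scan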

In the meantime, PCs are useful where they are cost competitive and conditions allow. In my case, the PC is a far more viable solution even considering the cost of environmental control and a UPS. It was a case where we needed a PC for machine vision, terminal emulation and communications anyway, so it makes a lot of sense to run the logic for the cell on the same machine. The Linux box does logic a lot better than PLCs could do the other things needed, and the costs were drastically less. The reliability in this plant has been quite acceptable, with most of the Linux boxen I have doing MV running a year or so between scheduled downtime for a vacuum job and general check-up. These are running without the benefit of sealed enclosures or, in some cases, UPSs. I fully expect the cooled, sealed enclosure for this cell to at least double the maintenance interval.

For true relay-replacement, hazardous-environment jobs, PLCs make sense. But as fanless, diskless, low-power, high-function embedded platforms become available as commodities, there will be fierce competition for everything else. As you move up in functionality, programming time and development costs will clearly favor more general, more powerful solutions.

Regards

cww

By Ranjan Acharya on 27 June, 2001 - 2:09 pm

<clip>
One should admit the fact that ANY Win-based system will crash sooner or
later and also show apparently random instabilities leading to unpredictable
behaviour.
</clip>

I do not agree with this statement -- it is true for many situations, but not all. We have had some Windows NT Server systems out there that run "forever". The only time they have been re-booted is when the server was moved to a new room. The application software has crashed a few times, but that was nothing to do with NT (granted, you cannot be 100% sure of that, but the same could be said for a system running on top of Linux unless you know *exactly* what caused the crash) -- NT kept running with no sign of memory leaks, resource hogging, et cetera. The customer just re-started the application with no need for a service call.

The main problems with the system were the loss of a drive in the RAID array
(hot pluggable, no big deal) and the loss of a cooling fan (had to be shut
down for that too).

Before all the Linux lads get too upset, I am not implying in any way that
NT is a stable well-written OS (it is not), however, in a plain-vanilla
set-up with good hardware (the key, for any OS), users can expect to see it
behave quite well.

By Peter Whalley on 28 June, 2001 - 11:10 am

Hi all,

One of the disadvantages of NT, however, is the need to re-boot whenever a significant change is made to the software environment. Every time you install a service pack, or even a new version of any of the applications, NT needs to be re-booted. This is a major disadvantage for servers connected to the Internet, where updates need to be installed fairly frequently (for NT or Linux) for security reasons.

With Linux for example I can stop Sendmail, install an updated version and just restart Sendmail without interrupting the operation of any of the other applications and I can do it from a thousand miles away using ssh (secure Telnet). This is generally not possible with an NT system.
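
Purely as an illustration of that kind of remote, one-service-at-a-time update, here is a sketch that drives the stock ssh client from Python. The host name, init script and package path are all hypothetical:

    import subprocess

    HOST = "mailhost.example.com"    # hypothetical remote server

    def ssh(command):
        """Run one command on the remote host via the local ssh client."""
        subprocess.run(["ssh", HOST, command], check=True)

    # Stop just the one service, upgrade it, and start it again.
    # Everything else on the box keeps running throughout.
    ssh("/etc/init.d/sendmail stop")
    ssh("rpm -U /tmp/sendmail-update.rpm")    # hypothetical package path
    ssh("/etc/init.d/sendmail start")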

In a closed environment it may be possible to just get the system running and then leave it be for many months or even years, but if security is an issue, systems need to be frequently updated to stay ahead of the hackers, and this is where Linux really shines.

Regards

Peter Whalley

By Prashant V. Ingole on 20 July, 2001 - 6:52 am

Dear sir,
I think that although these PCs provide programming flexibility and a general-purpose solution, their reliability is a real concern for control people today.
When I work with my Pentium PC-based control workstation, problems like OS instability and hardware failure are common once the usage period grows. Sometimes the workstation fails without any clue or reason, so I am doubtful about a complete replacement.
Thank you
Prashant Ingole

Hi.
Based on my personal experience with PCs for automation, reliability is not their strong point. But on the other hand the PC has many advantages that make it attractive over other options. When extreme reliability is needed, then in the current era the PC is probably not an option.
PC reliability has improved over the years and has reached a kind of acceptance level. In general, PC reliability might be good enough for many applications. So when making the decision, PC or not PC, like anything else in life, the advantages and disadvantages must be carefully weighed.

By Reginald Sheridan on 1 August, 2002 - 9:51 am

I was surfing the net and found this site, and feel I must add my 2 cents. I have a PC program installed on a Dell computer in a few dust-ridden locations in South Korea. I have read some of the replies stating that an industrial PC is better than a regular PC. This is not the case. Most industrial PCs do not use the most modern and up-to-date CPUs. This system has been running since 1998 using Taylor's Process Windows and Waltz control software. It is also interfacing with a VB-developed program for recipe and production reporting. This software has since been acquired by GE Fanuc. If anyone has any questions, feel free to e-mail me at rsherida@mindspring.com

By Bryan Hoffman on 27 January, 2004 - 3:57 pm

IMO, PC control is not reliable. I have had problems with general protection errors and the like.

Especially with viruses, I would not want to take the chance.

The MTBF is not what I would expect; PLCs are on the order of 10 years.

PLCs are rock-solid technology; PCs, especially with custom programming, are not.

PLCs allow for easier troubleshooting; you don't have to get the original programmer to help.
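
For a sense of what an MTBF figure like that buys you, a quick steady-state availability calculation (availability = MTBF / (MTBF + MTTR)). The repair times and the PC's MTBF below are illustrative assumptions, not vendor data:

    # Steady-state availability = MTBF / (MTBF + MTTR)
    HOURS_PER_YEAR = 8760

    def availability(mtbf_hours, mttr_hours):
        return mtbf_hours / (mtbf_hours + mttr_hours)

    # Illustrative assumptions only: a "10-year" PLC CPU with a 4-hour swap,
    # vs. a hypothetical PC platform failing yearly and taking a day to rebuild.
    plc = availability(10 * HOURS_PER_YEAR, 4)
    pc = availability(1 * HOURS_PER_YEAR, 24)

    print(f"PLC availability: {plc:.5%}")   # about 99.995%
    print(f"PC availability:  {pc:.3%}")    # about 99.7%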

By Marc Molnar on 8 April, 2004 - 1:14 am

IMO a PC should only replace a PLC if the PLC cannot meet the application demands. There are third-party packages which create a SoftPLC that can be programmed in ladder, but frankly I find this ludicrous. I have programmed PCs on Win95 and put them in the field with no problems (fingers crossed) for the past 6 years. However, these systems are showing signs of hard-drive fatigue and Windows errors due to daily off/on cycling without a proper shutdown.

If you must use a PC in a production environment, I would suggest using a UPS that gracefully shuts down Windows upon 110 VAC power loss. This will protect Windows and extend the life of the PC.

By Curt Wuollet on 8 April, 2004 - 2:42 pm

I agree on this point, although it's unlikely I'd field W95. I wrote a script that shuts down the UPS after shutting down the OS, and disconnected the PC power switches. This way it behaves properly on loss of power or if someone drops the main breaker. Some people have fallen into the habit of hitting the power button whenever they don't understand what's happening. Many UPSs can watch a serial connection and can be programmed to do the right thing.

Regards

cww
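
Not the actual script described above, but a minimal sketch of the serial-watching idea, in Python with pyserial. The single-byte "on battery" marker and the port settings are invented for illustration; real installations usually run a vendor daemon that speaks the UPS's own protocol:

    import subprocess
    import serial    # pyserial; assumes the UPS reports status over a serial line

    # Invented protocol purely for illustration: the UPS sends 'B' while on battery.
    ups = serial.Serial("/dev/ttyS0", 2400, timeout=1.0)

    while True:
        status = ups.read(1)
        if status == b"B":
            # Clean OS shutdown; the UPS is separately programmed to cut its
            # own output afterwards, so a dropped breaker behaves the same way.
            subprocess.run(["shutdown", "-h", "now"], check=True)
            break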

According to SEMI S2, the computer can be shut down separately if it is only used for data logging. From what I understand, if the PC is doing the controls, it has to be shut down immediately upon EMO (emergency off).

However, PLC manufacturers are coming out with more advanced PLCs with more computing power and storage (e.g. CompactFlash). So I think the PLC is still the ideal choice.

By Reuben Allott on 24 January, 2006 - 3:03 am

Hi

PC-based automation has come a long way recently, mainly with improvements in flash-memory storage capacity and operating systems like Windows XP Embedded.

The Siemens Microbox is a bare-bones Windows XP machine with no moving parts (no hard drive). Other manufacturers have similar devices. These machines offer the same industrial resilience as a traditional PLC (the Microbox even mounts on a PLC rail), but provide much faster CPU performance, with the added ability to link in DLLs and applications written in high-level languages like C, and to make use of advanced functions available through the operating system (modems, remote monitoring, web servers, etc.); a small illustration of the DLL point follows below.
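
As a sketch of how a supervisory script on such a box might call into a C DLL, here is the mechanism via Python's ctypes. The DLL name and the exported function are hypothetical, purely to show the idea:

    import ctypes

    # Hypothetical vendor DLL exporting:  double read_filtered_input(int channel);
    lib = ctypes.CDLL("axis_io.dll")    # assumed DLL name, for illustration only
    lib.read_filtered_input.argtypes = [ctypes.c_int]
    lib.read_filtered_input.restype = ctypes.c_double

    value = lib.read_filtered_input(3)  # read analog channel 3 through the C code
    print(f"channel 3 = {value:.2f}")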

In my opinion "PC-based control" will eventually replace traditional PLCs - the capabilities will be the same as a desktop PC but the hardware will be industrial-strength.

Reuben Allott

I was an engineer in animal-feed production. Automation was based on Siemens PLCs, but the thing is that we had two servers networked with those PLCs, and the database and SCADA ran on those servers. Production was not possible if either of the servers was down; the database was essential to the process because the recipes were in it.
So the conclusion is this: nothing would work without a PC!

I was also involved in a PC automation project. A friend and I made a custom program for a small production line, also at an animal-feed company. It runs on WinXP. One PC is in charge of everything!

I'm for PC automation, even on Windows. It runs great and it is a LOT less expensive!

Dave, I would suggest reading a very interesting paper, "Loss-Prevention and Risk-Mitigation in Equipment Protection Systems" by Phil Corso. You can contact Phil at cepsicon@aol.com if you can't find his paper. He is very knowledgeable and helpful.