Blind faith in PLCs? (was Software Quality)

R
This is the reason hardly anyone uses any of the diagnostics provided by Genius I/O. Since everything is hard-coded to connect to every little bit of info, the programming effort required to handle the diagnostic information is many times the effort required to implement the control strategy. The result: most people don't do it at all, or do it only a little until they run out of resources.

 
A

Anthony Kerstens

Would you blame the PLC or DCS for damage, or put the blame on poor integration? I have also been called upon to fix others' work, but I would never put the blame on the hardware.

As with anything else, the weakest link is the human.

Proper choices and regimented design processes, no matter the platform, are required.

Anthony Kerstens P.Eng.
 
Paul Yager wrote:

> We stand completely behind what we said, and won't water down opinions.

Good for you. Standing up for something you believe in is important. I never would have guessed that PLC's aren't as reliable as servers, but now I have another opinion to take into consideration.

> The observations I summarized below are to us fact. It comes from visiting
> many PLC/HMI installs coupled with senior engineering experience at large
> scale DCS/PLC projects. We don't consider this to be sales or marketing
> bullshit. It is what we believe in 100%.

You believe it 100%. Got it.

> If we were to build a factory, assembly plant, process or even create
> Jurassic park we'd do it with SERVERS and CLIENTS and be proud of it. No
> PLC's. Don't need them, don't want them. The I/O is a must of course, take
> your pick.

Yay servers! No matter what the initial cost, let's use servers. Even if the factory only has three automated machines with 36 I/O, let's reverse-engineer the manufacturer-supplied, PLC-controlled machines, rip out the PLC's, and put in a server. We'll even be very careful to research the intellectual property rights of the manufacturer to ensure that we don't violate any patents on their process. We'll come up with better ways! The total cost of ownership is bound to be lower because we now have to do the controls design that the machine manufacturers used to do, and continuously rip out PLC's as they come into our factory. In the long run, we'll have a much lower total cost of ownership because we know how to control the machines much better than the dumb manufacturers of the machines.

> However, I did not say PLC's are dead, or not required but said from a
> computing stand-point, there is a huge deficiency in the approach. I
> said in considering an install engineers should look at the business or
> organization as a whole, and the total cost of maintaining these
> effectively isolated and arcane computers. There is an opportunity cost
> to locking data in the hardware and making it largely invisible to the
> user base, the people of the company they serve. (But depending on how
> you look at things this may benefit the control tech/maint. engineers
> and serve the vendors well).

Don Walker used to be the CEO of Magna International, a large auto parts manufacturer. Before he was CEO, he was establishing auto parts
manufacturing plants in Russia in the late 1980's. He told me about what it was like to work in a Russian factory at that time. Everything went by the "five year plan". If they wanted to place a new machine in the factory, one
of the things they had to do was make the bolts. Make the bolts? Yup, they weren't in the five-year plan. There was no hardware store or industrial equivalent. So the plant had experts who could make the bolts. I guess you could say that they didn't lose the opportunity cost of locking up this valuable information in the hardware and making it largely invisible to the
user base. Now it might be argued that there was little benefit to making their own bolts since another company could do it instead, but imagine the lost opportunity cost. Strangely, the Russian factories seem to have strayed from this ideal in recent years, allowing all kinds of outsourcing and subcontracting that have eliminated a lot of the knowledge that they
used to keep internally.

On the other hand, I could have sworn that PLC data could be made available to the enterprise through a variety of networks and protocols - never mind, I must be thinking of something else.
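
A minimal sketch of the sort of thing Mark means - pulling a handful of holding registers out of a PLC over plain Modbus/TCP with nothing but the Python standard library. The address, port, unit ID and register offsets below are hypothetical placeholders, not taken from any particular installation; many PLCs expose the same data via OPC, DH+, Ethernet/IP gateways and so on.

# Minimal Modbus/TCP "read holding registers" request, standard library only.
# All addresses and register numbers are hypothetical placeholders.
import socket
import struct

PLC_IP = "192.168.1.10"   # hypothetical PLC / gateway address
PLC_PORT = 502            # conventional Modbus/TCP port
UNIT_ID = 1               # Modbus unit (slave) identifier
START_REG = 0             # first holding register to read
NUM_REGS = 4              # number of 16-bit registers to read

def read_holding_registers(ip, port, unit, start, count):
    # MBAP header: transaction id, protocol id (0), remaining length, unit id,
    # then the PDU: function 0x03 (read holding registers), start, count.
    request = struct.pack(">HHHBBHH", 1, 0, 6, unit, 3, start, count)
    with socket.create_connection((ip, port), timeout=3.0) as sock:
        sock.sendall(request)
        # Response: 7-byte MBAP + function code + byte count + register data.
        # (A robust client would loop on recv and check for exception codes.)
        header = sock.recv(9)
        data = sock.recv(header[8])
    return list(struct.unpack(">" + "H" * count, data))

if __name__ == "__main__":
    print(read_holding_registers(PLC_IP, PLC_PORT, UNIT_ID, START_REG, NUM_REGS))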

> Keeping on the Jurassic theme, imagine if it was as easy as using your
> Internet browser to view or modify all aspects of the Automation Servers
> (I mean everything) that manage the environment and containment for the
> Tyrannosaurus Rex, the Diplodocus, or the water treatment system, etc. for
> the park, all within seconds, no matter where the location of user or
> server. And that's just one feature, there are dozens more with
> absolutely enormous impact.

I could have sworn that you could access all aspects of PLC based information through a browser as well, but I must have dreamt it. Couldn't you access the PLC programming software using Windows terminal services - no, never mind.

> As one person pointed out, there certainly are cases where remote processing
> is a must, such as a well-site or if needed in really low cost conditions.
> Our preference then would be to manage this remote processing in the same
> automation environment as the server software that manages the rest of the
> facility. This approach would reduce total ownership cost (TOC).

Yes, I can understand this. It would be much cheaper to incorporate the management of a remote processing unit by adding the additional server
software configuration so that it looks like the rest of the server software managed facility, rather than just letting it run autonomously with no user intervention. I could see how this would reduce the table of contents, um, total cost of ownership.

> As far as safety goes, the process itself must be safe, and the
> appropriate controls and backups in place along with the human issues
> resolved - such as alarm management. I've seen PLC and DCS systems
> responsible for some pretty big $$ damage and hefty explosions in my
> years. If you can improve the visibility to the process, the visibility
> of the controls, the responsiveness and convenience of the system and
> the ability to manage alarms effectively, you'll end up with a safer
> system overall, with less human error.

Yes, PLC's and DCS's cause explosions. It's a wonder we're even allowed to use those time-bombs. I'm frequently amazed at how PLC's go out of their way to hide data and mismanage alarms. I've often seen them in packs of five or six, grinning as they reduce responsiveness and hide controls. I talked to Dick Morley about this and he said he did it on purpose. There should be a law.

Now that I've had my fun, let me state that AutomationX would seem to be an excellent product for large I/O count systems where complete knowledge of the process was available, and control design was handled internally for all
parts of the process (not provided by manufacturers of specialty machines). I'll even be so daring as to say it would be a much better solution than PLC's for these cases. But you can't use this on your website, Paul, and if
you make a silly statement on the Automation list, it will probably be met with an equally silly response.

Sincerely,

Mark Wells
President
Runfactory Systems Inc.
http://www.runfactory.com
1235 Bay Street, Suite 400
Toronto, Ontario, Canada M5R 3K4
Ph. 416-934-5038
Fax 416-352-5206
 
W
List,

I've noticed on this and other lists lately that while the volume of posting is going down, the instances of food fights have increased remarkably. I'm wondering if this has any further meaning or implications? What does this presage? Or is it just a symptom of the difficult and uncertain times we live in?

Regards,

Willy Smith
 
I apologize, I was not pointing towards your comments, I was agreeing with them.........

I too have PC based systems, but because they were the first ones....they have had to prove themselves (sort of R&D projects), just like my first PLC project did; comments like "but relays are so reliable, are you sure this thing will work?" and "How am I going to troubleshoot this thing?" still ring in my ears.

I also use my first HMI project, which went into an area that had panel boards, and I spent 2 weeks selling the Operators on the idea that it would be OK. That was an old AB Advisor that was totally run from a 286 and tactile keyboards. 5 years later when I replaced it with a mouse-driven HMI, I had the same 2 weeks (being generous) to convince them that it would be OK. Today, try to take it away from them......Humans are afraid of change.....period. The sooner we ingrain this into our heads and understand it, the better off we will be. Part of my job with "humans" is psychoanalyst...........Those same people never even ask about my installing another PLC or DCS in the plant anymore.....so I am well aware that things will evolve.................when they are PROVEN...........That does not mean that I am afraid of innovation........in fact I understand it and am always pushing it.....but not at the expense of my reputation to deliver PROVEN TECHNOLOGY (proven over a long period of time).

Also the sorting equipment in the Post Office is PLC based, at least at some plants (ours locally), so my point was that for certain things the government does do it with PLC's......

PS - I have not gotten laser surgery on my eyes for all of the same reasons.........long term proven results....there is a Simpsons cartoon where Flanders is blind because 30 years earlier he had that "new fangled laser surgery" (personal insecurity, I like eyesight with glasses better than none at all)......I am wide open to technology but want to see factual long term proof before my reputation goes behind it. I have said more than once on this thread, we have our own moral code, ethical code and integrity code, as well as a CRAFTSMANSHIP code, that each of us has to follow to live with ourselves. I choose to follow mine carefully and as I have said before, I have the luxury of passing up jobs that do not meet my internal requirements.....but so do all of us.....although for some it may mean unemployment, but that is a choice.........I also am very nervous about messing with what I call evolution. I have noticed over the past number of years that Operators are getting "dumber and dumber" because they count only on the box on their control room desk and don't learn the process anymore.........I have heard of whole plants being converted back to bench boards for that reason....so let's all look at the big (3-40 year) picture....Off the soap box.

Dave

DAVCO Automation
"The Developing Automation Value Company"
 
D

Donald Pittendrigh

Hi All

I have been thinking about this as well and have come to the conclusion that, as many operations are tailing off for Xmas, many of our colleagues are bored and are suddenly finding time to read their email before deleting it.

Donald P
 
I agree....or as I like to put it......What is the difference between a Painter and an Artist.......one paints houses.........Anyone can go down and buy paints (tools) but what you do with them is the magic.....

Dave

DAVCO Automation
"The Developing Automation Value Company"
 
C
It's interesting to take a step back and see the big picture. I think some of this is like the blind men describing the elephant. Some folks work in areas that the PLC is ideally suited for, and they think all this innovation talk is fixing something that ain't broke. Others' work involves a lot of integration and communication, and some of the other areas where the paradigm is weak and the programming model is inadequate to the task. They feel much less satisfaction with PLC's, and perhaps something else might answer their needs better. At the high level process control and IS integration level, PLC's are sorta outclassed, but for packaging machinery they might be the ideal solution. PC based solutions probably are best suited at the middle to high end in general. Also, for some types of problems, working in a procedural, highly descriptive language is much easier than trying to work in, say, ladder.

I have started a working document for a general class system that would address the issues and solve the problems. What should be in the next generation?

The best general class solution would be very reliable, cost competitive on all except very low end projects, powerful enough for high end projects, flexible enough to cross product line boundaries, and offer both your favorite PLC languages and powerful procedural languages for low level and compute intensive work. It should also be modular yet highly integrated, so that you can run only a little where you need only a little, yet cover the map without adding extra machines. It should directly support simple HMI or complex SCADA or no display at all. The HMI and SCADA should be customizable to fit the customer's needs exactly and should have easy scripting capability for the little tweaks and fixes. To cover recent trends it should interface with all IS platforms and be at home on a standard corporate ethernet physical plant. It should be web capable and support storage from flat files to RAID and SANs. Serial comms from ancient to bleeding edge should be supported extensively, networking should be transparent, and it should be possible to support weird and new protocols at need. It should be possible to augment and extend any feature to protect your investment and mitigate the risk of project failure due to unforeseen difficulties. Programmer productivity should be maximized by making complete information available and web accessible 24x7x365, along with a users group of peers to cover details not obvious from the documentation. It should never require a credit card or contract to support your customers, as you lose money that way. And it should never be obsoleted. Additional capacity should be simply a matter of adding user logins up to the capacity of the machine. The hardware should be off the shelf, from as many distributors as possible who compete to keep prices falling. All commercially viable processors and hardware platforms should be supported, from SIMM-sized machines to IBM mainframes and clusters. This way hardware can be chosen to meet both reliability and cost criteria, from an all solid state SBC to a failsafe mainframe. Comprehensive security and backup provisions should be included at no charge, since you need them in any case. Wherever possible, public open standards should be absolutely enforced and implemented in a portable fashion, and none of the customer management issues of vendor lock-in should be allowed. All non-native hardware should be on an equal footing. Ideally the software should include all of the above out of the box, without expensive add-ons, license hassle or support extortion. Support should be available at no charge, and guaranteed response should be available for a fee, to cover all comfort zones.

Have I missed anything?

CWW

 
D

DAVCO Automation

I agree.....reminds me of the story of the home roofing company that had crews out doing homes; at the end of the year the owner wondered how to improve the output of his crews and came up with the idea of buying all the crews air nailers.

The next year he found that productivity had not gone up, only to find out the crews didn't have compressors.......old story: use the right tool properly.

Dave
 
The cases I've seen were due to the distributed (so-called reliable) nature of the system, data corruption, and operating with the corrupted data.

Case #1 - Boiler explosion. A power problem caused an I/O Processor to lose its control direction, causing the gas valve to go to 100% instead of 0%. Kaboom. $$$. Honeywell TPS and Process Managers. System software had to be changed to rectify.

Case #2 - An almost KaBoom in a furnace. Communication from controller to controller was disrupted by some bad code causing compensation variables to be frozen. Tripped on High Flue (exhaust) gas temp due to running so rich.
Honeywell MFC's.

Case #3 - What I call "Poor Visibility" on a Bailey DCS. The documentation, confusing spec codes, and a cheesy DOS program contributed to a different valve being closed than was intended. Immediate damage occurred when a Paper Machine press roll was unloaded under full operation. Total $$ bill: a few million. If the system has excellent visibility - what you see on the screen is properly labeled and is what is executing - the chances of errors such as this one are reduced dramatically.

Another strange one: During maintenance the entire DCS plant network became fragmented, resulting in an inoperable 1 Billion $ Pulp Plant. Recovery required shutting down nodes and rebooting each one to the "good" token. This is what happens when you take perfectly good computers and break them apart into hard drives (HM), processors (AM) and gateways (take your pick). It's a database and communications nightmare; kudos to those who made it actually work.

On a project we did in the mid 90's, Shell had done a study of explosions with PLC systems and was recommending hardwired systems for all combustion controls. The standard project fare, of course, was to suggest a PLC in the design phase. But those with a full view of the corporation's experience over many years, with the data in hand, were quite against "the blind faith". We were managing the automation portion of a Shell refinery re-configuration project. Despite the unruly response this head office engineer got from the electrical design groups, I could totally see his point.

Paul Jager
CEO
www.mnrcan.com
 
A

Anthony Kerstens

Curt,

Add complete control over scheduling of everything
from communications to I/O updates. Even programs,
scripts, and lower level system functions.

Anthony Kerstens P.Eng.
 
Send me your software and I will apply the technology to cause these "PLC Errors" in exactly the same way. If you do not know how to use the
tools.......step back away and let someone who does do it right.

Again, it is not the tools or the technology, it is the application of that technology that was faulty in all the cases you cite. Although I understand the demanding pressures we allow ourselves to be put under......downsizing and time crunches and all. That is why I like the job I have, which I refer to as "Batting Clean-Up". It is a luxury, always easy to come in and point out others' faults (bat clean-up)..........I too could make a list of "horror stories" and all but a few are related to SHT, "Stupid Human Tricks", apologies to David Letterman........

Painters or Artists............

DAVCO Automation
"The Developing Automation Value Company"
 
B
> The cases I've seen were due to the distributed (so-called reliable) nature
> of the system, data corruption, and operating with the corrupted data.


Sounds to me like the cases you cited were not due to the use of PLC/DCS systems, but rather to poorly thought-out control strategy.

IMHO, one should not rely on ANY single programmable system for emergency shutdown, and any case where there is a potential for catastrophe should have some sort of backup shutdown, and not rely strictly on a DCS or PLC to blindly control it.

Note I said single system. I am not opposed to using redundant (not hot backup) programmable systems for shutting down things, if they are done in an appropriate way. One of the reasons I feel this way is that hardwired systems are susceptible to the alligator clip syndrome. It is not so easy to "jump" out a transmitter, and my experience has been that transmitters are typically more reliable than switches are.

Having said this, I caution that I would want total redundancy (including redundant instrumentation and sampling lines), and with different systems (such as a PLC and a DCS). I would want both to be capable of shutting down the thing independently, and would want serious protection of the code so it could not be changed by just anyone that happened along.

I would also want a LOT of checks along the way. For instance, I would want to check that switches made a transition from off to on, rather than just seeing that it's on, to reduce the chance of alligator clips being used to thwart the safety systems, or to see that a low pressure to high pressure transition was made if using transmitters.
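
A minimal sketch of the kind of transition check Bob describes, written in Python rather than ladder (the class name and calling convention are hypothetical): the permissive is only granted when a genuine off-to-on edge is observed, so an input that is simply found on - say, because someone jumpered it with an alligator clip - never satisfies it.

# Sketch of an off-to-on transition check: the permissive must be "earned"
# by an observed edge, not merely by the input sitting in the ON state.
class TransitionPermissive:
    def __init__(self):
        self.last_state = None     # input state on the previous scan (unknown at startup)
        self.permissive = False    # output: OK to proceed

    def scan(self, switch_on: bool) -> bool:
        if not switch_on:
            # Input dropped out: the permissive must be re-earned.
            self.permissive = False
        elif self.last_state is False:
            # Genuine OFF -> ON edge observed on this scan.
            self.permissive = True
        # If the input was already ON at startup (possibly jumpered),
        # neither branch fires and the permissive stays False.
        self.last_state = switch_on
        return self.permissive


p = TransitionPermissive()
print(p.scan(True))    # False - found ON, but no transition was ever seen
print(p.scan(False))   # False
print(p.scan(True))    # True  - off-to-on edge actually observed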

Bob Peterson
 
A

Anthony Kerstens

....
> Case#1 - Boiler explosion. A power problem caused an I/O
> Processor to lose
> its control direction, causing the gas valve to be 100% instead of 0%.
> Kaboom. $$$. Honeywell TPS and Process Managers. System software
> had to be
> changed to rectify.


There wasn't a hard-wired thermal switch to cut off the gas???? Gas code would require it, and would have saved the situation.

>
> Case #2 - An almost KaBoom in a furnace. Communication from controller to
> controller was disrupted by some bad code causing compensation
> variables to
> be frozen. Tripped on High Flue (exhaust) gas temp due to
> running so rich.
> Honeywell MFC's.

So the hard-wired thermal switch did its job. And a human screwed up the configuration of the controller. If communications were lost, the controller should have shut down, NOT kept on going.


>
> Case #3 - What I call "Poor Visibility" on a Bailey DCS. The
> documentation,
> confusing spec codes, and a cheesy DOS program contributed to the wrong
> valve being closed as to what was intended. Immediate damage
> occurred when a
> Paper Machine press roll was unloaded under full operation. Total
> $$ bill a
> few million. If the system has excellent visibility - what you see on the
> screen is properly labeled and is what is executing, chances of
> errors such
> as this one are reduced dramatically.


A human being designed it without considering the operator? Or flogging old technology (DOS) in light of the new (Windows)????

>
> Another strange one: During maintenance the entire DCS plant
> network became
> fragmented, resulting in an inoperable 1 Billion $ Pulp Plant. Recovery
> required shutting down nodes and rebooting each one to the "good" token.
> This is what happens when you take perfectly good computers and break them
> apart into Hard drives (HM), processors (AM) and gateways (take
> your pick).
> Its a database and communications nightmare, kudos to those who made it
> actually work.

Human being improperly applying technology?
Or flogging old technology yet again???

>
> On a project we did mid 90's. Shell did a study of explosions with PLC
> systems and was recommending hardwired systems for all combustion
> controls.

Again, go look at Canadian gas code.

> The standard project fare was to suggest a PLC of course in the design
> phase. But those with a full view of the corporation's experience
> over many
> years, with the data in hand were quite against "the blind faith". We were
> managing the automation portion of a Shell refinery re-configuration
> project. Despite the unruly response this head office engineer
> got from the
> electrical design groups, I could totally see his point.

The only thing that ought to be controlled by a PLC is the proportional valve that regulates the flame. All other items MUST MUST MUST be
hardwired. If there was any "blind faith" going on, it was that PLC's are an acceptable substitute for hardwired safety systems.

>
> Paul Jager
> CEO
> www.mnrcan.com
>


Paul, no matter your examples, you just cannot blame the hardware for the faults of humans. If a piece of technology has a limitation or
deficiency, then it's up to the human to recognize that and deal with it.

As for the other stuff with the explosions, no matter the choice of PLC or _your_ systems, certain things must be hardwired.

Don't flog a piece of technology just because some human screwed up.


Anthony Kerstens P.Eng.
 
Of course this could never happen with AutomationX - it is perfect with no bad code anywhere.

The patches on the web site are purely cosmetic.

And the PC cards that drive the I/O are also perfect.

Gary Law
 
RE: The Xtreme Q&A section of our website, re-broadcasting the A-list posts, and providing candid commentary. We apologize for that and have removed this section from our website. It was inadvertently placed there without sufficient checks and balances, and again we apologize for the oversight.

Regards,

Paul Jager
CEO
www.mnrcan.com
 
Xtreme Q&A with Mr. Jansen (this will not be on the web site)
>Have you been around longer than that?

We've been doing server systems since the early 90's. Our firm was a DCS/PLC integrator before teaming up with automationX in '95. We were both working on a major project - a $200 million Coated Paper Machine - and we "lost" a sizable portion of the plant to automationX. Then the challenge was to start up before them. They were drinking coffee and reading magazines, and we were sweating it out with no sleep right up until D-day. No joke. Plus the system performance was extreme. The project had PLC, HMI, DCS, of all types. Great place to be to test-drive them all. Took the guys from automationX water-skiing, told them we were blown away. That's how we started here.

>Gee, you mean I can't load Oracle onto my Micrologix? Boy, this thing
>*must* be a piece of garbage! This is simply a case of right tool /
>right job. What about a 'mission critical' medical process? Or a food
>processor? I can guarantee that anyone who needs absolute uptime
>isn't running a PC control.

Oracle is big software. I wouldn't load it on an aX server either. But if the plant is big enough you'll soon have a bunch of MicroLogix controllers or small computers that you have to power, wire and maintain. PC control has a bad rap for now, but obviously as a small company we have to have near perfect uptime to survive.

>A properly deployed system does not suffer from any of these problems.
>You cannot take a 10 year old installation, and compare it to a PC
>based installation today, and then point out the differences. Where
>was -your-
>system 10 yrs ago? ........Just as I would not try to
>use lights and buttons as the sole UI on a system of any complexity, you cannot honestly tell me that it is cost effective and justifiable to use a PC on
>*every* control system.

I agree Joe. 10 years ago computing was simply unable to deliver the horsepower required to do plant control in a server. It is the most demanding task for a computer. The automationX servers, written very efficiently, have upper limits in terms of users and I/O for the most practical server one would consider for an area. Still, a server will simply smoke a process area typically handled by 6-12 DCS controllers depending on the brand - several thousand real I/O etc. Note that ridiculous but interesting IBM server that Mark suggested will probably be able to crunch 20,000 I/O and 30 operators.

>You were the one that made the reliability statement originally, I
>believe. What exactly do you now mean by "server pair"? Are you
>suggesting that the only way to make your system more reliable is by
>having two of them? Is that cost effective against a Micrologix 1000
>with 16 I/O points? Remember, you are the one saying that PLC's are
>useless. I am not saying PC's are useless, just not the best solution
>for every problem.

There is an advantage from an administration point of view to have two servers. They are like a huge redundant process controller. They back each other up - including updating software automatically. There is a hot standby ready to go if you have to replace a disk drive or do maintenance. Your data is protected. Switchover is uninterrupted. We did, however, run a critical site on a single server for a month or so waiting on some network parts. Fail-over is rare. Having a backup is really convenient, and it works great. It is a very natural thing to do. Cost has been cited in this thread a few times. Large industry pays millions for their PLC's, HMI, and DCS in aggregate. Maintenance costs are much higher than for IT systems of equivalent scope. A server-based system reduces installed cost and maintenance, while delivering incredible conveniences for users of all levels. It's like the internet for your plant. Servers and clients. Taken on a singular, small application with blinkers on (as in a horse), with 16 I/O - no way, not feasible. But take a step back, look at what might happen in one or two years. The scope of some of our servers has doubled since initial install. Same two computers - just add software, interfaces or some more I/O blocks. Piece of cake!
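
For readers who haven't worked with hot standby, the sketch below shows the general idea in Python: a standby node promoting itself when the primary's heartbeat goes quiet. This is a generic illustration only, not automationX's actual fail-over mechanism; the port, timeout and message format are all made up.

# Generic heartbeat-watching standby (illustration only; not any vendor's
# real mechanism). The primary is assumed to send a UDP "alive" datagram
# to port 9000 every half second or so.
import socket
import time

HEARTBEAT_PORT = 9000      # hypothetical heartbeat port
HEARTBEAT_TIMEOUT = 2.0    # seconds of silence before the standby takes over
POLL_PERIOD = 0.5

def run_standby():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", HEARTBEAT_PORT))
    sock.settimeout(POLL_PERIOD)
    last_beat = time.monotonic()
    while True:
        try:
            data, _addr = sock.recvfrom(64)
            if data == b"alive":
                last_beat = time.monotonic()   # primary is still healthy
        except socket.timeout:
            pass
        if time.monotonic() - last_beat > HEARTBEAT_TIMEOUT:
            print("heartbeat lost - standby promoting itself to active")
            break   # here the standby would take over I/O and client sessions

if __name__ == "__main__":
    run_standby()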

>I think you are mistaking 'fear of innovation' for fear of having to
>support some PC based nightmare whose rapid pace of development means
>patches, upgrades, and bugs. The reason that industrial controls are
>not on the bleeding edge is because we have no time for firmware and
>control software that needs constant patching. I have never
>experienced a PLC processor going into the equivalent of a 'blue
>screen'. (yes, I realize that PLC's have no screen. DUH! What I mean
>is that the processor doesn't just go out to lunch because an I/O
>driver was written wrong and created a memory leak, or whatever....)

I agree that most Microsoft based industrial software products suffer from these problems. I don't mean that in a MSFT bashing way either. Using servers for plant control, there is no room for any kind of weak link in the software. It's just like a mission critical business app or website. That is why Linux is so popular for these types of applications. Using exclusively MSFT-based products there are weak links and the products won't be as successful. We are X based whether running in NT or Linux - it's the same source. We have not seen a BSOD on our systems in the 6 years of running, excepting a hardware memory failure on a server at one site, in which case the backup took over. As far as patches go - we have update patches a couple of times a year. For a support fee per server these are included and can be installed in a short time, by the end user or by us, same as any other IT company. It is true that PLC's are a closed environment. But in my opinion, the drawbacks of inflexibility, loss in process visibility, loss of ease of sharing data, maintenance of many stand-alone devices, and the communication overhead to each are not worth it.

>"Might eek out"? ROFL! Let's see. at the last plant, we had to
>redesign the RSView apps and PLC programs so that the process could
>continue while the PC rebooted, since it went down about every 4 to 6
>months. RSView on
>NT. No extra software, no games, all service packs applied, blah blah blah.
>It was a noisy environment that the PC was in. But guess what? Sometimes that
>is the environment that you get. Also, I am not costing my company anything by
>using the proper tool for the job. I guarantee that what I am doing with a
>PLC, you cannot do with a PC for the same price and same capabilities. And
>what I *do* use a PC for is the best use of a PC in
>*our* production environment

Going down every 4-6 months is not acceptable for our applications, and the applications you are referring to are what I call NT Based. As you can see, with performance like this they give other products in that loose category a "bad" reputation as well. I've seen engineers lump software based systems together as the "same thing". They are, however, vastly different depending on the design approach of the software system. If you are trained in IT, you can easily see this. Sure the tool fits for the task at hand, and taken with the standards of the day, would on the surface look like the company doesn't suffer much. The essence of innovation is to go beyond the obvious, to be the first to adopt a future trend, or an enormously brilliant idea. To be ahead of the pack. To make sense of the differences in available technology and to apply them with tangible benefit, is innovation. If you were to set up your facility manager with an easy to understand window into every part of the plant, in real time, or to call this up in a meeting to explain how an area is operating, they'd be impressed. A server can consolidate many individual devices or systems into one more manageable resource. These are just part of the amazing benefits.

>Looking at the website, specifically products.phtml, that looks like a
>lot of computing power. I notice that you have a hot standby machine in the loop.
>Is that to indicate that reliability is defined as redundancy?

Not sure I understand, Joe. As said earlier, fail-over is rare. The systems we have going here fail over once per two years or so.

>"Field components are typically accessed via (E)ISA or PCI boards
>inside
>the control servers, an integrated Soft PLC enabling many different
>combinations. Typical cycle times (to perform all the control tasks and send
>data to and from the field devices) are from 20 to 100 milliseconds"

>A couple things on that. One, I have noticed that (E)ISA bus is
>disappearing from new PC's. How do you intend to support systems that still
>rely on (E)ISA cards to communicate? Or do those get dropped? What if my
>process needs a faster scan time? Most of my stuff runs in the 5 to 10 msec time
>frame.

Replace the ISA with PCI, software stays the same. If you need to run at 5-10 msec, this would be accomplished at the device level, not the server level, although Linux systems can get down to that. At the device level you have a choice of embedded programming in a bus coupler module or using a PLC!
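
For what it's worth, the "cycle time" being discussed is just a fixed-period scan: read inputs, solve logic, write outputs, wait for the next deadline. The toy Python loop below shows the shape of it (the three I/O functions are hypothetical stubs); holding a real 5-10 ms period is exactly the kind of thing you leave to a bus-coupler module, a PLC, or an RTOS task rather than a desktop OS.

# Toy fixed-period scan loop: read inputs -> solve logic -> write outputs.
# The I/O functions are hypothetical stubs; this only illustrates the shape
# of a scan cycle, not a way to get hard 5-10 ms timing on a desktop OS.
import time

SCAN_PERIOD = 0.010   # 10 ms target cycle

def read_inputs():
    return {}          # stub: sample the field inputs here

def solve_logic(inputs):
    return {}          # stub: the control strategy goes here

def write_outputs(outputs):
    pass               # stub: drive the field outputs here

def scan_loop(cycles=1000):
    next_deadline = time.monotonic()
    for _ in range(cycles):
        write_outputs(solve_logic(read_inputs()))
        next_deadline += SCAN_PERIOD
        remaining = next_deadline - time.monotonic()
        if remaining > 0:
            time.sleep(remaining)   # sleep out the rest of the cycle
        # if remaining <= 0 the scan overran; a real system would flag this

if __name__ == "__main__":
    scan_loop()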

>Lastly, on the website, I read your tirade against PLC's that is
>disguised
>as a FAQ.

It's not a FAQ but "Xtreme Q&A". We are re-writing this section. It's not a tirade against PLC's, but was written to demonstrate that PLC's are computers, and fail like computers, suffer from power problems, and are complex to maintain above the scale of one or two. Of course sophisticated technology like automationX servers have problems associated with them that have to be managed too, for proper operation. You are taking care of one server, and you can do it from anywhere in the facility or the world. You can get a lot done in a short period of time, with standard components.

>My (P)athetic (L)ittle (C)omputer has never gotten a virus.
Not relevant

>My (P)athetic (L)ittle (C)omputer never has operators loading games on
>it.

Not relevant
>My (P)athetic (L)ittle (C)omputer never stops running because of an I/O
>driver getting corrupt.

Not relevant
>My (P)athetic (L)ittle (C)omputer never has a hard drive crash.

We use dual SCSI drives for large servers. It is possible, but you can get some pretty high power drive systems to ensure 100% availability.

>My (P)athetic (L)ittle (C)omputer has near-zero boot time requirements.

System Feature.
>My (P)athetic (L)ittle (C)omputer can keep a complete application
>backup in
>an EEprom for when the program does get dumped. Of course, My (P)athetic
>(L)ittle (C)omputer never dumps the program when handled properly.

Hmm...You are saying it needs a backup.

>My (P)athetic (L)ittle (C)omputer can continue to run without a screen,
>keyboard, mouse?

I'd trade 20 PLC's for one server with those.

>My (P)athetic (L)ittle (C)omputer doesn't rely on hardware that is
>revving
>every 6 months. I can find A (P)athetic (L)ittle (C)omputer **exactly** like
>the 5 year old one that my forklift driver just speared on his fork.

So far our servers have typically run in place, often as-is, for many years (5+).

>The opinions expressed here are mine, not my companies, blah blah blah.
Ditto!
Regards,
Paul Jager, P.Eng.
CEO www.mnrcan.com
 
J

Joe Jansen/ENGR/HQ/KEMET/US

Thank you! I enjoyed reading a substantive reply! Now, down to the good stuff.......

<snip uncontested stuff>

Note that ridiculous but interesting IBM server that Mark suggested will probably be able to crunch 20,000 I/O and 30 operators.

Joe Jansen:

And play a mean game of Quake at the same time! <grin>


Paul Jager:

>You were the one that made the reliability statement originally, I
>believe.
>What exactly do you now mean by "server pair"? Are you suggesting that
>the
>only way to make your system more reliable is by having two of them?
>Is that cost effective against a Micrologix 1000 with 16 I/O points?
>Remember, you are the one saying that PLC's are useless. I am not
>saying PC's are useless, just not the best solution for every problem.

There is an advantage from an administration point of view to have two servers. They are like a huge redundant process controller. They back each other up - including updating software automatically. There is a hot standby ready to go if you have to replace a disk drive or do maintenance. Your data is protected. Switchover is uninterrupted. We did, however, run a critical site on a single server for a month or so waiting on some network parts. Fail-over is rare. Having a backup is really convenient, and it works great. It is a very natural thing to do. Cost has been cited in this thread a few times. Large industry pays millions for their PLC's, HMI, and DCS in aggregate. Maintenance costs are much higher than for IT systems of equivalent scope. A server-based system reduces installed cost and maintenance, while delivering incredible conveniences for users of all levels. It's like the internet for your plant. Servers and clients. Taken on a singular, small application with blinkers on (as in a horse), with 16 I/O - no way, not feasible. But take a step back, look at what might happen in one or two years. The scope of some of our servers has doubled since initial install. Same two computers - just add software, interfaces or some more I/O blocks. Piece of cake!


Joe Jansen:

Agree, for the most part. I have had systems that were no more than a MicroLogix with a Start and Stop button, a sensor, and an output driving a cylinder. It had to be fast enough to count cans running by at 450 cans per second, and kick out one can every XX parts for testing samples. This was put onto a network with the rest of the line. I do not dispute the need to draw all the data back to a central location for collection, just the method of control.
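
Purely to illustrate how small that kind of job is, here is a sketch of the divert logic Joe describes: count cans and kick one out every N for sampling. The N stands in for the "XX" Joe left unspecified, and the sensor/solenoid hooks are hypothetical.

# Sketch of "kick out one can every N" sampling logic. N is a stand-in for
# the unspecified "XX"; calling on_can_detected() once per sensor edge and
# firing the divert solenoid on a True return is left to the surrounding I/O.
class SampleDiverter:
    def __init__(self, every_n: int):
        self.every_n = every_n
        self.count = 0

    def on_can_detected(self) -> bool:
        """Call once per detected can; returns True when this can should be diverted."""
        self.count += 1
        if self.count >= self.every_n:
            self.count = 0
            return True
        return False


diverter = SampleDiverter(every_n=100)   # hypothetical sample interval
# e.g. cans 100, 200, 300, ... would be diverted for testing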


Paul Jager:




>I think you are mistaking 'fear of innovation' for fear of having to
>support some PC based nightmare whose rapid pace of development means
>patches, upgrades, and bugs. The reason that industrial controls are
>not on the bleeding edge is because we have no time for firmware and
>control software that needs constant patching. I have never
>experienced a PLC processor going into the equivalent of a 'blue
>screen'. (yes, I realize that PLC's have no screen. DUH! What I mean
>is that the processor doesn't
>just go out to lunch because an I/O driver was written wrong and
>created a memory leak, or whatever....)

I agree that most Microsoft based industrial software products suffer from these problems. I don't mean that in a MSFT bashing way either. Using servers for plant control, there is no room for any kind of weak link in the software. It's just like a mission critical business app or website. That is why Linux is so popular for these types of applications. Using exclusively MSFT-based products there are weak links and the products won't be as successful. We are X based whether running in NT or Linux - it's the same source. We have not seen a BSOD on our systems in the 6 years of running, excepting a hardware memory failure on a server at one site, in which case the backup took over. As far as patches go - we have update patches a couple of times a year. For a support fee per server these are included and can be installed in a short time, by the end user or by us, same as any other IT company. It is true that PLC's are a closed environment. But in my opinion, the drawbacks of inflexibility, loss in process visibility, loss of ease of sharing data, maintenance of many stand-alone devices, and the communication overhead to each are not worth it.

Joe Jansen:

I don't see those losses, actually. I do admit that they are sometimes harder to get at, but my current employer has (mostly) standardized on AB processors, with WonderWare Operator interface. Some TCP touchscreens handle local interface needs. The WW interface on the machines can pull any data out of the PLC, and post it up to the Oracle database. Additionally, I have written some VB programs that allow me to remotely gather data from remote manufacturing sites back to my desk for trending, etc. All the data I need is available. Again, I agree that it is harder to get at, but I will still contend that a PLC is more stable, faster processing, and able to do what I need better than a PC for what I do. I will add that I have never worked with a DCS system, though. They always seemed a bit too bulky and proprietary for anything I would want to try to do.

Paul Jager:


>"Might eek out"? ROFL! Let's see. at the last plant, we had to
>redesign the RSView apps and PLC programs so that the process could
>continue while the PC rebooted, since it went down about every 4 to 6
>months. RSView on
>NT. No extra software, no games, all service packs applied, blah blah blah. It was a noisy environment that the PC was in. But guess what? Sometimes that is the environment that you get. Also, I am not costing my company anything by using the proper tool for the job. I guarantee that what I am doing with a PLC, you cannot do with a PC for the same price and same capabilities. And what I *do* use a PC for is the best use of a PC in
>*our* production environment

Going down every 4-6 months is not acceptable for our applications, and the applications you are referring to are what I call NT Based. As you can see, with performance like this they give other products in that loose category a "bad" reputation as well. I've seen engineers lump software based systems together as the "same thing". They are, however, vastly different depending on the design approach of the software system. If you are trained in IT, you can easily see this. Sure the tool fits for the task at hand, and taken with the standards of the day, would on the surface look like the company doesn't suffer much. The essence of innovation is to go beyond the obvious, to be the first to adopt a future trend, or an enormously brilliant idea. To be ahead of the pack. To make sense of the differences in available technology and to apply them with tangible benefit, is innovation. If you were to set up your facility manager with an easy to understand window into every part of the plant, in real time, or to call this up in a meeting to explain how an area is operating, they'd be impressed. A server can consolidate many individual devices or systems into one more manageable resource. These are just part of the amazing benefits.

Joe Jansen:

Absolutely!!!! I did this, in fact, at my last employer. Windows into processes, trends for down time reports, automatically generating pallet tag info, historical data, pareto diagrams for lost production time, and even some simple predictive maintenance for production equipment, based on historical data. All written by me in VB and SQL. (ugh).

I think that the difference in our opinions is not the need to gather the data, or the benefits of doing so. I think the difference is that you believe that the same machine that is gathering the data should also do the controlling. I do not agree with that. I am of the belief that several small, dedicated controllers are better than a single controller, even if it has redundant backup. I fear the potential of a single point of catastrophic failure: if the network cable between the computer and the backup gets tripped over and disconnected, or if the I/O controller blows out, or any other of a list of possibilities. I am sure that you have an excellent uptime record, or you wouldn't be in business having this discussion. However, when you DO have a failure, is it small and isolated, or is it facility wide?


Paul Jager:


>Looking at the website, specifically products.phtml, that looks like a
>lot
>of computing power. I notice that you have a hot standby machine in the loop. Is that to indicate that reliability is defined as redundancy?

Not sure I understand, Joe. As said earlier, fail-over is rare. The systems we have going here fail over once per two years or so.

Joe Jansen:

I was simply making reference to the large number of computers you had in the diagram. From a pricing standpoint, that processing power is more expensive. I feel somewhat ridiculous trying to state that Allen Bradley is the low cost solution for anything ;^) but if pricing does become important (like our latest prototype budget that was approved), we revert to Omron PLC's, which is the 'other corporate standard'. (ugh) Point being, there are lower cost solutions than the PC network you show on your site.



Paul Jager:

>"Field components are typically accessed via (E)ISA or PCI boards
>inside
>the control servers, an integrated Soft PLC enabling many different combinations. Typical cycle times (to perform all the control tasks and send data to and from the field devices) are from 20 to 100 milliseconds"

>A couple things on that. One, I have noticed that (E)ISA bus is
>disappearing from new PC's. How do you intend to support systems that still rely on (E)ISA cards to communicate? Or do those get dropped? What if my process needs a faster scan time? Most of my stuff runs in the 5 to 10 msec time frame.

Replace the ISA with PCI, software stays the same. If you need to run at 5-10 msec, this would be accomplished at the device level, not the server level, although Linux systems can get down to that. At the device level you have a choice of embedded programming in a bus coupler module or using a PLC!

Joe Jansen:

<GASP!!!!!!> using a PLC?!?!?!?!?!?

ha ha ha. Sorry, couldn't resist *that* one! This, I believe, is the crux of my viewpoint. Local specific control that is easy to swap out, and easy to troubleshoot because it is a small local controller. I will not argue that IT training is any easier/harder than ladder logic. Either way, you pay someone to learn it. It's just a matter of what they know. My point is that with a central controller, your program is inherently harder to troubleshoot because all the control is in one place.


Paul Jager:

<snip>

>My (P)athetic (L)ittle (C)omputer has never gotten a virus.
Not relevant

Joe Jansen:

I would argue that it is. If you have your server connected to the Internet, it is vulnerable to infection. If Nimda or Code Red (for example) were to get on it and start trying to replicate itself, it would inherently slow your process because of the clock cycles that go to the virus.


Paul Jager:

>My (P)athetic (L)ittle (C)omputer never has operators loading games on
>it.

Not relevant

Joe Jansen:

Depends on where it is located. If it is anywhere an operator can get their hands on it, it will be part of a Quake tournament by the end of the month.


Paul Jager:

>My (P)athetic (L)ittle (C)omputer never stops running because of an I/O
>driver getting corrupt.

Not relevant

Joe Jansen:

Yes it is, for the reason described above. Anything based on NT/windows is at risk of having driver problems because MS doesn't want to publish all the specs for their OS. This is offset a lot by a Linux based solution, since you can debug it better, but again, not all software is perfect the first time through...

Paul Jager:


>My (P)athetic (L)ittle (C)omputer never has a hard drive crash.

We use dual SCSI drives for large servers. It is possible, but you can get some pretty high power drive systems to ensure 100% availability.

Joe Jansen:

But at what cost?

Paul Jager:


>My (P)athetic (L)ittle (C)omputer has near-zero boot time requirements.

System Feature.

Joe Jansen:

Why yes, yes it is. And a rather nice one, I might add. Especially when a plant manager is standing behind you waiting for his line to start back up.....

Paul Jager:


>My (P)athetic (L)ittle (C)omputer can keep a complete application
>backup
>in an EEprom for when the program does get dumped. Of course, My (P)athetic
>(L)ittle (C)omputer never dumps the program when handled properly.

Hmm...You are saying it needs a backup.


Joe Jansen:

Everything needs a backup. You may be confusing backup with redundancy, tho. The EEProm backup is similar to burning your application to a CD. Again, this is just precautionary, tho, since a properly installed system doesn't lose its memory.

Paul Jager:


>My (P)athetic (L)ittle (C)omputer can continue to run without a screen,
>keyboard, mouse?

I'd trade 20 PLC's for one server with those.

Joe Jansen:

Right up until someone spills soda all over the keyboard and mouse, rendering them dead...

Paul Jager:


>My (P)athetic (L)ittle (C)omputer doesn't rely on hardware that is
>revving
>every 6 months. I can find A (P)athetic (L)ittle (C)omputer **exactly** like the 5 year old one that my forklift driver just speared on his fork.

So far our servers have typically run in place, often as-is, for many years (5+).

Joe Jansen:

This doesn't address my point tho. That system you installed 5 years ago that you mention is probably all based on ISA architecture. Can you still get those cards? Can you still get a MB with enough (E)ISA slots? Will the BIOS in that MB recognize those cards, since they are probably not Plug and Pray compliant? Can you guarantee that a processor with a higher clock speed isn't going to affect the timing of the program? If something were to happen to the computer, how hard would it be to get an exact replacement that you could *just* load the software into and run? Failures happen. Usually because someone is an idiot and breaks something. My point wasn't how long can it run, but how easy is it to replace when it fails? Or does the customer suddenly need to redesign the system to work with the new I/O cards that are the only ones you can get? How much time and money will they need to put into testing the new configuration to make sure that there are no surprises? re-commissioning is something you just don't have to do when you replace a PLC processor. Plus, I can still get PLC 5 and even PLC 2 parts, if I really need them. Can you get an *exact* duplicate of the motherboard in your 5 year old system?

Thank you again! I really enjoyed reading your comments this time, and look forward to your replies.

Sincerely,

--Joe Jansen


 
R

Ranjan Acharya

Here is a case in point for life cycle problems:

System installed in the 1990s. Centralised control on a SCADA package running on OS/2. Further distributed and localised control running on one PLC-5 with DH+ to some SLCs. All done by an automation supplier "X".

It is now 2001, I work for automation supplier "Y" -- the system is critical, there is no backup, but the end user wants to migrate to a Win32 version of the SCADA package. No one has a clue how the central system works. I certainly do not remember all the OS/2 commands and it turns out that supplier X customised the SCADA package by writing C add-ons. Of course, no one knows how they work.

What to do? Try and set up an OS/2 box? Try and write C extensions to the new version of the SCADA package? Port to another SCADA package or
custom-written software on Linux or Win32?

I think that these are the issues Mr. Jansen is trying to refer to.

You will note that nowhere are there any problems with the A-B stuff. Even without comments I can look at the ladder and decipher what is happening.
Ladder logic is not rocket science. The PLCs don't go down either. The OS/2 system has already crashed a few times for various reasons.

Perhaps Mr. Jager's company has a solution for less than CDN$5000 for my customer? I doubt it. Can he really guarantee that when supplier Z comes in ten years from now, they will not be scratching their heads over a Win32 solution running on (giggle) just a Pentium 4, or perhaps a Linux system using (giggle) only this revision of the kernel from (chuckle) that build of Linux that you cannot get anymore, with (chortle) those types of I/O and networking cards that you cannot even get on eBay?

I am having enough trouble getting certain sizes and formats of RAM without jumping through hoops. Further trouble upgrading SCSI arrays for brand-name servers because the new drive cages are a different size. And so on.

On the other hand, the PLCs are still trundling away.

One thing that scares me about PC-based solutions is the tendency of suppliers to make black-box solutions rather than open solutions. I cannot recall the last time I had a PLC without at least a paper dump of the ladder. I do not have many customers who would accept a solution based on a PLC with no access to the ladder (or IEC code or whatever). On the other hand, I meet PC-based systems all the time with "secret" add-ons that no one knows about. You go and change the box, for example, even with the same hardware, and the new box does not work. The source code is nowhere to be found. The OEM went out of business or wants an outrageous fee to let you in on the secret.

Some rambling on the subject.

R
 
I was under the impression that a PLC does controlling functions and a PC is made to 'crash'. It's possible to control a complete factory with a Pathetic Little Computer (8 MHz, 48 K). Imagine what you could control with a PLC (1.4 GHz etc etc).....Use PC or PLC for the correct stuff. Let's jump in the boat and fly away....
 