PLC Failure rate data

B

Thread Starter

Brent Phillips

We're trying to find any public information / databases with info on PLC failure rates. We've already purchased the 'PDS Data Handbook' from SINTEF, which is fairly useful, and the 'Electronic Parts Reliability Database' from RAC, which seems to focus on much lower-level components (resistors, caps, etc.).

Any help finding more data would be much appreciated.

Thanks
Brent
 
B

Bob Peterson

I'd bet there is some information on this at just about all the manufacturers' web sites. Probably not all in one chunk where it's easy to get at, though.
 
B
Exida's book "Safety Equipment Reliability Handbook 2003" ISBN: 0-9727234-0-4 has data regarding major safety PLCs and "generic" general purpose PLCs as well as field and other devices. You can find information about this book at http://www.exida.com and at phone number: 215-453-1720.

Bill Mostia
=====================================================
William(Bill) L. Mostia, Jr. P.E.
Partner
exida.com
Worldwide Excellence in Dependable Automation
[email protected] (b) [email protected] (h)
www.exida.com 281-334-3169
These opinions are my own and are offered on the basis of Caveat Emptor.
 
C

Curt Wuollet

Hi Brent

Good luck! I can't see any reason the manufacturers would have for sharing that data, and it would be suspect if they did. Unless the government or some large public entity did some tracking, we may never know what the actual rates are. Very few entities have a large enough population to get meaningful stats. The number of DOAs I've seen contradicts what little data has been published in sales material. If you do find something, please post.

Regards

cww
 
H

Hakan Ozevin

I know that on the Siemens website you can find MTBF figures for all their PLC components. For the other manufacturers, you have to ask the company.
 
S

ScienceOfficer

Curt---

Yet again, I implore you to continue your useful posts on subjects you know about and just don't post when you haven't a clue.

The major PLC manufacturers have both predicted and actuarial MTBF data available for clients that need the information. The numbers I have seen support the reliability claims of all the majors, even taking into
account the DOAs, which are a recognized factor in electronics. Be assured that white boxes suck in comparison. If the major PLC companies are lying, present your case.

MTBF data are available to clients as needed, but not published because of the obvious potential for abuse and the dynamic nature of the information. Samples of the data can be viewed over the Internet in discussions of military contracts and TÜV certifications.

For the originator of this thread, I suggest actually contacting the manufacturers and giving them a good reason to provide the data. Since responding creates a legal obligation for them, you may understand why they want a little bit of commitment from you.

Hope this helps!

Larry Lawver
Rexel / Central Florida
 
C

Curt Wuollet

Hi Larry

FYI, I have several years of experience in component failure analysis, reliability studies, and component testing to military standards. I feel that compares quite favorably to having read the manufacturers' advertising... and believing it. I can tell you with some certainty that the spendy stuff isn't very far removed from the white box stuff. Compare a commodity motherboard to a SLC backplane sometime.

Regards

cww
 
P

Peter Whalley

Hi Curt,

A couple of differences will be noted:

1. A PLC has no fan on the motherboard, in the case, or in the power supply.
Fans have very limited lifetimes.

2. A PLC has no hard disk drive attached. Hard disk drives have very limited
lifetimes.

3. A PLC has a power supply which has generally been conservatively designed
and has known reliability. A motherboard is connected to a power supply of
who knows what quality.

4. A PLC may (depending on model) use mil-spec components with higher
temperature ratings. A motherboard uses components of unknown quality.

5. A motherboard has only been in production for 6 months at best, so there is
little or no historical information on which to base any claim regarding
MTBF. A PLC may have been in production for many years.

6. By careful selection of PLC and proper system design you could well use
it in a life-safety system. There is no data available to allow a PC
motherboard to be used in this way, and very many reasons why you would not
even think of it.

Regards

Peter Whalley
 
M
In addition to Peter Whalley's list:
PLC stuff is designed and QUALIFIED per IEC 61131-2 for, among other things:
1. Operating temperature range 0-55 deg C - motherboards?
2. Vibration, shock, etc. immunity - motherboards?
3. EMC immunity - motherboards?
4. etc.
Meir
 
S

ScienceOfficer

Curt----

Ah, a credentials war! I'll see your vague experience and anecdotal evidence and raise you my master's degree in the math (MSCICE, Michigan '81), my access to the data, and my twenty years of field experience that tells me the numbers (good and bad, mine and competitive) make sense.

In a bizarre comparison, you want to compare a commodity motherboard to a passive SLC500 backplane. That's not even a fair fight. The 1746-A13 SLC500 backplane has an actuarial (from field data) MTBF of 32,233,178 hours, or 3680 years. A perfectly good commodity motherboard has an MTBF of as little as 50,000 hours, or 6 years. If you have a better motherboard, plug in your own number. I'll confidently predict that the passive backplane wins by at least two orders of magnitude.

Curt, that's a direct answer to your challenge, "Compare a commodity motherboard to a SLC backplane sometime." You lose. No manufacturers' ads were read in deciding that, whether I somehow blindly and stupidly "believed" them or not. You lose your challenge.

Despite your strange example, a better comparison would be between whatever you think the latest commodity motherboard can do, and a SLC 5/03 processor with more than ten years of field data. The original 1747-L532 has an MTBF of 3,324,672 hours, or 380 years. The newer 1747-L531 has an MTBF of 3,785,239 hours, or 432 years. The commodity motherboard is going to lose by, being charitable, at least one order of magnitude.
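For anyone who wants to reproduce the arithmetic behind figures like these, here is a minimal sketch. It assumes the standard constant-failure-rate (exponential) model and 8760 hours per year; the MTBF hour figures are the ones quoted above.

```python
import math

HOURS_PER_YEAR = 8760  # 365 days

def mtbf_years(mtbf_hours):
    """Convert an MTBF quoted in hours to years."""
    return mtbf_hours / HOURS_PER_YEAR

def p_fail_within(years, mtbf_hours):
    """Probability of at least one failure within `years` of service,
    assuming a constant failure rate (exponential model)."""
    return 1 - math.exp(-years * HOURS_PER_YEAR / mtbf_hours)

# The figures quoted above:
print(round(mtbf_years(32_233_178)))  # 1746-A13 backplane -> 3680 years
print(round(mtbf_years(3_324_672)))   # 1747-L532 -> 380 years
# Chance a single 1747-L532 suffers a random failure in a 10-year service life:
print(round(p_fail_within(10, 3_324_672), 3))  # -> 0.026, i.e. about 2.6%
```

Note that a 380-year MTBF does not mean the unit lasts 380 years; it means roughly a 2.6% chance of a random failure per decade of service, per unit.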

Of course, I could be lying. I've stated specific numbers from my sources. You've given innuendo and insinuation. I can't accuse you of lying, because you never said anything specific, testable, or important. If I'm lying, or if you know of ads from major manufacturers that are lies, this would be a good time to present a case.

I'll go on the record. White box PCs suck by at least an order of magnitude if MTBF against major brand PLCs is the criterion.

Hope this helps!

Larry Lawver
Rexel / Central Florida
 
C
Hi Peter

Let's examine those one at a time, although you'll notice that I mentioned only motherboards, as they are an example of a product that has sufficient complexity and is marketed at such low margins that it absolutely must meet stringent quality goals or the company is out of business. And their rate of achieving those goals is a world-class example.


> Hi Curt,
>
> Couple of differences will be noted:
>
> 1. PLC has no fan on the motherboard or in the case or in the power supply.
> Fans have very limited lifetimes.


I can run what I need without any fans, it's only the bloated, grossly inefficient software prevalent in the monolithic Windows world that requires fan cooled processors or graphics chips. Since I can run only as much software as I need, I can select conduction/convection cooled components. I have such a silent system sitting on my bench.

> 2. PLC has no hard disk drive attached. Hard disk drives have very limited
> lifetimes.

When you have a Linux version that loads from two floppies, a solid-state disk is far more convenient. If your minimum system loads from two CDs, then you need a hard disk.

> 3. PLC has a power supply which has generally been conservatively designed
> and has known reliability. Motherboard is connected to a power supply of who
> knows what quality.

Only if you buy a power supply of such quality. I wouldn't stake my life on the SLC switchers either. I've replaced several in my minimal exposure. Crack one open and compare the components with computer switchers: typically the same vendors. I'll give the SLC the edge, as the power requirements are small, which should equate to better reliability. But you can buy hi-rel PC-type supplies with guaranteed MTBFs, certainly at the cost you pay for the AB.

> 4. PLC may (depending on model) use Mil spec components with higher
> temperature ratings. Motherboard uses unknown quality components.

Ridiculous. Same foundries, same packaging. The LTPDs (lot tolerance percent defective) allowed by the MB companies are brutal, and as high-volume purchasers, they have the leverage to get what they want. I'll make an allowance for the fact that the PLC has a lower component count, but the idea of attempting to run a _lower_ quality fab for parts that absolutely must be high volume, low cost is contradictory. You cannot achieve the yields needed to achieve PC prices with anything but a first-class facility. It's the way the silicon biz works. Big chips with low yields at PC pricing are simply not possible. Ask the many dead chip manufacturers. Indeed, low-volume custom silicon is much more likely to be a problem. And chips are seldom a problem anymore. The temp issue may or may not be valid. I've seen motherboards that use all 1% resistors because the cost difference is swamped by decreased production variation. We could talk about the difference between mil and commercial parts at length, but bear in mind I've worked in a mil-qualified fab. It used to be part design and part sort, but that isn't practical anymore. Now, it's mostly a guarantee for the same parts. With a few exceptions.

> 5. Motherboard has only been in production for 6 months at best so has
> little or no historical information on which to make any claim regarding
> MTBF. PLC may have been in production for many years.

Also ridiculous when many motherboard designs achieve sales thousands of times higher than any PLC. And HP, Dell and the boys frown on bad hardware. A failure rate that is forgivable with a high-cost, low-volume product would put them out of business. And I'll bet that some MB designs have more power-on hours than all PLCs in aggregate. And I'd like to see the PLC makers do a 6-week turnaround with anything like the success rate. And a claim of MTBF is pretty much exactly that. Nobody's keeping score in real time. The processes and technology change so quickly that this year's XXXX chip is probably 20% smaller than last year's XXXX chip. So what does last year's data tell you?

> 6. By careful selection of PLC and proper system design you could well use
> it in a life safety system. There is no data available to allow a PC mother
> board to be used in this way and very many reasons why you would not even
> think of it.

Yes, but I was comparing commodity technology. I wouldn't want my Iron Lung running on a SLC either.

And once again, please carefully differentiate software failures and reliability from hardware failures and reliability. The vast majority of the piles of MBs that get thrown away still run fine with proper software. Nearly all greatly outlast their practical lifetime. Very few of the ancillaries can come close to the cost/performance achievement. That's why I chose MBs for the comparison.

If you do the same things with PC electronics that you do with PLC hardware, which uses pretty much the same stuff, you will obtain much the same results. If you surround the processors with junk and run garbage on it, you will obtain much the same results. There just isn't that much difference. Take a look inside. I'll bet some recent PLCs would even run Linux :^)

Regards

cww
 
A

Anonymous, for obvious reasons

I currently do QA at a world-class manufacturer of passive components. It is very much practical to produce large volumes, then test and sort. What meets mil-spec is rated as such. That which is further out of tolerance, has temperature rolloff issues, etc., gets rated lower. All the parts are for sale.

And I can guarantee that the parts made in Korea/China/Mexico facilities aren't all mil-spec. Fortunately labor is cheap enough that we can do the testing and sorting profitably.

Cheers!

QA Guy
 
C
I humbly apologize and would like to inform the world that they need never worry about an SLC failure. You see, according to their published figures, I have experienced all the failures that will ever occur in our lifetime. And GE users are probably off the hook as well. Why would anyone seriously consider stocking spares :^) It's a belief system, and I suppose I wouldn't want to question anyone's beliefs. Perhaps if I meditate, that SLC that trashes its memory will adhere to the figures. And I'll see if it makes all this Taiwan trash around me fail more frequently. Don't get me wrong, I have some AB stuff coming even as we speak. But I'll likely order spares consistent with my experience. We have spacecraft that go queep and die, and we have electronic greeting cards that will run forever. Discount any and all background and credentials if you wish; if 10% of us have ever seen a PLC failure, those figures are bogus. If 10% have seen more than one, how can they possibly be achieving those MTBFs?

Hint: They don't include the most frequent failure modes and mechanisms.
Because they can't.

Regards

cww
 
C
Passive components are a little different, as you _can_ sell various grades. And the astounding production control and consistency achieved in the last few decades means a much smaller distribution of values. Look at the price for a 1% resistor now vs. 1970. Of course, each type of passive is a story unto itself. But, overall, component manufacturers are much better at what they do than in the past. With the margins being what they are, you simply have to be. Several things have come to pass that push active components through a narrower slot: die shrinking, SM packaging, low power requirements, etc. You can't simply up the current by 10% to meet mil temperature ranges. You won't find cheap, horrid ICs, because they cost just as much to produce as the good stuff. Or more, with old technology.

Regards

cww
 
J

Joe Jansen/TECH/HQ/KEMET/US

Larry,

I started about 13 years ago, when the 5/02 was the "cool new thing"; I remember the 5/03 coming out. Since then, I have had around 4 or 5 out-of-box failures, and another 10 or so go bad in under 5 years. How is this consistent with your numbers? How can they even claim 380+ years MTBF? I have seen some circuit boards looking pretty bad after 20 or 30 years. I can't imagine a board surviving 300+ years without cracking, joint corrosion, decay, etc. A 300-year average lifespan is just too much to ask me to swallow. Maybe Curt is right, maybe it is a belief system, but I simply cannot believe it. Common sense gets in the way.
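For a rough sense of how many failures a claimed MTBF implies, the expected count is fleet-hours divided by MTBF. A minimal sketch (constant failure rate assumed; the 100-unit fleet size is a hypothetical round number for illustration, not a figure from this post):

```python
HOURS_PER_YEAR = 8760  # 365 days

def expected_failures(units, years, mtbf_hours):
    """Expected number of failures across a fleet over an observation
    window, assuming a constant failure rate (failures ~ Poisson)."""
    return units * years * HOURS_PER_YEAR / mtbf_hours

# Hypothetical fleet of 100 processors over 13 years, against the
# 3,324,672-hour MTBF quoted earlier in the thread:
print(round(expected_failures(100, 13, 3_324_672), 1))  # -> 3.4
```

So the quoted MTBF predicts only a handful of failures over that window for a fleet that size; a substantially higher observed count is the kind of discrepancy being argued about here.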

Why do they need a procedure for handling returns/repairs, if everything is rated for 400 year lifetimes?

--Joe Jansen
 
B
I almost hate to agree with Curt on anything, but he does have a point. My own experience with failed PLC hardware of various sorts and brands makes me wonder about the published numbers that I have seen.

I wonder if there is a category in their failure rate calculations for "mysterious glitches resulting in CPU failure that a reload of the program cured"?

OTOH - we accept such behaviour routinely from any Windows platform w/o a second thought.
 
Since SLC's have been mentioned specifically in this discussion I can add some data and comments that may be useful.

I have about 160 SLC's in the field. A few are SLC5/02's. Most are 5/04's. They have been installed progressively over a period of about 9 years. They are in small stainless steel cabinets in the sun with other heat producing components including a motor that gets quite hot, no air conditioning, no air purge and no fan. The cabinets are mounted on stands above large vessels whose contents range in temperature from 65C to 107C. Many are installed in the tropics. Most of the others are in climates where the ambient temperature can reach 46C.

Each unit has a 4 slot rack, power supply, processor, digital input card, digital output card and a combination analog card.

I have had no DOA's. Some sites do not necessarily inform me if they have a failure but almost all do. As far as I know there have been 2 processor failures, 1 power supply failure, no rack failures, and no I/O card failures.
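Field data like this gives a crude point estimate of MTBF: accumulated unit-hours divided by observed failures. A minimal sketch (it assumes all 160 units ran the full 9 years, which overstates the hours somewhat since they were installed progressively, and it counts only the 3 hardware failures mentioned above):

```python
HOURS_PER_YEAR = 8760  # 365 days

def observed_mtbf_hours(units, years, failures):
    """Crude point estimate: total accumulated unit-hours per failure."""
    return units * years * HOURS_PER_YEAR / failures

# 160 SLCs, up to ~9 years in service, 3 known hardware failures
# (2 processors + 1 power supply):
est = observed_mtbf_hours(160, 9, 3)
print(round(est))                    # -> 4204800 hours
print(round(est / HOURS_PER_YEAR))   # -> 480 unit-years per failure
```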

The SLC's don't like brownouts. At one site we had to install regulators on the AC supply to overcome the memory corruption issue, but we knew the supply there was frequently out of spec. (MTBF failure data can only be based on in-spec conditions.)

I like PLC's because it is easy to achieve high reliability. I agree with Curt that that is attainable with PC technology but I don't have his skills to achieve that. I wish I did. In fact, when PLC's first appeared I suggested that they would not be around for long. I thought the I/O, operating systems and software would quickly come to be able to do the PLC job with standard computer systems. We were already implementing logic control with computers.... and doing it reliably. Well, I was very wrong. The race for high performance, the latest and greatest features, fancy smarts, Bill, the ownership seen by accounting users and accounting departments and the influence of a certain company on large corporations made sure that didn't happen.

Maybe the Curts of this world can get things back on track.

Vince

PS: We still have a 20 year old AB PLC operating 24/365.
 
B
I have done perhaps 100 SLCs of various models, with varying I/O counts and different rack sizes, and probably another 30-50 PLC5 systems.

Off the top of my head, I can recall several CPUs that were DOA, a couple of I/O cards that had one or more nonfunctioning points on them, and one SLC I/O module that the CPU could not even see in the backplane.

I've had several (maybe 4 or 5) CPUs lock up where the only solution was to remove the battery overnight (or short out the memory holdup capacitor) and reload the program.

I have also had several field failures of various cards including one CPU. The CPU was fixed by program reloading. IIRC, most of the various I/O card failures were detected during startup or soon thereafter, so they might well be considered DOAs as well. I think I had at least one AB power supply fail as well.

I have also had several EEPROM modules fail in the field. In fact, at one time I bragged that I had a 100% failure rate on SLC EEPROMs, as the first two EEPROMs I used with SLCs managed to fail in the field.

I don't really keep track of it, so I suspect I have forgotten more failures than I remember.

I just don't believe the numbers either, but the fact is that they are far more reliable than any white-box PC.
 
B

Brent Phillips

Hi Everyone
thanks for all your replies.
I've also been doing more research and may be able to shed some light on the difference between MTBF data and expected lifetime.
These two are simply not intended to be comparable. The MTBF relates to random failures during the 'normal' operating lifetime (the flat part of the bathtub curve).
At some stage in the product's life, a DIFFERENT failure mechanism becomes dominant, i.e. old age. This then causes the average failure rate to increase dramatically above the 'MTBF failure rate'.
It's much easier to apply to mechanical systems with moving parts, where an end-of-life failure is very clear due to the part simply wearing out.
Electromechanical devices (e.g. relays) would be something of an in-between for us, where there is a mechanical wear-out mechanism after several years, but the MTBF may still be high (10s of years?) due to a low level of random failures before the 'end of life'.
The issue certainly gets a bit cloudy, but have a read of this article: http://www.weibull.com/hotwire/issues22/hottopics22.htm
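To put the distinction in numbers: a long MTBF only bounds the chance of a *random* failure before wear-out begins; it says nothing about the wear-out itself. A minimal sketch (exponential model assumed for the flat part of the curve; the 20-year wear-out life is a hypothetical figure, and the MTBF is one quoted earlier in the thread):

```python
import math

HOURS_PER_YEAR = 8760  # 365 days

def p_random_failure(service_years, mtbf_hours):
    """Chance of a random (constant-rate) failure within `service_years`,
    i.e. while still on the flat part of the bathtub curve."""
    return 1 - math.exp(-service_years * HOURS_PER_YEAR / mtbf_hours)

# A 3,324,672-hour (~380-year) MTBF with a hypothetical 20-year wear-out life:
p = p_random_failure(20, 3_324_672)
print(f"{p:.1%}")  # -> 5.1% chance of a random failure before wear-out
# The unit still wears out at ~20 years regardless; MTBF doesn't cover that.
```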

Brent
 