A bit surprised no one brought up the "great blackout of 2003" - how
did all of our wonderful technology fail? Specific complaints I heard
even on CNN-type news programs:
1) time syncs at different companies varied such that it was impossible
to verify sequence-of-events across company borders (gee, no one's heard
of GPS or radio sync of time?)
2) All the auto/fail-safe computer systems ... didn't prevent it. (I'd
wager most were in manual and/or overridden by supervisors whose
year-end bonus is linked to keeping the revenue-meters rolling.)
3) Utilities (& politicians) in Detroit complained that when their power
started going down nearly ONE HOUR after the start in Ohio, they had yet
to receive any notice or email or hint that there was a problem heading
their way.
Anyone else have comments? views? personal experience?
- LynnL, www.digi.com
1) >>time sync's varied<<
I discuss this at http://www.controlviews.com/blackoutinvestigation.html. It is true, according to multiple sources, that incorrect time stamps on logs and records (from computers) hampered the investigation. This is not new. I am frequently in control rooms for many types of process plants, including power plants and power company central system control rooms. It has always disturbed me that the clocks were almost always incorrect, sometimes by a couple of minutes.
While I haven't investigated anything as big as this blackout, I have had to try to find out the cause of a problem. In some cases I might observe something happen in the plant, note the time from my wristwatch, go to the control room and check the log or trend on the DCS. On other occasions I would have to compare the logs from two different devices, such as a DCS and a PC connected to PLCs. If the time was incorrect, determining cause and effect was difficult.
It does not take new technology. I worked in a plant in the early 1970s in which every clock was within two seconds of correct time (some records were kept in milliseconds). NIST has been broadcasting the correct time on WWV shortwave for over 50 years; it is easy to set a clock correctly.
In spite of this old technology, I have seen time displayed, in seconds, on TV that was incorrect by 20 seconds, and time on outside "time and temp" clocks that was 3 minutes off.
I hate to be anal about it, but if we just set our clocks to the correct time we could compare logs from two different systems. Radio Shack now sells inexpensive travel alarms with clocks set by radio to within one second of correct time.
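To make the log-comparison point concrete, here is a small sketch (the clock offsets and events below are entirely hypothetical): once each system's clock error against a reference such as WWV or GPS is known, timestamps can be corrected and the true sequence of events recovered. Without the correction, the apparent order can be flat wrong.

```python
from datetime import datetime, timedelta

# Hypothetical clock errors, in seconds fast relative to true time.
# In practice these would come from comparing each clock against a
# reference such as WWV, GPS, or a synchronized time server.
clock_offset = {"DCS": 87.0, "PLC_PC": -34.0}

# Invented events as logged by two systems with the above clock errors.
raw_events = [
    ("DCS",    datetime(2003, 8, 14, 16, 11, 32), "breaker trip"),
    ("PLC_PC", datetime(2003, 8, 14, 16, 9, 58),  "line overload alarm"),
]

def corrected(source, stamp):
    """Remove the known clock error to recover the true event time."""
    return stamp - timedelta(seconds=clock_offset[source])

# Ordering by raw timestamps vs. ordering by corrected time.
naive = sorted(raw_events, key=lambda e: e[1])
true_order = sorted(raw_events, key=lambda e: corrected(e[0], e[1]))

print([e[2] for e in naive])       # raw stamps put the alarm first
print([e[2] for e in true_order])  # corrected time reverses the sequence
```

With the assumed offsets, the two orderings disagree: the raw logs say the alarm preceded the trip, while the corrected times say the opposite, which is exactly the kind of ambiguity the blackout investigators ran into.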
2) I don't know that there were any failures of computer systems. The problem was in procedures (whether implemented in computer programs or written operator procedures) and communications. After the blackout of 1965, procedures were developed to have each region isolate itself from adjacent regions when problems occurred. However, several problems have come about. In order to isolate a region, that region must be "in balance" - that is, the generation must equal consumption. With "merchant" plants producing power in one part of the country for sale in another, keeping a regional system in balance is very difficult and will require dropping some loads if the system is a net consumer at that time.
3) Communications is a serious problem. Again, it is not a technology problem but the failure to use technology. For at least 40 years information from one location has been available in control rooms in other locations. There is no reason why system operators in one region should not have detailed information about plant and power line trips in all other regions. It should be particularly easy now (I can tell you right now that the system load in the New York State Control Area is 19477 MW and that Lake Norman in North Carolina is 2.1 feet below full pond). In 1965 the power plant control room operators in North Carolina knew about the North East problems as the blackout occurred.
There are procedures written to stop blackouts from spreading. These procedures do not have the force of law and are not always followed. The communications capability that has long been possible is not being used.
Sorry to be on a soapbox here, but for item 3): how are you meant to send a warning email saying "my power has gone off" if the power is out? That's back to the old adage of someone who phoned a tech support line to complain his monitor had failed. When the technician asked him to check the cables at the back he said it was too dark, so the tech asked him to turn on the light and he replied "I can't, the power's off".
People should think about what they are saying before they make daft statements to CNN etc.
Secondly, why has no one brought up the idea that it was some sort of sabotage (via a virus)? It could happen.
Then of course London had the same thing 1-2 weeks later.
Routine computer failures??? or Sabotage? or Virus?
It is not good to be on a soapbox while ignoring modern electronic components such as UPSes and microwave links, as it will hurt just that little bit more when getting knocked off of it.
A study in complacency, serious problems are so few and far between that they are almost a total surprise. I have fallen victim to this many times myself. The system you know the least is the important one that just runs. The ones you know best are the unreliable ones you work on all the time. You can't take down the power net and play around or train very often. As a consequence, it's very difficult to remember clearly what to do when it barfs. I've seen quite a few systems that no one knows anything about because it's always running and must stay that way. My Linux knowledge is fading somewhat because if you aren't installing a lot of new systems, you simply don't ever do many things. I never had that problem supporting Windows :^). It's the price of success. And it's very hard, if not impossible, to justify tearing things down simply for the needed experience in bringing them back up.
So, since these guys sit at a console for years without a major incident I can see where they wouldn't be making the split second decisions needed to stop the avalanche. Perhaps there should be scheduled blackouts so they could conduct exercises and drills and test equipment. That would be good for users as well, especially if they forgot it was coming :^)
On September 19, 2003, Curt Wuollet wrote:
> A study in complacency, serious problems are so few and far between that
> they are almost a total surprise. <
That's not so much complacency as a faulty risk analysis. Risk is probability times cost. If serious problems are so few and far between, then assessing true probability or true cost is difficult. When the probability of a catastrophic failure is perceived to be extremely low,
the cost to prevent it may seem significantly greater than the likely benefit.
When the cost includes:
- paying for employees trained and ready to handle an event that might occur perhaps once or twice in a lifetime
- keeping equipment and infrastructure at or near the state of the art
- providing service to a clientele that is notoriously unwilling to pay a dime more than they have to
then it's no surprise that systems are allowed to approach the point of imminent catastrophic failure before the shortcomings get addressed.
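As a back-of-envelope illustration of that trade-off (every figure below is invented), the expected-loss arithmetic looks like this:

```python
# Risk = probability * cost, compared against the price of prevention.
# All figures are hypothetical, chosen only to illustrate the argument.
p_blackout_per_year = 0.02          # perceived: "once in 50 years"
cost_of_blackout   = 6_000_000_000  # rough societal cost, dollars
annual_prevention  = 200_000_000    # staffing, training, equipment upkeep

expected_annual_loss = p_blackout_per_year * cost_of_blackout
print(expected_annual_loss)                       # 120 million/year
print(expected_annual_loss < annual_prevention)   # prevention looks "too expensive"
```

With these (invented) numbers, prevention costs more than the expected loss, so a purely economic actor declines to pay, and if the true probability was underestimated, the catastrophe eventually arrives anyway.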
People in general (as consumers and as voters) do not like to deal with problems before they manifest. It is axiomatic in business and politics - and it's important to acknowledge that power generation and distribution has feet planted solidly in both arenas - that people will
only demand a solution, or even agree to foot the bill for a solution, when the pain of the problem is already being felt.
Getting a market or an electorate to pay for a solution to a problem that hasn't already killed someone or cost a fortune is nearly impossible. No matter how well the problem is documented, no matter how widely the experts agree on the need to solve it, there will always be an opposition that trots out its own "experts" who claim that the problem is being overstated, or that the proposed solution is more expensive than necessary, or that it should be paid for by somebody else.
This is a recurrent pattern, and one that I suspect is inherent in the functioning of a "democratic" society. Without meaning to start a flame war or take the thread off-topic, I maintain that we could find any number of examples in the news today where the political and economic will is NOT being mustered to avert an imminent catastrophic failure of some system or other, whether it be a natural system, a financial or economic system, or an engineering/infrastructural system.
http://home.swbell.net/chironsw -- email@example.com -- 713.869.6876
This is an inherent problem with any political system. People who know how to use the system are able to manipulate it to their own ends, and usually do not care much about anything else, as long as their own constituency does not gripe too much about it.
But I am not sure I would consider the recent blackout a catastrophic failure, so much as a widespread nuisance failure. Power failures due to weather-related situations are very common, and we don't consider them a catastrophe. I don't have any figures to support my guess, but I would guess that the cost of weather-related power failures is probably much higher than that due to equipment (or in this case system) failure.
And in this case, we really did not have a single piece of equipment that failed; rather, whole networks of systems designed primarily to provide power, rather than protect the power network, failed because the focus was on the wrong thing. Someone needs to have the political will to say it's acceptable to shut off power to a few people to avoid shutting it off to far more, and set up the grid along those lines.
I would hazard a guess that if the focus was on protecting the grid, and isolating problems to a local area, as opposed to avoiding power shutdowns, this situation would have been localized and probably contained without incident, and with little or no news coverage, except for the few thousand people that were affected.
For a good description of why power systems can become unstable see:
A weather-related outage that affects 30 million people would also be considered catastrophic even if it only happens infrequently. I don't think the harm done by outages is proportional only to the total number of people involved. 1,000 outages each involving 100 people is a lot less disruptive than a single outage involving
100,000 people. Those 100 people can go a few blocks to get warm, cool, water, toilets, etc. Where do 100,000 people go?
If we make enough requests, sooner or later one of the vendors will see the light. And I really don't like using DOS or Windows software on Linux either. It's kind of a kludge and often no better than running it on those platforms. I want a port that runs native on Linux. And at times it's damn difficult to get the Simatic manager and Step 5 programming package to run on Windows :^)
They just laugh when I suggest Linux. A few of them, like me, piddle around with it at home - to get all the peripherals working is quite a challenge - my shiny new laptop at home is stuck on Windows XP because the Wireless
Ethernet has never heard of Linux. The general expression there is that it is still not ready for the big time. I'm sorry, but I don't think that re-compiling my kernel just because I have an off-the-beaten path sound card is acceptable (in an unscientific anecdotal survey, I have not found a hard-core Linux user yet who has not re-compiled their kernel). Also each tool looks as though it was written by a different person (it probably was,
but they should be using some sort of guideline for consistency and help files should be written in reasonably grammatically correct English for the English "distro").
I'll be there when:
- It installs consistently in graphics mode and recognises all my peripherals
- The release set is stable
- Each tool in the system has a consistent user interface
- The big OEMs start to support it
- Re-compiling the kernel is not a requirement for day-to-day stuff
- SCO are bought-out by IBM and become a footnote to history
Until then:
- It is on an old Pentium II box at home for LegOS fun stuff
- All my customers remain with Windows or HP-UX
I would like to comment on a few items in your message. Also, let's make sure that we hold Windows to the same standard that you want to hold Linux to. Only fair, right?
> is acceptable (in an unscientific anecdotal survey, I have not found a
> hard-core Linux user yet who has not re-compiled their kernel). <
This is because _they can_! Re-compiling the kernel to optimize it for the machine it is on is a HUGE performance benefit, which is why the "hard-core" Linux users do so. How do you get rid of the built-in support for the stuff you don't need in Windows?
> Also each
> tool looks as though it was written by a different person (it probably was,
> but they should be using some sort of guideline for consistency) <
Of course, windows software always uses consistent operator interfaces, right? Is that why I keep accidentally trashing stuff in Lotus notes when I hit the wrong key combo trying to send a message?
Windows has Common Controls DLL, Linux has QT or GTK. Beyond that, both have the same problem of people using their own set of design criteria.
> I'll be there when:
> - It installs consistently in graphics mode and recognises all my
> peripherals <
Agree, although for me, the only stuff it doesn't recognize is stuff that windows won't either.
> - The release set is stable <
Defined as? By stable do you mean that they will not release another version? Or do you mean that it won't need to be patched? If that is the case, windows has a long way to go.......
> - Each tool in the system has a consistent user interface <
Both have this problem.
> - The big OEMs start to support it <
Like who? IBM? Microsoft won't, of course. I think IBM counts as "big". Cisco? I would like to see your list of big OEMs, if IBM doesn't make the list.....
> - Re-compiling the kernel is not a requirement for day-to-day stuff <
Already done. Although re-compiling will give you a major performance boost, you are obviously free not to do so. I have systems that have never been recompiled, and they run just fine.
> - SCO are bought-out by IBM and become a footnote to history <
Doubtful... I think IBM will wage this fight long enough to kill the company.
> Until then:
> - It is on an old Pentium II box at home for LegOS fun stuff <
This is where the Linux doubtfuls always get me. They put Linux on a crappy old machine, and then compare it to XP or Win2K running on a 2GHz P4. Then they complain about how slow Linux is. Give me a break! Here is an experiment: I will put Win2K on my 600MHz P2, and carefully craft a RedHat installation onto the 2GHz P4. Do I get to complain about how worthless Windows is then?
> - All my customers remain with Windows or HP-UX <
Good for them. Did they evaluate the options? If so, and they made an informed decision, then they should be happy with it. If they did not bother to compare the two, though, or worse, you did not bother to give them the information and take the time to offer a *fair* comparison, then they
have been short-sighted or short-changed.
FYI, I prefer Linux over Windows, but from the point of view of Factory Automation I am stuck with Windows.
On October 6, 2003, Ranjan Acharya wrote:
R>>> is acceptable (in an unscientific anecdotal survey, I have not found a
hard-core Linux user yet who has not re-compiled their kernel). <
On October 8, 2003, Joe Jansen/TECH/HQ/KEMET/US wrote:
J>This is because _they can_! Re-compiling the kernel to optimize it to the
machine it is on is a HUGE performance benefit, which is why the "hard-core"
Linux users do so. How do you get rid of the built in support for the stuff
you don't need for windows? <
Yes, but that makes things too complicated. Out of the box installation is what users want. My customers don't want to deal with re-compiling the kernel. When you are a "power user", you forget what it is like for many people who either cannot be bothered (i.e., they could figure it out, but why should they) or just cannot (i.e., their expertise is in other areas).
R>>> Also each tool looks as though it was written by a different person (it
probably was, but they should be using some sort of guideline for
consistency) <
J>Of course, windows software always uses consistent operator interfaces,
right? Is that why I keep accidentally trashing stuff in Lotus notes when I
hit the wrong key combo trying to send a message? Windows has Common
Controls DLL, Linux has QT or GTK. Beyond that, both have the same problem
of people using their own set of design criteria. <
True. Common User Access and so on died a long time ago. But there is still a consistency I "feel" when I use Windows that I just do not have with Linux. The reason for this complaint is from when I was setting up some
networking tools with Linux. I had software from Sweden and North America - both were wildly different and it was a royal pain to get it all working.
R>>> I'll be there when:
R>>> - It installs consistently in graphics mode and recognises all my
peripherals <
J>Agree, although for me, the only stuff it doesn't recognize is stuff that
windows won't either. <
That was not my case; I had to re-compile my kernel just for sound. Then the next time I got a kernel patch ... and so on. I found it annoying and ended up throwing in the towel and buying a new sound card. Lately, I have
found Windows quite acceptable at recognising my peripherals. Over the last few years, with "normal" peripherals I have not had a case of device in-fighting (thanks to Ethernet, USB and FireWire, the old IRQ battle has finally gone away - but that would hold true for Linux too). I don't have hundreds of peripherals beyond the USB tower for Lego, a camera, keyboard and mouse. I don't even have a printer at home.
R>>> - The release set is stable <
J>Defined as? By stable do you mean that they will not release another
version? Or do you mean that it won't need to be patched? If that is the
case, windows has a long way to go....... <
True, also Linux patches are much easier to apply. But then I prefer Linux. The only down side is the perception of release du jour. Also, stability means that these Linux OEMs have to get their collective act together or go
the way of Unix (Solaris, AIX, HP-UX ....).
R>>> - Each tool in the system has a consistent user interface <
J>Both have this problem. <
R>>> - The big OEMs start to support it <
J>Like who? IBM? Microsoft won't, of course. I think IBM counts as "big".
Cisco? I would like to see your list of big OEMs, if IBM doesn't make the
list..... <
You forget, we are only talking about Industrial Automation. This is Control.com after all! IBM indeed does NOT make the list of automation OEMs.
R>>> - Re-compiling the kernel is not a requirement for day-to-day stuff <
J>Already done. Although re-compiling will give you a major performance
boost, you are obviously free to not do so. I have systems that have
never recompiled, and they run just fine. <
Not in my experience or that of all the Linux users who work for the same employer as me. I'll wait and see.
R>>> - SCO are bought-out by IBM and become a footnote to history <
J>Doubtful... I think IBM will wage this fight long enough to kill the
company. <
Good. Until then, another AMD - Intel brawl.
R>>> Until then:
R>>> - It is on an old Pentium II box at home for LegOS fun stuff <
J>This is where the Linux doubtfuls always get me. They put Linux on a
crappy old machine, and then compare it to XP or Win2K running on a 2GHz
P4. Then they complain about how slow linux is. Give me a break! here is
an experiment: I will put win2K on my 600MHz P2, and carefully craft a
RedHat installation onto the 2GHz P4. Do I get to complain about how
worthless windows is then? <
Whoa! Where did that come from? I never once complained about Linux being slow or mentioned a P4. Give ME a break! The reason it is on an old machine is that at home my wife has the shiny new P4 laptop that runs Windows XP - not out of choice, but necessity (the WAP hardware from Netgear does not have Linux drivers; she was using Linux before that). I bring my own P4 laptop home from work if I want to use a performance tool - Linux or Windows. Read what I originally wrote! The LegOS is for F-U-N on an old P2 box (I have an old P1 box that came with Windows 98 on it for my eldest daughter - she is four and only cares if the Internet is up for on-line fun - and the 486DX2 is up on a shelf and still has DOS on it). The P2 runs Linux
because the old box would never run a newer version of Windows, but it runs Linux acceptably well. Also, LegOS requires Linux.
R>>> - All my customers remain with Windows or HP-UX <
J>Good for them. Did they evaluate the options? If so, and they made an
informed decision, then they should be happy with it. If they did not
bother to compare the two tho, or worse, you did not bother to give them
the information and take the time to offer a *fair* comparison, then they
have been short-sighted or short-changed. <
Options? What options? They had no choice. None of my customers are silly; they choose to work with large automation OEMs with years of experience, engineering staff, good product selection and so on. They come to me with a standard platform - Siemens, Allen-Bradley, General Electric and so on. What is this "you did not bother to give them the information"? Pointing fingers and making accusations like that is a little bit rude and very daft - what information? You know perfectly well that the big OEMs are in bed with Microsoft. Is there a release of RSView/32 for Linux from Rockwell? Did Siemens announce an S7 programming tool for Linux? As pointed out again and again and again by many postings to those very keen
fans of Linux who think we can switch right now: we use the OS we have to use, not the OS we might want to use. Let me know when CiTect have their Linux version out, eh? Until then, let me suffer in peace.
At the end of the day, Windows gets the job done. Linux would too. More choice would be nice. I integrate solutions based on tools available in the market place and based on directives from my customers (sometimes given with advice from me, sometimes not).
Where the action is:
Sixnetio.com 40% Sales Growth Uses Linux
China. Going Linux with a bang!
See Article, below
Blackout of 2003 - a large part of the perceived problem is time-based data analysis and state communication. With recent TCP/IP security breaches, I bet a lot of the new SCADA WAN infrastructure will go Apache (Linux) whether the above Western vendors like it or not.
Article: Per Joe Feeley - Publisher, Chemical Processing magazine, September 2003
"When there's any opportunity to stop for a minute, catch our collective breaths, and take a hard look at our industry, it's clear that there are many, many subjects we should talk about inside the covers of this magazine."
"Job #1 is providing information that helps you do your job better. But, from time to time, there are important issues that involve something more than that. "
"Like what, Joe? Well, of late, I'm hearing more and more about the loss of US-based technical jobs to countries such as China and India, in a fashion similar to what's happened to the IT world."
"The design work for that new sulfone polymers process in Georgia or that more energy-efficient steam cracker operation in Louisiana is being out-sourced to countries where process engineers earn one-third of what their US counterparts do. The trend is no longer a trend, it's an
expanding fact of business."
Me: What part of the re-vamping of our transmission infrastructure is subject to outsourcing to Siemens? Note the reference to Siemens PLCs and Linux RTUs in
http://www.ats.nl/press_releases/sixnet-versatrak-11-2002-gb.pdf (Sixnet again.)
Two years ago I was stopped by a sweaty pump operator - non-control room type- in a plant in India. He asked me a technical question and I asked him where he got his engineering knowledge. He told me he was a graduate
chemical engineer, and that India graduates much more technical talent than they can absorb domestically. He said lots of his friends had to settle for laborers' jobs, and that he was lucky. Six years ago in China I was beginning to see the same trend - except China CAN absorb much of their talent.
Hey! Maybe the Indian laborer/engineer would be willing to do controls design work for $10/hour. Much more than he's making now...
The writing is ALL OVER THE WALLS. We got a hurt coming unless we do new things in new ways....
Yes again, all wonderful news: China tells Microsoft where to go and so on. Some users implement Linux.
Again, it all simply puts the ball back in the court of Allen-Bradley / Rockwell, Invensys-Wonderware-Foxboro, Siemens, General Electric-Intellution, Schneider Automation-Modicon-Telemecanique and so on. This is where I integrate.
I suppose they will use Linux when their commercial model allows it.
Until then I wait.
On October 13, 2003, Ranjan Acharya wrote:
> Yes again, all wonderful news: China tells Microsoft where to go and so on. Some users implement Linux.
> Again, it all simply puts the ball back in the court of Allen-Bradley / Rockwell, Invensys-Wonderware-Foxboro, Siemens, General Electric-Intellution, Schneider Automation-Modicon-Telemecanique and so on. This is where I integrate.
> I suppose they will use Linux when their commercial model allows it. <
Exactly. For most vendors, support for any given feature, protocol, or operating platform is a business decision driven by the market. What Curt's been championing is a deliberate effort by the market to influence those business decisions in a particular direction.
> Until then I wait. <
As you should. For you, the tools are simply a means to a very concrete end, and the tools you have adequately meet your needs. As far as I can tell, there is absolutely no incentive for you to move to Linux until the choice is not only painless, but painless with clear benefits.
For most users, the choice of what product to use is also an economic decision, in the broad sense of maximizing return within the user's own utility function. (Well, stated that broadly, every choice everybody makes is an economic one. :^) I phrased it that way because everyone choosing from among the same products isn't using the same utility function.)
For many or most automation integrators, the utility function consists largely of maximizing profit, which usually implies using readily available components with which they and their customers are already familiar and comfortable. Today, that means Microsoft-based software for a very large percentage of the market.
Some integrators - and their clients - are willing to absorb an additional cost (expertise, effort, local rather than outsourced support) in order to create more elegant, stable, scalable and maintainable solutions. Of those, some are using Linux to realize their goals. That's true for me and most of my clients.
If we are successful (and I believe that we are being successful), the market will, over time, perceive a reduction in the cost/risk of using Linux. More people with high enough risk tolerance - or high enough pain level with their current situation - will implement automation systems on Linux, the market for Linux-based automation components will grow, and eventually the big automation vendors will see a large enough potential user base to make it worth their while to provide their tools on Linux.
Even then, I don't expect to see a wholesale abandonment of Microsoft. The learning curve for using and administering Linux will still be a cost, inertia on the part of integrators and their clients will still be a barrier, and Microsoft's marketing ability and market clout will provide other real and apparent benefits to using their products.
We've seen exactly this course of development in other areas of computing, particularly the realms of network infrastructure and internet services, which (not coincidentally) more closely resemble automation integration than does anything that happens in the office on a desktop. I don't see any reason why this won't play out in industrial automation, too.
This attitude puts us squarely in the "chicken or egg" problem. Many of those things you demand simply can't be done without vendor cooperation. The drivers for the thousands of different peripherals are written for MS but seldom for Linux. This is changing as Linux becomes an important market, but making it important to have Linux drivers can only be accomplished by the community. You are opting out of that community. And, no doubt buying products that support only Microsoft. The Wireless Ethernet should know about Linux because I believe this was working on Linux before it was supported on Windows. Buying a product that supports Linux would solve your problem and move us forward at the same time. There is a great deal of synergy there.
And if your "off the beaten path" sound card vendor considers everything top secret and won't release the information to write drivers, you're asking the impossible from the community. Again, making Linux support a criterion in your selection will help much more than you might think. And I doubt that you would have to sacrifice anything to do it, as Linux support tends to be available for two classes first: the cheapest and the best. That said, I haven't had a sound card problem for quite a while except for notebooks and extremely stingy integrated MBs where they only give you half a soundcard and burden your processor with the rest. Sort of like WinModems and other Winjunk. I'm about as hardcore Linux as you get and I'm running three stock kernels across a multitude of different PCs. The interface consistency problem is being addressed with KDE and to a lesser extent GNOME. For my own part, I prefer the UI to fit the program, not some framework copied from a word processor, but that's personal preference.
Oh, and all the big OEMs do support it, including HP and especially IBM. There is only one holdout that doesn't support it and probably never will. Unfortunately, they have most of the market by the short hairs for the moment and can thus exert life or death pressure on all except the biggest companies. But their power is waning and slowly, bit by bit, the monopoly is crumbling.
So, by your actions, you can make your criteria occur, or not. I, and a lot of other folks, think it's very worthwhile to enable competition once more. You may think your little bit doesn't matter. But it does. Progress tends to be almost linear towards an inflection or tipping point, after which it becomes exponential. We have almost enough of a community to push it over. Just a few more people that care will make it happen. And even if you never move to Linux, the competition will be of great benefit to all. I wouldn't want to see a Linux "monopoly" either. Monopoly and a monoculture have been demonstrated to be bad for consumers and security and the market in general. Diversity will get things moving again and prevent the strangulation of new and better ideas. Then we can judge which are better by comparison rather than speculation. Like I say, I'd be happy with RSLogix for Linux 1.00 even if it doesn't run any better than the Windows version.
I was wondering if you were reading this thread, as I said to Joe, I'm stuck with Windows at work and a little bit at home (one machine has Linux, one machine has Windows 98 and the other has Windows XP) for various reasons BEYOND MY CONTROL (I am a fatalist - I know I can't fix everything - Linux can't find those pesky WMDs in Iraq can it?). I just don't have the time to re-write CiTect (for example - fill in any large industrial OEM here) to work with Linux (I don't have the source code either), plus I have not written in C or C++ for several years.
I don't care about those "big OEMs" like HP and IBM supporting Linux. I care about big OEMs in the industrial control field supporting Linux. Until then, I use Windows at work.
Reading you loud and clear here.
I'm not suggesting that you rewrite Citect. I'm suggesting that, if you would like to see a Linux version, it's more helpful to demand a Linux version from Citect (or name your favorite vendor here) than to simply wait for it to magically happen. If everyone just waits, it will never
happen. But, Linux users, acting together are making some rather remarkable things happen. Almost all of the recent events were held as impossible by those who simply wait. And the solution to the problems you mention is the same. If the community makes it important to vendors, it will happen. They will release Linux drivers for your hardware, they will port their products to Linux and it will no longer be impossible.
As someone who's writing Linux applications right now for a Linux HMI, I'll tell you one major reason why no one is using Linux -- there are basically _no_ drivers for any industrial automation hardware. In the Windows world, there are numerous companies that sell OPC and COM and
.NET and other acronym-compliant drivers to talk serial, Ethernet, or whatever to PLCs and the like.
In comparison, I look at http://www.linuxincontrol.org/, and I find none of those projects dealing with the constant question... "How do I talk to my PLC from my Linux box". Nothing. This isn't something done on
Windows by AB or Siemens or what have you. This stuff is all done by small companies with tens of employees, and I can't find anyone, open source or not, doing this for Linux.
You have a GPLed Modbus driver in the MAT project. I know of Ron Gage's AB Ethernet library also, and there are probably a couple more out there, but they're all wildly different projects. Omron Hostlink for Linux? GE SNPX for Linux? Nope. Even then, customers on Windows platforms can call up their vendor and go "THIS STUFF DOESN'T WORK, FIX IT", while the Linux answer for the free software is "fix it yourself".
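To be fair, the wire format for the one protocol nearly everyone does support is tiny. Here's a minimal sketch (in Python; the function name, unit ID, and register addresses are made up for illustration, and this isn't code from any of the projects mentioned) of building a Modbus/TCP "Read Holding Registers" request:

```python
import struct

def modbus_tcp_read_request(transaction_id, unit_id, start_addr, quantity):
    """Build a Modbus/TCP 'Read Holding Registers' (function 0x03) request.

    The MBAP header carries: transaction id, protocol id (always 0),
    the count of bytes that follow, and the unit id; the PDU follows.
    All fields are big-endian per the protocol.
    """
    pdu = struct.pack(">BHH", 0x03, start_addr, quantity)
    mbap = struct.pack(">HHHB", transaction_id, 0x0000, len(pdu) + 1, unit_id)
    return mbap + pdu

# Read 2 holding registers starting at address 0 from unit 1:
frame = modbus_tcp_read_request(1, 1, 0, 2)
# 7-byte MBAP header + 5-byte PDU = 12 bytes on the wire
```

Of course, framing is the easy five percent; timeouts, exception responses, and the dozens of proprietary non-Modbus protocols are where the real driver work lives.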
This is a problem. I am writing almost all of my own drivers for all the devices I have to talk to, but I'm a programmer and that's my job. Integrators won't do that; they want to get the job done, and as long as your answer is "write some code to talk to a PLC", then most people
won't be using Linux to replace Windows, because they don't have the time or skills.
Alex Pavloff - firstname.lastname@example.org
ESA Technology ---- www.esatechnology.com
------- Linux-based industrial HMI ------
-------- www.esatechnology.com/5k -------
Let's say that I had a secret stash of drivers using Open protocols to avoid being stomped into dust by the monopoly beyond the law. Everything you could imagine, with a core commonality so translation would just be a matter of loading a DeviousNet or a PreppyBus module. And I could talk to every PLC because they had support on their end for my Open protocols. That would be Nirvana. Arguably possible, and certainly well within the technical capability of the OSS community.
Now let's get back to reality. I can't even get Modicon, who obviously wants Modbus to be considered an Open protocol, to give me a clear go-ahead to popularize their protocol by making it available free of charge or commercial interest. I can only imagine they would like to retain the right to do to us what SCO is trying to do to Linux. If I won the lottery, I could perhaps license a few, but even then it's highly unlikely that I could release code under the GPL. Which is pretty much the point of our existence.
These little shops and widget makers have what they need to build and sell interfaces of various types. They have paid, and no doubt continue to pay, for those rights. And since they maintain secrecy and sell only on the terms in their licenses, their continued existence is tolerated. We cannot, within the limits of our charter and scope, operate that way. We need to find another way, one that involves cooperation from dozens of for-profit corporations. This has not been forthcoming. So, do we reverse engineer their IP? That would give us part of the puzzle and very likely dozens of lawsuits. Can we force them to provide register access and the like? No, we probably can't. Can we then simply emulate and use OPC, .NET and the other peepholes provided by agreement between monopolists? Well, yes we could, with even greater legal exposure.
It will require a change of paradigms, and a change of heart, for us to be able to provide that Nirvana. I feel the OSS community is the right vehicle if that Nirvana is ever to come to pass. We've seen that private consortia and even international standards bodies fail miserably at even the first faltering steps toward cooperation. The OSS community, more than any other entity, owns the cooperation paradigm. Worldwide. And even big corporations play by the rules, as it is in their best interest to do so.
It will come to pass, but not until the rest of the world is eating our lunch, having rejected Microsoft and moved ahead on common Open ground. The current market leaders may well be in the position of the guy who makes the best yardsticks in the world, but who can't understand why they just don't sell anywhere else but the US. They just don't get it. :^)
I'd like to point out that in your _very own MAT project_, you already _have_ Modbus TCP and RTU code available. There are a few other open source Modbus implementations on the web too. But if the argument is "I'm not going to write any code, because I'm afraid that there's some chance that possibly, sometime in the future, someone might sue me", I'm not really sure that anything in the open source world would ever have gotten done. Heck, I'm not sure that any software anywhere would ever get done.
I also think that accusing Modicon or Modbus.Org of some nefarious plan to act like SCO and sue everyone for money is rather inflammatory and a baseless claim. Do you have any proof that they have plans to do this? Have you discussed these sorts of issues offlist with representatives of Modbus.org?
Heck, if you're so worried about the Modbus click-thru license, spend a couple minutes and find someone to send you the PI-MBUS-300 paper manual, which has no click-thru or shrink-wrap license. There are numerous other manufacturers who have made their own Modbus variations, and documented them separately. I'm sure you could find a manual somewhere describing a protocol that would interoperate with Modbus devices that had absolutely no legal disclaimers anywhere.
(Lynn will then tell you that all specifications save the latest on Modbus.Org are obsolete, and it's true, but most of them still seem to do the job <g>).
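For anyone who does dig up one of those older manuals: the RTU error check they all document is the same CRC-16 routine, which gives a sense of how small the core of such protocol code really is. A sketch of the published algorithm (the example frame, reading one holding register from slave 1, is just an illustration):

```python
def modbus_crc16(data: bytes) -> int:
    """CRC-16 as specified for Modbus RTU (polynomial 0xA001, init 0xFFFF)."""
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xA001  # shift out LSB, fold in polynomial
            else:
                crc >>= 1
    return crc

# Slave 1, function 3 (read holding registers), start address 0, count 1
pdu = bytes([0x01, 0x03, 0x00, 0x00, 0x00, 0x01])
crc = modbus_crc16(pdu)
frame = pdu + bytes([crc & 0xFF, crc >> 8])  # CRC is appended low byte first
# frame hex: 01 03 00 00 00 01 84 0a
```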
Alex Pavloff - email@example.com
ESA Technology ---- www.esatechnology.com
------- Linux-based industrial HMI ------
-------- www.esatechnology.com/5k -------
On October 14, 2003, Alex Pavloff wrote:
> Hi Curt,
> I'd like to point out that in your _very own MAT project_, you already
> _have_ Modbus TCP and RTU code available. There's a few other open
> source Modbus implementations on the web too. But if the argument is
> "I'm not going to write any code, because I'm afraid that there's some
> chance that possibly sometime in the future someone might sue me", I'm
> not really sure that anything in the open source world would ever have
> gotten done. Heck, I'm not sure that any software anywhere would ever
> get done. <
Yes, we do have some code. I have written some as well for private projects. And yes, it can be done with freely available materials. What I am looking for is to set a precedent of using protocols and other necessary IP with the knowledge and blessing of the owner.
> I also think that accusing Modicon or Modbus.Org of some nefarious plan
> to act like SCO and sue everyone for money is rather inflammatory and a
> baseless claim. Do you have any proof that they have plans to do this? Have
> you discussed these sorts of issue offlist with representatives of
> Modbus.org? <
Yes, I agree that was a bit ill-considered, and I hereby apologize to Modicon et al. But the thought behind it was that this is going to be a key issue going forward, and by far the best way to resolve it is amicably, up front. What I meant to say is that it exposes us to the same type of antics, which could well be fatal to our project, as we are not blessed with the resources of IBM. It would be truly tragic if the hard work of many volunteers were negated in this manner. As founder, I feel quite a bit of responsibility to protect that body of work, freely given. I want only what is in the best interest of all. We have the opportunity to set a wholesome precedent and example.
> Heck, if you're so worried about the Modbus click-thru license, spend a
> couple minutes and find someone to send you the PI-MBUS-300 paper
> manual, which has no click-thru or shrink-wrap license. There are
> numerous other manufacturers who have made their own Modbus variations,
> and documented them separately. I'm sure you could find a manual
> somewhere describing a protocol that would interoperate with Modbus
> devices that had absolutely no legal disclaimers anywhere. <
Yes, but permission and cooperation would be very meaningful. We have no desire to adopt an adversarial position. That would serve no one.
> (Lynn will then tell you that all specifications save the latest on
> Modbus.Org are obsolete, and its true, but most of them still seems to
> do the job <g>). <
That's the wonder of Modbus standards, there are so many to choose from.
Of course you have. But what are you doing to change it for the better? It has been demonstrated lately that meek acceptance of one-sided business practices isn't the only course available. As I have said before, if everyone reading this were simply to bring the matter up with vendors, we would have products by the end of the year. And, outside of automation, it _is_ changing. People do have real choices in several important areas. This is purely the result of a community working for change rather than waiting. To the benefit of all, except the monopoly.
"All that is necessary for evil to triumph is for good men to do nothing." Edmund Burke
I don't believe that. (Even if we're talking about the end of next year.) If everyone reading this were to bring the matter up to vendors, all that the vendors would know is that there is plenty of interest.
Expressions of interest do not necessarily equate to willingness to buy - especially a willingness to buy version 1.0 on a new platform - and the appropriate vendor response would be to start doing some more market research.
If, on the other hand, everyone were to go to the vendors with the statement, "when you release a version of your product on Linux, I'll buy it," then we might see some different action. But that's not going to happen anytime soon, and you yourself are part of the reason why.
Most of the people using the vendors' wares on Windows aren't anxious to move to Linux. And some of the people most anxious for a market swing toward Linux are Open Source advocates who aren't willing to help make it worth the vendors' while. There is already capable SCADA/HMI and control system software available for Linux. But you're not buying and deploying copies of AutomationX or Scadabase or Performux, and I've not seen you encourage anyone else to do so.
I would contend that actually buying and using their products will do more to further vendors' interest in providing Linux-based tools than any amount of purist Open Source advocacy. When other vendors see market share going to their Linux-based competitors, they'll start
considering ports of their own.
Linux user, Open Source programmer/advocate, Pragmatist
You might be right, especially if that were the only force at work. But those companies that have futurists, and I believe almost all have some form or another, and those that are watching events in Europe, are already formulating a plan (or should be), because the monopoly is unraveling far faster outside the US. At some point, they will either have to accommodate several small markets of important (municipal, government) buyers who have no interest whatsoever in Microsoft hegemony, or cede those to our side of the spectrum. And comparison on a level playing field, at last, will snowball.
With all the advantages that would accrue to _them_ from using a more reliable, easier-to-integrate, far less costly platform that can be customized and tailored specifically to automation needs, it simply can't be lost on them (the major international vendors) that they could quickly become irrelevant through slavish devotion to Redmond. It will come to pass that they have to decide whether they intend to be world class or US only. A demonstrated interest here in the US in better, more Open solutions will bring that tipping point closer, even if it doesn't provide the push.
And no, I honestly can't say I would be ecstatic with the status quo if the only difference were that the closed, proprietary, non-interoperable ball of goo ran on Linux. It would be a vast improvement, but the availability of Open Standards could do so much more to sunder the Tower of Babel that OSS _must_ have a place so competition can work its magic. One monolithic model replacing another wouldn't be anywhere near as effective in solving the artificial issues that are inhibiting the growth of automation. So forgive me if I don't shill for non-Open solutions; there is no shortage of folks who do.
If this hurts those who mix proprietary apps with OSS Linux, they can take to heart that their time is coming and many folks will be more comfortable with the devil they know.
Ad Hoc futurist with a very good track record, Ask my previous employers :^)
Using a proprietary package on Linux negates many of the advantages of moving to Linux in the first place. True, you're now locked in to one less vendor than before; but you're still locked in.
I don't think Curt is speaking so much of the software vendors as of the hardware vendors who only supply configuration utilities for Windows, for instance. Those config utilities have absolutely no sale value; they are prime candidates for open-sourcing.
Have you used a program like RSLogix 500 recently? It's a little more than a "configuration utility". Move up to RSLogix 5000 and I'm sure it's acquired a few billion new features that allow it to brew coffee in the morning for the plant workers or something.
As a hardware vendor whose configuration software runs on Windows, I'm going to have to point out that not _one_ of my customers has ever said "I'll buy your stuff if the configuration software runs on Linux." Not a one. Curt said once that "he'd look at my stuff" if the software ran on Linux, but I don't think that can be transformed into lots of sales.
It'd be a huge time commitment for me to move my configuration software to Linux, and to make it worth my while, I'd probably need to have a piece of paper ordering a thousand or so units in hand before even starting. Linux has a tiny share of the desktop market, smaller even than Macs. Don't hold your breath for small hardware companies like mine to redo their configuration software for Linux. I think I can get a lot more customers by making my software and hardware better, rather than porting it to a boutique desktop OS.
Alex Pavloff - firstname.lastname@example.org
Is the configuration software a profit center for your company? If not, why not simply release the spec publicly, and let Curt, Jiri, or whomever wants to write the software for you instead? You get it done for free, so you aren't out anything, and with the spec published, the configurator software could improve on several different platforms. You could potentially even reduce overhead expenses by not having to write the entire application yourself, although depending on that may not be 100% reliable.
Obviously, if you use the software as a revenue stream, then it probably wouldn't make sense at this point.
The software is not a profit center.
Releasing "the spec" assumes that there is a spec. There is now one programmer on this project -- me. I'm rather confident of my ability to get things done, and so are the people that run the company, so there really isn't a "spec" of any sort. Design docs, feature lists, bug lists, yes.
I also wonder what the point of running on several platforms is when the only platform that any of my customers have ever cared about is Microsoft Windows. As I've said before -- not one of my customers has ever complained that my software runs only on Windows.
I also have little hope that Curt, Jiri, or any other volunteer programmers would be able to spend the time needed to actually do open source configuration software. I work on this project full time, and have been doing so for a couple of years. I've got ~110,000 lines of code here -- which is small potatoes compared to some projects, but not insubstantial. The entire MAT project currently consists of ~81,000 lines of code, and that's been done in about the same timeframe. It's also still in alpha and hasn't been deployed to the field yet (correct me if I'm wrong).
I also know that Curt, Jiri, et al, _wouldn't_ want to do my configuration software. Understandably, they're not going to participate in any project that isn't 100% open source. They're not going to do the work that I'm getting paid for free. Besides, we're not AB or a major company. We don't have thousands of units sold every month.
I went through every project listed on www.linuxincontrol.org, which is a great summation of all the various projects going on. A few of the websites don't appear to be there anymore, some of the projects haven't been updated in years, and the most successful project on that list (in terms of people actually using it to do things) is the very cool linuxcnc.org project, which appears to be used mainly by hobbyists. But that's PC-based motion control, which isn't really related to what I'm doing.
If I thought I could make all my software open source and cooperate with a few developers to get more drivers or more things done than I can on my own, I'd do it. I just haven't seen any solid output from any developers so far that makes me reconsider the choice to do almost all the automation-specific software in-house. I have used a couple of libraries that were useful and contributed all my minor changes to those pieces of software back. The software was there, I adapted it, and it works fine. However, there's currently no must-have software for Linux automation that I couldn't write myself in a couple of days and make fit my application better.
Alex Pavloff - email@example.com
ESA Technology ---- www.esatechnology.com
------- Linux-based industrial HMI ------
-------- www.esatechnology.com/5k -------
On October 23, 2003, Alex Pavloff wrote:
> Have you used a program like RSLogix 500 recently? It's a little more
> than a "configuration utility". <
RSLogix is perhaps a class higher than that; but many pieces of hardware do come with what is nothing more than a configuration utility, and often quite an uninspiring one.
> It'd be a huge time commitment for me to move my configuration
> software to Linux, <
Mind you, I did say "open-sourcing" - not porting to Linux. Putting the GPL on it and letting people download it, on the grounds that it's not much use without your hardware so they'll probably come back and buy it.
> From: Jiri Baum
> To: AUTOMATION@CONTROL.COM
> Subject: Re: BUSN: Blackout of 2003
> On October 23, 2003, Alex Pavloff wrote:
> > Have you used a program like RSLogix 500 recently? It's a
> little more
> > than a "configuration utility". <
> RSLogix is perhaps a class higher than that; but many pieces
> of hardware do come with what is nothing more than a
> configuration utility, and often quite an uninspiring one.
Oh, this I know.
> > It'd be a huge time commitment for me to move my configuration
> > software to Linux, <
> Mind you, I did say "open-sourcing" - not porting to Linux.
> Putting the GPL on it and letting people download it, on the
> grounds that it's not much use without your hardware so
> they'll probably come back and buy it.
If it's no use without my hardware, why would people download it in the first place? In the grand scheme of things, it doesn't really _do_ anything besides let a user program something else to do something. If you want to look at the software, go download the demo. How would making it open source help my users? They still couldn't do anything with it anyway, seeing as how they don't have the hardware.
Not to mention that most complicated pieces of Windows software take advantage of third party components, which can't be distributed. I use a few in mine, some of which could be stripped out, but some of which are integral to my code. They're great components, saved lots of time, look nice, and don't require any runtime royalties or anything, but I can't give them to anyone.
Waving the magic open source wand isn't going to make anything better. It takes people writing code to make things better, and there just aren't enough of them writing open source code right now for automation to provide the critical base needed to bring about the revolution.
I've said it before, and I'll say it again -- if you want to make Linux and open source feasible in automation, you can't sit around and wait for companies, large or small, to cater to a boutique desktop OS. You're going to have to show that you can get better results with Linux and open source first.
------- Linux-based industrial HMI ------
-------- www.esatechnology.com/5k -------
I totally agree with you Alex:
The open source world has a smaller market share than Apple, never mind the Windows world (and yes, market share is what makes this world work, or at least this automation world). I don't know about you, but I actually work for a wage, and the only way that wage works is if there is a demand for my services.
Why are we in automation? To make better, faster products so that we can SELL them. We are not making better, faster products so that we can give them away.....
I am sorry, but there is no demand for the services of OSS right now; there are extremely small pockets of demand. In actuality it is, as you say, "just a hobby", and if it had users pounding it and hackers attacking it in anywhere near the numbers that are using, pounding, and attacking Windows-based products, it would quickly fall under the pressure of not enough time to repair (especially when no pay was involved). There are not enough "volunteer" programmers in the world to make it work for all the issues that would come up when it was pounded by those numbers, and I do not want to write the source code, nor do ANY of my customers.
Others have tried it: Socialists, we all share the land for the common good; Communists, for the good of the state; etc. But we live in a world where you get paid for the work you do, and no one is willing to pay you unless you have a service that they want. And if they will pay you, they demand the security of knowing your competence, not the "fix of the week", here-you-test-it-for-us approach.
The theory is being tested by "off-shoring" and "outsourcing" IT-type work to low-wage countries, and you want to take it a step further and have it be "free" instead of low-wage. And then will you give away the products that you build with this "free, open project"? After all, if the software should be free and open, why not the products developed with it? Give away the cars so the many "open source car testers" can use and improve them for the rest of us, instead of having to count on the "Rich Car Makers"....etc.
In reality, the OSS movement is a noble plan, but it is doomed to failure. I wish I were wrong, but so far (the past 3 years), I am right..........
But try to tell that same thing to a Mac user, and prepare to fight.
I think this list needs to create a separate list for this endless rant on OSS vs. Microsoft and let the rest of us get on with solving "REAL WORLD" problems for each other.
Time is such a hard thing and we are all so short of it, I propose we get back to the intention of this list, helping others solve real world day to day issues and let the "Futurists" banter with each other in their own web space.
I respect their right to their opinions, but am getting tired (as I am sure many others of you are) of this endless babble, with the result of neither side giving an inch (in over 2 years).
We all are extremely aware of each others opinion ........... Lets get on with it.
You Win and you Win, now lets talk about productive issues..............
On October 25, 2003 14:20, "Dave" wrote:
> I totally agree with you Alex:
> I am sorry but there is no demand for the services of OSS right now, there
> are extremely small pockets of demand. In actuality it is as you say "just
> a hobby"
Mr. (Alex) Pavloff (whom you totally agree with) includes in his signature the phrase "Linux-based industrial HMI". No doubt his employer will be dismayed to discover that he has really been secretly building these as a hobby instead of selling them. He has, or so I hear, been using them to panel his basement.
> and if it had the users pounding it, and the hackers attacking it
> in any where near the numbers that are using and pounding and attacking
> Windows based products, it would quickly fall under the pressure of not
> enough time to repair
So the Internet (which runs on OSS) has "fallen under the pressure" from the
hackers? How sad. We'll miss it.
> Others have tried it, Socialists, we all share the land for the common
> good, Communists, for the good of the state, etc. But we live in a world
> whereby you get paid for the work you do,
Your tale of woe goes on and on with communists, low wage foreigners, and Apple Computer as targets for your spleen. Would you feel happier if I told you that yes indeed, the communist foreigners with Macintosh computers really are all out to get you?
> I respect the right to their opinions, but am getting tired (as I am sure
> many others of you are) with this endless babble
"Dave", let me tell you something that might help. If you see a message that doesn't happen to interest you, then you really don't have to read it. Just skip over it. Nobody will mind, I promise. Nobody will even know.
The moderators will ensure there is no spam or offensive language in any of the messages. They can't, however, guarantee that you'll find them all
London, Ont. Canada
On October 25, 2003, Dave wrote:
> I totally agree with you Alex: <
Well, I'm sorry Dave, but I totally disagree with you.
> I am sorry but there is no demand for the services of OSS
> right now, there are extremely small pockets of demand. In
> actuality it is as you say "just a hobby" and if it had the
> users pounding it, and the hackers attacking it in any where
> near the numbers that are using and pounding and attacking
> Windows based products, it would quickly fall under the
> pressure of not enough time to repair (especially when no pay
> was involved), there are not enough "volunteer" programmers
> in the world to make it work for all the issues that would
> come up when it was pounded by these numbers and I do not
> want to write the source code, nor do ANY of my customers. <
OSS works extremely well. My 5000 HMI uses Linux. The OSS programs that are currently used on the Model 5000 HMI are:
Common C++ libraries
(and a small raft of other libraries).
They are all of extremely high quality, and they let me get the work that helps my customers done. My bone of contention with Jiri, Curt, et al, is just that there isn't yet the necessary base of programmers writing open source code for automation to make it worth it for people
like me to start helping push the bandwagon.
ESA Technology ---- www.esatechnology.com
------- Linux-based industrial HMI ------
-------- www.esatechnology.com/5k -------
Well said. Makes me wonder how many thousands of programmers are sitting on
the same fence, singing the same song...
Petr Baum, P.O.Box 2364, Rowville 3178
And my major difference with Alex is how to cause that
to happen. Doing automation software as OSS will obviously
be more effective than simply using OSS for automation.
However, it's a friendly philosophical difference as we are
all hopefully, doing what we can. I would like to realize
the tremendous promise that opening up automation offers
before I retire. So like my other retirement funds, I invest
more as the time draws nearer. I might have to jump back to
IS to have more time to invest. Automation isn't meeting
my needs at the moment. But that's because I'm at the lowest
But, just a small investment from the folks reading this could
do more than Jiri and Alex and I and the rest of the OSS folks
working in automation can do in the near term. Simply by keeping
your options Open and consciously avoiding lock-in as much as
possible. This has no downside and is certainly in your best
interest as well. When the vendors have to put your interests
at the top of the list rather than theirs, things will naturally
gravitate in the right direction. If the really excessive examples
become unpopular, better choices, or at least _some_ choices will
become available. The outside world is changing already.
> Using a proprietary package on Linux negates many of the advantages of
> moving to Linux in the first place. True, you're now locked in to one
> less vendor than before; but you're still locked in.
No argument from me.
> I don't think Curt is speaking so much of the software vendors as of the
> hardware vendors who only supply configuration utilities for Windows,
> for instance. Those config utilities have absolutely no sale value; they
> are prime candidates for open-sourcing.
Good point, and one I can't fault. Hardware vendors' config utilities
and device drivers are good examples of what we *should* be asking for
on Linux. To the extent that Curt is talking about those items, I
agree. And there's no question he includes those items in his discussion:
> The drivers for the thousands of different peripherals are written for MS but seldom for Linux. This is changing as Linux becomes an important market, but making it important to have Linux drivers can only be accomplished by the community.
I wrote what I did about market forces because Curt responded with the
same answer to several posts in which list members said they'd consider
using Linux when the PLC programming tools and HMI software they use are
available for Linux.
[a response from Curt early in the thread, talking about programming tools]
> I want a port that runs native on Linux. And at times it's damn difficult to get the Simatic manager and Step 5 programming package to run on Windows
[Curt's response to Ranjan, on the subject of SCADA application tools]
> I'm not suggesting that you rewrite Citect. I'm suggesting that, if you would like to see a Linux version, it's more helpful to demand a Linux version from Citect (or name your favorite vendor here) than to simply wait for it to magically happen
[another response about SCADA tools]
> Like I say, I'd be happy with RSLogix for Linux 1.00 even if it doesn't run any better than the Windows version.
By the way, I do understand that this last quote does not represent
Curt's most-desired outcome. He (and I) would infinitely prefer that
RSLogix (or some functional equivalent) be released as Open Source.
Curt, however, seems to believe that Rockwell's best interests in the
long term would be served by doing so. I'm not convinced that they
would, and I'm certain that Rockwell doesn't believe that they would.
(While I firmly maintain that an Open Source business model is workable
for certain types of company, I'm not at all certain that a software
vendor is one of them. Especially a software vendor whose consulting
business competes directly with almost 100% of its software license
In any case, I'm looking forward to the availability of open source PLC
programming tools; I just don't expect them to come from the currently
entrenched players in the automation software marketplace.
To close, I'll reiterate what I consider to be the fundamental point in
this thread, and one on which Curt and I agree:
> If the community makes it important to vendors, it will happen. They will release Linux drivers for your hardware, they will port their products to Linux
I would just like to correct the statement made below: Siemens does support Linux (several flavors thereof), and also two flavors of Unix, for which the softnet driver source code can be purchased for Ethernet or Profibus.
No I don't give out part numbers, and yes I do sell the packages, or should I say I would love to, at the price.
For which they should be duly recognized, it's a start.
Unfortunately, the rest of the catalog is quite MS centric. But comms are an important area none the less.
On October 10, 2003, Alex Pavloff wrote:
> Even then, customers on Windows platforms can call up their vendor and
> go "THIS STUFF DOESN'T WORK, FIX IT", while the Linux answer for the
> free software is "fix it yourself". <
Alternately, "get someone who will fix it for you for a fee".
It's a different model, but it seems more sensible, anyway. In the Windows world, you pay an up-front fee, and then you argue with the vendor about how much support will or will not be provided as part of that fee and at what stage you need to pay another fee (i.e., upgrade). In the Linux world, you pay as you go, which means that you pay for the amount of support you actually require, and only that amount.
The Linux world also has the advantage of integrated support - any one company can fully support all the software, because it has the source to all of it. In the Windows world, problems caused by interaction between
different vendors' pieces of software are a much bigger problem as far as support is concerned.
Since support is not a profit center for the proprietary software vendor, the vendor is tempted to cut corners on it, another problem which is unlikely in the Linux world.
Hi Ranjan and Curt
Should you happen to re-write Citect, please let me know where I can download your source code............. for free.
Regards Donald Pittendrigh
If I were to rewrite it, or actually reimplement its functionality with OSS tools and help from the community, you can bet I would let you know where to find it. GUI programming not being my cup of tea, I would suggest you watch the folks who are interested in OSS similar to Citect. I have played with the idea of something similar to Panel Builder or Quick Panel for use on inexpensive Linux-based panel PCs. Of course, since you can't easily use a Panel View with GE or a Quick Panel with AB, getting them to open up to an OSS project would be problematic.
As the public becomes more educated and proprietary and closed become dirty words, this will change. It would, at present, be a worthwhile addition to the MAT project, so I think about it off and on. My passion lately has been to make an Open, fully documented, PC compatible, industrial quality PLC for OSS (and perhaps other stuff) to run on.
I've had health and financial setbacks to overcome, but I should be back on track this winter.
Technology is only as wonderful as people taking care of that technology are smart.
Jacek Dobrowolski, M.Sc. E.E., Software Engineer
One of the alleged experts who was on TV a number of times kept blaming it on "SCADA". Several times he was asked what this was, and he was never able to even get the acronym right. Once I heard him refer to it as security something.
This guy was supposedly a big wig with the federal govt at least partially responsible for infrastructure protection at one time.
My guess is that most of the stuff is left in manual to prevent nuisance trips, and the operators and supervisors on duty were too afraid to shut down a few people's power to protect the rest of us from the blackout.
You forgot one very important thing,
The auto reclose system and various other legislated protection mechanisms were not working either (sabotaged????) or selected out of auto operation by someone with high school principles of electricity and a little less savvy than is required to fly a plane????
Regards Donald Pittendrigh
Basically it boils down to humans making mistakes: manually operating systems which can operate on their own automatically, failing to update / communicate with each other, and not having auto time sync between facilities (they could all be on GMT so logs have the same time / date stamp everywhere). The auto dialers were probably disabled, and to a larger degree most of these operators probably did not know what to do in this type of situation.
Unfortunately the public gets snippets of information from non-technology persons who have been briefed by a manager or other non-technology person, and, like the blackout itself, the whole story cascades into a bunch of nonsense about who did not do what they were supposed to at a given moment in time, or what happened when and why. Now the lawsuits will fly, insurance premiums will increase, and in the end the power generation industry will not have improved enough to ensure that such a massive blackout will not occur again in the next 20 years. 90% (or greater) of all accidents (or incidents) are related to human error (even if the transmission line which went down in the first place was due to lightning, human error caused the cascade of the event beyond the area where it was located).
The utility companies, like all good large corporations, are about one thing - profit. The shareholders want a return on their money, and if the companies are installing new lines, updating equipment, training operators, and spending profits on improving the transmission system (which includes working together - keeping each other duly informed, etc.), the shareholders don't see big dividends (ROI), the CEOs and others who manage these massive corporations don't get big million-dollar bonuses, and they are all unhappy. Of course we the public are stuck with transmission systems which are older than I am and in dire need of upgrades, but the industry big wigs will tell us that the lines were designed for 50 or 60 years of operation. I guess they forgot about adding capacity to the lines (more lines - better lines) as our demand for power continues to go up every year.
The point: humans are the root cause of the cascading failure which resulted in the largest blackout in US history, not the technology.
Several people have mentioned systems being run in manual as being a possible cause. I have heard many times (including here) that power systems are often run in manual because the automatic systems don't work properly. The deficiencies of the automatic systems don't get addressed because the plant operators are there anyway, and they usually do a fairly good job.
The big question about the blackout isn't why it started. Equipment failure producing a local blackout is a "normal" event. The real question is why did it spread? An international commission was set up to investigate, but no one has come up with a satisfactory answer. I rather suspect though that the commission was given information about all the wonderful automatic features in place, but no one told them whether these systems actually ever really worked or not.
London, Ont. Canada
I think it would be wise to wait till the international commission has completed its work and submitted a report. You do not investigate this type of event in a couple weeks.
The automatic systems do work. Quebec was not affected by this problem because its systems worked. The same systems have reduced the scope of outages and improved the restart time for major outages. The most recent example was last year when smoke from forest fires in northern Quebec tripped a major line. The automatic protection shed enough load in Montréal to keep the system up and running at about two thirds capacity. Within a few hours everything was back in operation. The reason it took so long was a problem at one substation in particular that took a longer time to get back on line.
Many of the auto trip functions or auto protection features of the various power plants did in fact function as designed, to protect the plants from being damaged while attempting to provide the levels of power required within the grid network. The resulting cascade failure, which was not managed well by the operators, resulted in power flowing the opposite direction in the grid from its normal direction of flow (based upon where the power is normally supplied from vs. where it wound up being supplied from). As the demand for energy continued to increase, more generation facilities went off line to protect themselves from total failure.
I gleaned a fair amount of this through a lot of reading of various reports and articles published through the various industry trade mags, Newsweek, CNN, WSJ, the Economist, friends in the industry, and some general knowledge of how plants typically function and the protection systems they have built in. (These auto protection systems can be thought of as very large circuit breakers - they did their job or function in life - don't let the plant melt down. Unfortunately the operators did not follow good practices and manage the situation very well, because if they had, only a small portion of Ohio would have been without power.)
All of the fancy auto time sync ideas (or any others related to improved SCADA or auto control), and additional laws to regulate response actions or the power generation industry, will not cure the basic problem: humans who don't make good decisions in crisis situations, which leads to massive system-wide failures.
I read in the news this weekend (28th/29th of September) that Italy just had a blackout that affected more people than the one which started this discussion (57 million vs. 50 million). This is surely more grist for the mill, and it sounds as if this type of problem (cascading failures) cannot be dismissed as an isolated local incident.
London, Ont. Canada
> > The auto reclose system and various other legislated protection
> > mechanisms were not working either (sabotaged????) or selected out
> > of auto operation by someone
> Several people have mentioned systems being run in manual as being a
> possible cause. I have heard many times (including here) that power
> systems are often run in manual because the automatic systems don't
> work properly. The deficiencies of the automatic systems don't get
> addressed because the plant operators are there anyway, and they
> usually do a fairly good job.
I don't have specific knowledge of power plant operations but this isn't true of utility control centers that control the transmission systems.
> The big question about the blackout isn't why it started. Equipment
> failure producting a local blackout is a "normal" event. The real
> quesiton is why did it spread? An international commission was set up
> to investigate, but no one has come up with a satisfactory answer. I
> rather suspect though that the commission was given information about
> all the wonderful automatic features in place, but no one told them
> whether these systems actually ever really worked or not.
Utility control centers are highly automated and use a variety of software applications to prevent system collapse and minimize outages. These applications calculate and analyze power flows,
perform contingency analysis, and perform state estimations. Most utilities also keep detailed historical records of system data (many use the OSIsoft PI System). Even small utilities run these kinds of applications. Most of the engineers running the control centers I am familiar with (quite a few) would never consider operating the system without these applications. Failure of the Energy Management Systems (EMS) would not be tolerated. If the EMS was not operational that would be a major problem and this would have already been known if that was the case in this outage. You can't keep that secret.
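For readers unfamiliar with what contingency analysis actually does, here is a toy sketch. Everything in it is invented for illustration (a hypothetical 3-bus network with made-up susceptances and limits), and real EMS packages solve far larger and more detailed models, but the idea is the same: drop each element in turn and check whether the survivors stay within limits.

```python
# Toy N-1 contingency screen (illustrative only -- the buses,
# susceptances, and limits are made up; real contingency analysis is
# far more elaborate). Using the DC power-flow approximation, drop each
# line in turn and flag any surviving line pushed past its MW limit.

def solve2(a, b, c, d, p, q):
    """Cramer's rule for the 2x2 system [[a,b],[c,d]] x = [p,q]."""
    det = a * d - b * c
    return (p * d - b * q) / det, (a * q - p * c) / det

# Hypothetical 3-bus system; bus 0 is the slack. Injections in MW.
INJ1, INJ2 = 30.0, -80.0  # net injections at buses 1 and 2
LINES = [                 # (from_bus, to_bus, susceptance, limit_MW)
    (0, 1, 10.0, 60.0),
    (0, 2, 10.0, 60.0),
    (1, 2, 10.0, 60.0),
]

def dc_flows(active):
    """Solve the DC power flow; return the MW flow on each active line."""
    B = [[0.0] * 3 for _ in range(3)]
    for f, t, b, _ in active:
        B[f][f] += b; B[t][t] += b
        B[f][t] -= b; B[t][f] -= b
    t1, t2 = solve2(B[1][1], B[1][2], B[2][1], B[2][2], INJ1, INJ2)
    theta = [0.0, t1, t2]  # slack bus angle fixed at zero
    return [(f, t, b * (theta[f] - theta[t])) for f, t, b, _ in active]

def n_minus_1(lines):
    """Drop each line in turn; report overloads in the surviving network."""
    alerts = []
    for i in range(len(lines)):
        remaining = lines[:i] + lines[i + 1:]
        for (f, t, flow), (_, _, _, limit) in zip(dc_flows(remaining), remaining):
            if abs(flow) > limit:
                alerts.append((i, (f, t), round(flow, 1)))
    return alerts

print(n_minus_1(LINES))  # → [(1, (1, 2), 80.0), (2, (0, 2), 80.0)]
```

In this toy network the base case is fine, but losing either line feeding the load bus overloads the remaining path - exactly the kind of condition a control center wants flagged before the line is actually lost.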
According to reports I have read, there were highly anomalous power flows occurring immediately prior to the collapse of the system in Michigan. The problem was that these anomalies provided little or no warning of the collapse. You can't just start opening breakers when these things happen. Prudence demands that you have a pretty good idea what is going to happen before you open the breakers. It takes time for both humans and EMS to figure that out for systems as complex as electrical transmission systems. Furthermore, the systems in which these anomalies were occurring apparently did not have knowledge of the problems that were happening in separate, but connected, systems in other places. The EMS, or its operators, can't respond to external conditions when they don't know about the external conditions.
While there are system operators involved that can manually control the system, they also depend on the EMS applications. If the operators would have known of the problems in the other systems,
they might have been able to take steps to prevent the collapse. But this is conjecture. There may never be a way to determine what the
system operators would have done if they knew more than they did.
Obviously, something didn't work as planned. The first thing they are doing is looking at all the data in the historical archives to find out what happened. The second step is to find out why it
happened. It will take a significant amount of time to figure this out.
Another interesting fact which leaked out after the blackout was that approximately 6 months prior to this, a nuclear power plant in Ohio USA had serious problems within one of their computer systems, caused by a computer worm (I think it was SQL Slammer). It knocked out the safety monitoring system (I believe this was an MMI for the safety systems). Fortunately the reactor was already shut down for other reasons, so no serious consequences resulted.
The computer worm appears to have entered the plant via the business systems and then entered the control systems. Technical commentators on the situation said that commercial pressures to make use of operational data for cost reduction projects are causing companies to link their plant and business computer systems more and more closely together.
This sounds like a subject that needs to be addressed more seriously. The IT industry's approach of "patch daily and hope nothing happens" isn't a viable solution in any industry that requires high reliability.
P.S. I've just read in the news that a new hole has just been found in Windows similar to the one used by the recent MS Blaster worm. Computers that were patched to secure them from MS Blaster are still vulnerable to the new problem.
I was about to say that I did not think that any computer "for which safety credit is taken" or that is connected to the safety system or used for safety would be connected to the internet or to any company wide system. At one time nothing safety related could be connected to non-safety systems.
However, nothing can surprise me now.
Computers used for safety or for other critical control functions in any plant should never be connected to company networks or to the internet. If you need to extract data from a critical computer for use on a non-critical information system, there are ways to provide one-way links. But there should never be a two-way link that can make any transfer into a critical computer or control system from outside.
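The one-way link idea can be approximated in software, though serious installations use hardware data diodes that enforce it at the physical layer. Here is a sketch (the collector address and tag name are invented): the critical side only ever emits UDP datagrams and never opens a listening socket, so there is no channel for anything to come back in.

```python
# Sketch of the one-way-export idea in software (hardware data diodes
# do this properly at the physical layer): the critical side only ever
# SENDS UDP datagrams and never listens, so nothing on the business
# network can push data back over this channel.
import json
import socket

HISTORIAN = ("10.0.0.5", 9999)  # hypothetical business-side collector

def export_reading(tag, value, sock=None):
    """Fire-and-forget one datagram; no reply is read or expected."""
    sock = sock or socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    payload = json.dumps({"tag": tag, "value": value}).encode()
    sock.sendto(payload, HISTORIAN)
    return payload  # returned only so callers can inspect what was sent

# export_reading("FIC-101.PV", 42.7)
```

Note that software alone can't guarantee one-way behavior the way a diode can; this just illustrates the discipline of never accepting inbound traffic on the critical side.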
I know that many control systems in process plants are connected to company networks and to the internet. That trend is a problem.
Which is more important: producing product or producing numbers about the product?
I know in this case they say it was by direct link, but don't forget the "mobile" factor these days. Even if a "Safety Network" isn't linked to any other Ethernet, where do you think the PC+Windows for programming it has been? Very likely it is one or more notebooks that move around from network to network. So these days even NO CONNECTION isn't enough since viruses can move by the new "Sneaker-net" of mobile personal computing. ;^)
MS Windows is a source of viruses. If a SCADA system is based on MS Windows, then the culture of work must be very high, and the industrial Ethernet must be isolated from the outside world.
This is a subject that I have brought up several times in the past few years. Much of the production and test equipment I have been involved with is PC based. With the addition of searchable databases, now commonplace, computers outside the closed network are now connecting to retrieve or examine data. Program changes are being made by technicians using laptops that may have been connected to dozens of other systems, including the internet.
Virus scanners are rarely installed on production PCs, as the performance hit would seriously affect any kind of high speed data acquisition. I don't even know how software such as LabVIEW would react to having to share CPU time with a virus checker.
In one recent case, one computer on a test line was equipped with a modem so that it could "call-out" for remote sessions with a software developer. This computer was essentially connecting to the internet "naked", as no firewall and no updates had ever been performed on it. Even though the connection was dial-up and hence intermittent, a simple port scan would have revealed vulnerabilities inherent to all Windows machines.
Considering that thousands of man-hours worth of testing data are potentially at risk, I agree that some kind of security is definitely needed.
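For anyone who hasn't seen one, the "simple port scan" mentioned above really is simple. A minimal sketch follows (run it only against machines you are responsible for); a completed connect() means the port is open and a service is exposed to whoever dials in.

```python
# Minimal TCP connect scan (illustrative; scan only machines you are
# responsible for). connect_ex returns 0 when the connection succeeds,
# i.e. when something is listening on that port.
import socket

def open_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` on `host` accepting TCP connections."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:
                found.append(port)
    return found

# e.g. open_ports("192.0.2.10", [135, 139, 445]) against an unpatched
# Windows box of that era would typically come back non-empty.
```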
I would have greatly preferred not to know there were people stupid enough to control a nuclear reactor with Windows. The NRC should execute commitment papers for those decision makers. I'd sleep better with the Chernobyl system in place.
Issues relating to IT security in general are discussed at length on the SANS web site ( see http://www.sans.org/rr/ ) as an example, and daily security bulletins are issued by SANS. Go to http://portal.sans.org/ to subscribe. I've found these very useful in keeping up to date with security issues.
Today's bulletin indicates the IT community is in a mad panic over the latest Microsoft DCOM vulnerability. See http://isc.sans.org/diary.html?date=2003-09-11 for details. The SANS bulletin states "..Acting on this vulnerability immediately is absolutely critical.."
There was also an editorial comment posted in a SANS bulletin a few weeks ago which commented: "...[Editor's Note (Ranum): Repeat after me: Mission critical systems should be on isolated networks that are not connected to the Internet. There is no amount of web surfing fun that justifies the cost and labor downside of an incident such as the one above..."
Certainly the more enlightened in the IT Security community understand the issues involved in connection to the Internet but many have yet to learn the lesson.
WRT the power blackout, I suspect the lawyers have been rushing around telling the techs to keep their mouths shut until the lawsuits have been completed.
I suppose we are going a bit off topic now...
Isn't part of the problem that people on the corporate side now want the plant intranets tied in for MES and ERP and so on - so they end up being tied directly or indirectly into the Internet? All one big happy network (granted, with some routers and firewalls, but still easy pickings for the latest hacks).
I only see this getting worse.
On SANS they also mentioned about a "responsibility" of security types to see to it that their neighbour's machines are protected. The excuses for not having patches / firewall / anti-virus that I have heard include:
- Those patches are too large to download with my 56 kbps analogue modem (IE 6 SP1 anyone?)
- I don't understand what the patches mean
- My copy is pirated / modified (e.g., Office Update asks for the CD in order to patch), so I can't or won't patch
- I'm safe because I have Windows 98 (only this time ...) - Anti-virus is too expensive for me
- I tried ZoneAlarm but it was too complicated
- <fill in the blanks> crashed my system
- I have Norton Anti-virus, so I'm safe right?
- I never open attachments I'm not expecting, so I'm safe right?
- I only go to reputable web sites, so I'm safe right?
- I don't care
The problem is with the OEMs, we are asking users to close the barn door after the horse has bolted.
Another resource AList readers may find interesting is the North American Electric Reliability Council website at http://www.nerc.com
They've posted a preliminary report with details of the August 14th outage timeline, as well as a yearly summary of significant grid events, and reliability assessments.
Amen, Michael! Most experienced systems people absolutely sweat bullets when they have to apply a patch to a working production system. Most simply won't unless they absolutely have to. And the policy of bunching them together a la Service Packs has done a lot to vindicate this approach. Better to deal with the problem you know about. This is a case where fools rush in, and then the phone starts ringing.....
Another good reason to separate the control and business networks.
I had a bad experience with a brand new Dell computer recently. I hooked up to the business network planning to download all the latest patches. I ended up going to lunch first and by the time I got back from lunch the PC had managed to become infected with something. The IT guy was running around trying to figure out who was using up 100% of their DSL bandwidth. I wonder why their proxy server would allow a single PC to hog resources like that.
He had been able to narrow it down to a specific computer and knew its name, but had no way to tell where it was. He asked me, and I knew right away it was me. We unplugged it until he could disinfect it with some utility he had, and then I spent several hours downloading and installing various patches off the MS website. After that I installed the antivirus software they thought they did not need to buy.
One would think that Dell would have the decency to at least install the latest patches before they ship out a PC, but for some reason they choose not to do so.
My guess is that these types of attacks will continue for the immediate future. We are just going to have to be vigilant. Probably someone will come up with network manager software that will be able to look at the installation on a PC that connects to a business network and, if it is creating a problem, just isolate that PC and report it to someone to take care of the problem.
I am a bit perturbed with AOL of late. They obviously have to know that a huge number of these spams I am getting lately are virus/worm attacks. Why don't they just filter out those messages at the mail server level? It cannot be all that hard. I am tired of getting 15-20 spam virus/worms a day.
While I am not a Dell fan, I feel I must defend them. It is a difficult task to install all the hardware ECOs before shipment, much less the software ECOs. If you haven't already done so, look at the volume and frequency of "Critical Updates" from Microsoft. I have seen as many as two a day. There were probably updates released while your Dell box was in transit.
Put the blame where it belongs. I also remind you there were folks asking Bill not to release XP with such poor security. Bill scoffed at the idea of less than perfect security. It took the FBI to remind Bill Gates that XP (all Windows, for that matter) was a petri dish for viruses.
I've been telling customers and friends for a couple years now to assume any Windows machine that has been on the 'net is infected. With several scans per minute, it doesn't take long. I can't see how Dell could keep current; there is often a new threat in the time it takes to process and ship. And I imagine the drives are bulk loaded and inventoried. Once they get a "trouble free" set that works with their products, they aren't going to change it without a compelling reason and pretty serious testing. If you get a virus, you don't typically blame Dell. If they introduce a serious bug when patching, it can wipe out any profit for weeks. They do have a solution, however. They will load Linux or sell an empty machine now that MS can't "cut off their air supply". That would let you be sure what is on the box before exposing it to the Internet. But people want to plug and play and damn the torpedoes.
Actually, once 30 or 40% of the PCs on the net are running Linux it should ease the situation somewhat. It's harder to get the exponential infection rates with even a little diversity. Till then it's obviously fairly simple to cause extensive destruction and grief, they've been doing it as long as there has been a monopoly. Once that most favorable situation for virus writers has ended, it won't be quite so overwhelming to deal with the problem.
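That claim about diversity is easy to illustrate with a back-of-envelope simulation (a toy model with invented numbers, not real epidemiology): a worm probing random hosts spreads explosively when nearly everyone runs the vulnerable OS, and far more slowly when only a fraction does.

```python
# Toy worm-spread model (invented numbers, not real epidemiology):
# a scanning worm probes random hosts each step, but an infection only
# takes if the target runs the vulnerable OS. Shrinking that
# monoculture fraction slows the exponential spread dramatically.
import random

def spread(n_hosts, vulnerable_frac, probes_per_step, steps, seed=1):
    rng = random.Random(seed)
    vulnerable = set(range(int(n_hosts * vulnerable_frac)))
    infected = {0}  # patient zero (host 0 is vulnerable in both runs)
    for _ in range(steps):
        newly = set()
        for _host in infected:
            for _ in range(probes_per_step):
                target = rng.randrange(n_hosts)
                if target in vulnerable:
                    newly.add(target)
        infected |= newly
    return len(infected)

mono = spread(10_000, 0.95, 3, 8)   # near-monoculture
mixed = spread(10_000, 0.40, 3, 8)  # meaningful diversity
print(mono, mixed)  # the monoculture count comes out far larger
```

The model is crude, but it shows the mechanism: each probe only succeeds with probability equal to the vulnerable fraction, so the per-step growth rate, and hence the exponential, drops as diversity rises.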
Filtering is possible, but time consuming and, of necessity, behind the wave, since you have to know explicitly what you are filtering for. When you're up to yer arse in alligators...... During the mailstorms generated by the MS virus of the week, most will settle for just keeping the servers up. Sometimes that's more than they can do.
Linux has, so far, been about the best way to avoid all the hassle, expense and downtime. At least it's worked for me.
I guess that the message of the blackout is that diversity in computers and operating systems is good!
Peter Clout, DPhil.
Vista Control Systems, Inc.
I am not sure how Dell runs their business, but I believe that many PC manufacturers buy their hard drives with Windows already installed. The hard drives would be loaded with software from a master image during final testing
of the drive. The master images are only changed at infrequent intervals, after the PC manufacturer is sure it is compatible with their hardware. They don't want to risk shipping PCs with untested software. What you do about
service packs and patches and any problems they may cause after the PC is delivered is a matter between you and Microsoft.
If you really want your new PC to be "ready to run", you need to buy it through a local dealer (or consultant) who can do all this for you. Otherwise, with all the updating and patching that has to be done with a new PC these days, it's starting to be not much different from building your own.
London, Ont. Canada
Hi All. We patch every PC and do a burn-in test and print a test sheet before they leave our office. Fortunately we don't rely on PC sales to make a living, as this process used to be a matter of an hour's work: start the burn test, come back 2 days later, and print the report. Today, with all the patches and updates and other muck from muckrosoft, it takes a day of restarting and downloading to get the operating system loaded. And don't think you can download all the patches and make an install CD or an auto-install over your network anymore; the patches are being updated so fast that it is impractical.
If I got such a bad, incomplete and essentially flawed product anywhere else, I would insist it was replaced with a new one at the shop where I bought it. If only all the software I need to use daily would run on Linux!!!
You could always use Ghost to create a disk image of a system. As someone that used to run a computer lab and did things the hard way, Ghost is a huge time saver. Spend a few hours to get a network server setup to send Ghost images over a network, and you can recreate a PC in
a few hours, most of which just amounts to watching a progress bar march across the screen.
As for downloading too many updates, well, install Red Hat, fire up Red Hat Network, and tell me if that's any better.
ESA Technology ---- www.esatechnology.com
------- Linux-based industrial HMI ------
-------- www.esatechnology.com/5k -------
I believe Mr. Pittendrigh was referring to setting up new systems with Windows XP. How do you handle the problem of product activation? If you copy an image of an activated system to multiple computers, Windows will notice that it
isn't on the original computer any more and require re-activation. If you contact Microsoft with multiple re-activations from one copy, are they going to decide that you are a pirate?
Large customers solve this by using a corporate (site) license which they can image (there is no copy protection). However, it is contrary to the software
license to use your own corporate license to prepare a computer for a customer.
London, Ont. Canada
Simply make all your vendors (and the others who wish to sell you something) aware that you would prefer Linux solutions. Eventually they'll get the idea. Especially those whose products you don't buy.
You know how I feel about this; it's just damn difficult to get the Simatic manager and Step 5 programming package running on Linux.
We work almost exclusively with alarms. Since the blackout we have seen considerable increased interest. The utility operators have a discussion group through NERC and the lack of talk about Aug 14 seems to be the biggest item of interest.
Say hello to Jason for me.
http://www.theinquirer.net/?article=11529 and especially
http://www.theinquirer.net/?article=11523 put it all in perspective. Although the first article fails to directly note that another country was also affected by the blackout.
I think the technology failed because it is out of date and not very wonderful at all. I also think it failed because we pay ridiculously low cents per kilowatt hour versus the true economic and environmental cost of electricity - hardly an incentive for private or public cash infusions for new plant and infrastructure (or conservation). Finally, if governments are too busy trying to privatise and de-regulate electricity (it started out unregulated and it was a mess, after all), then who is looking after things?

I pay a "stranded debt" charge on my electricity bill every month. It is from the old Ontario Hydro (publicly owned, in massive debt, traditional centralised power generation) that was split up into two companies - one for generation and one for distribution. In order to make the companies more attractive for privatisation (either in part or as a whole), the government held back some of the debt as "stranded debt" and shafted the end users with the payment. They also ignored every expert in the field of de-regulation who told them not to bother (the status quo was obviously no good either).

We generate 1% or less of our power up here in "green" Ontario by "alternative" means. I think that Denmark is around 20% and even California is approaching 20%. Pity the poor Danes.
The technology is out of date; we don't pay the true cost; the centralised generation model is out of date &c.
I was driving from home (north of Detroit) to Columbus, OH when the power went out. When I got to Toledo I noticed the refineries were burning off a lot of excess pressure, turned on the radio, and discovered there was a blackout. After confirming that Columbus was still powered (see below), I continued on to my meeting listening to a Detroit radio station. They were desperately trying to figure out how a substation problem in Niagara Falls (what was thought to be the cause at the time) could shut down the power over such a wide area. Numerous people were calling in and offering explanations and assigning blame. One call was from a person who claimed to be an engineer from Detroit Edison and said he could explain how this could happen in such a way that anyone could understand. Here is what he said (paraphrased):
"Image the power grid is like a big hula hoop with 12 holes in it and there is a football team arranged around the hoop with each player at a hole. If you connect a water hose to the hula hoop and all the players stay in place, water will shoot out the 12 holes at equal pressure and keep a football in the air in the center of the hoop. As long as the football is in the air, the power will be on. If one of the players covers up the hole in front of them or drops their end of the hoop, the football will fall down causing a power outage."
I almost had to pull off to the side of the road I was laughing so hard. If this guy was operating the system, that could be the cause of the outage. Investigators should find this guy quick.
The question was posed: why didn't the protection relays and reclosers operate properly to isolate the failures? They did in some places. American Electric Power (AEP) is the utility in Columbus. They issued a press release claiming that their protective relays operated properly and isolated the AEP system, protecting most of their service area. BTW, AEP has been aggressively automating their transmission substations (and is a large user of UCA/IEC61850 for their transmission substations). Technology, properly applied, does work.
On a personal note with respect to the blackout: Ever notice how there is an antenna at the top of the roof on a Ford Focus? Detroit Edison said it would take 3 days to get the power back on. So, I loaded up my rental car (a Ford Focus) with blackout supplies (dry ice, batteries, and water) in Columbus and headed home. I had to drive through a bad thunderstorm. Just as the storm ended, that antenna worked very effectively as a lightning rod: the car was struck by lightning while I was driving. All that was left of the antenna was a black spot on the roof of the car. The power came back on before I could make it home (28 hrs after it failed) resulting in a strange kind of disappointment that I just spent $150 on all that stuff I didn't need.
So what is wrong with Windows based SCADA systems? I recently spent 4 years putting in XP based and NT based SCADA systems to keep water flowing to every house for over 100 major water districts throughout Colorado and Wyoming. All of our computers had NAV installed; some were connected to the internet, most not. We never had to respond to any type of computer problem beyond a hard drive failure or a monitor failure, and never did we have virus issues or other problems related to being tied to the internet. In fact, utilizing IBM based machines, I can only recall one complete computer failure out of over 300+ machines installed, and it was replaced in 2 working days by IBM; we were on site in 5 hours and had their entire system functional in 2 hours, and then just waited for the new machine to show up.
Unprotected (unmaintained) systems are prone to attacks or failures. Like any system (mechanical or electrical), maintenance is required; I even perform some routine maintenance on my machine at home, and it runs 24/7/365 (has for the last several years) without a problem. I would not blame Windows; any operating system is only as good as it is maintained and protected.
Oh, if you don't think water is important, turn it off for 2 to 4 hours and see how fast people start screaming!
Uncanny! That makes _five_ people I've heard from who have had that experience. Of course, most of my acquaintances are professional SAs, until just lately.
On September 18, 2003 16:30, Patrick Allen wrote: <clip>
> This is a subject that I have brought up several times in the past few
> years. Much of the production and test equipment I have been involved with
> is PC based. With addition of searchable databases now becoming
> commonplace, computers outside the closed network are now connecting to
> retrieve or examine data. Program changes are being made by technicians
> using laptops that may have been connected to dozens of other systems
> including the internet.
Laptops have been a frequent cause of the SQL Slammer and MSBlaster worms returning to a network after they have been cleaned out. We recently had an e-mail sent around which asked us who had a visitor with a laptop. Our IT department detected that an unknown computer was connected to the network with the MSBlast worm active in it and they were trying to find out who it was to get them to unplug it. As long as people have laptops, building a fortress wall around a network is not a sufficient protective measure.
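The point about the fortress wall can be sketched in code: a perimeter firewall never sees a worm-carrying laptop plugged in on the inside, so the only defence is watching the inside of the network for hosts that aren't in the inventory, which is essentially what the IT department above was doing by hand. This is a minimal illustration; the MAC addresses and host names are invented, not taken from any real network.

```python
# Sketch: internal monitoring against a host inventory. A perimeter
# firewall can't catch an infected visitor laptop, but comparing the
# addresses actually seen on the wire against a known-host list can.
# All MACs and names below are made-up examples.

KNOWN_HOSTS = {
    "00:0d:60:aa:bb:01": "scada-server-1",
    "00:0d:60:aa:bb:02": "historian",
    "00:0d:60:aa:bb:03": "operator-hmi",
}

def find_unknown(seen_macs):
    """Return the MAC addresses seen on the network that are not in
    the inventory, so someone can go and find the machine's owner."""
    return [mac for mac in seen_macs if mac not in KNOWN_HOSTS]

# A visitor's laptop shows up in the switch's address table:
seen = ["00:0d:60:aa:bb:01", "00:02:b3:11:22:33"]
print(find_unknown(seen))  # the second address triggers the alarm
```

In practice the "seen" list would come from switch address tables or ARP data rather than a hard-coded list, but the comparison is the same.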
> Virus scanners are rarely installed on production PCs, as the performance
> hit would seriously affect any kind of high speed data acquisition. I
> don't even know how software such as LabView would react to having to share
> CPU time with virus checker.
Virus scanners are reactive programs. They only respond to known problems and they don't typically protect against worms. Virus scanners are incompatible with some application software (I was using some software recently that recommended shutting down any virus scanners when on-line to a servo controller).
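The "reactive" nature of signature scanning is easy to see in miniature: the scanner can only flag byte patterns already in its database, so anything new walks straight past it. The signature bytes and names below are invented for illustration and bear no relation to real scanner databases.

```python
# Sketch of signature-based scanning, to show why it is reactive:
# only byte patterns already in the database are ever detected.
# Signatures and names here are invented, not real worm signatures.

SIGNATURES = {
    b"old_worm_payload": "W32/OldWorm",
}

def scan(data: bytes):
    """Return the name of the first known signature found, else None."""
    for sig, name in SIGNATURES.items():
        if sig in data:
            return name
    return None

print(scan(b"...old_worm_payload..."))  # known threat -> detected
print(scan(b"...brand_new_worm..."))    # unknown threat -> passes untouched
```

Until the vendor ships an updated signature file, the "brand new worm" case returns nothing at all, which is exactly the window in which fast-spreading worms do their damage.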
Worms are perhaps a more serious threat to production PCs because they install themselves remotely via security holes and don't require anyone to click on an e-mail attachment (MS Outlook is a virus writer's best friend). Even if some PCs are immune to any existing worms, the more active worms will take up so much available bandwidth in attempting to spread themselves that they will clog up the network. If I/O, instruments, or other devices are connected to the production PCs via the same network, even PCs which are not themselves directly vulnerable to the worm may not be able to reliably access the devices they need to function.
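The bandwidth point is worth making concrete: even an uninfected production PC suffers when a worm floods the shared segment, because its I/O polls start missing deadlines. The simulation below is hypothetical; the 100 ms deadline and the response times are invented numbers, not measurements.

```python
# Sketch: a production PC polling devices on a shared network. When a
# worm saturates the segment, responses slow down and polls miss their
# deadline even though the PC itself is not infected. All timing
# figures here are invented for illustration.

DEADLINE_S = 0.1  # assume the device must answer within 100 ms

def poll_device(response_time_s, deadline_s=DEADLINE_S):
    """Classify a single poll as 'ok' or 'late' against its deadline."""
    return "ok" if response_time_s <= deadline_s else "late"

# Normal traffic vs. a worm-congested segment:
normal = [0.02, 0.03, 0.02]
congested = [0.02, 0.45, 1.20]  # retries pile up as bandwidth vanishes

print([poll_device(t) for t in normal])
print([poll_device(t) for t in congested])
```

A real system would raise a communications alarm or shed load on repeated "late" polls; the point is simply that the damage arrives over the wire, not through the PC's own vulnerabilities.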
> In one recent case, one computer on a test line was equipped with a modem
> so that it could call-out for remote sessions with a software developer.
> This computer was essentially connecting to the internet naked, as no
> firewall and no updates had ever been performed on it. Even though the
> connection was dial-up and hence intermittent, a simple port scan would
> have revealed vulnerabilities inherent to all Windows machines.
Some people got hit by the MSBlast worm when they went on-line to download the patch to protect themselves from it. As soon as they got a connection - bing! - the worm was in.
Whenever a new virus or worm goes around, people like to blame system administrators who haven't installed all the latest updates and patches. However, when the SQL Slammer worm went around last winter, many of Microsoft's own computers were hit by it quite badly because they had not installed their own patches. If Microsoft can't keep their own systems up to date, is it reasonable to expect everyone else to?
Frequent patching is probably a bad idea for a production computer. In some cases, Windows patches have had bugs that were worse than any effect the virus or worm was supposed to have. This is why so many PCs with Windows don't have all the latest updates or patches installed. New updates and patches will come out faster than people can test, validate, and install the previous ones. Installing patches without testing for side effects is very risky, perhaps even a higher risk than the virus or worm presents. Patching system files may also require re-validating the test system which may not be a trivial project in itself.
In short, I believe the usual office solutions of firewalls, virus scanners, and frequent OS patches are of doubtful effectiveness in their intended application, and appear to be inadequate for protecting production computers.
London, Ont. Canada
Personal story, little automation content:
Leslee and I escaped Detroit for Orlando during the mid-80s GM-10 disaster, but we return as often as possible to visit friends and family. On the afternoon of Thursday, 14 August 2003, we were northbound on I-275, approaching I-696, on our way to participating in the Woodward Dream Cruise 2003, when all the radio stations suddenly disappeared...
A couple of radio stations reappeared over the next few minutes, running on emergency generators. They had no idea what was happening, but they began speculating and never stopped...
As we continued along the freeway, we could see the growing traffic jams on the surface streets, as drivers had to deal with dead traffic lights. Detroit drivers are pretty resourceful; this actually worked a lot better than you might have guessed. Still, it was slow, and traffic backed up onto the freeways. The I-696 through lanes went from congested to stopped...
Our destination was the home of Fred and Barb Collins in Berkley, an enclave city at Twelve Mile and Woodward. We drove down the exit lane of I-696 to Greenfield, then negotiated our way into Berkley. We really weren't slowed much at all, so far...
We had spoken to our friends via my cellphone while south of Detroit. During the entire blackout, our roaming, Florida based cellphones were never useful again. The system was simply oversubscribed, and didn't allow bandwidth to roamers. Incoming calls went to voicemail; outgoing calls crashed instantly. Our friends' local cellphones continued to work, and the land line system never faltered...
Fred and Barb were drinking beer on the front porch when we arrived. They were in the process of moving to their new home, and had no battery powered radios on hand. We had one of those, and more beer in our cooler. A block party formed...
Darkness approached, and the folks on the radio continued to provide no useful information whatsoever. In fact, all of the technological information was gibberish, and the planning and scheduling information was either obvious or nonexistent. Will the automobile plants be open Friday? Not if there's no power! When will this problem be fixed? We're working on it. I never heard a hard question get a hard answer.
To put a sharp point on it, we never heard authorities give an on-point answer to a single useful question during the entire blackout. The media gave great credit to the new mayor of Detroit and the new governor of Michigan for their great leadership during the blackout; I can't figure out what they did! Do they get credit for not simply breaking down and sobbing because something is happening that is totally beyond their control?
Just after midnight Friday, power was restored to the neighborhood we were in. Still, we drove to 24 Mile and Van Dyke on Saturday morning before we found ice for our Dream Cruise picnic!
Mr. Pittendrigh (who incidentally prefers to be called Donald) has had a look at this thread and can't find the context which has caused Michael Griffin to make the statement in the first sentence of the attached post.
For the record, Mr Pittendrigh's best moment last week, was when his teenage Son decided to take WinXP off of his PC (which also happens to be on Mr. Pittendrigh's home/office network), and replace it with Win2000. Mr Pittendrigh hates WinXP with a passion and finds Win2Kpro and Win2K server to be the only acceptable muckrosoft operating systems for use in industry at present.
Jokes aside, I have my web server running on Win 2003 Server. I think I may become a fan of this product in the long run, but at present I have problems with this server which I never had with Win2000, or even, for that matter, with NT. The problems I had with Win2000 and NT I was able to resolve quickly, easily, and without reading tons of MCSE books (well, at least 95% of the time). At present I am considering re-installing 2003 for the 3rd time to get Active Directory working properly.
I have absolutely no time for XP; I find it insults my intelligence every time I need to do something more complex than opening a PowerPoint presentation. About a month ago it took me about an hour to figure out how to get files and folders in C:\Program Files declared read-write and authorized for me to make manual changes to some INI file or another.
There is a feature on Server that allows one to set up a network boot and operating system install controlled from a central server. It is a fine solution for industrial machines to ensure accurate and consistent installation of the OS, and is something I could even use in my business to cut the setup time of a new PC I am selling and reduce the competence level required to get an operating system installed and working. There would be some constraints to deal with in order to use this feature to full advantage, but the issue which at present thwarts any incentive to use it is that every time it was used to set up a new machine, one would have to determine whether the operating system patches in the install partition were up to date. In fact, one would spend so much time servicing this install partition that it is more efficient to do a manual install for each new PC.
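The patch-staleness check described above could at least be automated: before using the install image, compare the patches baked into it against the currently required list and rebuild only when something is missing. This is a sketch of that comparison only; the KB numbers are used as example identifiers and the required list would in reality be pulled from a maintained source.

```python
# Sketch: decide whether a network-install image needs rebuilding by
# comparing its baked-in patches against the current required list.
# The KB identifiers below are examples, not a maintained patch list.

REQUIRED_PATCHES = {"KB823980", "KB824146", "KB828035"}

def missing_patches(image_patches):
    """Return, sorted, the required patches the install image lacks."""
    return sorted(REQUIRED_PATCHES - set(image_patches))

# An image built months ago, before two newer patches shipped:
image = {"KB823980"}
print(missing_patches(image))  # non-empty -> rebuild the image first
```

A check like this doesn't remove the servicing burden, but it does turn "is the image current?" from a manual audit into a one-line report.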
There are ways around the WinXP licensing issue (and I don't mean illegal ways of circumventing it). The mechanism is described on the OEM support disk supplied with OEM distributions of the MS operating systems; the trick is in setting up "run once" software that prompts for the licensing information and other details on first-time start-up of the new PC. It also provides neat ways of customizing the operating system, such as putting a company logo on the Windows desktop and including supplier name and telephone details in the "About Windows" general tab of the system properties panel.
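For the branding part specifically: on Win2K/XP the supplier details shown in System Properties come from an `oeminfo.ini` file (plus an `oemlogo.bmp`) dropped into the System32 directory, which a run-once setup script can generate. The sketch below only builds the file's text; the company name and phone number are placeholders, and the exact section/key names should be checked against the OEM kit's documentation.

```python
# Sketch: generate the oeminfo.ini text that puts supplier branding
# into the System Properties panel on Win2K/XP. The manufacturer and
# phone number are placeholder values, not real supplier details.

def oeminfo(manufacturer, support_line):
    """Return the contents of an oeminfo.ini branding file."""
    return (
        "[General]\n"
        f"Manufacturer={manufacturer}\n"
        "[Support Information]\n"
        f"Line1=Support: {support_line}\n"
    )

text = oeminfo("Example Systems cc", "+27 11 555 0100")
print(text)
# A run-once script would write this to %SystemRoot%\System32\oeminfo.ini
```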
If you have ever started up a new Siemens programmer, you will see the techniques employed to good effect.
Cheers Donald P