Communication protocols
What is the reason (apart from the commercial one) for having so many "different" communication protocols instead of one common protocol?

I have been working with PLCs and automation systems for seven years. All these years I have been learning different types of communication (Profibus, ArcNet, RCOM, Modbus, Ethernet, etc.). What is the reason (apart from the commercial one) for having so many "different" communication protocols instead of one common protocol for all systems, which would make it possible (if the companies wanted to do so) to connect different systems (PLC, SCADA, sensors, etc.) from various companies?

I am looking forward to hearing from you.
Best regards
Tassos Polychronopoulos
ABB Constructions SA
El.Power & Automation sys Dept.
email : POLY_T@HOTMAIL.COM

By Darold Woodward on 19 June, 2000 - 9:45 am

As someone whose life seems to revolve around protocol issues these days, I'd like to comment. First -- why? Most of the protocols have their roots in vendor-specific protocols of one sort or another and have some degree of optimization for the types of information that vendor was concerned about.

The most notable protocol that is both an exception and a contradiction of this is Modbus. It really is specialized for Modicon PLC registers. However, its specialization is generic enough that most people can adapt their systems to it and get something useful out of it. Modbus has become a de facto standard because
it is a fairly easy and cheap "lowest common denominator" of protocols.
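
To make the "lowest common denominator" point concrete, here is a minimal sketch (Python, added for illustration, not from the original post) of a Modbus RTU "read holding registers" request; slave 1, register 0 and a count of 10 are arbitrary example values:

    # Build a Modbus RTU "read holding registers" (function 0x03) request.
    # Everything the protocol needs is a slave address, a register address,
    # a count, and a CRC -- which is why it maps onto almost anything.
    def crc16_modbus(frame: bytes) -> bytes:
        crc = 0xFFFF
        for byte in frame:
            crc ^= byte
            for _ in range(8):
                crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
        return crc.to_bytes(2, "little")   # CRC low byte goes on the wire first

    def read_holding_registers(slave: int, start: int, count: int) -> bytes:
        pdu = bytes([slave, 0x03]) + start.to_bytes(2, "big") + count.to_bytes(2, "big")
        return pdu + crc16_modbus(pdu)

    print(read_holding_registers(1, 0, 10).hex())

The register/value model says nothing about what the registers mean, which is both its weakness and the reason so many vendors can squeeze their data into it.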

The only notable industry I've seen that has adopted "one" protocol is the European substation automation industry. They all use IEC 60870-5-101 or 103 and soon 104. While this sounds great, in practice there are lots of vendor specific extensions in 103 and lots of vagueness in 101. Therefore interoperability between devices from ABB, Siemens, and others really only exists on a limited basis.

An interoperability demonstration and report concerning the IEC protocol was conducted, but it required only interoperability at a low level of functionality and required some fairly extensive coordination between vendors before success was achieved.

In North America, UCA 2.0 is expected by some to become that one protocol for future substation integration. While it will deliver a lot of cool stuff, it is based on standards that are either not used or no longer in wide or growing use in the industrial integration arena. I'm still not sure if that helps or hurts us in the long run, but we're going to find out.

With UCA, the protocol was developed first and the conformance and interoperability tests are still under development. This means that, for now, the main goal of interoperability is left up to the developers following the specs, with no certification procedure. Interoperability will probably be realized, but it sure would have been nice to have the testing and certification in place before products entered the market.

While the idea sounds good, I haven't found an example where an entire industry has ever reached consensus quickly enough to have a single standard. This lack of standards is what doomed the original laser discs. Sony spent lots of money selling Beta VCRs to the public. Only recently have PC and peripheral manufacturers gotten together and developed tight enough specs to start to eliminate some of the integration issues with personal computers.

We're going to be set back again by the push for Ethernet. It is a great physical layer, but the application layers are just as numerous as the other networks you cited. I'm not convinced we'll have it solved in the near future in the industrial control world.

Darold Woodward PE
SEL Inc.
darold@selinc.com

By Rob Hulsebos on 20 June, 2000 - 12:26 pm

>> ...What is the reason (except the commercial) of having so many
>> "different" communication protocols instead of having one common
>> protocol for all systems ...?

>We're going to be set back again by the push for Ethernet. It is a
>great physical layer, but the application layers are just as numerous
>as the other networks you sited. I'm not convinced we'll have it
>solved in the near future in the industrial control world.

Hear hear. I'm trying to keep track of all the 'open' protocols that exist on this planet. I lost track somewhere above 100.

I now think we can double this number without much effort. There is Profibus on Ethernet, FF on Ethernet, Modbus on Ethernet, ControlNet/DeviceNet on Ethernet, there's IP and IP to cause eternal confusion, etc. etc.

I'm now leaving for home, and halfway home I have to fill up. It's strange: only 3 types of fuel at the gas station (diesel, unleaded, LPG). No separate fuel for my Honda. As consumers, we would never accept a situation with 50 types of fuel, one for each brand of car.


Greetings,
Rob Hulsebos

By Hugo Ahrens on 21 June, 2000 - 10:17 am

Hey Rob, just to stir the pot a little: as consumers we *are* accepting the plethora of protocols. There's nobody with the power to stuff them down our throats. It's just that we like the things we want, and if a slightly strange protocol comes with it, most of us do not take a stand and fight it off, insisting on what we really want. I see it like blaming the hockey players for the price of the game tickets.

Hugo

By Colin T. Marsh on 23 June, 2000 - 12:17 pm

Hugo, you are right about consumers "accepting" the plethora of protocols. However, you say "nobody has the power to stuff them down our throats". You might be right if your name is GM. They were powerful enough to resist A-B's new "offering", so ControlNet became "open".
Unfortunately most clients don't have GM's clout and have no choice but to use/accept the proprietary networks they are offered by the manufacturer. Now the client is the manufacturer's prisoner and can be restricted, by connectivity issues, to the products supplied or approved by the manufacturer. Then watch the manufacturer protect his monopoly when a non-approved company (like ours) provides truly competitive alternatives that enable the client's choice of OEM equipment to connect to these same networks.

Colin

By Hugo Ahrens on 27 June, 2000 - 10:48 am

No, Colin:
That's not the core message I was intending to pass on. My attitude is that if we all feel like little nobodies who have no control over these decisions, then we deserve what we get. If you don't like the protocol, don't buy the product. But we all want the thing we want, and we close our eyes and cringe about the part we don't like, but most buy anyway! That is why companies like AB are able to stay bullies, and the Westburnes of this world are just order takers. Where would the PC be today if it had been made closed like the Mac? Where would Modbus be? (Do you remember how in the eighties everybody was saying that recycling paper could not be done economically?) We know that open works for everybody, but we can't get everybody together to put pressure on the unreasonable guys, because everybody is in their own isolated situation.

Hugo

By Rob Hulsebos on 28 June, 2000 - 3:18 pm

Or take a seat in your national standardisation committee and disapprove of any unwanted 'standards'. Have the 'user's groups' be filled with users, and not with product sellers. Etc. etc...

Rob Hulsebos

By Hullsiek, William on 29 June, 2000 - 10:31 am

> Rob Hulsebos wrote:

> Or take a seat in your national standardisation committee and
> disapprove of any unwanted 'standards'. Have the 'user's groups' be
> filled with users, and not with product sellers. Etc. etc...

I used to attend SP72 meetings when they were being held. My employer stopped paying for them because it did not directly add to the bottom line.

I also took vacation and spent my own money going to a POSIX meeting.

I would make a comment about pointy-haired bosses, à la Dilbert, but it would be censored off the list.


- Bill Hullsiek

By Sage, Pete (IndSys, GEFanuc, Albany) on 21 June, 2000 - 1:01 pm

Right,

But there are 50 types of oil and air filters for the cars, not to mention dozens of different engines, transmissions, tire types, brake pads, etc. I can't slap a Honda engine into my Toyota. Nor can I take the snow tires off my wife's car and put them on mine. Each car vendor has their own interface for talking to the car computer...

I'm not defending the hundreds of different protocols out there - since we have to write special drivers or qualify OPC Servers for all of them. Vendors like to have control of protocols so they can rapidly make changes without going through a committee. That said, I do see a big push towards standardization.

Pete

By Curt Wuollet on 21 June, 2000 - 1:35 pm

Rob Hulsebos Philips CFT wrote:

> >> ...What is the reason (except the commercial) of having so many
> >> "different" communication protocols instead of having one common
> >> protocol for all systems ...?
>
> >We're going to be set back again by the push for Ethernet. It is a
> >great physical layer, but the application layers are just as numerous
> >as the other networks you sited. I'm not convinced we'll have it
> >solved in the near future in the industrial control world.

I think it's kind of shallow to blame the "open" people for that. Most of the proliferation was for the same reasons as in the automation world.
Now, we have most of the world standardized on the "Internet" suite of protocols and it is impossible to characterize that as a bad thing. If the Internet were run by automation vendors, no two people could mail each other and a mailing list would only reach people who had xyz computers
with xyz modems and xyz wiring. Of all those many protocols that you mention, a few are in legacy applications and the rest are forgotten. You stand almost no chance of breaking standards now and being commercially successful. Even Novell and Microsoft had to bitterly concede and accept commodity standards that they don't control. The automation market just doesn't "get it" yet.

> Hear hear. I'm trying to keep track of all the 'open' protocols
> that exist on this planet. I lost track somewhere above 100.

You have a very, very, loose definition of "open". I count about a dozen and each of them serves a unique purpose as opposed to dozens that
serve identical purposes in the automation market.

> I now think we can double this number without much effort.
> There is Profibus on Ethernet, FF on Ethernet, Modbus on Ethernet,
> ControlNet/DeviceNet on Ethernet, there's IP and IP to cause
> eternal confusion, etc. etc.

That _is_ the regrettable part. Users, by popular demand, want the standardization that makes the Internet work for everyone. The automation vendors respond by spending huge amounts of time and effort to build proprietary implementations on top of a "universal" wire protocol, thereby rendering it useless as an interoperability tool. The consumers get screwed again, and no progress has been made. Ethernet, by itself, means nothing without a single, standard protocol on top. I can see why people are not enthusiastic; proprietary Ethernet is just as bad as all the other media. They remove the magic ingredient and wonder why it's not magic anymore. If they would simply agree to add just one, and only one, common protocol to the Internet suite of protocols, preferably encapsulated in TCP/IP, or even UDP, huge amounts of time and effort would be saved, costs would plummet and adoption would be exponential. But they would rather have a tiny piece of the market that they can control. Smart, real smart. That's why we're doing the Linux PLC; it's not very hard to do better than people who think like that.
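
As one concrete illustration of the pattern Curt describes (not his specific proposal), Modbus/TCP does exactly this: it prefixes the ordinary Modbus PDU with a small MBAP header and carries it inside a TCP connection. A rough Python sketch, with the host address, transaction/unit IDs and register numbers as placeholder values:

    # Frame a "read 10 holding registers" request as Modbus/TCP: MBAP header
    # (transaction id, protocol id 0, length, unit id) followed by the same
    # PDU used on a serial line. 192.0.2.10 is a placeholder address.
    import socket, struct

    pdu = struct.pack(">BHH", 0x03, 0, 10)
    mbap = struct.pack(">HHHB", 1, 0, len(pdu) + 1, 1)

    with socket.create_connection(("192.0.2.10", 502), timeout=2) as s:
        s.sendall(mbap + pdu)
        print(s.recv(260).hex())   # MBAP header + response PDU

The TCP/IP stack does all the transport work; the only extra agreement needed is the few bytes on top.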

> I'm now leaving for home, and halfway have to fill up. It's strange:
> only 3 types of fuel in the gas station (diesel, unleaded, LPG).
> No separate fuel for my Honda. As consumers, we would never accept
> the situation of 50 types of fuel for each brand of car.

Why then do we accept it in the automation business? Support your local Linux PLC hacker and use Open Source products wherever you can.
We built the Frankenstein monster; the only way to subdue it is to starve it until it's willing to change.

Regards,

Curt Wuollet, Owner
Wide Open Technologies
No disclaimers. Go ahead and call my boss.

By Phil Covington on 23 June, 2000 - 2:13 pm

The problem that I see with a "common" protocol is trying to get everyone to agree on what that "common" protocol should be. Getting a group of people with different expectations and requirements to agree on a subject can be very difficult. All one has to do is look back at the message archive of the LinuxPLC for January and February to see how difficult it is to get
everyone to agree on how to proceed. Even now there seems to be a few people working in different directions on the LinuxPLC.

Regards,

Phil Covington

By Curt Wuollet on 26 June, 2000 - 10:42 am

Hi Phil

That's why the pressure for standardization must come from outside the automation establishment. Hell, they've just ended years of trying to do this very same thing with worse than no results; along the way they actually gained protocols. The push for Ethernet had very little to do with this bunch of self-interested NIH mavens. The push came from customers who have already standardized on Ethernet and TCP/IP, who know that connectors shouldn't cost $80.00 and a serial card shouldn't cost $450.00, and that the whole purpose of networking is to interoperate. I expect this trend to continue until proprietary solutions get you shown to the door. Only then will someone come up with the bright "idea" of standardizing on open protocols. I don't have a problem with proprietary protocols so much as with what they are used for, which comes close to extortion. That, and the bald-faced liars calling them open. My questions are: When will the reasonable expectations of the customer start to matter? When will systems integrators start to matter? And when will networks begin to connect things together rather than keep them isolated by vendor?

Regards

cww

By Ralph Mackiewicz on 29 June, 2000 - 11:56 am

> > The problem that I see with a "common" protocol is trying to get
> > everyone to agree on what that "common" protocol should be.

...snip...snip...

> That's why the pressure for standardization must come from outside
> the automation establishment.

Who outside of the automation industry cares about interoperability of automation equipment?

...snip...snip...

> My questions are: When will the reasonable expectations of the
> customer start to matter.

If users care so much about open and interoperable standards then why do they keep buying things that are not open and not interoperable?

I am serious: Can someone please answer this question?

> When will systems integrators start to matter?

When they start making the purchase decisions on the equipment they integrate.

> And when will networks begin to connect things together rather
> than keep them isolated by vendor?

When users (or the people who make purchase decisions) stop buying those solutions that isolate them by vendor.

Regards,
Ralph Mackiewicz
SISCO, Inc.

By Warren Postma on 30 June, 2000 - 10:55 am

> When users (or the people who make purchase decisions) stop buying
> those solutions that isolate them by vendor.

This is a very pragmatic way of seeing it. Customer pressure might some day increase, or it might not. I agree, and I think it'll never happen.

Diverse ways of communicating are a fundamental part of being human. Just try to make everyone speak English. Or worse yet, try to get everyone to speak Esperanto. A doomed idea, maybe noble, maybe just dumb, but in the end it leads to nothing. On the other hand, a lot of people learn English who need it only to conduct business. Having a "lingua franca" is more useful than banning or deprecating any particular single language, protocol, or other simple standard.

For automation industries, Modbus is exactly that. It's a least common denominator. While unspecialized, and not very powerful, it works. Any expectation that everyone will suddenly start being "more consistent" in the automation or communications-protocol realm than human beings are wont to be in general is, in fact, a pipe dream. I find the smaller the niche, the greater the potential for "balkanization".

Warren

By Ralph Mackiewicz on 23 June, 2000 - 7:37 am

> >> ...What is the reason (except the commercial) of having so many
> >> "different" communication protocols instead of having one common
> >> protocol for all systems ...?

...snip...snip...

> I now think we can double this number without much effort.
> There is Profibus on Ethernet, FF on Ethernet, Modbus on Ethernet,
> ControlNet/DeviceNet on Ethernet, there's IP and IP to cause eternal
> confusion, etc. etc.
>
> I'm now leaving for home, and halfway have to fill up. It's strange:
> only 3 types of fuel in the gas station (diesel, unleaded, LPG). No
> separate fuel for my Honda. As consumers, we would never accept the
> situation of 50 types of fuel for each brand of car.

With gas, you make the purchase decisions and the companies respond appropriately.

I am curious, how many of the automation engineers on this list actually determine the brands of controls that are used in end user
systems?

I suspect that the people making purchase decisions are not the same people who object to the proliferation of "open" solutions.

The people who make the purchase decisions obviously don't think that having a plethora of "open" communications protocols is relevant to
their decision making process for controls.

Regards,
Ralph Mackiewicz
SISCO, Inc.

By Mark Hutton on 29 June, 2000 - 3:04 pm

But we do settle for 50 (and many more) types of power cell, for the myriad of electrical and electronic devices in use.

And if you include other types of vehicle/transport, with different needs and fuel requirements from domestic vehicles, you would find that the number of fuel types also increases. Different problems require different solutions, hence HTTP, FTP, etc., not to mention TCP and IP.

The next great thing (or one of them) will be XML, which effectively gives everybody the ability to create their own protocol(s).

Infinite protocols.
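
To illustrate the point (an invented example, not from the post): two vendors can describe the same reading in perfectly valid XML that shares nothing but the syntax, so a parser still has to be written per vendor.

    # Two made-up, equally valid XML encodings of the same temperature reading.
    # Both parse fine; neither understands the other without vendor-specific code.
    import xml.etree.ElementTree as ET

    vendor_a = '<reading tag="TT101" units="degC" value="73.5"/>'
    vendor_b = '<Messwert><Name>TT101</Name><Wert einheit="C">73.5</Wert></Messwert>'

    print(ET.fromstring(vendor_a).get("value"))        # 73.5
    print(ET.fromstring(vendor_b).find("Wert").text)   # 73.5, via a different "protocol"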

By Mark Rogers on 30 June, 2000 - 3:35 pm

> The next great thing (or one of them) will be XML, this effectively gives
> every body the ability to create their own protocol(s).
>
> Infinite protocols.

I've never really understood the problem of multiple protocols - why should we only have "N" protocols unless we also limit the number of
PLCs, RTUs, controllers, etc in a similar way?

What I do find frustrating is the way protocols are often so closely guarded (or made open but documented in a vague or inaccurate way), and the way that manufacturers often charge so heavily for the drivers (and cables and manuals and ...), but these are commercial issues for the manufacturers concerned. As has been pointed out, unless end users demand either a standard protocol, or (my preference) a well documented open protocol, then they'll get what they pay for.

[Working for a SCADA supplier my take on this may be different from that of the end user, of course. To sell to a customer using brand/model X of PLC we need to have a driver for that PLC. We can't write one unless the protocol is open, so we either use OPC, which is usually a chargeable extra from the manufacturer (and slower than a native driver would be), or talk through some other software layer (e.g. AB's RSLinx - again a chargeable extra and unlikely to be as fast as a native driver). Where a "standard" protocol is used, e.g. Modbus, it is very common to find that it is a new variation (I'll avoid saying "incorrect implementation") of Modbus which nobody else is using, so we're back to square one anyway. From my perspective, as a support engineer, the other thing that a documented protocol gives you is the ability to debug problems - a byte-level trace of comms isn't much use if you have no idea what it should look like (see the small decoding sketch below).

Now, if the manufacturer of the PLC also sells a SCADA of some type then there is a clear commercial advantage to them in the whole scenario above. Since we are all in this, at the end of the day, to make a living, this can and will only change if end users insist on changes. If using an open or standard protocol meant more PLC sales then it would happen, but there is no evidence that this is the case.]
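
A small sketch of the decoding point above (Python, invented trace bytes, CRC omitted for brevity): with the Modbus spec in hand, a captured reply is readable; without it, it is just hex.

    # Decode a Modbus RTU "read holding registers" reply from a byte-level trace.
    # The trace is an invented example and the trailing CRC bytes are omitted.
    def decode_read_response(frame: bytes) -> None:
        slave, function, byte_count = frame[0], frame[1], frame[2]
        data = frame[3:3 + byte_count]
        registers = [int.from_bytes(data[i:i + 2], "big") for i in range(0, byte_count, 2)]
        print(f"slave={slave} function={function:#04x} registers={registers}")

    decode_read_response(bytes.fromhex("010304006e012b"))
    # -> slave=1 function=0x03 registers=[110, 299]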

My apologies for the indiscriminate use of the word "open" in the above, by the way, since we all know how many different ways that can be interpreted.

Mark

By Peter Placek on 19 June, 2000 - 9:47 am

Dear Tassos,

It is very hard to answer your question. I am afraid there is no clear answer. We can ask a similar question: "Why don't people all around the world speak the same language? It would be so much easier!" It is probably just our nature.

But instead of trying to answer your question, it might be more important to find a solution. There are several activities and groups around, but I believe the most successful one is the OPC Foundation. OPC communication has become quite common in industrial automation lately, providing a connection among all the protocols you mentioned.

Sincerely,

Peter Placek.


____ Merz Company ... _________________________________
Peter Placek, Sales and Marketing Manager
tel: +420 48 510 0272, fax: +420 48 510 0273
http://www.merz-sw.com
_____ ... for Clean Sound of Software

By Roger Irwin on 27 July, 2000 - 2:46 pm

Peter Placek wrote:

> Dear Tassos,
>
> It is very hard to answer your question. I am afraid there is no
> clear answer. We can ask similar question: "Why people all
> around the world do not speak the same language? It would be
> so much easier!" We are probably just of the nature.
>

Cannot totally agree. Firstly, many protocols have different characteristics for different requirements. CAN goes short distances, but with tightly controlled timing. Profibus DP is easy, but costly and not scalable.

Then there are hardware constraints: adding an Ethernet controller and TCP/IP stack to a little microcontroller-based temperature regulator would up the cost fourfold, as well as increasing size and power consumption, whereas MODBUS, by contrast, is ideally suited to such a task and is freely implementable without costly licences and/or association fees.

Then there are the manufacturers. Everybody makes products that conform to a 'standard'. Of course what they really do is just take what they have always done and create an 'independent open standards body' to promote it. Well OK, that last remark is a bit cynical, but you have to admit there is a bit of truth in it as well;-)

Have you ever asked yourself 'why do we not all drive the same standard type of automobile': 4-door sedan, 5 seats, standard-sized boot that takes a standard set of luggage.......

>
> But instead of trying to answer your question, more
> important might be to find a solution. There is several
> activities and groups around, but I believe more successful one
> is OPC Foundation. The OPC communication has become
> quite common in industry automation lately, providing connection
> among all protocols you mentioned.

Nearly all OPC applications I have seen do not replace these protocols; OPC simply sticks a wrapper on top, often superfluous and often with a very expensive price tag.

IF OPC is a solution, I am bewildered! Indeed the OPC Foundation has left me perplexed, as has the whole OLE->ActiveX->DCOM caboodle.

When OPC started out (around '95) there was much enthusiasm and jubilation, and everybody talked about common standard access to peripherals and WinCE and embedded NT. I spent hours drooling over the WinCE developers' CD......

Well, WinCE has been radically reincarnated 3 times now, each time being rationalised in the direction of embedded consumer products (TV set-top boxes, PDAs, cellphones etc.). This enormous market is what MS is really after, and we industrialists are left with something that is not at all suitable for e.g. a DIN-rail-mounted active gateway.

As for embedded NT4.........They never got round to finishing many common desktop requirements such as USB scanners before they embarked on W2K, and now the process starts over. I have never seen any serious effort on the part of Redmond to make a decent embeddable version of NT (like something that can be used without a disk drive
etc), and given that they have their hands full with W2K, which they must now port to 64 bits
for the new Intel processors......well, frankly, I am not holding my breath.

This is serious for OPC, which nowadays does little more than define DCOM calls, because the DCOM technology they use only works (to all intents and purposes) on Windows. And yet MS are making no effort to make a Windows suitable for industrial appliances.

What happens in practice is that I must interface my PLC, axis controller, whatever, to a PC running NT4 using Profibus, MODBUS, whatever, and then access the NT4 machine via DCOM from e.g. my EXCEL spreadsheet (and pay a hefty license for the DCOM server software I require on the NT box). Of course if I do not want headaches I will be running all this under an NT domain, so at least one NT box on the network must be an NT server with a $2000 price tag and an MCSE to configure it. Then in the not too distant future I find I have to upgrade all this to W2K, but to do that I need to upgrade the hardware.............

This is absurd. An ever-increasing amount of industrial hardware is capable of supporting Ethernet interfaces with TCP/IP stacks, and I can buy miniature flash-based industrial PC modules for a few hundred dollars which will run the Datalight embedded DOS with TCP/IP (around a $30 license fee). I can communicate directly with such devices with a few lines of VB script in my EXCEL spreadsheet, or just use the object supplied by the manufacturer. I do not need this DCOM caboodle in the middle, although I can optionally route my TCP/IP communications into a DCOM server if that IS appropriate, and be no worse off.

For VB scripting cynics out there, take a look at the MSDN VB entry for WinSock; there is a ready-made example that demonstrates how easy it is to send messages between two programs over a TCP/IP network. Although this example uses two VB programs, one side could just as easily be a 'SENDTCPMESSAGE' command in a PLC program.
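
In the same spirit as that WinSock sample (this sketch is mine, in Python rather than VB, with an arbitrary local port), two programs exchanging a short message over TCP really is only a few lines:

    # Minimal TCP message exchange, roughly analogous to the MSDN WinSock sample.
    # Port 15000 is an arbitrary choice; one end could just as well be a PLC's
    # 'send TCP message' instruction instead of a Python program.
    import socket, threading

    def sender():
        with socket.create_connection(("127.0.0.1", 15000)) as s:
            s.sendall(b"MOTOR1 START")

    with socket.create_server(("127.0.0.1", 15000)) as srv:
        threading.Thread(target=sender).start()
        conn, _ = srv.accept()
        with conn:
            print("received:", conn.recv(1024).decode())   # -> MOTOR1 START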

Of course what is missing is the format of the message to be sent. Although in many cases ad hoc messaging is quite adequate, in order to facilitate messaging between off-the-shelf products it is much simpler if standardised profiles are available.

This is basically what OPC are doing nowadays: they are defining formats for the interchange of data, not the actual communications protocols themselves. IF they were a serious independent standards group (as they claim to be), they could design these message formats such that they could be sent over basic TCP/IP message streams (i.e. direct to a PLC etc), OR be encapsulated in a DCOM object (or a DLL or CORBA object) as appropriate. Instead they insist on DCOM objects that are so intertwined with the Win32 API that it is unlikely they will ever break free.

Now they are moving on to higher things: XML for the standard interchange of intelligent data. XML is also an open, interoperable standard, but I will be little surprised if the OPC XML implementations are only actually capable of being used in conjunction with Office 2000.

By Curt Wuollet on 31 July, 2000 - 8:13 am

Hi Roger and all.

I quite agree with your views on OPC and especially the facade of openness that is put upon the whole business. Kinda like calling black white and overcoming objections through massive and pervasive marketing. What I don't understand is the resistance to real open protocols and this fierce, rabid, Windows-everywhere-and-nothing-but-Windows consensus in the automation market. It seems as if no technical argument can stand up to this "Windows at any cost" mindset. So many of the incompatibilities and problems could be overcome by examining the example of the Internet: universal, ubiquitous exchange of data with high reliability and incredible ease in comparison to the existing state of the art in this market sector. I agree that Ethernet is not the solution to all problems, but it would be a good start if and only if the mistake of proliferation of proprietary protocols can be avoided. The cost of silicon for Ethernet is diminishing rapidly, and licensing costs can be eliminated by using any of the Open Source stacks that are widely available. If people can build and sell NICs for $9.00, it should be competitive with even serial hardware at reasonable volumes. If one were to listen to the "experts" in the automation field, the problems with Ethernet and TCP/IP are so severe that the Internet is absolutely impossible; there would be no data left after traversing all those unreliable, non-deterministic links :^)

What drives this denial and rejection of the obvious? What makes people so desperate to implement automation and process control on systems known to be unreliable, and to invent new private protocols instead of even attempting to use what's already available? What form of the Stockholm Syndrome is it that makes people so fiercely defend the vendor who is responsible for so many of the problems and service calls that really have nothing to do with the actual automation and controls work performed? On these platforms, even if I make the most perfect product, there will be goodwill lost that's beyond my control. Yet the next product will absolutely, without question, use that platform? Why do people in this industry reject, out of hand, efforts to improve their lot by opening and commoditizing the very things that they fight with and complain about the most, often with a vehemence and closed-mindedness reserved for religious issues and other articles of faith?

This is a very curious and counterintuitive phenomenon, and as someone who is interested in providing alternatives, I seek to understand it. Is it fear? Money? Is absolute conformance and universality _that_ important that so many will pursue it at any cost? It boggles the mind if you look at it with any degree of detachment. What is the essence of this attitude and what causes it? Why are people so reluctant to talk about the reasons and rationale? Why is proprietary good?


Regards

Curt Wuollet, Owner
Wide Open Technologies

By Mark Bayern on 31 July, 2000 - 2:59 pm

>So many of the incompatibilities and problems could be overcome by examining
>the example of the Internet, universal, ubiquitous exchange of data with high
>reliability and incredible ease in comparison to the exixting state of the
>art in
>this market sector.

We're probably too late. It seems that most people attribute internet communications to the 'fact' that all the computers run one operating system.

Mark

By Jansen, Joe on 2 August, 2000 - 11:53 am

yeah, Unix.

-> We're probably too late. It seems that most people attribute
-> internet communications to the 'fact' that all the computers
-> run one operating system.

By Roger Irwin on 2 August, 2000 - 12:10 pm

It's true that the internet was actually born and bred on Unix; in fact Unix servers form the mainstay of the internet (even MS run Hotmail on Sun systems!).

But if truth be known, the success and interoperability of the internet has been due to the openness of its protocols.

(As someone once said, "Thank god TCP/IP is not patented.")

By Jansen, Joe on 31 July, 2000 - 3:26 pm

Hi Curt!

I will take a try at responding to some of these issues:

I think that part of it is that the automation and IT industries are no longer made up of the 'geeks of old', so to speak. The way I put it to people is that "I was a geek before it was cool".

Those of us that were writing assembler programs on our Commodore 64s oh-so-long-ago are by nature out of the mainstream. We were the ones that usually ate at the lunch table by ourselves with our noses buried in the Apple II programmer's reference. And we liked it that way! We did not want to be bothered by the peer issues of the mainstream.

(trying to keep flames to a minimum, realize that I do not mean everyone, just some people.)

Many of the new automation engineers and (especially) IT staffers do not remember the world before Windows. If I go upstairs to the computer room right now and ask the 3 people in there what they know about the "Trash-80 CoCo 2", I would get a blank stare. They literally would be unable to even translate that into the correct model name. And forget about file redirection. I spent about half an hour explaining redirection at a DOS prompt once.

These groups of people got into it after Windows was dominant. And since they came to it after it was cool, they are squeamish about going outside the accepted boundaries. Their comfort zone is smaller.

Also, as more business managers get involved, they have even less understanding of what is out there. All they know is Windows. And this is because they go to Best Buy, Circuit City, Wal-Mart, or whatever other chain store, and that is what they see.

For those of us that remember the "holy wars" of Commodore vs. Apple II vs. PC-compat., it is easy to get the concept that if you don't like one product, you just switch to the other. For those that have grown up on only a single platform, there is literally nothing else out there. They cannot accept that there is a viable alternative
to Windows.

I am the same way on some things, and I am sure you have some things that people consider "quirks". I still keep track of all of my
project notes in bound journals, using a pencil. I realize that there is software out there that is better. I realize that there are a
thousand and one arguments for making all of my notes electronically rather than on paper. I still do it because when I developed the habit, there wasn't any viable alternative.

Likewise, Microsoft is their habit. It is all they are comfortable with.

In fairness, Windows does have redeeming value: It is VERY easy to set up. I have computers that cannot run Linux because of hard drive sizing issues, monitor incompatibilities, video and network drivers, etc. But they run windows as well as any other machine. MS has a very simple interface for the user. When I set up a
database and file server, I used NT 4.0 SP6. Not because it was my preference, but because I needed it up that day, and Linux would have taken longer to get drivers and such ready, install,
compile, etc. (Note: I am in the process of migrating :^} )

This 'ease of use' is the biggest driver. Since everyone has less time to do more stuff, we like to grab something that we can slap into place, and deal with minor issues as they arise. I have never gotten a bit of argument when my server is offline, as I just say "Windows crashed. It will be back up in about 15 minutes". All the suits just smile and say "Oh. OK. Let us know when it is ready".

Since there is the Windows buy-in on ease of use, everything else comes part and parcel. Don't like what MS did to Netscape? Still want Windows? Then you compromise your principles and fire up IE.

Translate that to the automation world, and you have AB/Rockwell. Nobody has ever told me that they thought AB was price competitive. Nobody has ever accused AB of having the latest technology. They are usually a step behind and twice the price. But they are "standard". Many, many, many places spec it, just because that is what they are used to. And again, you get what they offer as a package deal.

I will stop here, as I could end up writing a book if I get going... All responses are welcome, provided that they are thought out and
civilized.

--Joe Jansen

By Roger Irwin on 2 August, 2000 - 9:31 am

> I think that part of it is that the automation and IT industries are
> no longer made up of the 'geeks of old', so to speak. The way I put
> it to people is that "I was a geek before it was cool'.
>

Automation has not had geeks; up till recently they have got by quite happily pretending that everything can be assimilated to a bank of relays or a cam, and that user interfaces can be made up of screens that are switched on and off.

Now things are getting more serious, and automation people are understandably finding it difficult to come to terms with the changes.

But I think they must try to learn some of the fundamentals; so often I see my colleagues and people on this list talking about absurd solutions.

> Those of us that were writing assembler programs on our
> Commodore 64 oh-so-long-ago are by nature out of the
> mainstream. we were the ones that usually ate at the lunch table
> by ourselves with our noses buried in the apple II programmers
> reference. And we liked it that way! We did not want to be
> bothered by the peer issues of the mainstream.

In an industry that doubles each year, it is only natural that half the people have less than one year's experience ;-)

What amazes me is that inexperienced users are happy to believe any marketing release or commercial guy who happens their way, and...

> Also, as more business managers get involved, they have even less
> understanding of what is out there. All they know is windows. And
> this is because they go to best buy, circuit city, Wal-Mart, or
> whatever other chain store, and that is what they see.

What I do not get is why everybody is an expert when it comes to computers. If you decide to install a 4-quadrant brushless DC motor and drive on a simple ventilation fan, nobody would question your wisdom; do anything with a PC and every Tom, Dick and Harry is there telling you how it should have been done.

> They cannot accept that there is a viable alternative
> to windows.

Well, Microsoft do spend an awful lot on evangelists. One of the things they spend an awful lot of effort on is promoting Windows as the easy solution. They also promote the image that 'geeks' actually waste time because their solutions, even when idealistically correct, waste time.

It is an excellent approach: MS products are characterised as being quick to get 'something' running. In fact my experience with MS is that you can get 80% of the way very quickly; it is just the last 20% that turns out to be a nightmare!

Not that I advocate that everybody types documents with vi! It is a very efficient editor in the hands of an expert, but it does take years to get that expertise!

No, there has to be a balance. A small investment in time trying to actually understand what you are doing, and selecting the right approach, pays big dividends in the long term.

But at the end of the day that marketing is good. You can waste as much time as you like with a Windows solution, and everybody sympathises with you. "Damned Windows," they say, and crack a few jokes about blue screens and 'if MS made cars....'. But just waste a couple of hours trying to get a non-Windows solution working and everybody comes down on you because you are not using Windows.

My experience with Unix is that it takes longer to do most setup tasks. But when I do have problems I can get to the bottom of things (which is often impossible with Windows), and once I have the Unix box set up, it is set up for good. In fact I believe a lot of Windows stability problems are due to misconfigured systems. But the reason they become misconfigured is that when the wizards don't work you are left with ugly workarounds; you cannot get under the hood and get to the root of the problem, partly because the rampant use of wizards means you never actually get to learn what is under the hood!

I must confess, I deploy Windows far more than non-Windows. There are cases where Windows is a good solution (their desktop environment is sleek), but there are many occasions where there is a much better case for non-Windows, especially in dedicated embedded boxes. Yet even here I err on the side of Windows. Despite the fact that I am expert in using Unix and specialist operating environments, I have a fear of deploying them, even though with Windows I can waste 10 times as much time and risk having unstable and/or unscalable systems.

And yet some customers are not interested in the inner workings; they are just interested in what the system achieves, and the price. As it should be. In these cases my use of Windows is limited to front ends that must co-exist with Windows desktops. I do far more for far less in these circumstances.

Managers should learn to look at the bottom line, and long term results.

By Matthew da Silva on 3 August, 2000 - 4:01 pm

It will not be long now before computer systems engineers are certified just like any other type of engineer. As IT becomes 'mission-critical' for companies, the CEO will be asking for people with more accountability. Maybe this is one of the problems with the 'everyone's an expert' syndrome. It's the same with literature. Technical writers (which I have been in the past and shall possibly be again, in future) are largely considered an unfortunate adjunct to the expense of a project.

Not only that, but they are often brought in only at the end of the development project. What happens is that they suddenly start asking for a
lot of changes to screen displays, error messages and such, to match their terminology and taxonomy choices, which are being made in the documentation. Because these types of 'user documentation' are part of the software source
code, delays result (the alternative is to give the tech. writer the code and have them make the changes themselves -- would you do this?).

When it comes to documentation, every engineer and project manager is telling you how it should be done. As time goes on, the importance of
documentation will increase. Online help will be expensive and anyway, the online help will be, essentially, the same material as is being put into the paper manuals. Tech. writers must (should) learn more skills; librarian skills, for example, to help with classifying and structuring information. Indexing is a valuable skill that most tech. manuals lack. Indexing is also the key to effective online help. It is labor-intensive (cannot be automated) and requires significant efforts and commitment on the part of the project manager and the tech. writer.

Future software cycle times should shorten more, and more. In this scenario, it should be a requirement for tech. writers to be brought into development projects earlier and to be given an amount of latitude commensurate with their analytical abilities. Despite appearances, many tech. writers are quite intelligent and may even have useful input into marketing and sales
strategies. Like IT professionals, tech. writers will be more valuable if brought in-house, rather than used as an outside resource.

Cheers,
Matthew
Tokyo, Japan

>> when it comes to
computers. . . every tom dick and harry is
there telling you how it should have been done.<<

> It is not long now that computer systems engineers
> will be certified just like any other type of engineer.

I just noticed this yesterday in the Spring 2000 "PE Newsletter" by the Texas Board of Professional Engineers. It's been in the works for a while, it seems.

"In June 1998, Texas became the first state to license software engineers. This action had an influence on the recognition by ABET of software engineering as an engineering discipline. Subsequently, a committee formed by IEEE and ACM has developed a "body of knowledge" needed to serve as the basis for a national NCEES exam and curriculum in software engineering. Board Member Dave Dorchester, P.E., has spearheaded this effort from its inception."

By Matthew da Silva on 10 August, 2000 - 10:56 am

What is it about Texas that makes it so enterprising? On the Net, I've met many people who live in Texas, and who are unusually progressive and original. This should not cast aspersions on the capabilities or populations of other states, but it seems that the South is leading in very many ways. Must be all that hot food and wide-open spaces; and being close enough to Mexico an' all.

Cheers.

By Anthony Kerstens on 14 August, 2000 - 10:28 am

Similar here. Several Ontario, Canada universities have created Software Engineering programs. They're set to graduate their first batch of students in the next couple of years, and to have the programs' professional accreditation granted, hopefully before they graduate.

As for Texas and other southern states being enterprising, it might have to do with all the Canadian talent moving south of the border!!
:-)

Anthony Kerstens P.Eng.

Joe:

I think that you are accurate to a degree. Before I got to my current job I used C and UNIX. I also had experience developing systems back when computers didn't have virtual memory and we had to swap pages in and out of memory. How about counting the time for each machine instruction so you knew when your slice of time was up? I didn't do the Commodore 64, but I worked with CP/M, Apple II
and a lot of other early computers.

For a long time I pushed UNIX and what I considered "open" standards. I was not comfortable when I had to start using Windows. I refused to even consider it until NT came along. Once I started using NT I realized it was good enough. Being good enough was more than enough of a reason to use it, since it is the dominant computer architecture.

I used to be like the old cigarette ad, "I'd rather fight than switch!", but once I finally switched I found that the benefits outweigh the limitations. I don't like Oracle's CEO, either, but I like their products...

Sam

By Curt Wuollet on 2 August, 2000 - 1:43 pm

Hi Joe

So, to sum it up, familiarity? Ease of use I attribute to familiarity because for example, I find NT hard to use and Linux easy because
that's what I'm familiar with. I didn't think the resistance to change was that powerful. Off-list I got the word that "My boss makes me use it".

> I will take a try at responding to some of these issues:
>
> I think that part of it is that the automation and IT industries are
> no longer made up of the 'geeks of old', so to speak. The way I put
> it to people is that "I was a geek before it was cool'.
>
> Those of us that were writing assembler programs on our
> Commodore 64 oh-so-long-ago are by nature out of the
> mainstream. we were the ones that usually ate at the lunch table
> by ourselves with our noses buried in the apple II programmers
> reference. And we liked it that way! We did not want to be
> bothered by the peer issues of the mainstream.
>
> (trying to keep flames to a minimum, realize that I do not mean
> everyone, just some people.)
>
> Many of the new automation engineers and (especially) IT staffers
> do not remember the world before windows. If I go upstairs to the
> computer room right now and ask the 3 people in there what they
> know about the "Trash-80 coco 2", I would get a blank stare. They
> literally would be unable to even translate that into the correct
> model name. And forget about file redirection. I spent about 1/2
> hour explaining redirection at a DOS prompt once.

I wonder if there is a relation to the number of non-professionals who wander into it because it's "easy with Windows".


> These groups of people got into it after windows was dominant.
> And since they came to it after it was cool, they are squeamish
> about going outside the accepted boundaries. Their comfort zone
> is smaller.
>
> Also, as more business managers get involved, they have even less
> understanding of what is out there. All they know is windows. And
> this is because they go to best buy, circuit city, Wal-Mart, or
> whatever other chain store, and that is what they see.
>
> For those of us that remember the "holy wars" of Commodore vs.
> Apple II vs. PC-compat., it is easy to get the concept that if you
> don't like one product, you just switch to the other. For those that
> have grown up on only a single platform, there is literally nothing
> else out there. They cannot accept that there is a viable alternative
> to windows.
>
> I am the same way on some things, and I am sure you have some
> things that people consider "quirks". I still keep track of all of my
> project notes in bound journals, using a pencil. I realize that there
> is software out there that is better. I realize that there are a
> thousand and one arguments for making all of my notes
> electronically rather than on paper. I still do it because when I
> developed the habit, there wasn't any viable alternative.

The problem is, one of my quirks is reliability; I have customers I haven't heard from in a year. Of course, service calls may be billable. Windows is good enough for a lot of things, but controls?

> Likewise, Microsoft is their habit. It is all they are comfortable
> with.
>
> In fairness, Windows does have redeeming value: It is VERY easy
> to set up. I have computers that cannot run Linux because of hard
> drive sizing issues, monitor incompatibilities, video and network
> drivers, etc. But they run windows as well as any other machine.
> MS has a very simple interface for the user. When I set up a
> database and file server, I used NT 4.0 SP6. Not because it was
> my preference, but because I needed it up that day, and Linux
> would have taken longer to get drivers and such ready, install,
> compiled, etc. (Note: I am in the process of migrating :^} )
>

I have seen some valid cases the other way too; in fact I have some machines here I got because they wouldn't run Windows.

> This 'ease of use' is the biggest driver. Since everyone has less
> time to do more stuff, we like to grab something that we can slap
> into place, and deal with minor issues as they arise. I have never
> gotten a bit of argument when my server is offline, as I just say
> "Windows crashed. It will be back up in about 15 minutes". All
> the suits just smile and say "Oh. OK. Let us know when it is
> ready".

:^)

> Since there is the windows buy in on ease of use, everything else
> comes part-and-parcel. Don't like what MS did to Netscape? Still
> want windows? Then you compromise your principles and fire up
> IE.
>
> Translate that to the automation world, and you have AB/Rockwell.
> Nobody has ever told me that they thought AB was price
> competitive. Nobody has ever accused AB of having the latest
> technology. They are usually a step behind and twice the price.
> But they are "standard". Many, many, many places spec it, just
> because that is what they are used to. And again, you get what
> they offer as a package deal.

But why are AB, Siemens, GEF et al. in bed with MS? Why would GEF, for example, replace a good stable UNIX Cimplicity product with a product that crashes during the demo? And they refuse to continue the UNIX product. These guys treat me like a raving lunatic when I want to use something else, and simply ignore the reliability aspect.

> I will stop here, as I could end up writing a book if I get going...
> All responses are welcome, provided that they are thought out and
> civilized.

I am not looking for an X sucks, Y rules type of discussion. I can find those elsewhere. The paradox I'm working through is the difference between stated priorities and priorities in practice.

Curt W.

By Jansen, Joe on 4 August, 2000 - 9:16 am

-> So, to sum it up, familiarity? Ease of use I attribute to
-> familiarity because for example, I find NT hard to use
-> and Linux easy because that's what I'm familiar with.
-> I didn't think the resistance to change was that powerful.
-> Off-list I got the word that "My boss makes me use it".

<snip>

Yes, to a degree. The catch here though, as stated to you off-list, is that you are not always dealing with the engineer's familiarity. Many times it is the suits in the front office that are making the platform decision. They go out and play some golf with a vendor, and suddenly they are standard. Or, as another scenario, they get one or two vendors to come in and give them a dog-and-pony show, then boil down the feature lists to some common denominators, and tell the engineer "You can use whatever system you want, as long as it has COM, OPC, runs an Access database, and has feature A, B, and C", because that is what the sales people that they talked to promoted as their biggest features.

Summed up: You are dealing with familiarity for non-technical individuals.

-> I wonder if there is a relation to the number of non-professionals
-> that wander into it because it's "easy with windows"

You actually wonder? :^}

<snip>

-> The problem is, one of my quirks is reliability, I have customers I
-> haven't heard from in a year. Of course, service calls may be
-> billable. Windows is good enough for a lot of things, but
-> controls?

Not in my process. Of course, if anything bumps in my process we are out of production for 6 to 8 hours. Fortunately the VP of engineering has been convinced that control by Windows is a "bad thing". He presses us about every 4 months, but we have a standard list of replies that keeps him at bay! (Typically "How long has it been since you rebooted the computer on your desk? That would be a complete system shutdown and restart.")


-> But why are AB, Siemens, GEF etal. In bed with MS? Why
-> would GEF for example, replace a good stable UNIX Cimplicity
-> product with a product that crashes during the demo? And they
-> refuse to continue the UNIX product. These guys treat me like a
-> raving lunatic when I want to use something else and simply
-> ignore the reliability aspect.

Bandwagon. Also, they are no longer selling to engineers; they sell to executives. The suits only know Windows. It isn't even that hard to distract them with questions like "Do you have a full-time Unix administrator on staff? You would need one, you know, if you tried this Linux thing. They cost anywhere from $50K to $100K to get one that knows what they're doing. Our Windows package, however, is SOOO simple, my 8-year-old set up the oil refinery down the street...."

And once they are in the door, they can blame operator error, blame Microsoft, blame the hardware, or, worst of all, say that the plant engineers are at fault. "Gee, I haven't had any service calls from the oil refinery that my 8-year-old set up. How competent are your people?"

-> I am not looking for a X sucks, Y rules type of discussion. I can
-> find those elsewhere. The paradox I'm working through is the
-> difference between stated priorities and priorities in practice.

Have you ever seen a spec that didn't include reliability? Nobody has ever said "We want this system to control our process, but it can go down whenever, we really don't care...." The problem is that they feel they must trade reliability for ease of use. Their IT staff cannot support anything that didn't come from Redmond, so supporting a Unix system day to day is perceived to be more costly than the Windows-based system. Why? Because deep down, they cannot believe that there could be a system that doesn't crash as much as Windows. The thought process here is "If it were really possible to make a system that was that stable, Microsoft would have done so. They are, after all, the largest software company; they have the resources to do it right, so that must be as good as it can be. If we went to Unix, we would have the same problems, but we would either need an expensive support person, or it would take twice as long to get back up and running due to unfamiliarity".

I appreciate that everyone is taking the high ground with this. The only (candle)flame I got off list was someone telling me that I forgot about those that hunched over their old Atari computers....

mea culpa.

--Joe Jansen

By Roger Irwin on 10 August, 2000 - 9:00 am

> "You can use whatever system you want, as
> long as it has COM, OPC, Runs an Access database, and has
> feature A, B, and C" because that is what the sales people that
> they talked to promoted as their biggest features.

Of course people promoting their products on the basis of COM/OPC are going to feel a bit silly now;-)

> The thought process here is
> "If it were really possible to make a system that was that stable,
> Microsoft would have done so. They are, after all, the largest
> software company, they have the resources to do it right, so that
> must be as good as it can be.

Microsoft are very good at their core business, which is making software that is quick to learn to do something, like one of those electronic pianos that you play by following numbers and colours. Their desktop technology is good, but they keep changing the standard, much faster than the lifetime of IA systems. The OPC Foundation claim that MS keep their distance (sure, they let any Tom, Dick or Harry hold their AGM on their Redmond campus...); the truth is they are just not that interested in IA. If people are stupid enough to use their OA solutions in IA systems, that is fine by them, but they are not going to extensively modify their OA solutions to IA requirements, and it would be a mistake to do so. They did promise specific IA solutions, but these have not been forthcoming because the embedded division is up to their necks in it with the consumer device market, where they are suffering real competition from the likes of EPOC and Palm.

Of course, Unix is a little different. Unix systems have always maintained the edge on reliability and scalability, and still do. But as system specs
go steadily upward we have reached the point where MS systems, despite being inferior (and before you flame me, remember that SUN will sell you systems
capable of handling a million-plus users off the shelf), are more than capable of handling the limited capacity of IA requirements.

Technically, the best solutions for IA are systems like WRS or QNX, but the downside of such niche systems is that it can be very difficult to get drivers and general purpose apps where needed.

Perhaps the reason Linux is gaining so much ground in IA is not because it is automatically adept, but because it is so adaptable. People do embedded Linux with 2M flash based systems. Other people do very large databases, or run it in conjunction with hard real time schedulers. Also, people are using it on non PC hardware; the fan cooled CPU modules that are now standard in the PC industry are just out of line with many embedded requirements, but linuxers have a wide range of hardware to choose from, and we are starting to see IA products shifting to RISC based solutions such as the ARM.

> If we went to Unix, we would have
> the same problems, but we would either need an expensive support
> person, or it would take twice as long to get back up and running
> due to unfamiliarity".

Very few people know how to handle or program NT correctly. I know I don't, but I also know enough to realise the ignorance of the 'experts' I
resort to for help. I do not think the arrival of W2K is going to help that situation, given that it is all new and more complex under the hood.

One thing I do know is that one only needs to know 2 OS's in this world, MS and non-MS, because everybody else works towards a common style
for APIs and command shells, whilst MS invariably do the opposite. Telnetting into and maintaining a small embedded QNX box is essentially similar (from a sysadmin's point of view) to telnetting into the mega galactic Ultra Sparcs that run Hotmail or Amazon. Even BeOS lets you telnet into a familiar bash environment.

Given that MS do not provide an OS for IA applications, and thus there are inevitably situations where MS just does not cut it, one could say that learning non-MS systems (learn one, use all) is a much better option for IA personnel
than learning Windows but not being able to understand all systems.

Of course I know the suits will not buy this, and I know people who have never used non-MS systems (which is most people) will not accept it, but the world
is full of rules and conventions that make things worse rather than better, and one simply cannot re-educate those millions of people who think MS invented the Internet and Bill Gates wrote DOS (BTW, anybody know what happened to Tim Paterson?).

By Dave Ferguson on 17 August, 2000 - 2:45 pm

I love this discussion........

As I have said, and as has been said by others on the list........the reason everyone loves UNIX control systems IMHO is that they have spent the time to know how to "tweak" it to be totally (?) stable but they will not devote the same time and attention to lowly NT.

I have systems running 24/7 that have never been rebooted. I also spent a large amount of time setting things up front. The bad thing is I am basically a full time IT person now because I have like 25 systems and 150 PC's out there and the IT people are not used to 24/7 response times. By this I mean, we still have hardware
crashes because of a management decision (again IMHO) to not buy "hardened" hardware, but those are rare.

I was totally against control via NT but after learning it as well as I knew UNIX, I now have little to no problems.

I control the machines via user profiles so that someone cannot play solitaire etc., and the interface is locked down to just running my
control software, and we actually "tested" things and tweaked them just like I had to do the first time with UNIX, and we devoted the time to learn the stuff. I am now an MCSE only because I like that kind of cheap personal gratification to see how I am doing (only for myself). We also ghosted the machines to network drives so that if there are any hardware issues we can restore the machines in like 6 minutes.

It made me realize that different does not mean "bad", it only means different. In today's world market you better be able to accept change and other opinions or you are doomed to fail............

Adapt or dye..........and oh by the way, the Internet is not going away and why if I had to sum it up isn't it going away ?

Because I do not care that I have connected to a Unix, NT, Linux, Mainframe etc system to get my information and that it took 25 hops through routers all over the world..........I only care that I get my INFORMATION..........that is the bottom line. Before it is done, like it or not........the internet browser will be the control system of use.

I already have systems running that just gather info and spit it out to web pages for control and diagnostic information........get used to it. I had spoken at the ISA show numerous times 5 years and more ago about this "revolution" and people laughed at the time but look around.............

Computers were designed to make things easier, they are just a tool, like a hammer. If you give a hammer to me and you give one to my friend who is a Master Carpenter, I will build a stick house
and he will build a work of art. You must know how to use your tools.......

I better get off my soap box.........

Dave Ferguson
Blandin Paper Company
UPM-Kymmene
DAVCO Automation

By Jansen, Joe on 18 August, 2000 - 12:18 pm

I tried to resist. I really did. But this just screams at me!!!

-> I love this discussion........
->
-> As I have said and has been said by others on the list........the
-> reason everyone loves UNIX control systems IMHO is that they have
-> spent the time to know how to "tweak" it to be totally (?) stable
-> but they will not devote the same time and attention to lowly NT.

Because everything is either hidden in the cryptic registry, or simply not available to change. For example: I want to free more
resources to do my database processing on my server. A new production line is going in, and I don't want to upgrade my server hardware, but I would really like just a little better performance on my database. I decide that the best way to free resources is to shut down all unnecessary processes. Let's start with the biggest one: the GUI.

Linux: No sweat. Change my runlevel.

NT: Errr.....

OK, so we won't shut down the GUI. Not sure why I am forced to
have it running, though.

etc. etc. etc.

->
-> I have systems running 24/7 that have never been rebooted. I
-> also spent a large amount of time setting things up front. The
-> bad thing is I am basically a full time IT person now
-> because I have like 25 systems and 150 PC's out there
-> and the IT people are not used to 24/7 response times. By this
-> I mean, we still have hardware crashes because of a
-> management decision (again IMHO) to not by "hardened"
-> hardware, but those are rare.

So if problems are "rare", why are you basically a full time IT support person? This seems like a contradiction to me. How long have they been running 24/7 without reboot? What service pack
does that have you on? Do you mean never reboot, or no unscheduled reboot? If they run 24/7 with nary a problem, what are you doing as a full time IT person? I guess I don't understand your
statement here....

-> I was totally against control via NT but after learning it as well
-> as I knew UNIX, I now have little to no problems.
->
-> I control the machines via user profiles so that someone cannot
-> play solitair etc. and the interface is locked down to just
-> running my control software, and we actually "tested" things
-> and tweaked them just like I had to do the first time with UNIX
-> and we devoted the time to learn the stuff.

So do you have to start over with Win2K? I am asking because I don't know. I have some experience working with NT Server, but not 2K of any sort.

-> I am now an MCSE only because I like that kind
-> of cheap personal gratification to see how I am doing (only for
-> myself).

Whatever turns you on. I could get one too, I am sure. I have met MCSE's that couldn't do command line redirection.

-> We also ghosted the machines to network drives so that
-> any hardware issues and we can restore the machines in like 6
-> minutes.

A good method of backup. Everyone should have one.


-> It made me realize that different does not mean "bad" it only
-> means different. In todays world market you better be able to
-> accept change and other opinions or you are doomed to
-> fail............

I COULDN'T AGREE MORE! THAT IS THE HEART OF MY
ARGUMENT! NT IS NOT THE ONLY WAY!

-> Adapt or dye

(What color?)
or is that 'die'?

-> ..........and oh by the way, the Internet is not going
-> away and why if I had to sum it up isn't it going away ?

WHAT?!?!?!?! Where did that come from?

-> Because I do not care that I have connected to a Unix, NT,
-> Linux, Mainframe etc system to get my information and that
-> it took 25 hops through routers all over the world..........I only
-> care that I get my INFORMATION..........that is the bottom line.
-> Before it is done, like it or not........the internet browser will be
-> the control system of use.

Not true. The browser is good for replication of data reporting throughout an enterprise. The thing to realize, though, is that the WWW is NOT the Internet. It is a subset. It is not the most used part (that would be e-mail), and it is FAR from being the most efficient way of transporting data. I would much rather have my app talk to your app using TCP/IP and not have to deal with all the overhead of HTML document formatting. Just give me the data, and leave out the <FONT> and <TABLE> crap.
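To illustrate what I mean, here is a minimal sketch in Python of two apps swapping nothing but the data over a plain TCP socket. The port number and tag names are made up purely for illustration; it is a sketch of the idea, not a recommended protocol.

import socket

HOST, PORT = "localhost", 9500   # hypothetical data port

def serve_once():
    """Answer one client with a few raw process values."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _addr = srv.accept()
        with conn:
            # Just the data: one "tag=value" pair per line, no markup.
            conn.sendall(b"tank_level=73.2\npump_speed=1450\n")

def read_values():
    """Client side: pull the values and parse them into a dict."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        raw = cli.recv(4096).decode()
    return dict(line.split("=") for line in raw.splitlines())

if __name__ == "__main__":
    import threading, time
    threading.Thread(target=serve_once, daemon=True).start()
    time.sleep(0.5)       # give the listener a moment to bind
    print(read_values())  # {'tank_level': '73.2', 'pump_speed': '1450'}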

-> I already have systems running that just gather info and spit it
-> out to web pages for control and diagnostic information........
-> get used to it.

As I said. It is good for that. Don't tell me that the web browser is the end-all be-all of control platforms though.

Although, I guess one of the good points of putting it all into a web browser is that I could run your browser apps in Netscape on Linux..... :^}

-> I had spoken at the ISA show numerous times 5 years and more
-> ago about this "revolution" and people laughed at the time but
-> look around.............
->
-> Computers were designed to make things easier, they are just a
-> tool, like a hammer. If you give a hammer to me and you give
-> one to my friend who is a Master Carpenter, I will build a stick
-> house and he will build a work of art. You must know how to use
-> your tools.......

The point is though, that the master carpenter probably owns more than one hammer. And I would even venture to guess that his tools are from more than one manufacturer. That is the point. Windows is not the best answer for everything.

-> I better get off my soap box.........

As will I.

--Joe Jansen

By Michael Griffin on 18 August, 2000 - 12:24 pm

At 16:37 14/08/00 -0400, Dave Ferguson wrote:
<clip>
>As I have said and has been said by others on the list........the
>reason everyone loves UNIX control systems IMHO is that they have
>spent the time to know how to "tweak" it to be totally (?) stable but
>they will not devote the same time and attention to lowly NT.
<clip>
>I was totally against control via NT but after learning it as well as
>I knew UNIX, I now have little to no problems.
<clip>
My own impression is that anyone who is a genuine Windows NT expert usually also has Linux (or some other Unix) experience. They seem to make a living with Windows, but often their first love is Linux. I consider a "Windows expert" to be someone who can get to the bottom of a problem by means other than changing things at random and hoping for the best. Someone who is genuinely interested in operating systems often knows several fairly well.

I'm glad to see that you are able to "tweak" Windows to get it to do what you want. There seem to be a lot of people who have some sort of Windows certificate or "ticket", but very few who really know what they are doing.
For office systems this situation seems to be acceptable (or at least, tolerated). Given the number of computers with Windows NT operating
systems showing up in industry however, it's a shame that for most of us there seems to be so very little genuine Windows related expertise available that we can draw on for difficult problems.


>I have systems running 24/7 that have never been rebooted. I also
>spent a large amount of time setting things up front. The bad thing
>is I am basically a full time IT person now because I have like 25
>systems and 150 PC's out there and the IT people are not used to
>24/7 response times.
<clip>
I'm not sure what your figures of "25 systems" versus "150 PCs" mean, but it sounds like you have become indispensable to the operation of your plant. What is it you have to do with these PCs that makes them a full time job though? Repair and maintain them (hardware and software)? If so, that seems a rather expensive use of your time when you consider that you
should be able to buy at least 100 small PLCs for the cost of your annual salary alone.

**********************
Michael Griffin
London, Ont. Canada
mgriffin@odyssey.on.ca
**********************

By Dave Ferguson on 21 August, 2000 - 12:58 pm

Michael Griffin responded and here is my reply..........

By full time I mean that we have roughly 100 PLC's (50 AB, roughly 20 Siemens, 20 GE and 10 oddballs). We have like 25 HMI systems as well as a large DCS and another major system (ABB). These systems are also tied to, give or take on any given day, 150 PC's. We also have links from all of these systems to an upper level "shop to top" system, as well as HMI maintenance
diagnostics systems. We also administer a 100 Mb Ethernet network of VLANs and managed switches as well as routers. We also manage an internal Intranet and 5 servers.

By "full time IT person" I mean that new users, security, HMI and DCS revisions, PLC automated backup system, server tape backups, user "help desk" issues, network management (software and hardware) etc.

Like most management people I work with and for, you seem to think that because it is "automated" it just sets up and runs itself. This is part of the snubbing of NT. I am not a Linux
expert. Just like I became an AB "expert", I had to become an NT "expert". I use the term EXPERT loosely because I don't think there is such a thing except in people's minds.

What I was trying to get at is that now I have all of the issues that the "business system" IT people had 10 and 20 years ago: change management, engineering, backup and recovery, user management, security software revisions and testing, etc.

To add a loop in the field or change a calibration requires a huge outlay of personnel time that management needs to realize.

For instance, to change the range of a level loop requires the actual recalibration, documentation, DCS database revision, graphics
revisions, links to upper level range change, database change in upper level system, graphics changes in upper level system and documentation, drafting, data sheets etc. THIS DOES NOT HAPPEN AUTOMATICALLY.

Managers better wake up and realize that this gets done with my salary not 100 small PLC's.

My entire point was that NT works if you know what you are doing, just like AB works if you know what you are doing, or Fisher-Rosemount works if you know what you are doing, or UNIX works if you know what you are doing. Usually not liking something comes from not taking the time to learn it. I usually like the first thing I learned and must remind myself to be open to CHANGE. This
problem IMHO is exacerbated by the fact that there are no manuals for anything anymore, only electronic "help" files. The problem is that a help file only works if I already know what
I am looking for; I cannot discover a "feature" from one. I like BOOKS......but that is another discussion.

In my plant I can shut the entire plant down by turning off the wrong switch gear or I can shut it down by pulling the wrong air line or
screwing up just the right control system. My point is you better know what you are doing or don't do it. NT works if you devote the time to learn it.

Dave Ferguson
Blandin Paper Company
UPM-Kymmene
DAVCO Automation

By Roger Irwin on 18 August, 2000 - 4:14 pm

> I love this discussion........
>
> As I have said and has been said by others on the list........the
> reason everyone loves UNIX control systems IMHO is that they have
> spent the time to know how to "tweak" it to be totally (?) stable but
> they will not devote the same time and attention to lowly NT.

Or perhaps you just have not read what is being written.........

I spend more time working with Windows than UNIX, and as I have pointed out I err on the side of Windows (in the many cases where either would do) because it is 'acceptable' to lose time on Windows, whereas if you have problems with UNIX everybody says you should have used Windows.

Little wonder then that when I do deploy UNIX it is an ideal application for the platform, and I lose zero time with those boxes. I mean zero. I mean the boxes are installed by electricians who plug them in and off they go. I do need to set up a first case, which I can then replicate ad infinitum
just by copying the disk image and altering the IP/hostname.
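For what it's worth, the 'copy the image, then fix up the identity' step can be a few lines of script. The sketch below is a Python illustration assuming Debian-style file locations (/etc/hostname and /etc/network/interfaces); real boxes vary, so treat it as a sketch of the idea rather than the exact procedure I use.

import fileinput

def personalise(hostname, ip_addr,
                hostname_file="/etc/hostname",
                interfaces_file="/etc/network/interfaces"):
    """Rewrite the hostname and the static IP on a freshly cloned disk."""
    with open(hostname_file, "w") as f:
        f.write(hostname + "\n")
    for line in fileinput.input(interfaces_file, inplace=True):
        # Replace any existing 'address ...' line with the new static IP.
        if line.strip().startswith("address "):
            print("    address " + ip_addr)
        else:
            print(line, end="")

if __name__ == "__main__":
    personalise("hmi-07", "192.168.1.57")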

Of course it could well be the case that if I used UNIX where I use Windows I might have more trouble than I do with Windows, but Windows does lose me an awful lot of time. I do not claim to be an expert on NT, but I am not too proud to seek help. I must admit I do get nervous about spending time on learning MS stuff because it keeps changing. Years ago I did put a lot of time into learning OS/2, and look where that got me.........

But that is by the by. In the areas where I do employ UNIX, there is NO MS equivalent. They have been making promises for years about an OS that may be suitable for headless embedded control/networking tasks, and they have never come up with the goods. Now the commodity PC market dictates that PC hardware must be so powerful that it requires multiple fans to keep things cool. At the same time people are turning out RISC processors that offer Pentium performance on so little power that you could feed them from a linear regulator: small size, no fans, and ideal for mounting on a DIN rail. NT was originally offered on a wide range of platforms; that has steadily reduced to just one. Will the last person to promote microkernels please remember to shut down the system log.............

Oh yes, I went through the WinCE CDROM with a fine-tooth comb when they first launched it, and of course that just keeps changing too; we are now
at the third reincarnation, but I am not making PDA's, thanks all the same. Perhaps I should go and study DCOM before it disappears.

> I have systems running 24/7 that have never been rebooted.

Nobody doubts this can be achieved, but why crow about it? People expected this of UNIX long before NT came out.

Most 'new' NT features are things that have been done on UNIX, and although there are still things UNIX boxes can do that NT cannot, most (myself included) think that technology has moved along such that NT can suit most IA requirements (although X Window capability would be nice).

BUT, if UNIX can do it, why deploy NT?

I think the Windows desktop is very slick, and everybody knows it, but my applications are not particularly graphical; in fact the interface is often remote or nonexistent.

I was enthusiastic for OS/2 and then NT because they were to offer UNIX-like power at NT cost. It still costs a lot to have a 1M+ user SparcStation, but it now costs less to install UNIX on an NT-sized machine. So then we talk about 'total cost of deployment'. Well, like I said, I expect my apps to run unsupervised, but unlike Dave I am not perfect, and in any case, sometimes the customer wants changes. Well then, I can enter a UNIX box from anywhere and do anything remotely with no additional software packages. Software development? Well, my IDE/debugger environment under UNIX is pretty much the same as the one under Windows. My code is written in ANSI C++ or Python. It works on both platforms, except the serial port stuff, but that is also different with each version of Windows. Oh yes, dev tools used to be a major cost issue with UNIX; now that situation has also been reversed.
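To show how small the platform-specific part is, here is a sketch using the third-party pyserial package (an assumption for the sake of the example, not something from my actual projects): only the device name changes between Windows and Unix. The poll byte is invented, so treat the whole thing as an illustration.

import sys
import serial  # third-party pyserial package, assumed for this sketch

def open_plc_port(baud=9600):
    """Open the first serial port, picking the device name by platform."""
    port_name = "COM1" if sys.platform.startswith("win") else "/dev/ttyS0"
    return serial.Serial(port_name, baudrate=baud, timeout=1.0)

if __name__ == "__main__":
    with open_plc_port() as port:
        port.write(b"\x05")   # hypothetical poll byte, not any real PLC protocol
        print(port.read(16))  # whatever the device answers within the timeout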

'Everybody knows Windows' is another argument, but rubbish. Very few people know how to set up an NT box properly, even IT people (as Dave himself points out). So what happens is that people who think they know what they are doing (because it
is Windows and therefore the same as their home computer) go and alter it, disastrously.

OK, I am not trying to promote the use of UNIX; like I said, I use Windows more than UNIX, but the reasons are customer misconceptions, and that is no skin off my nose. But there are cases where NT (or any other MS OS)
simply does not cut it. Then I use UNIX, and I find it better, and I remain convinced that much of the work I do under Windows would, from a technical and economic point of view, be better off under UNIX. I use Windows for reasons of marketing and mindshare. I am not protesting,
just stating.

> I was totally against control via NT but after learning it as well as
> I knew UNIX, I now have little to no problems.

I was all for MS years ago, but over the years they have left me speechless.

> It made me realize that different does not mean "bad" it only means
> different. In todays world market you better be able to accept
> change and other opinions or you are doomed to fail............
>
> Adapt or dye..........and oh by the way, the Internet is not going
> away and why if I had to sum it up isn't it going away ?

Keep your mind open or die. Accepting that Windows can be usefully deployed, and seriously attempting to deploy it, is correct. But using NT for the sake of it is also stupid. One has to keep an open mind.

Remember that NT, and MS as a whole, is going ever more after the client/server architectures that are ideal for corporate computing and an Internet-connected society, yet are not adept at IA and SCADA.

BTW, do you remember when W95 was launched? You may remember that Internet Explorer was not there. We had a button that launched a wizard
which was to connect us to the Microsoft Network. In fact MS had this plan of building their own 'internet'. They did not succeed because by the
time W95 got onto people's desktops the real Internet boom had already started, whereas their network was hardly off the ground; people wanted to connect to the 'real' Internet.

But had they been a bit earlier they may well have succeeded. What would you think about having an 'internet' controlled by MS? Some think it would have been better, some worse. What is your opinion?

> I already have systems running that just gather info and spit it out
> to web pages for control and diagnostic information........get used to
> it. I had spoken at the ISA show numerous times 5 years and more
> ago about this "revolution" and people laughed at the time but look
> around.............

Ummm, I was actually doing this 5 years ago. I took the source code of the free NCSA server from my Linux box and compiled it on an AIX workstation. The dynamic pages were generated by a Korn shell CGI script. I started after lunch and was demonstrating it to my colleagues before we went for coffee break. But I also realised that
while this is very cool, it has limited practical application in supervision.
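For anyone who has not seen it done, such a CGI script really is only a handful of lines. Mine was a Korn shell script on the NCSA server; the sketch below is a rough Python equivalent with invented tag names and a stubbed-out PLC read, just to show the shape of the thing.

#!/usr/bin/env python3
import datetime

def read_plc_value(tag):
    """Stub; a real script would query the PLC or a shared data file here."""
    return {"tank_level": 73.2, "pump_speed": 1450}.get(tag, 0)

# A CGI program simply writes an HTTP body to stdout, headers first.
print("Content-Type: text/html")
print()
print("<html><body>")
print("<h1>Line 3 status at %s</h1>" % datetime.datetime.now().strftime("%H:%M:%S"))
for tag in ("tank_level", "pump_speed"):
    print("<p>%s: %s</p>" % (tag, read_plc_value(tag)))
print("</body></html>")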

But note also that I can serve dynamic web pages from a little 4M flash based card running Linux, or even Datalight DOS, a box that cannot even run NT, hence I cannot understand the relevance of your comment.

It does suggest to me that you perhaps do not know UNIX so well, as TCP/IP related services have been around for a long time on UNIX and
have always been very easy to deploy.

> Computers were designed to make things easier, they are just a
> tool, like a hammer. If you give a hammer to me and you give one
> to my friend who is a Master Carpenter, I will build a stick house
> and he will build a work of art. You must know how to use your
> tools.......

And you must know how to pick the right tool for the right job. The master carpenter probably could get the wood to length by knocking bits off it with the hammer, but more likely he will select a saw from a whole range of saws, to suit different types of cut on different types of wood. You, on the other hand, go down to the DIY store, see a shelf full of the ACME super saw that was advertised on TV and that your neighbor has, and pick that, because it is what everybody is
using.


> I better get off my soap box.........

No, stay on it. The IA industry is in a period when it must choose OS's and protocols that will have long term implications, and most people have limited experience. Although there are few participants in this thread there are many readers. The more people air their opinion
the better, as it allows a more balanced view to be obtained.

DISCLAIMER: People pay me to fix windows generated problems and limitations, therefore I consider Microsoft to be a business partner.

By Curt Wuollet on 21 August, 2000 - 7:58 am

> As I have said and has been said by others on the list........the
> reason everyone loves UNIX control systems IMHO is that they have
> spent the time to know how to "tweak" it to be totally (?) stable
> but they will not devote the same time and attention to lowly NT.
>
> I have systems running 24/7 that have never been rebooted. I also
> spent a large amount of time setting things up front. The bad thing
> is I am basically a full time IT person now because I have like 25
> systems and 150 PC's out there and the IT people are not used to
> 24/7 response times. By this I mean, we still have hardware crashes
> because of a management decision (again IMHO) to not by "hardened"
> hardware, but those are rare.

Yeah, OK, it's _always_ the hardware. C'mon, even Ballmer admits there was room for improvement. I suppose now when you go to W2K it won't even go down if you shut it off. I'll pause a moment
for the list members to reflect on their own experiences. That, by the way, is why we converted to Linux: we don't want to be booters
and reloaders.

> I was totally against control via NT but after learning it as well
> as I knew UNIX, I now have little to no problems.
>
> I control the machines via user profiles so that someone cannot play
> solitair etc. and the interface is locked down to just running my
> control software, and we actually "tested" things and tweaked them
> just like I had to do the first time with UNIX and we devoted the
> time to learn the stuff. I am now an MCSE only because I like that
> kind of cheap personal gratification to see how I am doing (only for
> myself). We also ghosted the machines to network drives so that any
> hardware issues and we can restore the machines in like 6 minutes.

But, you never have to do that.

> It made me realize that different does not mean "bad" it only means
> different. In todays world market you better be able to accept
> change and other opinions or you are doomed to fail............

OK, I'll take my chances.

> Adapt or dye..(die)........and oh by the way, the Internet is not
> going away and why if I had to sum it up isn't it going away ?

Because the Internet runs on UNIX and existed before the first MS machine ever connected. And if we can keep it from being perverted with "extended" protocols and vendor specific websites, it has a very bright future. I expect _some_ company to try to take it over, but so far all they have managed to do is break my Netscape and produce broken Java and HTML. Inconvenient, but the Internet is still free.

> Because I do not care that I have connected to a Unix, NT, Linux,
> Mainframe etc system to get my information and that it took 25 hops
> through routers all over the world..........I only care that I get
> my INFORMATION..........that is the bottom line. Before it is done,
> like it or not........the internet browser will be the control
> system of use.

We agree completely here.

> I already have systems running that just gather info and spit it out
> to web pages for control and diagnostic information........get used
> to it. I had spoken at the ISA show numerous times 5 years and more
> ago about this "revolution" and people laughed at the time but look
> around.............

This is simply a given with *nix systems, nothing new. In fact, I'm not sure where the Internet rant is coming from, I have been a zealous advocate of the Internet and its free and open
protocols. I have even used it as an example of what can be accomplished by cooperation, even with competitors.

> Computers were designed to make things easier, they are just a
> tool, like a hammer. If you give a hammer to me and you give one to
> my friend who is a Master Carpenter, I will build a stick house and
> he will build a work of art. You must know how to use your
> tools.......

I agree here too.....I have a lot more tools and more freedom to use them.

Regards

cww

By Roger Irwin on 23 August, 2000 - 3:33 pm

> Yeah, OK, It's _always_ the hardware.

Just a note on 'hardware' and 'drivers'.
Device drivers actually represent the bulk of OS core code. MS mostly rely on hardware vendors to develop this code, and blame them when things break.

OTOH, even companies like 3Com and HP struggle to
produce device drivers for the latest and greatest versions of Windows. I do not think they are short of skilled engineers, and I am sure they get full support from MS, yet struggle they do. Therefore there would seem to be a design flaw in the requirements for the device drivers.

By Curt Wuollet on 23 August, 2000 - 3:37 pm

Hi Roger

Just so we don't confuse folks, that was sarcasm. I have my lab populated with "bad" hardware that didn't work with MS. Red Hat 6.2 repairs it all.
You guys can draw your own conclusions. PC hardware is a lot more reliable than it gets credit for, because MS blames everything on the hardware. With no wild pointers scribbling on the disk you don't have mystery crashes.

Regards

cww

By Anthony Kerstens on 24 August, 2000 - 3:53 pm

Sounds like a situation that most of us are in, i.e. customers that don't always give us perfect information, and hence things don't always work out and we have to scramble.

Anthony Kerstens P.Eng.

By Dave Ferguson on 28 August, 2000 - 4:16 pm

>> hardware issues and we can restore the machines in like 6 minutes. <<
>
>But, you never have to do that. <

Nice line. I must apologize: while on vacation I talked to my partner at work and he informed me that he had to restore one of our MMI machines the other day (Monday)......I was wrong, he did then have to "re-boot" it....................

Oh by the way, a water pipe had broken feeding an air conditioner and water went into the power supply, which is when the Operator noticed the leak. (Hey, it's a tough world out there.) I guess there is a need for our salary; he had it running in 31 minutes. Put out a spare and ghosted it across the network. We do have a second MMI running.

The only time we have rebooted our MMI machines on the plant floor, 24/7 in the past year, is, and I am going out on a limb here, maybe once or twice. Usually the product of a power spike or power outage during a down day or "planned" maintenance gone bad. Other than that we have the occasional hard drive and like one
power supply in the last 3 years. The other thing is monitors every few years. Other than that they just keep going and going................

Sorry for not having more trouble with Windows.............Must have gotten the CD made on a Wednesday instead of the Friday version............

Dave Ferguson
Blandin Paper Company
UPM-Kymmene
DAVCO Automation

By Curt Wuollet on 31 August, 2000 - 10:44 am

Hi Dave

I have no trouble at all with Windows now :^) You mentioned you were having a terrible time at work, I was just telling you how I fixed that.
Probably 70% less trouble for all involved by simply upgrading to Linux. I spend my time on planned work instead of platform management. Try it, you'll like it. You don't realize how much extra hassle you're going through until it stops.

Regards

cww

By Dave Ferguson on 1 September, 2000 - 1:50 pm

I am not having a terrible time at work. If you read the entire thread you would have recognized cheap, selfless sarcasm. I have little to no trouble with my Windows machines. Does that mean that I will close my mind to all other technology like most "specialists"? NO. (No flames needed.)

As I also pointed out if you know what you are doing, then it should all be planned work. If you do not then it is all unplanned.......

Dave

By Curt Wuollet on 7 September, 2000 - 12:51 pm

OK. Fine, you do seem a little tense though.

regards

cww

I don't know about facades of openness or resistance to real open protocols. For me it is a matter of what makes sense. Do I implement a
system with an operating system that is different from what my support staff currently maintains? There needs to be a very good reason to do that. What I find is that there isn't enough justification for that, because the common one, the one that is running on hundreds of desktops in the organization, is good enough. The same logic
follows from there. If the object system that comes with that operating system is adequate then I will use it rather than going with another technology.

Obviously, there are different definitions of what "open" means. One definition is that the technology is widely used. OPC is being used by a number of companies, like many commercial SCADA
vendors. I consider it "open" because, before, the device interface for each commercial system was unique to each vendor.

As far as Ethernet and TCP/IP is concerned the argument is over. People can argue against whether or not they are best suited for controls networking, but the same logic applies. If the commodity technology (Ethernet/TCP/IP) is good enough then it will prevail - as it is already.

Sam

By Edelhard Becker on 3 August, 2000 - 1:05 pm

Hi Sam,

> I don't know about facades of openness or resistance to real open
> protocols. For me it is a matter of what makes sense. Do I implement
> a system with an operating system that is unique to what my support
> staff current maintains? There needs to be a very good reason to do
> that. What I find is that there isn't enough justification for that,
> because the common one, the one that is running on hundreds of
> desktops in the organization is good enough. The same logic follows
> from there. If the object system that comes with that operating
> system is adequate then I will use it rather than going with another
> technology.

IMHO (and from experience) these "this-is-currently-good-enough" solutions will bite you someday. Some examples:

- the "good-enough" operating system might be, over time, not as stable as expected (usually desktop OSse don't run 24x7). Then, you can spend hours and days to fine-tune that system for
stability (remove unnecessary programs, drivers; clean up dynamic libraries etc.etc.)
- projects grow over time. As soon as a system runs flawlessly, there might be new demands, options etc. Once started with a good-enough solution, these systems will become a nightmare.

- using good techniques (usually, not automatically) leads to elegant and simple solutions (and therefore reduces development time and cost). Using inappropriate techniques (always) leads to crappy solutions. You can put a screw into the wall with a hammer; that might be good enough, but is it good?

- usage, including local backups etc., can never be too simple for the local staff. We (as a software company) have to make the system as simple as possible. E.g., for backup: insert an empty floppy and press a button, nothing more. On this list a few weeks ago was a thread where somebody screwed up his Win9x system by simply copying some files to a floppy manually!

- BTW: I have usually had better experiences with staff that doesn't know anything about computers. You simply write instructions for what to do and that's it. When using well-known OSes in a production system, there is likely somebody with the same OS at home who starts fiddling around (e.g. tries CTRL-ALT-ESC etc.).

Offhand, I can only think of two reasons why someone would not choose the technically best solution:

- price (seldom, but that's another story)
- a (HW or SW) interface, which is absolutely needed for the customer's environment, is not available

> Obviously, there are different definitions of what "open" means. One
> definition is that the technology is widely used.

Sorry, I often see the term "open" misunderstood, but I have never heard that definition before.

> OPC is being used by a number of companies, like many commercial
> SCADA vendors. I consider it "open" because before the device
> interface for each commercial system was unique to each vendor.

You can call OPC anything but "open". It relies upon OLE/DCOM, which is a proprietary protocol from MS. You cannot get a strict specification of the protocol. It simply is widely used, because the
SCADA vendors use that particular "good-enough" OS, where you get OLE/DCOM libraries (binaries, no source!) with the development system. It just is too easy to use (especially if the programmer doesn't look into the future). Linux implementations of OLE/DCOM have to use reverse engineering (because there is no spec) and therefore are always at least half a year behind OLE's native platform (whether they will ever get 100% compatible, I don't know; there is no 'DCOM conformance test' that I know of).

Regards,
Edelhard
--
s o f t w a r e m a n u f a k t u r --- Software, that fits!
OO-Realtime Automation from Embedded-PCs up to distributed SMP Systems
info@software-manufaktur.de URL: http://www.software-manufaktur.de/
Fon: ++49+7073/50061-6, Fax: -5, Gaertnerstrasse 6, D-72119 Entringen

By Curt Wuollet on 4 August, 2000 - 8:39 am

> I don't know about facades of openness or resistance to real open
> protocols. For me it is a matter of what makes sense.

Me too.

> Do I
> implement a
> system with an operating system that is unique to what my support
> staff current maintains? There needs to be a very good reason to do
> that. What I find is that there isn't enough justification for that,
> because the common one, the one that is running on hundreds of
> desktops in the organization is good enough. The same logic
> follows from there. If the object system that comes with that
> operating system is adequate then I will use it rather than going
> with another technology.

So, if it's good enough for the desktop, it's good enough for controls? Or a replacement must run everything in the company?

> Obviously, there are different definitions of what "open" means. One
> definition is that the technology is widely used. OPC is being used
> by a number of companies, like many commercial SCADA
> vendors. I consider it "open" because before the device interface for
> each commercial system was unique to each vendor.
>
> As far as Ethernet and TCP/IP is concerned the argument is over.
> People can argue against whether or not they are best suited for
> controls networking, but the same logic applies. If the commodity
> technology (Ethernet/TCP/IP) is good enough then it will prevail - as
> it is already.

But, suppose Ethernet and TCP/IP required you to run Sun Solaris for example, would that then be open in the sense of OPC? I think people would have a problem with that. Why is it not an issue if MS controls OPC? Why would people shy away from one and embrace the other? If Sun, for example, had the market sewed up, would it then be ok if they controlled all the communications?

> Do I implement a
> system with an operating system that is unique to what my support
> staff current maintains? There needs to be a very good reason to do
> that. What I find is that there isn't enough justification for that,
> because the common one, the one that is running on hundreds of
> desktops in the organization is good enough. The same logic follows
> from there. If the object system that comes with that operating
> system is adequate then I will use it rather than going with another
> technology.

-> So, if it's good enough for the desktop, it's good enough for
-> controls? Or a replacement must run everything in the company?

I am not saying that if the operating system is good enough for the desktop then it is good enough for controls. I am saying that if you can run one operating system and it satisfies the requirements of both, then you should use it. There is little benefit in going with a specialized technology if the improvement you get is not significant. If the commodity technology will not perform to requirements, then the specialized technology is justified.

> Obviously, there are different definitions of what "open" means. One
> definition is that the technology is widely used. OPC is being used
> by a number of companies, like many commercial SCADA vendors. I
> consider it "open" because before the device interface for each
> commercial system was unique to each vendor.
>
> As far as Ethernet and TCP/IP is concerned the argument is over.
> People can argue against whether or not they are best suited for
> controls networking, but the same logic applies. If the commodity
> technology (Ethernet/TCP/IP) is good enough then it will prevail -
> as it is already.

-> But, suppose Ethernet and TCP/IP required you to run Sun Solaris
-> for example, would that then be open in the sense of OPC? I think
-> people would have a problem with that. Why is it not an issue if MS
-> controls OPC? Why would people shy away from one and embrace the
-> other? If Sun, for example, had the market sewed up, would it then
-> be ok if they controlled all the communications?

It doesn't matter to me if a vendor controls a technology. For instance, I ran SunOS and Solaris for many years. These operating systems are owned by Sun and I didn't have a problem with using
them - that was my choice. NT is owned by MS. There is no difference.

Sun controls Java, yet many people consider it an "open" technology. OPC is based on COM, but the specification of OPC is based on the work of people outside of MS. The implementations of OPC are done by automation companies and not by MS.

If Sun was in MS's place I would be running Solaris and programming in Java.

By Roger Irwin on 9 August, 2000 - 2:39 pm

> If Sun, for example, had the market sewed up, would it then
> be ok if they controlled all the communications?
>

Actually SUN did have the market sewed up with SUN RPC calls, which were the predecessor of DCOM and CORBA technologies. Although telecomms
is now going over to CORBA, 90% of the world's telecommunications networks are still managed by RPC's, which have been in use for over 15 years.

But SUN never tried to monopolise this technology, they made it freely available and most POSIX systems include an implementation of RPC derived from original SUN source code. There is even a port to NT, complete with an rpcgen that works with Visual C++.

By Jansen, Joe on 8 August, 2000 - 1:56 pm

Sam wrote:
-> Obviously, there are different definitions of what "open" means. One
-> definition is that the technology is widely used.

Are you seriously presenting that as a definition? The PLC-5 is widely used. Does that suddenly make it open? Was OPC a 'closed' system before being adopted by most of the SCADA
packages? (That is what I am to believe, based on your definition.) What is the magic number of users for a system to metamorphose from closed to open? The only place that a closed system becomes an open system with no change to the source or binaries is in the marketing department. Anything that suggests that 'open architecture' is based on user count is rubbish. I assume then that you would consider the LinuxPLC project a closed, proprietary solution due to lack of widespread use?

-> OPC is being used by a number of companies, like many commercial SCADA
-> vendors. I consider it "open" because before the device interface for
-> each commercial system was unique to each vendor.
->
-> As far as Ethernet and TCP/IP is concerned the argument is over.
-> People can argue against whether or not they are best suited for
-> controls networking, but the same logic applies. If the commodity
-> technology (Ethernet/TCP/IP) is good enough then it will prevail -
-> as it is already.

I would ask why you think this has already happened.
TCP/IP is an OPEN standard. Nobody owns TCP/IP. I can write a TCP/IP driver without paying a royalty. Why, I can even find a spec that tells me what TCP/IP is supposed to do! Here is the
distinction: it is widely used because it is open. It is not open simply because it is widely used.

Yes. I am serious. Open technology is the technology that is most widely used. I would add that you would not pick a widely used technology if the technology is at the end of its lifecycle. I see it as open because it opens up my options.

The classic definition of "open" is like communism. It sounds good in a mailing list discussion, but it doesn't work. Capitalism does
not foster "open" technologies, because the market leader gains nothing by creating an even playing field. TCP/IP is a de facto standard that came about from a research project. Because it was
widely used it never got replaced by better technology. TCP/IP is "good enough" and still is just that.

By Roger Irwin on 10 August, 2000 - 12:10 pm

> The classic definition of "open" is like communism. It sounds good
> on a mailing list discussion, but it doesn't work. Capitalism does
> not foster "open" technologies, because the market leader gains
> nothing by creating an even playing field. TCP/IP is a defacto
> standard that came about from a research project. Because it was
> widely used it never got replaced by better technology. TCP/IP is
> "good enough" and still is just that.

I'm sorry, TCP/IP is a perfect example of the classic case of an open standard. The standards are published openly, and anybody and everybody can, and does, contribute openly to them. They are
neither de facto nor proprietary.

TCP/IP definitions can describe protocols that cannot be freely used for reasons of patents, license requirements, or maybe even lack of a 'key', and in fact there are quite a lot of RFC's that define protocols that cannot be freely adopted. Yet the protocols that we actually all
use (FTP, SMTP, POP3, HTTP etc.) are all completely free.
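That is the practical meaning of 'published openly': anyone can sit down with the RFC and speak the protocol with nothing more than a socket. Here is a minimal HTTP/1.0 GET sketched in Python, written straight from the published spec; the host name is just a placeholder.

import socket

def http_get(host, path="/"):
    """Fetch a page with a hand-written HTTP/1.0 request."""
    with socket.create_connection((host, 80), timeout=5) as sock:
        request = "GET %s HTTP/1.0\r\nHost: %s\r\n\r\n" % (path, host)
        sock.sendall(request.encode("ascii"))
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks)

if __name__ == "__main__":
    print(http_get("example.com")[:200])   # status line and the first headers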

Other examples of 'open' standards (i.e. standards that anybody may use and that have been established by people freely contributing to a not-for-profit organisation) include C/C++, Ethernet and RS232/485.

Note that many standards are handled by institutional groups such as the IEEE and ANSI, as would happen in any other industry; indeed
the computer industry is unique in its wide scale adoption of proprietary and de facto standards. In the past this has been in part because technology was moving faster than standards
bodies could cope. This is no longer the case, and although wide scale use of the Internet has led to a whole new load of requirements, the Internet itself allows standards to be hammered
out and agreed very quickly. Nowadays the impetus to push proprietary standards is to gain license share.

In many cases proprietary standards are turned into open standards in order that they may be improved; for example, the IEEE defines far more functionality for the PC parallel port than the original proprietary Centronics interface.

Please forget this communism business; like I said, in every other industry open standards are the norm. Proprietary standards are about monopolism, not capitalism. Capitalism relies on everybody being able to compete on equal terms in an open marketplace; that's why all capitalist societies have anti-monopoly commissions.

But let's get to the bottom line on the business side. A major advocate of open standards and open software is IBM. They will sell you support
contracts for sendmail, they will install Linux on a 390 mainframe (and at $100,000 per CPU that has to be the most expensive Linux distro ever),
and their e-commerce solutions packages are based on the open source Apache server. Their line on open source is "it's about the service, stupid". Of course IBM also happen to be the largest computer company in the world ($90B sales; MS has $20B). Now go ahead, tell me IBM are communists, tell me they do not understand the market.

Before you make such ridiculous statements, you should get yourself more informed about the history of computing and the origin of what you are using. Everybody makes mistakes, gets numbers wrong, etc., but your comments demonstrate a wholesale lack of knowledge of the items you are citing as an example. When I read your original post I thought you had cited TCP/IP by mistake!

All success stories in computer communications have been born out of open standards. The most borderline exceptions are NetWare and SMB (aka Windows file and printer sharing).

NetWare is proprietary, but is based on IPX, which is a simplification of IP; it was done when PC's had not yet reached the necessary power to support full blown TCP/IP.

SMB (your Windows Network Neighborhood) is based on IBM LAN Manager. IBM published the protocol, and anybody can implement it; there are no patent issues etc. However, MS have extended the protocol in their implementations. In order to
interface to Windows computers, Unix programmers developed an open source implementation called Samba. Being open source, anybody can add features to Samba, and in fact there are a few things you can do with Samba which you cannot do under Windows. Because SMB implements user-to-user connections (as opposed to system-to-system
connections, which is the case with Unix's default NFS file sharing protocol) it is often used for networks which involve no Windows machines at all. Quite how we define SMB is therefore unclear.

But let's wrap up by getting back to IA communications. OPC is based on DCOM, which is an open standard. However, much of DCOM's implementation is based around an underlying Win32 API. It can be, and has been, implemented on non-Windows platforms, but it does not make sense; you must have Windows to be interoperable. So most
of us consider OPC to be based on proprietary technology. But that is NOT the principal reason why I am against it. My prime motive is that DCOM is the architectural inverse of what is required in IA. It was designed for OA, where there are a few big managed data servers and a lot of dumb clients, not IA, where we have a lot of dumb unmanaged servers (field devices) and a few (relatively more managed) clients. DCOM has useful application in IA networks, but not for interfacing towards the field, which is what they tout it as doing. It can work great for a few test cases, and for large specific function plants, but for handling a typical factory it will quickly become a nightmare. The same architectural
arguments may also be applied against the use of CORBA.

The number 2 reason I do not like DCOM is that MS have already announced that it is dead
technology; they are dropping it in favour of SOAP, which is even less IA suitable.

That DCOM is (to all practical effect) proprietary ranks only number 3; if a
really good and universal alternative existed I would adopt it even if it were proprietary, but
OPC is not even in the ballpark.

It would be nice if this thread concentrated less on commercial politics and more on technical
issues, such as what protocol could we use?

Profibus FMC on Ethernet is not far off the mark, but not on it either. They seem to want to take the industrial network into the computer center rather than allow the corporate TCP/IP network to reach out into the factory, which I feel is the philosophy we must look to. Has anybody experience of this?

I am well informed. I just disagree with you. Your argument is that we go back to developing text-based TCP protocols instead of using a distributed object technology like OPC. Your bias against anything associated with MS shows that your
reasoning has been clouded by emotion. I had 17 years of developing systems in UNIX. I developed all kinds of communications systems, including text-based application layer protocols like yours. The day I fired up VB and made a seamless distributed object connection to a PLC I was sold. Unlike systems I had used in the past like RPC, I didn't have to jump through hoops to build the interface. My VB code doesn't know whether it is connecting to an in-process object or one across the network.

I used to argue for open systems and the superiority of my UNIX-based solutions, but I stopped when I saw what could be done with NT, COM and OPC. There is no comparison between OPC and your dated recommended way of doing it.

Obviously, you and I won't sway each other.

I suggest the readers of the list, especially people using UNIX, investigate NT, COM and OPC technology and give it a fair shake. I believe that you will find, like the rest of the leading companies in this field of work, that this is
the future. In any case, you need to make the decision based on test driving the technology and not on arguments on this list.

By Joe Jansen on 14 August, 2000 - 3:52 pm

My work with connecting PC's to PLC's actually started out on Windows/VB/VC++/NT. I have had enough headaches and support issues that I am actually moving in the other direction. As you point out though, we could throw anecdotal stories at each other all day, and at the end, we
would both just figure the other didn't know what they were talking about :^}

I will summarize what I stated in an off-list discussion. This was in regard to my "OPC is a fad" prediction. Here are my basic fears with
using the Windows solutions:

1. Microsoft controls OPC. This means that it is re-definable.
Obviously they cannot completely rewrite the spec, due to market forces. But the fact remains that this is entirely at their discretion.

2. Microsoft's main revenue stream is product upgrades. They make their money because corporate customers migrate from 3.1 to 95 to 98 to NT Workstation to Windows 2000. That's 4 products in 5 years. I have not yet used Win2000, but I know that many apps that are for 95 do not work on NT, and vice versa, and most 3.1 apps fail miserably in the Win32 API. This is by design. This design is for 2 reasons. A,
improvements in technology mean that things are made differently to support new devices, etc. B, it encourages upgrades. If the latest version of package X works on W2K, but not on NT, you need to upgrade to use it. If package X is needed to communicate to a machine, and you have several copies, eventually you will need to upgrade the rest for consistency.

3. Microsoft uses upgrades competitively. I installed OS/2 2.0 as soon as it came out. The Windows system rev'd several times, and the biggest difference is that it broke OS/2's ability to run Windows apps. ("The system isn't done until Lotus won't run!") If Linux/Unix makes a big move into the OPC arena, what will stop MS from doing an 'embrace and extend', thus making the old stuff incompatible?

4. MS is an office software company. They have a different mindset than the automation market does. It is like they get it on the surface,
and they can comprehend what we are saying, but at a gut level, it just doesn't quite click. They still tell us that "this is the greatest
thing in the world. It is much better than the last version! Everyone will be doing this in the next few months!" That isn't what I want. My
brand-new-out-of-the-box SLC5/05 will still network to an 8 year old SLC fixed I/O brick, and communicate natively. And of course, there is
Modbus, which anyone can use, is not controlled by anyone, and has been around for decades.

My bottom line point is that OPC as we know it today will not be what is promoted in 3 to 5 years. It will most likely be incompatible, and we will have either abandoned it, or will be caught in the upgrade cycle that corporate software is stuck on today.

If OPC and Microsoft is what is working best for you, then hey, knock yourself out. I just do not have that level of trust in them to not leave me hanging in the wind some day, and would prefer to write my own stuff on a platform that I know won't arbitrarily pull me into an endless upgrade cycle. It is a matter of preference. I agree that everyone should decide based on experience and investigation. I would much rather see that than blindly following marketing material that only shows one side of the issue.

--Joe Jansen

By Alex Pavloff on 14 August, 2000 - 4:02 pm

> The day I fired up VB and made a seamless distributed object connection
> to a PLC I was sold. Unlike systems I had used in the past like RPC I
> didn't have to jump through hoops to build the interface. My VB code
> doesn't know whether it is connecting to an in-process object or one
> across the network.

Component programming with COM or CORBA is slick. I love it, but....

> I used to argue for open systems and the superiority of my UNIX-based
> solutions, but I stopped when I saw what could be done with NT, COM and
> OPC. There is no comparison between OPC and your dated recommended way
> of doing it.

Honestly, one of the major problems with OPC is that while DCOM *CAN* run on a non-Microsoft-designed system, it sure as hell doesn't make much sense. So much of COM, and DCOM especially, is based on an NT server for authentication and a system registry, which AREN'T common things in all the devices that I deal with. Sure, what you do is slap an OPC server onto the machine getting the data, and it will happily read the data from the PLC in the manner that it's accustomed to, and that works great. This is where COM is great, because yes, talking to various devices via a common COM interface from whatever language you want is a very powerful and useful thing. There are no good arguments against that.

But the moment that people say that we have to start using DCOM on all our devices is the moment where we sit back and say "Hey! This doesn't make a lot of sense for a simple device!" So, what you end up with is OTHER protocols, ModbusTCP and the like, being the actual method used to talk to devices, with OPC being used to glue all this stuff together at the other end. That's what OPC can do: glue things together. Anything else just doesn't make much sense to me!
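To make that concrete, here is a rough sketch (in Python, using nothing but the standard socket library) of the kind of Modbus/TCP read that ends up doing the actual device talking underneath the OPC glue. The IP address, unit ID, and register numbers are made-up placeholders, not any particular vendor's register map.

import socket
import struct

def read_holding_registers(host, start, count, unit=1, port=502):
    # Modbus/TCP "read holding registers" (function code 3).
    # MBAP header: transaction id, protocol id (0), length, unit id.
    pdu = struct.pack(">BHH", 0x03, start, count)
    mbap = struct.pack(">HHHB", 1, 0, len(pdu) + 1, unit)
    with socket.create_connection((host, port), timeout=2.0) as s:
        s.sendall(mbap + pdu)
        resp = s.recv(260)
    if resp[7] != 0x03:                      # exception responses set the high bit
        raise IOError("Modbus exception: %r" % resp[7:])
    nbytes = resp[8]
    # Registers come back as big-endian 16-bit values per the Modbus spec.
    return struct.unpack(">%dH" % (nbytes // 2), resp[9:9 + nbytes])

# Hypothetical example:
# print(read_holding_registers("192.168.1.10", start=0, count=4))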

> I suggest the readers of the list, especially people using
> UNIX to investigate NT, COM and OPC technology and give it a fair shake.

Alright. Hrm. Next target platform for us: AMD SC520 with probably 8 megs of RAM and 8 megs of Flash (as a disk). Hrm. NT on that. Hrm. Not going to happen. Windows CE? Hrm. Maybe, if Microsoft would actually bother to help us small embedded folks. Embedded Linux? Hrm. Let's see, it does fit, I do get all the code, and I can do all this great TCP/IP stuff with standard protocols (which all you guys with big heavy machines can then query through OPC without knowing the underlying details of the protocol). Plus, licensing is dirt cheap compared to WinCE licensing. Heck, Montavista Hard Hat Linux has no runtime licenses. This is quite a big deal for a small outfit like ours, which would have to pay nearly $30 a license for WinCE and pass that cost onto the user.

On the technical side, I'd say that an embedded Linux is right for me. What do you think?

By Matthew da Silva on 17 August, 2000 - 2:21 pm

Once again, moores3 has hit the nail on the head. Well done. The writing style and attitude of tolerance reflect a healthy respect for the issues at hand. Until recently, I, too, had little reason to trust Microsoft. I started using MS products at rev. 3.0, when the blue screen of death was also the startup screen. Having worked on Macs before that, I was sceptical but powerless due to the prevalence of PCs in the new office I had joined.

I became acquainted with the PC and soon got to enjoy the black screen and prompt as a way of getting quickly to the bottom when a problem occurred with the user interface. Which was often. I then left and joined another company, where I switched back to Mac OS. The lack of transparency was a barrier but since the software
worked well (more often than with Windows 3.1, at least), I didn't complain.

Now, due to the requirement to access a single network here, I've gone back to the PC (Win). The ease with which applications go online and remote servers are accessed is not comparable in the world of Macs. It's alright to support open source and even open communications protocols, but most businesses couldn't operate without pre-packaged, 'open-by-default,' and virtually standard engines such as Windows.

That does not mean that any manufacturer or developer isn't examining all the options. There's no black-and-white result. It's an ongoing process in which corporate culture selects the individuals who make purchasing decisions, and they usually have both the wisdom and the knowledge to do so in the best interests of the company.

It's an ongoing process also for developers of control and automation systems. There are many individuals in such companies who plan future systems. Not just one per company. Such a monolithic and 'totalizing' view is fine for thick paperbacks, but the real world is more complicated and shadowy.

My comparison of Mac and Win is hardly technologically breathtaking, but I think it illustrates how the evolution of a major industrial product closely mirrors the development of commerce and business in general. In the case of Win, to see openness may simply require a momentary suspension of disbelief which can allow us to see the simple reality where otherwise there is the illusion of chaos.

Regards,
Matthew, Yamatake, tokyo

Roger,

Yes, we have tried to implement TCP/IP on the factory floor, and not yet succeeded. We build batch-weighing machines with multiple industrial CPUs which are linked on Ethernet in master/slave configurations.

The difficulty I have is that almost all the vendors of TCP/IP stacks want royalties for each installation that you make. To me that is
extremely messy. I prefer to pay a reasonable up front price and do what I like with it.

So our first attempt (and a successful one) was to communicate by simply sending Ethernet packets to the slaves. This works well, is fast, and never loses packets, BUT it does not interface to the factory floor; i.e., the customer wants the data pumped into their Windows box.
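For what it's worth, on a Linux host the "just send Ethernet packets" approach looks roughly like the sketch below (Python, raw AF_PACKET socket, needs root). A DOS slave would typically go through a packet driver instead, and the interface name, MAC addresses and EtherType here are invented for illustration only.

import socket

IFACE = "eth0"                              # hypothetical interface name
DST = bytes.fromhex("001122334455")         # hypothetical slave MAC
SRC = bytes.fromhex("665544332211")         # hypothetical master MAC
ETYPE = b"\x88\xb5"                         # EtherType reserved for local experiments

def send_frame(payload):
    # Build and send one Ethernet II frame: dst MAC, src MAC, EtherType, payload.
    s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW)
    s.bind((IFACE, 0))
    s.send(DST + SRC + ETYPE + payload)
    s.close()

# send_frame(b"WEIGH 1 2500")               # example payload for one slave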

Currently I have resigned myself to the fact that I have to write my own TCP/IP interface using public domain modules, typically from WATTCP.

But it is a long learning curve.

My wish list is a simple TCP/IP implementation for DOS-based industrial installations that my software can call with an address and data.

Regards

Why would you want to use DOS? I assume that your need for a vendor to provide you a TCP stack is related to the fact that you are using DOS, given the reference to DOS in the last paragraph.

Why don't you use an operating system that comes with TCP, like Linux or NT?

By Alex Pavloff on 21 August, 2000 - 10:34 am

Probably because his machines don't have the resources. That being said, there are many flavors of embedded linux that will run on very low end machines and provide all the capabilities that one would need.

By Jeffrey A. Rhines on 21 August, 2000 - 1:03 pm

Have you checked out Watt-32? GNU licensed 16 and 32-bit TCP/IP stack for dos. Requires a packet driver.

http://www.bgnett.no/~giva/

Regards,

Jeff

By Gilles Heinrich on 11 December, 2002 - 2:39 pm

To interface your load cells, you could just use a Momentum weighing module and a Modbus TCP/IP tophat... Makes your life simpler!

By Nickels, Bob on 10 August, 2000 - 1:13 pm

> TCP/IP is a defacto standard that came about from a research project. Because it was
> widely used it never got replaced by better technology. TCP/IP is "good enough" and still is just that.

I agree. Back in the Multibus I/iRMX era, my group tried to standardize on Intel's OpenNET network, which was a full seven-layer OSI implementation that purported to be "truly open". Unfortunately almost no one supported it and users hated having to have two NICs, two network connections, re-boots, etc. Before long we switched to TCP/IP because it was available, and "good enough". Now, 10-15 years later, it seems that everyone else has come to the same point. Claims of being "more open" didn't offset a lack of support and critical mass.

Bob Nickels
Honeywell S&C

By Jansen, Joe on 10 August, 2000 - 4:15 pm

-> Yes. I am serious. Open technology is the technology that is most
-> widely used. I would add that you would not pick a widely used
-> technology if the technology is at the end of its lifecycle. I see it
-> as open because it opens up my options.
->
-> The classic definition of "open" is like communism. It sounds good
-> on a mailing list discussion, but it doesn't work.

Tell that to all of the Linux developers. Seems to be working so far....

-> Capitalism does not foster "open" technologies, because the market
-> leader gains nothing by creating an even playing field. TCP/IP is a
-> defacto standard that came about from a research project. Because it
-> was widely used it never got replaced by better technology. TCP/IP is
-> "good enough" and still is just that.

I would again suggest that TCP/IP is widely used because it is an open standard. The reason it has not been replaced is not because nobody has gotten around to it, but because it is
ubiquitous due to its public nature, and would be extremely difficult to replace. (Unless you are pushing a dot.net marketecture :^} )

My other questions still stand. What is the magic number of users for something to change from closed to open?

OPC will end up as a fad. (Watch out! I am making predictions. Usually a bad thing!) I suspect that it will go the way of DDE in a matter of 3 years or less.

--Joe Jansen

By Roger Irwin on 10 August, 2000 - 10:40 am

> From: "Jansen, Joe" <JJansen@gehls.com>
> I would ask why you think it is that this has already happened?
> TCP/IP is an OPEN standard. Nobody owns TCP/IP. I can write a
> TCP/IP driver without paying a royalty. Why, I can even find a spec
> that tells me what TCP/IP is supposed to do! Here is the
> distinction: It is widely used because it is open. It is not open
> simply because it is widely used.

Joe, don't be too hard on him, it is a widespread belief among computer users that MS invented the Internet, Ethernet, TCP/IP, DOS, GUI's, BASIC, C++ and just about everything else out there.

That's what MS pay all those spin doctors to do.

Interesting though, that just about all the success stories in the computer industry have been born out of open standards/source rather than proprietary solutions. In fact, Microsoft itself was born when Bill Gates persuaded his classmate Paul Allen to port the openly available source of the BASIC on the VAX to the Altair computer.

By contrast, it is only a few years ago that Microsoft attempted to implement its own proprietary worldwide network, but was beaten out of the market by the open internet standards.

By Roger Irwin on 1 August, 2000 - 4:57 pm

> What I don't understand
> is the resistance to real open protocols and this fierce, rabid, Windows
> everywhere and nothing but Windows concensus in the automation market. It
> seems as if no technical argument can stand this "Windows at any cost"
> mindset.

Because most industrial users know nothing about comms Curt, and they think MS do (which is surprising, as comms is an area where they have an
abysmal track record).

Most users marvel at what they can do with OPC, without realising they could do the same more simply without OPC.

Certainly they do not understand that they are using an architecture that is the inverse of what automation systems require. Interestingly, MS have shoehorned them into DCOM because it is what they are pushing across all market sectors. But in the sectors where it is more appropriate (telecomms, banking, government, large legacy systems), everybody seems to be standardising on CORBA. MS must really love the OPC crowd!

Certainly there is an irony: MS welshed on their announcements to make an OS that was suitable for industrial embedding. I can understand them wanting to chase the consumer gadget market (it's much bigger!), and as they have heavy competition there from things like EPOC, I can understand them needing to slim WinCE down and forget aspects not strictly necessary, but at this point they should either get to work on another OS or pull out of OPC; they are leading them up a gum tree.

Personally I have said goodbye to the lot of them. I use my own TCP/IP protocols; it takes me less time to implement them than even reading the FAQ of most standards groups. Industrial communications does not need to be so complicated. Yet I can access my field devices with just a few lines of VBA code, or easily put them in DCOM wrappers.
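As a rough idea of what "a few lines" means, the sketch below shows the style of thing in Python rather than VBA. The port number and the command/reply vocabulary are invented for illustration; this is not the actual protocol, just the shape of it: one ASCII command line out, one ASCII reply line back.

import socket

def ask_device(host, command, port=5000):
    # Send one ASCII command line, read one ASCII reply line.
    with socket.create_connection((host, port), timeout=2.0) as s:
        s.sendall(command.encode("ascii") + b"\r\n")
        reply = s.makefile().readline()
    return reply.strip()

# value = ask_device("192.168.1.20", "READ TEMP1")   # e.g. "TEMP1 73.4"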

I would be quite happy to publish my own protocols, they are quite flexible and simple, but I will not bother as I know nobody will be interested in them as it does not say MS on the packet.

No skin off my nose, though, I do what the customer wants for a lot less money, a LOT less, and that has allowed us to deploy where more complex solutions have been evaluated and rejected because of cost.

By Larry Lawver on 2 August, 2000 - 9:16 am

To the List:

Curt's questions have prompted me to write about something that has been stewing in me over the last year of flames, religious fervor, and cheap shots at brand name automation suppliers and
Microsoft. Executive Summary: Use whatever is right for the job at hand because it is the best way to do it, not because all other possibilities are in some way evil!

Curt asks: Why is proprietary good? The answer is: Because it almost always works.

The good news about truly open systems is that you can do ANYTHING. The bad news is that YOU are responsible for EVERYTHING. When a proprietary system fails to do something it
should reasonably be expected to do, you should expect and receive complete support from the supplier. [Please do not flame me about that statement. It has always worked for me, over
the last three decades and four careers. If your experience seems to be different, try reading that sentence again, carefully.]

I have long seen "open" vs. "proprietary" as a question of make vs. buy, a classic engineering issue. If you need a small quantity of
something, it is difficult to justify the development costs associated with making it yourself if someone else has it in a catalog. If you truly need something that no one else has ever created, then you have to make it. Even at that point, though, you should take some time to decide if you really have a project so unique that no engineer has ever addressed it before--- or if it is a project so wretchedly defined that no one is going to come out of it alive!

Once, I found myself simultaneously at three different points along the make vs. buy continuum. I was managing two large proprietary
PLC projects, a UNIX project, and a volume product development project with an embedded controller and custom PCB at the heart of each of a planned 1000 units / year. All three projects were successful and profitable. Each was as "open" as was appropriate, but that wasn't a criterion. The important criteria were the project requirements, and how best to meet them.

Many complaints about "proprietary" systems (and, yes, I agree Microsoft is proprietary) have to do with unreasonable expectations, especially about cost. Many complaints I have seen on the list could be solved by purchasing a $1000 component.
Unapologetically, I have to tell you that "costs $1000" is not the same as "impossible." If you are in the automation business, you have to be mentally prepared to take $1000 out of the bank and burn it in an ashtray at any given moment. Of course, I am not encouraging waste or sloppy engineering. I am merely pointing out that it's an expensive business!

When the cost of proprietary systems is compared with "open" systems, I frequently find that the performance criteria are very relaxed for the "open" system. One of my clients regularly uses a
$7000 PC-based bill of material unless hard specified to use my $18,000 bill of material. The PC-based system takes six weeks in the field to tune and has spotty reliability after that. The proprietary bill of materials goes together quickly in the shop and rarely requires field visits. Which is really the more expensive bill of material?

Across the universe of automation projects, a universe of solutions is possible. I frequently mention Larry's Rule to my clients: If it works, it must be right; just be careful of your definition of "works." If you found a way to run payroll on a PLC, you are insane, but if it works for you, then it must be right. If you build an entire automation project around open source software running on XT clones, it must be right for you, or you wouldn't have done it. Demonizing the products and services you didn't use doesn't make your project any better!

The demands of automation are a very small niche, to this day, and while we should take advantage of any technology that will benefit our clients, we should note that the rapid evolution of the Internet and open systems does not directly relate to automation. It's all about numbers. Millions of people are working on open systems
and the Internet. Thousands of people are working on automation (at the level of an A-List participant). Face it: Our entire worldwide community wouldn't amount to a decent beta test field for Microsoft! The promise of excellence in open systems is a function of manhours spent on it, and it will take years for automation to enjoy benefits other businesses are reaping now.

A proprietary system lets some company make a buck off of it. That motivates them to engineer it carefully and support it strongly. It means that they service the warranty to the end user, and provide spare parts for at least a decade. This stability is very important to end users and critical to small end users. And this is why I have always questioned PC-based control in general, not just because of Microsoft issues.

I insist that proprietary systems, properly applied, are the simple solution to a wide range of automation projects. That doesn't mean that I doubt or demean the success of everyone on the list who avoids proprietary systems. This is a brilliant newsgroup, and the people who will prove me wrong are probably reading this now.
You just haven't done it yet.

Hope this helps!

Larry Lawver
Rexel / Central Florida

By Jansen, Joe on 3 August, 2000 - 9:53 am

-> Curt's questions have prompted me to write about something
-> that has been stewing in me over the last year of flames,
-> religious fervor, and cheap shots at brand name automation
-> suppliers and Microsoft.

I would take that as a compliment!

<snip>

-> Many complaints about "proprietary" systems (and, yes, I agree
-> Microsoft is proprietary) have to do with unreasonable
-> expectations, especially about cost. Many complaints I have
-> seen on the list could be solved by purchasing a $1000
-> component. Unapologetically, I have to tell you that "costs
-> $1000" is not the same as "impossible."

<snip>

-> I am merely pointing out that it's an expensive business!

This point I have to disagree with somewhat. If proprietary system 'A' costs $1000, and performs a set of functions, and open system 'B' is free and performs the identical set of functions, I HAVE to ask why 'A' costs $1000. The point being that it is only an expensive business because there are no alternatives to the overpriced options. I worked for AB for a year and was disgusted by the waste and overpricing of the components produced on my production line (I was a line supervisor). The item we made cost all of about $5.00 worth of parts, and sells on the street for slightly over $100. Why? Because there wasn't an alternative. I am all for supply-and-demand, market forces, and what have you. But when a proprietary system leverages off of an installed base to keep others from competing, you have an unfortunate situation that goes against the free market system. The truly open options are simply the competition. As with most markets, those who support the 'underdog' are usually very convinced of their decision, and not afraid to share their reasons. This is a personality trait (refer to my earlier post on this subject).

-> When the cost of proprietary systems is compared with "open"
-> systems, I frequently find that the performance criteria are very
-> relaxed for the "open" system. One of my clients regularly uses
-> a $7000 PC-based bill of material unless hard specified to use
-> my $18,000 bill of material. The PC-based system takes six
-> weeks in the field to tune and has spotty reliability after that.
-> The proprietary bill of materials goes together quickly in the
-> shop and rarely requires field visits. Which is really the more
-> expensive bill of material?

I would assume, however, that you are more familiar with your $18,000 system, and are much better prepared to setup and support that system than the lower priced system. Not suggesting
anything sinister, just pointing out that, as with anyone, the system you prefer is always going to work better for you. Also, although I obviously do not know for sure, I would ask whether the $7000 system is based on open standards, or is also proprietary (i.e., Windows and a proprietary control system). If so, that sort of shoots down the 'proprietary always works' statement. I am currently only aware of one open control project (LinuxPLC) and this has not been released to the public at large as a stable control system.


-> Across the universe of automation projects, a universe of
-> solutions is possible. I frequently mention Larry's Rule to my
-> clients: If it works, it must be right; just be careful of your
-> definition of "works."

<snip>

Agree completely. I have used Microsoft products for many things. This is because of the ease of setup I mentioned in my other post. However, I do not claim that they are more stable, or that they provide the highest quality of service. They were just easy to slap together, and provide the ready-made excuse for downtime (Windows crashed). If I take down a Linux server, people start to get stressed out over the downtime. This to me speaks volumes about what is expected of Windows vs. what is expected from anything else. Nonetheless, I still will throw together a VB program on WinNT to do a quick monitoring project if something is working incorrectly. I just want to make sure I make the distinction between 'easy' and 'better'.

<snip> All comments agreed to.

-> A proprietary system lets some company make a buck off of it.

Yes, by definition.

-> That motivates them to engineer it carefully and support it
-> strongly. It means that they service the warranty to the end user,
-> and provide spare parts for at least a decade. This stability is
-> very important to end users and critical to small end users.
-> And this is why I have always questioned PC-based control
-> in general, not just because of Microsoft issues.

WHAT? '64,000 known bugs at release' is engineered carefully? 3 hours in the tech support phone queue is supported strongly? A quote from the MSDN documentation on VB: this feature "is not compatible with Windows NT. This is by design"... And don't even get me started on stability.

In the PLC realm, I stand by my earlier statements that nobody has ever accused AB of being on the cutting edge nor price competitive. They are simply a standard. It is about the comfort zone, not quality of product. Referring to earlier in the message, why do I pay several hundred dollars for RSLogix, when others offer their software for free or next to free?

-> I insist that proprietary systems, properly applied, are the simple
-> solution to a wide range of automation projects.

Simple, yes. Best, not always.

By Larry Lawver on 8 August, 2000 - 8:58 am

I wrote, in part:

-> When the cost of proprietary systems is compared with "open"
-> systems, I frequently find that the performance criteria are very
-> relaxed for the "open" system. One of my clients regularly uses
-> a $7000 PC-based bill of material unless hard specified to use
-> my $18,000 bill of material. The PC-based system takes six
-> weeks in the field to tune and has spotty reliability after that.
-> The proprietary bill of materials goes together quickly in the
-> shop and rarely requires field visits. Which is really the more
-> expensive bill of material?

Joe's reply (not repeated here) to this section of my post was off-point, obviously because I didn't mention that I am a distributor of well-known proprietary stuff, not an integrator or OEM. The client I mention is an OEM. The rest of Joe's comments are appreciated.

Hope this helps!

Larry Lawver
Rexel / Central Florida

By Curt Wuollet on 7 August, 2000 - 8:28 am

> Curt's questions have prompted me to write about something that has
> been stewing in me over the last year of flames, religious fervor,
> and cheap shots at brand name automation suppliers and Microsoft.
> Executive Summary:
> Use whatever is right for the job at hand because it is the best
> way to do it, not because all other possibilities are in some way
> evil!
>
> Curt asks: Why is proprietary good? The answer is: Because it
> almost always works.

Unless interoperability and integration are your criteria. In the context of heterogeneous communications it fails miserably, by design.

> The good news about truly open systems is that you can do
> ANYTHING. The bad news is that YOU are responsible for
> EVERYTHING. When a proprietary system fails to do something it
> should reasonably be expected to do, you should expect and receive
> complete support from the supplier. [Please do not flame me about
> that statement. It has always worked for me, over the last three
> decades and four careers. If your experience seems to be different,
> try reading that sentence again, carefully.]

I feel that systems can reasonably be expected to communicate. Systems in the rest of the computing world all communicate; they all use common, open protocols and are expected to interoperate to a reasonable degree. Also, the "anything" I have to do includes talking to "foreign" equipment, and the major obstruction is the _deliberate_ omission of any means to do so. It shouldn't be my responsibility to provide a gateway that talks to each machine in its own unique way, but at least open systems (primarily Linux) make it possible. Linux speaks many protocols and they are all available free, with source. Obviously supporting open protocols is not too expensive. The "reasons" things don't interoperate are merely excuses for non-cooperative behavior and avarice.

> I have long seen "open" vs. "proprietary" as a question of make vs.
> buy, a classic engineering issue. If you need a small quantity of
> something, it is difficult to justify the development costs
> associated with making it yourself if someone else has it in a
> catalog. If you truly need something that no one else has ever
> created, then you have to make it. Even at that point, though, you
> should take some time to decide if you really have a project so
> unique that no engineer has ever addressed it before--- or if it is
> a project so wretchedly defined that no one is going to come out of
> it alive!

Exactly my point. If instead of senselessly inventing new protocols that serve the same function as existing protocols, manufacturers
were to use something "off the shelf", large amounts of resources would be saved and the customer would be better served in the process. By using high volume protocols and making it
interoperable they could sell a lot more I/O which is where the money is anyway.

> Once, I found myself simultaneously at three different points along
> the make vs. buy continuum. I was managing two large proprietary
> PLC projects, a UNIX project, and a volume product development
> project with an embedded controller and custom PCB at the heart of
> each of a planned 1000 units / year. All three projects were
> successful and profitable. Each was as "open" as was appropriate,
> but that wasn't a criterion. The important criteria were the
> project requirements, and how best to meet them.
>
> Many complaints about "proprietary" systems (and, yes, I agree
> Microsoft is proprietary) have to do with unreasonable
> expectations, especially about cost. Many complaints I have seen on
> the list could be solved by purchasing a $1000 component.
> Unapologetically, I have to tell you that "costs $1000" is not the
> same as "impossible." If you are in the automation business, you
> have to be mentally prepared to take $1000 out of the bank and burn
> it in an ashtray at any given moment. Of course, I am not
> encouraging waste or sloppy engineering. I am merely pointing out
> that it's an expensive business!

In most cases I don't use open systems to save money. I use them to solve the interoperability problem because no one vendor will address this problem. I do have a problem with $400.00 serial cards that differ only in a connector from $4.00 cards and similar clear abuses. Using unique and proprietary hardware and software when there is no clear technical advantage over commodity items is poor engineering at best and amounts to exploitation. The commodity item is likely to be better tested and more reliable through greater field experience and refinement.

> When the cost of proprietary systems is compared with "open"
> systems, I frequently find that the performance criteria are very
> relaxed for the "open" system. One of my clients regularly uses a
> $7000 PC-based bill of material unless hard specified to use my
> $18,000 bill of material. The PC-based system takes six weeks in
> the field to tune and has spotty reliability after that. The
> proprietary bill of materials goes together quickly in the shop and
> rarely requires field visits. Which is really the more expensive
> bill of material?

I would change the OS and software the PC is running. The hardware has demonstrated reliability. Of course the OS and software vendor will always blame the hardware. There is no
inherent reason that PC's should be less reliable. After all, many PLC's now are very close to an embedded PC. If you use software of known questionable reliability for control apps that's more than bad engineering, that's negligence, no matter how pretty it is.

> Across the universe of automation projects, a universe of solutions
> is possible. I frequently mention Larry's Rule to my clients: If
> it works, it must be right; just be careful of your definition of
> "works." If you found a way to run payroll on a PLC, you are insane,
> but if it works for you, then it must be right. If you build an
> entire automation project around open source software running on XT
> clones, it must be right for you, or you wouldn't have done it.
> Demonizing the products and services you didn't use doesn't make
> your project any better!

If you can point out a case where I demonized an entity without just cause I will be happy to apologize.

> The demands of automation are a very small niche, to this day, and
> while we should take advantage of any technology that will benefit
> our clients, we should note that the rapid evolution of the Internet
> and open systems does not directly relate to automation. It's all
> about numbers. Millions of people are working on open systems and
> the Internet. Thousands of people are working on automation (at the
> level of an A-List participant). Face it: Our entire worldwide
> community wouldn't amount to a decent beta test field for Microsoft!
> The promise of excellence in open systems is a function of manhours
> spent on it, and it will take years for automation to enjoy benefits
> other businesses are reaping now.

It would surely make sense to leverage those millions of manhours where appropriate rather than reinventing proprietary solutions, especially at low volumes.

> A proprietary system lets some company make a buck off of it.
> That motivates them to engineer it carefully and support it
> strongly. It means that they service the warranty
> to the end user, and provide spare parts for at least a decade.
> This stability is very important to end users and critical to small
> end users. And this is why I have always questioned PC-based
> control in general, not just because of Microsoft issues.

That's the theory and sometimes it works in practice. The bazaar method seems to work fairly well without the risk of single sourcing. If the parts are generic and non-proprietary there is no need to stock spares for decades, that's what standardization is for. Desktops are generic, notebooks are proprietary. Which one has more problems with compatibility and obsolescence?

> I insist that proprietary systems, properly applied, are the simple
> solution to a wide range of automation projects. That doesn't mean
> that I doubt or demean the success of everyone on the list who
> avoids proprietary systems. This is a brilliant newsgroup, and the
> people who will prove me wrong are probably reading this now. You
> just haven't done it yet.

I seek only the opportunity to provide a choice based on its merits and technical aspects. What I am finding is that merit and technical considerations are almost irrelevant in the face of marketing and perception.

cww

By Larry Lawver on 9 August, 2000 - 10:45 am

In reply to Curt's post, which appears at the bottom of this one:

Use whatever is best for the task at hand. In a lot of cases, that will be a proprietary system. That is my simple message.

Curt's disagreements with me seem to resolve into three categories:
the failure of proprietary systems to be open, the unimportance of cost in the open vs. proprietary discussion, and the superiority of
commodity products over carefully engineered proprietary components.

In the first set of issues, I'll simply concede. We all agree enough on the definitions to agree that a proprietary solution will not be open. If your project criteria include "open" by any definition, then don't use proprietary stuff.

On the cost issue, on the other hand, I'll dig in and reject disagreement.

Cost is brought up every time open systems are discussed, and is central to the advertising of the vendors in the field. Curt brings it up in his discussion, after briefly brushing it off. Proprietary systems involve more PURCHASED components than open systems, and they are probably always more expensive when measured that way. If it wasn't about cost, first and foremost,
there wouldn't even be a discussion here.

A hypothetical: Suppose that I had a proprietary black box for you that met all of the requirements for your next project. What would
you pay for it? If your fixed price contract was for US$1M, and I wanted US$10K, you'd probably buy it. Reductio ad absurdum, of course.

In the practical example I gave in my post, I described a client (an OEM) that prefers a US$7000 open system bill of material to
buying a proprietary system bill of material from me (I am currently a distributor, formerly an integrator, for those that didn't already
know that.) for US$18000. If they get a hard customer specification for my stuff, they use it and have a trouble-free start-up. The open
system takes weeks of set-up in the field, and provides me with lots of anecdotes about using Ethernet for real-time I/O.

In my opinion, this open system is clearly inferior to my proprietary system. I suspect that a few folks from this list could improve the
track record of this particular open system, but that isn't the point. The point is that the solid reliability of proprietary systems,
properly applied, gets compared unfavorably with less expensive open systems that don't work as well. In the case of my client, they claim that their favored system is better because it is
open. Strangely, if I brought my price down to their number, though, they say they would buy it.

(That example is only one of many I have from personal experience. I do not mean to generalize that to all open systems, though. I have personally implemented successful open systems.
I'm defending proprietary, not dismissing open.)

Finally, on leveraging the stuff that has become commodity rather than relying upon proprietary catalog items: Who determines, ten years later, that a certain commodity item is form, fit, and function compatible with the original? Eight-tracks and Betamax are the usual targets to bring up at this point, but for this audience I will
mention pre-IDE disk drives and 256K memory sticks that we were all using ten years ago. Without a brand-name manufacturer and a
believable commitment to spare parts, you can't rely on the market still offering the spare you need ten years from now.

Ten years from now, when a rollercoaster, launch pad, or juice plant in my territory is down due to a component failure, I know that the owner will be able to get a spare part quickly, probably out of local inventory. Can anyone guarantee that a generic ISA Ethernet card will still be available at any price? Curt --- that US$400 price SHOULD include that longterm proprietary support I value so much!

This kind of longterm stability is very important to owners that will keep their systems running indefinitely. It is not marketing hype --- it is the result of a long, consistent track record. It is a reputation that is easily lost, as Westinghouse found out fifteen years ago. Thus, perception is more important than Curt allows.

Generic is a particular problem in tightly configuration managed systems.

When you ride a rollercoaster, would you be satisfied with your safety if you thought that maintenance workers had discretion to
substitute generic components, including software downloaded from the Web? Or, would you rather that the system is only allowed to carry guests if it can be proven that the system is identical to what passed the Acceptance Test Procedure at commissioning time? Doing that requires catalog numbered, proprietary components.

Notice that I am not denying the advantages Curt mentions. Many of you work on projects where the things I bring up are not important. I trust all of you to choose the best solutions for your
clients, and many of those solutions will be proprietary.

Hope this helps!

Larry Lawver
Rexel / Central Florida

By Curt Wuollet on 10 August, 2000 - 12:18 pm

> Use whatever is best for the task at hand. In a lot of cases, that
> will be a proprietary system. That is my simple message.
>
> Curt's disagreements with me seem to resolve into three
> categories:
> the failure of proprietary systems to be open, the unimportance of
> cost in the open vs. proprietary discussion, and the superiority of
> commodity products over carefully engineered proprietary components.
>
> In the first set of issues, I'll simply concede. We all agree
> enough on the definitions to agree that a proprietary solution will
> not be open. If your project criteria include "open" by any
> definition, then don't use proprietary stuff.
>
> On the cost issue, on the other hand, I'll dig in and reject
> disagreement.
>
> Cost is brought up every time open systems are discussed, and is
> central to the advertising of the vendors in the field. Curt brings
> it up in his discussion, after briefly brushing it off. Proprietary
> systems involve more PURCHASED components than open systems, and
> they are probably always more expensive when measured that way. If
> it wasn't about cost, first and foremost, there wouldn't even be a
> discussion here.
>
> A hypothetical: Suppose that I had a proprietary black box for you
> that met all of the requirements for your next project. What would
> you pay for it? If your fixed price contract was for US$1M, and I
> wanted US$10K, you'd probably buy it. Reductio ad absurdum, of
> course.
>
> In the practical example I gave in my post, I described a client (an
> OEM) that prefers a US$7000 open system bill of material to buying a
> proprietary system bill of material from me (I am currently a
> distributor, formerly an integrator, for those that didn't already
> know that.) for US$18000. If they get a hard customer specification
> for my stuff, they use it and have a trouble-free start-up.

When we do this we get DOA equipment, bad documentation and lots of headaches, especially with regard to communications. If you can guarantee a trouble free start-up, by all means, send me a line card. I'm serious.

> The open
> system takes weeks of set-up in the field, and provides me with lots
> of anecdotes about using Ethernet for real-time I/O.

Modbus doesn't work too well if you apply it incorrectly either. And if you need realtime I/O you should use realtime Ethernet from Lineo. It's free, I believe. Any fieldbus, misapplied or overloaded, will slow down. The proprietary implementations I've seen, from GEF for example, are neither deterministic nor fast. As "foreign" protocols I suspect they are deliberately hobbled so that the "native" protocols are always better. This wouldn't happen in an Open System implementation. I'll be happy to compare 100 Mbit/sec switched Ethernet with any of the other common transports for determinism and throughput. And you should hear my stories about a Profibus setup that can't follow a 10 Hz square wave.

> In my opinion, this open system is clearly inferior to my
> proprietary system. I suspect that a few folks from this list could
> improve the track record of this particular open system, but that
> isn't the point. The point is that the solid reliability of
> proprietary systems, properly applied, gets compared unfavorably
> with less expensive open systems that don't work as well. In the
> case of my client, they claim that their favored system is better
> because it is open. Strangely, if I brought my price down to their
> number, though, they say they would buy it.

On this point, I suspect the problems are due, at least in part, to the fact that we define open systems quite differently. Someone using Visual Basic to control some I/O does not constitute an Open System in my book. For that matter, why would an open system have to be any different than a closed one? If you took a closed one and published the source and schematics, would it suddenly stop working? Even with commodity-class hardware, you have to have decent software to have reliability and predictable results.

> (That example is only one of many I have from personal
> experience. I do not mean to generalize that to all open systems,
> though. I have personally implemented successful open systems. I'm
> defending proprietary, not dismissing open.)
>
> Finally, on leveraging the stuff that has become commodity rather
> than relying upon proprietary catalog items: Who determines, ten
> years later, that a certain commodity item is form, fit, and
> function compatible with the original? Eight-tracks and Betamax are
> the usual targets to bring up at this point, but for this audience I
> will mention pre-IDE disk drives and 256K memory sticks that we were
> all using ten years ago. Without a brand-name manufacturer and a
> believable commitment to spare parts, you can't rely on the market
> still offering the spare you need ten years from now.

With commoditization and standardization you don't need to maintain spares. I can drop my application on a whole new PC economically as long as there's nothing special about the hardware. I can buy a new PC to run it on for the cost of one of those pre-IDE hard drives. Why would I want an old MFM drive (at full price or more) that's been sitting for years? And I'm very confident that Linux will run on the PCs we have ten years from now, only much faster. And I'm betting that good ol' Ethernet is still around. And if I was worried, I could recompile the version I'm using on the new hardware, because I own the source.

> Ten years from now, when a rollercoaster, launch pad, or juice plant
> in my territory is down due to a component failure, I know that the
> owner will be able to get a spare part quickly, probably out of
> local inventory. Can anyone guarantee that a generic ISA Ethernet
> card will still be available at any price? Curt--- that US$400
> price SHOULD include that longterm proprietary support I value so
> much!

See above

> This kind of longterm stability is very important to owners that
> will keep their systems running indefinitely. It is not marketing
> hype--- it is the result of a long, consistent track record. It is
> a reputation that is easily lost, as Westinghouse found out fifteen
> years ago. Thus, perception is more important than Curt allows.

Company XYZ may be out of business in ten years. If you have proprietary gear, you are SOL. If you have an Open System, any competent programmer can keep your system working indefinitely on contemporary hardware if necessary.

> Generic is a particular problem in tightly configuration managed
> systems.
>
> When you ride a rollercoaster, would you be satisfied with your
> safety if you thought that maintenance workers had discretion to
> substitute generic components, including software downloaded from
> the Web? Or, would you rather that the system is only allowed to
> carry guests if it can be proven that the system is identical to
> what passed the Acceptance Test Procedure at commissioning time?
> Doing that requires catalog numbered, proprietary components.

I see no difference from PLCs if people are allowed to play with the code. You would use certifiable hardware in this case, and I believe there is a certifiable version of Linux that has met FAA requirements. Like I said before, simply because it's open doesn't imply it's different or of lesser quality. Almost all of those precautions have nothing to do with being proprietary. Many of the commodity producers are ISO9XXX compliant and would meet the lot traceability requirements. This wouldn't be a run-of-the-mill application for proprietary hardware either.

> Notice that I am not denying the advantages Curt mentions. Many of
> you work on projects where the things I bring up are not important.
> I trust all of you to choose the best solutions for your clients,
> and many of those solutions will be proprietary.

For my part, I merely want to counter the FUD and misperception that Open Source software and commodity hardware can't be as good as or better than their proprietary counterparts. The amount of
hardware of all types that goes to obsolescence without ever having failed bears this out. And good software is good regardless of the license. All that are now commodities were once proprietary and specialized.

Regards

cww

By Randy DeMars on 16 August, 2000 - 12:17 pm

I haven't been following this discussion too closely, but this section brings up a concern that I have about PC-based control vs. traditional PLCs. I would like to hear some opinions and explanations of how any of you may be
handling this.

Suppose we manufacture a machine that uses PC-based control instead of a PLC, and send this to one of our customers. After several years they have a problem with part of the computer system and need a replacement. The original hardware is no longer available and current hardware does not
support the software used. (We have seen this situation before - system ships in '92 with Win 3.0--> touch screen fries in '98 --> new version of touch screen does not support Win 3.0 --> search for compatible replacement --> engineering required to reconfigure system with new drivers, etc).

I realize that the PC-based solution can offer tremendous advantages, but I have a hard time getting past this problem. We manufacture capital
equipment, and our customers typically perform their own maintenance. Trying to get our application running on new hardware may be beyond their level of expertise, whereas installing a new PLC and downloading the program is not a problem.

We have been shipping PLCs on our equipment since the early '80s and except for a few of the very old systems, spare parts are still readily available. I realize that some of the industrial computer vendors will support their hardware for a certain length of time. What kinds of timeframes are you seeing?

Thanks in advance
Randy DeMars

By Curt Wuollet on 17 August, 2000 - 2:53 pm

Randy DeMars wrote:

> I haven't been following this discussion too closely, but this
> section brings up a concern that I have about PC-based control vs.
> traditional PLCs. I would like to hear some opinions and
> explanations of how any of you may be handling this.
>
> Suppose we manufacture a machine that uses PC-based control instead
> of a PLC, and send this to one of our customers. After several
> years they have a problem with part of the computer system and need
> a replacement. The original hardware is no longer available and
> current hardware does not support the software used. (We have seen
> this situation before - system ships in '92 with Win 3.0--> touch
> screen fries in '98 --> new version of touch screen does not support
> Win 3.0 --> search for compatible replacement --> engineering
> required to reconfigure system with new drivers, etc).

The problem here is the forced upgrade/termination of support from the well-known vendor. Guaranteed investment destruction. This can bite you many more ways; it's just that this one is the most frustrating. With an Open Source OS this goes away. There is less incentive to drop earlier versions, and since the source is available, the new driver can be backported, or the application can be compiled on the new version. In your example, the touch screen on a Linux box would most likely be serial and probably would just work, because a Linux application would very likely use the same interface to the serial character driver regardless of kernel or serial driver version. This is not an accident; these APIs are carefully maintained for just this reason.
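As a rough illustration of that stable interface, the sketch below opens a serial port through the ordinary POSIX calls, using only the Python standard library. The device path and baud rate are assumptions for a typical serial touch screen, not any particular product.

import os
import termios

def open_serial(path="/dev/ttyS0", baud=termios.B9600):
    # Open the serial character device and set basic line parameters.
    fd = os.open(path, os.O_RDWR | os.O_NOCTTY)
    attrs = termios.tcgetattr(fd)
    # attrs = [iflag, oflag, cflag, lflag, ispeed, ospeed, cc]
    attrs[2] = termios.CS8 | termios.CREAD | termios.CLOCAL   # 8 data bits, receiver on
    attrs[4] = attrs[5] = baud
    termios.tcsetattr(fd, termios.TCSANOW, attrs)
    return fd

# fd = open_serial()
# data = os.read(fd, 64)    # raw bytes from the touch screen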


> I realize that the PC-based solution can offer tremendous
> advantages, but I have a hard time getting past this problem. We
> manufacture capital equipment, and our customers typically perform
> their own maintenance. Trying to get our application running on new
> hardware may be beyond their level of expertise, whereas installing
> a new PLC and downloading the program is not a problem.

This can be mitigated and avoided by burning a CD that includes the OS, the application, and the configuration. Of course, to be legal, this too implies an Open Source OS. In use it's just like the CDs that ship with a lot of Windows systems. When it gets too hosed up, insert CD and boot. If the only thing that is done with the machine is control (strongly recommended), they are right back in business fast. Or just load a minimal system so you can dial in and do the rest.

It is also a good idea to not ship anything that your application doesn't require. If all the system does is what it's supposed to, people don't play around. I've found that Linux helps a lot in this regard; you don't have people loading junk on the machine. In the bad old days, it was quite common for someone to load an application on the PC and mess things up. As more people use Linux I suppose it will lose this advantage, but for now, it really cuts down on user problems. There are lots of ways to make it easy, too bad they all require some forethought :^)

We sell PC-based test equipment that runs on Linux and support costs have been minimal. Windows was unsupportable for a small shop like ours; we never let it out the door, our own use kept us too busy. I think this is a big part of the bad impression many have about using PC's.

> We have been shipping PLCs on our equipment since the early '80s and
> except for a few of the very old systems, spare parts are still
> readily available. I realize that some of the industrial computer
> vendors will support their hardware for a certain length of time.
> What kinds of timeframes are you seeing?

It's far better to not depend on specific hardware. If all you need is standard PC facilities and Ethernet, for example, it should work on a new box. There are many apps written years ago that still load and work on new hardware. This is probably by accident, but with a little thought and anticipation it can be achieved intentionally. Standardization of the environment is a PC strong point; we might as well use it.

Regards,

cww

By Roger Irwin on 21 August, 2000 - 12:43 pm

> When it gets too
> hosed up, insert CD and boot.

Sometimes, I set the root partition up read-only, and have just a small rw partition that is initialised with a clean copy of /var at boot and contains the home directory, which holds any app configs in simple ASCII files they can print out if they want to. Or I use Python scripts; most people can figure out how to do simple mods to those, such as changing timeout values etc.
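For example, the whole "maintenance edit" can be something like the snippet below; the file name and key names are hypothetical. Anyone can open the ASCII file in an editor, or run a little script like this to change one value.

CONFIG = "/home/machine/app.conf"           # hypothetical config path

def set_value(key, value, path=CONFIG):
    # Rewrite one "key = value" line in a plain ASCII config file.
    lines, found = [], False
    with open(path) as f:
        for line in f:
            if line.split("=", 1)[0].strip() == key:
                line, found = "%s = %s\n" % (key, value), True
            lines.append(line)
    if not found:
        lines.append("%s = %s\n" % (key, value))
    with open(path, "w") as f:
        f.writelines(lines)

# set_value("comm_timeout_ms", 500)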

Randy:

There are compatibility issues with open source. Just because the source is available doesn't mean that interfaces don't change. A case in point is when Linux underwent a shared library change.
Open source just means inexpensive source; it doesn't mean it was well developed or well documented.

Sam

By Blunier, Mark on 18 August, 2000 - 4:17 pm

> There are compatibility issues with open source. Just because the
> source is available doesn't mean that interfaces don't change.

There are compatibility issues for any software that changes. It doesn't matter if it is open source, closed source, proprietary, free, etc.

> A case in point is when Linux underwent
> a shared library change.

This is a pretty vague example. What kind of shared library change are you talking about? When Linux libraries are changed, it is to add new APIs, or fix broken APIs that don't work as documented. If you are talking about the switch from libc5 to libc6, you are also spreading misinformation. You can still use libc5 and libc6 at the same time.

> Open source just means inexpensive source it doesn't mean it was
> well developed or well documented.

This is wrong. Open source does not mean inexpensive source at all. Open source means you have the source. The source may be free, or it may come with a purchase price. You may have a license to make changes, you might not.

Open source may not be well developed, but at least you have the opportunity to look at the code and make that decision before you run it, instead of needing to run it to find out if its bad.

Closed source programs aren't always well documented either, and since you don't have the source it's much more difficult to figure it out on your own, or find someone who can.

But getting back to the message that you responded to, to bring things back into context: open source doesn't make new hardware work automatically, but if you have the source (and a license to change it), it is still possible for you to get your system running again even if the vendor won't, or won't do it for a reasonable price.

>This is a pretty vague example. What kind of shared library
>change are you talking about? When Linux libraries are changed,
>it is to add new APIs, or fix broken APIs that don't work as
>documented. If you are talking about the switch from libc5 to
>libc6, you are also spreading misinformation. You can still use
>libc5 and libc6 at the same time.

My original statement was about the change in the library format. It has been a while since it happened. Back when I used Linux regularly, the shared library format itself was changed. I can't remember what the formats were; I think one of them was similar to how Sun did theirs. The point is that just as Microsoft changes their software platform, so can "open" source. The change between different versions of libraries is also a maintenance problem. Yes, you can run multiple versions of the same shared library, but it is a maintenance issue. My point is that change can happen on any platform. People blame proprietary software companies like Microsoft, Sun, HP, IBM, SGI and others for making changes in their operating system that cause problems for them down the road. Well, the same thing can happen with any operating system. If Linux is not in vogue in 5 years then someone can end up in a situation like the example where the computer hardware is obsolete, but the OS can't handle the new hardware. In either case you will have to write your own driver. What's the difference?

>This is wrong. Open source does not mean inexpensive source
>at all. Open source means you have the source. The source
>may be free, or it may come with a purchase price. You may
>have a license to make changes, you might not.
>
>Open source may not be well developed, but at least you have the
>opportunity to look at the code and make that decision before you
>run it, instead of needing to run it to find out if its bad.
>
>Closed source programs are always well documented either, and
>since you don't have the source its much more difficult to figure it
>out on your own, or find someone that can.
>
>But getting back to the message that you responded too, to bring
>things back into context, open source doesn't make new
>hardware work automatically, if you have the source (and license
>to change it), it is still posible for you to get your system to run
>again even if the vender won't, or won't do it for a reasonable price.

Wrong for you, maybe. I don't see the benefit of having source to the operating system. I believe that you should minimize the software that you develop, and it is hard for me to see a good business reason for fiddling with operating system source or anything else that I can get commercially. If I didn't think NT or OPC was going to perform, then I wouldn't use them. I wouldn't use software just because I have the source to it. I use Samba, but I do that because it is the best software for the job, not because I can get to the source.

By Randy DeMars on 24 August, 2000 - 2:43 pm

Somehow my question turned into a discussion about the relative merits of open source operating systems vs. Windows NT. This is not really what my question was trying to get at. Specifically, my point was that if I use a PLC then my customer can get a machine running again after a hardware failure with no help from me. If I use a PC, then this is not the case, unless the new hardware is 100% compatible with what the machine shipped with. Once we start talking about recompiling, etc., my engineers are required.

My question was, how have other OEMs dealt with this situation?

Randy DeMars

By compu-weigh-jvp on 25 August, 2000 - 10:15 am

I think this problem only exists on PC's if using Windows based software.

However, if you are really honest then you will admit that if a breakdown occurs on a PLC installation 5 years after the install, the chances are that that model of PLC is no longer available and the replacement needs address changes, or is mounted differently, or...

We have been installing PC based batch weighing systems for nearly 12 years now, and have never had the problem where an upgrade would not run on the hardware. In the early days we had to recompile without the 286 switch to run on an 8088, but that was the only change.

We do encourage customers to upgrade their hardware as the years go on (the Y2K fiasco, for example). From booting machines on floppies, we have progressed to RomDisks, then to Disk On Chip. The same 48-bit I/O boards now plug into ISA backplanes instead of motherboards. We have put intelligence on our I/O boards, but they still work on 8088s.

I am afraid I am a control freak: I won't allow somebody at Microsoft to decide my workload for the next 12 months. OK, so the screens don't look as fancy, but the machine works perfectly to the customer's satisfaction.

However, now I am at a crossroads.
Some of my customers want their reporting info downloaded to their Windows box.
(What version? 3.1? 95? 98? 2000?)
Maybe I just provide them with a floppy?

Regards,

Jan van de Poll
Technical Manager
Compu-Weigh Pty. Ltd.
jvp@Compu-Weigh.com.au

By Curt Wuollet on 23 August, 2000 - 8:33 am

There is strong evidence that it is made better by peer review, but I agree that there are no guarantees. At least you can look at it and find out. And you have a much better chance of dealing with issues if you have the source. Most of the libc5/glibc issues required only a recompile. Of course, you need source for that. MS users still face DLL hell every day, and every application installed is a time bomb. I'll take my issues versus their issues any time.

Regards

cww

By Roger Irwin on 23 August, 2000 - 9:17 am

> There are compatibility issues with open source. Just because the
> source is available doesn't mean that interfaces don't change. A
> case in point is when Linux underwent a shared library change.

Yes, just look. You see, you can choose whether to make a package a.out or ELF (which was the big change in the shared libraries). Major releases of Linux switched to ELF 5 years ago, but you can still compile and support a.out, and you can make a completely a.out system.

The same is true of another recent major change, from libc5 to glibc.

Of course this is looking at the problem from a closed source point of view, where the problem is supporting old software in new environments. Open source offers a much more desirable solution,
re-compile the software to make a new version, or to make it run on different hardware, or just better (for example compiling with Pentium specific optimisations enabled).

Although OSS is generally available in the form of pre-built binaries, the main distribution is considered to be the source, and this will include makefiles and utilities which will automatically configure the options to best suit your platform and then build your optimised package.

> Open source just means inexpensive source it doesn't mean it was
> well developed or well documented.

Open is not necessarily inexpensive; it can cost more. As for development or documentation, that varies from software to software irrespective of open source issues.

Mind you, there is an implication here that OSS is perhaps not as well documented, or perhaps not as well written.

I am not going to argue, just point you to the facts. For example, the Linux Documentation Project, which covers everything from individual command options through to how the kernel works, may be viewed online at

http://www.ldp.org

As for the coding, well, you can see for yourself, go look at it;-)

But how did this get into the protocols thread? Presumably because many people confuse open standards with open software. It is like confusing opening a door with open heart surgery: "open" means the same, but it is meaningless without a context ;-)

For example, the telecoms industry is rigorously conservative and you can expect to pay thousands of dollars just to get documentation.

But their standards are completely open and interoperable and do not build on vendor specific technology. And it works.

By Ing. Pietro F. Maggi on 1 August, 2000 - 1:53 pm

> Cannot totaly agree. Firstly many protocols have different characteristics
> for different requirements. CAN goes short distances, but with tightly
> controlled timing. Profibus DP is easy, but costly and not scaleable.

Just curious, what do you mean by the statement that CAN has tightly controlled timing?
CAN is an event-based protocol with automatic collision arbitration that can hold back a message object from being transmitted if a higher-priority message object is waiting. With CAN you only get best-case or average-case timing (as far as I know ;^).
Another point is the short distance of a CAN link...
Simply speaking, the maximum length of a link depends on the bit rate: you can only get 40m at 1Mbit/s, but you can get 500m at 125Kbit/s and even 1000m at 50Kbit/s. And this is without a repeater.

With Profibus DP you can obtain a maximum link of 9600m with 93.75Kbit/s AND 7 repeaters.

> Then there are hardware constraints, adding an ethernet controller and
> TCP/IP stack to a little microcontroller based temperature regulator
> would up the cost 4 fold as well as increasing size and power
> consumption whereas MODBUS, by contrast, is ideally suited to such a
> task and is freely implementable without costly licences and/or
> association fees.

I agree that ethernet is not the right choice, but I think that, actually, we have too many "standard" links available. Even if you choose a low-level bus such as CAN, you then have multiple choices for the higher layer (CANopen and all the family).

> Then there are the manufacturers. Everybody makes products that conform to
> a 'standard'. Of course what they really do is just take what they have always
> done and create an 'independent open standards body' to promote it. Well
> OK, that last remark is a bit cynical, but you have to admit thier is a bit
> of truth in it as well;-)
>
> Have you ever asked yourself 'why do we not all drive the same
> standard type of automobile', 4 door sedan, 5 seats, standard
> sized boot that takes a standard set of luggage.......
>

If the manufacturer creates a product with an "open standard" you are lucky... ever tried to communicate with a Siemens S7/200 using PPI? ;-P

Best Regards

Pietro F. Maggi
http://www.studiomaggi.com/ p.maggi AT studiomaggi DOT com

By Rob Hulsebos on 2 August, 2000 - 2:41 pm

>> Cannot totaly agree. Firstly many protocols have different
>> characteristics for different requirements. CAN goes
>> short distances, but with tightly controlled timing.
>> Profibus DP is easy, but costly and not scaleable.

What is "not scaleable" about DP?

>Another point, is the short distance of CAN link...
>Simply speeking, the max distance of a link comes from the link
>length, you can only get 40m for 1Mbit/s, but you can
>get 500m with 125Kbit/s and even 1000 with 50Kbit/s.
>And this without repeater.

With even lower bitrates longer distances can be achieved. I heard about bitrates of 5 Kbit/s and 5 km. Haven't tried it, though.

>With ProfibusDP you can obtain a maximum link of 9600m with
>93.75Kbit/s AND 7 repeater.

According to the Profibus spec you can only have *three* repeaters maximum. But that's theory; current practice allows more than this (luckily!). I have also heard of distances of 90km with fiber.

Greetings,
Rob Hulsebos

By Steve Cliff on 4 August, 2000 - 3:52 pm

For what it is worth, CAN limits the total width of the network, including any propagation delays in the network, to at most one bit time. The bitwise arbitration used by CAN requires that every node "see" a bit while the transmitter is still transmitting it in order to do the bitwise back-off arbitration. This is at the core of CAN and is a hard limit. The slower a CAN network runs, the wider (longer) it can be, but at some point the bit times and widths just don't make sense anymore. A CAN system's size is directly related to its bit propagation time. Signal degradation does not matter.
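To put rough numbers on that hard limit, here is a back-of-the-envelope sketch (my own figures and fudge factors, not taken from any CAN specification) that estimates maximum bus length from the bit rate, assuming the signal must make a round trip within a fraction of one bit time:

```python
# Rough CAN bus length estimate from the arbitration constraint described above.
# All constants are assumptions for illustration only:
#   - cable propagation delay of about 5 ns per metre
#   - about 250 ns of fixed transceiver/controller delay per round trip
#   - roughly 70% of the bit time usable before the sample point

PROP_NS_PER_M = 5.0
NODE_DELAY_NS = 250.0
USABLE_FRACTION = 0.7

def max_bus_length_m(bit_rate_bps):
    bit_time_ns = 1e9 / bit_rate_bps
    budget_ns = USABLE_FRACTION * bit_time_ns - NODE_DELAY_NS
    # the dominant bit must travel to the far end and back within the budget
    return max(budget_ns / (2 * PROP_NS_PER_M), 0.0)

for rate in (1_000_000, 500_000, 125_000, 50_000):
    print(f"{rate // 1000:>5} kbit/s -> about {max_bus_length_m(rate):.0f} m")
```

The outputs land in the same ballpark as the 40m at 1Mbit/s and several hundred metres at 125Kbit/s figures quoted earlier in the thread, which is all a sketch like this can claim.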

BTW, this is similar to the way the 64-byte minimum frame size limits the maximum width of an Ethernet 10BaseT segment -- to make sure the two farthest-away nodes both see a collision between them within the specified time... Signal degradation does not matter.

Since Profibus uses RS-485 and no collision detection or monitoring or arbitration (depending upon timing and command/response to avoid collisions), Profibus length depends upon signal degradation and some overall node response parameters. Network size is only indirectly related to bit rate. This is the same as for most of the RS-232 async communications we are all used to...

Ah, details, details.

steve

By Curt Wuollet on 19 June, 2000 - 10:01 am

Hi Tassos

There is no good reason (except political and commercial) to have so many. A few variations perhaps, but not dozens. Ethernet will wipe out a lot of low-level protocols because that is driven by computer professionals who can see how ridiculous the situation is. But the proprietary vendors will then destroy Ethernet by plopping dozens of incompatible, non-interoperable protocols on top of it instead of encapsulating their protos in the existing Internet protos. Eventually there will be a return to reason, driven, again, from the outside. But greed and avarice are much more prevalent than altruism and concern for the customer, so expect another couple of generations before people realize the enormous advantage of sharing open protocols.

Curt Wuollet, Owner
Wide Open Technologies

By Unique Systems on 19 June, 2000 - 10:02 am

1. Politics and Money
2. Some protocols are used for different levels of communications. For example, DeviceNet is used primarily for device-level communications, PLC to discrete I/O (limit switch, motor starter, etc.). Profibus can carry larger amounts of data and might be used to communicate with a whole remote I/O rack. Ethernet is primarily used for information that is not time critical, such as information going from a PLC to an accounting program in the front office.
These are VERY general descriptions; it is hard to categorize a specific protocol for one specific use because they can be used many different ways depending on the application.

Check out this link for a comparison of many of the industrial networks.
http://www.synergetic.com/compare.htm

Steve

By Hullsiek, William on 19 June, 2000 - 10:21 am

> What is the
> reason (except the commercial) of having so many "different"
> communication protocols instead of having one common protocol
> for all systems which will give the possibility (if the companies
> wanted to do so) to connect different systems (PLC, SCADA,
> SENSORS etc) from various companies?


We attempted to do this in the 1980's, with the Manufacturing Message Specification (MMS).

MMS has been commented on quite a bit, so search the archives on the list.

What we learned is that we need a COMMON OBJECT MODEL that is vendor extensible.
In the 1980's, most vendors had brain-dead processors, with limited memory space and a static architecture. The overhead of mapping vendor-specific stuff to a common object model, then putting that into a common protocol, was too much.


There was also the issue of "software development" costs. The internet protocol suite is so prevalent today because the federal government paid for most of the research and development costs. Inexpensive code and specifications result in lower-cost products. Hence the popularity of Modbus.

In a nutshell, the 'up-front capital cost' and 'knowledge cost' of proprietary interfaces are cheaper. The last analysis I did (4 years ago) indicated there was a payback period of 5-7 years for open protocols, which is why MMS makes sense for utilities.

OPC is a pretty good hack that is available today. However, it is typically not a native protocol to PLC's, DCS and SCADA.


William F. Hullsiek
Software Engineer

Relative to what William Hullsiek said:

OPC is much more than a "hack". It may not be built into PLC's but it is the closest thing to a common object model for interfacing with controls. OPC is based on distributed network objects (DCOM). MMS is a protocol and not an object model. The major SCADA systems have adopted OPC. You can buy OPC servers from any number of vendors. MMS would be dead if it wasn't for the utilities adopting it.

To Tassos Polychronopoulos:

OPC is your best bet for finding a common interface to control systems. There is no logical reason for all the various protocols. Utilizing OPC gives your software independence from the underlying control networks.

By Ralph Mackiewicz on 23 June, 2000 - 12:29 pm

> OPC is much more than a "hack". It may not be built into PLC's
> but it is the closest thing to a common object model for interfacing
> with controls. OPC is based on distributed network objects (DCOM). MMS
> is a protocol and not an object model. The major SCADA systems have
> adopted OPC. You can buy OPC servers from any number of vendors. MMS
> would be dead if it wasn't for the utilities adopting it.

1. MMS does have an object model. It is called the "Virtual Manufacturing Device Model" or VMD Model. The approach of using a virtual object model is a necessary step in building an application protocol that can support interoperability. The utility work has simply expanded the model to include application specific models like meters, relays, RTUs, etc.

2. Some clarification is needed: OPC is an API based upon COM. OPC contains a data model that is not application specific. DCOM is a protocol for distributing COM function calls, like OPC, across a network. MMS is not an API. MMS is a protocol with a non-application specific data model. MMS does not distribute function calls like DCOM. MMS is an application level protocol for supporting real-time data access and supervisory control. While both OPC and MMS contain data models, the OPC data model is a subset of the MMS data model and they are both completely compatible with each other. OPC Servers are available for MMS just as they are available for Profibus, Modbus, etc. etc. etc. etc. OPC is not a replacement for MMS, Modbus, Profibus, DH, etc. etc. etc. etc. nor vice versa.

3. MMS would not be dead regardless of any utility activity. It is used for applications outside of utilities that require a non-vendor
specific application protocol that runs over standard internetworking gear and that supports a richer set of features than just reading/writing PLC registers with primitive addresses like "4001". There are applications where client discovery of device objects and object descriptions; and the handling of very large complex typed data sets are required outside of a Microsoft O/S.

Regards,
Ralph Mackiewicz
SISCO, Inc.

By Sam Moore Square D Company/Groupe Shneider on 26 June, 2000 - 2:24 pm

Ralph:

What I meant was that OPC is based on distributed object technology (DCOM) and MMS is an application protocol that is implemented as a linkable library of function calls. I didn't go on to say that if you want to develop MMS applications you have just a couple of vendors that support MMS development environments.

If you want to plug MMS in the back of an OPC server you could, but I wonder why you would do that. MMS is based on a truly dead network stack called OSI and to get it to work on TCP you have to load a couple of the old OSI network layers on
top of TCP.

MMS is not object based.

Sam

By Rob Hulsebos Philips CFT on 27 June, 2000 - 3:50 pm

>MMS is not object based.

What makes you say this?
Do you mean ' object' in the meaning of 'object oriented programming' ?

I do not know MMS very well, but since FMS is based on MMS (only simpler), my experience is that the 'objects' in FMS are just a collection of bytes, with a meaning administered somewhere. They are definitely not objects in a C++ or Java way. No class is associated with them.

Greetings,
Rob Hulsebos

By Sam Moore Square D Company/Groupe Shneider on 29 June, 2000 - 7:57 am

Yes. And in a distributed object architecture you can make connections to objects across a network.

By Georg Hellack on 28 June, 2000 - 8:27 am

Dear Sam,

While OPC may be used over DCOM, in practice it is used locally, based on COM. DCOM can become a nightmare with all the security/authorization relevant parts in it. Looking at the present technology promoted by Microsoft, like COM+, which is MSMQ based, and SOAP, which is XML and HTTP based, it makes you wonder about the future of DCOM.

Just a few facts that have so far apparently missed your attention :-)

MMS is a message specification. It is therefore separated from the way the MMS messages are transferred from A to B, although there is a set of existing profiles and the most widely used is OSI. This is IMO the reason for much of the criticism towards MMS. However, there are products that offer MMS over TCP/IP with an empty OSI session and presentation layer, as specified by the ITU. The different profiles can exist in the same MMS product simultaneously and the profile to be used is determined by the connect request from the initiating MMS node. BTW: have you ever bothered to look with a network monitor at the additional protocol overhead on top of TCP/IP that is generated with DCOM?

Why is MMS not object-based ? Does object-based equal object-oriented ? The properties of an OO-system are:

1. Inheritance

2. Encapsulation

3. Polymorphism

While you may argue that 'inheritance' is perhaps not realized by MMS, encapsulation and polymorphism are (e.g. the different meaning of 'set'/'get' methods for different MMS 'classes' like named variables, domains, etc.). How this object-orientation is reflected in an API is a different issue. But you may miss the object-orientation in DCOM (in case there is any) if you use it from a C-language program and not with the ATL C++ lib.

Your argument on the limited number of MMS vendors misses the point. There are already several, and the number will increase when there is an increase in the market (which is underway with the use of MMS in utilities). As the specs and all necessary information are publicly available (aside from the exaggerated fees charged by the IEC ;-), anyone can develop such a product, although it may not be a weekend job like writing one's own Modbus driver apparently is for some people on this list ;-). And using Microsoft proprietary technology with only a single source of supply appears to be no issue for a lot of companies in automation.

Best regards,

Georg Hellack

GHF automation GmbH
Technologie-Park Herzogenrath
Kaiserstr. 100
D-52134 Herzogenrath
e-mail: hellack @ ghf-automation.de
http://www.ghf-automation.com

By Sam Moore on 29 June, 2000 - 10:21 am

>Dear Sam,
>
>While OPC may be used on DCOM, in practice it is used locally
>based on COM. DCOM can become a nightmare with all the
>security/authorization relevant parts in it. Looking at the present
>technology promoted by Microsoft like COM+, which is MSMQ
>based, and SOAP, which is XML and HTTP based, it makes you
>wonder on the future of DCOM.

I don't see a problem with this. And I don't think that the SCADA companies do either.

>Just a few facts that have so far apparently missed your attention
> :-)
>
>MMS is a message specification. It is therefore separated from
>the way the MMS messages are transferred from A to B, although
>there is set of existing profiles and the most widely used is OSI.
>This is IMO the reason for much of the criticism towards MMS.
>However there are products that offer MMS over TCP/IP with an
>empty OSI session and presentation layer as specified by ITU.
>The different profiles can exist in the same MMS product
>simultanously and the profile to be used is determined upon the
>connect request from the initiating MMS node.
>
>BTW: have you ever bothered to look with the network monitor in
>the additional protocol overhead on top of TCP/IP, that is
>generated with DCOM ?


I don't see a need for MMS.

My point is that you have the extra overhead for simple messaging. You still don't have objects.

>Why is MMS not object-based ? Does object-based equal object-
>oriented ? The properties of an OO-system are:
>
>1. Inheritance
>2. Encapsulation
>3. Polymorphism

Yes. But also distributed objects that are integrated completely into the OS and every aspect of the development environments. I will get beat up over this, but whether or not my "open" UNIX friends want to admit it, the dominant operating system in the world and in the controls arena is Windows NT. Pick any of the major SCADA packages and you will see this.


>While you may argue that 'inheritance' is perhaps not realized by
>MMS, encapsulation and polymorphism are (e.g. different
>meaning of 'set'/'get' methods of different MMS 'classes' like
>named variables, domains etc.) It is a different issue, how this
>object-orientation is reflected in an API. But you may miss the
>object-orientation in DCOM (in case there is any), if you use it
>from a C-language program and not with the ATL C++ lib.
>
>Your argument on the limited number of MMS vendors misses the
>point. There are already several and the number will increase,
>when there is an increase in the market (which is underway with
>the use of MMS in utilities). As the specs and all necessary
>information is publicly available (aside of the exaggerated fees
>charged by the IEC ;-), anyone can develop such a product,
>although it may not a weekend job like writing one's own Modbus
>driver is apparently for some people on this list ;-). And using
>Microsoft proprietary technology with only a single source of
>supply appears to be no issue for a lot of companies in
>automation.


I don't see anyone running to develop MMS as a messaging system.
What I see is OPC sitting in front of the various controls network protocols like Modbus. MMS doesn't seem to have a place. It just doesn't make sense to put it behind the OPC server and it isn't adding value in front of it. Maybe I am wrong. I would like to hear someone from one of the major SCADA companies commenting on it.

By Ralph Mackiewicz on 6 July, 2000 - 8:31 am

> What I see is OPC sitting in front of the various controls
> network protocols like Modbus. MMS doesn't seem to have a place. It
> just doesn't make sense to put it behind the OPC server and it isn't
> adding value in front of it.

MMS' place in an OPC world is exactly like that of Modbus. Here is the scenario: you have a MMS/UCA based device that you want to connect to an HMI/SCADA system on Windows NT. The HMI/SCADA supports OPC. You use a MMS/UCA OPC server to connect the HMI/SCADA to the device.

I will restate this again: OPC is not a replacement for MMS (or Modbus) and MMS (or Modbus) is not a replacement for OPC. OPC is an
application programming interface. MMS and Modbus are application layer protocols. When Windows is the only O/S and every device manufactured has an OPC server supporting DCOM built into it, then
neither Modbus nor MMS will be needed. I don't think it is snowing outside the devil's house yet.

Regards,
Ralph Mackiewicz
SISCO, Inc.

By Ralph Mackiewicz on 29 June, 2000 - 12:03 pm

> What I meant was that OPC is based on distributed object technology
> (DCOM) and MMS is an application protocol that is implemented as a
> linkable library of functional calls.

Certain MMS products are implemented as linkable libraries of function calls. Others are implemented as DLLs, DDE servers, MMS executable servers, monitors, OPC servers, SCADA, HMI, EMS, etc. etc.

> I didn't go on to say that if you want to develop MMS applications
> you have just a couple of vendors that support MMS development
> environments.

Anyone can buy the standards and develop their own MMS. There are a number of companies that have done this. The couple of vendors you
refer to, like my company, provide a service to eliminate the protocol development effort so that OEMs can focus on application development. So the point is....?

> If you want to plug MMS in the back of an OPC server you could, but I
> wonder why you would do that.

The answer is obvious: to connect an OPC enabled application to an MMS/UCA/ICCP enabled device or application. Exactly like any other OPC server that is out there.

> MMS is based on a truly dead network stack called OSI and to get it
> to work on TCP you have to load a couple of the old OSI network
> layers on top of TCP.

So what? The presentation and session functions you refer to are completely transparent to both users and developers (using the development tools available from a couple of vendors). They add no
significant overhead to the system. They do provide a simple and interoperable mapping to the TCP/IP stack that is endorsed by the IETF and allow multiple applications to coexist independently on the same TCP socket.

> MMS is not object based.

MMS IS object based. The VMD model may not be object-oriented in the classical sense as it is applied to programming techniques. However, the object characteristics of MMS allow it to map much more readily to object-oriented environments than register-access protocols which support integer variables via addresses.

Regards,
Ralph Mackiewicz
SISCO, Inc.
6605 19-1/2 Mile Road
Sterling Heights, MI 48314-1408 USA
T: +810-254-0020 F: +810-254-0053
mailto:ralph@sisconet.com http://www.sisconet.com

By Ralph Mackiewicz on 19 June, 2000 - 11:53 am

There are a number of reasons (all my opinion only) for this but the commercial reasons cannot be overlooked:

1. Having "one common protocol for all systems" assumes that there is only one set of problems to be solved. Unfortunately, the world is more complex than that. There are many different, and sometimes incompatible, requirements for industrial networking applications. No single solution can possibly fulfill all the requirements without significant tradeoffs in performance, functionality, price, etc. We can probably get by with a much smaller number of protocols than we have, but a single solution is not possible.

2. Dominant control vendors prefer solutions that they control (at least initially) for numerous commercial and technical reasons.

3. Most users don't seem to care about unification of communications protocols in the industrial control market. If they did, the
situation we have would not exist. There is no conspiracy to deprive us of interoperability. If customers insisted on interoperability between brands of PLCs and other controls the suppliers would respond or innovative companies would be formed to meet this need. Instead, the need is so small that this niche is serviced by (relatively)
small third party suppliers who provide this capability to the minority of customers that care about interoperability.


Regards,
Ralph Mackiewicz
SISCO, Inc.

The opinions given above should be attributed to me and not
my company because my company did not write them...I did.

By Andrew Piereder on 19 June, 2000 - 1:04 pm

Aside from commercial?

There are technical features that make one "standard" better than another for a certain application, but in my experience this is seldom a criterion in deciding for one standard over another. Most applications only use a fraction of the features and/or power a particular protocol can provide, and could legitimately use any one of several alternatives. In many, possibly most, cases the decision for a particular vendor's product entails a choice of a particular protocol.

Andy Piereder
Pinnacle IDC

By Warren Postma on 21 June, 2000 - 2:40 pm

> .... But, greed and averice are much
> more prevelant than altruism and concern for the customer, so
> expect another couple of generations before people realize the
> enormous advantage of sharing open protocols.

Open standards can promote lots of greed and avarice too. Look at Linux. Think there's no Greed and Avarice there? Sometimes we call that the "Economy", but it's really just greed and avarice, right? Not that it's all good, or all bad, just that that is what fundamentally drives it. It could be better, but it could also be much worse.

Now, closed standards are basically (i) naturally going to happen because of the way systems are developed, and (ii) going to remain closed because of the reasons you stated.

The best weapon is to convince the engineers developing systems to look at open standards first when they develop a new product. A new proprietary protocol is at best a non-issue, and at worst, a potential reason for customers not to invest in a new piece of control hardware or SCADA system, or whatever it is you're building.

Here are my predictions (never a good idea, but oh well):

1. TCP/IP for everything. This doesn't prevent you from making proprietary protocols on top of TCP/IP; however, the base standard TCP/IP plus proprietary protocols is eventually going to be seen as putting "new wine in old wineskins". The two are not compatible in the long run, and proprietary protocols will be relegated to small niches at worst.

2. Modbus over TCP/IP (and thus Ethernet, or whatever else) is going to be huge whenever you need to send digital bits, or 16- or 32-bit registers, between equipment and systems (see the sketch after this list).

3. SOAP is going to be huge. This is the successor to XML-RPC, and it involves using XML data formats to encapsulate a Sun RPC-like layer between both compiled and scripting languages.

4. The existing device-net/profibus stuff will never become as ubiquitous as we hoped, but will remain strong as a niche.

5. At a low level, where Ethernet and TCP/IP are too much, USB or potentially IEEE-1394 (aka FireWire) will be huge in the control-systems market, because the costs will be spread among consumer electronics, computing products and industrial products.

6. Good old RS-232 links with Modbus RTU protocol will never never die! :-)
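Since prediction 2 comes up a lot on this list, here is a minimal sketch (in Python, purely illustrative) of what a Modbus/TCP "read holding registers" request looks like on the wire. The register address, count and unit id below are made-up example values; the point is the framing: a small MBAP header in front of the same PDU that Modbus RTU carries over a serial link.

```python
import struct

def modbus_tcp_read_holding_registers(transaction_id, unit_id, start_addr, quantity):
    """Build a Modbus/TCP 'Read Holding Registers' (function code 0x03) request."""
    # PDU: function code, starting address, number of registers (big-endian)
    pdu = struct.pack(">BHH", 0x03, start_addr, quantity)
    # MBAP header: transaction id, protocol id (always 0), byte count of what follows, unit id
    mbap = struct.pack(">HHHB", transaction_id, 0x0000, len(pdu) + 1, unit_id)
    return mbap + pdu

# Example (illustrative values): ask unit 1 for 10 registers starting at address 0
frame = modbus_tcp_read_holding_registers(1, 1, 0, 10)
print(frame.hex())   # 0001 0000 0006 01 03 0000 000a
```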

Warren Postma
ZTR Control Systems
London Ontario Canada

By J-F Portala on 23 June, 2000 - 3:23 pm

I don't have a great deal of experience with communications protocols, but my reflex is to use open protocols every time I can.

I agree with Warren, the Modbus protocol will have a long life. The Modbus RTU protocol is an open protocol, and a number of devices can communicate via an RS-485 network. You can find PLCs with the communication protocol completely integrated (TELEMECANIQUE TSX Nano). I use these PLCs as external I/O for my PC. It is cheaper than dedicated I/O boards.
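As an aside, "open" here really does mean implementable from the published spec: the Modbus RTU framing these PLCs speak can be reproduced in a few lines. Below is a sketch (my own, in Python) of the CRC-16 that closes every RTU frame; the request bytes in the example are illustrative only.

```python
def modbus_crc16(data: bytes) -> bytes:
    """CRC-16 as used by Modbus RTU: polynomial 0xA001 (reflected), initial
    value 0xFFFF, appended to the frame low byte first."""
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xA001
            else:
                crc >>= 1
    return crc.to_bytes(2, "little")

# Illustrative request: read 2 holding registers at address 0 from slave 1
request = bytes([0x01, 0x03, 0x00, 0x00, 0x00, 0x02])
frame = request + modbus_crc16(request)   # the last two bytes are the CRC
print(frame.hex())
```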

I also work with CANopen, which is also a non-proprietary protocol. You can find a number of devices integrating this protocol: external I/O, encoders, frequency drives, PLCs...

In an installation, if you choose a well-known open protocol, you have a better chance of keeping your installation able to evolve and safe for many years.

Regards
J-F Portala
SoViLor company
jfportala@free.fr

By David Leese Dresser Valve Div., Halliburton on 21 June, 2000 - 3:13 pm

>But there's 50 types of oil and air filters for the cars,
>not to mention dozens of different engines, transmissions,
>tire types, brake pads, etc.

As I experienced last month, there are at least 2 variations of the tire stud for 1995 Chevrolet model 1500 pickup trucks. The studs are not interchangeable. If you strip one, you have
a 50/50 chance of purchasing the correct replacement at an auto supply.

If one manufacturer can't standardize a screw on the same model vehicle, what chance is there for COMM standards among thousands of manufacturers?

D. Leese

By Hullsiek, William on 10 August, 2000 - 12:35 pm

I am getting confused over some of these discussions regarding Open Vs. Proprietary.

Back in the 1980's and early 1990's, we defined an "Open" system as being one with well-defined "INTERFACES" that adhere to published standards (a standard being either de facto, like Modbus, or de jure, like TCP/IP or ISO). How you implement the interface is always "proprietary", but once it leaves the black box and is placed on the wire, it should always be "open" and interoperable.

An example is that a phone from Vendor A communicates with a phone from Vendor B. The internal electronics are different, but the interface to the phone network adheres to standards. You can buy the phone from one vendor, the interface cord from your local hardware store, and then phone service from either the cable company or the "former Baby Bell".

A "black-box" is okay, but when it requires me to buy the network cable, power cables, serial cables, plus replacement parts from the same vendor, then I have a concern. This adds to the "Total Life Cycle" cost and stifles competition.

In an Open System, you can readily replace vendor A with vendor B. I can share horror stories of clients who were LOCKED IN to proprietary systems because they had implemented proprietary infrastructures.

In the better-implemented systems, vendor A often uses an "open backbone" to communicate with its own components. This allows you to "optimize" for performance and throughput. But vendor A can share objects with other components using "open interfaces".

William F. Hullsiek
MES Software Engineer

By Roger Irwin on 15 August, 2000 - 10:54 am

> I am getting confused over some of these discussions regarding
> Open Vs. Proprietary.

Well yes, it is not that clear cut. OPC, for example, is an open standard. The Windows API is also an open standard (although it seems that 'undocumented' usage plays an important role). OPC can be implemented on non-Windows platforms. But Windows itself does not conform to standards, de facto or de jure; it makes its own rules up as it goes along. Given that it is very widely used, that does make it a sort of de facto standard, but standard is the wrong word when things get changed so quickly. This makes it very inconvenient to use OPC on other platforms, and thus to all intents and purposes a proprietary standard.

The telephone network is defined across the board by open standards, so anybody can implement
any element. Putting OPC in the telephone context would mean you can use any phone across any
telephone network EXCEPT that the combo must be a particular Lucent technologies chip, and they
only sell this chip mounted on a board which has a proprietary backplane connector which is only sold mounted on Lucent backplanes.........

Standards are not laws. Participants have to sincerely WANT to make an open interoperable standard. All too often that is not the case; the reasons are obvious and the examples are well known. OPC is not one of these, however. I believe OPC members DO want to interoperate, but they are also strangely hell-bent on interoperating on Windows and Windows alone.

Apparently (according to the OPC website) one of the principal benefits of being an OPC member is that you get to receive an annual sneak preview of where Microsoft is planning to go that year. Strange.

Of course OPC justly points out that they are only designing wrappers as a standard way of making IA devices available to Windows desktop apps, an important and commendable task. Trouble is, they have nothing to wrap. You can wrap assorted media formats in an AVI file, you can wrap a TCP/IP connection in a winsock, but you can't say "I am going to make a wrapper", period.

OPC is not a wrapper; it is convenient to say that at times, but it is being touted, and above all accepted, as a standard interface between standard computing environments and IA devices.
The basic problem with doing that is the plethora of standards out there. And that is how this thread started, lest we forget along the way! Somebody bemoaning the fact that there were so many standards out there, and straight away people said 'now OPC is becoming popular as a standard'. So what is it? A class wrapper or a protocol?

In reality users want a common protocol to go between the desktop and the field device, and as there are existing transports for COM objects they are saying 'that will do'. No matter that these transports were designed to solve different problems in different environments. No matter that these transports are not suitable for the field devices themselves; they are saying 'we have an egg carton, let's wrap eggs'. I hope you all like omelette.

So then the OPC people say 'hey, we are not tied to DCOM, we are looking into using XML for the transport'. XML is a great buzzword, and a real open standard. But it is not a communications protocol. It is a machine-readable metalanguage that allows an intelligent device to read the specification for itself, so the human does not have to read the spec and then specify it to the device. But what is it actually going to specify?

Now I am just a single simple idiot who just happens to have spent a large part of his career
designing both hardware and software to stuff bits down a piece of copper wire, so far be it from me to doubt the 250+ corporate members of OPC, or, even more pretentiously, to question the wisdom of a Microsoft evangelist. But, IMHO, they are up the creek with this one.

Never mind that they only intended to provide wrappers (actually they started out defining DDE profiles). What people expect of the OPC, and what many people actually think they are getting, is a standardised comms protocol. Now if OPC had actually worked to fill this void, they could have then presented their slab of meat to others to wrap, each in their own way. RTOS vendors would have wrapped it into libraries to sell to their IA field device customers, Linuxers would have happily done a free set of wrappers for their environments as a little warm-up exercise before configuring sendmail, and Microsoft, with all the money we are paying them, would, I hope, have produced a set of COM wrappers available as a downloadable service pack. Everyone is happy, AND, in a real distributed field device application, we could cut out that 'PC in the middle' with all those Modbus/Profibus/whatever protocols on board.

By Ralph Mackiewicz on 18 August, 2000 - 2:34 pm

> And that is how this thread started, lest we forget along
> the way! Somebody bemoaning the fact that there were so many
> standards out there, and straight away people said 'now OPC is
> becoming popular as a standard'. So what is it? A class wrapper or
> a protocol?

At the risk of being redundant:

OPC is an API!!!!!!

OPC is not a protocol!!!!!!

OPC can be used to build a wrapper. Cimplicity, InTouch, Fix, etc. all have OPC wrappers (called OPC clients) that allow them to attach to other OPC servers (a wrapper for an IA protocol). But the OPC specification itself is an Application Programming Interface (API) specification.

You should ignore any OPC evangelist who tries to tell you that OPC is a protocol and can be used as a replacement for any of the myriad of IA protocols that are out there. They are, quite simply, wrong. An API is not a replacement for a protocol. If you are communicating only between Windows nodes then DCOM might be a suitable protocol for that application (maybe). But the protocol (DCOM) issue is independent of the API (OPC) issue.
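For anyone still finding the distinction slippery, here is a toy sketch (my own, in Python, not the actual OPC COM interfaces; the class names and the driver methods are placeholders I invented) of an API sitting in front of two different protocols:

```python
from abc import ABC, abstractmethod

class DataAccess(ABC):
    """The API side: what the HMI/SCADA application is written against.
    Loosely modelled on the idea of OPC Data Access; the names are invented."""
    @abstractmethod
    def read(self, item: str) -> int: ...

class ModbusServer(DataAccess):
    """One server behind the API: the protocol underneath is Modbus."""
    def __init__(self, link):
        self.link = link                    # placeholder object that talks Modbus

    def read(self, item: str) -> int:
        register = int(item) - 40001        # e.g. "40001" -> holding register 0
        return self.link.read_holding_register(register)

class MMSServer(DataAccess):
    """Another server, same API, completely different protocol underneath."""
    def __init__(self, connection):
        self.connection = connection        # placeholder object that talks MMS

    def read(self, item: str) -> int:
        return self.connection.get_named_variable(item)

def show_value(server: DataAccess, item: str) -> None:
    # The application only knows the API; which protocol carried the request
    # is entirely the server's business.
    print(item, "=", server.read(item))
```

The real OPC interfaces are COM, not Python classes, but the division of labour is the same: the protocol lives inside the server, and the application never sees it.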

> In reality users want a common protocol to go between the desktop
> and the field device,

I see no evidence that this is what users really want. It is what you want. But, in general, users don't care about the protocols. If they wanted a common protocol, they would buy a common protocol. Most users fill their need for interoperation by standardizing on a vendor, not a protocol. I'm not saying that I think this is optimal. I am saying that this is reality. As reasonably independent standards become available some users are selecting these standards instead of vendors (e.g. Profibus, FF, DeviceNet, etc.). But any one of these standards is not going to solve every possible type of IA application that is out there. The choice then becomes multiple incompatible standards vs. a single vendor who also uses multiple incompatible protocols but usually (not always) offers a product to interconnect them.

I too would like to see a common protocol. I think it would lower the life-cycle cost for automation substantially and thereby bring
numerous benefits to the manufacturing industry. However, a single common protocol must address a wide variety of different kinds of applications. A protocol capable of doing that will, by necessity, be complex. And, it will involve tradeoffs for any given niche application. Right now users and vendors both despise the complexity
and tradeoffs more than they despise the costs of incompatible systems. "Good enough" is the mantra today.

> What people expect of the OPC, and what many people actually think
> they are getting, is a standardised comms protocol.

I think the vast majority of people understand exactly what OPC is: an API that allows their HMIs to plug into IA comm drivers in a way that offers better performance and easier configuration versus DDE. Every OPC customer we have understands this because they have actually bought our product.

Regards,
Ralph Mackiewicz
SISCO, Inc.

Ralph Mackiewicz
SISCO, Inc.
6605 19-1/2 Mile Road
Sterling Heights, MI 48314-1408 USA
T: +810-254-0020 F: +810-254-0053
mailto:ralph@sisconet.com http://www.sisconet.com

By Alex Pavloff on 17 August, 2000 - 2:26 pm

> I am getting confused over some of these discussions regarding
> Open Vs. Proprietary.

Don't worry about it. I think most people are nowadays, especially because everyone likes to bandy around their own definition.

When one hears "open" now, as related to software, one is thinking "open source". This is where the software is licensed under the GNU General Public License (GPL) or another similar license, where the code for the software is freely available to everyone. The licenses usually prevent someone from taking your code, modifying it, and redistributing it without making your source code changes available.

While everyone will pretty much agree that open, published interfaces (your examples of Modbus and TCP/IP are those) are a Good Thing, people differ in opinion about the implementation of the interface.

And that's the point many people disagree on: whether the implementation should be open. I would say that if reliability is your goal, then yes. BSD Unix is open source, and widely regarded as the most stable and secure Unix around, although with a lot less fanfare than Linux. The BSD Unix TCP/IP stack served as the basis for Microsoft's implementation of TCP/IP.

And quite frankly, from a reliability standpoint, getting a common, open, widely used implementation of an interface means that you, as an engineer, can spend less time writing the standard stuff that everyone else writes (because some of the protocols are complicated!), and more time doing the things that make your product better than the next guy's. Reinventing the wheel is a bad thing, and "open interface, closed implementation" systems lead to much reinventing and the associated problems.

For example: I've been writing more than a few communication drivers lately, for various PLCs and motion controllers. With the singular exception of the Aromat FP protocol (kudos to those guys for writing a GOOD communication spec), the documentation on the protocols was very, very poor. If I had some sort of open implementation around, I could either drop that into my code, or just eyeball it to figure out all the gotchas. When we're talking about communication over standard wiring and protocols (TCP/IP or other), which is something that's going to become extremely important over the next N years, much more so than it is currently, I think that we'll all be much happier as engineers when we have a solid, standard, OPEN implementation of these protocols.

By Michel A. Levesque, ing. on 18 August, 2000 - 2:29 pm

This thread is getting more and more interesting. But maybe we should all reflect back on how OPC came to be:

Everyone remembers that during the DOS and Win3.x era MMI manufacturers had to have their own set of drivers for each and every automation product that they wanted to talk to. InTouch led the way with 300+ drivers and won the lion's share of the MMI market.

Some other PLC manufacturers provided their own drivers to talk to their own equipment via DDE (in Windows only). DDE was not really meant for data acquisition; it was too slow and unreliable for hard-core data acquisition.

So along comes OPC which promised to do away with the dreaded driver wars and standardize the way MMI's talked to PLC's and other automation equipment.

From comments on this list, some people want to use OPC to integrate MMI peer-to-peer communications. It seems to me that this is not what OPC was intended for.

Why are we trying to fit a size twelve foot into a size eight shoe? IMHO, we should use OPC to get the field data into the MMI packages. Then we can use anything else that fits better to get the MMI data out to other MMI packages, or SCADA, MES, ERP, etc.

SOAP, XML and the like are for computer program to computer program communications. I hope nobody is seriously going to use this to acquire data from field devices. If so, then we are going to see a lot of people with shot off feet.

So what is left to use to get field data into a computer program that runs on Windows? From where I stand: OPC. (Remember, at present we are all locked into Windows because all big-name MMIs run on this platform. We are even seeing most DCS vendors jumping onto the Windows wagon.)

Michel A. Levesque eng., mcp
Directeur Bureau Montreal
AIA Inc.
mlevesque@aia.qc.ca

By Michael Griffin on 24 August, 2000 - 12:37 pm

<clip>
>SOAP, XML and the like are for computer program to computer
>program communications. I hope nobody is seriously going to
>use this to acquire data from field devices. If so, then we
>are going to see a lot of people with shot off feet.
<clip>

From what admittedly little I know about XML, I can think of a few very good applications which could use it today. These would involve
integration of production equipment into larger overall systems, not addressing of field devices.

For example, you can now buy "web server" cards for PLCs. The idea seems to be that you can install these cards in your PLC racks and create a
very simple system for monitoring the current status of your equipment using an ordinary web browser. The web page can access registers inside the PLC CPU representing whatever it is you want.

This sounds good if you have a dozen or two machines. You can look at each machine every morning and get the current cycle time, production
counts, etc. Now suppose you have not a dozen machines, but rather 200 machines. If you spent 3 minutes per machine to examine a dozen machines,
this would take slightly more than half an hour. The same amount of time spent on each of 200 machines would take more than 10 hours. This idea is obviously no longer practical on this scale.

However, suppose you had a special software program which could go out and examine each web page for you and tell you which ones need
attention. If the PLC "web card" were also an "XML card", this would be possible. Each PLC register you are interested in would have an XML "tag" associated with it by the PLC programmer.

You could of course accomplish the same thing by defining machine register addresses for each value, but this is a lot of work and potentially quite error prone (although this is exactly what I am working on at the moment). However, if you could simply ask the machine for "Cycle Time", regardless of the type of controller used, then this becomes much easier.
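As a concrete sketch of that idea (the addresses, page name, tag name and threshold below are all invented; a real "XML card" would define its own), a small script could walk the machine list, pull the status page from each, and flag the ones that need attention:

```python
import urllib.request
import xml.etree.ElementTree as ET

# Hypothetical addresses of the PLC "XML cards" and a made-up cycle time limit
MACHINES = ["http://10.0.0.%d/status.xml" % n for n in range(11, 31)]
CYCLE_TIME_LIMIT = 12.5   # seconds

def read_cycle_time(url):
    with urllib.request.urlopen(url, timeout=5) as response:
        root = ET.fromstring(response.read())
    # Assumes the card publishes something like <Machine><CycleTime>11.8</CycleTime></Machine>
    return float(root.findtext("CycleTime"))

for url in MACHINES:
    try:
        cycle_time = read_cycle_time(url)
    except Exception as error:
        print(url, "unreachable:", error)
        continue
    status = "NEEDS ATTENTION" if cycle_time > CYCLE_TIME_LIMIT else "ok"
    print(url, "cycle time", cycle_time, status)
```

The whole morning walk-around then collapses into reading one short report, which is the point being made above about 200 machines versus a dozen.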


**********************
Michael Griffin
London, Ont. Canada
mgriffin@odyssey.on.ca
**********************

By Anthony Kerstens on 21 August, 2000 - 9:26 am

I find it interesting that the most verbose thread is about the platform with the most verbose languages and complicated setups.

Hail the PLC.
Hail DOS too.

Anthony Kerstens P.Eng.