Communication protocols


Larry Lawver

In reply to Curt's post, which appears at the bottom of this one:

Use whatever is best for the task at hand. In a lot of cases, that will be a proprietary system. That is my simple message.

Curt's disagreements with me seem to resolve into three categories:
the failure of proprietary systems to be open, the unimportance of cost in the open vs. proprietary discussion, and the superiority of
commodity products over carefully engineered proprietary components.

In the first set of issues, I'll simply concede. We all agree enough on the definitions to agree that a proprietary solution will not be open. If your project criteria include "open" by any definition, then don't use proprietary stuff.

On the cost issue, on the other hand, I'll dig in and reject disagreement.

Cost is brought up every time open systems are discussed, and is central to the advertising of the vendors in the field. Curt brings it up in his discussion, after briefly brushing it off. Proprietary systems involve more PURCHASED components than open systems, and they are probably always more expensive when measured that way. If it wasn't about cost, first and foremost,
there wouldn't even be a discussion here.

A hypothetical: Suppose that I had a proprietary black box for you that met all of the requirements for your next project. What would
you pay for it? If your fixed price contract was for US$1M, and I wanted US$10K, you'd probably buy it. Reductio ad absurdum, of course.

In the practical example I gave in my post, I described a client (an OEM) that prefers a US$7000 open system bill of material to buying a proprietary system bill of material from me for US$18000. (I am currently a distributor, formerly an integrator, for those that didn't already know that.) If they get a hard customer specification for my stuff, they use it and have a trouble-free start-up. The open
system takes weeks of set-up in the field, and provides me with lots of anecdotes about using Ethernet for real-time I/O.

In my opinion, this open system is clearly inferior to my proprietary system. I suspect that a few folks from this list could improve the track record of this particular open system, but that isn't the point. The point is that the solid reliability of proprietary systems, properly applied, gets compared unfavorably with less expensive open systems that don't work as well. In the case of my client, they claim that their favored system is better because it is open. Strangely, though, they say that if I brought my price down to their number, they would buy it.

(That example is only one of many I have from personal experience. I do not mean to generalize that to all open systems, though. I have personally implemented successful open systems.
I'm defending proprietary, not dismissing open.)

Finally, on leveraging the stuff that has become commodity rather than relying upon proprietary catalog items: Who determines, ten years later, that a certain commodity item is form, fit, and function compatible with the original? Eight-tracks and Betamax are the usual targets to bring up at this point, but for this audience I will
mention pre-IDE disk drives and 256K memory sticks that we were all using ten years ago. Without a brand-name manufacturer and a
believable commitment to spare parts, you can't rely on the market still offering the spare you need ten years from now.

Ten years from now, when a rollercoaster, launch pad, or juice plant in my territory is down due to a component failure, I know that the owner will be able to get a spare part quickly, probably out of local inventory. Can anyone guarantee that a generic ISA Ethernet card will still be available at any price? Curt --- that US$400 price SHOULD include that long-term proprietary support I value so much!

This kind of long-term stability is very important to owners who will keep their systems running indefinitely. It is not marketing hype --- it is the result of a long, consistent track record. It is a reputation that is easily lost, as Westinghouse found out fifteen years ago. Thus, perception is more important than Curt allows.

Generic components are a particular problem in tightly configuration-managed systems.

When you ride a rollercoaster, would you be satisfied with your safety if you thought that maintenance workers had discretion to
substitute generic components, including software downloaded from the Web? Or, would you rather that the system is only allowed to carry guests if it can be proven that the system is identical to what passed the Acceptance Test Procedure at commissioning time? Doing that requires catalog numbered, proprietary components.

Notice that I am not denying the advantages Curt mentions. Many of you work on projects where the things I bring up are not important. I trust all of you to choose the best solutions for your
clients, and many of those solutions will be proprietary.

Hope this helps!

Larry Lawver
Rexel / Central Florida
 
> If Sun, for example, had the market sewed up, would it then
> be ok if they controlled all the communications?
>

Actually SUN did have the market sewed up with SUN RPC calls, which were the predecessor of DCOM and CORBA technologies. Although telecomms is now going over to CORBA, 90% of the world's telecommunications networks are still managed by RPCs, which have been in use for over 15 years.

But SUN never tried to monopolise this technology, they made it freely available and most POSIX systems include an implementation of RPC derived from original SUN source code. There is even a port to NT, complete with an rpcgen that works with Visual C++.
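For anyone who has never seen one, a minimal ONC RPC client in C looks something like this sketch. The host name and the program, version and procedure numbers are hypothetical, invented for illustration; the calls themselves (clnt_create, clnt_call) are the standard API shipped with most POSIX systems.

    #include <stdio.h>
    #include <rpc/rpc.h>   /* ONC RPC client API, standard on most POSIX systems */

    /* Hypothetical program/version/procedure numbers, for illustration only. */
    #define METER_PROG 0x20000099
    #define METER_VERS 1
    #define METER_READ 1

    int main(void)
    {
        CLIENT *clnt;
        long tag = 42;                 /* hypothetical register tag to read */
        double value;
        struct timeval tv = { 5, 0 };  /* give the call five seconds */

        /* One call builds the TCP connection and binds to the remote program. */
        clnt = clnt_create("plc-gateway", METER_PROG, METER_VERS, "tcp");
        if (clnt == NULL) {
            clnt_pcreateerror("plc-gateway");
            return 1;
        }
        /* XDR handles byte order and representation on the wire. */
        if (clnt_call(clnt, METER_READ,
                      (xdrproc_t)xdr_long, (caddr_t)&tag,
                      (xdrproc_t)xdr_double, (caddr_t)&value, tv) != RPC_SUCCESS) {
            clnt_perror(clnt, "meter read");
            clnt_destroy(clnt);
            return 1;
        }
        printf("tag %ld = %g\n", tag, value);
        clnt_destroy(clnt);
        return 0;
    }

Normally rpcgen generates the stubs for you from a small interface definition; the point is how little wire-level code you ever write yourself.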
 
Yes. I am serious. Open technology is the technology that is most widely used. I would add that you would not pick a widely used technology if the technology is at the end of its lifecycle. I see it as open because it opens up my options.

The classic definition of "open" is like communism. It sounds good on a mailing list discussion, but it doesn't work. Capitalism does
not foster "open" technologies, because the market leader gains nothing by creating an even playing field. TCP/IP is a defacto standard that came about from a research project. Because it was
widely used it never got replaced by better technology. TCP/IP is "good enough" and still is just that.
 
R
> "You can use whatever system you want, as
> long as it has COM, OPC, Runs an Access database, and has
> feature A, B, and C" because that is what the sales people that
> they talked to promoted as their biggest features.

Of course people promoting their products on the basis of COM/OPC are going to feel a bit silly now ;-)

> The thought process here is
> "If it were really possible to make a system that was that stable,
> Microsoft would have done so. They are, after all, the largest
> software company, they have the resources to do it right, so that
> must be as good as it can be.

Microsoft are very good at their core business, which is making software with which it is quick to learn to do something, like one of those electronic pianos that you play by following numbers and colours. Their desktop technology is good, but they keep changing the standard, much faster than the lifetime of IA systems. The OPC claim that MS keeps their distance (sure, they let any Tom, Dick or Harry hold their AGM on their Redmond campus...), but the truth is they are just not that interested in IA. If people are stupid enough to use their OA solutions in IA systems, that is fine by them, but they are not going to extensively modify their OA solutions to IA requirements, and it would be a mistake to do so. They did promise specific IA solutions, but these have not been forthcoming because the embedded division is up to their necks in the consumer device market, where they are suffering real competition from the likes of EPOC and Palm.

Of course Unix is little different. Unix systems have always maintained the edge on reliability and scalability, and still do. But as system specs go steadily upward we have reached the point where MS systems, despite being inferior (and before you flame me, remember that SUN will sell you systems capable of handling a million-plus users off the shelf), are more than capable of handling the limited capacity of IA requirements.

Technically, the best solutions for IA are systems like WRS or QNX, but the downside of such niche systems is that it can be very difficult to get drivers and general-purpose apps where needed.

Perhaps the reason Linux is gaining so much ground in IA is not because it is automatically adept, but because it is so adaptable. People do embedded Linux with 2M flash-based systems. Other people do very large databases, or run it in conjunction with hard real-time schedulers. Also, people are using it on non-PC hardware; the fan-cooled CPU modules that are now standard in the PC industry are just out of line with many embedded requirements, but linuxers have a wide range of hardware to choose from, and we are starting to see IA products shifting to RISC-based solutions such as the ARM.

> If we went to Unix, we would have
> the same problems, but we would either need an expensive support
> person, or it would take twice as long to get back up and running
> due to unfamiliarity".

Very few people know how to handle or program NT correctly. I know I don't, but I also know enough to realise the ignorance of the 'experts' I resort to for help. I do not think the arrival of W2K is going to help that situation, given that it is all new and more complex under the hood.

One thing I do know is that one only needs to know 2 OS's in this world, MS and non-MS, because everybody else works towards a common style for APIs and command shells, whilst MS invariably do the opposite. Telnetting into and maintaining a small embedded QNX box is essentially similar (from a sysadmin's point of view) to telnetting into the mega-galactic UltraSPARCs that run Hotmail or Amazon. Even BeOS lets you telnet into a familiar bash environment.

Given that MS do not provide an OS for IA applications, and thus there are inevitably situations where MS just does not cut it, one could say that learning non-MS systems (learn one, use all) is a much better option for IA personnel than learning Windows but not being able to understand all systems.

Of course I know the suits will not buy this, and I know people who have never used non-MS systems (which is most) will not accept it, but the world is full of rules and conventions that make things worse rather than better, and one simply cannot re-educate those millions of people who think MS invented the internet and Bill Gates wrote DOS (BTW, anybody know what happened to Tim Paterson?).
 
> From: "Jansen, Joe" <[email protected]>
> I would ask why you think it is that this has already happened?
> TCP/IP is an OPEN standard. Nobody owns TCP/IP. I can write a
> TCP/IP driver without paying a royalty. Why, I can even find a spec
> that tells me what TCP/IP is supposed to do! Here is the
> distinction: It is widely used because it is open. It is not open
> simply because it is widely used.

Joe, don't be too hard on him; it is a widespread belief among computer users that MS invented the internet, ethernet, TCP/IP, DOS, GUIs, BASIC, C++ and just about everything else out there.

That's what MS pay all those spin doctors to do.

Interesting, though, that just about all the success stories in the computer industry have been born out of open standards/source rather than proprietary solutions. In fact Microsoft itself was born when Bill Gates persuaded his classmate Paul Allen to port the openly available BASIC source on the VAX to the Altair computer.

By contrast, it is only a few years ago that Microsoft attempted to implement its own proprietary worldwide network, but was beaten out of the market by the open internet standards.
 

Matthew da Silva

What is it about Texas that makes it so enterprising? On the Net, I've met many people who live in Texas, and who are unusually progressive and original. This should not cast aspersions on the capabilities or populations of other states, but it seems that the South is leading in very many ways. Must be all that hot food and wide-open spaces; and being close enough to Mexico an' all.

Cheers.
 
> The classic definition of "open" is like communism. It sounds good
> on a mailing list discussion, but it doesn't work. Capitalism does
> not foster "open" technologies, because the market leader gains
> nothing by creating an even playing field. TCP/IP is a defacto
> standard that came about from a research project. Because it was
> widely used it never got replaced by better technology. TCP/IP is
> "good enough" and still is just that.

I'm sorry, but TCP/IP is a perfect example of the classic case of an open standard. The standards are published openly, and anybody and everybody can, and does, contribute openly to them. They are neither de-facto nor proprietary.

TCP/IP definitions can describe protocols that cannot be freely used for reasons of patents, license requirements, or maybe even lack of a 'key', and in fact there are quite a lot of RFCs that define protocols that cannot be freely adopted. Yet the protocols that we actually all use (FTP, SMTP, POP3, HTTP etc.) are all completely free.
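That freedom is easy to demonstrate: armed with nothing but the published RFC and a socket, anybody can speak these protocols directly. Here is a minimal sketch in C that fetches a page the way RFC 1945 spells it out (the host name is just a placeholder):

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netdb.h>

    int main(void)
    {
        const char *req = "GET / HTTP/1.0\r\nHost: www.example.com\r\n\r\n";
        struct hostent *he = gethostbyname("www.example.com"); /* placeholder */
        struct sockaddr_in sa;
        char buf[4096];
        int s, n;

        if (he == NULL) return 1;
        s = socket(AF_INET, SOCK_STREAM, 0);
        memset(&sa, 0, sizeof sa);
        sa.sin_family = AF_INET;
        sa.sin_port = htons(80);                   /* well-known HTTP port */
        memcpy(&sa.sin_addr, he->h_addr_list[0], he->h_length);
        if (connect(s, (struct sockaddr *)&sa, sizeof sa) < 0) return 1;

        /* The request is plain readable text, exactly as the RFC publishes it. */
        write(s, req, strlen(req));
        while ((n = read(s, buf, sizeof buf)) > 0)  /* dump the reply */
            fwrite(buf, 1, n, stdout);
        close(s);
        return 0;
    }

No licence, no vendor library, no royalties; the published spec is the whole story.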

Other examples of 'open' standards (i.e. standards that anybody may use and that have been established by people freely contributing to a not-for-profit organisation) include C/C++, Ethernet and RS232/485.

Note that many standards are handled by institutional groups such as the IEEE and ANSI, as would happen in any other industry; indeed the computer industry is unique in its wide-scale adoption of proprietary and de-facto standards. In the past this has been in part because technology was moving faster than standards bodies could cope. This is no longer the case, and although wide-scale use of the internet has led to a whole new load of requirements, the internet itself allows standards to be hammered out and agreed very quickly. Nowadays the impetus to push proprietary standards is to gain license share.

In many cases proprietary standards are turned into open standards in order that they may be improved; for example, the IEEE defines far more functionality for the PC parallel port than the original proprietary Centronics interface did.

Please forget this communism business. Like I said, in every other industry open standards are the norm. Proprietary standards are about monopolism, not capitalism. Capitalism relies on everybody being able to compete on equal terms in an open marketplace; that's why all capitalist societies have anti-monopoly commissions.

But let's get to the bottom line on the business side. A major advocate of open standards and open software is IBM. They will sell you support contracts for sendmail, they will install Linux on a 390 mainframe (and at $100,000 per CPU that has to be the most expensive Linux distro ever), and their e-commerce solutions packages are based on the open source Apache server. Their byline on open source is "it's about the service, stupid". Of course IBM also happen to be the largest computer company in the world ($90B sales; MS has $20B). Now go ahead, tell me IBM are communists, tell me they do not understand the market.

Before you make such ridiculous statements, you should get yourself more informed about the history of computing and the origin of what you are using. Everybody makes mistakes, gets numbers wrong, etc., but your comments demonstrate a wholesale lack of knowledge of the items you are citing as examples. When I read your original post I thought you had cited TCP/IP by mistake!

All success stories in computer communications have been born out of open standards. The most borderline exceptions are Netware and SMB (aka Windows file and printer sharing).

Netware is proprietary, but is based on IPX, which is a simplification of IP; it was done when PCs had not yet reached the necessary power to support full-blown TCP/IP.

SMB (your Windows network neighborhood) is based on IBM LanManager. IBM published the protocol, and anybody can implement it; there are no patent issues etc. However, MS have extended the protocol in their implementations. In order to interface to Windows computers, Unix programmers developed an open source implementation called Samba. Being open source, anybody can add features to Samba, and in fact there are a few things you can do with Samba which you cannot do under Windows. Because SMB implements user-to-user connections (as opposed to system-to-system connections, which is the case with Unix's default NFS file sharing protocol) it is often used for networks which involve no Windows machines at all. Quite how we define SMB is therefore unclear.

But let's wrap up by getting back to IA communications. OPC is based on DCOM, which is an open standard. However, much of DCOM's implementation is based around an underlying WIN32 API. It can be, and has been, implemented on non-Windows platforms, but it does not make much sense: you must have Windows to be interoperable. So most of us consider OPC to be based on proprietary technology. But that is NOT the principal reason why I am against it. My prime motive is that DCOM is the architectural inverse of what is required in IA. It was designed for OA, where there are a few big managed data servers and a lot of dumb clients, not IA, where we have a lot of dumb unmanaged servers (field devices) and a few (relatively more managed) clients. DCOM has useful application in IA networks, but not for interfacing towards the field, which is what they tout it as doing. It can work great for a few test cases, and for large specific-function plants, but handling a typical factory it will quickly become a nightmare. The same architectural arguments may also be applied against the use of CORBA.

The number 2 reason I do not like DCOM is that MS have already announced that it is dead technology; they are dropping it in favour of SOAP, which is even less IA-suitable.

That DCOM is (to all practical effect) proprietary ranks only number 3. If a really good and universal alternative existed I would adopt it even if it were proprietary, but OPC are not even in the ballpark.

It would be nice if this thread concentrated less on commercial politics and more on technical
issues, such as what protocol could we use?

Profibus FMS on Ethernet is not far off the mark, but not on it either. They seem to want to take the industrial network into the computer center rather than allow the corporate TCP/IP network to reach out into the factory, which I feel is the philosophy we must look to. Has anybody experience of this?
 
> Use whatever is best for the task at hand. In a lot of cases, that
> will be a proprietary system. That is my simple message.
>
> Curt's disagreements with me seem to resolve into three
> categories:
> the failure of proprietary systems to be open, the unimportance of
> cost in the open vs. proprietary discussion, and the superiority of
> commodity products over carefully engineered proprietary components.
>
> In the first set of issues, I'll simply concede. We all agree
> enough on the definitions to agree that a proprietary solution will
> not be open. If your project criteria include "open" by any
> definition, then don't use proprietary stuff.
>
> On the cost issue, on the other hand, I'll dig in and reject
> disagreement.
>
> Cost is brought up every time open systems are discussed, and is
> central to the advertising of the vendors in the field. Curt brings
> it up in his discussion, after briefly brushing it off. Proprietary
> systems involve more PURCHASED components than open systems, and
> they are probably always more expensive when measured that way. If
> it wasn't about cost, first and foremost, there wouldn't even be a
> discussion here.
>
> A hypothetical: Suppose that I had a proprietary black box for you
> that met all of the requirements for your next project. What would
> you pay for it? If your fixed price contract was for US$1M, and I
> wanted US$10K, you'd probably buy it. Reductio ad absurdum, of
> course.
>
> In the practical example I gave in my post, I described a client (an
> OEM) that prefers a US$7000 open system bill of material to buying a
> proprietary system bill of material from me (I am currently a
> distributor, formerly an integrator, for those that didn't already
> know that.) for US$18000. If they get a hard customer specification
> for my stuff, they use it and have a trouble-free start-up.

When we do this we get DOA equipment, bad documentation and lots of headaches, especially with regard to communications. If you can guarantee a trouble-free start-up, by all means, send me a line card. I'm serious.

> The open
> system takes weeks of set-up in the field, and provides me with lots
> of anecdotes about using Ethernet for real-time I/O.

Modbus doesn't work too well if you apply it incorrectly either. And if you need realtime IO you should use realtime Ethernet from Lineo. It's free, I believe. Any fieldbus, misapplied or overloaded, will slow down. The proprietary implementations I've seen, from GEF for example, are neither deterministic nor fast. As "foreign" protocols I suspect they are deliberately hobbled so that the "native" protos are always better. This wouldn't happen in an Open System implementation. I'll be happy to compare 100 Mbit/sec switched Ethernet with any of the other common transports for determinism and throughput. And you should hear my stories about a Profibus setup that can't follow a 10 Hz square wave.

> In my opinion, this open system is clearly inferior to my
> proprietary system. I suspect that a few folks from this list could
> improve the track record of this particular open system, but that
> isn't the point. The point is that the solid reliability of
> proprietary systems, properly applied, gets compared unfavorably
> with less expensive open systems that don't work as well. In the
> case of my client, they claim that their favored system is better
> because it is open. Strangely, if I brought my price down to their
> number, though, they say they would buy it.

On this point, I suspect the problems are due, at least in part, to the fact that we define open systems quite differently. Someone using Visual Basic to control some I/O does not constitute an Open System in my book. For that matter, why would an open system have to be any different than a closed one? If you took a closed one and published the source and schematics, would it suddenly stop working? Even with commodity-class hardware, you have to have decent software to have reliability and predictable results.

> (That example is only one of many I have from personal
> experience. I do not mean to generalize that to all open systems,
> though. I have personally implemented successful open systems. I'm
> defending proprietary, not dismissing open.)
>
> Finally, on leveraging the stuff that has become commodity rather
> than relying upon proprietary catalog items: Who determines, ten
> years later, that a certain commodity item is form, fit, and
> function compatible with the original? Eight-tracks and Betamax are
> the usual targets to bring up at this point, but for this audience I
> will mention pre-IDE disk drives and 256K memory sticks that we were
> all using ten years ago. Without a brand-name manufacturer and a
> believable commitment to spare parts, you can't rely on the market
> still offering the spare you need ten years from now.

With commoditization and standardization you don't need to maintain spares. I can drop my application on a whole new PC economically as long as there's nothing special about the hardware. I can buy a new PC to run it on for the cost of one of those pre-IDE hard drives. Why would I want an old MFM drive (at full price or more) that's been sitting for years? And I'm very confident that Linux will run on the PCs we have ten years from now, only much faster. And I'm betting that good ol' Ethernet is still around. And if I was worried, I can recompile the version I'm using on the new hardware, because I own the source.

> Ten years from now, when a rollercoaster, launch pad, or juice plant
> in my territory is down due to a component failure, I know that the
> owner will be able to get a spare part quickly, probably out of
> local inventory. Can anyone guarentee that a generic ISA Ethernet
> card will still be available at any price? Curt--- that US$400
> price SHOULD include that longterm proprietary support I value so
> much!

See above

> This kind of longterm stability is very important to owners that
> will keep their systems running indefinitely. It is not marketing
> hype--- it is the result of a long, consistent track record. It is
> a reputation that is easily lost, as Westinghouse found out fifteen
> years ago. Thus, perception is more important than Curt allows.

Company XYZ may be out of business in ten years. If you have proprietary gear, you are SOL. If you have an Open System, any competent programmer can keep your system working indefinitely on contemporary hardware if necessary.

> Generic is a particular problem in tightly configuration managed
> systems.
>
> When you ride a rollercoaster, would you be satisfied with your
> safety if you thought that maintenance workers had discretion to
> substitute generic components, including software downloaded from
> the Web? Or, would you rather that the system is only allowed to
> carry guests if it can be proven that the system is identical to
> what passed the Acceptance Test Procedure at commissioning time?
> Doing that requires catalog numbered, proprietary components.

I see no difference from PLCs if people are allowed to play with the code. You would use certifiable hardware in this case, and I believe there is a certifiable version of Linux that has met FAA requirements. Like I said before, simply because it's open doesn't imply it's different or of lesser quality. Almost all of those precautions have nothing to do with being proprietary. Many of the commodity producers are ISO9XXX compliant and would meet the lot traceability requirements. This wouldn't be a run-of-the-mill application for proprietary hardware either.

> Notice that I am not denying the advantages Curt mentions. Many of
> you work on projects where the things I bring up are not important.
> I trust all of you to choose the best solutions for your clients,
> and many of those solutions will be proprietary.

For my part, I merely want to counter the FUD and misperception that Open Source software and commodity hardware can't be as good as or better than their proprietary counterparts. The amount of hardware of all types that goes to obsolescence without ever having failed bears this out. And good software is good regardless of the license. Everything that is now a commodity was once proprietary and specialized.

Regards

cww
 

Hullsiek, William

I am getting confused over some of these discussions regarding Open Vs. Proprietary.

Back in the 1980's and early 1990's, we defined an "Open" system as one with well-defined INTERFACES that adhere to published standards (a standard being either de facto, like Modbus, or de jure, like TCP/IP or ISO). How you implement the interface is always "proprietary", but once it leaves the black box and is placed on the wire, it should always be "open" and interoperable.

An example is that a phone from Vendor A communicates with a phone from Vendor B. The internal electronics are different, but the interface to the phone network adheres to standards. You can buy the phone from one vendor, the interface cord from your local hardware store, and then phone service from either the cable company or the "former baby bell".

A "black-box" is okay, but when it requires me to buy the network cable, power cables, serial cables, plus replacement parts from the same vendor, then I have a concern. This adds to the "Total Life Cycle" cost and stifles competition.

In an Open System, you can readily replace vendor A with vendor B. I can share horror stories of clients who were LOCKED IN to proprietary systems, because they had implemented proprietary infrastructures.

In the better implemented systems, oftentimes vendor A uses an "open backbone" to communicate with its own components. This allows you to "optimize" for performance and throughput. But vendor A can share objects with other components using "open interfaces".

William F. Hullsiek
MES Software Engineer
 
> TCP/IP is a defacto standard that came about from a research project. Because it was
> widely used it never got replaced by better technology. TCP/IP is "good enough" and still is just that.

I agree. Back in the Multibus I/iRMX era, my group tried to standardize on Intel's OpenNET network, which was a full seven-layer OSI implementation that purported to be "truly open". Unfortunately almost no one supported it, and users hated having to have two NICs, two network connections, re-boots, etc. Before long we switched to TCP/IP because it was available and "good enough". Now, 10-15 years later, it seems that everyone else has come to the same point. Claims of being "more open" didn't offset a lack of support and critical mass.

Bob Nickels
Honeywell S&C
 
> Yes. I am serious. Open technology is the technology that is most
> widely used. I would add that you would not pick a widely used
> technology if the technology is at the end of its lifecycle. I see it
> as open because it opens up my options.
>
> The classic definition of "open" is like communism. It sounds good
> on a mailing list discussion, but it doesn't work.

Tell that to all of the Linux developers. Seems to be working so far....

> Capitalism does
> not foster "open" technologies, because the market leader gains
> nothing by creating an even playing field. TCP/IP is a defacto
> standard that came about from a research project. Because it was
> widely used it never got replaced by better technology. TCP/IP is
> "good enough" and still is just that.

I would again suggest that TCP/IP is widely used because it is an open standard. The reason it has not been replaced is not because nobody has gotten around to it, but because it is
ubiquitous due to its public nature, and would be extremely difficult to replace. (Unless you are pushing a dot.net marketecture :^} )

My other questions still stand. What is the magic number of users for something to change from closed to open?

OPC will end up as a fad. (Watch out! I am making predictions. Usually a bad thing!) I suspect that it will go the way of DDE in a matter of 3 years or less.

--Joe Jansen
 

Anthony Kerstens

Similar here. Several Ontario, Canada universities have created Software Engineering programs. They're set to graduate their first batch of students in the next couple of years, and to have the programs' professional accreditation granted, hopefully, before they graduate.

As for Texas and other southern states being enterprising, it might have to do with all the Canadian talent moving south of the border!!
:)

Anthony Kerstens P.Eng.
 
I am well informed. I just disagree with you. Your argument is that we go back to developing text-based TCP protocols instead of using a distributed object technology like OPC. Your bias against anything associated with MS shows that your reasoning has been clouded by emotion. I had 17 years of developing systems in UNIX. I developed all kinds of communications systems, including text-based application-layer protocols like yours. The day I fired up VB and made a seamless distributed object connection to a PLC I was sold. Unlike systems I had used in the past, like RPC, I didn't have to jump through hoops to build the interface. My VB code doesn't know whether it is connecting to an in-process object or one across the network.

I used to argue for open systems and the superiority of my UNIX-based solutions, but I stopped when I saw what could be done with NT, COM and OPC. There is no comparison between OPC and your dated recommended way of doing it.

Obviously, you and I won't sway each other.

I suggest that the readers of the list, especially people using UNIX, investigate NT, COM and OPC technology and give it a fair shake. I believe that you will find, like the rest of the leading companies in this field of work, that this is the future. In any case, you need to make the decision based on test driving the technology and not on arguments on this list.
 
My work connecting PCs to PLCs actually started out on Windows/VB/VC++/NT. I have had enough headaches and support issues that I am actually moving the other direction. As you point out, though, we could throw anecdotal stories at each other all day, and at the end we would both just figure the other didn't know what they were talking about :^}

I will summarize what I stated in an off-list discussion. This was in regard to my "OPC is a fad" prediction. Here are my basic fears with using the Windows solutions:

1. Microsoft controls OPC. This means that it is re-definable.
Obviously they cannot completely rewrite the spec, due to market forces. But the fact remains that this is entirely at their discretion.

2. Microsoft's main revenue stream is product upgrades. They make their money because corporate customers migrate from 3.1 to 95 to 98 to NT Workstation to Windows 2000. That's 4 products in 5 years. I have not yet used Win2000, but I know that many apps that are for 95 do not work on NT, and vice versa, and most 3.1 apps fail miserably in the Win32 API. This is by design, for 2 reasons. A, improvements in technology mean that things are made differently to support new devices, etc. B, it encourages upgrades. If the latest version of package X works on W2K, but not on NT, you need to upgrade to use it. If package X is needed to communicate to a machine, and you have several copies, eventually you will need to upgrade the rest for consistency.

3. Microsoft uses upgrades competitively. I installed OS/2 2.0 as soon as it came out. The Windows system rev'd several times, and the biggest difference is that it broke OS/2's ability to run Windows apps. ("The system isn't done until Lotus won't run!") If Linux/Unix makes a big move into the OPC arena, what will stop MS from doing an 'embrace and extend', thus making the old stuff incompatible?

4. MS is an office software company. They have a different mindset than the automation market does. It is like they get it on the surface, and they can comprehend what we are saying, but at a gut level it just doesn't quite click. They still tell us that "this is the greatest thing in the world. It is much better than the last version! Everyone will be doing this in the next few months!" That isn't what I want. My brand-new-out-of-the-box SLC5/05 will still network to an 8-year-old SLC fixed I/O brick, and communicate natively. And of course there is Modbus, which anyone can use, is not controlled by anyone, and has been around for decades.

My bottom line point is that OPC as we know it today will not be what is promoted in 3 to 5 years. It will most likely be incompatible, and we will have either abandoned it, or will be caught in the upgrade cycle that corporate software is stuck on today.

If OPC and Microsoft is what is working best for you, then hey, knock yourself out. I just do not have that level of trust in them to not leave me hanging in the wind some day, and would prefer to write my own stuff on a platform that I know won't arbitrarily pull me into an endless upgrade cycle. It is a matter of preference. I agree that everyone should decide based on experience and investigation. I would much rather see that than blindly following marketing material that only shows one side of the issue.

--Joe Jansen
 
> The day I fired up VB and made a seamless distributed object connection
> to a PLC I was sold. Unlike systems I had used in the past like RPC I
> didn't have to jump through hoops to build the interface. My VB code
> doesn't know whether it is connecting to an in-process object or one
> across the network.

Component programming with COM or CORBA is slick. I love it, but....

> I use to argue for open systems and the superiority of my UNIX-based
> solutions, but I stopped when I saw what could be done with NT, COM and
> OPC. There is no comparison between OPC and your dated recommended way
> of doing it.

Honestly, one of the major problems with OPC is that while DCOM *CAN* run on a non-Microsoft system, it sure as hell doesn't make much sense. So much of COM, and DCOM especially, is based on an NT server for authentication and a system registry, which AREN'T common things in all the devices that I deal with. Sure, you can slap an OPC server onto the machine getting the data, and it will happily read the data from the PLC in the manner it's accustomed to, and that works great. This is where COM is great, because yes, talking to various devices via a common COM interface from whatever language you want is a very powerful and useful thing. There are no good arguments against that.

But the moment that people say we have to start using DCOM on all our devices is the moment where we sit back and say "Hey! This doesn't make a lot of sense for a simple device!" So what you end up with is OTHER protocols, ModbusTCP and the like, being the actual method used to talk to devices, with OPC being used to glue all this stuff together at the other end. That's what OPC can do: glue things together. Anything else just doesn't make much sense to me!
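To put that in perspective, here is roughly what the "actual method" amounts to on the wire: a little C sketch that builds a Modbus/TCP Read Holding Registers request straight from the openly published frame layout. The helper function and its arguments are my own invention for illustration.

    #include <stddef.h>
    #include <stdint.h>

    /* Build a Modbus/TCP "Read Holding Registers" request (function 0x03).
     * Layout per the published spec: a 7-byte MBAP header (transaction id,
     * protocol id = 0, byte count, unit id) followed by the PDU (function
     * code, start address, register count), all big-endian. */
    size_t build_read_holding(uint8_t *buf, uint16_t txn, uint8_t unit,
                              uint16_t addr, uint16_t count)
    {
        buf[0] = txn >> 8;    buf[1] = txn & 0xFF;   /* transaction id */
        buf[2] = 0;           buf[3] = 0;            /* protocol id: Modbus */
        buf[4] = 0;           buf[5] = 6;            /* bytes that follow */
        buf[6] = unit;                               /* unit (slave) id */
        buf[7] = 0x03;                               /* read holding registers */
        buf[8] = addr >> 8;   buf[9] = addr & 0xFF;
        buf[10] = count >> 8; buf[11] = count & 0xFF;
        return 12;            /* send these bytes to TCP port 502 */
    }

Twelve bytes to TCP port 502 and the device answers. That is the scale of protocol a simple device can realistically carry; a DCOM stack it cannot.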

> I suggest the readers of the list, especially people using
> UNIX to investigate NT, COM and OPC technology and give it a fair shake.

Alright. Hrm. Next target platform for us: AMD SC520 with probably 8 megs of RAM and 8 megs of Flash (as a disk). Hrm. NT on that. Hrm. Not going to happen. Windows CE? Hrm. Maybe, if Microsoft would actually bother to help us small embedded folks. Embedded Linux? Hrm. Let's see: it does fit, I do get all the code, and I do get all this great TCP/IP stuff with standard protocols (which all you guys with big heavy machines can then use OPC to call and ask about, without knowing the underlying details of the protocol). Plus, licensing is dirt cheap compared to WinCE licensing. Heck, MontaVista Hard Hat Linux has no runtime licenses. This is quite a big deal for a small outfit like ours, which would have to pay nearly $30 a license for WinCE and pass that cost onto the user.

On the technical side, I'd say that an embedded Linux is right for me. What do you think?
 
> I am getting confused over some of these discussions regarding
> Open Vs. Proprietary.

Well yes, it is not that clear cut. OPC, for example, is an open standard. The Windows API is also an open standard (although it seems that 'undocumented' usage plays an important role). OPC can be implemented on non-Windows platforms. But Windows itself does not conform to the standards, de-facto or de-jure; it makes its own rules up as it goes along. Given that it is very widely used, that does make it a sort of de-facto standard, but standard is the wrong word when things get changed so quickly. This makes it very inconvenient to use OPC on other platforms, and thus, to all intents and purposes, a proprietary standard.

The telephone network is defined across the board by open standards, so anybody can implement any element. Putting OPC in the telephone context would mean you can use any phone across any telephone network EXCEPT that the combo must use a particular Lucent Technologies chip, and they only sell this chip mounted on a board which has a proprietary backplane connector which is only sold mounted on Lucent backplanes...

Standards are not laws. Participants have to sincerely WANT to make an open interoperable standard. All too often that is not the case; the reasons are obvious and the examples are well known. OPC is not one of these, however. I believe OPC members DO want to interoperate, but they are also strangely hell-bent on interoperating on Windows and Windows alone.

Apparently (according to the OPC website) one of the principal benefits of being an OPC member is that you get to receive an annual sneak preview of where Microsoft are planning to go that year. Strange.

Of course OPC justly point out that they are only designing wrappers as a standard way of making IA devices available to Windows desktop apps, an important and commendable task. Trouble is, they have nothing to wrap. You can wrap assorted media formats in an AVI file, you can wrap a TCP/IP connection in a winsock, but you can't say "I am going to make a wrapper", period.

OPC is not a wrapper. It is convenient to say that at times, but it is being touted, and above all accepted, as a standard interface between standard computing environments and IA devices. The basic problem with doing that is the plethora of standards out there. And that is how this thread started, lest we forget along the way! Somebody bemoaned the fact that there were so many standards out there, and straight away people said 'now OPC is becoming popular as a standard'. So what is it? A class wrapper or a protocol?

In reality users want a common protocol to go between the desktop and the field device, and as there are existing transports for COM objects they are saying 'that will do'. No matter that these transports were designed to solve different problems in different environments. No matter that these transports are not suitable for the field devices themselves; they are saying 'we have an egg carton, let's wrap eggs'. I hope you all like omelette.

So then the OPC say 'hey, we are not tied to DCOM, we are looking into using XML for the transport'. XML is a great buzzword, and a real open standard. But it is not a communications protocol. It is a machine-readable metalanguage that allows an intelligent device to read the specification for itself, so the human does not have to read the spec and then specify it to the device. But what is it actually going to specify?

Now I am just a single simple idiot who just happens to have spent a large part of his career
designing both hardware and software to stuff bits down a piece of copper wire, so far be it from me to doubt the 250+ corporate members of OPC, or, even more pretentiously, to question the wisdom of a Microsoft evangelist. But, IMHO, they are up the creek with this one.

Never mind that they only intended to provide wrappers (actually they started out defining DDE profiles). What people expect of the OPC, and what many people actually think they are getting, is a standardised comms protocol. Now if OPC had actually worked to fill this void, they could have then presented their slab of meat to others to wrap, each in their own way. RTOS vendors would have wrapped it into libraries to sell to their IA field device customers, linuxers would have happily done a free set of wrappers for their environments as a little warm-up exercise before configuring sendmail, and Microsoft, with all the money we are paying them, would, I hope, have produced a set of COM wrappers available as a downloadable service pack. Everyone is happy, AND, in a real distributed field device application, we could cut out that 'PC in the middle' with all those Modbus/Profibus/whatever protocols on board.
 
I haven't been following this discussion too closely, but this section brings up a concern that I have about PC-based control vs. traditional PLCs. I would like to hear some opinions and explanations of how any of you may be
handling this.

Suppose we manufacture a machine that uses PC-based control instead of a PLC, and send this to one of our customers. After several years they have a problem with part of the computer system and need a replacement. The original hardware is no longer available and current hardware does not support the software used. (We have seen this situation before: system ships in '92 with Win 3.0 --> touch screen fries in '98 --> new version of touch screen does not support Win 3.0 --> search for compatible replacement --> engineering required to reconfigure system with new drivers, etc.)

I realize that the PC-based solution can offer tremendous advantages, but I have a hard time getting past this problem. We manufacture capital
equipment, and our customers typically perform their own maintenance. Trying to get our application running on new hardware may be beyond their level of expertise, whereas installing a new PLC and downloading the program is not a problem.

We have been shipping PLCs on our equipment since the early '80s and except for a few of the very old systems, spare parts are still readily available. I realize that some of the industrial computer vendors will support their hardware for a certain length of time. What kinds of timeframes are you seeing?

Thanks in advance
Randy DeMars
 
Roger

Yes, we have tried to bring TCP/IP onto the factory floor, and have not yet succeeded. We build batch-weighing machines with multiple industrial CPUs which are linked over Ethernet in master-slave configurations.

The difficulty I have is that almost all the vendors of TCP/IP stacks want royalties for each installation that you make. To me that is extremely messy. I prefer to pay a reasonable up-front price and do what I like with it.

So our first attempt (and a successful one) was to communicate by simply sending raw Ethernet packets to the slaves. This works well, is fast, and never loses packets, BUT it does not interface to the factory floor; i.e., the customer wants the data pumped into their Windows box.

Currently I have resigned myself to the fact that I have to write my own TCP/IP interface using public domain modules, typically from WatTCP.

But it is a long learning curve.

My wish list is a simple TCP/IP implementation for DOS-based industrial installations that my software can call with an address and data.
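Something like the sketch below is really all I am asking for (written against BSD-style sockets; WatTCP's API under DOS is broadly similar, though I have not proven this code there, and the function itself is my own invention):

    #include <string.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    /* "Call it with an address and data": open, send, close. */
    int send_to_slave(const char *ip, unsigned short port,
                      const void *data, size_t len)
    {
        struct sockaddr_in sa;
        int s = socket(AF_INET, SOCK_STREAM, 0);
        if (s < 0) return -1;
        memset(&sa, 0, sizeof sa);
        sa.sin_family = AF_INET;
        sa.sin_port = htons(port);
        sa.sin_addr.s_addr = inet_addr(ip);   /* dotted-quad slave address */
        if (connect(s, (struct sockaddr *)&sa, sizeof sa) < 0 ||
            write(s, data, len) != (ssize_t)len) {
            close(s);
            return -1;
        }
        close(s);
        return 0;
    }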

Regards
 

Matthew da Silva

Once again, moores3 has hit the nail on the head. Well done. The writing style and attitude of tolerance reflect a healthy respect for the issues at hand. Until recently, I, too, had little reason to trust Microsoft. I started using MS products at rev. 3.0, when the blue screen of death was also the startup screen. Having worked on Macs before that, I was sceptical but powerless due to the prevalence of PCs in the new office I had joined.

I became acquainted with the PC and soon got to enjoy the black screen and prompt as a way of getting quickly to the bottom when a problem occurred with the user interface. Which was often. I then left and joined another company, where I switched back to Mac OS. The lack of transparency was a barrier but since the software
worked well (more often than with Windows 3.1, at least), I didn't complain.

Now, due to the requirement to access a single network here, I've gone back to the PC (Win). The facility with which applications go online and remote servers are accessed is not comparable in the world of Macs. It's alright to support open source and even open communications protocols, but most businesses couldn't operate without pre-packaged, 'open-by-default,' and virtually standard engines such as Windows.

That does not mean that any manufacturer or developer isn't examining all the options. There's no black-and-white result. It's an ongoing process in which corporate culture selects the individuals to make purchasing decisions who, usually, have both the wisdom and the knowledge to do so in the best interests of the company.

It's an ongoing process also for developers of control and automation systems. There are many individuals in such companies who plan future systems, not just one per company. Such a monolithic and 'totalizing' view is fine for thick paperbacks, but the real world is more complicated and shadowy.

My comparison of Mac and Win is hardly technologically breathtaking, but I think it illustrates how the evolution of a major industrial product closely mirrors the development of commerce and business in general. In the case of Win, to see openness may simply require a momentary suspension of disbelief which can allow us to see the simple reality where otherwise there is the illusion of chaos.

Regards,
Matthew, Yamatake, Tokyo
 
> I am getting confused over some of these discussions regarding
> Open Vs. Proprietary.

Don't worry about it. I think most people are nowadays, especially because everyone likes to bandy around their own definition.

When one hears "open" now, as related to software, one is thinking "open source". This is where the software is licensed under the GNU General Public License or another similar license, where the code for the software is freely available to everyone. The licenses usually prevent someone from taking your code, modifying it, and redistributing it without making your source code changes available.

While everyone will pretty much agree that open, published interfaces (your examples of Modbus and TCP/IP are those) are a Good Thing, people differ in opinion on the implementation of the interface.

And that's the point many people disagree on: whether the implementation should be open. I would say that if reliability is your goal, then yes. BSD Unix is widely regarded as the most stable and secure Unix around. It is open source, although with a lot less fanfare than Linux. The BSD Unix TCP/IP stack served as the basis for Microsoft's implementation of TCP/IP.

And quite frankly, from a reliability standpoint, having a common, open, widely used implementation of an interface means that you, as an engineer, can spend less time writing the standard stuff that everyone else writes (because some of these protocols are complicated!), and more time doing the things that make your product better than the next guy's. Reinventing the wheel is a bad thing, and the "open interface, closed implementation" approach leads to much reinventing and associated problems.

For example: I've been writing more than a few communication drivers lately, for various PLCs and motion controllers. With the singular exception of the Aromat FP protocol (kudos to those guys for writing a GOOD communication spec), the documentation on the protocols was very, very poor. If I had some sort of open implementation around, I could either drop that into my code, or just eyeball it to figure out all the gotchas. When we're talking about communication over standard wiring and protocols (TCP/IP or other), which is something that's going to become extremely important over the next N years, much more so than it is currently, I think that we'll all be much happier as engineers when we have a solid, standard, OPEN implementation of these protocols.
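To make those gotchas concrete, here is a hypothetical ASCII frame builder in C of the kind these drivers are full of. It is not any vendor's real protocol; the point is that the checksum's scope and the terminator are exactly the details a poorly written spec leaves you to guess at, and an open implementation would pin down.

    #include <stdio.h>
    #include <stddef.h>

    /* Hypothetical request frame: start char, station number, command,
     * XOR checksum, carriage return. Which bytes the checksum covers is
     * the classic thing the manual forgets to say. */
    size_t build_frame(char *out, int station, const char *cmd)
    {
        unsigned char bcc = 0;
        size_t i, n = (size_t)sprintf(out, "%%%02d#%s", station, cmd);
        for (i = 0; i < n; i++)        /* checksum covers every byte so far */
            bcc ^= (unsigned char)out[i];
        n += (size_t)sprintf(out + n, "%02X\r", bcc);
        return n;
    }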
 