Communication protocols

I don't know about facades of openness or resistance to real open protocols. For me it is a matter of what makes sense. Do I implement a
system with an operating system that is unique to what my support staff currently maintains? There needs to be a very good reason to do that. What I find is that there isn't enough justification for that, because the common one, the one that is running on hundreds of desktops in the organization, is good enough. The same logic
follows from there. If the object system that comes with that operating system is adequate then I will use it rather than going with another technology.

Obviously, there are different definitions of what "open" means. One definition is that the technology is widely used. OPC is being used by a number of companies, like many commercial SCADA
vendors. I consider it "open" because previously the device interface for each commercial system was unique to each vendor.

As far as Ethernet and TCP/IP are concerned, the argument is over. People can argue about whether or not they are best suited for controls networking, but the same logic applies. If the commodity technology (Ethernet/TCP/IP) is good enough then it will prevail - as it is already.

Sam
 
R
> What I don't understand
> is the resistance to real open protocols and this fierce, rabid, Windows
> everywhere and nothing but Windows concensus in the automation market. It
> seems as if no technical argument can stand this "Windows at any cost"
> mindset.

Because most industrial users know nothing about comms, Curt, and they think MS do (which is surprising, as comms is an area where they have an
abysmal track record).

Most users marvel at what they can do with OPC, without realising they could do the same more simply without OPC.

Certainly they do not understand that they are using an architecture that is the inverse of what automation systems require. Interestingly, MS
have shoehorned them into DCOM because it is what they are pushing across all market sectors. But in the sectors where it is more appropriate
(telecomms, banking, government, large legacy systems), everybody seems to be standardising on CORBA. MS must really love the OPC crowd!

Certainly there is an irony: MS welshed on their announcements to make an OS that was suitable for industrial embedding. I can understand them wanting to chase the consumer gadget market (it's much bigger!), and as they have heavy competition there from things like EPOC, I can understand them
needing to slim WinCE down and forget aspects not strictly necessary. But at this point they should either get to work on another OS or pull out of OPC; they are leading them up a gum tree.

Personally I have said goodbye to the lot of them. I use my own TCP/IP protocols; it takes me less time to implement them than even to read the FAQ of
most standards groups. Industrial communications does not need to be so complicated. Yet I can access my field devices with just a few lines of VBA code, or easily put them in DCOM wrappers.
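The poster never publishes his framing, so everything below (the `READ` verb, the register names, newline-delimited ASCII over TCP) is hypothetical, just a minimal Python stand-in for the kind of plain request/response exchange he describes, simple enough that the client side really is only a few lines:

```python
import socket
import threading

# Hypothetical register bank standing in for a field device.
REGISTERS = {"TEMP1": "23.5", "FLOW1": "101.7"}

def serve_once(srv):
    """Device side: answer one 'READ <name>' request, then close."""
    conn, _ = srv.accept()
    with conn:
        req = conn.makefile().readline().strip()      # e.g. "READ TEMP1"
        verb, _, name = req.partition(" ")
        if verb == "READ" and name in REGISTERS:
            conn.sendall(("OK %s\n" % REGISTERS[name]).encode())
        else:
            conn.sendall(b"ERR unknown\n")

def read_register(host, port, name):
    """Client side: the 'few lines of code' the poster claims."""
    with socket.create_connection((host, port)) as s:
        s.sendall(("READ %s\n" % name).encode())
        status, _, value = s.makefile().readline().strip().partition(" ")
        if status != "OK":
            raise IOError(value)
        return value

# Demo: device in a background thread, one read from the "field device".
srv = socket.socket()
srv.bind(("127.0.0.1", 0))                            # any free port
srv.listen(1)
port = srv.getsockname()[1]
threading.Thread(target=serve_once, args=(srv,), daemon=True).start()
value = read_register("127.0.0.1", port, "TEMP1")
print(value)                                          # prints 23.5
```

The point is not this particular framing, but the scale: a line-oriented ASCII protocol like this is trivially wrapped in VBA, a DCOM object, or anything else that can open a socket.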

I would be quite happy to publish my own protocols (they are quite flexible and simple), but I will not bother: I know nobody will be interested in them, as it does not say MS on the packet.

No skin off my nose, though; I do what the customer wants for a lot less money, a LOT less, and that has allowed us to deploy where more complex solutions have been evaluated and rejected because of cost.
 
L

Larry Lawver

To the List:

Curt's questions have prompted me to write about something that has been stewing in me over the last year of flames, religious fervor, and cheap shots at brand name automation suppliers and
Microsoft. Executive Summary: Use whatever is right for the job at hand because it is the best way to do it, not because all other possibilities are in some way evil!

Curt asks: Why is proprietary good? The answer is: Because it almost always works.

The good news about truly open systems is that you can do ANYTHING. The bad news is that YOU are responsible for EVERYTHING. When a proprietary system fails to do something it
should reasonably be expected to do, you should expect and receive complete support from the supplier. [Please do not flame me about that statement. It has always worked for me, over
the last three decades and four careers. If your experience seems to be different, try reading that sentence again, carefully.]

I have long seen "open" vs. "proprietary" as a question of make vs. buy, a classic engineering issue. If you need a small quantity of
something, it is difficult to justify the development costs associated with making it yourself if someone else has it in a catalog. If you truly need something that no one else has ever created, then you have to make it. Even at that point, though, you should take some time to decide if you really have a project so unique that no engineer has ever addressed it before--- or if it is a project so wretchedly defined that no one is going to come out of it alive!

Once, I found myself simultaneously at three different points along the make vs. buy continuum. I was managing two large proprietary
PLC projects, a UNIX project, and a volume product development project with an embedded controller and custom PCB at the heart of each of a planned 1000 units / year. All three projects were successful and profitable. Each was as "open" as was appropriate, but that wasn't a criterion. The important criteria were the project requirements, and how best to meet them.

Many complaints about "proprietary" systems (and, yes, I agree Microsoft is proprietary) have to do with unreasonable expectations, especially about cost. Many complaints I have seen on the list could be solved by purchasing a $1000 component.
Unapologetically, I have to tell you that "costs $1000" is not the same as "impossible." If you are in the automation business, you have to be mentally prepared to take $1000 out of the bank and burn it in an ashtray at any given moment. Of course, I am not encouraging waste or sloppy engineering. I am merely pointing out that it's an expensive business!

When the cost of proprietary systems is compared with "open" systems, I frequently find that the performance criteria are very relaxed for the "open" system. One of my clients regularly uses a
$7000 PC-based bill of material unless hard specified to use my $18,000 bill of material. The PC-based system takes six weeks in the field to tune and has spotty reliability after that. The proprietary bill of materials goes together quickly in the shop and rarely requires field visits. Which is really the more expensive bill of material?

Across the universe of automation projects, a universe of solutions is possible. I frequently mention Larry's Rule to my clients: If it works, it must be right; just be careful of your definition of "works." If you found a way to run payroll on a PLC, you are insane, but if it works for you, then it must be right. If you build an entire automation project around open source software running on XT clones, it must be right for you, or you wouldn't have done it. Demonizing the products and services you didn't use doesn't make your project any better!

The demands of automation are a very small niche, to this day, and while we should take advantage of any technology that will benefit our clients, we should note that the rapid evolution of the Internet and open systems does not directly relate to automation. It's all about numbers. Millions of people are working on open systems
and the Internet. Thousands of people are working on automation (at the level of an A-List participant). Face it: Our entire worldwide community wouldn't amount to a decent beta test field for Microsoft! The promise of excellence in open systems is a function of man-hours spent on it, and it will take years for automation to enjoy benefits other businesses are reaping now.

A proprietary system lets some company make a buck off of it. That motivates them to engineer it carefully and support it strongly. It means that they service the warranty to the end user, and provide spare parts for at least a decade. This stability is very important to end users and critical to small end users. And this is why I have always questioned PC-based control in general, not just because of Microsoft issues.

I insist that proprietary systems, properly applied, are the simple solution to a wide range of automation projects. That doesn't mean that I doubt or demean the success of everyone on the list who avoids proprietary systems. This is a brilliant newsgroup, and the people who will prove me wrong are probably reading this now.
You just haven't done it yet.

Hope this helps!

Larry Lawver
Rexel / Central Florida
 
R
> I think that part of it is that the automation and IT industries are
> no longer made up of the 'geeks of old', so to speak. The way I put
> it to people is that "I was a geek before it was cool'.
>

Automation has not had geeks; up till recently they have got by quite happily pretending that everything can assimilate a bank of relays or a cam, and user interfaces can be made up of screens
that are switched on and off.

Now things are getting more serious, and automation people are understandably finding it difficult to come to terms with the changes.

But I think they must try to learn some of the fundamentals; too often I see my colleagues and people on this list talking about absurd solutions.

> Those of us that were writing assembler programs on our
> Commodore 64 oh-so-long-ago are by nature out of the
> mainstream. we were the ones that usually ate at the lunch table
> by ourselves with our noses buried in the apple II programmers
> reference. And we liked it that way! We did not want to be
> bothered by the peer issues of the mainstream.

In an industry that doubles each year, it is only natural that 1/2 the people have less than one year's experience ;-)

What amazes me is that inexperienced users are happy to believe any marketing release or commercial guy who happens by them.

> Also, as more business managers get involved, they have even less
> understanding of what is out there. All they know is windows. And
> this is because they go to best buy, circuit city, Wal-Mart, or
> whatever other chain store, and that is what they see.

What I do not get is why everybody is an expert when it comes to computers. If you decide to install a 4-quadrant brushless DC motor and drive on a simple ventilation fan, nobody would question your wisdom; but do anything with a PC and every Tom, Dick, and Harry is there telling you how it should have been done.

> They cannot accept that there is a viable alternative
> to windows.

Well, Microsoft do spend an awful lot on evangelists. One of the things they spend an awful lot of effort on is promoting windows as the easy solution. They also promote the image that 'geeks' actually waste time because their solutions, even when idealistically correct, waste time.

It is an excellent approach: MS products are characterised as being quick to have 'something' running. In fact my experience with MS is that you can get 80% of the way very quickly; it is just the last 20% that turns out to be a nightmare!

Not that I advocate that everybody types documents with vi! It is a very efficient editor in the hands of an expert, but it does take years to get that expertise!

No, there has to be a balance. A small investment in time trying to actually understand what you are doing, and selecting the right approach, pays big dividends in the long term.

But at the end of the day that marketing is good. You can waste as much time as you like with a windows solution, and everybody sympathises with
you. "Damned windows," they say, and crack a few jokes about blue screens and 'if MS made cars....'. But just waste a couple of hours trying to get a non-windows solution working and everybody comes down on you because you are
not using windows.

My experience with unix is that it takes longer to do most setup tasks. But when I do have problems I can get to the bottom of things (which is often impossible with windows), and once I have the unix box set up, it is set up for good. In fact I believe a lot of windows stability problems are due to misconfigured systems, but the reason they become misconfigured is that when the wizards don't work you are left with ugly workarounds; you cannot get under the hood and get to the root of the problem, partly because the rampant use of wizards means you never actually learn what is under the hood!

I must confess, I deploy windows far more than non-windows. There are cases where windows is a good solution; their desktop environment is sleek. But there are many occasions where there is a much better case for non-windows, especially in dedicated embedded boxes. Yet even here I err on the side of windows. Despite the fact that I am
expert in using unix and specialist operating environments, I have a fear of deploying them;
with windows I can waste 10 times as much time and risk having unstable and/or unscalable systems.

And yet some customers are not interested in the inner workings, they are just interested in what the system achieves, and the price. As it should be. In these cases my use of windows is limited to front ends that must co-exist with windows desktops. I do far more for far less in these circumstances.

Managers should learn to look at the bottom line, and long term results.
 
J
yeah, Unix.

-> We're probably too late. It seems that most people attribute
-> internet communications to the 'fact' that all the computers
-> run one operating system.
 
R
The internet was actually born and bred on unix; in fact unix servers form the mainstay of the internet (even MS run hotmail on SUN systems!).

If truth be known, the success and interoperability of the internet has been due to the openness of its protocols.

(As someone once said, "thank god TCP/IP is not patented".)
 
Joe:

I think that you are accurate to a degree. Before I got to my current job I used C and UNIX. I also had experience developing systems back when computers didn't have virtual memory and we had to swap pages in and out of memory. How about counting the time for each machine instruction so you knew when your slice of time was up? I didn't do Commodore 64, but I worked with CP/M, Apple II
and a lot of other early computers.

For a long time I pushed UNIX and what I considered "open" standards. I was not comfortable when I had to start using Windows. I refused to even consider it until NT came along. Once I started using NT I realized it was good enough. Being good enough was more than enough of a reason to use it, since it is the dominant computer architecture.

I used to be like the old cigarette ad, "I'd rather fight than switch!", but once I finally switched I found that the benefits outweigh the limitations. I don't like Oracle's CEO, either, but I like their products...

Sam
 
C

Curt Wuollet

Hi Joe

So, to sum it up, familiarity? Ease of use I attribute to familiarity because for example, I find NT hard to use and Linux easy because
that's what I'm familiar with. I didn't think the resistance to change was that powerful. Off-list I got the word that "My boss makes me use it".

> I will take a try at responding to some of these issues:
>
> I think that part of it is that the automation and IT industries are
> no longer made up of the 'geeks of old', so to speak. The way I put
> it to people is that "I was a geek before it was cool'.
>
> Those of us that were writing assembler programs on our
> Commodore 64 oh-so-long-ago are by nature out of the
> mainstream. we were the ones that usually ate at the lunch table
> by ourselves with our noses buried in the apple II programmers
> reference. And we liked it that way! We did not want to be
> bothered by the peer issues of the mainstream.
>
> (trying to keep flames to a minimum, realize that I do not mean
> everyone, just some people.)
>
> Many of the new automation engineers and (especially) IT staffers
> do not remember the world before windows. If I go upstairs to the
> computer room right now and ask the 3 people in there what they
> know about the "Trash-80 coco 2", I would get a blank stare. They
> literally would be unable to even translate that into the correct
> model name. And forget about file redirection. I spent about 1/2
> hour explaining redirection at a DOS prompt once.

I wonder if there is a relation to the number of non-professionals that wander into it because it's "easy with windows".


> These groups of people got into it after windows was dominant.
> And since they came to it after it was cool, they are squeamish
> about going outside the accepted boundaries. Their comfort zone
> is smaller.
>
> Also, as more business managers get involved, they have even less
> understanding of what is out there. All they know is windows. And
> this is because they go to best buy, circuit city, Wal-Mart, or
> whatever other chain store, and that is what they see.
>
> For those of us that remember the "holy wars" of Commodore vs.
> Apple II vs. PC-compat., it is easy to get the concept that if you
> don't like one product, you just switch to the other. For those that
> have grown up on only a single platform, there is literally nothing
> else out there. They cannot accept that there is a viable alternative
> to windows.
>
> I am the same way on some things, and I am sure you have some
> things that people consider "quirks". I still keep track of all of my
> project notes in bound journals, using a pencil. I realize that there
> is software out there that is better. I realize that there are a
> thousand and one arguments for making all of my notes
> electronically rather than on paper. I still do it because when I
> developed the habit, there wasn't any viable alternative.

The problem is, one of my quirks is reliability, I have customers I haven't heard from in a year. Of course, service calls may be billable. Windows is good enough for a lot of things, but controls?

> Likewise, Microsoft is their habit. It is all they are comfortable
> with.
>
> In fairness, Windows does have redeeming value: It is VERY easy
> to set up. I have computers that cannot run Linux because of hard
> drive sizing issues, monitor incompatibilities, video and network
> drivers, etc. But they run windows as well as any other machine.
> MS has a very simple interface for the user. When I set up a
> database and file server, I used NT 4.0 SP6. Not because it was
> my preference, but because I needed it up that day, and Linux
> would have taken longer to get drivers and such ready, install,
> compiled, etc. (Note: I am in the process of migrating :^} )
>

I have seen some valid cases the other way too, in fact I have some machines here I got because they wouldn't run windows.

> This 'ease of use' is the biggest driver. Since everyone has less
> time to do more stuff, we like to grab something that we can slap
> into place, and deal with minor issues as they arise. I have never
> gotten a bit of argument when my server is offline, as I just say
> "Windows crashed. It will be back up in about 15 minutes". All
> the suits just smile and say "Oh. OK. Let us know when it is
> ready".

:^)

> Since there is the windows buy in on ease of use, everything else
> comes part-and-parcel. Don't like what MS did to Netscape? Still
> want windows? Then you compromise your principles and fire up
> IE.
>
> Translate that to the automation world, and you have AB/Rockwell.
> Nobody has ever told me that they thought AB was price
> competitive. Nobody has ever accused AB of having the latest
> technology. They are usually a step behind and twice the price.
> But they are "standard". Many, many, many places spec it, just
> because that is what they are used to. And again, you get what
> they offer as a package deal.

But why are AB, Siemens, GEF et al. in bed with MS? Why would GEF, for example, replace a good stable UNIX Cimplicity product with a product that crashes during the demo? And they refuse to
continue the UNIX product. These guys treat me like a raving lunatic when I want to use something else and simply ignore the reliability aspect.

> I will stop here, as I could end up writing a book if I get going...
> All responses are welcome, provided that they are thought out and
> civilized.

I am not looking for a X sucks, Y rules type of discussion. I can find those elsewhere. The paradox I'm working through is the difference between stated priorities and priorities in practice.

Curt W.
 
R

Rob Hulsebos

>> Cannot totaly agree. Firstly many protocols have different
>> characteristics for different requirements. CAN goes
>> short distances, but with tightly controlled timing.
>> Profibus DP is easy, but costly and not scaleable.

What is "not scaleable" about DP?

>Another point, is the short distance of CAN link...
>Simply speeking, the max distance of a link comes from the link
>length, you can only get 40m for 1Mbit/s, but you can
>get 500m with 125Kbit/s and even 1000 with 50Kbit/s.
>And this without repeater.

With even lower bitrates, longer distances can be achieved. I have heard of bitrates of 5 Kbit/s over 5 km. Haven't tried it, though.
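The trade-off being quoted here follows from CAN's arbitration scheme: a dominant bit must propagate to the far end of the bus and back within a single bit time, so roughly speaking, halving the bitrate doubles the usable cable length. A small sketch using only the figures quoted in this thread (these are ballpark values, not a spec; actual limits depend on transceivers, cable, and sample point):

```python
# Bitrate -> approximate max CAN bus length, per the figures quoted in
# this thread (no repeaters). Ballpark values only, not a specification.
CAN_LIMITS = [            # (bit/s, metres), fastest first
    (1_000_000, 40),
    (125_000, 500),
    (50_000, 1000),
    (5_000, 5000),        # the "heard about, not tried" figure
]

def max_bitrate_for(distance_m):
    """Fastest quoted rate whose length figure covers the distance."""
    for rate, length in CAN_LIMITS:
        if distance_m <= length:
            return rate
    return None           # beyond any quoted figure: repeaters or fiber

print(max_bitrate_for(300))   # a 300 m bus -> 125000 bit/s
```

Note how the product of rate and length stays in the same rough ballpark across the table; that constant is set by signal propagation delay, which is why no amount of clever protocol work gets around it.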

>With ProfibusDP you can obtain a maximum link of 9600m with
>93.75Kbit/s AND 7 repeater.

According to the Profibus spec you can only have *three* repeaters maximum. But that's theory; current practice allows more than this
(luckily!). I have also heard of distances of 90km with fiber.

Greetings,
Rob Hulsebos
 
J
-> Curt's questions have prompted me to write about something
-> that has been stewing in me over the last year of flames,
-> religious fervor, and cheap shots at brand name automation
-> suppliers and Microsoft.

I would take that as a compliment!

<snip>

-> Many complaints about "proprietary" systems (and, yes, I agree
-> Microsoft is proprietary) have to do with unreasonable
-> expectations, especially about cost. Many complaints I have
-> seen on the list could be solved by purchasing a $1000
-> component. Unapologetically, I have to tell you that "costs
-> $1000" is not the same as "impossible."

<snip>

-> I am merely pointing out that it's an expensive business!

This point I have to disagree with somewhat. If proprietary system 'A' costs $1000 and performs a set of functions, and open system 'B' is free and performs the identical set of functions, I HAVE to ask why 'A' costs $1000. The point being that it is only an expensive business because there are no alternatives to the overpriced options. I
worked for AB for a year and was disgusted by the waste and overpricing of the components produced on my production line (I was a line supervisor). The item we made cost all of about $5.00 worth of parts, and sells on the street for slightly over $100. Why? Because there wasn't an alternative. I am all for supply-and-demand, market forces, and what have you. But when a proprietary system leverages off of an installed base to keep others
from competing, you have an unfortunate situation that goes against the free market system. The truly open options are just the competition. As with most markets, those who support the
'underdog' are usually very convinced of their decision, and not afraid to share their reasons. This is a personality trait (refer to my earlier post on this subject).

-> When the cost of proprietary systems is compared with "open"
-> systems, I frequently find that the performance criteria are very
-> relaxed for the "open" system. One of my clients regularly uses
-> a $7000 PC-based bill of material unless hard specified to use
-> my $18,000 bill of material. The PC-based system takes six
-> weeks in the field to tune and has spotty reliability after that.
-> The proprietary bill of materials goes together quickly in the
-> shop and rarely requires field visits. Which is really the more
-> expensive bill of material?

I would assume, however, that you are more familiar with your $18,000 system, and are much better prepared to set up and support that system than the lower-priced system. Not suggesting
anything sinister, just pointing out that, as with anyone, the system you prefer is always going to work better for you. Also, although I obviously do not know for sure, I would ask if the $7000 system is based on open standards, or is it also proprietary? (i.e., Windows and a proprietary control system.) If so, that sort of
shoots down the 'proprietary always works' statement. I am currently only aware of one open control project (LinuxPLC) and this has not been released to the public at large as a stable control system.


-> Across the universe of automation projects, a universe of
-> solutions is possible. I frequently mention Larry's Rule to my
-> clients: If it works, it must be right; just be careful of your
-> definition of "works."

<snip>

Agree completely. I have used Microsoft products for many things. This is because of the ease-of-setup I mentioned in my other post. However, I do not claim that they are more stable, or that they
provide the highest quality of service. They were just easy to slap together, and they provide the ready-made excuse for downtime (Windows crashed). If I take down a Linux server, people start to get stressed out over the downtime. This to me speaks volumes about what is expected of Windows vs. what is expected from anything
else. Nonetheless, I still will throw together a VB program on WinNT to do a quick monitoring project if something is working incorrectly. I just want to make sure I make the distinction
between 'easy' and 'better'.

<snip> All comments agreed to.

-> A proprietary system lets some company make a buck off of it.

Yes, by definition.

-> That motivates them to engineer it carefully and support it
-> strongly. It means that they service the warranty to the end user,
-> and provide spare parts for at least a decade. This stability is
-> very important to end users and critical to small end users.
-> And this is why I have always questioned PC-based control
-> in general, not just because of Microsoft issues.

WHAT? '64,000 known bugs at release' is engineered carefully? Three hours in the tech support phone queue is supported strongly?
A quote from the MSDN documentation on VB: this feature "is not compatible with WindowsNT. This is by design"... And don't even get me started on stability.

In the PLC realm, I stand by my earlier statements that nobody has ever accused AB of being on the cutting edge nor price competitive. They are simply a standard. It is about the comfort zone, not quality of product. Referring earlier in the message: why do I pay several hundred dollars for RSLogix, when others offer their software for free or next to free?

-> I insist that proprietary systems, properly applied, are the simple
-> solution to a wide range of automation projects.

Simple, yes. Best, not always.
 
E

Edelhard Becker

Hi Sam,

> I don't know about facades of openness or resistance to real open
> protocols. For me it is a matter of what makes sense. Do I implement
> a system with an operating system that is unique to what my support
> staff current maintains? There needs to be a very good reason to do
> that. What I find is that there isn't enough justification for that,
> because the common one, the one that is running on hundreds of
> desktops in the organization is good enough. The same logic follows
> from there. If the object system that comes with that operating
> system is adequate then I will use it rather than going with another
> technology.

IMHO (and by experience) these "this-is-currently-good-enough" solutions will bite you someday. Some examples:

- the "good-enough" operating system might turn out, over time, not to be as stable as expected (desktop OSes usually don't run 24x7). Then you can spend hours and days fine-tuning that system for
stability (removing unnecessary programs and drivers, cleaning up dynamic libraries, etc. etc.)
- projects grow over time. As soon as a system runs flawlessly, there might be new demands, options etc. Once started with a good-enough solution, these systems will become a nightmare.

- using good techniques (usually, not automatically) leads to elegant and simple solutions (and therefore reduces development time and cost). Using inappropriate techniques (always) leads to crappy solutions. You can put a screw into the wall with a hammer; that might be good enough, but is it good?

- usage, including local backups etc., can never be too simple for the local staff. We (as a software company) have to make the system as simple as possible, e.g., for backup: insert an empty floppy and press a button, nothing more. On this list a few weeks ago there was a thread where somebody screwed up his Win9x system simply by copying some files to a floppy manually!

- BTW: I usually have better experiences with staff that doesn't know anything about computers. You simply write instructions for what to do and that's it. When using well-known OSes in a production system, there is likely to be somebody with the same OS at home who starts fiddling around (e.g. trying CTRL-ALT-ESC etc.).

Offhand, I can only think of two reasons why someone would not choose the technically best solution:

- price (seldom, but that's another story)
- a (HW or SW) interface, which is absolutely needed for the customer's environment, is not available

> Obviously, there are different definitions of what "open" means. One
> definition is that the technology is widely used.

Sorry, I often see the term "open" misunderstood, but I have never heard that definition before.

> OPC is being used by a number of companies, like many commercial
> SCADA vendors. I consider it "open" because before the device
> interface for each commercial system was unique to each vendor.

You can call OPC anything but "open". It relies upon OLE/DCOM, which is a proprietary protocol from MS. You cannot get a strict specification of the protocol. It simply is widely used because the
SCADA vendors use that particular "good-enough" OS, where you get OLE/DCOM libraries (binaries, no source!) with the development system. It is just too easy to use (especially if the programmer doesn't look into the future). Linux implementations of OLE/DCOM have to rely on reverse engineering (because there is no spec) and are therefore always at least half a year behind OLE's native platform (whether they will ever get 100% compatible, I don't know; I am not aware of any 'DCOM conformance test').

Regards,
Edelhard
--
s o f t w a r e m a n u f a k t u r --- Software, that fits!
OO-Realtime Automation from Embedded-PCs up to distributed SMP Systems
[email protected] URL: http://www.software-manufaktur.de/
Fon: ++49+7073/50061-6, Fax: -5, Gaertnerstrasse 6, D-72119 Entringen
 
M

Matthew da Silva

It will not be long before computer systems engineers are certified just like any other type of engineer. As IT becomes 'mission-critical' for companies, the CEO will be asking for people with more accountability. Maybe this is one of the problems with the 'everyone's an expert' syndrome. It's the same with literature. Technical writers (which I have been in the past and shall possibly be again in future) are largely considered an unfortunate adjunct to the expense of a project.

Not only that, but they are often brought in only at the end of the development project. What happens is that they suddenly start asking for a
lot of changes to screen displays, error messages and such, to match the terminology and taxonomy choices being made in the documentation. Because these types of 'user documentation' are part of the software source
code, delays result (the alternative is to give the tech. writer the code and have them make the changes themselves -- would you do this?).

When it comes to documentation, every engineer and project manager is telling you how it should be done. As time goes on, the importance of
documentation will increase. Online help will be expensive, and in any case it will be essentially the same material as goes into the paper manuals. Tech. writers must (should) learn more skills: librarian skills, for example, to help with classifying and structuring information. Indexing is a valuable skill that most tech. manuals lack. Indexing is also the key to effective online help. It is labor-intensive (it cannot be automated) and requires significant effort and commitment on the part of the project manager and the tech. writer.

Future software cycle times should shorten more and more. In this scenario, it should be a requirement for tech. writers to be brought into development projects earlier and to be given an amount of latitude commensurate with their analytical abilities. Despite appearances, many tech. writers are quite intelligent and may even have useful input into marketing and sales
strategies. Like IT professionals, tech. writers will be more valuable if brought in-house, rather than used as an outside resource.

Cheers,
Matthew
Tokyo, Japan

>> when it comes to
computers. . . every tom dick and harry is
there telling you how it should have been done.<<
 

Curt Wuollet

> I don't know about facades of openness or resistance to real open
> protocols. For me it is a matter of what makes sense.

Me too.

> Do I
> implement a
> system with an operating system that is unique to what my support
> staff current maintains? There needs to be a very good reason to do
> that. What I find is that there isn't enough justification for that,
> because the common one, the one that is running on hundreds of
> desktops in the organization is good enough. The same logic
> follows from there. If the object system that comes with that
> operating system is adequate then I will use it rather than going
> with another technology.

So, if it's good enough for the desktop, it's good enough for controls? Or a replacement must run everything in the company?

> Obviously, there are different definitions of what "open" means. One
> definition is that the technology is widely used. OPC is being used
> by a number of companies, like many commercial SCADA
> vendors. I consider it "open" because before the device interface for
> each commercial system was unique to each vendor.
>
> As far as Ethernet and TCP/IP is concerned the argument is over.
> People can argue against whether or not they are best suited for
> controls networking, but the same logic applies. If the commodity
> technology (Ethernet/TCP/IP) is good enough then it will prevail - as
> it is already.

But, suppose Ethernet and TCP/IP required you to run Sun Solaris for example, would that then be open in the sense of OPC? I think people would have a problem with that. Why is it not an issue if MS controls OPC? Why would people shy away from one and embrace the other? If Sun, for example, had the market sewed up, would it then be ok if they controlled all the communications?
 
-> So, to sum it up, familiarity? Ease of use I attribute to
-> familiarity because for example, I find NT hard to use
-> and Linux easy because that's what I'm familiar with.
-> I didn't think the resistance to change was that powerful.
-> Off-list I got the word that "My boss makes me use it".

<snip>

Yes, to a degree. The catch here though, as stated to you off-list, is that you are not always dealing with the engineer's familiarity.
Many times it is the suits in the front office that are making the platform decision. They go out and play some golf with a vendor, and suddenly they are standard. Or, as another scenario, they get one or two vendors to come in and give them a dog-and-pony-show, then boil down the feature lists to some common denominators,
and tell the engineer "You can use whatever system you want, as long as it has COM, OPC, Runs an Access database, and has feature A, B, and C" because that is what the sales people that they talked to promoted as their biggest features.

Summed up: You are dealing with familiarity for non-technical individuals.

-> I wonder if there is a relation to the number of non-professionals
-> that wander into it because it's "easy with windows"

You actually wonder? :^}

<snip>

-> The problem is, one of my quirks is reliability, I have customers I
-> haven't heard from in a year. Of course, service calls may be
-> billable. Windows is good enough for a lot of things, but
-> controls?

Not in my process. Of course, if anything bumped in my process we are out of production for 6 to 8 hours. Fortunately the VP engineering has been convinced that control by windows is a "bad
thing". He presses us about every 4 months, but we have a standard list of replies that keeps him at bay! (Typically "How long has it been since you rebooted your computer on your desk? That
would be a complete system shutdown and restart.")


-> But why are AB, Siemens, GEF etal. In bed with MS? Why
-> would GEF for example, replace a good stable UNIX Cimplicity
-> product with a product that crashes during the demo? And they
-> refuse to continue the UNIX product. These guys treat me like a
-> raving lunatic when I want to use something else and simply
-> ignore the reliability aspect.

Bandwagon. Also, they are no longer selling to engineers; they sell to executives. The suits only know Windows. It isn't even that hard to distract them with questions like "Do you have a full-time Unix administrator on staff? You would need one, you know, if you tried this Linux thing. They cost anywhere from $50K to $100K to
get one that knows what they're doing. Our Windows package, however, is SOOO simple, my 8 year old set up the oil refinery down the street...."

And once they are in the door, they can blame operator error, blame Microsoft, blame the hardware, or worst of all, say that the plant
engineers are at fault. "Gee, I haven't had any service calls from the oil refinery that my 8 year old set up. How competent are your people?"

-> I am not looking for a X sucks, Y rules type of discussion. I can
-> find those elsewhere. The paradox I'm working through is the
-> difference between stated priorities and priorities in practice.

Have you ever seen a spec that didn't include reliability? Nobody has ever said "We want this system to control our process, but it can go down whenever, we really don't care...." The problem is that they feel that they must trade the reliability for ease of use. Their IT staff cannot support anything that didn't come from
Redmond, so supporting a Unix system day to day is perceived to be more costly than the Windows-based system. Why? Because deep down, they cannot believe that there could be a system that
doesn't crash as much as Windows. The thought process here is "If it were really possible to make a system that was that stable, Microsoft would have done so. They are, after all, the largest software company, they have the resources to do it right, so that must be as good as it can be. If we went to Unix, we would have the same problems, but we would either need an expensive support person, or it would take twice as long to get back up and running due to unfamiliarity".

I appreciate that everyone is taking the high ground with this. The only (candle)flame I got off list was someone telling me that I forgot about those that hunched over their old Atari computers....

mea culpa.

--Joe Jansen
 
For what it is worth, CAN limits the total width of the network, including any propagation delays in the network, to at most one bit time.
The bitwise arbitration used by CAN requires that every node "see" a bit while the transmitter is still transmitting it in order to do the bitwise
back-off arbitration. This is at the core of CAN and is a hard limit. The slower a CAN network runs, the wider (longer) it can be, but at some point the bit times and widths just don't make sense anymore. A CAN system's size is directly related to its bit propagation time. Signal degradation does not matter.
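That constraint can be turned into a back-of-the-envelope bound. This is my own sketch, not from the post; the signal velocity and transceiver delay below are assumed typical values, and the result is an optimistic ceiling (real CAN design rules, which also account for the sample point, give tighter figures such as roughly 40 m at 1 Mbit/s):

```python
# Crude upper bound on CAN bus length: the arbitration round trip
# (out to the farthest node and back, plus transceiver delays)
# must fit within a single bit time.

V = 2.0e8  # assumed propagation velocity on twisted pair, ~2/3 c (m/s)

def max_can_length_m(bit_rate_hz, node_delay_s=100e-9):
    """Optimistic length ceiling for a CAN bus at a given bit rate."""
    bit_time = 1.0 / bit_rate_hz
    budget = bit_time - 2 * node_delay_s  # time left for wire propagation
    return max(budget, 0.0) * V / 2       # halve for the round trip

for rate in (1_000_000, 500_000, 125_000):
    print(f"{rate // 1000:>5} kbit/s -> <= ~{max_can_length_m(rate):.0f} m")
```

The point the post makes falls straight out of the arithmetic: halving the bit rate roughly doubles the permissible width, until the bit times stop making sense.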

BTW, this is similar to the way the 64-byte minimum frame size limits the maximum width of an Ethernet 10BaseT segment -- to make sure the two
farthest-apart nodes both see a collision between them within the specified time..... Signal degradation does not matter.
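The Ethernet version of the same arithmetic (again my sketch, not from the post; the 512-bit slot time is the standard 10 Mbit/s figure, the rest is illustrative):

```python
# Why the 64-byte minimum frame exists: at 10 Mbit/s it takes 512 bit
# times to transmit, which is longer than the worst-case round trip
# across the collision domain, so the sender is still transmitting
# when a collision at the far end propagates back to it.

BIT_RATE = 10_000_000       # 10 Mbit/s
MIN_FRAME_BITS = 64 * 8     # 64-byte minimum frame = 512 bits

slot_time_s = MIN_FRAME_BITS / BIT_RATE
print(f"slot time = {slot_time_s * 1e6:.1f} us")  # 51.2 us
```

Which is why the 802.3 collision domain is really specified as a delay budget (cable plus repeaters plus transceivers), not as metres of cable alone.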

Since Profibus uses RS-485 and no collision detection, monitoring, or arbitration (depending instead upon timing and command/response to avoid
collisions), Profibus length depends upon signal degradation and some overall node response parameters. Network size is only indirectly related to bit rate. This is the same for most of the RS-232 asynch communications we are all used to....

Ah, details, details.

steve
 

Curt Wuollet

> Curt's questions have prompted me to write about something that has
> been stewing in me over the last year of flames, religious fervor,
> and cheap shots at brand name automation suppliers and Microsoft.
> Executive Summary:
> Use whatever is right for the job at hand because it is the best
> way to do it, not because all other possibilities are in some way
> evil!
>
> Curt asks: Why is proprietary good? The answer is: Because it
> almost always works.

Unless interoperability and integration are your criteria. In the context of heterogeneous communications it fails miserably, by design.

> The good news about truly open systems is that you can do
> ANYTHING. The bad news is that YOU are responsible for
> EVERYTHING. When a proprietary system fails to do something it
> should reasonably be expected to do, you should expect and receive
> complete support from the supplier. [Please do not flame me about
> that statement. It has always worked for me, over the last three
> decades and four careers. If your experience seems to be different,
> try reading that sentence again, carefully.]

I feel that systems can reasonably be expected to communicate. Systems in the rest of the computing world all communicate; they all use common, open protocols and are expected to interoperate to a reasonable degree. Also the "anything" I have to do includes talking to "foreign" equipment, and the major obstruction is the _deliberate_ omission of any means to do so. It shouldn't be my responsibility to provide a gateway that talks to each machine in its own unique way, but at least open systems (primarily Linux) make it
possible. Linux speaks many protocols and they are all available free, with source. Obviously supporting open protocols is not too expensive. The "reasons" things don't interoperate are merely
excuses for non-cooperative behavior and avarice.

> I have long seen "open" vs. "proprietary" as a question of make vs.
> buy, a classic engineering issue. If you need a small quantity of
> something, it is difficult to justify the development costs
> associated with making it yourself if someone else has it in a
> catalog. If you truly need something that no one else has ever
> created, then you have to make it. Even at that point, though, you
> should take some time to decide if you really have a project so
> unique that no engineer has ever addressed it before--- or if it is
> a project so wretchedly defined that no one is going to come out of
> it alive!

Exactly my point. If instead of senselessly inventing new protocols that serve the same function as existing protocols, manufacturers
were to use something "off the shelf", large amounts of resources would be saved and the customer would be better served in the process. By using high volume protocols and making it
interoperable they could sell a lot more I/O which is where the money is anyway.

> Once, I found myself simultaneously at three different points along
> the make vs. buy continuum. I was managing two large proprietary
> PLC projects, a UNIX project, and a volume product development
> project with an embedded controller and custom PCB at the heart of
> each of a planned 1000 units / year. All three projects were
> successful and profitable. Each was as "open" as was appropriate,
> but that wasn't a criterion. The important criteria were the
> project requirements, and how best to meet them.
>
> Many complaints about "proprietary" systems (and, yes, I agree
> Microsoft is proprietary) have to do with unreasonable
> expectations, especially about cost. Many complaints I have seen on
> the list could be solved by purchasing a $1000 component.
> Unapologetically, I have to tell you that "costs $1000" is not the
> same as "impossible." If you are in the automation business, you
> have to be mentally prepared to take $1000 out of the bank and burn
> it in an ashtray at any given moment. Of course, I am not
> encouraging waste or sloppy engineering. I am merely pointing out
> that it's an expensive business!

In most cases I don't use open systems to save money. I use them to solve the interoperability problem because no one vendor will address this problem. I do have a problem with $400.00 serial cards that differ only in a connector from $4.00 cards and similar clear abuses. Using unique and proprietary hardware and software when there is no clear technical advantage over commodity items is poor engineering at best and amounts to exploitation. The commodity item is likely to be better tested and more reliable through greater field experience and refinement.

> When the cost of proprietary systems is compared with "open"
> systems, I frequently find that the performance criteria are very
> relaxed for the "open" system. One of my clients regularly uses a
> $7000 PC-based bill of material unless hard specified to use my
> $18,000 bill of material. The PC-based system takes six weeks in
> the field to tune and has spotty reliability after that. The
> proprietary bill of materials goes together quickly in the shop and
> rarely requires field visits. Which is really the more expensive
> bill of material?

I would change the OS and software the PC is running. The hardware has demonstrated reliability. Of course the OS and software vendor will always blame the hardware. There is no
inherent reason that PCs should be less reliable. After all, many PLCs are now very close to an embedded PC. If you use software of known questionable reliability for control apps, that's more than bad engineering; that's negligence, no matter how pretty it is.

> Across the universe of automation projects, a universe of solutions
> is possible. I frequently mention Larry's Rule to my clients: If
> it works, it must be right; just be careful of your definition of
> "works." If you found a way to run payroll on a PLC, you are insane,
> but if it works for you, then it must be right. If you build an
> entire automation project around open source software running on XT
> clones, it must be right for you, or you wouldn't have done it.
> Demonizing the products and services you didn't use doesn't make
> your project any better!

If you can point out a case where I demonized an entity without just cause I will be happy to apologize.

> The demands of automation are a very small niche, to this day, and
> while we should take advantage of any technology that will benefit
> our clients, we should note that the rapid evolution of the Internet
> and open systems does not directly relate to automation. It's all
> about numbers. Millions of people are working on open systems and
> the Internet. Thousands of people are working on automation (at the
> level of an A-List participant). Face it: Our entire worldwide
> community wouldn't amount to a decent beta test field for Microsoft!
> The promise of excellence in open systems is a function of manhours
> spent on it, and it will take years for automation to enjoy benefits
> other businesses are reaping now.

It would surely make sense to leverage those millions of manhours where appropriate rather than reinventing proprietary solutions, especially at low volumes.

> A proprietary system lets some company make a buck off of it.
> That motivates them to engineer it carefully and support it
> strongly. It means that they service the warranty
> to the end user, and provide spare parts for at least a decade.
> This stability is very important to end users and critical to small
> end users. And this is why I have always questioned PC-based
> control in general, not just because of Microsoft issues.

That's the theory and sometimes it works in practice. The bazaar method seems to work fairly well without the risk of single sourcing. If the parts are generic and non-proprietary there is no need to stock spares for decades, that's what standardization is for. Desktops are generic, notebooks are proprietary. Which one has more problems with compatibility and obsolescence?

> I insist that proprietary systems, properly applied, are the simple
> solution to a wide range of automation projects. That doesn't mean
> that I doubt or demean the success of everyone on the list who
> avoids proprietary systems. This is a brilliant newsgroup, and the
> people who will prove me wrong are probably reading this now. You
> just haven't done it yet.

I seek only the opportunity to provide a choice based on its merits and technical aspects. What I am finding is that merit and technical
considerations are almost irrelevant in the face of marketing and perception.

cww
 

Larry Lawver

I wrote, in part:

-> When the cost of proprietary systems is compared with "open"
-> systems, I frequently find that the performance criteria are very
-> relaxed for the "open" system. One of my clients regularly uses
-> a $7000 PC-based bill of material unless hard specified to use
-> my $18,000 bill of material. The PC-based system takes six
-> weeks in the field to tune and has spotty reliability after that.
-> The proprietary bill of materials goes together quickly in the
-> shop and rarely requires field visits. Which is really the more
-> expensive bill of material?

Joe's reply (not repeated here) to this section of my post was off-point, obviously because I didn't mention that I am a distributor of well-known proprietary stuff, not an integrator or OEM. The client I mention is an OEM. The rest of Joe's comments are appreciated.

Hope this helps!

Larry Lawver
Rexel / Central Florida
 
Sam wrote:
-> Obviously, there are different definitions of what "open" means. One
-> definition is that the technology is widely used.

Are you seriously presenting that as a definition? The PLC-5 is widely used. Does that suddenly make it open? Was OPC a 'closed' system before being adopted by most of the SCADA
packages? (That is what I am to believe, based on your definition.) What is the magic number of users for a system to metamorphose from closed to open? The only place that a closed system becomes an open system with no change to the source or binaries is in the marketing department. Anything that suggests that 'open architecture' is based on user count is rubbish. I assume then that you would consider the LinuxPLC project a closed, proprietary solution due to lack of widespread use?

-> OPC is being used by a number of companies, like many commercial SCADA
-> vendors. I consider it "open" because before the device interface for
-> each commercial system was unique to each vendor.
->
-> As far as Ethernet and TCP/IP is concerned the argument is over.
-> People can argue against whether or not they are best suited for
-> controls networking, but the same logic applies. If the commodity
-> technology (Ethernet/TCP/IP) is good enough then it will prevail -
-> as it is already.

I would ask why you think it is that this has already happened?
TCP/IP is an OPEN standard. Nobody owns TCP/IP. I can write a TCP/IP driver without paying a royalty. Why, I can even find a spec that tells me what TCP/IP is supposed to do! Here is the
distinction: It is widely used because it is open. It is not open simply because it is widely used.
 
> It is not long now that computer systems engineers
> will be certified just like any other type of engineer.

I just noticed this yesterday in the Spring 2000 "PE Newsletter" by the Texas Board of Professional Engineers. It's been in the works for a while, it seems.

"In June 1998, Texas became the first state to license software engineers. This action had an influence on the recognition by ABET of software engineering as an engineering discipline. Subsequently, a committee formed by IEEE and ACM has developed a "body of knowledge" needed to serve as the basis for a national NCEES exam and curriculum in software engineering. Board Member Dave Dorchester, P.E., has spearheaded this effort from its inception."
 
> Do I implement a
> system with an operating system that is unique to what my support
> staff current maintains? There needs to be a very good reason to do
> that. What I find is that there isn't enough justification for that,
> because the common one, the one that is running on hundreds of
> desktops in the organization is good enough. The same logic follows
> from there. If the object system that comes with that operating
> system is adequate then I will use it rather than going with another
> technology.

-> So, if it's good enough for the desktop, it's good enough for controls?
-> Or a replacement must run everything in the company?

I am not saying that if the operating system is good enough for the desktop then it is good enough for controls. I am saying that if you can run one operating system and it satisfies the requirements of both, then you should use it. There is little benefit in going with a specialized technology if the improvement you get is not significant. A specialized technology is justified only if the commodity technology will not perform to requirements.

> Obviously, there are different definitions of what "open" means. One
> definition is that the technology is widely used. OPC is being used
> by a number of companies, like many commercial SCADA vendors. I
> consider it "open" because before the device interface for each
> commercial system was unique to each vendor.
>
> As far as Ethernet and TCP/IP is concerned the argument is over.
> People can argue against whether or not they are best suited for
> controls networking, but the same logic applies. If the commodity
> technology (Ethernet/TCP/IP) is good enough then it will prevail -
> as it is already.

-> But, suppose Ethernet and TCP/IP required you to run Sun Solaris
-> for example, would that then be open in the sense of OPC? I think
-> people would have a problem with that. Why is it not an issue if MS
-> controls OPC? Why would people shy away from one and embrace the
-> other? If Sun, for example, had the market sewed up, would it then
-> be ok if they controlled all the communications?

It doesn't matter to me if a vendor controls a technology. For instance, I ran SunOS and Solaris for many years. These operating systems are owned by Sun and I didn't have a problem with using
them - that was my choice. NT is owned by MS. There is no difference.

Sun controls Java, yet many people consider it an "open" technology. OPC is based on COM, but the specification of OPC is based on the work of people outside of MS. The implementations of OPC are done by automation companies and not by MS.

If Sun was in MS's place I would be running Solaris and programming in Java.
 