Welcome to Control.com, the global online
community of automation professionals.
Software Quality
By Curt Wuollet on 18 October, 2001 - 9:25 am

Hi All

Living with bugs, missing features, unsupported versions? Have you been coerced to upgrade before you even got that last version working? I saw this article and it sounded so familiar I thought I would share. What's coming up and some remedies are discussed. Relevant since almost everyone in this market uses proprietary software.

http://www.cio.com/archive/101501/wasting.html


Regards

cww

--
Free Tools!
Machine Automation Tools (LinuxPLC) Free, Truly Open & Publicly Owned
Industrial Automation Software For Linux. mat.sourceforge.net.
Day Job: Heartland Engineering, Automation & ATE for Automotive
Rebuilders.
Consultancy: Wide Open Technologies: Moving Business & Automation to
Linux.

By Rob Hulsebos Philips CFT on 18 October, 2001 - 11:24 am

Yeah, and I wonder if it is getting worse with Microsoft's new policy. Will a new version or update of Windows also require updates of the other software installed? Which in turn may force other software to be updated? Does a whole avalanche of updates have to be installed at once? Questions, questions...

Rob Hulsebos

By Anthony Kerstens on 18 October, 2001 - 4:31 pm

Sounds just like Rockwell, making you pay thousands of dollars for each of their software packages with little hope of inter-compatibility. It's always peeved me to have to pay 4k for RSLogix5 and then pay another 4k for RSLogix500. It peeved me even more to pay for RSLogix5000, and then see all the features of the other two not present in 5000.

You'd think since hundreds of thousands of dollars is being sunk into hardware that at least they'd offer the software for free, or at least <1k. And make one software package that would do it all.

Is anybody at Rockwell listening???? Hello!!!!


Anthony Kerstens P.Eng.
(Oh, how I wish my customers would smarten-up and
start using Modicon. :-)

Hey, you like to exaggerate. RSLogix500 is $700 and RSLogix5 is $3K. If you're an important large customer you pay a lot less, and sometimes you pay nothing. Maybe you should ask why they want the equipment. I can't speak for the others, but the PLC5 is the most reliable PLC ever made.

By Anthony Kerstens on 22 October, 2001 - 3:24 pm

I stand corrected. I should have noted Canadian dollars.

From the Rockwell website:
RSLogix500 US$1100 -> CDN$1650 + TAX = CDN$1900
RSLogix5 US$3300 -> CDN$4950 + TAX = CDN$5692

If I were to buy the programming bundle that would be CDN$6900.

In sum, I'm about CDN$400 off for separate purchases, and $1100 off for the bundle. That's only 14 percent off the mark...
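The arithmetic above can be sanity-checked with a short script. This is a sketch that assumes a ~1.50 USD-to-CDN exchange rate and ~15% combined tax; both are inferred from the quoted figures, not rates stated in the post:

```python
# Reconstructing the CDN price figures quoted above.
# Assumed (inferred) conversion factors -- not official rates:
US_TO_CDN = 1.50   # implied by US$1100 -> CDN$1650
TAX = 0.15         # implied by CDN$1650 + tax -> ~CDN$1900

def cdn_price(usd):
    """Convert a US list price to an approximate taxed Canadian price."""
    return usd * US_TO_CDN * (1 + TAX)

rslogix500 = cdn_price(1100)            # ~1897.5, quoted as CDN$1900
rslogix5 = cdn_price(3300)              # ~5692.5, quoted as CDN$5692
separate_total = rslogix500 + rslogix5  # ~7590, vs. the original "4k + 4k" = 8000 estimate
bundle = 6900                           # quoted bundle price
# 8000 - 6900 = 1100 off, i.e. about 14% of the original estimate
```

The ~CDN$400 gap (8000 - 7590) and the ~14% bundle figure (1100 / 8000) both fall out of these assumed factors, which is why the "I stand corrected" above still supports the original point.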

My point is still valid. A big chunk of change has gone out the door for two programs that are not inter-compatible. A third must now be purchased for another big chunk of change, and it
doesn't have the features built into the first two.

Rockwell is just as bad as Oracle.
Hello Rockwell. Is anyone there listening??????

Anthony Kerstens P.Eng.

By Curt Wuollet on 18 October, 2001 - 3:38 pm

It'll probably depend on a quota. Once you hit $500.00 per license per year for your share of bugs, they call off the enforcers. After all, a protection racket doesn't work if it's more painful to pay than not. I'm sure they don't care if you install them or not as long as you pay for them.

Regards

cww

By Alex Pavloff on 19 October, 2001 - 5:16 pm

Interesting article -- it really does point out how the big IT vendors make money and the frustration that it engenders in their customers.

Some points:
In the case of IT companies mentioned in the article, the "subscription" model does appear to make sense.

However, I don't think that this model applies that well for those of us in this industry. With a few exceptions (Intellution et al), nearly everything in the automation industry is a piece of physical something that you can hold in your hand (and throw back at the vendor if it becomes too annoying). It's hard to charge a subscription for something that I've already got.

Now, as another poster here complained, Rockwell is charging yearly for their RS* software, in addition to charging a premium for their hardware.
Why can they get away with it? Easy -- they're a big company selling to big companies.

Eason Technology (my company) is small (not completely by choice, of course <g>). As such, we're selling to small-medium companies. Our software is now selling for $99 -- a tiny fraction of the cost that we've invested in it, and we have no yearly fees and have never charged any money for an upgrade. Our software is, of course, completely useless without the hardware we sell. Going to a subscription model makes absolutely no sense -- and the customers at our end of the market wouldn't go for it.

Now, in regards to open source. To make open source work, you need to have smart people, and you need them on staff. It's great if you have the money for that. The reason that open source hasn't taken off in the automation market is because, well, software is one of many elements to a project. Generating enough work for someone good enough to pay for themselves would be difficult. Curt, you're one of the main people behind the LinuxPLC project, but you still do "other stuff" to pay the bills, right? Large companies are those that generally "sponsor" open source projects (Hi, IBM!). As commented in the "Integrator" thread, large manufacturers (Allen Bradley et al) are trying to get into the integration game in order to make some money. What incentive do they have to open source any of their stuff (or sponsor, say, the LinuxPLC) -- they just want to sell their own stuff. IBM, in contrast, is sponsoring many open source projects because then they can turn around and sell their services using this software.

Until something changes, open source development in this industry will be mainly an after-work project. My company will be using Linux & open source software on our next generation of HMI panels, but our product is a lot closer to a computer, than say, some sort of motor. With a few exceptions, hardware doesn't lend itself to the open source paradigm.

In general, I don't think that we can take arbitrary lessons from the IT field and apply them to our field.

Alex Pavloff
Software Engineer
Eason Technology

By Curt Wuollet on 20 October, 2001 - 9:31 am

Hi Alex

List Manager wrote:
>
> ---------- Forwarded message ----------
> From: Alex Pavloff <apavloff@eason.com>
>
> Interesting article -- it really does point out how the big IT vendors make
> money and the frustration that it engenders in their customers.
>
> Some points:
> In the case of IT companies mentioned in the article, the "subscription"
> model does appear to make sense.
>
> However, I don't think that this model applies that well for those of us in
> this industry. With a few exception (Intellution et al), nearly everything
> in the automation industry is a piece of physical something that you can
> hold in your hand (and throw back at the vendor if it becomes too annoying).
> It's hard to charge a subscription for something that I've already got.
>
> Now, as another poster here complained, Rockwell is charging yearly for
> their RS* software, in addition to charging a premium for their hardware.
> Why can they get away with it? Easy -- they're a big company selling to big
> companies.
>
> Eason Technology (my company), is small (not completely by choice, of course
> <g>). As such, we're selling to small-medium companies. Our software is
> now selling for $99 -- a tiny fraction of the cost that we've invested in
> it, and we have no yearly fees and have never charged any money for an
> upgrade. Our software, is, of course, completely useless without the
> hardware we sell. Going to a subscription model makes absolutely no sense
> -- and the customers at our end of the market wouldn't go for it.
>
> Now, in regards to open source. To make open source work, you need to have
> smart people, and you need them on staff.

Yes and no. Yes, Open Source would be invaluable to integrators with the expertise to add or modify a product to get those functions that you need, but no one has thought of yet. And contributing such work back to the public pool would build very rich functionality and broad application quickly.

But no, simply being Open Source doesn't imply any extra skills or talent. If the products you use today suddenly released their source, would that make them any harder to use? I don't think so. Implicit in the author's argument is that the Open Source be self supported. Community support would make up the difference. This list is an example of how well that can work. Especially since many people on the list get the impression that their current tools are more or less self supported ;^). But a lot of Open Source is used exactly the same as closed source, by installing the binaries and leaving the source on the CD as insurance. I use many OSS products every day that I have never read the source for. It's nice that I could fix them if I want to, and I can never be left hanging or forced to upgrade, and I can maintain them indefinitely, but I am not interested in improving those outside my interests.


> Its great if you have the money
> for that. The reason that open source hasn't taken off in the automation
> market is because, well, software is one of many elements to a project.
> Generating enough work for someone good enough to pay for themselves would
> be difficult. Curt, you're one of the main people behind the LinuxPLC
> project, but you still do "other stuff" to pay the bills, right?

Yes, I do captive automation with OSS for my employer. For the type of work we do, integrating existing machines and equipment, proprietary
off the shelf technology is not cost competitive because by design, no two machines or closed systems can interoperate. It is possible to do
it OTS in some cases but you end up buying many adapters, bridges and general kludges which eat up much more time and money than custom
programming with generic hardware. Some new work is done with PLC's, etc.; some is done with Linux. PLC's are very good for a few things. For most other things Linux requires a lot less hardware and time. This is a tremendous competitive
advantage that we have almost exclusively to ourselves because most folks are still trying to make PLC's do everything.

Large
> companies are those that generally "sponsor" open source projects (Hi,
> IBM!). As commented in the "Integrator" thread, large manufacturer (Allen
> Bradley et al) are trying to get into the integration game in order to make
> some money. What incentive do they have to open source any of their stuff
> (or sponsor, say, the LinuxPLC) -- they just want to sell their own stuff.
> IBM, in contrast, is sponsoring many open source projects because then they
> can turn around and sell their services using this software.

For the great majority of the readers of this list, the money _is_ in services. You typically don't get rich selling AB or GEF hardware or
software. What you get paid for is expertise and solutions. That's why we aren't looking for AB or GE or even IBM as sponsors. We want you to join us in building stuff that you can use to build better solutions for higher profitability. The big guys can take care of themselves.

> Until something changes, open source development in this industry will be
> mainly an after-work project. My company will be using Linux & open source
> software on our next generation of HMI panels, but our product is a lot
> closer to a computer, than say, some sort of motor. With a few exceptions,
> hardware doesn't lend itself to the open source paradigm.
>
> In general, I don't think that we can take arbitrary lessons from the IT
> field and apply them to our field.

No, not generally but OSS and generic hardware fit perfectly in the problem areas that the Big guys not only don't address but have in fact created. Like communications, integration, networking, computation and any area that requires flexibility and interoperation. That's
plenty of work to keep me busy and we're a small shop. It's the most useful tool in my toolbox. Doing what a PLC does isn't that much to add.
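Curt's claim that "doing what a PLC does isn't that much to add" rests on the classic PLC scan cycle: read inputs, solve the logic, write outputs, repeat. A minimal sketch of that loop, purely illustrative (all names are hypothetical; a real soft PLC such as the LinuxPLC project would talk to actual I/O drivers and run the cycle continuously):

```python
# Minimal sketch of a PLC-style scan cycle. Hypothetical names
# throughout -- real I/O would go through a fieldbus or I/O card driver.

def read_inputs():
    # Placeholder input image: a start and a stop pushbutton.
    return {"start_pb": True, "stop_pb": False}

def solve_logic(inputs, state):
    # Classic start/stop seal-in rung:
    #   motor = (start OR motor) AND NOT stop
    state["motor"] = (inputs["start_pb"] or state["motor"]) and not inputs["stop_pb"]
    return state

def write_outputs(state):
    # Map internal state to the output image.
    return {"motor_contactor": state["motor"]}

def scan_once(state):
    # One pass of the scan cycle: inputs -> logic -> outputs.
    inputs = read_inputs()
    state = solve_logic(inputs, state)
    return write_outputs(state)

outputs = scan_once({"motor": False})  # start pressed, stop clear: motor latches on
```

The seal-in rung keeps the motor running after the start button is released and drops it out when stop is pressed, which is the behaviour the loop would show over repeated scans with changing inputs.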

Regards

cww

--
Free Tools!
Machine Automation Tools (LinuxPLC) Free, Truly Open & Publicly Owned
Industrial Automation Software For Linux. mat.sourceforge.net.
Day Job: Heartland Engineering, Automation & ATE for Automotive
Rebuilders.
Consultancy: Wide Open Technologies: Moving Business & Automation to
Linux.


By Michael Griffin on 20 October, 2001 - 9:32 am

At 16:35 18/10/01 -0400, Alex Pavloff wrote:
<clip>
>Now, as another poster here complained, Rockwell is charging yearly for
>their RS* software, in addition to charging a premium for their hardware.
>Why can they get away with it? Easy -- they're a big company selling to big
>companies.

They can charge a high price because they are perceived as having dominance in a particular sector of the market. People are willing to pay
extra for AB hardware partly because they believe it is easy to find lots of companies (integrators, etc.) who will have the software and so can support them. Integrators are willing to pay high prices for the software because
they believe there are lots of AB PLCs being installed in machines, and so they can pick up lots of work programming them.
However, this whole positive cycle can go disastrously (for AB) into reverse if people start to perceive any weakness in the company's market position. Integrators will become reluctant to continue investing in the software if they think there will be less AB hardware around to make money off of (and so pay for the software) in future. Customers will become
reluctant to pay a premium for AB hardware if they think it is going to become less easy to find integrators who have the software to work with it. The two trends then feed off each other to continue the decline.
Once this sort of downward spiral begins, the only thing Rockwell can do is cut prices on everything, and I don't think they have the cost
structure to let them do that.


>Eason Technology (my company), is small (not completely by choice, of course
><g>). As such, we're selling to small-medium companies. Our software is
>now selling for $99 -- a tiny fraction of the cost that we've invested in
>it, and we have no yearly fees and have never charged any money for an
>upgrade. Our software, is, of course, completely useless without the
>hardware we sell. Going to a subscription model makes absolutely no sense
>-- and the customers at our end of the market wouldn't go for it.

For a small company, inexpensive software makes a lot of business sense because by lowering the software investment cost, it lowers the risk
anyone takes in trying out one of your hardware products. At $99, it is easy to justify buying it for a single project. The "loss leader" is a well accepted approach in marketing.


>Now, in regards to open source. To make open source work, you need to have
>smart people, and you need them on staff. Its great if you have the money
>for that.
>The reason that open source hasn't taken off in the automation
>market is because, well, software is one of many elements to a project.
>Generating enough work for someone good enough to pay for themselves would
>be difficult.

I don't see that open source necessarily means "free". You might still have to pay someone (i.e. a consultant) some money to do something for
you. I could theoretically build my own "open controller" using bits and pieces of hardware that I bought from various places, and software that I downloaded off the internet (once Mr. Wuollet and friends finish the LinuxPLC). However, why would I do this, other than for the educational value? Whatever it is I may be good at, it isn't piecing together these kinds of systems and then testing and documenting them.
It would make much more sense to buy a standard tested system (hardware plus software) from someone who is spreading their engineering
cost over a much larger number of systems than I would ever use in a lifetime. I would rather simply pull the complete controller out of the box
and install it and load in my PLC program.
Since I could buy equivalent systems from several different companies I would still have the advantage of more choice as to whom I deal
with. I would also enjoy a lower risk since different suppliers would be offering essentially similar and compatible systems. I could switch between them if I grew dissatisfied with one, or if one of them dropped out of the business.
A supplier would in turn have the advantage that this lower risk would make customers more willing to do business with them to begin with. This would be a quicker and easier way for a new entrant to get into the PLC
business than creating their own system entirely from scratch.


>Curt, you're one of the main people behind the LinuxPLC
>project, but you still do "other stuff" to pay the bills, right? Large
>companies are those that generally "sponsor" open source projects (Hi,
>IBM!). As commented in the "Integrator" thread, large manufacturer (Allen
>Bradley et al) are trying to get into the integration game in order to make
>some money. What incentive do they have to open source any of their stuff
>(or sponsor, say, the LinuxPLC) -- they just want to sell their own stuff.
>IBM, in contrast, is sponsoring many open source projects because then they
>can turn around and sell their services using this software.

However, large equipment OEMs could "sponsor" open source projects because their main business is selling machines, and they don't make money from selling software. Some of these companies have their own proprietary
systems, so it isn't as if this software field is entirely new to them. Customers may be more willing to accept an "open source" system than one
which is proprietary to a single machinery OEM.

>Until something changes, open source development in this industry will be
>mainly an after-work project. My company will be using Linux & open source
>software on our next generation of HMI panels, but our product is a lot
>closer to a computer, than say, some sort of motor. With a few exceptions,
>hardware doesn't lend itself to the open source paradigm.
<clip>
To give you an example that you could relate to, if you decided that you wanted to include PLC functionality into one of your displays (I'm not suggesting that you should - but suppose you wanted to), you could create a
new product (combined MMI panel and PLC) without undertaking a big software development project if most of the software you needed existed as open source.
When you sold these to the public, you could point out how the PLC part of the system was a standard supported by other companies and not some odd system of your own. This would give the product greater credibility and acceptance among customers.

Notice how in the scenario I have outlined here your customers would be using "open source" software without any of them downloading anything off the internet (except perhaps for your sales brochures). I think that the idea that everyone and his dog would be downloading software off the internet to build their own controllers one by one is a red herring. It
doesn't make business (or technical) sense to me.


**********************
Michael Griffin
London, Ont. Canada
**********************

By David McGilvray on 20 October, 2001 - 9:47 am

Thursday, October 18, 2001 10:21 PM Michael Griffin wrote:
<big clip>

> I don't see that open source necessarily means "free". You might
> still have to pay someone (i.e. a consultant) some money to do something for
> you. I could theoretically build my own "open controller" using bits and
> pieces of hardware that I bought from various places, and software that I
> downloaded off the internet (once Mr. Wuollet and friends finish the
> LinuxPLC). However, why would I do this, other than for the educational
> value? Whatever it is I may be good at, it isn't piecing together these
> kinds of systems and then testing and documenting them.
> It would make much more sense to buy a standard tested system
> (hardware plus software) from someone who is spreading their engineering
> cost over a much larger number of systems than I would ever use in a
> lifetime. I would rather simply pull the complete controller out of the box
> and install it and load in my PLC program.
> Since I could buy equivalent systems from several different
> companies I would still have the advantage of more choice as to whom I deal
> with. I would also enjoy a lower risk since different suppliers would be
> offering essentially similar and compatible systems. I could switch between
> them if I grew dissatisfied with one, or if one of them dropped out of the
> business.
> A supplier would in turn have the advantage that this lower risk
> would make customers more willing to do business with them to begin with.
> This would be a quicker and easier way for a new entrant to get into the PLC
> business than creating their own system entirely from scratch.
<clip>

Michael,
Although I can't speak to the LinuxPLC product specifically, an open source system need not necessarily be different from a proprietary alternative. I expect the LinuxPLC, when finished, will be an open source option similar to the proprietary automationX. The scenario you describe of "building" your own controller with "bits and pieces of hardware" and "downloaded software" grossly misrepresents the situation for server based systems. First, the fact that you have choice is generally considered a good thing. Second, there is no more "building" involved, and conceivably substantially less "building" (certainly the case with automationX - try up to 80% less), compared to conventional PLC's - it's generally called configuration. Third, server based systems typically have far less hardware - resulting in much less costly installation & maintenance.

Look to the IT industry for inspiration. Once upon a time, IBM dominated the industry selling complete systems, including hardware, software, etc., all from Big Blue. Today, the landscape is substantially different, with a huge amount of choice covering virtually every conceivable feature and function set. And best of all, the final solutions are much better and much less costly.

David McGilvray
M&R Automation


By Michael Griffin on 22 October, 2001 - 12:06 pm

David McGilvray wrote:
<clip>
>The scenario you describe of "building" your
>own controller with "bits and pieces of hardware" and "down loaded software"
>grossly misrepresents the situation for server based systems.
<clip>

I don't think I made any misrepresentations about server based systems in general or your product in particular, since I didn't mention them. I was referring to a normal PLC (especially small ones) used in typical small machine applications.


>Look to the IT industry for inspiration. Once upon a time, IBM dominated
>the industry selling complete system, including hardware, software, etc all
>from big blue. Today, the landscape is substantially different with a huge
>amount of choice covering virtually every conceivable features and function
>set. And best of all, the final solutions are much better and much less
>costly.
<clip>

Alright - I suggest we do look at the IT industry. How many people (non-hobbyists) build their own computers out of parts they found in surplus stores and software they scrounged up on the internet? I'm sure you could build a perfectly good system that way (if you knew what you were doing), but is it economical? Most people would rather just buy a computer.
If I am going to buy an industrial controller, I don't want to have to figure out which board works with what driver. I am willing (because of cost) to pay someone else to do that.
This is particularly important with low cost systems where there is very little margin to spend on unexpected problems. This may be less
significant on larger systems.


**********************
Michael Griffin
London, Ont. Canada
**********************

David McGilvray:
> >The scenario you describe of "building" your own controller with "bits
> >and pieces of hardware" and "down loaded software" grossly misrepresents
> >the situation for server based systems.
...
> >Look to the IT industry for inspiration.
...

Michael Griffin:
> Alright - I suggest we do look at the IT industry. How many people
> (non-hobbyists) build their own computers out of parts they found in
> surplus stores and software they scrounged up on the internet?

Exactly. The same thing would apply to building your own open-source controller - few would do it, and then only if they have special needs.

> If I am going to buy an industrial controller, I don't want to have to
> figure out which board works with what driver. I am willing (because of
> cost) to pay someone else to do that.

Yup. So you pay your local VAR, or a national chain if you prefer, to put an industrial controller together for you from standard parts. Most likely you'll get one of the standard configurations off the shelf, maybe with minor variations.

Because anyone has access to the parts and the software, though, you'll have several sources[1] - which is good for price, but more importantly
good for risk management. If your VAR goes out of business, you can get substantially the same product from next door.

Proprietary software, by its nature, is single-sourced. You should always think twice before using a single-sourced component.

Jiri

[1] in the worst possible scenario, the alternative source will be doing it yourself, which would be expensive, but should be rare and in any case will be better than nothing.
--
Jiri Baum <jiri@baum.com.au> http://www.csse.monash.edu.au/~jirib
MAT LinuxPLC project --- http://mat.sf.net --- Machine Automation Tools

By David McGilvray on 23 October, 2001 - 3:26 pm

Michael,

Yes, I agree, most if not all consumers are better off buying a complete system. The fact that choice is available, however, I would argue,
contributes to greater competition and leads to better products at a lower price. And, of course, if some of those components are free to the vendor, the overall system price can be expected to be that much lower.

For those hobbyists, or systems integrators, to put things into perspective: downloading software, or purchasing it on CD ("scrounged" from your local sales rep, distributor, or the other normal channels through which any other product is procured), and buying a computer separately (I suggest buying new rather than going the surplus route), is about the same effort as loading any popular word processor package on separately purchased PC hardware (or similar to the effort required to integrate a PLC, MMI and PC system). That is, not much.

David McGilvray
M&R Automation

By Johan Bengtsson on 29 October, 2001 - 11:20 am

So if you buy a PLC and you get the source code for the PLC's internal operation with it (or even if you didn't get it, but got a link to where you could download it if you liked), would that PLC be worth less and be harder for you to use because of that?


/Johan Bengtsson

----------------------------------------
P&L, Innovation in training
Box 252, S-281 23 Hässleholm SWEDEN
Tel: +46 451 49 460, Fax: +46 451 89 833
E-mail: johan.bengtsson@pol.se
Internet: http://www.pol.se/
----------------------------------------

By Michael Griffin on 29 October, 2001 - 12:30 pm

Are you sure you were replying to the correct message? I believe the discussion was on the cost of assembling, configuring, testing, and
documenting your own controller versus buying one with that already done (e.g. as with the PLC you mentioned).

My original message (from which Mr. McGilvray quoted) made the point that someone using a controller which has open source software would probably be better off buying an off the shelf system rather than putting their own together (at least for typical PLC type applications). This means there should be a market for OEMs (although not necessarily traditional PLC manufacturers) to build these systems. I'm sure I could do it myself, but why would I want to - or rather, why would anyone want to pay me to do so?

Perhaps in the types of industries which Mr. McGilvray serves (mining, smelting, and forest products if I recall), the economics may be
different. However, for most of my applications I need to be able to flip open a catalogue and pick out a controller by I/O and feature count and
have an electrician install it after it arrives. The only software I should have to worry about is writing the PLC program to make the machine go. It
just isn't economic to spend any more time on it.

The last time I bought a car, I just went to the dealer and bought a car. I didn't shop around at the wreckers' to find the parts to make my own. I think that most people would do the same.


**********************
Michael Griffin
London, Ont. Canada
**********************

Johan Bengtsson:
> >So if you buy a PLC and you get the source code for the PLCs internal
...
> >would that PLC be worth less and harder for you to use because of that?

Michael Griffin:
> I believe the discussion was on the cost of assembling, configuring,
> testing, and documenting your own controller versus buying one with that
> already done (e.g. as with the PLC you mentioned).

I believe someone was conflating Open Source with build-it-yourself, which is what Johan was trying to correct.

The two are mostly unrelated.

> My original message (from which Mr. McGilvray quoted) made the point that
> someone using a controller which has open source software would probably
> be better off buying an off the shelf system rather than putting their
> own together (at least for typical PLC type applications).

Definitely.

> This means there should be a market for OEMs (although not necessarily
> traditional PLC manufacturers) to build these systems. I'm sure I could
> do it myself, but why would I want to - or rather, why would anyone want
> to pay me to do so?

The option to do so will make the OEMs more responsive, even if you never take it up. And, of course, there are the rare times when you do need to, or when you need to make a minor alteration to the supplied package.

> The last time I bought a car, I just went to the dealer and bought a car.
> I didn't shop around at the wreckers' to find the parts to make my own.
> I think that most people would do the same.

No, but neither did the car have a big lock on the hood that only authorized servicepeople can open.

You *can* open the car yourself to check the oil (or change the cylinder head gasket), and so can the corner mechanic. You probably don't, but the
mere option makes the cost of service much lower and the quality better.


Jiri
--
Jiri Baum <jiri@baum.com.au> http://www.csse.monash.edu.au/~jirib
MAT LinuxPLC project --- http://mat.sf.net --- Machine Automation Tools
tlhIngan Hol jatlhlaHchugh ghollI' Hov leng ngoDHommey'e' not yISuD
Never bet on Star Trek trivia if your opponent speaks Klingon. --Kung Foole

By David McGilvray on 29 October, 2001 - 1:56 pm

The fact that a product is open source does not make it inherently different from a proprietary alternative, except, of course, the cost. An off the shelf PLC may be proprietary or open source. Of course, from a practical point of view, an open source PLC option would still require proprietary hardware (PC and I/O). My expectation is that the OS PuffinPLC, when completed, will be similar to automationX: a software based system - one open source and the other proprietary - that is fully functional, tested, documented, etc. In fact, software based systems are typically lower cost alternatives to the traditional options.


David McGilvray
automationX
www.mnrcan.com

By Michael Griffin on 31 October, 2001 - 12:52 pm

David McGilvray wrote:
<clip>
>The fact that a product is open source does not make it inherently different
>from a proprietary alternative, except, of course, the cost.

I agree that in either case the products must be judged upon their own merits, not upon their provenance. I think, however, that it is a mistake to concentrate too much upon the cost (or rather the price) of open source software. In any project there are many costs, and they all must be
considered together.

>An off the
>shelf PLC may be proprietary or open source. Of course, from a practical
>point of view, an open source PLC option would require proprietary hardware
>(PC and I/O).

I believe my original example went much further and postulated open source software running on entirely proprietary hardware in an off the shelf combination. Something which really hasn't been mentioned much is that if a variety of different hardware ran the same "soft logic" system, the end customer would be able to separate the issue of what hardware to use from
what software to use.

The "vendor lock-in" which we sometimes hear about comes more from software than from hardware. This is why for example Siemens went to such effort to keep the basic software architecture of the S7-300/400 series broadly similar to the S5 series even though the hardware is completely different. The S7-300/400 would probably have been a better PLC if it had made a cleaner break with the past, but Siemens wanted to make it easier for S5 customers to switch to the S7-300/400 than to something else. (And no
doubt some customers appreciated this continuity as well.)

>My expectation is the OS PuffinPLC, when completed, will be similar to
>automationX : a software based system - one OS & the other proprietary -
>that is a fully functional, tested, documented, etc. In fact, software
>based systems are typically lower cost alternatives to the traditional
>alternatives.
<clip>

I come to a slightly different conclusion as to application area, although admittedly on fewer facts than you have. My own impression is that AutomationX is oriented more towards larger systems, while the "PuffinPLC" (or MAT, or whatever) will concentrate on smaller ones. I see it as a difference in emphasis rather than potential. It also means that I don't see the two as direct competitors.

With smaller systems, there is a strong emphasis on minimising the engineering man hours and on buying pre-engineered sub-systems (e.g. - the controller). This is why I see a market for "off the shelf" controllers, even with ones which use open source software.

The situation may be different with very large systems - but I don't have much experience in that field to base an opinion on.


**********************
Michael Griffin
London, Ont. Canada
**********************

By David McGilvray on 1 November, 2001 - 9:24 am

Michael,

ax device, a new complementary product to automationX, is the hardware independent "soft logic" you describe. It is fully qualified and tested on VIPA hardware, with announcements of major vendors' hardware compatibility to follow shortly.

David McGilvray
automationX
www.mnrcan.com

This is true. We have found engineers and techs tend to group soft logic products together and make assumptions as to how they work. In our case they have a difficult time understanding that we have eliminated the requirement for hardware PLCs or controllers to run soft logic.

If the system is truly software based - such as automationX - the hardware is completely decoupled. The only way to do this, and to bring real computing power to bear on your automation project, is to execute the soft logic on a server. The hardware may be swapped out, and the software stays the same. There are no strings attached for using hardware; even custom interfaces are inexpensive. We extend this concept further with aXDevice, as Dave has mentioned.

I don't know why we focus so much on hardware - PLCs, controllers, etc. - and give them demanding computing tasks when they are such crappy computers. Think about it - it doesn't make sense. The most frequent argument we get
is reliability, but this is a perception. We've seen on this list that Siemens and Allen Bradley PLCs crap out more often than a properly tuned
server. And of course for computing power, PLCs aren't even close.

When you work with a company you get, by nature, some sort of "vendor lock-in". But the term really comes from lack of options when faced with a decision to interface to, add to, or upgrade an automation system. For example, a DeltaV system works best with DeltaV controllers. A DCS vendor tries to sell their products at premium prices, or charges $30,000 for a little PLC interface - this is what gets people upset.

Paul Jager
CEO
www.mnrcan.com


By Joe Jansen/ENGR/HQ/KEMET/US on 2 November, 2001 - 4:06 pm

Sorry, but this is absolutely untrue. I can brag up a server running 2, 3, 4 or more years, and that is considered a very good system. PLC's run for decades without downtime. How many of us have systems still running on the "new at the time" PLC-2? Stating that a PLC will crap out more often than a PC is an absolute fallacy.

--Joe Jansen

Paul Jager wrote:

>... The most frequent argument we get is reliability, but this is a
perception. We've seen on this list that Siemens, Allen Bradley PLC's
crap out more often than a properly tuned server. And of course for
computing power PLC's aren't even close....<

By Curt Wuollet on 2 November, 2001 - 4:28 pm

Hi Joe

Sounds like we have to switch brands.
The GE PLCs we're using are good but not that good. They, in aggregate, are running about as well as the Linux boxes or a little worse. But the Linux boxes require maintenance, though seldom downtime. Once I get rid of the fans and hdd's...?

Regards

cww

--
Free Tools!
Machine Automation Tools (LinuxPLC) Free, Truly Open & Publicly Owned
Industrial Automation Software For Linux. mat.sourceforge.net.
Day Job: Heartland Engineering, Automation & ATE for Automotive
Rebuilders.
Consultancy: Wide Open Technologies: Moving Business & Automation to
Linux.

I have a new name for the PLC - it's a Pathetic Little Computer. Imagine any critical business or operation - say a government service, a highly popular web site, banking services, etc. - running on PLCs. Obviously it can't be done. Yet from a computing standpoint this is what we blindly and faithfully do for industrial applications. This approach is hurting the industrial business. There is a lack of available computing power, a lost opportunity for data distribution and computational flexibility, and finally a high total cost of ownership.

It's not a simple question of Server more reliable than PLC, etc. My statement is that a server pair can and does deliver the required
reliability with benefits far exceeding what a PLC based platform can provide. I should point out that proper server SOFTWARE is a critical
element in the non-PLC equation. This market has some top solutions and is developing at a rapid pace.

In my travels to various sites I see a lot of scared engineers, with a personal fear of innovation. As a group we are not very open-minded. Those who combine business savvy and technical prowess are rare. Rarer still are those who can interface with executive management.

Yes, the PLC of course has fewer moving parts than today's servers. Taken at face value the PLC might eek out a server for uptime, but you are costing your company a hefty price overall with such loyalty to a Pathetic Little Computer.

Paul Jager
CEO
www.mnrcan.com

By Joe Jansen/ENGR/HQ/KEMET/US on 8 November, 2001 - 10:52 am

Wow! This has gotten almost funny! After reading your post, I decided to head over to your website in hopes of finding something with a bit more substance. What I found was even more amusing. Your gleeful renaming of the PLC and your outright hostility to them are more than apparent. I guess the reason this is funny is that while I was still in school, I was being told by people not to bother learning PLC's, since they would be gone in a year or less anyway. That was over a decade ago. I seem to
detect the same attitude from your site: PLC's are gone, and anyone who buys one is obviously clueless. I find nothing on your website that is
beyond nine months old. Have you been around longer than that?

OK. Now to address the posting:

---
I have a new name for the PLC- it's a Pathetic Little Computer.
---

Cute.......

---
Imagine any critical business or operation say a government service, a highly popular web site, banking services, etc. - running on PLC's. Obviously it can't be done.
---

Gee, you mean I can't load Oracle onto my Micrologix? Boy, this thing *must* be a piece of garbage! This is simply a case of right tool / right job. What about a 'mission critical' medical process? Or a food processor? I can guarantee that anyone who needs absolute uptime isn't running PC control.

---
Yet from a computing standpoint this is what we blindly and faithfully do for industrial applications. This approach is hurting the industrial business. There is a lack of available computing power, lost opportunity of data distribution and computational flexibility, and finally the total cost of ownership.
---

A properly deployed system does not suffer from any of these problems. You cannot take a 10 year old installation, and compare it to a PC based
installation today, and then point out the differences. Where was -your- system 10 yrs ago? The point is that by properly combining the
technologies, be they PLC, PC, Network, HMI, etc. you can get a system that is far more robust than any one of these alone. Just as I would not try to use lights and buttons as the sole UI on a system of any complexity, you cannot honestly tell me that it is cost effective and justifiable to use a PC on *every* control system.

---
It's not a simple question of Server more reliable than PLC, etc. My statement is that a server pair can and does deliver the required
reliability with benefits far exceeding what a PLC based platform can provide. I should point out that proper server SOFTWARE is a critical
element in the non-PLC equation. This market has some top solutions and is developing at a rapid pace.
---

You were the one that made the reliability statement originally, I believe. What exactly do you now mean by "server pair"? Are you suggesting that the only way to make your system more reliable is by having two of them? Is that cost effective against a Micrologix 1000 with 16 I/O points? Remember, you are the one saying that PLC's are useless. I am not saying PC's are useless, just not the best solution for every problem.

---
In my travels to various sites I see a lot of scared engineers, with a personal fear of innovation. As a group we are not very open-minded. Those that combine business savvy and technical prowess are rare. Even rarer still are those that can interface with management executive.
---

I think you are mistaking 'fear of innovation' for fear of having to support some PC based nightmare whose rapid pace of development means
patches, upgrades, and bugs. The reason that industrial controls are not on the bleeding edge is that we have no time for firmware and control software that needs constant patching. I have never experienced a PLC processor going into the equivalent of a 'blue screen'. (Yes, I realize that PLC's have no screen. DUH! What I mean is that the processor doesn't just go out to lunch because an I/O driver was written wrong and created a memory leak, or whatever....)

---
Yes the PLC of course has less moving parts than today's servers. Taken for face value the PLC might eek out a server for uptime, but you are costing your company a hefty price overall with such loyalty to a Pathetic Little Computer.
---

"Might eek out"? ROFL! Let's see: at the last plant, we had to redesign the RSView apps and PLC programs so that the process could continue while
the PC rebooted, since it went down about every 4 to 6 months. RSView on NT. No extra software, no games, all service packs applied, blah blah
blah. It was a noisy environment that the PC was in. But guess what? Sometimes that is the environment that you get. Also, I am not costing my company anything by using the proper tool for the job. I guarantee that what I am doing with a PLC, you cannot do with a PC for the same price and same capabilities. And what I *do* use a PC for is the best use of a PC in *our* production environment.

Looking at the website, specifically products.phtml, that looks like a lot of computing power. I notice that you have a hot standby machine in the loop. Is that to indicate that reliability is defined as redundancy? Also, you make the statement that:

"Field components are typically accessed via (E)ISA or PCI boards inside the control servers, an integrated Soft PLC enabling many different
combinations. Typical cycle times (to perform all the control tasks and send data to and from the field devices) are from 20 to 100 milliseconds"

A couple things on that. One, I have noticed that the (E)ISA bus is disappearing from new PC's. How do you intend to support systems that still rely on (E)ISA cards to communicate? Or do those get dropped? What if my process needs a faster scan time? Most of my stuff runs in the 5 to 10
msec time frame.

Lastly, on the website, I read your tirade against PLC's that is disguised as a FAQ.

You state:

"WARNING: For all of you out there wiring your 1761-NET-AIC devices to these terminals to interface to .. a SLC-500/01/02/03 processor beware. If you short the terminals the program in the processor is lost. "

Since most times this is a one-time connection, I feel comfortable comparing this to the time I was installing a CD-ROM drive at home, and
mistakenly plugged the power connector in upside down. Oops. Guess that is comparable to not paying attention and shorting the terminals together. Unfortunately, I couldn't just reload the program and go again. My PC did not have adequate protection against me being an idiot, and therefore I wound up buying a new CD-ROM drive, and my power supply went out a month later. I believe the (PC) has a firmware bug. (Paraphrasing your website.)

Of course, you answer your own imagined problem on the same page:

"(Of the) SLC memory corruption instances there were only a few in which a fat green wire to a solid ground didn't solve the problem."

Are you suggesting by comparison that I can run my PC solutions without a ground terminal?

And, of course:

My (P)athetic (L)ittle (C)omputer has never gotten a virus.
My (P)athetic (L)ittle (C)omputer never has operators loading games on it.
My (P)athetic (L)ittle (C)omputer never stops running because of an I/O driver getting corrupt.
My (P)athetic (L)ittle (C)omputer never has a hard drive crash.
My (P)athetic (L)ittle (C)omputer has near-zero boot time requirements.
My (P)athetic (L)ittle (C)omputer can keep a complete application backup in an EEPROM for when the program does get dumped. Of course, my (P)athetic (L)ittle (C)omputer never dumps the program when handled properly.
My (P)athetic (L)ittle (C)omputer can continue to run without a screen.
My (P)athetic (L)ittle (C)omputer can continue to run without a keyboard.
My (P)athetic (L)ittle (C)omputer can continue to run without a mouse.
My (P)athetic (L)ittle (C)omputer doesn't rely on hardware that is revving every 6 months. I can find a (P)athetic (L)ittle (C)omputer **exactly**
like the 5 year old one that my forklift driver just speared on his fork.

And for the record, although control.com reserves the right to post all of the discussions here on their website, I want to make sure that -none- of
my comments in this or any other posting get re-posted on your or any affiliated companies website.

--Joe Jansen
Controls Engineer

The opinions expressed here are mine, not my company's, blah blah blah.

Paul Jager wrote:

<clipped>
>We've seen on this list that Siemens, Allen Bradley PLC's crap out more
often than a properly tuned server.
<clipped>

I must have missed that posting. Who compared servers to PLC's and said they have fewer failures?

Sincerely,

Mark Wells
President
Runfactory Systems Inc.
http://www.runfactory.com
1235 Bay Street, Suite 400
Toronto, Ontario, Canada M5R 3K4
Ph. 416-934-5038
Fax 416-352-5206

By Curt Wuollet on 1 November, 2001 - 9:22 am

Hi Dave.

This can be corrected too. See my other post.

cww

By Curt Wuollet on 1 November, 2001 - 9:24 am

Hi All.

I have been watching closely for suitable hardware for direct replacement of, say, an AB PLC with LPLC. SoftPLC sells some suitable hardware called Tealware. Unfortunately, it's awful spendy. Right now, you really can't buy suitable stuff off the shelf. This is strange, kinda like
the chicken and the egg problem. Excellent embedded class hardware exists in usable form factors, but no one is packaging it for this market. I have been trying to generate enthusiasm for doing just that. If this can be accomplished, then I would agree on the hardware point. Right now, doing PC based control suffers under the additional burden of hardware that is more like a PC than a PLC. There is simply no reason for this,
as a very complete and highly integrated PC can be had in PC104 form. This could be packaged for DIN-rail mount, pluggable modules, backplane
bus, etc. For the time being, "Off The Shelf" means buy a proprietary PLC.

What I would like to see is easily doable and would go a long way toward making PC control just as easy a choice. The SoCs have more than
enough features and hardware. I have demonstrated that the discrete IO can be done economically, and analog is even being included on the PC104 boards.
The architecture could conceivably be the standard PC architecture with an ISA or PCI backplane. You would then have what amounts to an off the shelf product that is standard to the extent that it runs the same software and is programmatically identical to a desktop. If you think of it as simply a stretched out PC104 stack, it's obvious this is workable.

This would be a great first step. To achieve lower costs would require a simpler bus structure, exclusion of peripherals not needed for control, and simpler IO modules. This is where things get too proprietary. The solution would be for a few like minded individuals to get together and publish, under an Open and Free License, a reference design: schematics, netlists, Gerbers for the boards, bill of materials, everything needed to make a standardized platform. If this were done and the corresponding drivers published under the GPL,
along with say, a LPLC port, a QNX port (and maybe a WinCE port), it might be possible to get more than one company to make automation hardware
that is open and truly compatible. No one can argue that this would not be a giant leap forward for the industry. Everyone wins if the volumes can be multiplied by standardizing. The talent to accomplish this is reading this list. All it takes
is the desire to do things right. A coalition of sponsors for the very modest costs would be helpful also. I could do this, but that's the wrong way. If _we_ do this as a community effort it would be far more likely to be accepted and less likely to be exploited or derailed.

> >> The last time I bought a car, I just went to the dealer and bought a
> >> car. I didn't shop around at the wreckers' to find the parts to make my
> >> own. I think that most people would do the same.
> >
> >No, but neither did the car have a big lock on the hood that only
> >authorized servicepeople can open.
> >
> >You *can* open the car yourself to check the oil (or change the cyliner
> >head gasket), and so can the corner mechanic. You probably don't, but the
> >mere option makes the cost of service much lower and the quality better.

And it lets those of us who need to work under the hood to afford to drive continue to do so. My 91 Dodge needs to go another 100,000 miles. This is not economically feasible unless I do the work myself. There is a strong analogy here to the folks who have unsupported proprietary software. If it were open, it's not a problem if you can read the book.

> I have actually heard that there are new cars today having this
> big lock on the hood. I won't buy one if I have any possibility
> to avoid it.

Exactly. Why can't people see this logic with software? Of course, you could get out your big drill as soon as you get home.

> And no, I have not tried to repair my current car myself and I have
> always serviced it at the authorised shop. Mostly because that
> currently is one of the better options considering cost and result.
> I would still not buy a car if I can avoid it where I am required
> to have them make the service.

Well, if you weren't in Sweden, I'd say stop over and we could drink beer and fix it right up. That's what motorheads do here. It's a shame automation people don't collaborate and share. I wonder why?

Regards

cww

By Alex Pavloff on 29 October, 2001 - 12:07 pm

No, but if "the source is available" is used as an excuse to skimp on documentation, then it is worth less and harder to use.

I think I speak for many people on this group when I say "Give me a big thick paper manual."

Alex Pavloff
Software Engineer
Eason Technology

Alex Pavloff:
> No, but if "the source is available" is used as an excuse to skimp on
> documentation, then it is worth less and harder to use.

Obviously. We at the MAT project are well aware that this is our greatest weakness at this point, and the biggest task we must address before a full
release.

> I think I speak for many people on this group when I say "Give me a big
> thick paper manual."

The PDF file is at 32 pages right now. I guess 10-20x that much would be about the right thickness?


Jiri
--
Jiri Baum <jiri@baum.com.au> http://www.csse.monash.edu.au/~jirib
MAT LinuxPLC project --- http://mat.sf.net --- Machine Automation Tools

By Jerry Miille on 29 October, 2001 - 1:52 pm

I for one, don't want the big thick manual. Give me a CD with files in .PDF format. Let me decide what to print, if anything. The later .PDF
readers have a powerful search capability built in. Makes it very easy to find what you need and get on with the work. I can read .PDF files on any
machine that I have.

The .PS files are another story. I do have a reader, but it is only on one machine. These files are hard to print in a windows environment.

I agree with the comment on application specific documentation. Also, anything that requires the installation of a "reader" program is no good.

Jerry Miille

By Johan Bengtsson on 29 October, 2001 - 1:56 pm

You definitely have a point there.

Could we agree that the following scale (from best to worst) defines the value regarding this point:

1. Good documentation with sources
2. Good documentation without sources
3. Bad or no documentation with sources
4. Bad or no documentation without sources

The leap between 2 and 3 would probably be very large for most users, and the leap between 1 and 2 very small.

You can never document everything as fully as you can by *also* supplying the sources, but as you say, supplying sources should never be an excuse for less documentation. Even if you cannot
use the source yourself as a way of getting more information, you can always hire someone else to do so in those rare cases where the need remains.


Then there is the question about the form of documentation (paper, pdf, windows help, html, whatever), but as I see it this is an entirely
separate question from open vs closed source.


/Johan Bengtsson

----------------------------------------
P&L, Innovation in training
Box 252, S-281 23 Hässleholm SWEDEN
Tel: +46 451 49 460, Fax: +46 451 89 833
E-mail: johan.bengtsson@pol.se
Internet: http://www.pol.se/
----------------------------------------

By Curt Wuollet on 30 October, 2001 - 2:20 pm

How would you feel about a compromise? Documentation in pdf or postscript that you can convert into a big thick paper manual at your leisure. I think a lot of the gripes about online docs are that they are in file formats that are essentially useless offline. An actual book that can be used either way solves a lot of those issues for me, as I also like the attributes of hardcopy. Carrying full documentation on a CD in your notebook case has quite a bit of merit too.

The absolutely worst, less than useless documentation for me is the stuff you can only access in the application. Various Microsoft formats vie for that distinction, as they are useless to me and often useless to anyone unless you have a fairly specific set of expensive software on every machine you work on. Unfortunately these are the most prevalent types, possibly because they are convenient for the technical writer in a cubicle somewhere. Postscript is great on Linux, but Microsoft went through a great deal of trouble to avoid it as a standard - are there convenient means to view it in Windows? I know pdf is fairly universal as Adobe provides free readers.

HTML is pretty good also, but the typography and printability are worse than PS or pdf. I'm curious, are pdf and/or PostScript good for everybody else? Perhaps the vendors (and our project) will be listening.

Regards

cww

By Rick Jafrate on 30 October, 2001 - 2:23 pm

You seem to be describing DocBook. I used it a couple of years ago. You write once and publish to HTML, PostScript, and others. It creates hyperlinks, tables of contents, etc.

It formats the document both for the web and for hardcopy. I know someone brought this up before but I did not follow the discussion to its conclusion. Is there a reason why DocBook is an inadequate or unacceptable solution?

regards

Rick Jafrate

By Curt Wuollet on 1 November, 2001 - 9:23 am

Yes. One reason is that we can't get the vendors to use it :^) DocBook would suit our (MAT) purposes well if anybody knows it, but it is not likely to be adopted by the majors anytime soon.
Are there decent free authoring tools? We won't convince anyone to do a markup language by hand. Not criticism, I'm asking. Perhaps now that StarOffice uses XML, there will be filters to
convert. Programmers are more likely to work with low level tools, but tech writers have large time investments in particular systems and are very difficult to switch. Many still use XyWrite.

The reason I started with ps and pdf is that many browsers and WPs can export ps (which can be converted) or pdf. Although I've dabbled with TeX and LaTeX and they do beautiful work, they are not the first choice for tech writers in this market.
I guess I don't care how they create content as long as they publish it in some format that doesn't require their product or a full blown Windows installation to read the one paragraph I care about.

Regards

cww

By Michael Griffin on 2 November, 2001 - 10:39 am

At 11:58 30/10/01 -0600, Curt Wuollet wrote:
<clip>
>The reason I started with ps and pdf is that many browsers and WPs can
>export ps which can be converted or pdf.
<clip>

In case anyone who produces documentation may be interested: if you are using Windows, virtually any program can be made to produce postscript output. All you have to do is set up a new printer definition. Create a new printer selection using a postscript printer definition, and set it to "print to file". Select this "printer" to print to, and then print from your program. This will create a postscript file (a window will automatically open to ask you for a file name). Next, use Ghostscript to convert the postscript file to PDF format.
This is all fairly painless, and costs nothing. I have used it many times quite successfully to produce various forms of documentation.
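For anyone curious what that final Ghostscript step looks like, here is a minimal command-line sketch. The filenames are just placeholders, and it assumes Ghostscript is installed (ps2pdf is the convenience wrapper that ships with it):

```shell
# Stand-in for the PostScript file produced by the "print to file"
# printer definition described above (a trivial one-page document).
printf '%%!PS\nshowpage\n' > manual.ps

# Convert PostScript to PDF with Ghostscript's ps2pdf wrapper.
ps2pdf manual.ps manual.pdf

# The direct gs invocation, if you need finer control over options:
# gs -dBATCH -dNOPAUSE -sDEVICE=pdfwrite -sOutputFile=manual.pdf manual.ps
```

The resulting manual.pdf opens in any of the free Adobe readers mentioned above.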

**********************
Michael Griffin
London, Ont. Canada
**********************

By Michael Griffin on 1 November, 2001 - 9:20 am

There has actually been some serious academic research into the two types of documentation formats (book style versus on-line "link" type help). I believe it also addressed printed books versus viewing the exact
equivalent on a screen.

The conclusion was that a printed book was slightly better than the exact same format viewed on a screen. It was believed that this was due to the book being easier to see (larger format, better viewing contrast). The difference between printed versus electronic though, was not very large.

There was a significant difference though between book format (in electronic form) and the typical "help" format you are referring to. The
book format was found to be much better for actually learning something new. While the "help" format was very poor for learning new subjects, it was convenient for looking up short references to previously learned subjects.

The conclusion was that there is a place for both formats. The degree of emphasis should depend upon how much learning is expected to be
done, versus simply looking up previously known facts.


**********************
Michael Griffin
London, Ont. Canada
**********************

By Lillie, Dave on 1 November, 2001 - 9:23 am

On the documentation sub-thread:

While pdf provides great printing capability, it suffers from some of the same proprietary usability issues as Microsoft's legacy WinHelp and present compiled HTML (.chm) doc format. The issue is that these proprietary formats are either difficult, slow, or $expensive to use from an
independent search program's perspective.

Why is independent search important? The ability to "text search" for user supplied keywords across a collection of documentation/manuals is an
amazing productivity enhancement. As it is very rare for any of today's systems to come exclusively from one vendor, the ability to integrate documentation search across an ad hoc collection of various vendor docs - including your own custom documentation - will become increasingly important as evolution continues (i.e. as young "Google Savvy" engineers and customers begin to demand productive documentation design).
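As a toy illustration of the kind of cross-vendor keyword search being described (the directory, filenames, and keyword below are made up, and it assumes the manuals have already been rendered to plain text - exactly the step that proprietary formats make hard):

```shell
# Fake "converted manual" collection from two vendors.
mkdir -p docs
printf 'The SLC-500 fault codes are listed here.\n' > docs/vendor_a.txt
printf 'Nothing relevant in this manual.\n'         > docs/vendor_b.txt

# One command searches the whole ad hoc collection, case-insensitively,
# and lists only the files that mention the keyword.
grep -ril 'fault codes' docs
```

Once the docs are open text, the "independent search program" is a one-liner; the argument in the post is that .hlp/.chm/.pdf stand between you and this step.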

There is one more major issue besides index/search which demands attention - document design. There is a difference between the design of a hard printed manual and the design of a hypertext based (or web based) document. As we all know, "refer to chapter 12 section 3" in a hard printed manual is a nuisance, while the equivalent "hypertext link" (in proper context) in an html document is a wonderful thing that reduces the clutter of "optional" or "obnoxious clarification" information, tables, etc. Hypertext clearly aids in user comprehension due to its ability to encapsulate supplementary information and naturally cross reference
associations. Hypertext provides the ability to bind to dynamic, up-to-date (i.e. relevant) information such as catalogs, news, and events.

Unfortunately a solution that adequately addresses the requirements for both hard and soft documents is not here yet. I do not believe that the techwriting community has agreed upon an "Open System" documentation design that both embraces hypertext and provides an organized way to print out the procedures, configuration tables, and job aids that are needed to operate offline. If the techwriting community embraces and abides by the W3C, good things should be happening soon. If on the other hand they get hooked into old Macintosh print format (pdf), or the next "gimme your wallet" Microsoft scheme, it is going to be a long bumpy ride.

Hopefully this issue can be resolved using W3C standards and an application Open Source implementation that prevents certain vendors from
convoluting it. (Note: application open source extends past Linux to include Windows & Unix. Examples: NetBeans, Castor, JBoss, PostgreSQL. I
had to clarify, as I have seen postings on this list that use Linux and Open Source as synonyms.)

I do not claim to be a documentation expert. I have had experience managing a program that attempted to fully integrate the documentation,
help, and website of 14 independently developed products. I have had experience managing a program containing Open Source applications and
development tools. I question whether .hlp, .chm, .pdf, .vendorwhatever
will help us get where we need to go. I hope that some form of XML compatible html, combined with CSS and an open, standards body endorsed, reference implementation will materialize.
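As a small, purely illustrative aside: part of what "XML compatible html" buys you is that well-formedness can be checked mechanically by any XML parser, which is exactly what an independent search or printing tool would rely on. A hedged sketch using only Python's standard library (the markup fragments are invented):

```python
# Sketch: checking that an HTML fragment is also well-formed XML
# (the "XML compatible html" idea, i.e. XHTML). A real workflow would
# use a validating parser; this only tests well-formedness.
import xml.etree.ElementTree as ET

def is_xml_compatible(markup):
    """Return True if the markup parses as well-formed XML."""
    try:
        ET.fromstring(markup)
        return True
    except ET.ParseError:
        return False

# An unclosed <br> is tolerated in legacy HTML but not in XHTML:
legacy = "<p>See chapter 12<br></p>"
xhtml  = "<p>See <a href='ch12.html#s3'>section 3</a><br/></p>"
```

Any tool that can rely on this property - indexer, printer, help viewer - no longer needs a vendor-specific parser.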

The documentation issue extends well past the boundaries of Manufacturing or Industrial Control. It will most likely be solved by the general software industry under the "eCollaboration - workflow" category, as
opposed to being solved in our smaller "cManufacturing" realm.

In summary - The technology exists to completely solve this problem. Unfortunately we do not have a coordinated effort to leverage it ...... yet.


My 2 Cents,

Dave Lillie
Software Program Manager

(Opinions expressed are mine alone, and should not be associated with the opinions of my employer - Rockwell Software Inc.)

By Curt Wuollet on 2 November, 2001 - 3:49 pm

Hi Dave

An excellent and well-reasoned analysis. I have a few comments.

Dave Lillie wrote:

> On the documentation sub-thread:
>
> While pdf provides great printing capability, it suffers from some of the
> same proprietary usability issues as Microsoft's legacy winhelp, and
> present compact html doc format. The issue is that these proprietary
> formats are either difficult, slow, or $expensive to use from an
> independent search program's perspective.

Yes, pdf is the only de facto "standard" that is reasonably capable and portable. It is not without major problems; it merely has fewer of them than the others.

> Why is independent search important? The ability to "text search" for user
> supplied keywords across a collection of documentation/manuals is an
> amazing productivity enhancement. As it is very rare for any of today's
> systems to come exclusively from one vendor: The ability to integrate
> documentation search across an ad hoc collection of various vendor docs -
> including your own custom documentation will become increasingly important
> as evolution continues (I.E. as young "Google Savvy" Engineers & customers
> begin to demand productive documentation design).

Absolutely.
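For illustration, the kind of independent cross-vendor search described above can be sketched as a trivial keyword index over plain text; the document names and contents below are invented, and a real tool would first extract text from HTML or PDF:

```python
# Hedged sketch of "independent search": build a keyword index across an
# ad hoc collection of documents, regardless of which vendor wrote them.
from collections import defaultdict
import re

def build_index(docs):
    """Map each lowercased word to the set of document names containing it."""
    index = defaultdict(set)
    for name, text in docs.items():
        for word in re.findall(r"[a-z0-9]+", text.lower()):
            index[word].add(name)
    return index

def search(index, query):
    """Return the documents containing every word of the query."""
    words = query.lower().split()
    if not words:
        return set()
    hits = set(index.get(words[0], set()))
    for word in words[1:]:
        hits &= index.get(word, set())
    return hits

docs = {
    "vendor_a_manual.txt": "Configure the analog input module before scaling.",
    "vendor_b_manual.txt": "The analog output card requires a 24V supply.",
    "site_notes.txt": "Our analog input channels are wired to terminal block 3.",
}
index = build_index(docs)
```

The point is that nothing here cares who produced each document - which is exactly what the proprietary help formats make difficult.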

> There is one more major issue besides index/search, which demands
> attention - Document Design. There is a difference between the design of a
> hard printed manual, and the design of a hyper text based (or web based
> document). As we all know, "refer to chapter 12 section 3" in a hard
> printed manual is a nuisance, while the equivalent "hypertext link" (in
> proper context) in an html document is a wonderful thing that reduces the
> clutter of "optional" or "obnoxious clarification" information, tables,
> etc. Hypertext clearly aids in user comprehension due to it's ability to
> encapsulate supplementary information and naturally cross reference
> associations. Hypertext provides the ability to bind to dynamic,
> up-to-date (I.E. relevant) information such as catalogs, news, and events.
>
> Unfortunately a solution that adequately addresses the requirements for
> both hard and soft documents is not here yet. I do not believe that the
> techwriting community has agreed upon an "Open System" documentation
> design that both embraces hypertext, and provides an organized way to
> print out the procedures, configuration tables, and job aids that are
> needed to operate offline. If the techwriting community embraces, and
> abides by the W3C - good things should be happening soon. If on the other
> hand, they get hooked into old Macintosh print format (pdf), or the next
> "gimme your wallet" Microsoft scheme, it is going to be a long bumpy ride.

The big problem here has been that the exclusive adoption of MS products for tools has implied their "standards" across the board. There is obvious merit in that approach, but it purposely scuttles any other options and inhibits cooperative efforts to really solve the problem. That's a very high threshold to overcome. And it's hard to get Windows users to care because it "works for them".
>
> Hopefully this issue can be resolved using W3C standards and an
> application Open Source implementation that prevents certain vendors from
> convoluting it. (note: application open source extends past Linux to
> include Windows & Unix. Examples - NetBeans, Castor, JBoss, PostGreSQL. I
> had to clarify, as I have seen postings on this list that use Linux and
> Open Source as synonyms)

I was worried until the W3C apparently decided that patented "standards" were not in their best interest. Nothing is final yet. We may yet be
without an objective and uncorrupted standards body. If they yield, all we will have is the OSS community.

I have made a special effort to be inclusive in this discussion as it affects everyone. The list of platforms in use here is actually much longer than in the general computing population. It's not simply Windows vs. everyone else; e.g., a surprising number of folks still need DOS.

> I do not claim to be a documentation expert. I have had experience
> managing a program that attempted to fully integrate the documentation,
> help, and website of 14 independently developed products. I have had
> experience managing a program containing Open Source applications and
> development tools. I question whether .hlp. .chm, .pdf, .vendorwhatever
> will help us get where we need to go. I hope that some form of XML
> compatible html, combined with CSS and an open, standards body endorsed, reference implementation will materialize.

We can only hope.

> The documentation issue extends well past the boundaries of Manufacturing
> or Industrial Control. It will most likely be solved by the general
> software industry under the "eCollaboration - workflow" category, as
> opposed to being solved in our smaller "cManufacturing" realm.
>
> In summary - The technology exists to completely solve this problem.
> Unfortunately we do not have a coordinated effort to leverage it ......
> yet.

And certain nearly omnipotent parties will be frantic and desperate to prevent or derail it. Hopefully there is enough enlightenment
to cause them to fail.

> My 2 Cents,
>
> Dave Lillie
> Software Program Manager
>
> (Opinions expressed are mine alone, and should not be associated with the opinions of my employer - Rockwell Software Inc.)

I don't think you have to worry about that :^) at least not the misassociation.

Regards

cww

--
Free Tools!
Machine Automation Tools (LinuxPLC) Free, Truly Open & Publicly Owned
Industrial Automation Software For Linux. mat.sourceforge.net.
Day Job: Heartland Engineering, Automation & ATE for Automotive Rebuilders.
Consultancy: Wide Open Technologies: Moving Business & Automation to Linux.

Michael,

Interestingly, for OEMs like Eason, the licensing costs of including some open source products may actually be more expensive than old-fashioned proprietary products. I was going to test MySQL (open source database product) to replace a proprietary SQL database product that is used in one of my company's products. I just needed a single workstation version that supported SQL. The licensing terms for NON-END-USERS make licensing MySQL more expensive than my original proprietary database product, especially for low product volumes.

In the case of MySQL, if the end-user configures and installs the database (or hires a consultant to do it), then there is no licensing fee for my
company's product if I want to include MySQL connectivity. Unfortunately, the industry that the product is targeted at typically has factories with fewer than 50 people, which means that they usually have very little IT support, and very little money in the consulting budget.

Sincerely,

Mark Wells
President
Runfactory Systems Inc.
http://www.runfactory.com
1235 Bay Street, Suite 400
Toronto, Ontario, Canada M5R 3K4
Ph. 416-934-5038
Fax 416-352-5206

By Michael Batchelor on 22 October, 2001 - 10:02 am

This licensing cost is only one of several problems I've had with MySQL, so I've started doing some work with Postgres. Its open source license really is open source. There is a Windows build, but I haven't started experimenting with it yet. And there is an ODBC driver freely available, too. My guess would be that it's not going to be easy to set up the way Wonderware just installs MS SQL Server, but I think it will be workable.
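As a sketch of the database pattern under discussion: Python's DB-API keeps the SQL itself portable across back ends. Here the standard library's sqlite3 stands in so the example is self-contained; against PostgreSQL through the free ODBC driver you would swap the connect() call (e.g. pyodbc.connect(...), assuming the pyodbc package) and keep the rest. The table and values are invented:

```python
# Single-workstation SQL workload, sketched with the DB-API so the
# same statements would run against Postgres, MySQL, etc.
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for an ODBC connection
cur = conn.cursor()
cur.execute("CREATE TABLE readings (station TEXT, value REAL)")
cur.executemany(
    "INSERT INTO readings VALUES (?, ?)",
    [("press_1", 101.3), ("press_1", 99.8), ("oven_2", 450.0)],
)
cur.execute("SELECT AVG(value) FROM readings WHERE station = ?", ("press_1",))
avg_press_1 = cur.fetchone()[0]
conn.close()
```

Keeping to the portable subset of SQL is what makes it cheap to walk away from a database vendor whose licensing terms change.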

By Vincent QUILLET on 22 October, 2001 - 2:11 pm

We are doing some work with Interbase under its open source license. It works well, and all I have spent is $149 for the IBPhoenix Developer CD. I am just curious to know why Borland dropped the product into the open source community and then, a few months later, continued to sell it.

Vincent QUILLET
ASALOG
FRANCE
Tél.: +33 442 94 06 87
Fax : +33 442 94 06 88
e-mail : vquillet@asalog.com

By Ed Mulligan on 22 October, 2001 - 10:05 am

In this case, aren't you serving as the consultant to install the software for the end user, so you are covered? Have them write their P.O. so that they are buying the system _and_ the consulting service. For your development box, aren't you the end user and also covered?

I won't let never having read MySQL's agreement stop me from answering. 8^)

Ed

Speaking for me, not for Starbucks. . .

By Mark Blunier on 22 October, 2001 - 12:11 pm

We have to pay for the software over and over again through support contracts, because we expect to make changes to the system, and know that if we do, we are likely to run into problems, and our only hope of getting help from Rockwell is to have a support contract. We also figure that they will gradually eliminate a few of the bugs in the newer versions of the program. This is also one of the reasons some of us prefer Modicon. They don't charge us for support beyond the cost of the equipment, and the
software (at least the old stuff that I'm using) doesn't seem to be full of bugs.

Mark Blunier
Any opinions expressed in this message are not necessarily those of the
company.

> >Now, as another poster here complained, Rockwell is charging yearly for
> >their RS* software, in addition to charging a premium for their hardware.
> >Why can they get away with it? Easy -- they're a big company selling to
> >big companies.
>
> They can charge a high price because they are perceived as having
> dominance in a particular sector of the market. People are willing to pay
> extra for AB hardware partly because they believe it is easy to find
> lots of companies (integrators, etc.) who will have the software and so
> can support them. Integrators are willing to pay high prices for the
> software because they believe there are lots of AB PLCs being installed
> in machines, and so they can pick up lots of work programming them.

By Robert McDonald on 22 October, 2001 - 1:38 pm

I can only agree about the lock-in R$ seems to have with its software.

Compounded by the fact that the newer hardware can only be supported by the newer software, so you MUST upgrade. Take a MicroLogix 1200, for example: we had a dozen or so used in a neat OEM project. Tried to build another recently; now it's Series C. Sorry, you need to upgrade R$500 to program it, please deposit money here.

Robert McDonald

By Michael Griffin on 22 October, 2001 - 11:22 am

We have a piece of custom test equipment we wanted duplicated (with minor changes) in a hurry by the original integrator. It is a custom Visual Basic application running on a PC with a Windows NT operating system.

To make a long story short, the integrator spent his entire on-site time allocation (3 days) trying to solve Windows NT computer hardware dependencies (problems with talking to various boards), and had to switch to
Windows 98 just to get things working. The actual integration work (actually solving the problem he was getting paid for) ended up being done on his own time (with the customer breathing down his neck).
He of course still has to spend additional time figuring out the Windows NT problem to deliver on the terms of the contract. He will likely solve this by replacing the computer. All of this extra work (and hardware)
is of course at his own expense. This isn't unusual when dealing with PC systems though.

This example brings up an interesting question. It has been asked in the course of this discussion as to where someone would turn to for support with open source software.
The fellow mentioned above was using Microsoft Visual Basic with Microsoft Windows NT, which is of course "closed" and proprietary. Where
could he turn to for support with his software problems? Nowhere, as far as I could see. He had to solve everything himself. Who *would* he have talked to? The computer worked. The boards worked. Windows NT worked. Visual Basic worked. They just didn't work together.

I guess the only solution to this sort of problem is "to have smart people on staff". Have you got any better ideas? Should we have called
Microsoft to complain about their software? Do you really think that would have helped?

I don't know if we would have been any better off using open source software in this sort of situation. I find it hard to imagine though how things could have been any worse.

**********************
Michael Griffin
London, Ont. Canada
**********************

Michael Griffin:
> Where could he turn to for support with his software problems? No where
> as far as I could see. He had to solve everything himself. Who *would* he
> have talked to? The computer worked. The boards worked. Windows NT
> worked. Visual Basic worked. They just didn't work together.

> I guess the only solution to this sort of problem is "to have smart
> people on staff". Have you got any better ideas?

Community support - like this list, or the lists for whichever were the dominant parts of the problem. A couple of years back I think one of these actually won an award for "best support" or "support of the year" or something...

> Should we have called Microsoft to complain about their software? Do you
> really think that would have helped?

It's been suggested that calling MS - and keeping good logs - might be a way of convincing management to drop the damned thing... which may or may not be so, and isn't applicable in the given situation anyway.

> I don't know if we would have been any better off using open source
> software in this sort of situation. I find it hard to imagine though how
> things could have been any worse.

The advantage would be that both he and whoever he turns to for support has access to the *whole* of the problem. With proprietary systems, even if you do get someone to look at the problem, Microsoft can't see what's happening in the board driver and the board maker can't see what's happening in VB or Windows.

If all the source is open, anyone can look at anything.

Jiri
--
Jiri Baum <jiri@baum.com.au> http://www.csse.monash.edu.au/~jirib
MAT LinuxPLC project --- http://mat.sf.net --- Machine Automation Tools

By Curt Wuollet on 22 October, 2001 - 3:10 pm

Only if you are ill prepared enough to go on site and try to install on random hardware. He can't possibly do that often or he would know that
even with "built for windows" the occasional nightmare occurs. I wouldn't try this without a plan B. Either a known MB I could swap in or a bare bones spare. Anyone that you pay to do PC's should be prepared for this and at least know the configuration is going to work ahead of time.
When I do this for business systems, I have them dump and fax the system info to prevent surprises. For testers I specify the hardware to be sure. They can buy it from anyone they want, but my sources are hard to beat. Linux is remarkably good about the basic PC, but most name brand PC's come loaded with Winjunk which isn't desirable even under Windows. I wouldn't spend more than a couple hours with the platform on a
customer site. If time is tight, I might bring a hard drive with a working system on it. Unfortunately this doesn't often work with
Windows. He should eat the time because he didn't do his homework. Of course, Windows being so much easier to install, I thought this never happened. And if all else fails, I just read the driver source to see what's going on and how to fix it. I would sweat a lot more if I had to depend on "hotlines". Linux and Open Source give me the power to see that this doesn't happen if I do my part.
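Curt's "dump and fax the system info" step can be sketched as a small script the customer runs and sends back before anyone travels to the site. A hedged illustration using only Python's standard library; a real survey would also capture bus devices, BIOS revision, and board models:

```python
# Collect a minimal machine profile into one plain-text report that can
# be mailed or faxed ahead of a site visit, to prevent surprises.
import platform

def system_report():
    info = {
        "machine": platform.machine(),
        "processor": platform.processor(),
        "system": platform.system(),
        "release": platform.release(),
        "python": platform.python_version(),
    }
    lines = ["SITE SURVEY REPORT"]
    for key in sorted(info):
        lines.append(f"{key:10s}: {info[key]}")
    return "\n".join(lines)

report = system_report()
```

Knowing the configuration before arrival is the cheap insurance; the expensive alternative is discovering it on the customer's clock.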

> This example brings up an interesting question. It has been asked in
> the course of this discussion as to where someone would turn to for support
> with open source software.

And answered unambiguously. For applications built on LPLC for example, your solution provider. For LPLC, the project. The application provider would know everything about the application and the project knows more about LPLC than anyone you will ever get to talk to knows about the commercial products you use now. You not only have greater access to help but the help is of a much higher quality. Quite often you can
mail the author.

> The fellow mentioned above was using Microsoft Visual Basic with
> Microsoft Windows NT, which is of course "closed" and proprietary. Where
> could he turn to for support with his software problems? No where as far as
> I could see. He had to solve everything himself. Who *would* he have talked
> to? The computer worked. The boards worked. Windows NT worked. Visual Basic
> worked. They just didn't work together.
>
> I guess the only solution to this sort of problem is "to have smart
> people on staff". Have you got any better ideas? Should we have called
> Microsoft to complain about their software? Do you really think that would
> have helped?

The blame rests squarely on the integrator. Unfortunately, 99% of the project was beyond his control. That's one of the major reasons I use
Linux and OSS. I can handle almost anything that can happen and whatever your code does, it can't run any better than the platform it's on.

> I don't know if we would have been any better off using open source
> software in this sort of situation. I find it hard to imagine though how
> things could have been any worse.

I have never gone more than 24 hours without a solution to platform problems, or at least a definitive answer. I very seldom have to ask
anyone. Almost all the problems I've seen have been encountered before and the solution is on deja.com or another list. This is better than buying answers from Microsoft or other proprietary vendors. It might be 3 days before you talk to anyone who knows anything and in the meantime the lower echelons insist on doing
things that might wreck your installation.

Once you know how to use the OSS resources, it really works well.

Just as an example, I was updating my standard tester platform and needed a driver for the DAQ card on the 2.4 Linux kernel. I found it and downloaded it, but it wouldn't compile. I politely
emailed the author and had the patch and an apology in two hours. I was actually embarrassed by the apology, but this has been typical of my experience with OSS folks. Contrast that with the
support you get after paying big bucks for it.

Regards

cww

--
Free Tools!
Machine Automation Tools (LinuxPLC) Free, Truly Open & Publicly Owned
Industrial Automation Software For Linux. mat.sourceforge.net.
Day Job: Heartland Engineering, Automation & ATE for Automotive
Rebuilders.
Consultancy: Wide Open Technologies: Moving Business & Automation to
Linux.

By Jeffrey D. Brandt on 22 October, 2001 - 3:35 pm

FOUL!
We all should remember the 'triangle of truth' here, if nowhere else. The story below: 'in a hurry' & 'duplicated' (= fast & cheap). So the triangle (good, fast, cheap: pick two, give up the third) is met, and you have CHOSEN to give up GOOD. YOUR CHOICE!
The poor integrator is simply doing what you tell him. He is eating all the costs to make the job work, so he gets a chance to do the next one. NO FAIR picking on this integrator, IMhO.
jDb

By Michael Griffin on 23 October, 2001 - 3:34 pm

We asked the integrator if they could do it in the time window allowed to us by our customer. The integrator said "yes, they could do it". We proceeded on that basis. It may have been "fast", but it was definitely not "cheap". The integrator was very generously paid for quick turn around. If you say you can do something, and demand a stiff price for it, don't complain when you are asked to deliver on your promises.

The integrator's mistake was to assume that the PC and operating system would not be the source of any problems. They picked up the PC (from their supplier) on the way to our place. We didn't like them leaving the testing of the PC until the last moment, but it was a rush project and we assumed that they knew their business. They don't need us driving from the back seat.

The problem arose in that Windows NT had various unexpected compatibility problems with the combination of hardware (it appears to be
related to the particular model of motherboard used - one which they selected).

I think there is a lesson to be learned from this. It seems to be a mistake to assume that just because Windows NT is fairly common as an operating system and Visual Basic is a common development package, any problems involved in using them have already been ironed out long ago. This does not seem to be the case. This is not the only minor disaster I have seen due to Windows "mysteries".

If you are developing applications in Visual Basic for Windows NT, you had better have "good people on staff" (to paraphrase a previous writer). You can probably whip something together without really knowing what you are doing and not have any problems, but you can't count on it. I think the story I related is a good example of what can go wrong.

The biggest complaint I hear from integrators is the lack of useful information about how Windows actually works. Most of them are reduced to guessing and poking in the dark. There may be a thousand books at the local "Chapters" book store on Windows NT, but not a single one of them has proved to be of any use to someone constructing the sort of applications we need.

Perhaps our applications are asking too much of the operating system (Windows) our integrators have been using for us. We began as adherents of the conventional philosophy of using "readily available" office PCs and operating systems. DOS seemed to work satisfactorily in its day. The more recent results have been discouraging. This has caused us to question our
original assumptions. So far we have not tried any alternatives, but we are thinking about them very hard indeed.

It is no good, as you have done, to simply blame the schedule. This is the world we have to live in. We have customers too, and they have no
patience whatsoever. One of our customers in Europe wanted an immediate turn around on a very large order. We have an obligation to serve our
customers, which we take very seriously. This equipment was to deal with that.

We also have our component suppliers, and they have their own delivery problems. We have to work around those as well. One of our component suppliers (located in Taiwan) encountered quality problems with one of their suppliers of chemicals (located in the USA). The normal solution would have been air freight, but due to recent events in the United States they discovered that they are no longer allowed to ship large quantities of flammable chemicals in aircraft from American suppliers. The gentleman who created that particular situation neglected to inform us of it
in advance. An oversight, I'm sure. Welcome to the global economy.

**********************
Michael Griffin
London, Ont. Canada
**********************

By Alex Pavloff on 22 October, 2001 - 3:36 pm

Curt, some of what you say has merit, but I've got some issues with the rest...

> Only if you are ill prepared enough to go on site and try to install on
> random hardware. He can't possibly do that often or he would know that
> even with "built for windows" the occasional nightmare occurs. I
> wouldn't try this without a plan B. Either a known MB I could swap in or
> a bare bones spare. Anyone that you pay to do PC's should be prepared
> for this and at least know the configuration is going to work ahead of
> time. When I do this for business systems, I have them dump and fax the
> system info to prevent surprises. For testers I specify the hardware to
> be sure. They can buy it from anyone they want, but my sources are hard
> to beat.

This advice should be followed by everyone using PCs!

> I wouldn't spend more than a couple hours with the platform on a
> customer site. If time is tight, I might bring a hard drive with a
> working system on it. Unfortunately this doesn't often work with
> Windows.

Get a system like Ghost, and it works EVERY TIME. Curt, you know about Linux, but your Windows knowledge, understandably, is lacking. You say you haven't used Windows since, what, 1997? Your qualifications in this area are slim, and when you turn around to bash Windows for sucking, you miss the mark sometimes.

> Just as an example, I was updating my standard tester platform
> and needed a driver for the DAQ card on the 2.4 Linux kernel.
> I found it and downloaded it, but it wouldn't compile. I politely
> emailed the author and had the patch and an apology in two hours.
> I was actually embarrassed by the apology but, this has been
> typical of my experience with OSS folks. Contrast that with the
> support you get after paying big bucks for it.

I don't think everyone is willing to slap a two-hour fix into their system without testing. I don't know your case; it might have been a simple fix that didn't need it. However, keeping up with the 0.1, 0.2, 0.3... etc. revs of many Linux projects can cause problems too.

Alex Pavloff
Software Engineer
Eason Technology

By Michael Griffin on 29 October, 2001 - 11:55 am

>Only if you are ill prepared enough to go on site and try to install on
>random hardware. He can't possibly do that often or he would know that
>even with "built for windows" the occasional nightmare occurs.

This was their mistake - with the mitigating circumstance of the limited time available, and their PC supplier was late (of course).

>I wouldn't
>try this without a plan B. Either a known MB I could swap in or a bare
>bones spare. Anyone that you pay to do PC's should be prepared for this
>and at least know the configuration is going to work ahead of time.
<clip>

A known motherboard? How do you do that? If you use "standard" office PC style motherboards, you generally find that they go obsolete so fast you can't buy two in a row that are exactly the same.
You must either have a source of motherboards that changes their design very little over time, or you must have an operating system that is very tolerant of different hardware. I find Windows NT's fussiness particularly surprising considering that the motherboard suppliers (and the chip designers) are designing their products with Windows in mind.


>The blame rests squarely on the integrator. Unfortunately, 99% of the
>project was beyond his control. That's one of the major reasons I use
>Linux and OSS. I can handle almost anything that can happen and
>whatever your code does, it can't run any better than the platform it's
>on.

Yes in one sense, the blame lies with the integrator. However, I see the problems they routinely face, and they are not what I consider to be "value added" activities. These guys are supposed to be solving *testing* problems, not Windows problems. It isn't just one company, or just integrators. I see OEMs having the same difficulties.
For example, they want to fix a bug, but they upgraded their compiler since the last time they worked on it. Now it's no longer compatible with the database, so that has to be upgraded. That in turn affects something else, which also needs an upgrade. Now we need a memory or hard drive upgrade because of all the software upgrades.
One minor software incompatibility leads to a chain reaction of cascading upgrades through the whole system. They plan on one "minor" change and find themselves being pulled in by an undertow whose existence they never suspected. How do you plan anything in an environment like that?


**********************
Michael Griffin
London, Ont. Canada
**********************

> A known motherboard? How do you do that? If you use "standard"
>office PC style motherboards, you generally find that they go obsolete so
>fast you can't buy two in a row that are exactly the same.

I have started using Advantech CPU boards with Intel's "embedded" chipset and CPU. It has been pledged for a minimum of five years' supply. It is a 266 MMX with a TX chipset. It works fine with our software. The CPU is surface mounted and doesn't require a fan. The price is reasonable also. Very cool. (: No pun intended...

I think Contec also has long design life systems like these. More vendors should do this.

Bill Sturm

I agree with Bill- I used to do something similar at my last job. Every year, we would select and document a motherboard and supporting hardware
(video card, drives, modem, NIC, etc.) for machines used that year. At least that way, we knew what the base was in future and had a starting point for upgrades or compatibility issues. In a couple of cases, I supplied spare motherboards and CPUs to some customers that they stocked as spares in their facility.

Paul T

By Michael Griffin on 29 October, 2001 - 12:49 pm

Bill Sturm wrote:
<clip>
>I have started using Avantech CPU boards with Intel's "embedded" chipset
>and CPU. It has been pledged for a minimum of five years supply. It is a
>266MMX and a TX chipset. It works fine with our software. The CPU is
>surface mounted and doesn't require a fan. The price is reasonable also.
>Very cool. (: No pun intended...
>
>I think Contec also has long design life systems like these. More vendors
>should do this.
<clip>

I have just spent the past couple of days going over various catalogues and web sites for information about industrial computers. I have
been asked to research industrial computers to set some sort of standard for our custom applications. We are fed up with office type desk-top computers, regardless of what type of box they get put in. The latest fiasco was the
last straw for us.

So far I liked what I saw in Advantech's literature the best. I am looking at:

a) passive backplane (14 slot - some mix of ISA and PCI),
b) (I forget the chassis prefix) 616 chassis,
c) redundant 250W power supplies,
d) a 6178 or 6179 (I think those are the numbers) full length CPU card,
e) the hardware RAID package (mirrored drives).

I haven't added it up yet to see if it all goes together, and if the power supply is big enough. However, I have a few questions which I wonder if you might answer.

Have you used any of the above hardware, or at least anything similar to them (from Advantech)? If so, would you do the same thing again?

What would you generally say about Advantech in terms of their

1) reliability of hardware,
2) support,
3) delivery on new systems,
4) delivery on spares,
5) etc.?


I was asked to look into passive backplane because of the difficulty of getting conventional motherboards with enough slots, and slots of the right type.

The 616 chassis looks good for features and diagnostics (and I think it fits into our existing equipment). The redundant power supplies are intended to make power supply replacement easier (we don't really need the redundancy). The CPU cards had the right features and looked modern enough to be around for a while.

The hardware RAID package looks interesting (if we can use it). It fits in one full height bay and is operating system independent.

We have found that just keeping a back-up hard drive on the shelf doesn't work, as making the back-up after any changes (e.g. software
versions, parameter changes, etc.) is such a pain that people will always do it "later" (i.e., never). We have never yet found a back-up hard drive to be up to date when we needed it. The RAID system ought to solve this, as the "back-up" is always installed and up to date.
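The staleness problem described above can be made mechanical: a backup is out of date as soon as any live file is missing from it or newer than its copy. A hedged sketch of that check (file names and timestamps invented); RAID mirroring sidesteps the whole issue by writing the copy in lockstep:

```python
# Flag live files that a shelved backup no longer covers, given
# {relative_path: modification_time} maps for each side.
def stale_files(live, backup):
    """Return paths missing from the backup or newer than their copy."""
    return sorted(
        path for path, mtime in live.items()
        if path not in backup or mtime > backup[path]
    )

# Invented example: the PLC program was edited after the last backup,
# and the limits table was never backed up at all.
live = {"recipe.cfg": 1001, "plc_program.bin": 1200, "limits.tbl": 900}
backup = {"recipe.cfg": 1001, "plc_program.bin": 1100}
stale = stale_files(live, backup)
```

Run against a real filesystem (e.g. walking directories and comparing os.path.getmtime), a check like this at least makes the "I'll back it up later" gap visible instead of discovering it during a failure.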

**********************
Michael Griffin
London, Ont. Canada
**********************

> What would you generally say about Advantech in terms of how good is
> their 1) reliability of hardware,

Too soon to tell.

> 2) support,

I haven't needed much but from what I can tell, they are helpful.

> 3) delivery on new systems,

Good.

Bill Sturm

By Richard Dewees on 2 November, 2001 - 3:22 pm

I have 3 Advantech PC's doing process control.
The one that has been in the longest has been running for 2 years without any failures. It is in a dusty environment where the temperatures can fluctuate from 40-90F. I also have a Compaq desktop sitting right next to it that has been there almost as long and it hasn't failed me either (and it cost half as much).

The Hardware I am using is the PCA-6176 series in the IPC6098 Industrial PC Chassis.

Rick Dewees
Ocean Kayak

By Michael Griffin on 2 November, 2001 - 3:34 pm

We have had desktop PCs in a production environment controlling test equipment for up to 8 years (or more) without problems. However, those
tend to be the exception. The AT&T 6300 (which was actually an Olivetti) was particularly good (one of these was the 8-years-or-more computer).
I would expect to see at least 2 years without trouble from a desktop PC. However, most of them will fail eventually, sometimes in less
than 2 years. We have in the past used desktop PCs in equipment, usually in a Rittal PC enclosure (fan ventilated with filters).

We are attempting to both reduce the frequency of failure, and to reduce the time required to repair a computer system and get it running again without having to call a technician in during the night or on a weekend and without losing too much production (air freight to Europe can get horrendously expensive).

To accomplish this, we are looking to make the items which we have found to fail most frequently (such as hard drives and power supplies) easier to diagnose and replace. Our interest in passive back planes comes from recent difficulties we have had in getting replacement motherboards with the right combination of ISA and PCI slots.

At the moment I am in correspondence with the local Advantech rep (operating out of Toronto) with whom I intend to resolve the technical questions I have. I have never dealt with them before, so I would like to thank you and Bill Sturm for your opinions on the company itself and their products.

**********************
Michael Griffin
London, Ont. Canada
**********************

By Curt Wuollet on 29 October, 2001 - 1:34 pm

Hi Michael

Michael Griffin wrote:
>
> Curt Wuollet wrote:
> <clip>
> >Only if you are ill prepared enough to go on site and try to install
> >on random hardware. He can't possibly do that often or he would know
> >that even with "built for windows" the occasional nightmare occurs.
>
> This was their mistake - with the mitigating circumstance of
> the limited time available, and their PC supplier was late (of
> course).

I know this because I have had days like that. It's a very, very bad day, and what makes it really horrible is that there isn't much you can do after the fact. In fact, with closed systems sometimes there isn't anything you can do, period. That's one of the exhilarating things about developing with OSS on Linux: I have never had that sort of disaster. I have energetically researched a few problems on the verge of panic, but it's always worked out all right. The worst I've had to do was trade modems with a customer for a winmodem I trashed at the next rest stop.

>
> >I wouldn't
> >try this without a plan B. Either a known MB I could swap in or a
> >bare bones spare. Anyone that you pay to do PC's should be prepared
> >for this and at least know the configuration is going to work ahead
> >of time.
> <clip>

>
> A known motherboard? How do you do that? If you use "standard"
> office PC style motherboards, you generally find that they go obsolete
> so fast you can't buy two in a row that are exactly the same.
> You must either have a source of motherboards that changes
> their design very little over time, or you must have an operating
> system that is very tolerant of different hardware. I find Windows
> NT's fussiness particularly surprising considering that the
> motherboard suppliers (and the chip designers) are designing their
> products with Windows in mind.


This is much easier with Linux. I keep an FIC VA503+ with a K6-2-500 and an FIC PA2013 around, which is plenty for Linux. I suppose it'd choke on W2K or WinME. I have used these for about two years and they are still available. I'm researching the next generation. The trick was "Super Socket 7", a technology designed to extend the life of the MB to future processors and bus speeds. The prospects for the next generation aren't quite as good. I've avoided Slot A and Slot 1 processors as they are stopgaps. Soyo and the AMD Duron are the front runners so far. I research these things very carefully for the reasons you mention; it's nice if you don't have two dozen variations out in the field.

It's synergistic that the best Linux hardware is very generic and a year back from the leading edge. This happens to be the sweet spot for pricing also. The point is that you can have a sane PC operation if you work at it and manage the details. I enjoy peace and quiet far too much to do it the way most people do. And Linux is _very_ tolerant of hardware; it is written by folks with every type of system, from poor students with a 486, to poor engineers like me with a K6-2-400, to IBM and Compaq, who run it on machines that cost millions. Nothing else has so many qualified testers working on it. Windows, by necessity, lives on fairly new machines. I use very few "gee whiz" features on my test systems and that helps too.


>
> >The blame rests squarely on the integrator. Unfortunately, 99% of the
> >project was beyond his control. That's one of the major reasons I use
> >Linux and OSS. I can handle almost anything that can happen and
> >whatever your code does, it can't run any better than the platform
> >it's on.
>
> Yes in one sense, the blame lies with the integrator. However,
> I see the problems they routinely face, and they are not what I
> consider to be "value added" activities. These guys are supposed to be
> solving *testing* problems, not Windows problems. It isn't just one
> company, or just integrators. I see OEMs having the same difficulties.
> For example, they want to fix a bug, but they upgraded their
> compiler since the last time they worked on it. Now it's no longer
> compatible with the database, so that has to be upgraded. That in turn
> affects something else, which also needs an upgrade. Now we need a
> memory or hard drive upgrade because of all the software upgrades.
> One minor software incompatibility leads to a chain reaction
> of cascading upgrades through the whole system. They plan on one
> "minor" change and find themselves being pulled in by an undertow
> whose existence they never suspected. How do you plan anything in an
> environment like that?

You realize that that is an insane environment to develop on and support and just say NO. Then you get a $2.98 Linux CD with all the tools and libraries you will ever need and try to forget the bad old days. It works for me :^) If you're a programmer why would you put up with that?

It's strange, all those intelligent people and so very few see the obvious. You don't _have_ to play the Windows game anymore. You have a choice. It's hard for a while, but you'll be grinning soon.

Regards

cww

By Michael Griffin on 29 October, 2001 - 1:40 pm

Curt Wuollet wrote:
<clip>
>You realize that that is an insane environment to develop on and
>support and just say NO. Then you get a $2.98 Linux CD with all the
>tools and libraries you will ever need and try to forget the bad old
>days. It works for me :^) If you're a programmer why would you put up
>with that?
>
>It's strange, all those intelligent people and so very few see the
>obvious. You don't _have_ to play the Windows game anymore. You have a
>choice. It's hard for a while, but you'll be grinning soon.
<clip>

I believe you said you have created test systems. What development software do you use for test systems? There may be a lot of stuff available on the web, but separating the good from the bad is time consuming. I will classify items below for clarity.

1) Programming language (what compiler, etc.) for test applications in the 5k to 15k line size range.
2) What editor, debugger, and other tools seem to be popular for the language you would pick (this can be a personal choice, I realise)?
3) GUI screen designer - or do you do this the hard way? Software which lets you draw the screens interactively saves a lot of time.
4) GUI toolbox - numeric displays, strip charts, XY charts, bar indicators, sliders, buttons and knobs, etc.
5) Signal processing toolbox for things like FFTs, digital filtering, etc.
6) Mathematical toolbox for things like matrix algebra, array operations, and various other engineering related math stuff.
7) Some sort of low-end database for storing test parameters. Parsing out ASCII data files for this can be rather tiresome, so I guess a simple, low-overhead database would be good for this.
8) A database suitable for logging test results (no more than a couple of megabytes per day). Some sort of standard format would be preferred.
9) Serial comms library.
10) Anything else you want to mention? Some sort of interpreter is handy for the initial work of getting familiar with the boards and other hardware provided it can make the necessary library calls.
11) And of course, what Linux distribution, with what items installed on the final target?

I think I've covered all the major areas needed for test systems above. Anyone who wants to get their feet wet but doesn't know where to start would get a good idea of what to use from any answers you can provide. Anyone else who would like to make suggestions is welcome as well.
Drivers for standard data acquisition and other boards don't seem to be a big problem any more. A lot of hardware companies are offering Linux drivers for their products now.

**********************
Michael Griffin
London, Ont. Canada
**********************

By Curt Wuollet on 29 October, 2001 - 1:45 pm

Michael Griffin wrote:
>
> At 20:58 22/10/01 -0500, Curt Wuollet wrote:
> <clip>
> >You realize that that is an insane environment to develop on and support
> >and just say NO. Then you get a $2.98 Linux CD with all the tools and
> >libraries you will ever need and try to forget the bad old days. It
> >works for me :^) If you're a programmer why would you put up with that?
> >
> >It's strange, all those intelligent people and so very few see the
> >obvious. You don't _have_ to play the Windows game anymore. You have
> >a choice. It's hard for a while, but you'll be grinning soon.
> <clip>
>
> I believe you said you have created test systems. What development
> software do you use for test systems? There may be a lot of stuff available
> on the web, but separating the good from the bad is time consuming. I will
> classify items below for clarity.

It's all good :^) I'll stick to the free tools.

> 1) Programming language (what compiler, etc.) for test applications
> in the 5k to 15k line size range.

While there are at least a dozen languages included in the typical
Linux distribution, I use C. It's the clear choice for working with
hardware and small apps don't really benefit from OOP.
Java (real Java), Python, and C++ are also popular.

> 2) What editor, debugger, and other tools seem to be popular for
> the language you would pick (this can be a personal choice, I realise)?

Again, many choices, including several IDEs. I am a traditionalist: I use vi (editor), gcc (compiler), gdb or Xgdb (debugger), and occasionally Electric Fence (a memory bounds checker). The GNU tools are world class and widely used on many platforms, especially embedded. There is full profiling capability also, something that should be used more often
than it is.

> 3) GUI screen designer - or do you do this the hard way? Software
> which lets you draw the screens interactively saves a lot of time.

There are several of these available also; Glade and Visual Tcl come to mind. But our environment precludes mice or touchscreens, and our users are typically untrained. Push to test, red for fail, green for pass. Inline testers are often headless, with no monitor or keyboard. No need to run many megabytes of GUI code for this, so I use Ncurses. I have used TCL/Tk (sort of like VB) but it didn't add any functionality and caused more confusion than oohs and ahhs. Better to run in 8 MB of
RAM.

> 4) GUI toolbox - numeric displays, strip charts, XY charts, bar
> indicators, sliders, buttons and knobs, etc.

Glade, TCL/Tk, Python, or Java/Swing, depending on who's got the widgets. All are bound to X, which makes them ideal for local or remote display.
Unmatched Web capability if that's your thing. Full video capability for those machine vision apps.

> 5) Signal processing toolbox for things like FFTs, digital
> filtering, etc.
> 6) Mathematical toolbox for things like matrix algebra, array
> operations, and various other engineering related math stuff.

I typically write my own math, but that's just vanity. There is a Matlab clone and a truly dazzling assortment of scientific, statistical,
visualization, and hardcore number-crunching software available. Not much of this comes with the distributions, as few folks use it. But Linux is a favorite of scientists and engineers, and several sites offer libraries and algorithms with source for free. And if you are really hardcore you can run the huge body of code published in Fortran; GNU F77 is included. Gnuplot is great for graphing. It's an abundance of riches and costs you nothing. You can even design DSPs and PCBs for free.

> 7) Some sort of low end data base for storing test parameters.
> Parsing out ASCII data files for this can be rather tiresome, so I guess a
> simple, low overhead database woud be good for this.

mSQL, MySQL, Postgres, and InterBase for free; DB2, Oracle, and everything else if you want to pay money. No MS databases, but lots of ODBC gateways, etc. I tend to use the Berkeley DB tools for parameters, and results, SPC, etc. flow directly to the enterprise system via an NFS mount or a socket connection. Samba for those of
you who must connect to Microsoft.

> 8) A database suitable for logging test results (no more than a
> couple of megabytes per day). Some sort of standard format would be preferred.

SQL (real standards-compliant SQL) on any of the above. Postgres would be my choice, as it's fast and free.

> 9) Serial comms library.

They exist, but I have boilerplate that gives me extensive control. As this is fundamental to Linux and ports are handled as files, the system call interface and ioctls are really easier to use than, say, Greenleaf CommLib or equivalents under MS. This is core functionality for automation, and the capabilities are far greater
than any library could offer. It is extensively covered in almost any UNIX programming text. This capability is one of the biggest reasons I use Linux for integration. From NC tools that mention
punches and readers to PLC protocols that you have to reverse engineer, Linux talks to them all. It's well worth learning.

> 10) Anything else you want to mention? Some sort of interpreter is
> handy for the initial work of getting familiar with the boards and other
> hardware provided it can make the necessary library calls.

Even MS uses Perl for this. I, however, don't, as C is my native tongue; that is, I code faster in C than anything else. Most of the code I do for testers is tested and reusable, so I typically don't prototype much. And the UNIX model for drivers makes them very similar to talk to. This consistency is invaluable for code reuse.

> 11) And of course, what Linux distribution, with what items
> installed on the final target?

I use RedHat, 7.1 at the moment. Most code is portable across at least the last dozen versions. Some things are still changing, like video, and those drivers are version specific. But for the most part, Linux is Linux and any distribution will do. RedHat here and SuSE abroad is a good plan; Debian if you want the most philosophically pure. I load everything on a development machine, and often for deployment, to cover future needs. You can cut this down drastically, to DOS-size
proportions, even below that for embedded, depending on what you use. Since I deploy on standard PC hardware and you can't get a small HDD
anymore, I'm not very selective.


> I think I've covered the all major areas needed for test systems
> above. Anyone who wants to get their feet wet but doesn't know where to
> start would get a good idea of what to use from any answers you can provide.

Linux also has world-class networking, even IPv6, and there is Novell, Apple, wireless, ATM, ARCnet, etc. support. Nothing else to buy.

Support for fieldbus and such is hard to come by but will arrive soon. Proprietary is proprietary and it takes big bucks to join these clubs, especially the "open" ones. Not much you can do for free. Not very Open either.

There are a couple of books that are really good for starters:
Of course, all the "nutshell" guides from O'Reilly.
Linux Application Development by Erik Troan and another Red Hat guy.
And the Linux Programmer's Guide from the Linux Documentation Project.


The latter is free and available online.


> Anyone else who would like to make suggestions is welcome as well.
> Drivers for standard data aquisition and other boards doesn't seem
> to be a big problem any more. A lot of hardware companies are offering Linux
> drivers for their products now.
>
> **********************
> Michael Griffin
> London, Ont. Canada
> **********************

Thank you Michael, for asking the leading questions.

Regards

cww

By Jeffrey D. Brandt on 22 October, 2001 - 12:40 pm

Well, I always said that public domain software is worth what you pay for it (nothing, get it?). By that measure, the AB/R$ stuff should be worth its weight in gold. And wouldn't you expect at least something that works with ALL of AB/R$'s stuff?
The previous comment about "so they can pick up lots of work programming" is also out the window these days, because integrators are competing with
Allen-Bradley for work. Check out AB's web site. GTS used to be a VERY thinly veiled integration arm of AB..... NOW they (AB) tout their
'application programming assistance' as a chargeable service. Geez, as if business wasn't hard enough.

I'm with Mark. Software isn't the great dividing point that it once was. 'It ain't no ICOM' (for you kids: 'ICOM' was the name of the very good
company that originally produced the 'best ladder programming and documenting software ever known to man', later to be gobbled up by AB and called 'A.I.', and later to be canned in favor of R$Logix) used to be my mantra for pronouncing relative quality of programming packages like Omron's LSS and almost anything Windows based.
These days, there are some very good competitors out there: S7 (hated it, loved it), VersaMax (starting to not hate it so much), and the even
better 'FREE' tools. I remain as mystified as the rest on this list (except for the obvious AB
cheerleaders). We all know that we already pay more for AB because of their 'great support' (another topic), so.... WHY do we have to pay
a premium for almost everything else from this outfit? I guess we do because we will, and they do because they can. Remember, you can
always buy better, but you'll never pay more. And, BTW, I've already shared this with AB's management...........
Jeff

By Kinner, Russ on 25 October, 2001 - 5:54 pm

I certainly agree with Jeffery that ICOM was very good, but I worked with an even better programming package before ICOM. WRB Associates marketed their "LADDERS" package in the early 80's which allowed up to 15 lines of 13 characters for every logic element, free space commenting (anywhere there was a "white space" you could add comments), compatibility with
multiple vendors and the ability to copy logic between vendors (what many thought IEC 1131 was going to do), and there was more. Unfortunately,
they used the best and quite expensive computer available for the task at the time, a PDP-11/34. Between the $20K price tag and a platform that was
not mainstream, they quietly faded away by 1990. They just were too far ahead of the curve for most management to accept the cost vs. performance.

I worked on a Modicon 584 installed in a steel plant, programmed in LADDERS, and was able to combine the work from 6 different programmers into one processor. Today that is expected from a decent programming package, but in 1984 it saved countless hours. On our tight schedule I was able to have a number of programming tasks happening in parallel and combine them at the last minute.

I never heard what happened to Bill Boyd and his staff after the PC took over and the company folded. Anyone who might have an idea where the guys are can contact me off list.

Russ Kinner
AVCA Corporation
Maumee, OH, USA

After WRB folded at the end of 1990, a few of us picked up the pieces without the Boyds and took over support at MicroCODE Incorporated. http://www.mcode.com.

By Jake Brodsky on 19 October, 2001 - 9:46 am

So you must have seen this blurb on Slashdot too...

As I see it, the real problem here is social. Software is not a physical thing. Selling, licensing, copyrighting, and/or patenting are all ideas which don't quite fit the situation.

The major advantage of the GPL, as I see it, is that it gets around all these things by turning the situation on its head. It opens up the code and the development process to the whole world instead of restricting it to a small cabal of monkish programmers who have no time to experience the worlds where the software may propagate.

However the GPL is still a rough and raw concept. It has yet to gain much acceptance among business folk who don't understand what they're really buying when they choose a software platform. Ergo, the article you cited.

Until the world agrees that we have a new branch of intellectual property that simply doesn't fit any of the old models, and until we all agree what this model should be, we are going to have some very poor software because our motives for writing good software simply don't push us in a productive direction.

By Michael Griffin on 20 October, 2001 - 9:40 am

The article was interesting, but I think their idea that the subscription model presented there will put more control into the hands of
customers is just wishful thinking. A single customer really only has any leverage if they represent a significant piece of revenue for a software company and they have the ability to hold back payment until they are satisfied.

I don't think that the large software companies would be interested in the subscription business model unless they felt it would bring in more revenue at a lower cost to themselves. I could see customers (especially the smaller ones) being coerced into paying ever higher monthly fees for an irregular stream of updates of questionable value or utility.
The mainframe computer era operated almost entirely on the "subscription" model being proposed (it used to be called leasing), and I
don't recall hearing that very many software customers were satisfied with the results of that.

**********************
Michael Griffin
London, Ont. Canada
**********************

By Vladimir E. Zyubin on 22 October, 2001 - 3:44 pm

Hello List,

Excellent article. Thanks.

Just a remark:

The discussion rolls mostly around the M$ vs. Linux topic... but it seems to me the problem is deeper...

1. Base economic problem: to survive the vendors need to make _bad_ enough software... to keep their users in the unsatisfied state... etc.

2. Revolution problem: MS has begun Revolution. XP(eXtreme Programming), - the "new paradigm", - establishes the "bad enough software" philosophy as an official religion.

Vladimir.

==
OK. Hayek rulez... but what is the direction? :-)

--
Best regards,
Vladimir

By Alex Pavloff on 23 October, 2001 - 1:25 pm

> 1. Base economic problem: to survive the vendors need to make _bad_ enough
> software... to keep their users in the unsatisfied state... etc.

Well, again, what we have is hardware companies selling software. The software is secondary to the tasks, so "good enough" usually means things
that will work. Fixing those 5 year old bugs that "everyone knows about" and improving usability isn't a priority for something that's traditionally been an afterthought. "Bad software" is very rarely intentional.

> 2. Revolution problem: MS has begun Revolution. XP(eXtreme Programming), -
> the "new paradigm", - establishes the "bad enough software" philosophy as
an official
> religion.

The XP in Microsoft Windows XP and Office XP is marketing shorthand for "eXPerience".

This has nothing to do with the "Extreme Programming" methodology, which is also known as XP.

We're running out of acronyms. We'd better start using non-English characters to avoid confusion.

By Johan Bengtsson on 29 October, 2001 - 11:10 am

So if you buy a PLC and you get the source code for the PLCs internal operation with it (or even you didn't get it but you got a link to where you could download it if you like) would that PLC be worth less and harder for you to use because of that?


/Johan Bengtsson

----------------------------------------
P&L, Innovation in training
Box 252, S-281 23 Hässleholm SWEDEN
Tel: +46 451 49 460, Fax: +46 451 89 833
E-mail: johan.bengtsson@pol.se
Internet: http://www.pol.se/
----------------------------------------

By Ranjan Acharya on 1 November, 2001 - 9:17 am

Regarding Microsoft, readers may be interested in this quote from The Register:

"There are substantial numbers of people out there that openly despise Microsoft with an almost religious furor, describing it as a purveyor of garbage, devoid of any security knowledge, absorbed in an horrifying monopolistic quest for world domination. To them, Microsoft is a group of Evil Troglodytes on coke who want to make the world their company."

The next paragraph points out that most people are somewhere in the middle (IIS is particularly bad!). We need to strive for that kind of balance on The Automation List. Postings with questions regarding a problem with RSView, for example, need responses with direct solutions; not responses espousing the merits of WinCC (or vice versa). The same goes for Linux or Windows. Open-ended postings asking about direction or "what is the best" open the field for us to suggest alternative platforms, OSs et cetera. We are all quite aware of alternatives to the various OSs and platforms. When we have a problem with platform "A" switching to platform "B" is rarely a useful suggestion and comes across as being glib. Poor platforms end up in a sarcophagus eventually; they really don't need help.

Ranjan

By Michael Griffin on 2 November, 2001 - 3:57 pm

At 06:17 31/10/01 -0500, Ranjan Acharya wrote (quoting another source):
<clip>
>"There are substantial numbers of people out there that openly despise
>Microsoft ...
<clip>
>absorbed in an horrifying monopolistic quest for world domination.
>To them, Microsoft is a group of Evil Troglodytes ...
<clip>
>The next paragraph points out that most people are somewhere in the middle
>(IIS is particularly bad!). We need to strive for that kind of balance on
>The Automation List.
<clip>

So I guess the balanced point of view would be that Microsoft is only moderately evil, and it only wants to dominate part of the world
(presumably the part with all the money). I'm sure that Mr. Wuollet would be a gentleman and concede that much.
(I'm sorry about this comment, but I couldn't resist when you handed me a line like that!)

>Postings with questions regarding a problem with
>RSView, for example, need responses with direct solutions; not responses
>espousing the merits of WinCC (or vice versa). The same goes for Linux or
>Windows.
<clip>

Mr. Acharya has a good point. It sometimes pays to think twice before sending a message. We'll all be here the next day if you still think you have a point to make. I have tossed more than a few of my own messages in the scrap bin rather than sending them.

The Moving Finger writes; and, having writ,
Moves on: nor all thy Piety nor Wit
Shall lure it back to cancel half a Line,
Nor all thy Tears wash out a Word of it.
(Omar Khayyam)

**********************
Michael Griffin
London, Ont. Canada
**********************

By Curt Wuollet on 4 November, 2001 - 5:54 pm

Hi Michael

All my effort since joining the automation list has been advocating some sort of balance. Even with the LPLC and AutomationX it's 666:2.

> <clip>
>
> So I guess the balanced point of view would be that Microsoft is
> only moderately evil, and it only wants to dominate part of the world
> (presumably the part with all the money). I'm sure that Mr. Wuollet would be
> a gentleman and concede that much.

Absolutely. So far they haven't dominated the Lake Superior Agate business, and they've left ditch digging and septic pumping alone, along with cutting pulpwood and rock farming. So they don't have everything in Mille Lacs county. I do have to evict them to use any computer, and it's tough doing automation without patronizing them. But yes, it's only a problem if I'd like to make a living. It'd sure be nice if I didn't have to build my own PLCs to have reliable tools and run Linux.

Regards

cww

By Vales, John on 2 November, 2001 - 4:16 pm

Ranjan, I really like your reply. It is short, sweet, and to the point. I wholeheartedly agree that pointing out how "Linux would have done
a better job" in response to a question for help on RSView is a non-answer at best, and a sharp stick in the eye at worst.

The question that begins with "I've got a problem with thus and such..." implies that all the design and selection decisions have already
been made long ago ... Let's *help* that person; not guffaw and tell them "well, if only you'd done this or that, you wouldn't be in this spot".

Regards,
jfv

John F. Vales
jfv@wes-tech.com

By Joe Jansen/ENGR/HQ/KEMET/US on 9 November, 2001 - 2:20 pm

In an effort to turn this into a useful thread:

A couple years ago, I was at the NMW show in Chicago, and saw an Omron PLC that had been built onto a PCI card. The processor lived inside the PC and controlled I/O using various fieldbus flavors. The PC was able to access PLC memory natively through the PCI bus.

I was wondering if anyone has ever used or seen one of these in operation. We just got approval for some prototype equipment where we plan on using Omron PLC's with DeviceNet, and I thought that if nothing else, this would be an interesting academic exercise to investigate.

Thanks!

--Joe Jansen

By Anthony Kerstens on 11 November, 2001 - 12:54 pm

Modicon had something similar. It was a 984 processor combined with an SA85 (Modbus Plus) card. It could be programmed using the same software as their other processors, but was limited in memory as it was contained on the card
and isolated from PC hardware. It was not a PC PLC, but a PLC residing on a PC card.

The only case where I've seen one was on a line I upgraded 4 years ago. I ripped it out and put in a Modicon Quantum processor. The main complaint against it was that it had to go out to MB+ to get its I/O, which was too slow for certain points.
The Quantum had direct access to I/O in its own rack.

Mind you, something like that would be great as a testing/simulation tool.


That said, with PLC's incorporating ethernet now, I wouldn't even think about using something like that. I'm currently happy with flipping bits and doing small amounts of math with a PLC, and letting a PC deal with sql, recipes, and other such stuff. From my perspective, if a database or PC goes down, the machine must still run.

Another concern is several instances I've had where memory and HDs were stolen from locked-up plant floor PCs on midnight shift (presumably by someone with keys). It's not a software quality issue, but software cannot be considered
in a complete vacuum. The choice to go with a PLC is just as much about hardware as it is about software, if not more.

Anthony Kerstens P.Eng.

By Nathan Boeger on 28 October, 2006 - 12:19 am

Interesting article! The post is dated, but still very relevant. In my opinion consumers shouldn't put up with paying forced "maintenance fees". I think they should be able to reasonably expect their software to work properly, and bug fixes should be included. For example, Inductive Automation is paving the way with industrial (SQL-database-driven, web-launched SCADA) software that is reasonably priced and that you don't keep paying for again and again. FactorySQL and FactoryPMI go above and beyond by including unlimited upgrades, including support, and not charging artificial fees (concurrent clients, tag count, developer software, etc.).

The industrial software business is dated in its practices. HMI programs are getting bigger and more bloated instead of tighter and more efficient. These companies should take a hint from the open source community. They should seek to lower their lines of code. They should leverage existing, proven technologies (databases, reporting engines, networking protocols, GUI tools, etc.) to avoid writing their own buggy crap whenever possible. Low-cost, high-quality software that gets the job done is not too much to ask in this industry. If integrators and manufacturers quit supporting this practice we'd have moved a giant step in the right direction.

--
Nathan Boeger
http://www.inductiveautomation.com

Bravo Nathan!