Software Quality

Hi Michael

Michael Griffin wrote:
>
> Curt Wuollet wrote:
> <clip>
> >Only if you are ill prepared enough to go on site and try to install
> >on random hardware. He can't possibly do that often or he would know
> >that even with "built for windows" the occasional nightmare occurs.
>
> This was their mistake - with the mitigating circumstance of
> the limited time available, and their PC supplier was late (of
> course).

I know this because I had days like that. It's a very, very bad day, and what makes it really horrible is that there isn't much you can do after the fact. In fact, with closed systems sometimes there isn't anything you can do, period. That's one of the exhilarating things about developing with OSS on Linux. I have never had that sort of disaster. I have energetically researched a few problems on the verge of panic, but it's always worked out all right. The worst I've had to do was trade modems with a customer for a winmodem, which I trashed at the next rest stop.

>
> >I wouldn't
> >try this without a plan B. Either a known MB I could swap in or a
> >bare bones spare. Anyone that you pay to do PC's should be prepared
> >for this and at least know the configuration is going to work ahead
> >of time.
> <clip>

>
> A known motherboard? How do you do that? If you use "standard"
> office PC style motherboards, you generally find that they go obsolete
> so fast you can't buy two in a row that are exactly the same.
> You must either have a source of motherboards that changes
> their design very little over time, or you must have an operating
> system that is very tolerant of different hardware. I find Windows
> NT's fussiness particularly surprising considering that the
> motherboard suppliers (and the chip designers) are designing their
> products with Windows in mind.


This is much easier with Linux. I keep an FIC VA503+ with a K6-2-500 and an FIC PA2013 around; that's plenty for Linux, though I suppose it'd choke on W2K or WinME. I have used these for about two years and they are still available. The trick was "Super Socket 7", a technology designed to extend the life of the motherboard to future processors and bus speeds.

I'm researching the next generation, and the prospects aren't quite as good. I've avoided Slot A and Slot 1 processors as they are stopgaps. Soyo boards and the AMD Duron are the front runners so far. I research these things very carefully for the reasons you mention: it's nice if you don't have two dozen variations out in the field. It's synergistic that the best Linux hardware is very generic and a year back from the leading edge, which also happens to be the sweet spot for pricing. The point is that you can have a sane PC operation if you work at it and manage the details. I enjoy peace and quiet far too much to do it the way most people do.

And Linux is _very_ tolerant of hardware. It is written by folks who have every type of system, from poor students with a 486, to poor engineers like me with a K6-2-400, to IBM and Compaq who run it on machines that cost millions. Nothing else has so many qualified testers working on it. Windows, by necessity, lives on fairly new machines. I use very few "gee whiz" features on my test systems, and that helps too.


>
> >The blame rests squarely on the integrator. Unfortunately, 99% of the
> >project was beyond his control. That's one of the major reasons I use
> >Linux and OSS. I can handle almost anything that can happen and
> >whatever your code does, it can't run any better than the platform
> >it's on.
>
> Yes in one sense, the blame lies with the integrator. However,
> I see the problems they routinely face, and they are not what I
> consider to be "value added" activities. These guys are supposed to be
> solving *testing* problems, not Windows problems. It isn't just one
> company, or just integrators. I see OEMs having the same difficulties.
> For example, they want to fix a bug, but they upgraded their
> compiler since the last time they worked on it. Now it's no longer
> compatible with the database, so that has to be upgraded. That in turn
> affects something else, which also needs an upgrade. Now we need a
> memory or hard drive upgrade because of all the software upgrades.
> One minor software incompatibility leads to a chain reaction
> of cascading upgrades through the whole system. They plan on one
> "minor" change and find themselves being pulled in by an undertow
> whose existence they never suspected. How do you plan anything in an
> environment like that?

You realize that that is an insane environment to develop on and support and just say NO. Then you get a $2.98 Linux CD with all the tools and libraries you will ever need and try to forget the bad old days. It works for me :^) If you're a programmer why would you put up with that?

It's strange, all those intelligent people and so very few see the obvious. You don't _have_ to play the Windows game anymore. You have a choice. It's hard for a while, but you'll be grinning soon.

Regards

cww
 

Michael Griffin

Curt Wuollet wrote:
<clip>
>You realize that that is an insane environment to develop on and
>support and just say NO. Then you get a $2.98 Linux CD with all the
>tools and libraries you will ever need and try to forget the bad old
>days. It works for me :^) If you're a programmer why would you put up
>with that?
>
>It's strange, all those intelligent people and so very few see the
>obvious. You don't _have_ to play the Windows game anymore. You have a
>choice. It's hard for a while, but you'll be grinning soon.
<clip>

I believe you said you have created test systems. What development software do you use for test systems? There may be a lot of stuff available on the web, but separating the good from the bad is time consuming. I will classify items below for clarity.

1) Programming language (what compiler, etc.) for test applications in the 5k to 15k line size range.
2) What editor, debugger, and other tools seem to be popular for the language you would pick (this can be a personal choice, I realise)?
3) GUI screen designer - or do you do this the hard way? Software which lets you draw the screens interactively saves a lot of time.
4) GUI toolbox - numeric displays, strip charts, XY charts, bar indicators, sliders, buttons and knobs, etc.
5) Signal processing toolbox for things like FFTs, digital filtering, etc.
6) Mathematical toolbox for things like matrix algebra, array operations, and various other engineering related math stuff.
7) Some sort of low end database for storing test parameters. Parsing out ASCII data files for this can be rather tiresome, so I guess a simple, low overhead database would be good for this.
8) A database suitable for logging test results (no more than a couple of megabytes per day). Some sort of standard format would be preferred.
9) Serial comms library.
10) Anything else you want to mention? Some sort of interpreter is handy for the initial work of getting familiar with the boards and other hardware provided it can make the necessary library calls.
11) And of course, what Linux distribution, with what items installed on the final target?

I think I've covered all the major areas needed for test systems above. Anyone who wants to get their feet wet but doesn't know where to start would get a good idea of what to use from any answers you can provide. Anyone else who would like to make suggestions is welcome as well.
Drivers for standard data acquisition and other boards don't seem to be a big problem any more. A lot of hardware companies are offering Linux drivers for their products now.



**********************
Michael Griffin
London, Ont. Canada
**********************
 
Michael Griffin wrote:
>
> At 20:58 22/10/01 -0500, Curt Wuollet wrote:
> <clip>
> >You realize that that is an insane environment to develop on and support
> >and just say NO. Then you get a $2.98 Linux CD with all the tools and
> >libraries you will ever need and try to forget the bad old days. It
> >works for me :^) If you're a programmer why would you put up with that?
> >
> >It's strange, all those intelligent people and so very few see the
> >obvious. You don't _have_ to play the Windows game anymore. You have
> >a choice. It's hard for a while, but you'll be grinning soon.
> <clip>
>
> I believe you said you have created test systems. What development
> software do you use for test systems? There may be a lot of stuff available
> on the web, but separating the good from the bad is time consuming. I will
> classify items below for clarity.

It's all good :^) I'll stick to the free tools.

> 1) Programming language (what compiler, etc.) for test applications
> in the 5k to 15k line size range.

While there are at least a dozen languages included in the typical
Linux distribution, I use C. It's the clear choice for working with
hardware and small apps don't really benefit from OOP.
Java (real Java), Python, and C++ are also popular.

> 2) What editor, debugger, and other tools seem to be popular for
> the language you would pick (this can be a personal choice, I realise)?

Again, there are many choices, including several IDEs. I am a traditionalist: I use vi (editor), gcc (compiler), gdb or Xgdb (debugger) and occasionally Electric Fence (memory bounds checker). The GNU tools are world class and widely used on many platforms, especially embedded. There is full profiling capability also, something that should be used more often than it is.
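
(For anyone who hasn't tried the profiler, here is a minimal sketch of the usual gprof workflow. The toy program and the commands in its comments are illustrative assumptions, not anything from the original post.)

/* Sketch of GNU profiling: compile with -pg, run, then inspect.
 *   gcc -pg -o tester tester.c
 *   ./tester
 *   gprof tester gmon.out | less
 */
#include <stdio.h>

/* Deliberately slow recursive function so gprof has something to attribute. */
static long fib(int n)
{
    return n < 2 ? n : fib(n - 1) + fib(n - 2);
}

int main(void)
{
    printf("fib(32) = %ld\n", fib(32));
    return 0;
}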

> 3) GUI screen designer - or do you do this the hard way? Software
> which lets you draw the screens interactively saves a lot of time.

There are several of these available also; Glade and Visual Tcl come to mind. But our environment precludes mice or touchscreens, and our users are typically untrained: push to test, red for fail, green for pass. Inline testers are often headless, with no monitor or keyboard. There is no need to run many megabytes of GUI code for this, so I use Ncurses. I have used Tcl/Tk (sort of like VB), but it didn't add any functionality and caused more confusion than oohs and ahhs. Better to run in 8 MB of RAM.
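
(To illustrate how little code a push-to-test screen needs, here is a hedged ncurses sketch. run_test() is a hypothetical stand-in for the real measurement code, and the layout and key handling are invented for the example; link with -lncurses.)

#include <ncurses.h>

static int run_test(void)
{
    return 1;                        /* pretend the unit under test passed */
}

int main(void)
{
    initscr();
    cbreak();
    noecho();
    if (has_colors()) {
        start_color();
        init_pair(1, COLOR_GREEN, COLOR_BLACK);   /* pass */
        init_pair(2, COLOR_RED, COLOR_BLACK);     /* fail */
    }
    mvprintw(2, 2, "Press any key to test, 'q' to quit");
    refresh();

    int ch;
    while ((ch = getch()) != 'q') {
        int pass = run_test();
        attron(COLOR_PAIR(pass ? 1 : 2));
        mvprintw(4, 2, pass ? "PASS" : "FAIL");
        attroff(COLOR_PAIR(pass ? 1 : 2));
        refresh();
    }
    endwin();
    return 0;
}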

> 4) GUI toolbox - numeric displays, strip charts, XY charts, bar
> indicators, sliders, buttons and knobs, etc.

Glade, Tcl/Tk, Python or Java/Swing, depending on who's got the widgets. All are bound to X, which makes them ideal for local or remote display. Unmatched web capability if that's your thing, and full video capability for those machine vision apps.

> 5) Signal processing toolbox for things like FFTs, digital
> filtering, etc.
> 6) Mathematical toolbox for things like matrix algebra, array
> operations, and various other engineering related math stuff.

I typically write my own math, but that's just vanity. There is a Matlab clone and a truly dazzling assortment of scientific, statistical, visualization and hardcore number-crunching software available. Not too much of this comes with the distributions, as few folks use it. But Linux is a favorite of scientists and engineers, and several sites offer libraries and algorithms with source for free. And if you are really hardcore you can run the huge body of code published in Fortran; GNU F77 is included. Gnuplot is great for graphing. It's an abundance of riches and costs you nothing. You can even design DSPs and PCBs for free.

> 7) Some sort of low end database for storing test parameters.
> Parsing out ASCII data files for this can be rather tiresome, so I guess a
> simple, low overhead database would be good for this.

mSQL, MySQL, PostgreSQL and Interbase for free; DB2, Oracle, and everything else if you want to pay money. No MS databases, but lots of ODBC gateways, etc. I tend to use the Berkeley DB tools for parameters; results, SPC data, etc. flow directly to the enterprise system via an NFS mount or a socket connection. Samba for those of you that must connect to Microsoft.
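
(A hedged sketch of stashing one test parameter with the classic 1.85-style Berkeley DB API follows. The file name, key and value are invented for illustration, and depending on the DB release the header is <db.h> or <db_185.h>; link with -ldb where needed.)

#include <sys/types.h>
#include <fcntl.h>
#include <string.h>
#include <stdio.h>
#include <db.h>                        /* or <db_185.h> on newer systems */

int main(void)
{
    /* Hash file holding name/value parameter pairs. */
    DB *db = dbopen("params.db", O_CREAT | O_RDWR, 0644, DB_HASH, NULL);
    if (db == NULL) {
        perror("dbopen");
        return 1;
    }

    char name[]  = "torque_limit_nm";   /* hypothetical parameter name  */
    char value[] = "12.5";              /* hypothetical parameter value */

    DBT key, val;
    memset(&key, 0, sizeof key);
    memset(&val, 0, sizeof val);
    key.data = name;
    key.size = sizeof name;
    val.data = value;
    val.size = sizeof value;

    if (db->put(db, &key, &val, 0) != 0)   /* flag 0 = overwrite if present */
        perror("put");

    db->close(db);
    return 0;
}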

> 8) A database suitable for logging test results (no more than a
> couple of megabytes per day). Some sort of standard format would be preferred.

SQL (real standards-compliant SQL) on any of the above. PostgreSQL would be my choice, as it's fast and free.
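
(For a concrete flavour, here is a hedged sketch of logging one result row to PostgreSQL through its C client library, libpq. The database name, table and columns are assumptions made up for the example; build with something like gcc log_result.c -lpq.)

#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    /* Connection string is an assumption; adjust host/user/dbname to suit. */
    PGconn *conn = PQconnectdb("dbname=testlog");
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connect failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    /* Hypothetical table: results(serial_no text, leak_rate real, passed boolean) */
    PGresult *res = PQexec(conn,
        "INSERT INTO results VALUES ('SN0001', 0.03, true)");
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        fprintf(stderr, "insert failed: %s", PQerrorMessage(conn));

    PQclear(res);
    PQfinish(conn);
    return 0;
}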

> 9) Serial comms library.

They exist, but I have boilerplate that gives me extensive control. As this is fundamental to Linux and ports are handled as files, the system call interface and ioctls are really easier to use than, say, Greenleaf CommLib or equivalents under MS. This is core functionality for automation, and the capabilities are far greater than any library could offer. It is extensively covered in almost any UNIX programming text. This capability is one of the biggest reasons I use Linux for integration. From NC tools that still mention punches and readers to PLC protocols that you have to reverse engineer, Linux talks to them all. It's well worth learning.
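
(A hedged sketch of that "ports are files" approach: open the port, set 9600 8N1 raw mode with termios, send a poll and read the reply. The device name and the poll byte are purely illustrative assumptions.)

#include <sys/types.h>
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>
#include <stdio.h>

int main(void)
{
    int fd = open("/dev/ttyS0", O_RDWR | O_NOCTTY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    struct termios tio;
    tcgetattr(fd, &tio);
    cfsetispeed(&tio, B9600);
    cfsetospeed(&tio, B9600);
    tio.c_cflag = (tio.c_cflag & ~(CSIZE | PARENB | CSTOPB)) | CS8 | CLOCAL | CREAD;
    tio.c_lflag = 0;                   /* raw mode: no canonical input, no echo */
    tio.c_iflag = 0;
    tio.c_oflag = 0;
    tio.c_cc[VMIN]  = 0;               /* read() returns on data or...          */
    tio.c_cc[VTIME] = 10;              /* ...after a 1 second timeout           */
    tcsetattr(fd, TCSANOW, &tio);

    write(fd, "\x05", 1);              /* ENQ-style poll, purely illustrative   */

    char buf[256];
    ssize_t n = read(fd, buf, sizeof buf - 1);
    if (n > 0) {
        buf[n] = '\0';
        printf("reply: %s\n", buf);
    }
    close(fd);
    return 0;
}

The same open/read/write/ioctl pattern carries over unchanged to USB-serial adapters and multiport cards, which is much of the appeal.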

> 10) Anything else you want to mention? Some sort of interpreter is
> handy for the initial work of getting familiar with the boards and other
> hardware provided it can make the necessary library calls.

Even MS uses Perl for this. I, however, don't, as C is my native tongue; that is, I code faster in C than in anything else. Most of the code I do for testers is tested and reusable, so I typically don't prototype much. And the UNIX model for drivers makes them very similar to talk to. This consistency is invaluable for code reuse.

> 11) And of course, what Linux distribution, with what items
> installed on the final target?

I use Red Hat, 7.1 at the moment. Most code is portable across at least the last dozen versions. Some things are still changing, like video, and those drivers are version specific. But, for the most part, Linux is Linux and any distribution will do. Red Hat here and SuSE abroad is a good plan; Debian if you want the most philosophically pure. I load everything on a development machine, and often for deployment, to cover future needs. You can cut this down drastically, to DOS-size proportions, even below that for embedded, depending on what you use. Since I deploy on standard PC hardware and you can't get a small HDD anymore, I'm not very selective.


> I think I've covered all the major areas needed for test systems
> above. Anyone who wants to get their feet wet but doesn't know where to
> start would get a good idea of what to use from any answers you can provide.

Linux also has world class networking, with even IPv6, and there is Novell, Apple, wireless, ATM, ARCnet, etc. support. Nothing else to buy.

Support for fieldbus and such is hard to come by but will arrive soon. Proprietary is proprietary and it takes big bucks to join these clubs, especially the "open" ones. Not much you can do for free. Not very Open either.

There are a couple of books that are really good for starters:
Of course, all the "Nutshell" guides from O'Reilly.
Linux Application Development, by Erik Troan and Michael K. Johnson (both Red Hat guys).
And the Linux Programmer's Guide from the Linux Documentation Project.


The latter is free and available online.


> Anyone else who would like to make suggestions is welcome as well.
> Drivers for standard data acquisition and other boards don't seem
> to be a big problem any more. A lot of hardware companies are offering Linux
> drivers for their products now.
>
> **********************
> Michael Griffin
> London, Ont. Canada
> **********************

Thank you, Michael, for asking the leading questions.

Regards

cww
 
Johan Bengtsson:
> >So if you buy a PLC and you get the source code for the PLCs internal
...
> >would that PLC be worth less and harder for you to use because of that?

Michael Griffin:
> I believe the discussion was on the cost of assembling, configuring,
> testing, and documenting your own controller versus buying one with that
> already done (e.g. as with the PLC you mentioned).

I believe someone was conflating Open Source with build-it-yourself, which is what Johan was trying to correct.

The two are mostly unrelated.

> My original message (from which Mr. McGilvray quoted) made the point that
> someone using a controller which has open source software would probably
> be better off buying an off the shelf system rather than putting their
> own together (at least for typical PLC type applications).

Definitely.

> This means there should be a market for OEMs (although not necessarily
> traditional PLC manufacturers) to build these systems. I'm sure I could
> do it myself, but why would I want to - or rather, why would anyone want
> to pay me to do so?

The option to do so will make the OEMs more responsive, even if you never take it up. And, of course, there are the rare times when you do need to, or when you need to make a minor alteration to the supplied package.

> The last time I bought a car, I just went to the dealer and bought a car.
> I didn't shop around at the wreckers' to find the parts to make my own.
> I think that most people would do the same.

No, but neither did the car have a big lock on the hood that only authorized servicepeople can open.

You *can* open the car yourself to check the oil (or change the cylinder head gasket), and so can the corner mechanic. You probably don't, but the mere option makes the cost of service much lower and the quality better.


Jiri
--
Jiri Baum <[email protected]> http://www.csse.monash.edu.au/~jirib
MAT LinuxPLC project --- http://mat.sf.net --- Machine Automation Tools
tlhIngan Hol jatlhlaHchugh ghollI' Hov leng ngoDHommey'e' not yISuD
Never bet on Star Trek trivia if your opponent speaks Klingon. --Kung Foole
 
I, for one, don't want the big thick manual. Give me a CD with files in PDF format and let me decide what to print, if anything. The later PDF readers have a powerful search capability built in, which makes it very easy to find what you need and get on with the work. I can read PDF files on any machine that I have.

The .PS files are another story. I do have a reader, but it is only on one machine. These files are hard to print in a Windows environment.

I agree with the comment on application specific documentation. Also, anything that requires the installation of a "reader" program is no good.

Jerry Miille
 

Johan Bengtsson

You definitely have a point there.

Could we agree that the following scale (from best to worst) defines the value regarding this point:

1. Good documentation with sources
2. Good documentation without sources
3. Bad or no documentation with sources
4. Bad or no documentation without sources

The gap between 2 and 3 would probably be very large for most users, and the gap between 1 and 2 very small.

You can never document everything as fully as you can by *also* supplying the sources, but as you say, supplying sources should never be an excuse for less documentation. Even if you cannot use the source yourself as a way of getting more information, you can always hire someone else for those rare cases that are left.


Then there is the question of the form of documentation (paper, PDF, Windows help, HTML, whatever), but as I see it this is an entirely separate question from open vs. closed source.


/Johan Bengtsson

----------------------------------------
P&L, Innovation in training
Box 252, S-281 23 Hässleholm SWEDEN
Tel: +46 451 49 460, Fax: +46 451 89 833
E-mail: [email protected]
Internet: http://www.pol.se/
----------------------------------------
 

David McGilvray

The fact that a product is open source does not make it inherently different from a proprietary alternative, except, of course, the cost. An off the shelf PLC may be proprietary or open source. Of course, from a practical point of view, an open source PLC option would require proprietary hardware (PC and I/O). My expectation is that the open source PuffinPLC, when completed, will be similar to automationX: a software based system - one open source and the other proprietary - that is fully functional, tested, documented, etc. In fact, software based systems are typically lower cost alternatives to the traditional ones.


David McGilvray
automationX
www.mnrcan.com
 
How would you feel about a compromise? Documentation in PDF or PostScript that you can convert into a big thick paper manual at your leisure. I think a lot of the gripes about online docs are that they are in file formats that are essentially useless offline. An actual book that can be used either way solves a lot of those issues for me, as I also like the attributes of hardcopy. Carrying full documentation on a CD in your notebook case has quite a bit of merit too.

The absolute worst, less-than-useless documentation for me is the stuff you can only access in the application. Various Microsoft formats vie for that distinction, as they are useless to me, and often useless unless you have a fairly specific set of expensive software on every machine you work on. Unfortunately these are the most prevalent types, possibly because they are convenient for the technical writer in a cubicle somewhere. PostScript is great on Linux, but Microsoft went through a great deal of trouble to avoid it as a standard; are there convenient means to view it in Windows? I know PDF is fairly universal, as Adobe provides free readers.

HTML is pretty good also, but the typography and printability are worse than PS or PDF. I'm curious: are PDF and/or PostScript good for everybody else? Perhaps the vendors (and our project) will be listening.

Regards

cww
 
You seem to be describing DocBook. I used it a couple of years ago. You write once and publish to HTML, PostScript, and other formats. It creates hyperlinks, tables of contents, etc.

It formats the document both for the web and for hardcopy. I know someone brought this up before, but I did not follow the discussion to its conclusion. Is there a reason why DocBook is an inadequate or unacceptable solution?

regards

Rick Jafrate
 

Michael Griffin

David McGilvray wrote:
<clip>
>The fact that a product is open source does not make it inherently different
>from a proprietary alternative, except, of course, the cost.

I agree that in either case, the products must be judged upon their own merits, not upon their provenance. I think however, that it is a mistake to concentrate too much upon the cost (or rather the price) of open source software. In any project there are many costs, and they all must be
considered together.

>An off the
>shelf PLC may be proprietary or open source. Off course, from a practical
>point of view, an open source PLC option would require proprietary hardware
>(PC and I/O).

I believe my original example went much further and postulated open source software running on entirely proprietary hardware in an off the shelf combination. Something which really hasn't been mentioned too much is that if a variety of different hardware ran the same "soft logic" system the end customer would be able to separate the issue of what hardware to use from
what software to use.

The "vendor lock-in" which we sometimes hear about comes more from software than from hardware. This is why for example Siemens went to such effort to keep the basic software architecture of the S7-300/400 series broadly similar to the S5 series even though the hardware is completely different. The S7-300/400 would probably have been a better PLC if it had made a cleaner break with the past, but Siemens wanted to make it easier for S5 customers to switch to the S7-300/400 than to something else. (And no
doubt some customers appreciated this continuity as well.)

>My expectation is the OS PuffinPLC, when completed, will be similar to
>automationX : a software based system - one OS & the other proprietary -
>that is a fully functional, tested, documented, etc. In fact, software
>based systems are typically lower cost alternatives to the traditional
>alternatives.
<clip>

I come to a slightly different conclusion as to application area, although admittedly on fewer facts than you have. My own impression is that AutomationX is oriented more towards larger systems, while the "PuffinPLC" (or MAT, or whatever) will concentrate on smaller ones. I see it as a difference in emphasis rather than potential. It also means that I don't see the two as direct competitors.

With smaller systems, there is a strong emphasis on minimising the engineering man hours and on buying pre-engineered sub-systems (e.g. - the controller). This is why I see a market for "off the shelf" controllers, even with ones which use open source software.

The situation may be different with very large systems - but I don't have much experience in that field to base an opinion on.


**********************
Michael Griffin
London, Ont. Canada
**********************
 

Ranjan Acharya

Regarding Microsoft, readers may be interested in this quote from The Register:

"There are substantial numbers of people out there that openly despise Microsoft with an almost religious furor, describing it as a purveyor of garbage, devoid of any security knowledge, absorbed in an horrifying monopolistic quest for world domination. To them, Microsoft is a group of Evil Troglodytes on coke who want to make the world their company."

The next paragraph points out that most people are somewhere in the middle (IIS is particularly bad!). We need to strive for that kind of balance on The Automation List. Postings with questions regarding a problem with RSView, for example, need responses with direct solutions; not responses espousing the merits of WinCC (or vice versa). The same goes for Linux or Windows. Open-ended postings asking about direction or "what is the best" open the field for us to suggest alternative platforms, OSs et cetera. We are all quite aware of alternatives to the various OSs and platforms. When we have a problem with platform "A" switching to platform "B" is rarely a useful suggestion and comes across as being glib. Poor platforms end up in a sarcophagus eventually; they really don't need help.

Ranjan
 

Michael Griffin

There has actually been some serious academic research into the two types of documentation formats (book style versus on-line "link" type help). I believe it also addressed printed books versus viewing the exact
equivalent on a screen.

The conclusion was that a printed book was slightly better than the exact same format viewed on a screen. It was believed that this was due to the book being easier to see (larger format, better viewing contrast). The difference between printed versus electronic though, was not very large.

There was a significant difference though between book format (in electronic form) and the typical "help" format you are referring to. The
book format was found to be much better for actually learning something new. While the "help" format was very poor for learning new subjects, it was convenient for looking up short references to previously learned subjects.

The conclusion was that there is a place for both formats. The degree of emphasis should depend upon how much learning is expected to be
done, versus simply looking up previously known facts.


**********************
Michael Griffin
London, Ont. Canada
**********************
 

Curt Wuollet

Yes. One reason is that we can't get the vendors to use it :^) DocBook would suit our (MAT) purposes well if anybody knows it, but it is not likely to be adopted by the majors anytime soon.
Are there decent free authoring tools? We won't convince anyone to do a markup language by hand. That's not criticism, I'm asking. Perhaps now that StarOffice uses XML, there will be filters to convert. Programmers are more likely to work with low level tools, but tech writers have large time investments in particular systems and are very difficult to switch. Many still use XyWrite.

The reason I started with PS and PDF is that many browsers and word processors can export PS, which can be converted to PDF. Although I've dabbled with TeX and LaTeX, and they do beautiful work, they are not the first choice for tech writers in this market. I guess I don't care how they create content as long as they publish it in some format that doesn't require their product or a full blown Windows installation to read the one paragraph I care about.

Regards

cww
 

Lillie, Dave

On the documentation sub-thread:

While PDF provides great printing capability, it suffers from some of the same proprietary usability issues as Microsoft's legacy WinHelp and its present compiled HTML (.chm) doc format. The issue is that these proprietary formats are either difficult, slow, or $expensive to use from an independent search program's perspective.

Why is independent search important? The ability to "text search" for user-supplied keywords across a collection of documentation/manuals is an amazing productivity enhancement. It is very rare for any of today's systems to come exclusively from one vendor, so the ability to integrate documentation search across an ad hoc collection of various vendor docs - including your own custom documentation - will become increasingly important as evolution continues (i.e. as young "Google savvy" engineers and customers begin to demand productive documentation design).

There is one more major issue besides index/search which demands attention: document design. There is a difference between the design of a hard printed manual and the design of a hypertext based (or web based) document. As we all know, "refer to chapter 12 section 3" in a hard printed manual is a nuisance, while the equivalent hypertext link (in proper context) in an HTML document is a wonderful thing that reduces the clutter of "optional" or "obnoxious clarification" information, tables, etc. Hypertext clearly aids user comprehension through its ability to encapsulate supplementary information and naturally cross reference associations. Hypertext also provides the ability to bind to dynamic, up-to-date (i.e. relevant) information such as catalogs, news, and events.

Unfortunately a solution that adequately addresses the requirements for both hard and soft documents is not here yet. I do not believe that the techwriting community has agreed upon an "open system" documentation design that both embraces hypertext and provides an organized way to print out the procedures, configuration tables, and job aids that are needed to operate offline. If the techwriting community embraces, and abides by, the W3C, good things should be happening soon. If, on the other hand, it gets hooked into the old Macintosh print format (PDF) or the next "gimme your wallet" Microsoft scheme, it is going to be a long bumpy ride.

Hopefully this issue can be resolved using W3C standards and an open source application implementation that prevents certain vendors from convoluting it. (Note: application open source extends past Linux to include Windows and Unix. Examples: NetBeans, Castor, JBoss, PostgreSQL. I had to clarify, as I have seen postings on this list that use Linux and Open Source as synonyms.)

I do not claim to be a documentation expert. I have had experience managing a program that attempted to fully integrate the documentation, help, and website of 14 independently developed products, and experience managing a program containing open source applications and development tools. I question whether .hlp, .chm, .pdf, or .vendorwhatever will help us get where we need to go. I hope that some form of XML-compatible HTML, combined with CSS and an open, standards body endorsed reference implementation, will materialize.

The documentation issue extends well past the boundaries of Manufacturing or Industrial Control. It will most likely be solved by the general software industry under the "eCollaboration - workflow" category, as
opposed to being solved in our smaller "cManufacturing" realm.

In summary: the technology exists to completely solve this problem. Unfortunately we do not have a coordinated effort to leverage it ... yet.


My 2 Cents,

Dave Lillie
Software Program Manager

(Opinions expressed are mine alone, and should not be associated with the opinions of my employer - Rockwell Software Inc.)
 

Curt Wuollet

Hi All.

I have been watching closely for suitable hardware for direct replacement of, say, an AB PLC with the LPLC. SoftPLC sells some suitable hardware called Tealware; unfortunately, it's awful spendy. Right now, you really can't buy suitable stuff off the shelf. This is strange, kind of a chicken and egg problem: excellent embedded class hardware exists in usable form factors, but no one is packaging it for this market. I have been trying to generate enthusiasm for doing just that. If this can be accomplished, then I would agree on the hardware point. Right now, doing PC based control suffers under the additional burden of hardware that is more like a PC than a PLC. There is simply no reason for this, as a very complete and highly integrated PC can be had in PC/104 form. This could be packaged for DIN-rail mount, with pluggable modules, a backplane bus, etc. For the time being, "off the shelf" means buying a proprietary PLC.

What I would like to see is easily doable and would go a long way toward making PC control just as easy a choice. The SoCs have more than enough features and hardware. I have demonstrated that the discrete I/O can be done economically, and analog is even being included on the PC/104 boards. The architecture could conceivably be the standard PC architecture with an ISA or PCI backplane. You would then have what amounts to an off the shelf product that is standard to the extent that it runs the same software and is programmatically identical to a desktop. If you think of it as simply a stretched out PC/104 stack, it's obvious this is workable.

This would be a great first step. Achieving lower costs would require a simpler bus structure, exclusion of peripherals not needed for control, and simpler I/O modules. This is where things get too proprietary. The solution would be for a few like minded individuals to get together and publish, under an open and free license, a reference design: schematics, netlists, Gerbers for the boards, bill of materials, everything needed to make a standardized platform. If this were done, and the corresponding drivers published under the GPL along with, say, an LPLC port, a QNX port (and maybe a WinCE port), it might be possible to get more than one company to make automation hardware that is open and truly compatible. No one can argue that this would not be a giant leap forward for the industry. Everyone wins if the volumes can be multiplied by standardizing. The talent to accomplish this is reading this list. All it takes is the desire to do things right. A coalition of sponsors for the very modest costs would be helpful also. I could do this, but that's the wrong way. If _we_ do this as a community effort it would be far more likely to be accepted and less likely to be exploited or derailed.

> >> The last time I bought a car, I just went to the dealer and bought a
> >> car. I didn't shop around at the wreckers' to find the parts to make my
> >> own. I think that most people would do the same.
> >
> >No, but neither did the car have a big lock on the hood that only
> >authorized servicepeople can open.
> >
> >You *can* open the car yourself to check the oil (or change the cylinder
> >head gasket), and so can the corner mechanic. You probably don't, but the
> >mere option makes the cost of service much lower and the quality better.

And it lets those of us who need to work under the hood in order to afford to drive keep doing so. My '91 Dodge needs to go another 100,000 miles, and that is not economically feasible unless I do the work myself. There is a strong analogy here to the folks who have unsupported proprietary software. If it were open, it's not a problem if you can read the book.

> I have actually heard that there are new cars today having this
> big lock on the hood. I won't buy one if I have any possibility
> to avoid it.

Exactly. Why can't people see this logic with software? Of course, you could get out your big drill as soon as you get home.

> And no, I have not tried to repair my current car myself and I have
> always serviced it at the authorised shop. Mostly because that
> currently is one of the better options considering cost and result.
> I would still not buy a car if I can avoid it where I am required
> to have them make the service.

Well, if you weren't in Sweden, I'd say stop over and we could drink beer and fix it right up. That's what motorheads do here. It's a shame automation people don't collaborate and share. I wonder why?

Regards

cww
 

David McGilvray

Michael,

ax device, a new complementary product to automationX, is the hardware independent "soft logic" you describe. It is fully qualified and tested on VIPA hardware, with announcements of major vendors' hardware compatibility to follow shortly.

David McGilvray
automationX
www.mnrcan.com
 

Michael Griffin

At 11:58 30/10/01 -0600, Curt Wuollet wrote:
<clip>
>The reason I started with ps and pdf is that many browsers and WPs can
>export ps which can be converted or pdf.
<clip>

In case anyone who produces documentation is interested: if you are using Windows, virtually any program can be made to produce PostScript output. All you have to do is set up a new printer definition. Create a new printer selection using a PostScript printer definition, and set it to "print to file". Select this "printer" to print to, and then print from your program. This will create a PostScript file (a window will automatically open to ask you for a file name). Next, use Ghostscript to convert the PostScript file to PDF format.
This is all fairly painless, and costs nothing. I have used it many times quite successfully to produce various forms of documentation.

**********************
Michael Griffin
London, Ont. Canada
**********************
 
This is true. We have found that engineers and techs tend to group soft logic products together and make assumptions as to how they work. In our case they have a difficult time understanding that we have eliminated the requirement for hardware PLCs or controllers to run soft logic.

If the system is truly software based - such as automationX - the hardware is completely decoupled by comparison. The only way to do this, and to unleash a powerful assault of computing on your automation project, is to execute the soft logic on a server. The hardware may be swapped out, and the software stays the same. There are no strings attached for using hardware; even custom interfaces are inexpensive. We extend this concept further with aXDevice, as Dave has mentioned.

I don't know why we focus so much on hardware - PLCs, controllers, etc. - and give them demanding computing tasks when they are such crappy computers. Think about it: it doesn't make sense. The most frequent argument we get is reliability, but this is a perception. We've seen on this list that Siemens and Allen-Bradley PLCs crap out more often than a properly tuned server. And of course for computing power, PLCs aren't even close.

When you work with a company, by nature you get some sort of "vendor lock-in". But the term really comes from a lack of options when faced with a decision to interface with, add to, or upgrade an automation system. For example, a DeltaV system works best with DeltaV controllers. A DCS vendor tries to sell their products at premium prices, or charges $30,000 for a little PLC interface - this is what gets people upset.

Paul Jager
CEO
www.mnrcan.com


 
> What would you generally say about Advantech in terms of how good is
>their 1) reliability of hardware,

Too soon to tell.

> 2) support,

I haven't needed much but from what I can tell, they are helpful.

> 3) delivery on new systems,

Good.

Bill Sturm
 