Automation replacing people

Joe Jansen <[email protected]> commented :

>I just don't see anything that exists that can displace
>PLC's, given price etc

Jim Pinto responds :

Yes, PLC's will continue, alongside relays, contactors and pumps. But, the pay for programming PLC's will continue to decline. Just like the pay for servicing typewriters (which, as Willy Smith has suggested, continue to thrive in third-world countries).

Joe :
>Per your other item regarding learning Linux, Java, etc.
>I would hope that this is obvious to most.
>I am doing the Java thing now, and will be
>setting up a linux server this weekend.

Jim :

Hooray !

Cheers:
jim
----------/
Jim Pinto
email : [email protected]
web: www.JimPinto.com
San Diego, CA., USA
----------/
 

Johan Bengtsson

As others have noted, PLCs will continue to exist, but in some places where they are used today they will be replaced with other forms of computers (PCs etc.), at least (as a start) where reliability and real-time demands are lower.

That doesn't necessarily mean RLL as a programming language will disappear; actually I think it will move to the new controllers and continue to exist there. Someone still has to program the controller for a while forward, regardless of whether that controller is a PLC or some other computer. And when you want to program a
controller with some kind of logic you can do that in a number of ways (RLL, FBD, boolean expressions, truth tables and probably some others), but they are all convertible to each other (at least to some degree) and whatever format you use is more or less up to you.

/Johan Bengtsson

----------------------------------------
P&L, Innovation in training
Box 252, S-281 23 Hässleholm SWEDEN
Tel: +46 451 49 460, Fax: +46 451 89 833
E-mail: [email protected]
Internet: http://www.pol.se/
 
I've said this before. So here goes again.

There are two trends in automation and controls staffing.

The first is that the "technicians" are becoming lower- and lower-level employees, because devices are becoming, and, more importantly, are
_perceived_ by managers to be becoming, smarter and better equipped with self-diagnostics. This is bad news for people who think of themselves as
instrument techs, and who get paid premium wages for those skills.

Along the same lines, instrument engineers are being replaced by better educated techs. This is the "career path" for techs to avoid being shoved
down into the maintenance pool. This means that people who only think of themselves as instrument engineers are time-limited in the job pool.

The other trend is the need for people who clearly understand the processes found on the factory floor, and how they relate to the business trends and business requirements of the company. In other words, people who not only
understand process but also eat, drink, and breathe MES. If you are that kind of person, most headhunters want to talk to you about making a whole lot of money.

Along with this trend, and dovetailing with the first one, is the growth of contract engineering and super-technician services. While a plant cannot justify a Senior Analyzer Tech on staff, a consulting Analyzer Tech or Engineer can be just the thing to bail out those jumped-up techs who are replacing plant instrument engineers, or the maintenance guy who works on analyzers on Tuesday and Thursday but fixes the air conditioning on M, W & F.

MHO.

Walt Boyes

---------------------------------------------
Walt Boyes -- MarketingPractice Consultants
[email protected]
21118 SE 278th Place - Maple Valley, WA 98038
253-709-5046 cell 425-432-8262 home office
fax:801-749-7142 ICQ: 59435534
 
int main(int y) /* y = random value */
/* The following are random thoughts. Links may be vague or missing. */

As processes become more complex, as we become more aware of environmental issues, and as we realise that the kind of accuracy, precision and
emotionless work demanded by industry is beyond human scope on a continuous basis, we accept automation as a remedy.

And every bit and piece of technology that we have assimilated since the dawn of the human race is, if you think about it philosophically, a part of automation.
For example, the wheel is the automation of walking.

Automation, in the end, is supposed to do a task to precision, and once you have automated, manpower requirements will be rationalized.
What society needs to do is distribute the benefits of automation by way of some social security schemes. With increasing population and longer life expectancies, naturally the question that arises is what these people are going to do. Society needs to promote art, education, sports and research as the primary means of mass employment.

The very reasons that humankind invented the wheel and fire and made bows and arrows were to enjoy the luxury of time, enjoy the benefits of warmth and
cooked food that were hitherto unknown, and enjoy the safety of distance from the animals that humans preyed on. The same principles apply today: automation is there to give humans the luxury of more time for other activities, reduce proximity to dangerous and toxic chemicals, and so on, as already highlighted in previous articles.
Its basic purpose is not to replace human beings, but only to augment their skills and improve the quality of human life.

return 0;

Anand
 
I'll preface this by informing you that I am one of those "jumped-up techs"
mentioned earlier.

If someone has made these points already I apologize for repeating, but it
seems to me that this thread has overlooked some of the most important and
valuable reasons for using PLC/ladder logic versus a PC & PERL, PYTHON, C,
C++, VB, Delphi, JAVA, Pascal, Fortran or any other PC programming
languages. No need to address reliability of the OS most often used on PC's.

I'm talking about real time, real world scenarios. I can't even begin to
count the number of times I have found and fixed a program bug or
countermeasured a new problem WHILE THE EQUIPMENT IS RUNNING. It's almost
never necessary to stop a ladder application to edit it. You NEVER have to
experience the oft-repeated procedure of programmers of other languages -
stop the application, replace the old files with the new one (or more) you
just compiled on a separate development box, and restart it.

VB (as much as I hate M$) certainly has a good (the best?) development
environment for running and monitoring a program during the debugging
process. But it isn't remotely comparable to the "online Edit" experience
with a PLC.

If your company's production line controlled by software written in C or VB
is right now, right in front of your Maintenance Engineer's eyes,
experiencing a simple but crippling bug, what will he do? Is he going to
view the source code as it is running NOW and pinpoint the bug within a
minute? Will he have a fix for it up and running the next minute? No
way!! Even if he's the highest performing, most intelligent Maintenance
Engineer on the planet and actually knows C & VB and how to compile a
kernel - even if he wrote the program - he will be lucky to have fetched
his laptop in the time that he could have fixed the problem and sat down
for a cup of coffee. Really, how often is a PC with development software
other than ladder found permanently attached to production equipment or
very near to it?

I attempted to write a comparison of how software bugs other than ladder
logic are often handled, but responses range so widely (from fair to
completely unacceptable - see M$) that it would be like Motor Trend writing
a HEAD-TO-HEAD performance review of a Viper -vs- my John Deere
mower. When I program systems integrating both (which requires me to debug
both) the PLC problems can be fixed shortly after they appear and I end up
adapting the PLC to the PC whenever possible for the sake of development
efficiency. Especially after being put in use while still buggy, PC app changes
are patches developed during production and installed at breaks or
weekends. What would take minutes or hours with online editing in ladder,
instead takes days or weeks depending on equipment access.

SOME Advantages of ladder:
1. Real-time monitoring of the entire source code
2. Real-time editing of RUNNING programs.
3. Fast troubleshooting
4. No need to recompile, reboot, or re-anything unless your hardware
needs to change.
5. Simplicity/reduced learning curve enables larger selection of people to
learn it.
6. One of many technologies that enables a tech to do today what yesterday
required an engineer.
7. The more people who can understand technologies, the more technologies
can be selected, implemented and supported. The more they're used, the
less they cost, and the more they get used. Within limits, the vast selection of
easily understood (though amazingly complex) technologies (if
well applied and applied with justification) improves quality, reduces cost
to produce, and so on.
7a. Frees experts from day-to-day problems so they can be utilized more effectively.
8. Larger number of programmers/troubleshooters and debuggers means more
ideas and more problems solved. (OK, depending on mgmt, it can mean more
problems created too.)
9. Mgmt perception of simple equipment adaptability enables a more
aggressive approach to market. New products are being brought to market
faster and at less cost than ever. One factor in that is...
10. Did I mention ONLINE EDITING !?!?!?
 
Dale,

You make excellent points. In my opinion they comprise the primary requirements of an industrial control system (i.e. runtime monitoring, modification, etc.). Traditional software debugging tools/techniques such as break points, single stepping of instructions, etc. are at best useless and at worst dangerous in an industrial application. If you deploy a program that is controlling a machine or process, any debugging tool/technique must allow the program to continue controlling the machine or process during the debugging process. A break point may halt the program, but the machine or process will carry on its merry way, potentially causing damage to material, equipment, and personnel.

I agree that most all PLC/Ladder systems provide a development and maintenance environment that meets these needs and that most electricians
know how to read a ladder diagram.

I disagree that a ladder diagram is the only or most effective way of providing the required development and maintenance environment. As
everyone should already know, software doesn't wear out; it gets more reliable over time. Normally physical components such as motors,
switches, sensors, or electronics fail and produce symptoms elsewhere in the system. Consequently, the debugging process entails tracing the connections from the symptom to the point of failure. This does not require examination of the software innards. In most all cases where a technician is examining source code (LADDER, C, VB, ..) he is doing so to find these connections and has very little concern for the
algorithm being implemented. In my opinion this is very time consuming and inefficient.

A better approach would be to provide proper diagnostic tools so that someone could monitor a working system, enable/disable functional blocks,
force signals to specific values, and make any necessary program changes. The technician would follow connections or obtain more documentation by
clicking corresponding areas of diagnostic screens.
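A rough Python sketch of that diagnostic layer (the class and method names here are invented for illustration, not taken from any real PLC product): the technician forces a signal to a chosen value while the program keeps scanning, then releases the force to restore the field value.

```python
# Hypothetical sketch of a force table: an I/O image with per-signal
# overrides that the running scan logic reads every cycle.

class SignalTable:
    """I/O image with technician force overrides."""
    def __init__(self):
        self._raw = {}      # values as read from the field
        self._forced = {}   # technician overrides, if any

    def write_raw(self, name, value):
        self._raw[name] = value

    def read(self, name):
        # A forced value wins over the field value until released.
        return self._forced.get(name, self._raw.get(name))

    def force(self, name, value):
        self._forced[name] = value

    def release(self, name):
        self._forced.pop(name, None)

# Usage: simulate a stuck limit switch without touching the hardware.
io = SignalTable()
io.write_raw("limit_sw_1", False)   # field says the switch is open
io.force("limit_sw_1", True)        # technician forces it closed
assert io.read("limit_sw_1") is True
io.release("limit_sw_1")
assert io.read("limit_sw_1") is False
```

The point of the design is that the running program never knows whether a value came from the field or from a force, so diagnosis never requires stopping the scan.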

Having said that, I recognize that any development/maintenance environment, no matter how good it is, will at a minimum have to support relay ladder. If not, there will be resistance to change. The really interesting thing about this is that more resistance comes from engineers
than from technicians and electricians. It makes me wonder who really has problems understanding and learning something new.

Just in case I wrote too much and obscured the good points you make, here they are in your words:

> SOME Advantages of ladder:
> 1. Real-time monitoring of the entire source code
> 2. Real-time editing of RUNNING programs.
> 3. Fast troubleshooting
> 4. No need to recompile, reboot, or re-anything unless your hardware
> needs to change.
> 5. Simplicity/reduced learning curve enables larger selection of people to
> learn it.
> 6. One of many technologies that enables a tech to do today what yesterday
> required an engineer.
> 7. The more people who can understand technologies, the more technologies
> can be selected, implemented and supported. The more they're used, the
> less they cost, and the more they get used. Within limits, the vast selection of
> easily understood (though amazingly complex) technologies (if
> well applied and applied with justification) improves quality, reduces cost
> to produce, and so on.
> 7a. Frees experts from day-to-day problems so they can be utilized more effectively.
> 8. Larger number of programmers/troubleshooters and debuggers means more
> ideas and more problems solved. (OK, depending on mgmt, it can mean more
> problems created too.)
> 9. Mgmt perception of simple equipment adaptability enables a more
> aggressive approach to market. New products are being brought to market
> faster and at less cost than ever. One factor in that is...
> 10. Did I mention ONLINE EDITING !?!?!?

The phrase "SOME Advantages of ladder:" could have just as easily been "Some Requirements of an Industrial Software Development and Maintenance
Environment". I would also add "Online Editing" to your list :).

Rick Jafrate
Mitek
 
Rick Jafrate wrote,

>You make excellent points.

Thanks, you too. Regarding the following:
>In my opinion they comprise the primary
>requirements of an industrial control system (i.e. runtime monitoring,
>modification, etc.). Traditional software debugging tools/techniques
>such as break points, single stepping of instructions, etc. are at best
>useless and at worst dangerous in an industrial application. If you
>deploy a program that is controlling a machine or process, any debugging
>tool/technique must allow the program to continue controlling the machine
>or process during the debugging process. A break point may halt the
>program but the machine or process will carry on its merry way,
>potentially causing damage to material, equipment, and personnel.
>
>I agree that most all PLC/Ladder systems provide a development and
>maintenance environment that meets these needs and that most electricians
>know how to read a ladder diagram.

I both agree and disagree with you on the next part. Comments follow.

>I disagree that a ladder diagram is the only or most effective way of
>providing the required development and maintenance environment. As
>everyone should already know, software doesn't wear out; it gets more
>reliable over time. Normally physical components such as motors,
>switches, sensors, or electronics fail and produce symptoms elsewhere
>in the system.

Some PLC programs tend to be left as-is for years once they are debugged. Others are very frequently changed, and the changes are not
always made by engineers. In my experience, the following are some factors which result in frequent program modification:
1. Product changes require equipment modification. (Product lifecycles are shortening and equipment is often asked to handle multiple products, some of which require very extensive hardware/software changes to incorporate)
2. Safety, reliability/efficiency, & quality issues are counter-measured at the equipment.
3. Improvement to interface with production associates and operators.
4. Response to production associate suggestions.

Honda has strongly embraced a philosophy to "empower" the average Production and Maintenance associate. As a result, if a system isn't perfect, somebody WILL have an idea to improve it and those ideas are often tried. I liken it to the advantages of Open Source Software.
With respect to ladder programming, engineers are sometimes consulted depending upon the skill level of the maint assoc making the
change and the difficulty of the change. Usually they work together and with their Team Leaders and Coordinators to do it themselves. With our VB
apps, THIS NEVER happens and changes are ONLY made by engineering or IT.
Frankly, I love it when the individual with an idea or countermeasure can also implement it. I wish I could get the maint associates to be more creative and independent than they are. They do not understand VB, thus have fewer ideas, and they can never implement them.

>Consequently, the debugging process entails tracing the
>connections from the symptom to the point of failure. This does not
>require examination of the software innards. In most all cases where
>a technician is examining source code (LADDER, C, VB, ..) he is doing
>so to find these connections and has very little concern for the
>algorithm being implemented. In my opinion this is very time
>consuming and inefficient.
>A better approach would be to provide proper diagnostic tools so that
>someone could monitor a working system, enable/disable functional blocks,
>force signals to specific values, and make any necessary program changes.
>The technician would follow connections or obtain more documentation by
>clicking corresponding areas of diagnostic screens.

This really depends on the level of complexity of a given system and its integration with other systems. Take Honda's "Multi-mount" for example,
which installs the engine and front and rear suspensions simultaneously, automatically, in about 45 seconds (line speed 50 sec/car). To further complicate the issue, it changes model jigs without missing a beat, thus giving the ability to build a Civic one minute and an Acura CL the next with no manual operations or time lost, assuming all goes as expected. We integrate 6 independent systems, each with a VERY large I/O count, to accomplish this.

Diagnosing this system without ladder is inconceivable. In the last year we tried to eliminate the need to go to the
ladder for many common functions, which could be done, but a fascinating lesson was learned in the process: diagnostic systems are complicated to
design and implement, and for more complicated processes, such as providing indication of the limiting condition for a step, they are especially
difficult to justify. The main factor which destroys the economics of this is that I have added another system to be maintained. EVEN WORSE, if it is not maintained, it will SLOW the diagnosis process and result in MORE DOWNTIME.

Essentially, if a program is structured to facilitate fast isolation of a limiting condition, the same result is accomplished at
little to no cost. I can't see a benefit in extensive diagnostics to replace the need to look at ladder, other than in simple or unchanging systems.
 

Jake Brodsky

> There are two trends in automation and controls staffing.

> The first is that the "technicians" are becoming lower- and lower-level employees, because devices are becoming, and, more importantly, are _perceived_ by managers to be becoming, smarter and better equipped with self-diagnostics. This is bad news for people who think of themselves as instrument techs, and who get paid premium wages for those skills.

...Unless you can show how the complexity makes a job more difficult than before. For example: It used to be that one could diagnose ignition system problems in cars. No longer. Shade tree mechanics (and a lot of professional ones too) are often faced with shot-gunning parts in and out to diagnose a problem. This is not problem solving, this is taking random pot-shots and never knowing just what caused the problem.

> The other trend is the need for people who clearly understand the processes found on the factory floor, and how they relate to the business trends and business requirements of the company. In other words, people who not only understand process but also eat, drink, and breathe MES. If you are that kind of person, most headhunters want to talk to you about making a whole lot of money.

Sadly, this level of business/technical integration is all too rare in the management I've had the misfortune of witnessing. There are too many BS artists, and upper management has no way of knowing who is full of themselves and who really knows what's going on.

> Along with this trend, and dovetailing with the first one, is the growth of contract engineering and super-technician services. While a plant cannot justify a Senior Analyzer Tech on staff, a consulting Analyzer Tech or Engineer can be just the thing to bail out those jumped-up techs who are replacing plant instrument engineers, or the maintenance guy who works on analyzers on Tuesday and Thursday but fixes the air conditioning on M, W & F.

This works until the systems get so complex or expensive that simple substitution efforts don't work any more. Intimate knowledge of "where things are", "what's in the middle", and "how it works together" is worth the extra money of keeping a full-time person on staff. There are very few generic assembly lines or industrial processes, and many, many more custom-built, one- or two-of-a-kind installations.

The bottom line: As automation replaces more and more direct manufacturing jobs, there is no way anyone will be able to do away with the instrument technician or engineer. Doing so will result in a shortage of people to work on such things.

Of course, there is a shortage of trained and qualified mechanics who can do a reasonable job working on my car or truck. It may not be healthy, but it may also be where we're going...
 

P Baum, Niksar

>> 10. Did I mention ONLINE EDITING !?!?!?

And I would add

10a. ONLINE EDITING with VERY FAST [BACK] function ...

... which allows return to the previous version of the program. It is nice to be able to do a few changes online, but it is even better to be able to return to the previous version without needing to upload it from the laptop, searching
in the File/Open window for ... for ... which one was it before I edited??? You know the story -

Petr
 

Michael Griffin

Someone in the computer science field (I think it was Niklaus Wirth) once said something to the effect that programs are written for people to
read, not for computers. This is a simple idea which many people have difficulty understanding. Once you grasp the concept, though, it changes your
point of view on what a good program looks like.


**********************
Michael Griffin
London, Ont. Canada
**********************
 

Bill Hullsiek

Along those lines, Donald Knuth created WEB, a literate-programming system that lets you embed documentation alongside code. Processing
the one source yields an executable image, plus a document that describes the executable image.

One of the original goals of CASE tools was to have your documentation and code in the same container. You update your documentation and code at the same time.

This concept shows up in quite a few control system tools, but for some reason the computer language people (Visual Basic, Java, C, C++, C#,
whatever) keep on keeping documentation separate from the code.
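For what it's worth, one mainstream-language echo of "documentation and code in one container" is Python's doctest: the usage examples embedded in the docstring are executable, so the documentation is verified against the code it describes. A small sketch (the function and its behavior are invented for illustration):

```python
# The examples in the docstring ARE the tests: update the code without
# updating the documentation and the doctest run fails.
import doctest

def debounce_count(samples, threshold):
    """Return True once `threshold` consecutive True samples are seen.

    >>> debounce_count([True, True, True], 2)
    True
    >>> debounce_count([True, False, True], 2)
    False
    """
    run = 0
    for s in samples:
        run = run + 1 if s else 0
        if run >= threshold:
            return True
    return False

# Running the module's embedded examples checks doc and code together.
results = doctest.testmod()
assert results.failed == 0
```

This is narrower than WEB or a CASE tool, but it captures the same discipline: the document and the executable come from one source.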

- Bill hullsiek
 
>Along those lines, Donald Knuth created WEB, a literate-programming
>system that lets you embed documentation alongside code. Processing
>the one source yields an executable image, plus a document that describes
>the executable image.

Interesting, but can you online edit it?
 
Good point (of view).

Bill Mostia
===========================================
William(Bill) L. Mostia, Jr. PE
Independent I &E Consultant
WLM Engineering Co.
P.O. Box 1129
Kemah, TX 77565
[email protected]
281-334-3169
These opinions are my own and are offered on the basis of Caveat Emptor.
 
Is there a software/PLC that can do this? When I want a fast change back from an online edit I usually program a bit that lets me switch between new and old branches of code.
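A rough Python sketch of that select-bit trick (names are invented; in a real PLC this would be an internal relay gating two parallel rungs): both branches stay in the program, and the bit picks one per scan, so reverting is a one-bit flip.

```python
# The "bit" a technician can toggle online; both branches remain loaded.
use_new_logic = False

def old_branch(inputs):
    return inputs["start"] and not inputs["fault"]

def new_branch(inputs):
    # Edited logic: also require the guard to be closed.
    return inputs["start"] and not inputs["fault"] and inputs["guard"]

def scan(inputs):
    # The select bit routes each scan through one branch or the other.
    if use_new_logic:
        return new_branch(inputs)
    return old_branch(inputs)

io = {"start": True, "fault": False, "guard": False}
assert scan(io) is True       # old logic runs
use_new_logic = True
assert scan(io) is False      # new logic adds the guard condition
```

The cost of the trick is that dead branches accumulate unless someone goes back and deletes the loser once the edit is proven.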


Dale

>And I would add
>
>10a. ONLINE EDITING with VERY FAST [BACK] function ...
>
>... which allows return to the previous version of the program. It is nice to be
>able to do a few changes online, but it is even better to be able to return to
>the previous version without needing to upload it from the laptop,
>searching
>in the File/Open window for ... for ... which one was it before I edited??? You
>know the story -
>
>Petr
 
Petr:
> >10a. ONLINE EDITING with VERY FAST [BACK] function ...

Dale:
>Is there a software/PLC that can do this? When I want a fast change back
>from an online edit I usually program a bit that let's me switch between
>new and old branches of code.

Just the other day, on a branch of this thread on the MAT PLC list, I was suggesting just that...

The idea was that the new version gets prepared in the background and loaded into memory (but not started) while the old version is still running. At that point, both versions are in memory, and you can flip between them on a scan-by-scan basis.

Eventually, you decide which one's the keeper and nuke the other one.

You mean that's not a standard feature?

(Obviously, it'd be useful to have a full revision control system, so you can back out of a series of changes, or back out of them a week and three unrelated changes later, but that's a separate issue: it should be flexible and work within minutes - the above is simple and needs to work instantly.)
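A minimal Python sketch of the scheme above (not a feature of any particular PLC; all names invented): both program versions sit in memory, a flag picks one at each scan boundary, and committing discards the loser.

```python
# Double-buffered program memory: the edit is loaded alongside the
# running version, flipped in per scan, and committed or abandoned.

class Controller:
    def __init__(self, program):
        self.programs = {"old": program, "new": None}
        self.active = "old"

    def load_edit(self, program):
        # Prepared in the background; the running version is untouched.
        self.programs["new"] = program

    def flip(self):
        # Takes effect at the next scan boundary.
        self.active = "new" if self.active == "old" else "old"

    def commit(self):
        # Keeper chosen: nuke the other version.
        self.programs = {"old": self.programs[self.active], "new": None}
        self.active = "old"

    def scan(self, inputs):
        return self.programs[self.active](inputs)

plc = Controller(lambda io: io["a"] and io["b"])
plc.load_edit(lambda io: io["a"] or io["b"])
io = {"a": True, "b": False}
assert plc.scan(io) is False     # old version: a AND b
plc.flip()
assert plc.scan(io) is True      # new version: a OR b
plc.commit()
assert plc.scan(io) is True
```

The hard part a real controller must add, as Russ notes below for the PLC-5, is handling dynamic data (timers, counters, retained state) that the two versions may interpret differently.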

Jiri
--
Jiri Baum <[email protected]>
http://www.csse.monash.edu.au/~jirib
visit the MAT LinuxPLC project at http://mat.sf.net
 
A typical PLC that can do this is the PLC-5 - you add/change any number of rungs (up to the limit of memory) and go to a "test" mode to try the changes out. Two clicks and you are back to the old program. Two other clicks "assembles" the edit into the main file. You still have to be careful as you can leave dynamic data in the wrong state but in general it works well.

I cut my teeth on the Modicon 384/584 with their immediate edits (it is still that way in the Quantum) where you have to do what Dale mentions - it can work well with an experienced programmer but you can get into trouble quickly if you don't think your actions out first.

Russ Kinner
AVCA Corporation
Maumee, OH USA
 
> As everyone should already know, software doesn't wear out; it gets more
> reliable over time.

The concept that software gets more reliable over time should be looked at more closely from a lifecycle perspective and may have some shortcomings.

The general concept that software reliability improves over time may be true, but at any period in time it may not be true. The general idea behind software reliability improving over time is that software bugs will be removed as time goes on, hence reliability growth (improvement). There is an inherent assumption made here, however, that as bugs are fixed no new bugs are introduced (a questionable assumption as a generalization).

There is also an assumption that the program remains static except for the removal of the bugs. But as we all know in a program's lifecycle, improvements are made and "features" added so that the operating system and embedded software change and new versions are released. Who has gotten a new release of software that was free of bugs? Well, hopefully free of old bugs. And, few things remain unchanged in the production environment due to the process of continuous improvement. As bugs are removed and changes are introduced into the application software so are potential new bugs.

David Smith's book, "Reliability Maintainability and Risk, 5th Ed," Figure 16.1 pg. 202 provides a simple graphical illustration of software error rate over time including the introduction of change.

It is also true that software doesn't exhibit "wear out" in the mechanical sense, but it does exhibit aging in the sense that as time goes by, it may be less able to meet the requirements of the application (which evolve) and
may be less supportable. An interesting paper was written on the subject, "Software Aging" by David Lorge Parnas, from the proceedings of the 16th International Conference on Software Engineering (ICSE), 1994, available from the Association for Computing Machinery (ACM) - http://www.acm.org/.

Bill Mostia
===========================================
William(Bill) L. Mostia, Jr. PE
Independent I &E Consultant
WLM Engineering Co.
P.O. Box 1129
Kemah, TX 77565
[email protected]
281-334-3169
These opinions are my own and are offered on the basis of Caveat Emptor.
 

Rick Jafrate

Bill,

You make some good points that are not obvious to many people. Although I have encountered it many times, I had not considered software that has lived long past its time.

> > As everyone should already know, software doesn't wear out; it gets more
> > reliable over time.
>
> The concept that software gets more reliable over time should be looked at
> more closely from a lifecycle perspective and may have some shortcomings.
>
> The general concept that software reliability improves over time may be true,
> but at any period in time it may not be true. The general idea behind
> software reliability improving over time is that software bugs will be
> removed as time goes on, hence reliability growth (improvement). There is an
> inherent assumption made here, however, that as bugs are fixed no new bugs
> are introduced (a questionable assumption as a generalization).

This assumes a competent, disciplined, knowledgeable, experienced professional making the changes.

> There is also an assumption that the program remains static except for the
> removal of the bugs. But as we all know in a program's lifecycle,
> improvements are made and "features" added so that the operating system and
> embedded software change and new versions are released. Who has gotten a
> new release of software that was free of bugs? Well, hopefully free of old
> bugs. And, few things remain unchanged in the production environment due to
> the process of continuous improvement. As bugs are removed and changes are
> introduced into the application software so are potential new bugs.

Certainly if you add new software then you take a step back on the reliability curve. The amount of backward progress depends upon how much change was introduced. It has been my experience that in a production environment changes are added incrementally and methodically, and at a fairly slow pace. After a change is made, the longer it has been in use the less likely it is that it will operate incorrectly.

> David Smith's book, "Reliability Maintainability and Risk, 5th Ed," Figure
> 16.1 pg. 202 provides a simple graphical illustration of software error rate
> over time including the introduction of change.
>
> It is also true that software doesn't exhibit "wear out" in the mechanical
> sense, but it does exhibit aging in the sense that as time goes by, it may
> be less able to meet the requirements of the application (which evolve) and
> may be less supportable. An interesting paper was written on the subject
> "Software Aging" by David Lorge Parnas, from the proceedings of the 16th
> International Conference on Software Engineering (ICSE), 1994, available from
> the Association for Computing Machinery (ACM) - http://www.acm.org/.

Excellent point, and I couldn't agree more. One of the problems I have personally observed that fits into this category is as follows: a metal rolling mill application deployed on MODCOMP computers in FORTRAN made good use of state machines in the form of if-then-elseif-else constructs. Each else/elseif represented a state, and the
code block within implemented the actions to be performed when the state is active. I was asked to evaluate the prospect of re-implementing this system in another language on a distributed platform. The problem was that over a period of 20 years the elseif conditions (logical
conditions) had evolved empirically. For example, it was observed that the system operates incorrectly (i.e. goes to the wrong state) on every other Thursday when the mill speed is between 500-600 feet/min and the alloy is
zzaabbcc. So the easy fix is to find the offending elseif construct and add the above conditions thus:

elseif (not thursday) and (speed<500 or speed>600) and (alloy <> zzaabbcc) and (...)

This same procedure, repeated over a period of 20 years, results in a house of cards and makes porting to a new system difficult and time consuming. Many of these empirically observed conditions would not necessarily maintain their relationships on a new platform, particularly a distributed one. Conditions would not be
detected/generated on the new platform with the same timing relationships as on the old platform.
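A toy Python rendering of that pattern (the function name, threshold, and alloy code are invented for illustration): the process rule and the accumulated field patch end up fused into one guard, so later no one can tell which clauses reflect the process and which reflect one bad Thursday.

```python
# Caricature of the rolling-mill guard. The first condition is the
# process rule; the clause after "and not" is an empirical patch
# excluding one observed failure case, not derived from the process.

def enter_fast_roll(speed, alloy, day):
    # Process rule: fast roll above 600 ft/min ...
    # ... minus the case seen to misbehave every other Thursday.
    return (speed > 600
            and not (day == "thursday" and alloy == "zzaabbcc"))

assert enter_fast_roll(700, "6061", "monday") is True
assert enter_fast_roll(700, "zzaabbcc", "thursday") is False
```

After twenty years of such fixes the guard encodes the old platform's timing quirks; on the new platform the excluded cases may never recur, but nobody can prove it, which is exactly what makes the port expensive.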

There is not much you can do about bad programming practices.

regards

Rick Jafrate
Mitek
 