Virtual IO, 2 B or not 2 B?


Thread Starter

Mao-Tse-Tung

I have previously been a fan of virtual I/O: mapping real I/O to bits which are then used throughout the program. The beauty is that you just amend the undocumented I/O in the virtual I/O map and, hey presto, no global search and replace is required. The disadvantage is that in large, complex programs with DeviceNet etc., the mapping takes longer than the actual software. Once you get into it, the I/O to the HMIs should really be in the virtual I/O as well. It heads towards infinity, or so it seems. It also seems somewhat confusing to those who follow. My question is this: should we really bother with virtual I/O in light of today's powerful find-and-replace functions? Should I just search and replace I/O on demand, or is virtual I/O part of "the good programmer's guide" and a more professional approach?
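For anyone who hasn't seen the technique, the idea can be sketched roughly as follows (a hypothetical Python sketch; the symbolic names and addresses are invented for illustration, not from any real project):

```python
# Hypothetical sketch of the virtual I/O idea: program logic references
# symbolic bits only, and a single mapping table ties them to physical
# addresses. If the field wiring changes, only IO_MAP is edited --
# no global search and replace through the logic.

IO_MAP = {
    "StartButton": "I:1/0",   # invented addresses, for illustration
    "StopButton":  "I:1/1",
    "MotorRun":    "O:2/0",
}

def read_inputs(physical_inputs):
    """Copy real inputs into virtual bits at the top of the scan."""
    return {name: physical_inputs[addr]
            for name, addr in IO_MAP.items()
            if addr.startswith("I:")}

def write_outputs(virtual_bits):
    """Copy virtual output bits back to physical addresses at the end of the scan."""
    return {IO_MAP[name]: value
            for name, value in virtual_bits.items()
            if IO_MAP[name].startswith("O:")}

def logic(v):
    """Program logic sees only the symbolic names, never real addresses."""
    v["MotorRun"] = v["StartButton"] and not v["StopButton"]
    return v
```

This is also where the "extra layers" objection comes in: tracing a fault now means going through `read_inputs` and `write_outputs` as well as the logic itself.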

Any comments are more than welcome.
 

Steve Myres, PE

One application for this, Mr. Chairman, is when using a PLC/HMI pair that communicate asynchronously to the PLC scan, or with a PLC like the Allen-Bradley ControlLogix series that updates native I/O asynchronously to the scan. Some
programs have a problem when an I/O value changes mid-scan, causing unforeseen results which can be extremely difficult to troubleshoot.
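The buffering this implies amounts to taking a snapshot of the inputs once per scan, so every rung sees a consistent image even while the hardware or HMI updates asynchronously (a hypothetical Python sketch, with invented names):

```python
import threading

# Hypothetical sketch: a background source (HMI or asynchronous I/O update)
# may change live_inputs at any time, but the scan works from a snapshot
# taken once at the top, so the logic sees one consistent value per scan.

live_inputs = {"Sensor1": False}
lock = threading.Lock()

def scan(logic):
    with lock:
        image = dict(live_inputs)   # snapshot: frozen for this scan
    return logic(image)             # logic never touches live_inputs directly

def example_logic(img):
    a = img["Sensor1"]   # rung 1 reads the input
    # ...an asynchronous update could land here if we read live_inputs...
    b = img["Sensor1"]   # rung 2 reads it again
    return a == b        # with a snapshot, both rungs always agree
```

Without the snapshot, rung 1 and rung 2 could see different values in the same scan, which is exactly the hard-to-troubleshoot behaviour described above.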

Steve Myres, PE
Automation Solutions
[email protected]
 
The proper solution is for the PLC itself to support virtual IO natively, allowing you to name I/O points and then map (and re-map) them to real IO. Then it can do this efficiently, and you have one less problem to solve.

Until then, and given the non-existent optimization in PLCs, most likely the cycle time will dictate which approach you have to take. With numbered internal coils, there's not much advantage to it anyway.

Jiri
--
Jiri Baum <[email protected]> http://www.csse.monash.edu.au/~jirib
MAT LinuxPLC project --- http://mat.sf.net --- Machine Automation Tools
 

Michael Griffin

> > My question is this: should we really bother with virtual I/O in light of
> > today's powerful find-and-replace functions? Should I just search and
> > replace I/O on demand, or is virtual I/O part of "the good programmer's
> > guide" and a more professional approach?

I believe that Mr. Mao could use a bit of time in a programmer re-education camp. If this "virtual I/O" is just a euphemism for adding extra layers of logic to re-map the I/O to internal coils, then there are legions of
electricians who would like to purge him from the industry.
Always write a program from the point of view of the person who will have to read it later. Adding extra layers of indirection merely for the convenience of the original programmer is very bad practice. The original programmer only
has to read it once; the maintenance personnel may have to read it many times.

Jiri Baum replied:
> The proper solution is for the PLC itself to support virtual IO natively,
> allowing you to name I/O points and then map (and re-map) them to real IO.
> Then it can do this efficiently, and you have one less problem to solve.
<clip>

Some newer PLC software offers "symbolic" addressing, where you can change the actual I/O address in the symbol table and it is automatically reflected throughout the program.
If you want to change the actual symbol name, then you turn symbolic addressing off (to use hardware addressing), change the symbol, turn symbolic addressing back on, and the symbol change is automatically reflected throughout the program.

************************
Michael Griffin
London, Ont. Canada
************************
 
J
I have recently gone through a similar learning curve to the infamous "Mr. Mao", and we have used virtual I/O very successfully. There is too much of this "write it for the engineer to follow" rubbish. If we did that we would all be using
Windows 3.11 because it is easier to follow.
In my view, software should be written with all disrespect to the maintenance engineer. Why should we sacrifice functionality and lean manufacturing for the sake of some guy who struggles to get to grips with programming techniques?
If you struggle, change your job or go and get re-educated. Do not hold back progress, and take every job on merit. As technology moves forward, particularly DeviceNet, if you did not use virtual I/O you would struggle. M tables in the SLC cannot be displayed unless you map them. Map them to N files and you win. End of story.
 

Michael Griffin

I'm not sure that I really understand your point, other than that you feel that clarity and conciseness are unimportant. Having viewed your message, I suppose I shouldn't be too surprised at that.
The method which I believe "Mr. Mao" was referring to is an old one which some people used so they could write a program without knowing what the final I/O assignment would be. All inputs and outputs would be individually mapped to internal flags by extra logic when the final wiring was done. The extra logic served no useful purpose once the machine was running. The
problem with this is that it added two extra levels of software (input and output mapping) when tracing a problem from input to output (or vice versa).
Some network or remote I/O schemes required something similar to this, but this was a technical limitation of the firmware, not an independent user programming technique.
"Modern" software and addressing methods (as discussed with Mr. Baum) remove the (rather poor) excuse for doing this anymore.

************************
Michael Griffin
London, Ont. Canada
************************
 

Derek Appleton

While accepting most of the criticisms in the answers to your question, I have the following. . .

I have, on numerous occasions, passed real I/O over to internal memory. I did this to make it easy to simulate using an HMI/MMI/SCADA package. To have the package communicate directly with the program, all that is required is to disable the I/O-to-memory (data table) subroutine so that the processor never scans it, i.e., it doesn't know there is I/O, so you can use just the processor and a rack.
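The simulation trick described here can be sketched as follows (a hypothetical Python sketch; the subroutine and address names are invented for illustration):

```python
# Hypothetical sketch of the simulation technique: the program logic reads
# an internal memory (data table) image, never real I/O. Normally a mapping
# subroutine refreshes that image from the hardware each scan; for
# simulation the subroutine is simply never scanned, and an HMI/SCADA
# package writes values straight into the data table instead.

data_table = {"N7:0/0": False}   # internal memory the logic actually uses

def io_mapping_subroutine(hardware):
    """Refresh the data table from real I/O (skipped when simulating)."""
    data_table["N7:0/0"] = hardware["I:1/0"]

def plc_scan(logic, hardware=None, simulate=False):
    if not simulate:
        io_mapping_subroutine(hardware)   # normal: image follows hardware
    # With simulate=True the mapping is never scanned, so the processor
    # "doesn't know there is I/O" and the image can be driven externally.
    return logic(data_table)

def logic(table):
    return table["N7:0/0"]
```

The payoff is that the same logic runs unmodified with just a processor and a rack, with the SCADA package supplying the input image.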
 