Ladder a parallel lang? (was: Demarcation)

Thread Starter

Joe Jansen/ENGR/HQ/KEMET/US

Responding to Vladimir E. Zyubin:

You have never heard anyone state before that ladder diagram is a parallel language? Where have you been?

I looked at your link, but I don't understand exactly what it is trying to state. From what I gather, if we just started calling it parallel LD, then it would fit? The languages listed in your citation are either defined as "a parallel version of <language>" or "not implemented".

The difference in semantics may be explained if you are coming from an IT position and think of parallel as automatically meaning 'more than one processor'. The definition you provide seems to support this, and would mean that even multi-threaded programs written on the Intel platform are not *truly* parallel, but only very quickly sequenced. This is true of LD as well. It is not truly parallel in that there is only one processor; however, program execution touches every line of code on every scan, as opposed to a traditionally sequential language where execution proceeds step by step and blocks on each step waiting for completion. In contrast, a ladder rung that fails to complete (i.e. is false) does not keep the rest of the rungs (lines of code) from executing. Likewise, while in C it is easy to block sections of code on and off using { }, if/then/else conditions, etc., in ladder it is assumed that every line of logic will attempt to execute on every processor scan unless more explicit measures are taken to disable sections of code.

Finally, in a traditional sequential language, the program is not expected to execute in its entirety within a timeframe of milliseconds. If more than one task is to (seem to) occur at once, separate threads are spawned in an effort to simulate multi-tasking (aka parallel processing). By contrast, if the entire ladder diagram fails to execute within a few milliseconds, a problem exists, and the processor will typically stop executing the program.
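
To make the contrast concrete, here is a rough C sketch of what a scan cycle does (every name in it is made up for illustration): the whole 'program' is re-evaluated top to bottom on every pass, and a rung whose condition is false simply leaves its coil off rather than blocking anything below it.

#include <stdbool.h>

static bool in[8], out[8];

static void read_inputs(void)   { /* copy the physical inputs into in[]  */ }
static void write_outputs(void) { /* copy out[] to the physical outputs  */ }

int main(void)
{
    for (;;) {                                   /* the runtime repeats this forever   */
        read_inputs();

        out[0] = in[0] && !in[1];                /* "rung" 1: if false, it just leaves */
                                                 /* out[0] off; nothing below is       */
                                                 /* skipped or delayed                 */
        out[1] = (in[2] || out[1]) && !in[3];    /* "rung" 2: a seal-in circuit        */
        out[2] = in[4] && in[5];                 /* "rung" 3                           */

        write_outputs();
        /* If one pass takes more than a few milliseconds, a real PLC's
           watchdog faults the processor, as described above. */
    }
}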

When viewed at the level of the processor's switching transistors, no language is parallel. However, LD is traditionally considered a parallel language, especially when contrasted with more sequential languages such as C, Forth, Pascal, et al. If you have never heard it stated that LD is a parallel language, you must have been hiding from the rest of the industrial automation world.

--Joe Jansen
 
Bob Peterson

LD is about as close to true parallel processing as you can get. After all, it's designed to simulate individual relays that operate totally independently.

I grant you it has a scan order to it, but all parallel processing systems have an order of execution as well. You just don't see it, or worse yet, you cannot control it, or in some cases cannot even tell what it is.

It's pretty obvious to me that some people on the list have only a vague clue what RLL actually is. They see a symbolic language and just can't cope with it. Someone not too long ago suggested Boolean would be a better approach. In fact, RLL provides an extremely efficient way to code Boolean expressions. Its failings are in other areas, but extensions have even made complex mathematical constructs relatively easy.

One of the problems with IEC LD syntax is the somewhat bizarre way it almost forces you to work harder in LD than in some of the other languages IEC supports. It's like they never really wanted it in there in the first place and did not understand how to use it efficiently, so it ended up being a lot less powerful and far less easy to use than it could have been.

Its saving grace is user-defined function blocks. I have been using these for the first time and have rapidly become a convert.

Can you imagine the power LD could have if it incorporated just a few features from AB style RLL? For instance:

Indirect addressing
Tables (guess what a data file is, guys?)
Compute instructions (a real nice way to directly enter algebraic expressions)
Packing bits into integers
Direct conversions (e.g. int to float)

Granted, all of these things can (or probably can) be done in LD, but doing so is painful or requires you to switch to a different language.
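
For anyone who hasn't used AB-style RLL, here is a rough C analogy of two items from that list (all the names are my own invention; in RLL each of these is essentially one instruction, while in plain IEC LD it takes a pile of rungs or a jump to another language):

#include <stdbool.h>
#include <stdint.h>

int16_t data_file[100];            /* a "table": just an indexed block of registers */

/* Indirect addressing: the index itself comes from another register. */
int16_t read_indirect(int16_t pointer)
{
    return data_file[pointer];
}

/* Packing bits into an integer: sixteen discrete flags become one word. */
uint16_t pack_bits(const bool flags[16])
{
    uint16_t word = 0;
    for (int i = 0; i < 16; i++)
        if (flags[i])
            word |= (uint16_t)(1u << i);
    return word;
}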

Bob Peterson
 
Johan Bengtsson:
> > Since you don't think ladder is a parallel language you have proven
> > that you either don't understand it at all (I mean really understand
> > it, not just being able to read one) - that is what the "way of thinking"
> > part really is about.
...

Vladimir E. Zyubin:
> "Parallelism" has a complex denotation (rather denotationS). So, the word
> ought to be strictly defined.
...
> What I said is just a constatation that the LD doesn't look like other
> parallel languages... and I never see before that somebody try to posit,
> the LD is a parallel language... You are the first. My doubts are just a
> spontaneous reaction on the statement.

LD is *based* on a parallel notation.

In a relay diagram, all the rungs happen independently and concurrently.

However, like logic in Prolog, this parallelism has been half-abandoned in order to make the language easy to implement. LD still has some elements of parallelism (and Prolog of logic programming), but one must now be aware of the order of execution, and one can also violate some of the strict rules and depend on the order of execution to give the desired effect.

Whether LD is a parallel language, or Prolog a logic language, is a matter of semantics. In the strictest sense, they aren't. But it's often useful to think of them as though they were.

> > In our PLC simulator and editor we support as many characters as an
> > editbox and CString in WIN32 can handle - that would be about 1000.
...
> > We have other problems displaying this when the identifiers get longer
> > - but it is still an implementation and not a language issue.

> It is a problem of the REPRESENTATION (of the graphical form of the LD,
> and any other graphical language). The form imposes restrictions.

I agree - it has to be possible for an implementation to do it, otherwise it's a problem of the language itself.

That said, though, certainly implementations could do a lot better than they are doing now (e.g. write long identifiers at an angle), and perhaps that would address some of the concerns.

The correct way, of course, is to have the concept of a local variable - a coil/register which is only available in a couple of adjacent rungs, or perhaps a subroutine, for instance. But LD doesn't provide such mechanisms.

> The IEC1131 just reflects this sorrowful fact. IMO.

Seven-character identifiers are a bit too skimpy, though. You can fit more than that on an LD.


Jiri
--
Jiri Baum <[email protected]> http://www.csse.monash.edu.au/~jirib
MAT LinuxPLC project --- http://mat.sf.net --- Machine Automation Tools
 
Vladimir E. Zyubin:
> > > I don't know the right name for the feature in English... the feature
> > > is called "self-sufficiency" (in OOP)... Translation (from Russian)
> > > of the term looks like "informative isolation".

Jiri Baum:
> > "Information isolation" is probably a good way of putting it, though,
> > because it says what you mean and means what you say.

Michael Griffin:
> I believe the English term is "data hiding". That is the phrase I am
> familiar with when describing means of preventing data from being used or
> modified outside of its intended module or local scope.

That's right, sorry - brain blank.

That said, though, in some ways I like "information isolation" better. I
suspect that to an average mind, "data hiding" suggests mischievousness, while "information isolation" emphasises the safety aspect. Especially for electricians-turned-programmers (to return to the start of this thread).


Jiri
--
Jiri Baum <[email protected]> http://www.csse.monash.edu.au/~jirib
MAT LinuxPLC project --- http://mat.sf.net --- Machine Automation Tools
 
Michael Griffin

On March 6, 2002 02:23 pm, Jiri Baum wrote:
<clip>
> That said, though, certainly implementations could do a lot better than
> they are doing now (e.g. write long identifiers at an angle), and perhaps
> that would address some of the concerns.
>
> The correct way, of course, is to have the concept of a local variable - a
> coil/register which is only available in a couple of adjacent rungs, or
> perhaps a subroutine, for instance. But LD doesn't provide such mechanisms.
<clip>

If you have worked with Siemens Step-7 you will have seen that you can indeed have local variables. They can be defined for each block, which is a logical unit of code.

The "identifier" which appears above the ladder contact or coil is the "symbol", which is separate from the "comment". The comment describes what the address represents, while the symbol is used to select the address (if you are using symbolic addressing). The comment can be more than long enough for any normal purpose.

Very long symbols are not terribly useful, as you don't want to have to type in too much to select each address. Furthermore, excessively long symbols would require spreading the ladder diagram out too much, and make it less concise and readable. The comments are intended to provide the detail, and they are normally printed below the rung.


--

************************
Michael Griffin
London, Ont. Canada
************************
 
Vladimir E. Zyubin

Good.

According to the dictionary, a parallel language is a programming language in which algorithms for parallel processing are written.

LD is not a parallel language in this sense.

Well, there are also such things as "physical parallelism" and "logical parallelism". But LD is not a parallel language in those senses either.

In the definition I can agree with, a parallel language is a language that provides both linguistic means to divide an algorithm into weakly-connected parts that can be executed independently and linguistic means to describe the intercommunication of these parts. There are two different kinds of independent execution (or parallelism): physical parallelism and logical parallelism. Physical parallelism is simultaneous execution on several computers in order to decrease execution time. Logical parallelism is the division of an algorithm into a structured set of logically independent parts that allow simultaneous or quasi-simultaneous execution in order to reduce the complexity of the algorithm.

LD is not a parallel language. There are no _natural_ means in LD to provide the division, the structuring, and the intercommunication.

Well, somebody may think LD is connected with parallelism because its notation looks like "a configuration of two or more electrical components connected between two points in a circuit so that the same voltage is applied to each". ;-) So an electrician may well see parallelism in LD... heheh. But there is computer science, and from that point of view LD is not a parallel language, because it does not fit the definition of a parallel language... Alas.

BTW, multi-tasking is not parallel processing:
"parallel processing <parallel> The simultaneous use of more than one computer to solve a problem. [...]"

"http://www.InstantWeb.com/D/dictionary/foldoc.cgi?query=parallel":http://www.InstantWeb.com/D/dictionary/foldoc.cgi?query=parallel

"more than one" is the key-words.

--
Best regards,
Vladimir mailto:[email protected]

(P.S. Would you like to provide references, definitions, etc.? Without them, the words look too emotional.)
 
Vladimir:
> In the definition I can agree with, a parallel language is a language that
> provides both linguistic means to divide an algorithm into weakly-connected
> parts that can be executed independently and linguistic means to describe
> the intercommunication of these parts.

Yeah - the parts are the rungs.

> Logical parallelism is the division of an algorithm into a structured
> set of logically independent parts that allow simultaneous or
> quasi-simultaneous execution in order to reduce the complexity of the
> algorithm.

Yup. In LD, the execution is done in a fixed, predictable order, so programmers tend to break the rules often; but good LD code sticks to the parallelism (just like good Prolog code only uses green cuts, or no cuts at all).


Jiri
--
Jiri Baum <[email protected]> http://www.csse.monash.edu.au/~jirib
MAT LinuxPLC project --- http://mat.sf.net --- Machine Automation Tools
 
Jiri Baum:
> > The correct way, of course, is to have the concept of a local variable
> > - a coil/register which is only available in a couple of adjacent
> > rungs, or perhaps a subroutine, for instance. But LD doesn't provide
> > such mechanisms.

Michael Griffin:
> If you have worked with Siemens Step-7 you will have seen that you can
> indeed have local variables. They can be defined for each block, which is
> a logical unit of code.

Are they persistent from scan to scan? Otherwise they're of limited value... (still better than nothing, of course) If I need a hold-in relay for some logic, can it be a local variable? Or must it be global?
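
To make the question concrete, here is a hypothetical C sketch (the names are mine, not Step-7's) of how I'd want a "local coil" to behave: like a static variable inside a function, persistent from scan to scan but invisible to the rest of the program.

/* Hypothetical "local coil" for a hold-in (seal-in), sketched in C. */
static int motor_rung(int start, int stop)
{
    static int hold_in = 0;                   /* persists between scans, but is */
                                              /* not visible outside this rung  */
    hold_in = (start || hold_in) && !stop;    /* classic seal-in logic          */
    return hold_in;                           /* the "coil" driven by this rung */
}

Called once per scan, something like out_motor = motor_rung(in_start, in_stop); would then be the whole rung (again, purely illustrative).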

> The "identifier" which appears above the ladder contact or coil is the
> "symbol", which is separate from the "comment".

Yup.

> Very long symbols are not terribly useful, as you don't want to have to
> type in too much to select each address.

You can easily type in a 10-15 letter identifier, which is a reasonable length for a global variable (as, unfortunately, most variables in LD are). Not to mention that since you're using a development environment, it can do Tab completion (hitting Tab completes as much of the symbol as is obvious).

Certainly limiting them to six characters is ridiculous.

> Furthermore, excessively long symbols would require spreading the ladder
> diagram out too much, and make it less concise and readable.

Which is why we are saying it's an intrinsic problem of LD rather than of any particular implementation: long identifiers can't be supported, because they just don't fit.

> The comments are intended to provide the detail, and they are normally
> printed below the rung.

However, they should not be required to serve as a symbol table giving the meaning of each symbol used; that should be obvious from the symbols' names. Only when you're doing something weird to a symbol ought you to have to mention it by name.


Jiri
--
Jiri Baum <[email protected]> http://www.csse.monash.edu.au/~jirib
MAT LinuxPLC project --- http://mat.sf.net --- Machine Automation Tools
 
Vladimir Zyubin

On Thursday, March 07, 2002, Michael Griffin wrote:

MG> Very long symbols are not terribly useful, as you don't want to have to type
MG> in too much to select each address. Furthermore, excessively long symbols
MG> would require spreading the ladder diagram out too much, and make it less
MG> concise and readable. The comments are intended to provide the detail, and
MG> they are normally printed below the rung.

Just a remark:
There is also a problem when the name length is short enough but there are a lot of variables.

Have a look at the following quite readable and compact statement in textual form:

Y = X01 && X02 && X03 && X04 && X05 && X06 && X07 && X08 && X09 && X10 &&
X11 && X12 && X13 && X14 && X15 && X16 && X17 && X18 && X19 && X20;

(&& - "logic AND" in C)

If we rewrite the statement in LD, it becomes much less readable (it leads either to the cross-screen problem or to a new variable carried over to the next rung).

If "logical OR" is used instead of "logical AND", the situation in LD is even more pitiful (unreadable).

--
Best regards,
Vladimir mailto:[email protected]
 
Vladimir E. Zyubin

Hello Jiri,

JB> Vladimir:
>> In the definition I can agree with, a parallel language is a language that
>> provides both linguistic means to divide an algorithm into weakly-connected
>> parts that can be executed independently and linguistic means to describe
>> the intercommunication of these parts.

JB> Yeah - the parts are the rungs.

Yeah - and the parts are the text strings in C... (BTW, the strings are not orthogonal! They are parallel as well! ;-)

Well, the definition is not good enough... maybe there should be words about structure, hierarchy, or the word "arbitrary" [parts] in the definition.

And the following explanation may help in understanding the idea adequately... that "linguistic means to describe intercommunication include instructions to describe divergence and convergence of the parts".

>> Logical parallelism is the division of an algorithm into a structured
>> set of logically independent parts that allow simultaneous or
>> quasi-simultaneous execution in order to reduce the complexity of the
>> algorithm.

JB> Yup. In LD, the execution is done in a fixed, predictable order, so
JB> programmers tend to break the rules often; but good LD code sticks to the
JB> parallelism (just like good Prolog code only uses green cuts, or no cuts at
JB> all).

Parallelism can be implemented in C as well. The mere possibility of being used to program some kind of algorithm proves nothing. If an algorithm demands the logical-parallelism technique, it has to be described with that technique... the language does not matter... even if the language is the single instruction "subtract and jump if zero" ;-) (an instruction with which any algorithm can be described).

--
Best regards,
Vladimir mailto:[email protected]
 
Vladimir Zyubin wrote:
> ...There are two different kinds of
> independent execution (or parallelism): physical parallelism and
> logical parallelism. Physical parallelism is simultaneous execution
> on several computers in order to decrease execution time. Logical
> parallelism is the division of an algorithm into a structured set of
> logically independent parts that allow simultaneous or
> quasi-simultaneous execution in order to reduce the complexity of the
> algorithm.

Sorry about responding to such an old posting, but that last sentence sounds exactly like RLL. In fact, one could implement the RLL runtime engine so that multiple rungs run in parallel, with the only stipulation that rungs lower in the ladder that depend upon the results of a prior rung must execute after that prior rung.
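
As a hypothetical sketch (everything below is invented for illustration), such an engine could sort the rungs into "levels" so that each rung lands in a later level than every rung whose result it reads; rungs inside one level have no dependencies on each other and could be handed to worker threads, while the levels themselves still run in ladder order:

typedef void (*rung_fn)(void);       /* one compiled rung */

struct level {
    rung_fn *rungs;                  /* rungs that are independent of each other */
    int      count;
};

void run_scan(const struct level *levels, int n_levels)
{
    for (int i = 0; i < n_levels; i++) {
        /* In a truly parallel engine this inner loop would become
           "dispatch all rungs of level i to a thread pool and wait";
           here they are simply run one after another. */
        for (int j = 0; j < levels[i].count; j++)
            levels[i].rungs[j]();
    }
}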

Rufus

(oops, random thought to throw out there:

Every (?) so-called "parallel" process has sequential operations within each parallel thread of execution.)
 
Vladimir Zyubin

Rufus wrote:
LM> Sorry about responding to such an old posting, but that last sentence
LM> sounds exactly like RLL. In fact, one could implement the RLL runtime
LM> engine so that multiple rungs run in parallel, with the only
LM> stipulation that rungs lower in the ladder that depend upon the
LM> results of a prior rung must execute after that prior rung.

LD does not allow one to describe an algorithm as "a structured set of logically independent parts"... parts that are constructed by the programmer... discrete. A program in LD is what should be called "spaghetti code". There are no means in LD to structure the algorithm... at all.

And again, it would be more constructive if somebody made an alternative definition (or corrected mine).

To those who believe LD is a parallel language: hocus-pocus... a magical and unbelievable transformation... a brilliant invention... Parallel C... (a flourish sounds):

int main(void){ /* standard beginning of a C program */
for(;;){ /* the magical line that converts C into Parallel C (an endless loop) */

    /* ... Here everybody can put their code in Parallel C. The syntax of
       Parallel C and the syntax of C are the same :cool: (the power of magic ;-) ... */

}
}

:-) So, who can point out any conceptual difference between LD and this "Parallel C"? Well, Parallel C is better than LD... it allows you to structure your programs by means of functions (and much more ;-).

LM> Rufus

LM> (oops, random thought to throw out there:

LM> Every (?) so-called "parallel" process has sequential operations
LM> within each parallel thread of execution.)

Moreover, if we have only one processor we can speak only about quasi-parallelism (or quasi-simultaneous execution)... i.e. about sequential operations...

--
Best regards,
Vladimir mailto:[email protected]

P.S. Sometimes the messages do not reach all subscribers of the list (too long a distance ;-), so I put the addresses of the members interested in the topic in the CC field.
 
Michael R. Batchelor

I'm going to stick my foot in my own mouth here, because I don't always do this myself. For a *PROPERLY* designed ladder program the order of execution of the rungs is immaterial. In other words, no rung should depend on the result of a previous rung to execute properly. Sometimes the real world gets in the way, but that's how it *OUGHT* to be, IMHO.

MB
 
Michael R. Batchelor:
> I'm going to stick my foot in my own mouth here, because I don't
> always do this myself. For a *PROPERLY* designed ladder program the
> order of execution of the rungs is immaterial.

Yes, that was my point.

> In other words, no rung should depend on the result of a previous rung
> to execute properly. Sometimes the real world gets in the way, but
> that's how it *OUGHT* to be, IMHO.

Precisely. Indeed, originally each rung corresponded to a separate circuit that worked totally asynchronously to all other rungs.


Jiri
--
Jiri Baum <[email protected]> http://www.csse.monash.edu.au/~jirib
MAT LinuxPLC project --- http://mat.sf.net --- Machine Automation Tools
 
RufusVS:
> In fact, one could implement the RLL runtime engine so in fact,
> multiple rungs run in parallel, with the only stipulation that rungs
> lower in the ladder that depend upon the results of a prior rung, must
> execute after that prior rung.

No, that's just pipelining; it doesn't change what kind of language RLL is.

> (oops, random thought to throw out there:

> Every (?) so-called "parallel" process has sequential operations
> within each parallel thread of execution.)

Not necessarily. There are languages where sequential operations are optional and somewhat alien to the language, and the bulk of the sequencing is left up to the (optimizing) compiler. They're not very widely used, because that's not how people think, or at least not how programmers think.


Jiri
--
Jiri Baum <[email protected]> http://www.csse.monash.edu.au/~jirib
MAT LinuxPLC project --- http://mat.sf.net --- Machine Automation Tools
 
Vladimir Zyubin

Yes, that kind of trick reduces the reliability of a program in LD, because AFAIK LD does not regulate the order of execution... and as far as I remember, there are authoritative recommendations insisting that programs in LD shall not contain such tricks... etc.

So, the "JUMPER"-rungs I speak about in the previouse message is an ugly implementation of RLL... (true condition in a "JUMPER"-rung leads to skipping several rungs below "JUMPER"-rung)

But the problem is: I understand the purpose of the "JUMPER" rungs, but I see no _adequate_ (safe) means to resolve such situations... (A safe means shall provide localization of the condition for the skipping.)

The above is just a remark on the question of structuring algorithms in LD... i.e. it rather concerns the previous (demarcation) topic.

--
Best regards,
Vladimir mailto:[email protected]
 
Blunier, Mark

> > I'm going to stick my foot in my own mouth here, because I don't always do this myself. For a *PROPERLY* designed ladder program the order of execution of the rungs is immaterial.

I'd disagree. I think it is OK to have order-specific code, but the rungs should be close together. For example, making a one-shot:

   B      X     OS
--| |----|/|----( )

   B            X
--| |-----------( )
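
The same one-shot written out in C (just a sketch, not any vendor's instruction) shows exactly the same order dependency:

static int b, x, os;        /* b = input, x = memory bit, os = one-shot output */

void one_shot_scan(void)    /* called once per scan */
{
    os = b && !x;           /* rung 1: true only while x still holds b's old state */
    x  = b;                 /* rung 2: remember b for the next scan; swap these two
                               statements and os can never come true               */
}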

> Yes, that was my point.
>
> > In other words, no rung should depend on the result of a previous rung to execute properly. Sometimes the real world gets in the way, but that's how it *OUGHT* to be, IMHO.
>
> Precisely. Indeed, originally each rung corresponded to a separate circuit that worked totally asynchronously to all other rungs.

And originally, current could flow in either direction, not just left to right. Are there PLCs that allow logic to flow right to left?

Mark Blunier
Any opinions expressed in this message are not necessarily those of the company.
 