Reply to Jiri's system-level description

Thread Starter

Ken E.

Thanks, Jiri, for clarifying some of these issues.

Each part (module) of the lPLC runs as a separate process, each linked against the linuxplc.a library[1].

OK ... I'm all for multiple processes ...

Example modules might be: logic (RLL) program, HMI, http-server, I/O [2], bus I/O/comm. Any particular installation will run only a few of the
available modules.

Hmmmm..... How do drivers get synchronized with the logic solve??
From this description they are all running asynchronously to each other through separate processes. Are we using system events to
synchronize these processes or what?? Ideally you want the Input Driver Scan to run, logic to run, and then the Output driver to run. Is this being done??


The linuxplc.a library provides various services, perhaps the most important being config, data map and synch. In turn:

- config - provides for the configuration of various lPLC
  parameters from a single file, by default linuxplc.conf. There
  are private sections for each module (there is provision for
  several of a given module to run simultaneously), as well as
  sections used by all modules, eg [PLC] and [SMM].


OK.

- data map - provides the familiar data points, corresponding to
  PLC inputs, outputs and internal coils/registers. Points of 0
  to 32 bits are supported, 1-bit points corresponding to simple
  coils or relays.


OK ... This sounds like a master IO table (even though internals are included). Steeplechase uses what they call a universal IO Table as their mechanism, which works great. Tags get compiled to a pointer and an offset into this table and then get modified and read by the logic scan. It is interesting to note that they include "Force" data in the table so that they can quickly mask tags to see if there is a force condition applied by the debugging program.

There should be added data types to this, resembling C except they will not be compiler/machine specific (i.e., integers are always 32-bit signed and floats are always 64-bit ..... you get the idea). I think I saw mention of some data types in earlier discussion. IO types of 8-, 16-, and 32-bit words need to be supported as well.


As in a standard PLC, the data map is buffered so that changes
to a point do not appear on the outputs (or HMI screen, or to
other modules) until the end of the cycle.

OK


Unlike a standard PLC, no distinction is made between inputs,
outputs and internal coils. It is recommended that the user
adopt a naming scheme that avoids confusion.


I don't see how this is going to fly. How will driver modules know which tags to process if they don't know which ones are inputs and which ones are outputs?? Is this done as a part of this data map you were discussing??


- synch - this provides for (optional) synchronisation between
  the cycles of the various modules. By default each module runs
  free; synchronisation can lower CPU usage and provide more
  uniform latencies.

  Some modules (eg PID) require that they be synchronised to
  certain of their inputs, as they calculate deltas.


All my experience tells me that synchronization should not be optional. I have seen numerous occasions where NOT having synchronization of input, logic, and output stages has caused extremely flaky program errors. It also requires more careful programming practices.


[1] This should be linuxplc.so, but I've no experience creating shared libraries and neither, it seems, has anybody else. So linuxplc.a for
now.

[2] There is some argument whether there should be one I/O module or many, but in practice I think the point is moot as most installations will
only use one kind of I/O anyway, and hence only one I/O module.


This is definitely not moot. A lot of applications use different types of IO at the same time. Some industrial plants use dozens of IO systems across their floor (although for the first stage of this project, we can assume a few different devices will be connected, since IO drivers will have to be written first!!!). Since we are on the subject, what are the hooks between the IO Table and the device drivers?? I suggest that each device driver gets linked to the IO table at compile time by the use of some IO config file.

~Ken


_______________________________________________
LinuxPLC mailing list
[email protected]
http://linuxplc.org/mailman/listinfo/linuxplc
 
Jiri Baum:
> > Example modules might be: logic (RLL) program, HMI, http-server, I/O
> > [2], bus I/O/comm. Any particular installation will run only a few of
> > the available modules.

Ken Emmons, Jr.:
> Hmmmm..... How do drivers get synchronized with the logic solve??

There are two parts to this answer:

- 1 - they don't need to be. Data isn't exchanged with the logic
  program until the end/beginning of its cycle. This provides for
  the expected behaviour without making the logic program always
  wait for the slowest I/O.

- 2 - through the (optional) synch library.

> From this description they are all running asynchronously to each other
> through separate processes.

Yes, by default.

> Are we using system events to synchronize these processes or what??
> Ideally you want Input Driver Scan to run, logic to run, and then Output
> driver to run. Is this being done??

As far as the logic program is concerned, yes. However, while it is running, I/O drivers are (asynchronously) handling the previous output scan
and preparing the next input scan, and the HMI is (asynchronously) updating the screen.

In effect, I/O scan time is reduced to near-zero at the cost of increased latency.

> > The linuxplc.a library provides various services, perhaps the most
> > important being config, data map and synch. In turn:
...
> > - data map - provides the familiar data points, corresponding to
> > PLC inputs, outputs and internal coils/registers. Points of 0 to 32
> > bits are supported, 1-bit points corresponding to simple coils or
> > relays.

> OK ... This sounds like a master IO table (even though internals are
> included) Steeplechase uses what they call a universal IO Table as
> their mechanism, which works great.

I'll add these two expressions to the document.

> Tags get compiled to a pointer and an offset into this table and then get
> modified and read by the logic scan.

This happens at start-up time - that's what the plc_pt_by_name() function does. It takes a tag name and returns a struct with a (relative) pointer, offset, bitmask and length.

> It is interesting to note that they include "Force" data in the table so
> that they can quickly mask tags to see if there is a force condition
> applied by the debugging program.

There are no forces at present. They could be implemented relatively simply by taking away "write" rights to a point. (Hmm, this makes them part of the online-change monster.)

(What are the correct semantics for forces? The above would apply the forces at I/O scan time; (re)setting a forced point would change its value
for the rest of the scan for that module, then it would revert.)

> There should be added data types to this, resembling C except they will
> not be compiler/machine specific (i.e., integers are always 32-bit signed
> and floats are always 64-bit ..... you get the idea). I think I saw
> mention of some data types in earlier discussion.

Not yet done.

> IO types of 8-, 16-, and 32-bit words need to be supported as well.

They are. Any number of bits from 0 to 32 inclusive.

> > Unlike a standard PLC, no distinction is made between inputs, outputs
> > and internal coils. It is recommended that the user adopt a naming
> > scheme that avoids confusion.

> I don't see how this is going to fly. How will driver modules know
> which tags to process if they don't know which ones are inputs and
> which ones are outputs??

The drivers need to be told which driver should process which tags anyway, so this also covers which ones should be processed at all.

Note that a single tag can be an input to one driver and an output to another, for "traffic cop" applications.

> > - synch - this provides for (optional) synchronisation between the
> > cycles of the various modules. By default each module runs free;
> > synchronisation can lower CPU usage and provide more uniform
> > latencies.

> > Some modules (eg PID) require that they be synchronised to certain
> > of their inputs, as they calculate deltas.

> All my experience tells me that synchronization should not be optional. I
> have seen numerous occasions where NOT having synchronization of input,
> logic, and output stages has caused extremely flaky program errors. It
> also requires more careful programming practices.

The data map library buffers everything, to take care of this problem.

At the beginning of its cycle, the logic program obtains a private copy of the global data map. Then it goes away and does whatever it does in its
private copy - no new data is allowed in, no data is written out. When it's finished, all the outputs and other coils/registers that it should write get copied into the global data map.

This copying is atomic, indivisible, so that all the results of the program get written simultaneously. If you change two registers together in the logic program, the other modules will first see both old values and then both new values - never a mix.


(Actually, it is possible to specifically request particular points to be read in or written out in the middle of logic, but if you do that it's your
own lookout.)

> > [2] There is some argument whether there should be one I/O module or
> > many, but in practice I think the point is moot as most installations
> > will only use one kind of I/O anyway, and hence only one I/O module.

> This is definitely not moot. A lot of applications use different
> types of IO at the same time. Some industrial plants use dozens of IO
> systems across their floor

OK, maybe that was a bit too strong, but in general I don't think having several I/O modules running will be a problem.

> Since we are on the subject, what are the hooks between the IO Table and
> the device drivers?? I suggest that each device driver gets linked to the
> IO table at compile time by the use of some IO config file.

It is linked at load (start-up) time based on the linuxplc.conf file. When on-line change support gets written, it will also be possible to re-link drivers to points at any time.

Jiri
--
Jiri Baum <[email protected]>
What we do Every Night! Take Over the World!

 
Philip Costigan

On Thu, 02 Nov 2000, Jiri Baum wrote:

> > It is interesting to note that they include "Force" data in the table so
> > that they can quickly mask tags to see if there is a force condition
> > applied by the debugging program.
>
> There are no forces at present. They could be implemented relatively simply
> by taking away "write" rights to a point. (Hmm, this makes them part of the
> online-change monster.)
>
> (What are the correct semantics for forces? The above would apply the
> forces at I/O scan time; (re)setting a forced point would change its value
> for the rest of the scan for that module, then it would revert.)


Some PLCs only allow forcing on external I/O and others allow forcing on everything. If we choose to only allow forcing on external I/O then we can use the recently added function io_status_pt() and modify it to be something like

io_parameter_pt( const char *base, const char *suffix, int loglevel);

and this point can drive io points, digital or analog.

eg. io_parameter_pt( tag, "force_val", 4 );
io_parameter_pt( tag, "force_en", 4 );

The linuxplc.conf would look something like

point mcn1filsol "machine 1 fill solenoid" some_module at 0.0
point mcn1filsol.force_val "force bit for solenoid" forcing_module at 0.1
point mcn1filsol.force_en "enable force on this bit" forcing_module at 0.2

It could then also be used for setting up analog cards' ranges from module control instead of hard-coding them (for some devices anyhow).

eg.

point vat1temp "vat 1 temperature" io_module at 47
point vat1temp.setup "0=4-20mA 1=0-10V" setup_module at 48.0 3

I can't, at this stage, see that forcing everything will be achievable any time soon, but what I propose here may be achievable for version 0.1 or 0.2.

Let me know your thoughts.

--

Regards

Philip Costigan

 
Jiri Baum:
> > There are no forces at present. They could be implemented relatively
> > simply by taking away "write" rights to a point. (Hmm, this makes them
> > part of the online-change monster.)

Philip Costigan:
> Some PLC's only allow forcing on external I/O and others allow forcing on
> everything. If we choose to only allow forcing on external I/O then we
> can use the recently added function io_status_pt() and modify it to be
> something like

> io_parameter_pt( const char *base, const char *suffix, int
> loglevel);
...
> The linuxplc.conf would look something like

> point mcn1filsol "machine 1 fill solenoid" some_module at 0.0
> point mcn1filsol.force_val "force bit for solenoid" forcing_module at 0.1
> point mcn1filsol.force_en "enable force on this bit" forcing_module at 0.2

I don't like this - neither the io modules nor any other modules should need to be coded explicitly for forcing. It should be a feature of the library, applied equally to all modules regardless of kind.

In some ways, it's a question of elegance - at present, there's no difference whatsoever between io modules, logic modules, HMI modules etc.
I'd like to keep that. But it's also a practical matter - this should be implemented once, in the library, where io-module writers don't have to
worry about it.

That, plus forcing internal coils.

> It could then also be used for setting up analog cards' ranges from module
> control instead of hard-coding them (for some devices anyhow).

> eg.

> point vat1temp "vat 1 temperature" io_module at 47
> point vat1temp.setup "0=4-20mA 1=0-10V" setup_module at 48.0 3

That particular example doesn't sound very useful, since this will not normally change during a program run. However, I can imagine other parameters (scaling, offset, zero) being calibrated on-line, and this would indeed be the right way to do them.

Perhaps a separate function, though, as these parameters will not suffer a null point. (A null point is always zero.)

> I can't, at this stage, see that forcing everything will be achievable
> any time soon, but what I propose here may be achievable for version 0.1
> or 0.2.

It's actually not very difficult conceptually; like I said, though, it's the beginning of the on-line change thing, which hasn't been conceived yet.

(We can, of course, hack up a single-purpose thing that can only change the ownership of points, but it'd be nice to have a vision for the whole on-line change architecture first.)


Jiri
--
Jiri Baum <[email protected]>
What we do Every Night! Take Over the World!

 
Philip Costigan

On Thu, 02 Nov 2000, Jiri Baum wrote:

> > point mcn1filsol "machine 1 fill solenoid" some_module at 0.0
> > point mcn1filsol.force_val "force bit for solenoid" forcing_module at 0.1
> > point mcn1filsol.force_en "enable force on this bit" forcing_module at 0.2
>
> I don't like this - neither the io modules nor any other modules should
> need to be coded explicitly for forcing. It should be a feature of the
> library, applied equally to all modules regardless of kind.

That's cool.

> That, plus forcing internal coils.
>
> > It could then also be used for setting up analog cards' ranges from module
> > control instead of hard-coding them (for some devices anyhow).
>
> > eg.
>
> > point vat1temp "vat 1 temperature" io_module at 47
> > point vat1temp.setup "0=4-20mA 1=0-10V" setup_module at 48.0 3
>
> That particular example doesn't sound very useful, since this will not
> normally change during a program run. However, I can imagine other
> parameters (scaling, offset, zero) being calibrated on-line, and this
> would indeed be the right way to do them.
>

Conceptually we probably should still have something to go the opposite way to io_status_pt(), but I suppose we can leave it until a good reason arises.

> (We can, of course, hack up a single-purpose thing that can only change the
> ownership of points, but it'd be nice to have a vision for the whole
> on-line change architecture first.)
>

I suppose if we get it right the first time it'll save a lot of wasted coding time. And that should keep everyone happy :)


Regards

Philip Costigan


 
If synchronization is required, perhaps those modules that require a common synchronization should have a shared private memory map, and be round-robin scheduled between table updates, but that should be an implementation detail left to the user.

On on-the-fly updates:

Good Luck!

But seriously, while people clamor for this feature and call it a deal-breaker (I don't necessarily disagree), some rules will likely have to be followed if it is implemented.

In a scanned language, where we go in the top and come out of the bottom on every scan and all the variables have been predefined in my memory map, hot modifications are trivial: you can wait until the I/O update portion of the cycle, and the next time you need a logic solve, plug in the new code. For that matter, if you are running ladder logic, you can swap a rung as long as it is not the currently processing one.

There are no initialization, stack frame, local variable, or current-state issues.

Because of these issues and more, hot swap folks may find themselves limited to IEC 61131-3 languages, which is OKAY, because that would be
just a subset of the capabilities of this unit, anyway, right?


True hotswappable C++ code, if even possible, would put a lot of the onus on the programmer, compiler, and operating system. The entire stack frame would need to be adjusted and shuffled if argument counts and types changed. Code that called the new routine would have to be adjusted too. Return addresses would be modified to suit the new code. Pointers pointing into the stack would have to be recognized and adjusted. (I'm starting to get a headache...) The programming guidelines and constraints would likely kibosh any perceived benefits. It would be like learning another language.


 
[email protected]:
> On I/O versus Logic:

> In the abstract, what is the difference between an I/O driver and a logic
> solve module? There really is none, other than perhaps, the physical
> reality that a given point actually has an input or output direction
> defined by its real-world function.

Agreed.

> Or suppose I want to program a simulator for a yet-unwritten I/O driver
> or yet-unbuilt hardware? Does that go in the I/O driver space or the
> logic solve module space?

They are the one space.

> On synchronization:

> If synchronization is required, perhaps those modules that require a
> common synchronization should have a shared private memory map, and be
> round-robin scheduled between table updates, but that should be an
> implementation detail left to the user.

Interesting idea! Currently they don't... However, this only applies to fully-synchronized modules. If they are only partially synchronized (for instance A -> B, so that B waits for A but not vice versa) then this cannot be used.


> On on-the-fly updates:
...
> In a scanned language
...
> You can wait until the I/O update portion of the cycle, and the next time
> you need a logic solve, plug in the new code.

Yes.

> For that matter, if you are running ladder logic, you can swap a rung as
> long as it is not the currently processing one.

No, because it might rely on an internal coil that would have been set by a newly-inserted, not-yet-calculated rung.

> True hotswappable C++ code, if even possible, would put a lot of the
> onus on the programmer, compiler, and operating system.

Hmm, true hotswappable C++ code suddenly reminded me of something I knew for a long time - persistent systems. Those are systems where instead of saving a file, you simply minimize your application - and the system is designed to cope with that, suspend and resume, etc.

Upgrading an application under these circumstances is an interesting problem which *has* received some attention - but that's about all I know. I'd have to look it up.

> The entire stack frame would need to be adjusted and shuffled if argument
> counts and types changed. Code that called the new routine would have to
> be adjusted too. Return addresses would be modified to suit the new
> code. Pointers pointing into the stack would have to be recognized and
> adjusted. (I'm starting to get a headache...)

Most of these can be avoided by treating the stack as a linked list. It almost is, you know, the "saved frame pointer" being the "next" field.

> The programming guidelines and constraints would likely kibosh any
> perceived benefits. It would be like learning another language.

I'm not so sure. If any currently-executing routine continued being the old one, I can see how it might be done - load the new routines in, and garbage collect the old. Any return addresses on the stack would keep the old routines in as long as necessary, then they too would be garbage collected.

But I'd suggest reading the literature on persistent systems first.


Jiri
--
Jiri Baum <[email protected]>
What we do Every Night! Take Over the World!

 
Philip Costigan

On Fri, 03 Nov 2000, [email protected] wrote:

> On on-the-fly updates:
>
> True hotswappable C++ code, if even possible, would put a lot of
> the onus on the programmer, compiler, and operating system. The entire stack
> frame would need to be adjusted and shuffled if argument counts and types
> changed. Code that called the new routine would have to be adjusted too.
> Return addresses would be modified to suit the new code. Pointers pointing
> into the stack would have to be recognized and adjusted. (I'm starting to get
> a headache...) The programming guidelines and constraints would likely
> kibosh any perceived benefits. It would be like learning another language.
>

I was under the impression that to update a running module one would load the newly updated module into memory, stop the original module using the sync functions, start the new module, and then kill the original.

If all of the important variables are stored in the shared memory then it should not be a problem. I might be oversimplifying here, but have I got the right idea?


Regards

Philip Costigan

 
[email protected]:
> > On on-the-fly updates:

> > True hotswappable C++ code, if even possible, would put a lot of the
> > onus on the programmer, compiler, and operating system.
...
> > (I'm starting to get a headache...)
...

Philip Costigan:
> I was under the impression that to update a running module one would load
> the newly updated module into memory and then stop the original module

That only works for pure or near-pure cycle-oriented programs.

The SMM doesn't limit programs to being cycle-oriented.

A C program may, for instance, have several/many different loops in different functions which call each other in a structured manner.

Even something as simple as a PID module will have internal state (for the I and D parts) that probably won't be in the globalmap.

Jiri
--
Jiri Baum <[email protected]>
What we do Every Night! Take Over the World!

 
Ahnen, Richard

I know this may be a little simplistic (and may be a little bit of a memory hog), but is it possible to have 2 copies of the runtime logic in memory at once? Where one copy is the current active runtime and the other the editable target...

When an online edit is "Tested" or "Accepted", the following scan points to the top of the edited version.


 
In a message dated 00-11-03 04:37:05 EST, [email protected] writes:
>
> [email protected]:
>
> > On on-the-fly updates:
> ...
> > In a scanned language
> ...
> > You can wait until the I/O update portion of the cycle, and the next time
> > you need a logic solve, plug in the new code.
>
> Yes.
>
> > For that matter, if you are running ladder logic, you can swap a rung as
> > long as it is not the currently processing one.
>
> No, because it might rely on an internal coil that would have been set by a
> newly-inserted, not-yet-calculated rung.
>

Oops, I had limited my thinking to a single insertion per scan.

> > True hotswappable C++ code, if even possible, would put a lot of the
> > onus on the programmer, compiler, and operating system.
>
> Hmm, true hotswappable C++ code suddenly reminded me of something I knew
> for a long time - persistent systems. Those are systems where instead of
> saving a file, you simply minimize your application - and the system is
> designed to cope with that, suspend and resume, etc.

Sounds like stuff they do (or should do) in PDAs.

> > The entire stack frame would need to be adjusted and shuffled if argument
> > counts and types changed. Code that called the new routine would have to
> > be adjusted too. Return addresses would be modified to suit the new
> > code. Pointers pointing into the stack would have to be recognized and
> > adjusted. (I'm starting to get a headache...)
>
> Most of these can be avoided by treating the stack as a linked list. It
> almost is, you know, the "saved frame pointer" being the "next" field.
>
> > The programming guidelines and constraints would likely kibosh any
> > perceived benefits. It would be like learning another language.
>
> I'm not so sure. If any currently-executing routine continued being the old
> one, I can see how it might be done - load the new routines in, and garbage
> collect the old. Any return addresses on the stack would keep the old
> routines in as long as necessary, then they too would be garbage collected.

Interesting. I was thinking about how, if a routine's argument list changed between insertions, the callers wouldn't call it correctly; but this wouldn't be a problem, because name mangling will prevent the old-style caller from calling the new-style routine.

Then a hot-swapped routine with a changed argument list isn't fully swapped until all callers have been changed to the new calling format. Which is as it should be.

(Now that I write that and read it back I see a flaw: A mangled name will change the linkage only
if the function signatures change. The signatures could be the same but the arguments represent different things in the new version... Oh well...)

> But I'd suggest reading the literature on persistent systems first.
>

Sounds like interpreted languages may have an edge in being able to have all those features (Java, anyone?)


Rufus

 
Ahnen, Richard:
> I know this may be a little simplistic (and may be a little bit of a
> memory hog), but is it possible to have 2 copies of the runtime logic in
> memory at once? Where one copy is the current active runtime and the
> other the editable target...

Yes, no problem so far; the problem comes in the next bit:

> When an online edit is "Tested" or "Accepted" the following scan points
> to the top of the edited version.

The C++ code might not have a well-defined concept of "top of the edited version". RLL does, of course, and there's no problem there; but C++ can have arbitrary structure.

Or it might have data that it accumulates over several cycles. It might not necessarily be obvious how to get new-version data out of old-version data, even to a human, much less to a computer.

(In some cases, it might be quite a research project how to get new-version data out of old-version data, if an opaque approach like NN or GA is used.)

Jiri
--
Jiri Baum <[email protected]>
What we do Every Night! Take Over the World!

 
[email protected]:
> > > True hotswappable C++ code, if even possible, would put a lot of
> > > the onus on the programmer, compiler, and operating system.
[big snip]

Jiri Baum:
> > But I'd suggest reading the literature on persistent systems first.

[email protected]:
> Sounds like interpreted languages may have an edge in being able to have
> all those features (Java, anyone?)

Not necessarily... you just need to have support for it in the compiler/interpreter (or at least the linker). Once you have that, it doesn't really matter and the usual tradeoffs apply.

Jiri
--
Jiri Baum <[email protected]>
What we do Every Night! Take Over the World!

 
So I take it that the logic code in its primitive state is going to be C++?? This seems like overkill to me. I don't understand why RLL or IEC, or flow, or whatever interface needs to "compile" or "translate" into C++.

I guess I don't see RLL needing to use an object structure ...

:o)

Are you guys intending this thing to be programmed with C++ in addition to RLL, etc. ????

What about good ole efficient, reliable ANSI C ???

*** Disclaimer *** I am not a proficient C++ programmer, but I am pretty good in ANSI C and understand the basic concepts of objects
.... Perhaps this is a bias, but I have seen too many people abuse C++ where regular C would be better .....

~Ken

 
Ken Emmons, Jr.:
> So I take it that the logic code in its primitive state is going to be
> C++?? This seems like overkill to me. I don't understand why RLL or IEC,
> or flow, or whatever interface needs to "compile" or "translate" into C++.

It'll compile (if it compiles) into whatever the author of the translator is comfortable with, probably C.

Somebody up-thread asked about C++, but all of this discussion applies equally to C (and doesn't apply at all to programs translated from RLL).

> Are you guys intending this thing to be programmed with C++ in addition
> to RLL, etc. ????

Yes. That's where the problem comes in - you can't take arbitrary running C++ and transform it into arbitrary other running C++. Not easily, anyway.

> What about good ole efficient, reliable ANSI C ???

Yes. That's the native interface.


Jiri
--
Jiri Baum <[email protected]>
What we do Every Night! Take Over the World!

 