Fred's shared memory paper


Dan Pierson

> From: Curt Wuollet [mailto:[email protected]]
> Subject: Re: LinuxPLC: Linux PLC: ISA Article and a Charter of sorts(long)
>
> to editorialize. The ControlX thing was (I hope) a joke. Any comments on
> Fred's paper? I don't see a need for the ring buffering, but the rest
> seems like a good way to be as flexible as possible.

Seems reasonable and straightforward. The only things that worry me a bit are the Linux-enforced limit on maximum shared memory size and some vague questions about how well it would work with multiple sharing processes. The latter is really about usage conventions, not the general idea. For an example of why: consider a vision system (written in C++) that wants to communicate tightly with a controller -- say, for automatic inspection. We've had customers with requirements like that.

Dan Pierson

_______________________________________________
LinuxPLC mailing list
[email protected]
http://linuxplc.org/mailman/listinfo/linuxplc

Simon Martin

Hi Stan,

What I seem to read from your messages is that you are in favour of a flat memory space shared between processes. Each process may lock the memory in order to perform special operations, like atomic writing of a set of outputs.

There are a few big disadvantages I can see with this approach:

1) Maintaining configuration files can be a nightmare.
2) How can I restrict the access to a single data point?
3) Memory locking can be very costly in time for other processes.

What I had thought of is returning to the calling process an array of pointers to data. The functions used to write data have the in-built logic to restrict access.

I envisage the client side having something like the following functions:

io_t *RegisterIO(...);
int SetIO(...);
int GetIO(...);
int ForceUpdateIO(...);
int WaitUpdateIO(...);

int YourSuggestionsAreWelcome(...);

One question: why are you only bothered about exclusivity in the case of PhysicalIO? If processes can change the values of each other's VirtualIO, then the effects can be just as unpredictable.

Debian GNU User
Simon Martin
Project Manager
Isys
mailto: [email protected]

There is a chasm of carbon and silicon the software cannot bridge


On Sun Jan 16 20:13:03 2000 Simon Martin wrote...
>
>Hi Stan,
>
>What I seem to read from your messages is that you are in favour of a flat memory space shared between processes. Each process may lock the memory in order to perform special operations, like atomic writing of a set of outputs.

I think what I was envisioning was a series of shared memory segments. I had not completely worked out in my mind whether there should be one for each file (e.g. N7:0 -> N7:99) or one for each data type, with the file abstraction being maintained by the routines.

I know I prefer the former. It has the advantage of making run-time size changes _much_ easier. However, I am concerned about the large number of shared memory segments, and corresponding semaphores, that would need to be created for complex systems.

I must admit I only have experience with systems with small numbers ( < 100) of shared memory segments. Have you any experience with larger ones?

>
>There are a few big disadvantages I can see on this one:
>
>1) Maintaining configuration files can be a nightmare.

I think we have the configuration files for the data table well under control. They are really dirt simple. As a first pass we create and maintain them with an ASCII editor. Later, the programming/editing/documentation engine takes over this task.

>2) How can I restrict the access to a single data point?

No need: lock the whole thing and make your copy at the start of scan for the logic engine(s), and lock for updating by the I/O scanner(s), timer execution modules, et al.

>3) Memory locking can be very time expensive for other processes.

Spin-locks are common in database work. They are really cheap in resources. They are even used in kernel code.

>What I had thought of is returning to the calling process an array of
>pointers to data. The functions used to write data have the in-built logic
>to restrict access.
>
>I envisage the client side having something like the following functions:
>
>io_t *RegisterIO(...);
>int SetIO(...);
>int GetIO(...);
>int ForceUpdateIO(...);
>int WaitUpdateIO(...);

How would the I/O routines differ from the non-I/O data table routines? Or do they need to?
>
>int YourSuggestionsAreWelcome(...);
>
>One question: why are you only bothered about exclusivity in the case of
>PhysicalIO? If processes can change the values of each other's VirtualIO,
>then the effects can be just as unpredictable.

Valid question, deserving of an explanation.

In real-world day-to-day troubleshooting, one of the more common approaches is asking "what's not happening here that should?" That is, what output that should be coming on is not. From there you start backtracking through the logic that controls this output. Therefore it is critical to have exactly one starting place for this backtracking, not 2 or more.

Indeed, data passed between logic engines, or between HMIs and the logic engines, can cause unpredictable behavior. But that is not the real issue: the "single point of control" for an output point exists to give the maintenance personnel a fair shot at troubleshooting the system.

Comments?


--
Stan Brown [email protected] 843-745-3154
Westvaco
Charleston SC.


Curt Wuollet

Stan Brown wrote:
>
> On Sun Jan 16 20:13:03 2000 Simon Martin wrote...
> >
> >Hi Stan,
> >
> >What I seem to read from your messages is that you are in favour of a flat
> >memory space shared between processes. Each process may lock the memory in
> >order to perform special operations, like atomic writing of a set of
> >outputs.

Hold on there, guys. Are we still talking about the page of shared memory outside the Linux map per Fred's paper? If so, I'm getting confused by references to segments and locking. This is not SysV shared memory and, I would expect, although I'm not sure, that it would be outside "normal" locks and IPCs. Locks and "semaphores" would be implemented by the coder. The structures that live in that space either need to be declared at compile time, so the compiler can handle alignment, or set to absolute physical addresses and carefully typed, so that they can be mmap()'ed by a userland program. Also, we have a 4 MB limit on current machines and a 1 MB limit on older processors. Huge tables will also discourage use on SBCs and embedded systems.

If we're now talking about shmem inside the Linux mm space, we sacrifice the ability to access these structures, tables, whatever, from RTLinux. I think it would be nifty, and may be necessary, to have that capability. Please, let's clarify, as this has major implications for dynamic allocation.

If we want "normal" memory and RTLinux, we would have quite a bit of data in fifos in the event of a failure. Or short fifos and a _really_ fast "shared memory manager" or I/O daemon to service them.

Just want to make sure we're on the same page, literally. :^)

cww


Simon Martin

Curt,

I refer to "lock" in its generic sense.

Debian GNU User
Simon Martin
Project Manager
Isys
mailto: [email protected]

There is a chasm of carbon and silicon the software cannot bridge

----- Original Message -----
From: Curt Wuollet

Hold on there, guys. Are we still talking about the page of shared memory outside the Linux map per Fred's paper? If so, I'm getting confused by references to segments and locking. This is not SysV shared memory and, I would expect, although I'm not sure, that it would be outside "normal" locks and IPCs. Locks and "semaphores" would be implemented by the coder. ...<clip>



Phil Covington

If some of the drivers are RTLinux modules, then doesn't it follow that the Logic Engine should be on the RTLinux side also? What is the point of having real time deterministic I/O but a non-deterministic Logic Engine? Also, if some of the I/O drivers use TCP/IP and Ethernet, then they cannot be on the RTLinux side, as NMT's RTLinux doesn't have access to the Linux kernel's services. KU Real Time Linux (KURT) may be a better fit for this. At least with KURT you have access to the Linux kernel's services. Or am I off in left field here? Just some thoughts...

Phil Covington
vHMI


On Mon Jan 17 19:33:41 2000 Curt Wuollet wrote...
>
>This is just the discussion I would like to see. Do we accomodate RTL or not.
>If we need realtime we need to think this through.

I would like to get something out the door without adding in the complexities of RTL.

Hardware PLCs are not deterministic, they are just FAST. I have written an application where the response time is sub 90 msec. Response time in this case includes the actual operation of 15 kV power breakers, which have an operation time of 3 power line cycles (16.5 ms per cycle, so roughly 50 ms of that budget).

So as you can see, the response can be kept in a reasonable range by a careful application programmer.


--
Stan Brown [email protected] 843-745-3154
Westvaco
Charleston SC.

Phil Covington

Then the LinuxPLC will be no better than most (and worse than a few) Windows NT based soft PLCs. At least some of the NT based systems add real-time extensions to NT. The normal Linux kernel can easily be more than 20 ms late for a periodic task. Without real time capability, the LinuxPLC will not be acceptable for medium and high speed processes, which will severely limit the acceptance of this project. Linux is a general purpose OS, after all...

With a PLC, for any given application and conditions, I can pretty well predict what the response time will be ( and it will be consistent). With normal Linux this response time will vary depending on system load and, especially, disk activity.

It would be smart to decide early on how to accommodate real time tasks with the LinuxPLC.

Just my .02...

Phil Covington
vHMI

On Mon, Jan 17, 2000 at 06:33:41PM -0600, Curt Wuollet wrote:
> This is just the discussion I would like to see. Do we accomodate RTL or not.
> If we need realtime we need to think this through.

I think some applications will need real-time behavior, though many will not. "What is real time" is a not-infrequent subject on the Automation List, and it is evident to me that there are as many definitions for real time, hard, soft, deterministic, etc., as there are folks offering definitions.

It is certainly a non-trivial issue, and I'd imagine it would be possible to build a perfectly functional Linux Open Controller/PLC that just cannot work for some real time applications, due to certain design decisions. I don't know what can be done up front to keep the real time options open, but perhaps those with the need and experience might weigh in on the subject.

In what I've read of the Real Time Linux project (one of them), it is an interrupt-driven layer beneath the Linux OS, but with certain means of
communicating between the RTL layer and the OS. I think it is essential that the RTL stuff be as brief and limited as possible, e.g., perhaps
by setting flags/registers which are then addressed in the non-real time kernel or user space. I don't think it is reasonable to stuff an entire logic engine, for instance, in the real time part.

If real time is defined in terms of scans or response time, someone will always come along with a more stringent requirement, so saying that a 1, 10, or 100 msec scan is sufficient isn't going to cut it.

The Linux Lab Project and other similar efforts have surely beaten this subject to death more than once, and it would be good to look into what
those projects have found.

Ken

--
Ken Irving
Trident Software
[email protected]



Curt Wuollet

Hi Stan,

We're fortunate here in that the only concession we need to make to allow RTL in the future is to use memory accessible by both. Not using that type of memory excludes use of RTL unless we use fifos, and fifos are not a good fit for what we are doing. That's why I asked (indirectly) if all this magic will work with that type of memory. In the real world, we will probably want it or need it. As long as we stay with that form, we won't have to rewrite if we need RTL. We can stay in userland for now; indeed, the driver I'm writing uses the kernel networking services, and I'm not even gonna try to figure out how to do that from RTL. We can play around with the jiffy timer some, but really fast performance may require both. I was making this assumption initially, but in the ensuing discussion I am hearing things that I'm not sure are doable this way. Maybe, but I'm not sure. Those who are writing code need to check this out.

Curt Wuollet,
WOT


Curt Wuollet

You were saying you wanted to get something out without the complexities of RTLinux. I am saying I agree, but we want to preserve that option because we may need it. That requires that we stick with the original shared memory scheme. I'm not sure some of the things proposed are compatible with that. Since much of the detail is known only to the coder, they will need to check it out if we are to preserve that functionality.
cww

In my experience there is an argument (of sorts), not as big as the ladder/no-ladder argument, amongst PLC programmers on this very point.

Some of us make the program run as fast as possible by only scanning appropriate ladder, e.g. not scanning the lift logic if the lift is not in operation.

Others scan everything in the hope of achieving more consistent scan times.

Even in the latter case, PLC operation is not particularly deterministic, and scan times can vary greatly depending on the state of an input. In most cases this does not affect the resulting application.

RT would be a great option, but IMHO not necessarily for an early release.

Of course, logic dictates that an understanding of the requirements of RTL is necessary to avoid painting ourselves into a corner over RT
implementation at a later date.