persistent data
One of the basic concepts of a PLC is that the data table is retained upon program stop, whether that be manually stopping the scan, or the actual powering down of the controller.

Has anyone put any thoughts into how we can handle this? In a real PLC there is genuine battery-backed RAM, so there is no "save on power failure"; we unfortunately don't have that luxury. Nor can we depend upon every installation having a UPS.

Anyone know a way around this that doesn't cause us to spend a lot of time writing the changing data table to the disk? Or are we going to have to have a task for this? If so, we need to look at some of the database projects; they have similar issues with recovery logs.

--
Stan Brown stanb@netcom.com 843-745-3154
Westvaco
Charleston SC.
_______________________________________________
LinuxPLC mailing list
LinuxPLC@linuxplc.org
http://linuxplc.org/mailman/listinfo/linuxplc

By Locke, Alan S on 14 January, 2000 - 7:03 pm

>On Fri Jan 14 15:50:39 2000 "Sage, Pete (IndSys, GEFanuc, Albany)" wrote...
>>>Unless you synch the data on every write you will lose data if someone
>>switches the PC off. Syncing the data on every write will kill your
>>performance. A reasonable technique is to configure the shared memory as a
>>memory mapped file, this will give you persistence. Periodically you can
>>flush it to disk.
>Well, I was thinking of a process whose job it is to scan the data tables, and
>write any changes it finds to the disk files. I realize this is a performance issue,
>_but_ it is critical to the operation of the process, and it is a problem that has been
>solved by the database code writers, they can't lose data either. You would hate to
>have your savings deposit deducted from your checking account, but never credited
>to your savings account because of a computer crash, now wouldn't you :-)
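Pete's memory-mapped-file suggestion, quoted above, can be sketched roughly like this (Python used purely for illustration; the table size and file path are assumptions, not LinuxPLC code):

```python
# Sketch of the memory-mapped-file technique: the shared data table is
# backed by a file, writes go to memory at full speed, and a periodic
# flush pushes dirty pages to disk. Illustrative only.
import mmap
import os

DATA_TABLE_SIZE = 4096  # assumed size of the shared data table


def open_data_table(path):
    # Create or extend the backing file, then map it into memory.
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o644)
    os.ftruncate(fd, DATA_TABLE_SIZE)
    table = mmap.mmap(fd, DATA_TABLE_SIZE)
    os.close(fd)  # the mapping keeps its own reference to the file
    return table


def flush(table):
    # Periodic flush: sync the mapping to disk, instead of syncing
    # on every single write (which would kill performance).
    table.flush()
```

Ordinary slice assignments into the returned `table` object then behave like writes to battery-backed RAM, up to the last flush.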

My understanding is that software PLC vendors have addressed this issue by using a battery-backed flash drive (RAM) and that they write the data tables to this drive every scan. As a machine integrator type, I would also expect to install a UPS with a software PLC installation and to do the power-loss wiring to the PLC for orderly shutdown.

IMHO the data tables must be saved every scan. The end user could configure the PLC to save only a portion of the data table, depending on the application, but not saving them every scan could really mess up a machine once repowered.



On Fri Jan 14 19:03:12 2000 Locke, Alan S wrote...
>
>>On Fri Jan 14 15:50:39 2000 "Sage, Pete (IndSys, GEFanuc, Albany)" wrote...
>>>>Unless you synch the data on every write you will lose data if someone
>>>switches the PC off. Syncing the data on every write will kill your
>>>performance. A reasonable technique is to configure the shared memory as a
>>>memory mapped file, this will give you persistence. Periodically you can
>>>flush it to disk.
>>Well, I was thinking of a process whose job it is to scan the data tables, and
>>write any changes it finds to the disk files. I realize this is a performance issue,
>>_but_ it is critical to the operation of the process, and it is a problem that has been
>>solved by the database code writers, they can't lose data either. You would hate to
>>have your savings deposit deducted from your checking account, but never credited
>>to your savings account because of a computer crash, now wouldn't you :-)
>
>My understanding is that software PLC vendors have addressed this issue by using a battery backed up flash drive (ram) and that they write the data tables to this drive every scan. As a machine integrator type, I would also expect to need to install a UPS with a software PLC installation and to also do the power loss wiring to the PLC for orderly shutdown.

Good point. However, I wish we could come up with a better solution. Flash RAM is expensive, and I most certainly don't put all of my PLCs on UPSes.

>IMHO the data tables must be saved every scan. The end user could configure the PLC to save only a portion of the data table, depending on the application, but not saving them every scan could really mess up a machine once repowered.

Yep.

--
Stan Brown stanb@netcom.com 843-745-3154
Westvaco
Charleston SC.



By Butler, Lawrence on 15 January, 2000 - 12:44 am

Perhaps we consider configuring which data is persistent to minimize disk
writes....

LB

> -----Original Message-----
> From: Stan Brown [SMTP:stanb@awod.com]
>
> One of the basic concepts of a PLC is that the data table is retained
> upon program stop, whether that be manually stopping the scan, or the
> actual powering down of the controller.
>
> Has anyone put any thoughts into how we can handle this? In a real PLC
> there is genuine battery-backed RAM, so there is no "save on power
> failure"; we unfortunately don't have that luxury. Nor can we depend
> upon every installation having a UPS.
>
> Anyone know a way around this that doesn't cause us to spend a lot of
> time writing the changing data table to the disk? Or are we going to
> have to have a task for this? If so, we need to look at some of the
> database projects; they have similar issues with recovery logs.

Sage, Pete (IndSys, GEFanuc, Albany):
> >Unless you synch the data on every write you will lose data if someone
> >switches the PC off. Syncing the data on every write will kill your
> >performance. A reasonable technique is to configure the shared memory
> >as a memory mapped file, this will give you persistence. Periodically
> >you can flush it to disk.

Stan Brown:
> Well, I was thinking of a process whose job it is to scan the data
> tables, and write any changes it finds to the disk files.

The only data tables that need to be written to disk are the "internal coils", aren't they?

The place in the architecture where this fits is among the I/O drivers: just another set of points, except instead of interfacing to a PLC it'll interface to a disk file.

You'll probably lose a few seconds worth of data in a crash, but I don't think that can really be helped.

The advantage of doing it like this is that if a few seconds' loss is unacceptable, you load the battery-backed-RAM driver instead.
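Jiri's idea of treating persistence as just another I/O driver could be sketched like this (Python purely for illustration; the `IODriver` interface here is hypothetical, not the actual linuxPLC API):

```python
# Sketch of "persistence as an I/O driver": the PersistentData driver
# implements the same point-writing interface as any field-I/O driver,
# but its "field device" is a disk file. Names are assumptions.

class IODriver:
    """Hypothetical common interface all I/O drivers would implement."""

    def write_points(self, points):
        # points: dict mapping point name -> value
        raise NotImplementedError


class PersistentDataDriver(IODriver):
    def __init__(self, path):
        self.path = path

    def write_points(self, points):
        # Instead of driving real outputs, record the points to disk.
        with open(self.path, "w") as f:
            for name, value in sorted(points.items()):
                f.write(f"{name}={value}\n")
```

The core would hand this driver a set of points exactly as it would hand them to a PLC driver, which is the "minimize interfaces into the core" argument made later in the thread.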

> I realize this is a performance issue, _but_ it is critical to the
> operation of the process, and it is a problem that has been solved
> by the database code writers, they can't lose data either.

Yes, but they don't have the real-time problem. (Well, they do, but it's not as hard as ours. Their real-time problems are measured in seconds or days.)


Jiri
--
Jiri Baum <jiri@baum.com.au>
On the Internet, nobody knows if you are a @{[@{[open(0),<0>]}-1]}-line
perl script...


On Sat Jan 15 00:42:20 2000 Butler, Lawrence wrote...
>
>Perhaps we consider configuring which data is persistent to minimize disk
>writes....

Perhaps, or perhaps we define what is "high priority persistent", where an attempt would be made to keep all the rest up to date, but the "high priority" stuff would have precedence over the other.

However, I am not very happy with this solution, since it adds a whole extra level of things to keep in mind when writing the application programs.

I think we need to think some more about this.

--
Stan Brown stanb@netcom.com 843-745-3154
Westvaco
Charleston SC.


On Fri Jan 14 21:44:14 2000 Jiri Baum wrote...
>
>Sage, Pete (IndSys, GEFanuc, Albany):
>> >Unless you synch the data on every write you will lose data if someone
>> >switches the PC off. Syncing the data on every write will kill your
>> >performance. A reasonable technique is to configure the shared memory
>> >as a memory mapped file, this will give you persistence. Periodically
>> >you can flush it to disk.
>
>Stan Brown:
>> Well, I was thinking of a process whose job it is to scan the data
>> tables, and write any changes it finds to the disk files.
>
>The only data tables that need to be written to disk are the "internal
>coils", aren't they?

No, in a real PLC _all data_ is in battery backed RAM.
>
>The place in the architecture where this fits is among the I/O drivers:
>just another set of points, except instead of interfacing to a PLC it'll
>interface to a disk file.

I don't see it that way. I see a data-to-disk process, whose job is to read the data tables and keep the disk copy up to date. We can optimize this to minimize the number of disk writes, by keeping track of what has changed since the last write, sort of like the in-memory caching of databases.

>You'll probably lose a few seconds worth of data in a crash, but I don't
>think that can really be helped.
>
>The advantage of doing it like this is that if a few seconds' loss is
>unacceptable, you load the battery-backed-RAM driver instead.
>
>> I realize this is a performance issue, _but_ it is critical to the
>> operation of the process, and it is a problem that has been solved
>> by the database code writers, they can't lose data either.
>
>Yes, but they don't have the real-time problem. (Well, they do, but it's
>not as hard as ours. Their real-time problems are measured in seconds or
>days.)


--
Stan Brown stanb@netcom.com 843-745-3154
Westvaco
Charleston SC.


By Butler, Lawrence on 15 January, 2000 - 10:06 pm

> -----Original Message-----
> From: Stan Brown [SMTP:stanb@awod.com]

<snip>
> Perhaps, or perhaps we define what is "high priority persistent", where
> an attempt would be made to keep all the rest up to date, but the
> "high priority" stuff would have precedence over the other.
>
> However, I am not very happy with this solution, since it adds a whole
> extra level of things to keep in mind when writing the application
> programs.
>
> I think we need to think some more about this.
<snip>
Definitely requires much more thought; you don't want to get caught at 3:00 am with problems because you forgot to designate a register as persistent and the program died through a power bump.


> >Sage, Pete (IndSys, GEFanuc, Albany):
> >> >Unless you synch the data on every write you will lose data if someone
> >> >switches the PC off. Syncing the data on every write will kill your
> >> >performance. A reasonable technique is to configure the shared memory
> >> >as a memory mapped file, this will give you persistence. Periodically
> >> >you can flush it to disk.

> >Stan Brown:
> >> Well, I was thinking of a process whose job it is to scan the data
> >> tables, and write any changes it finds to the disk files.

Jiri Baum:
> >The only data tables that need to be written to disk are the "internal
> >coils", aren't they?

Stan Brown wrote:
> No, in a real PLC _all data_ is in battery backed RAM.

I'm not sure whether this is a disagreement or a misunderstanding...

If you mean that the other files of data (16-bit words, floats) are also saved, then that's no problem; the PersistentData driver will simply handle them, too (from its point of view it's all bits - no problem).

If you mean there's data *other* than the files to be saved, can you give an example?

> >The place in the architecture where this fits is among the I/O drivers:
> >just another set of points, except instead of interfacing to a PLC it'll
> >interface to a disk file.

> I don't see it that way. I see a data to disk process. Whose job is to
> read the data tables and keep the disk copy up to date.

How would this differ in functionality from what I've suggested?

(I'd rather minimize the number of interfaces into the core, even if it means that two things presented to the user as completely different sometimes share the same interface. And I don't see any difference between taking a bunch of bits and sending them to a PLC and taking a bunch of bits and sending them to disk.)

> We can optimize this to minimize the number of disk writes, by keeping up
> with what has changed since the last write, sort of like the in memory
> cacheing of databases.

The PLC interface will probably have that info anyway, because real PLC drivers will want to minimize bus traffic.


Jiri
--
Jiri Baum <jiri@baum.com.au>
On the Internet, nobody knows if you are a @{[@{[open(0),<0>]}-1]}-line
perl script...


On Sun Jan 16 05:42:01 2000 Jiri Baum wrote...
>
>> >Sage, Pete (IndSys, GEFanuc, Albany):
>> >> >Unless you synch the data on every write you will lose data if someone
>> >> >switches the PC off. Syncing the data on every write will kill your
>> >> >performance. A reasonable technique is to configure the shared memory
>> >> >as a memory mapped file, this will give you persistence. Periodically
>> >> >you can flush it to disk.
>
>> >Stan Brown:
>> >> Well, I was thinking of a process whose job it is to scan the data
>> >> tables, and write any changes it finds to the disk files.
>
>Jiri Baum:
>> >The only data tables that need to be written to disk are the "internal
>> >coils", aren't they?
>
>Stan Brown wrote:
>> No, in a real PLC _all data_ is in battery backed RAM.
>
>I'm not sure whether this is a disagreement or a misunderstanding...
>
>If you mean that the other files of data (16-bit words, floats) are also
>saved, then that's no problem; the PersistentData driver will simply handle
>them, too (from its point of view it's all bits - no problem).

That's exactly what I mean.

>If you mean there's data *other* than the files to be saved, can you give an example?
>
>> >The place in the architecture where this fits is among the I/O drivers:
>> >just another set of points, except instead of interfacing to a PLC it'll
>> >interface to a disk file.
>
>> I don't see it that way. I see a data to disk process. Whose job is to
>> read the data tables and keep the disk copy up to date.
>
>How would this differ in functionality from what I've suggested?

I am a big believer in a relatively large number of simpler processes that work with each other, rather than assigning multiple tasks to one process. Easier to code, debug, and understand for the application programmers.

>(I'd rather minimize the number of interfaces into the core, even if it
>means that sometimes two things that are presented to the user as
>completely different sometimes share the same interface. And I don't see
>any difference between taking a bunch of bits and sending them to a PLC and
>taking a bunch of bits and sending them to disk.)
>
>> We can optimize this to minimize the number of disk writes, by keeping up
>> with what has changed since the last write, sort of like the in memory
>> cacheing of databases.
>
>The PLC interface will probably have that info anyway, because real PLC
>drivers will want to minimize bus traffic.

Huh, are we talking about the same thing here?

--
Stan Brown stanb@netcom.com 843-745-3154
Westvaco
Charleston SC.


On Sat Jan 15 22:04:50 2000 Butler, Lawrence wrote...

>> I think we need to think some more about this.
> <snip>
> Definitely requires much more thought, don't want to get caught at
>3:00 am with problems because you forgot to designate a register as
>persistent and the program dies through a power bump.

Yep, I was out looking at the process at that time this morning :-(

--
Stan Brown stanb@netcom.com 843-745-3154
Westvaco
Charleston SC.


On Sat, Jan 15, 2000 at 10:52:25AM -0500, Stan Brown wrote:
> On Sat Jan 15 00:42:20 2000 Butler, Lawrence wrote...
> >
> >Perhaps we consider configuring which data is persistent to minimize disk
> >writes....
>
> Perhaps, or perhaps we define what is "high priority persistent", where
> an attempt to keep all the rest up to date, would be made, but the
> "high priority" stuff would have precedence over the other.
>
> However, I am not very happy with this solution, since it adds a whole
> extra level of things to keep in mind when writing the application
> programs.
>
> I think we need to think some more about this.

As a machine, the PLC has a specific state at all times, including I/O states as well as the application program's instruction pointer(s). If the machine state is to be preserved, then sufficient information must be stored to fully define it. To me this would require writing all changeable registers to disk on each scan, or possibly only when they change; the latter would be less of a performance hit than dumping all the registers to disk on every scan.

We'll know that the Linux PLC has succeeded when it can be relied upon to survive nasty power events, like (at least ideal) ordinary PLCs can.

Maybe fast, persistent memory is simply lacking in the current PC design, which really has limited need for it anyway. Perhaps an add-on board with battery-backed RAM or flash could provide this as a service. It would be a mistake to try to compensate in software for limitations that really ought to be addressed in hardware, IMHO.

--
Ken Irving
Trident Software
jkirving@mosquitonet.com



By Mark Hutton on 17 January, 2000 - 4:02 am

That's (one of) the difference(s) between a real PLC and a softPLC. A real PLC is designed for the job, hardware and firmware. A softPLC uses software to force a fit onto a general purpose system (PC and OS); in this case, not only is the PC/OS not designed for instantaneous loss of power, it is generally considered to be a no-no.

You not only have to consider the state of the data table in such a circumstance but whether, or how well, Linux will reboot in these circumstances.

(In the windows world, software has come to degrade over time because of registry corruption caused by such power downs.)

It may be that persistence is not required; certainly the state of the I/O should be determined prior to the start of logic (this raises another point: should the logic engine be able to run if it cannot access its assigned I/O?). A well designed application will check the state of the machine in its initialisation (to prevent unexpected moves).

Our responsibility here is to ensure that the power down/power up cycle does not introduce any inherent hazards.

-----Original Message-----
From: linuxplc-admin@linuxplc.org [mailto:linuxplc-admin@linuxplc.org]On
Behalf Of Stan Brown

On Fri Jan 14 19:03:12 2000 Locke, Alan S wrote...
>
>>On Fri Jan 14 15:50:39 2000 "Sage, Pete (IndSys, GEFanuc, Albany)" wrote...
>>>>Unless you synch the data on every write you will lose data if someone
>>>switches the PC off. Syncing the data on every write will kill your
>>>performance. A reasonable technique is to configure the shared memory as a
>>>memory mapped file, this will give you persistence. Periodically you can
>>>flush it to disk.
>>Well, I was thinking of a process whose job it is to scan the data tables,
>>and write any changes it finds to the disk files. I realize this is a
>>performance issue, _but_ it is critical to the operation of the process,
>>and it is a problem that has been solved by the database code writers,
>>they can't lose data either. You would hate to have your savings deposit
>>deducted from your checking account, but never credited to your savings
>>account because of a computer crash, now wouldn't you :-)
>
>My understanding is that software PLC vendors have addressed this issue by
>using a battery-backed flash drive (RAM) and that they write the data
>tables to this drive every scan. As a machine integrator type, I would also
>expect to install a UPS with a software PLC installation and to do the
>power loss wiring to the PLC for orderly shutdown.

Good point. However, I wish we could come up with a better solution. Flash RAM is expensive, and I most certainly don't put all of my PLCs on UPSes.
>
>IMHO the data tables must be saved every scan. The end user could
>configure the PLC to save only a portion of the data table, depending on
>the application, but not saving them every scan could really mess up a
>machine once repowered.

Yep.



> >Jiri Baum:
> >> >The only data tables that need to be written to disk are the
> >> >"internal coils", aren't they?

> >Stan Brown wrote:
> >> No, in a real PLC _all data_ is in battery backed RAM.

Jiri Baum:
> >If you mean that the other files of data (16-bit words, floats) are also
> >saved, then that's no problem; the PersistentData driver will simply
> >handle them, too (from its point of view it's all bits - no problem).

Stan Brown:
> Thats exactly what I mean.

OK. Sorry about that - my fault, really.

Jiri Baum:
> >> >The place in the architecture where this fits is among the I/O
> >> >drivers: just another set of points, except instead of interfacing to
> >> >a PLC it'll interface to a disk file.

Stan Brown:
> >> I don't see it that way. I see a data to disk process. Whose job is to
> >> read the data tables and keep the disk copy up to date.

Jiri Baum:
> >How would this differ in functionality from what I've suggested?

Stan Brown:
> I am a big believer in a relatively large number of simpler
> processes that work with each other, rather than assigning multiple
> tasks to one process. Easier to code, debug, and understand for the
> application programmers.

So am I...

I assumed that each I/O driver would be a separate process (so that you can mix and match different brands of I/O, different busses, etc).

Then the PersistentData process can simply pretend to be another I/O driver. You get all the goodies available at the I/O driver interface
without having to re-invent them all.

Stan Brown:
> >> We can optimize this to minimize the number of disk writes, by keeping up
> >> with what has changed since the last write, sort of like the in memory
> >> cacheing of databases.

05:42:01 Jiri Baum:
> >The PLC interface will probably have that info anyway, because real PLC
> >drivers will want to minimize bus traffic.

Stan Brown:
> Huh, are we talking about the same thing here?

No, not when I'm up till six in the morning :-)

I meant the I/O drivers.

The I/O driver interface will probably have that info anyway, because real
I/O drivers will want to minimize bus traffic.

Sound better?


(Sometimes the I/O devices will be PLCs, I think that's how I got confused. Either PLCs that have been demoted to dumb I/O, or the PLCs that actually control the machine, with the linux box doing HMI.)


Jiri
--
Jiri Baum <jiri@baum.com.au>
On the Internet, nobody knows if you are a @{[@{[open(0),<0>]}-1]}-line
perl script...


On Mon Jan 17 03:56:37 2000 Mark Hutton wrote...
>
>That's (one of) the difference(s) between a real PLC and a softPLC. A real
>PLC is designed for the job, hardware and firmware. A softPLC uses software
>to force a fit onto a general purpose system (PC and OS), in this case not
>only is the PC/OS not designed for instantaneous loss of power, it is
>generally considered to be a no-no.
>
>You not only have to consider the state of the data table in such a
>circumstance but whether or how well Linux will reboot in these
>circumstances.

True, but with journaling filesystems coming online in Linux, this should become a non-issue.
>
>(in the windows world software has come to degrade over time because of
>registry corruption caused by such power downs).

So do you want to go down that road :-)
>
>It may be that persistence is not required, certainly the state of the I/O
>should be determined prior to the start of logic (this raises another point,
>should the logic engine be able to run if it cannot access its assigned
>I/O?). A well designed application will check the state of the machine in
>its initialization (to prevent unexpected moves).

A good point. The I/O scanners need to be able to do an input-only scan, and then wait for the logic engine(s) to finish prescan and first scan.

I had not thought of this :-(

Means we need a way of communicating this between tasks. Uh-oh, I feel the sharedmemorymanager() coming on :-)
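The startup sequencing Stan describes (input-only scan first, outputs held until the logic engine has finished prescan and first scan) could be sketched with a simple synchronization flag (Python for illustration; all function names here are hypothetical):

```python
# Sketch of startup sequencing between the I/O scanner and the logic
# engine: the scanner captures real inputs, then blocks until prescan
# and first scan are done before driving any outputs. Illustrative only.
import threading

prescan_done = threading.Event()


def io_scanner(read_inputs, write_outputs, get_outputs):
    read_inputs()                 # input-only scan: capture machine state
    prescan_done.wait()           # hold outputs until logic is ready
    write_outputs(get_outputs())  # only now drive the outputs


def logic_engine(prescan, first_scan):
    prescan()                     # evaluate logic without driving outputs
    first_scan()
    prescan_done.set()            # release the I/O scanner
```

In a multi-process design the Event would of course become something in shared memory rather than a thread primitive; the ordering guarantee is the point.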

>Our responsibility here is to ensure that power down/power up cycle does not
>introduce any inherent hazards.

Absolutely!

--
Stan Brown stanb@netcom.com 843-745-3154
Westvaco
Charleston SC.


On Fri, 14 Jan 2000, Stan Brown wrote:

> On Fri Jan 14 19:03:12 2000 Locke, Alan S wrote...
> >
> >>On Fri Jan 14 15:50:39 2000 "Sage, Pete (IndSys, GEFanuc, Albany)" wrote...
> >>>>Unless you synch the data on every write you will lose data if someone
> >>>switches the PC off. Syncing the data on every write will kill your
> >>>performance. A reasonable technique is to configure the shared memory as a
> >>>memory mapped file, this will give you persistence. Periodically you can
> >>>flush it to disk.
> >>Well, I was thinking of a process whose job it is to scan the data tables, and
> >>write any changes it finds to the disk files. I realize this is a performance issue,
> >>_but_ it is critical to the operation of the process, and it is a problem that has been
> >>solved by the database code writers, they can't lose data either. You would hate to
> >>have your savings deposit deducted from your checking account, but never credited
> >>to your savings account because of a computer crash, now wouldn't you :-)
> >
> >My understanding is that software PLC vendors have addressed this issue by using a battery backed up flash drive (ram) and that they write the data tables to this drive every scan. As a machine integrator type, I would also expect to need to install a UPS with a software PLC installation and to also do the power loss wiring to the PLC for orderly shutdown.
>
> Good point. However I wish we could come up with a better solution.
> Flash RAM is expensive, and I most certainly don't put all of my PLC's on UPS'es <
> >
> >IMHO the data tables must be saved every scan. The end user could configure the PLC to save only a portion of the data table, depending on the application, but not saving them every scan could really mess up a machine once repowered.
>
> Yep.

Surely any programmer that relies on the saved state of the I/O after a non-orderly shutdown is asking for trouble. What happens to the machine if the power is cut and the operators have to manually do something to the machine to extract the product? If the machine is not put back in the exact same state before power is restored, then the machine could at least be damaged, or worse, injure somebody.

Surely the correct programming technique is to re-initialise the machine from real inputs, ignoring any saved I/O (because you don't know if it is valid), and drive the machine to a safe startup state. For example, I would never code for battery backed inputs, outputs, timers or counters. The only items that should be battery backed are set points, control limits and alarm limits (you may or may not battery back alarm states depending on the kind of automation required).

A good example is a printing machine. It should not assume that the piece of paper it was printing is still there to print on, or even in the same position, after a power interruption. If it cannot determine this from real live inputs then it should eject that piece of paper and re-start on a new sheet.

As for the methods suggested so far, none are of any use. You do not know when the power is going to fail. If the data is copied to a disk or flash or whatever, then your power cut may occur during the write and your data is then corrupt. The only way to implement this is to add a UPS that will signal a power failure and ensure sufficient time to save the state to permanent storage and then perform an orderly shutdown.

We all know why systems check the disks on boot up after a non-orderly shutdown. You do not know if your saved state survived the non-orderly shutdown, and the file system may be corrupt, or fixed by fsck or whatever, and is therefore invalid anyway.
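For what it's worth, the mid-write corruption Dave describes is conventionally avoided with a write-temp-then-rename pattern: a crash then leaves either the complete old snapshot or the complete new one on disk, never a torn mix (Python sketch, illustrative only):

```python
# Sketch of an atomic snapshot save: write to a temporary file, force it
# to stable storage with fsync, then atomically rename it over the old
# copy. The path and function name are illustrative, not project code.
import os


def save_snapshot(path, data: bytes):
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())  # force the bytes to the platter
    os.replace(tmp, path)     # rename is atomic on POSIX filesystems
```

This doesn't remove the need for a UPS if you must capture the very last scan, but it does guarantee the saved state you find after a power bump is internally consistent.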

Dave West E-Mail: davew@hoagy.demon.co.uk
Semiras Projects Ltd. PGP public key available on request.



On Mon Jan 17 07:39:40 2000 Dave West wrote...
>
>Surely any programmer that relies on the saved state of the I/O after a
>non orderly shutdown is asking for trouble.
>What happens to the machine if the power is cut and the operators have to
>manually do something to the machine to extract the product. If the
>machine is not put back in the exact same state before power is restored
>then the machine could at least be damaged or worse injure somebody.
>Surely the correct programming technique is to re-initialise the machine
>from real inputs ignoring any saved I/O (because you don't know if it is
>valid) and drive the machine to a safe startup state. For example I would
>never code for battery backed inputs, outputs, timers or counters. The
>only items that should be battery backed are set points, control limits
>and alarm limits (you may or may not battery back alarm states depending
>on the kind of automation required).
>A good example is a printing machine. It should not assume that the piece
>of paper it was printing is still there to print on or even in the same
>position after a power interruption. If it can not determine this from
>real live inputs then it should eject that piece of paper and re-start on
>a new sheet.

The issue, once again, is _not_ the I/O states. It is the non-I/O data table.

Recipes, amounts of material loaded into vessels, all sorts of required things are stored there.

>As for any of the methods suggested so far none are of any use. You do not
>know when the power is going to fail. If the data is copied to a disk or
>flash or whatever then your power cut may occur during the write and your
>data is then corrupt. The only way to implement this is to add a UPS that
>will signal a power failure and ensure sufficient time to save the state
>to permanent storage and then perform an orderly storage.

Journaling filesystems go a long way toward addressing this.
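For the data-table file itself, the usual userland complement to a journaling filesystem is write-to-temp, fsync, then an atomic rename, so a power cut leaves either the complete old copy or the complete new one, never a torn file. A minimal sketch in Python (file names are hypothetical; this is an illustration, not LinuxPLC code):

```python
import os
import tempfile

def atomic_save(path: str, data: bytes) -> None:
    # Write to a temporary file in the same directory, flush it to stable
    # storage, then atomically rename it over the old copy.  A power cut
    # during the write leaves the old file untouched; a cut after the
    # rename leaves the new file complete.
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=dirname)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())   # push the data to the platter
        os.replace(tmp, path)       # atomic on POSIX filesystems
    finally:
        if os.path.exists(tmp):     # clean up only if the rename never ran
            os.remove(tmp)
    # sync the directory entry so the rename itself is durable
    dfd = os.open(dirname, os.O_RDONLY)
    try:
        os.fsync(dfd)
    finally:
        os.close(dfd)
```

The cost is one extra rename and two fsyncs per save, which is why it suits periodic saves rather than per-scan ones.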

>We all know why systems check the disks on boot up after an un ordered
>shutdown. You do not know if your saved state survived the un ordered
>shutdown and the file system may be corrupt or fixed by fsck or whatever
>and is therefore invalid anyway.

--
Stan Brown stanb@netcom.com 843-745-3154
Westvaco
Charleston SC.


On Sun Jan 16 23:26:17 2000 Jiri Baum wrote...
>
>> >Jiri Baum:
>> >> >The only data tables that need to be written to disk are the
>> >> >"internal coils", aren't they?
>
>> >Stan Brown wrote:
>> >> No, in a real PLC _all data_ is in battery backed RAM.
>
>Jiri Baum:
>> >If you mean that the other files of data (16-bit words, floats) are also
>> >saved, then that's no problem; the PersistentData driver will simply
>> >handle them, too (from its point of view it's all bits - no problem).
>
>Stan Brown:
>> Thats exactly what I mean.
>
>OK. Sorry about that - my fault, really.

That's OK, we are coming together here; this is good.
>
>Jiri Baum:
>> >> >The place in the architecture where this fits is among the I/O
>> >> >drivers: just another set of points, except instead of interfacing to
>> >> >a PLC it'll interface to a disk file.
>
>Stan Brown:
>> >> I don't see it that way. I see a data to disk process. Whose job is to
>> >> read the data tables and keep the disk copy up to date.
>
>Jiri Baum:
>> >How would this differ in functionality from what I've suggested?
>
>Stan Brown:
>> I am a big believer in a relatively large number of simpler
>> processes that work with each other, rather than assigning multiple
>> tasks to one process. Easier to code, debug, and understand for the
>> application programmers.
>
>So am I...

Great!
>
>I assumed that each I/O driver would be a separate process (so that you can
>mix and match different brands of I/O, different busses, etc).

I am on the same wavelength with you here.
>
>Then the PersistentData process can simply pretend to be another I/O
>driver. You get all the goodies available at the I/O driver interface
>without having to re-invent them all.

Well, it's not really dealing with I/O. It's dealing with the more generic "all of the data table", including the I/O data table and the non-I/O data table.

I still don't feel like I have gotten the distinction between the two types of data table across. Am I wrong?
>
>Stan Brown:
>> >> We can optimize this to minimize the number of disk writes, by keeping up
>> >> with what has changed since the last write, sort of like the in memory
>> >> cacheing of databases.
>
>05:42:01 Jiri Baum:
>> >The PLC interface will probably have that info anyway, because real PLC
>> >drivers will want to minimize bus traffic.
>
>Stan Brown:
>> Huh, are we talking about the same thing here?
>
>No, not when I'm up till six in the morning :-)
>
>I meant the I/O drivers.
>
>The I/O driver interface will probably have that info anyway, because real
>I/O drivers will want to minimize bus traffic.
>
>Sound better?

Yes.
>
>
>(Sometimes the I/O devices will be PLCs, I think that's how I got confused.
>Either PLCs that have been demoted to dumb I/O, or the PLCs that actually
>control the machine, with the linux box doing HMI.)

PLCs are not real I/O.

Real I/O is a piece of hardware, with wires on it; anything else is data, even if that data came from real I/O in another processor.

--
Stan Brown stanb@netcom.com 843-745-3154
Westvaco
Charleston SC.


Jiri Baum:
> >Then the PersistentData process can simply pretend to be another I/O
> >driver. You get all the goodies available at the I/O driver interface
> >without having to re-invent them all.

Stan Brown:
> Well, it's not really dealing with I/O. It's dealing with the more
> generic "all of the data table", including the I/O data table and the
> non-I/O data table.

> I still don't feel like I have gotten the distinction between the two
> types of data table across. Am I wrong?

No, I understand the difference. I was just thinking that having the PersistentData driver *pretend* to be an I/O driver would save us having to invent a separate interface for it.

Since then I've changed my mind anyway, so it no longer matters.

[on a different topic]
> >(Sometimes the I/O devices will be PLCs, I think that's how I got
> >confused. Either PLCs that have been demoted to dumb I/O, or the PLCs
> >that actually control the machine, with the linux box doing HMI.)

> PLCs are not real I/O.

> Real I/O is a piece of hardware, with wires on it; anything else is
> data, even if that data came from real I/O in another processor.

Well, if it has a serial cable on one side and wires out the other side, and doesn't do any processing, it's as good as real I/O.

The other thing is that among the I/O drivers there can be drivers reading PLCs that *are* doing processing. Those won't be real real I/O, of course - some points will be almost-real I/O (those that read the values on the wires going in and out of the PLC), while others will be definitely unreal I/O (internal coils of the PLC).

But I've been thinking too much of the SMM lately, where bits is bits, regardless of where they're from, where they're going or what they mean.


Jiri
--
Jiri Baum <jiri@baum.com.au>
On the Internet, nobody knows if you are a @{[@{[open(0),<0>]}-1]}-line
perl script...


Dave West wrote:
>Surely any programmer that relies on the saved state of the I/O after a
>non orderly shutdown is asking for trouble.
>What happens to the machine if the power is cut and the operators have to
>manually do something to the machine to extract the product. If the
>machine is not put back in the exact same state before power is restored
>then the machine could at least be damaged or worse injure somebody.
>Surely the correct programming technique is to re-initialise the machine
>from real inputs ignoring any saved I/O (because you don't know if it is
>valid) and drive the machine to a safe startup state. For example I would
>never code for battery backed inputs, outputs, timers or counters. The
>only items that should be battery backed are set points, control limits
>and alarm limits (you may or may not battery back alarm states depending
>on the kind of automation required).
>A good example is a printing machine. It should not assume that the piece
>of paper it was printing is still there to print on or even in the same
>position after a power interruption. If it can not determine this from
>real live inputs then it should eject that piece of paper and re-start on
>a new sheet.

There are many applications that require the machine state to be saved in order to recover reasonably after a power bump: for instance, machines that don't have enough sensors to determine the state from inputs alone (a common issue with material handling systems), or machines that have a degree of autonomy and need to be able to recover without operator intervention. Even if an operator is available to assist in the power-loss recovery, it's nice to have the machine HMI prompt the operator through a recovery process based at least partly on prior state. A common solution for machines that may be changed (possibly by maintenance personnel) without the PLC's knowledge is a machine reset process.

This is definitely one of the difficult areas in control engineering, being so highly tied to the machine process and complex fault trees.

Alan Locke
Control Engineer, Boeing

"My opinions are my own and not necessarily those of my employer"


By Johan Bengtsson on 18 January, 2000 - 12:59 pm

>> >Unless you synch the data on every write you will lose data if someone
>> >switches the PC off. Syncing the data on every write will kill your
>> >performance. A reasonable technique is to configure the shared memory
>> >as a memory mapped file, this will give you persistence. Periodically
>> >you can flush it to disk.
>
>Stan Brown:
>> Well, I was thinking of a process whose job it is to scan the data
>> tables, and write any changes it finds to the disk files.
>

If it is possible to configure some special (small) area to save, and save that as often as possible, I think that is enough for most applications; if not, buy a UPS or a battery-backed RAM card...

If the data is saved to a different place on each save, in some kind of round-robin scheme, together with a version number with enough bits to identify the newest version even when a wrap occurs, and some CRC scheme to identify the versions that were really fully written, then the newest intact copy can always be recovered. The data may not be saved on literally every scan this way, but if the recovered data is consistent for a particular scan not too far from the power failure, it should be enough in most cases.

Writing one sector (512 bytes), minus say 12 bytes for recovery information such as a CRC and a version stamp, still gives you about 250 16-bit values or 4000 digital values (I don't intend this as a limit, just an example for the calculation). This should be quite fast and probably covers most applications that need to store anything!

Can someone fill in the expected maximum time to write this amount to a hard drive under Linux?
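The round-robin scheme above might look like the following sketch (Python, with invented slot counts and sizes; a real implementation would write each record to its own disk sector, which this in-memory illustration glosses over). A slot torn by a power cut fails its CRC and is simply skipped at recovery:

```python
import struct
import zlib

SLOTS = 8                       # rotating save slots
PAYLOAD = 500                   # bytes of data table per record
HEADER = struct.Struct("<II")   # version number, CRC32 of payload
RECORD = HEADER.size + PAYLOAD  # 508 bytes, fits one 512-byte sector

def save(image: bytearray, version: int, payload: bytes) -> None:
    """Write one record into the slot chosen round-robin by version."""
    assert len(payload) == PAYLOAD
    rec = HEADER.pack(version, zlib.crc32(payload)) + payload
    off = (version % SLOTS) * RECORD
    image[off:off + RECORD] = rec

def recover(image: bytes):
    """Return (version, payload) of the newest fully written record,
    or None if no slot has a valid CRC."""
    best = None
    for slot in range(SLOTS):
        off = slot * RECORD
        version, crc = HEADER.unpack_from(image, off)
        payload = bytes(image[off + HEADER.size:off + RECORD])
        if zlib.crc32(payload) != crc:
            continue            # slot was torn by the power failure
        if best is None or version > best[0]:
            best = (version, payload)
    return best
```

A 32-bit version number wraps only after 2^32 saves; handling the wrap would need a modular comparison, which is omitted here for brevity.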

<clip>


----------------------------------------
P&L, the Academy of Automation
Box 252, S-281 23 Hässleholm SWEDEN
Tel: +46 451 49 460, Fax: +46 451 89 833
E-mail: johan.bengtsson@pol.se
Internet: http://www.pol.se/
----------------------------------------



By Johan Bengtsson on 18 January, 2000 - 12:59 pm

>> >The only data tables that need to be written to disk are the "internal
>> >coils", aren't they?
>> No, in a real PLC _all data_ is in battery backed RAM.
>
>I'm not sure whether this is a disagreement or a misunderstanding...
>

I think this depends on the type of PLC. In some PLCs I have seen (mostly Mitsubishi) there were special memory areas that were backed up and others that were not. I think the same is true for timers and counters. I do not, however, remember exactly where I saw it.



----------------------------------------
P&L, the Academy of Automation
Box 252, S-281 23 Hässleholm SWEDEN
Tel: +46 451 49 460, Fax: +46 451 89 833
E-mail: johan.bengtsson@pol.se
Internet: http://www.pol.se/
----------------------------------------



By Mark Hutton on 19 January, 2000 - 3:04 am

In Siemens S5, only the first 64 (?) flag bytes are retentive (retain their value on power down).

-----Original Message-----
From: linuxplc-admin@linuxplc.org [mailto:linuxplc-admin@linuxplc.org]On
Behalf Of johan.bengtsson@pol.se

>> >The only data tables that need to be written to disk are the "internal
>> >coils", aren't they?
>> No, in a real PLC _all data_ is in battery backed RAM.
>
>I'm not sure whether this is a disagreement or a misunderstanding...
>

I think this depends on the type of PLC. In some PLCs I have seen (mostly Mitsubishi) there were special memory areas that were backed up and others that were not. I think the same is true for timers and counters. I do not, however, remember exactly where I saw it.


On Mon, 17 Jan 2000, Stan Brown wrote:

> On Mon Jan 17 07:39:40 2000 Dave West wrote...
> >
> >Surely any programmer that relies on the saved state of the I/O after a
> >non orderly shutdown is asking for trouble.
> >What happens to the machine if the power is cut and the operators have to
> >manually do something to the machine to extract the product. If the
> >machine is not put back in the exact same state before power is restored
> >then the machine could at least be damaged or worse injure somebody.
> >Surely the correct programming technique is to re-initialise the machine
> >from real inputs ignoring any saved I/O (because you don't know if it is
> >valid) and drive the machine to a safe startup state. For example I would
> >never code for battery backed inputs, outputs, timers or counters. The
> >only items that should be battery backed are set points, control limits
> >and alarm limits (you may or may not battery back alarm states depending
> >on the kind of automation required).
> >A good example is a printing machine. It should not assume that the piece
> >of paper it was printing is still there to print on or even in the same
> >position after a power interruption. If it can not determine this from
> >real live inputs then it should eject that piece of paper and re-start on
> >a new sheet.
>
> The issue, once again, is _not_ the I/O states. It is the non-I/O data table.
>
> Recipes, amounts of material loaded into vessels, all sorts of required things are stored there.

Surely a well-controlled machine can measure material loaded into a vessel from a live input, as such things can change over a power cut. Recipes I agree with; they need to be saved, but do they really change so often that they need to be saved every scan? And if so, surely they are no longer recipes. It may be that I do not understand your interpretation of a recipe. I see it as something similar to a cook book, with pages of recipes for hotpot, stew, soup, Chicken Kiev, etc. Once set, it does not change. Anything that a recipe controls that changes depending on how far you are through the recipe is not part of the recipe, but a variable that is measured against the recipe, and should therefore be discernible from live I/O. Hmm, I just thought about entries in a recipe that are measured by time. This makes it difficult, in that the elapsed time would need to be saved, and this could be very frequent.

> >As for any of the methods suggested so far none are of any use. You do not
> >know when the power is going to fail. If the data is copied to a disk or
> >flash or whatever then your power cut may occur during the write and your
> >data is then corrupt. The only way to implement this is to add a UPS that
> >will signal a power failure and ensure sufficient time to save the state
> >to permanent storage and then perform an orderly storage.
>
> Journaling filesystems go a long way toward addressing this.

Yes but they do not reliably solve the problem 100% of the time.

> >We all know why systems check the disks on boot up after an un ordered
> >shutdown. You do not know if your saved state survived the un ordered
> >shutdown and the file system may be corrupt or fixed by fsck or whatever
> >and is therefore invalid anyway.

This is the main point of my argument.
On boot, the system state cannot be reliably determined from any saved data unless the system shut down in an orderly fashion. To achieve this you need a UPS that signals power loss and allows you to shut down properly.
Some people have said a UPS is unacceptable because of the situation where the UPS fails. I think this situation is the same as the one where a PLC backup battery fails, and therefore all bets are off.


Dave West E-Mail: davew@hoagy.demon.co.uk
Semiras Projects Ltd. PGP public key available on request.



On Mon, 17 Jan 2000, Alan Locke wrote:

> Dave West wrote:
> >Surely any programmer that relies on the saved state of the I/O after a
> >non orderly shutdown is asking for trouble.
> >What happens to the machine if the power is cut and the operators have to
> >manually do something to the machine to extract the product. If the
> >machine is not put back in the exact same state before power is restored
> >then the machine could at least be damaged or worse injure somebody.
> >Surely the correct programming technique is to re-initialise the machine
> >from real inputs ignoring any saved I/O (because you don't know if it is
> >valid) and drive the machine to a safe startup state. For example I would
> >never code for battery backed inputs, outputs, timers or counters. The
> >only items that should be battery backed are set points, control limits
> >and alarm limits (you may or may not battery back alarm states depending
> >on the kind of automation required).
> >A good example is a printing machine. It should not assume that the piece
> >of paper it was printing is still there to print on or even in the same
> >position after a power interruption. If it can not determine this from
> >real live inputs then it should eject that piece of paper and re-start on
> >a new sheet.
>
> There are many applications that require the machine state to be saved to be
> able to reasonably recover after a power bump. For instance machines that
> don't have enough sensors to determine the state based on inputs alone
> (a common issue with material handling systems), or machines that
> have a degree of autonomy and need to be able to recover without operator
> intervention. Even if the operator is available to assist in the power
> loss recovery, it's nice to have the machine HMI prompt the operator
> through a recovery process based at least partly on prior state. A common
> solution to machines that may be changed (by maintenance personnel possibly)
> without the PLCs knowledge is a machine reset process.
>
> This is definitely one of the difficult areas in control engineering, being so
> highly tied to the machine process and complex fault trees.

I'm not disagreeing with the requirement to save some state on power loss. What I have been trying to say is that any attempt to save the state of all data on every scan will make our project so slow that it will be useless. The correct way to save state is to detect the event that requires a state save (power loss) and deal with it (UPS and orderly shutdown, saving machine state prior to power off).

Dave West E-Mail: davew@hoagy.demon.co.uk
Semiras Projects Ltd. PGP public key available on request.



On Wed, 19 Jan 2000, Mark Hutton wrote:

> In Siemens S5, only the first 64 (?) flag bytes are retentive (retain their
> value on power down).
>
> -----Original Message-----
> From: linuxplc-admin@linuxplc.org [mailto:linuxplc-admin@linuxplc.org]On
> Behalf Of johan.bengtsson@pol.se

> >> >The only data tables that need to be written to disk are the "internal
> >> >coils", aren't they?
> >> No, in a real PLC _all data_ is in battery backed RAM.
> >
> >I'm not sure whether this is a disagreement or a misunderstanding...
> >
>
> I think this depends on the type of PLC.
> In some PLCs I have seen (mostly Mitsubishi) there were
> special memory areas that were backed up and others not backed up.
> I think the same is true for timers and counters.
> I do not, however, remember exactly where I saw it.

Using MEDOC you set which data values are retentive in blocks, via the parameters option.
You can set data registers, timers, counters, relays, etc. There is only one block per type and it must be contiguous. Note that by default none are retentive.


Dave West E-Mail: davew@hoagy.demon.co.uk
Semiras Projects Ltd. PGP public key available on request.



All the points you make are very true...
> A good example is a printing machine. It should not assume that the piece
> of paper it was printing is still there to print on or even in the same
> position after a power interruption. If it can not determine this from
> real live inputs then it should eject that piece of paper and re-start on
> a new sheet.

However, the value of the article being produced has an influence on the machine program. When a fridge or a car body (or several car bodies within a single machine) is partly completed, a lot of value adding has been invested in it already. Some machines do enforce a clean/empty start, but they are not as well thought of as other machines which try to avoid scrap. It is often not permissible to redo the whole cycle again, starting from scratch, on the half-finished parts/car-bodies. For example, the tooling people warn that for some of the punch tooling/dies it would cause great damage to the tooling if it punches into a pre-existing hole. Imagine that it is impossible to sense/monitor/measure, let alone see, some of the holes until the part is shifted clear of the tooling. So with no direct sensing, a possible second-best/fall-back methodology is to memorise what has and what has not yet been punched/done. The judicious use of persistence sometimes seems not too big a risk to take, especially since there will be a careful operator available to check the machine stages before enabling the machine, stage by stage. But in a better world, what you say has merit.



By Mark Hutton on 19 January, 2000 - 11:12 am

<clip>
However,
the value of the article being produced has an influence on the machine
program. When a fridge, or car body (or several car bodies within a single
machine) is partly completed, a lot of value adding has been invested in
them already. Some machine do enforce a clean/empty start, but they are
not as well thought of as other machines which try to avoid scrap. It is
often not permissible to redo the whole cycle again, starting from scratch,
upon the half finished parts/car-bodies. For an example, the tooling
people utter a warning that for some of the punch-tooling/dies it would
cause great damage to the tooling if it punches into an pre-existing hole.
Imagine that it is impossible to sense/monitor/measure let alone see some
of the holes until shifted clear of the tooling. So with no direct
sensing, a possible second-best/fall-back methodology is to memorise what
has and what has not yet been punched/done. The judicious use of
persistence seems sometimes not to be too big a risk to take, especially
since there will be a careful operator available to check the machine
stages before enabling the machine, stage by stage. But in a better world,
what you say has merit.
</clip>

I go back to my earlier point. NO ASSUMPTIONS should be made on power up. What is the point of persistence, in this context, if the real world is not persistent?

While the line is powered down there is no way to guarantee that the remembered process state remains accurate. While the machine is powered down, product may have been moved, in any direction, by any amount.

Process start-up states that cannot be determined by the machine's initial start-up sequences should not automatically restart. If you are worried about reject cost, install manual sequencing to bring the process back to the known state; otherwise reject it.



By Johan Bengtsson on 19 January, 2000 - 5:59 pm

>> >Unless you synch the data on every write you will lose data if someone
>> >switches the PC off. Syncing the data on every write will kill your
>> >performance. A reasonable technique is to configure the shared memory
>> >as a memory mapped file, this will give you persistence. Periodically
>> >you can flush it to disk.
>
>Stan Brown:
>> Well, I was thinking of a process whose job it is to scan the data
>> tables, and write any changes it finds to the disk files.
>

If it is possible to configure some special (small) area to save, and save that as often as possible, I think that is enough for most applications; if not, buy a UPS or a battery-backed RAM card...

If the data is saved to a different place on each save, in some kind of round-robin scheme, together with a version number with enough bits to identify the newest version even when a wrap occurs, and some CRC scheme to identify the versions that were really fully written, then the newest intact copy can always be recovered. The data may not be saved on literally every scan this way, but if the recovered data is consistent for a particular scan not too far from the power failure, it should be enough in most cases.

Writing one sector (512 bytes), minus say 12 bytes for recovery information such as a CRC and a version stamp, still gives you about 250 16-bit values or 4000 digital values (I don't intend this as a limit, just an example for the calculation). This should be quite fast and probably covers most applications that need to store anything!

Can someone fill in the expected maximum time to write this amount to a hard drive under Linux?

>The only data tables that need to be written to disk are the "internal
>coils", aren't they?
>
>The place in the architecture where this fits is among the I/O drivers:
>just another set of points, except instead of interfacing to a PLC it'll
>interface to a disk file.
>
>You'll probably lose a few seconds worth of data in a crash, but I don't
>think that can really be helped.
>
>The advantage of doing it like this is that if a few seconds' loss is
>unacceptable, you load the battery-backed-RAM driver instead.
>
>> I realize this is a performance issue, _but_ it is critical to the
>> operation of the process, and it is a problem that has been solved
>> by the database code writers, they can't lose data either.
>
>Yes, but they don't have the real-time problem. (Well, they do, but it's
>not as hard as ours. Their real-time problems are measured in seconds or
>days.)


----------------------------------------
P&L, the Academy of Automation
Box 252, S-281 23 Hässleholm SWEDEN
Tel: +46 451 49 460, Fax: +46 451 89 833
E-mail: johan.bengtsson@pol.se
Internet: http://www.pol.se/
----------------------------------------

Dave West:
> Hmm, I just thought about entries in a recipe that are measured by time.
> This makes it difficult in that the elapsed time would need to be saved
> and this could be very frequent.

No, you just save the wall-time (either for timer-start or timer-end, probably for whichever is represented as zero).

This must be in GMT/UTC (one of them, anyway), but that's default in Linux.


(Actually, it isn't - for recipe timers you'd want the UTC without the leap seconds - but it's close enough as will make no difference.)
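Jiri's point is that the only thing persisted is the start wall-time, written once when the timer starts, not the elapsed time on every scan. A rough sketch in Python (class and file names invented for illustration):

```python
import json
import time

class RecipeTimer:
    """Recipe timer that survives a restart by persisting only its start
    wall-time (seconds since the epoch, i.e. UTC) when it is started."""

    def __init__(self, preset_s: float, state_file: str):
        self.preset_s = preset_s
        self.state_file = state_file
        self.start_time = None        # not yet started

    def start(self, now: float = None) -> None:
        # One disk write per timer start - not one per scan.
        self.start_time = time.time() if now is None else now
        with open(self.state_file, "w") as f:
            json.dump({"start": self.start_time}, f)

    def restore(self) -> None:
        # After a power cut, reload the saved start time.
        with open(self.state_file) as f:
            self.start_time = json.load(f)["start"]

    def done(self, now: float = None) -> bool:
        if self.start_time is None:
            return False
        t = time.time() if now is None else now
        return t - self.start_time >= self.preset_s
```

The `now` parameter exists only so the logic can be exercised with a simulated clock; on a live system the wall clock is used.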


> Some people have said a UPS is unacceptable because of the situation
> where the UPS fails. I think this situation is the same as the one where
> a PLC backup battery fails and therefore all bets are off.

Good point.


Jiri
--
Jiri Baum <jiri@baum.com.au>
On the Internet, nobody knows if you are a @{[@{[open(0),<0>]}-1]}-line
perl script...


On Fri, 21 Jan 2000, Jiri Baum wrote:

> Dave West:
> > Hmm, I just thought about entries in a recipe that are measured by time.
> > This makes it difficult in that the elapsed time would need to be saved
> > and this could be very frequent.
>
> No, you just save the wall-time (either for timer-start or timer-end,
> probably for whichever is represented as zero).
>
> This must be in GMT/UTC (one of them, anyway), but that's default in Linux.
>
> (Actually, it isn't - for recipe timers you'd want the UTC without the leap
> seconds - but it's close enough as will make no difference.)

Ermm, I think you missed my point slightly. First, can you explain wall-time? Second, how does a timer that counts a number of seconds know about GMT/UTC if it is stored in 16 bits? More appropriately, what does it care?

Dave West E-Mail: davew@hoagy.demon.co.uk
Semiras Projects Ltd. PGP public key available on request.



On Fri Jan 21 09:22:53 2000 Dave West wrote...
>
>On Fri, 21 Jan 2000, Jiri Baum wrote:
>
>> Dave West:
>> > Hmm, I just thought about entries in a recipe that are measured by time.
>> > This makes it difficult in that the elapsed time would need to be saved
>> > and this could be very frequent.
>>
>> No, you just save the wall-time (either for timer-start or timer-end,
>> probably for whichever is represented as zero).
>>
>> This must be in GMT/UTC (one of them, anyway), but that's default in Linux.
>>
>>
>> (Actually, it isn't - for recipe timers you'd want the UTC without the leap
>> seconds - but it's close enough as will make no difference.)
>
>Ermm, I think you missed my point slightly. First, can you explain
>wall-time? Second, how does a timer that counts a number of seconds know
>about GMT/UTC if it is stored in 16 bits? More appropriately, what does it
>care?
>

Excellent point. BTW, timers need to have a finer resolution than 1 sec.

--
Stan Brown stanb@netcom.com 843-745-3154
Westvaco
Charleston SC.


On Fri, Jan 21, 2000 at 02:22:53PM +0000, Dave West wrote:
> On Fri, 21 Jan 2000, Jiri Baum wrote:
>
> > Dave West:
> > > Hmm, I just thought about entries in a recipe that are measured by time.
> > > This makes it difficult in that the elapsed time would need to be saved
> > > and this could be very frequent.
> >
> > No, you just save the wall-time (either for timer-start or timer-end,
> > probably for whichever is represented as zero).
> >
> > This must be in GMT/UTC (one of them, anyway), but that's default in Linux.
> >
> >
> > (Actually, it isn't - for recipe timers you'd want the UTC without the leap
> > seconds - but it's close enough as will make no difference.)
>
> Ermm, I think you missed my point slightly.

Well, rather than saving elapsed time continually, you save the wall-time whenever the timer is reset.

> First can you explain wall-time.

It's the time on the clock on the wall, eg "Sat Jan 22 14:31:27 UTC 2000".

> Second how does a timer that counts a number of seconds know about
> GMT/UTC if it is stored in 16 bits. More appropriately, what does it care.

To turn the question around, how *else* are you going to implement a timer?

Yes, the user only cares about elapsed time. But the only practical way of implementing it is to store the start time (or the end time) and compare
the current time with that whenever the user asks (or on every scan).


Jiri
--
Jiri Baum <jiri@baum.com.au>
On the Internet, nobody knows if you are a @{[@{[open(0),<0>]}-1]}-line
perl script...


On Sun, 23 Jan 2000, Jiri Baum wrote:

> On Fri, Jan 21, 2000 at 02:22:53PM +0000, Dave West wrote:
> > On Fri, 21 Jan 2000, Jiri Baum wrote:
> >
> > > Dave West:
> > > > Hmm, I just thought about entries in a recipe that are measured by time.
> > > > This makes it difficult in that the elapsed time would need to be saved
> > > > and this could be very frequent.
> > >
> > > No, you just save the wall-time (either for timer-start or timer-end,
> > > probably for whichever is represented as zero).
> > >
> > > This must be in GMT/UTC (one of them, anyway), but that's default in Linux.
> > >
> > >
> > > (Actually, it isn't - for recipe timers you'd want the UTC without the leap
> > > seconds - but it's close enough as will make no difference.)
> >
> > Ermm, I think you missed my point slightly.
>
> Well, rather than saving elapsed time continually, you save the wall-time
> whenever the timer is reset.
>
> > First can you explain wall-time.
>
> It's the time on the clock on the wall, eg "Sat Jan 22 14:31:27 UTC 2000".
>
> > Second how does a timer that counts a number of seconds know about
> > GMT/UTC if it is stored in 16 bits. More appropriately, what does it care?
>
> To turn the question around, how *else* are you going to implement a timer?

Dead simple: count interrupts, like Linux does. Remember that Linux has no concept of time, just an elapsed-time ticker that it initialises from your BIOS RTC at boot time, or even from a time server on the internet. Almost all PLCs I have worked with have absolutely no concept of wall-time at all, yet they still run lots of timers.

> Yes, the user only cares about elapsed time. But the only practical way of
> implementing it is to store the start time (or the end time) and compare
> the current time with that whenever the user asks (or on every scan).

Many of the devices that were tested for Y2K compliance passed without problem because they have absolutely no concept of wall time yet they all have timers. Thus storing the start time and comparing it with real time is certainly not the *only* practical way of doing this!


Dave West E-Mail: davew@hoagy.demon.co.uk
Semiras Projects Ltd. PGP public key available on request.


_______________________________________________
LinuxPLC mailing list
LinuxPLC@linuxplc.org
http://linuxplc.org/mailman/listinfo/linuxplc

On Sat Jan 22 09:36:53 2000 Jiri Baum wrote...
>
>To turn the question around, how *else* are you going to implement a timer?

It's simply a counter, incremented at the timebase if enabled and not done. The timebase is a multiple of the dreaded 10 ms ticks of the kernel.
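A sketch of that counter style of timer, assuming a 10 ms tick (the struct and names are illustrative, not project code):

```c
/* Hypothetical tick-driven timer: a counter bumped once per timebase tick. */
typedef struct {
    unsigned long acc;     /* accumulated ticks (10 ms each, say) */
    unsigned long preset;  /* preset, in ticks */
    int enabled;           /* set/cleared by the logic engine */
    int done;              /* latched when the preset is reached */
} tick_timer;

/* Called once per timebase tick by the timer execution engine. */
void tick_update(tick_timer *t)
{
    if (t->enabled && !t->done) {
        if (++t->acc >= t->preset)
            t->done = 1;
    }
}
```

The trade-off versus the wall-time scheme is that every enabled timer is touched on every tick.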

>Yes, the user only cares about elapsed time. But the only practical way of
>implementing it is to store the start time (or the end time) and compare
>the current time with that whenever the user asks (or on every scan).
>

I don't think so. Since the timer execution engine will potentially have many timers to deal with, I think keeping up with the start time of all of them would be a royal pain. Also they can be stopped and restarted (in the case of retentive timers) by the logic engines.

--
Stan Brown stanb@netcom.com 843-745-3154
Westvaco
Charleston SC.

_______________________________________________
LinuxPLC mailing list
LinuxPLC@linuxplc.org
http://linuxplc.org/mailman/listinfo/linuxplc

On Sat, Jan 22, 2000 at 03:46:10PM +0000, Dave West wrote:
> On Sun, 23 Jan 2000, Jiri Baum wrote:
> > To turn the question around, how *else* are you going to implement a
> > timer?

> Dead simple count interrupts like Linux does.

Not as simple as it sounds. For one thing, you're going to miss clock ticks and get errors unless you really know what you are doing. For another
thing, you don't want to be updating a thousand timers from your interrupt handler.

> Remember that Linux has no concept of time just an elapsed time ticker
> that it initialises from your BIOS RTC at boot time or even from a time
> server on the internet. Almost all PLC's I have worked with have
> absolutely no concept of wall-time at all yet they still run lots of timers.

Hmm, I suspect they do work off the concept of wall-time, it's just that since they are only interested in differences, they don't bother asking for it and just assume they were turned on at midnight 1st January AD 1.

A very private concept of wall-time, I admit, but it suffices for the purposes.

The other way of looking at it is that there is one interrupt-driven "master timer", and all the other timers work off that. In linux, the master timer happens to run on GMT.

Jiri
--
Jiri Baum <jiri@baum.com.au>
On the Internet, nobody knows if you are a @{[@{[open(0),<0>]}-1]}-line
perl script...

_______________________________________________
LinuxPLC mailing list
LinuxPLC@linuxplc.org
http://linuxplc.org/mailman/listinfo/linuxplc

On Sat, Jan 22, 2000 at 11:35:38AM -0500, Stan Brown wrote:
> On Sat Jan 22 09:36:53 2000 Jiri Baum wrote...

> >Yes, the user only cares about elapsed time. But the only practical way
> >of implementing it is to store the start time (or the end time) and
> >compare the current time with that whenever the user asks (or on every
> >scan).

> I don't think so, since the timer execution engine will potentially
> have many timers to deal with, I think keeping up with the start
> time of all of them would be a royal pain.

No, having to increment all of them at each tick would be a royal pain.

(Unless you really know what you are doing, you'll end up missing ticks. The kernel already solved that problem - let's hang our timers off the
kernel timer, rather than re-implementing it.)

> Also they can be stopped, and restarted (in the case of retentive
> timers) by the logic engines.

Stopped timers store elapsed time, of course.
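A hedged sketch of how a retentive timer can combine the two ideas (names illustrative): bank the elapsed time on stop, and re-base the start instant on restart, so a running timer still needs no per-tick work.

```c
#include <time.h>

/* Hypothetical retentive timer: while running, only the start instant is
 * kept; on stop the elapsed time is banked; a restart re-bases the start. */
typedef struct {
    time_t start;    /* valid only while running */
    long   banked;   /* seconds accumulated across previous runs */
    int    running;
} ret_timer;

void ret_start(ret_timer *t, time_t now)
{
    if (!t->running) { t->start = now; t->running = 1; }
}

void ret_stop(ret_timer *t, time_t now)
{
    if (t->running) { t->banked += (long)(now - t->start); t->running = 0; }
}

long ret_elapsed(const ret_timer *t, time_t now)
{
    return t->banked + (t->running ? (long)(now - t->start) : 0);
}
```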


Jiri
--
Jiri Baum <jiri@baum.com.au>
On the Internet, nobody knows if you are a @{[@{[open(0),<0>]}-1]}-line
perl script...

_______________________________________________
LinuxPLC mailing list
LinuxPLC@linuxplc.org
http://linuxplc.org/mailman/listinfo/linuxplc

On Mon Jan 24 04:15:08 2000 Jiri Baum wrote...
>
>On Sat, Jan 22, 2000 at 03:46:10PM +0000, Dave West wrote:
>> On Sun, 23 Jan 2000, Jiri Baum wrote:
>> > To turn the question around, how *else* are you going to implement a
>> > timer?
>
>> Dead simple count interrupts like Linux does.
>
>Not as simple as it sounds. For one thing, you're going to miss clock ticks
>and get errors unless you really know what you are doing. For another
>thing, you don't want to be updating a thousand timers from your interrupt
>handler.

That's why the timer execution engine is a task unto itself. And BTW, 0.01 s resolution is all we need here.

>> Remember that Linux has no concept of time just an elapsed time ticker
>> that it initialises from your BIOS RTC at boot time or even from a time
>> server on the internet. Almost all PLC's I have worked with have
>> absolutely no concept of wall-time at all yet they still run lots of timers.
>
>Hmm, I suspect they do work off the concept of wall-time, it's just that
>since they are only interested in differences, they don't bother asking for
>it and just assume they were turned on at midnight 1st January AD 1.

Many of these devices have no concept of wall time. Just a continuing stream of ticks.

--
Stan Brown stanb@netcom.com 843-745-3154
Westvaco
Charleston SC.

_______________________________________________
LinuxPLC mailing list
LinuxPLC@linuxplc.org
http://linuxplc.org/mailman/listinfo/linuxplc

On Mon Jan 24 04:06:08 2000 Jiri Baum wrote...
>
>On Sat, Jan 22, 2000 at 11:35:38AM -0500, Stan Brown wrote:
>> On Sat Jan 22 09:36:53 2000 Jiri Baum wrote...
>
>> >Yes, the user only cares about elapsed time. But the only practical way
>> >of implementing it is to store the start time (or the end time) and
>> >compare the current time with that whenever the user asks (or on every scan).
>
>> I don't think so, since the timer execution engine will potentially
>> have many timers to deal with, I think keeping up with the start
>> time of all of them would be a royal pain.
>
>No, having to increment all of them at each tick would be a royal pain.
>
>(Unless you really know what you are doing, you'll end up missing ticks.
>The kernel already solved that problem - let's hang our timers off the
>kernel timer, rather than re-implementing it.)

This can be implemented very simply for the resolution we need. If we can get a consensus on the data table structures and the library calls to interface to them, I will whip up a sample timer execution engine to put this discussion to bed.


--
Stan Brown stanb@netcom.com 843-745-3154
Westvaco
Charleston SC.

_______________________________________________
LinuxPLC mailing list
LinuxPLC@linuxplc.org
http://linuxplc.org/mailman/listinfo/linuxplc

On Mon, 24 Jan 2000, Jiri Baum wrote:

> On Sat, Jan 22, 2000 at 11:35:38AM -0500, Stan Brown wrote:
> > On Sat Jan 22 09:36:53 2000 Jiri Baum wrote...
>
> > >Yes, the user only cares about elapsed time. But the only practical way
> > >of implementing it is to store the start time (or the end time) and
> > >compare the current time with that whenever the user asks (or on every scan).
>
> > I don't think so, since the timer execution engine will potentially
> > have many timers to deal with, I think keeping up with the start
> > time of all of them would be a royal pain.
>
> No, having to increment all of them at each tick would be a royal pain.

Yes it would be a royal pain incrementing or decrementing all the timers every tick.

> (Unless you really know what you are doing, you'll end up missing ticks.
> The kernel already solved that problem - let's hang our timers off the
> kernel timer, rather than re-implementing it.)

Well, I'm not sure about hanging our timers off the kernel timer but we should look at the kernel for the best way to handle a quantity of timers.
Basically you know which timer will expire first so you only test for it expiring. If that timer is disabled or a shorter timer is created/started
then you re-evaluate which timer will expire first and test only that one. To do this you need a free running clock such as the jiffies counter and always compare against that. Note I have not mentioned wall-time as I think this free running timer should start when the PLC logic is started
and have absolutely no relation to wall-time.
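Dave's scheme - re-evaluate the soonest expiry only when the timer set changes, then test just that one value per scan - might look roughly like this (array-based sketch, names illustrative):

```c
/* Hypothetical earliest-deadline check: the engine keeps the soonest
 * absolute expiry and compares only that against the free-running counter. */
#define NTIMERS 8

static unsigned long expiry[NTIMERS];   /* absolute expiry, in ticks */
static int active[NTIMERS];

/* Re-evaluated only when a timer is started, stopped or re-programmed,
 * not on every tick. */
unsigned long next_expiry(void)
{
    unsigned long best = (unsigned long)-1;
    for (int i = 0; i < NTIMERS; i++)
        if (active[i] && expiry[i] < best)
            best = expiry[i];
    return best;
}

/* Per scan: one comparison, not NTIMERS increments. */
int anything_expired(unsigned long ticks_now, unsigned long next)
{
    return ticks_now >= next;
}
```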

> > Also they can be stopped, and restarted (in the case of retentive
> > timers) by the logic engines.
>
> Stopped timers store elapsed time, of course.


Dave West E-Mail: davew@hoagy.demon.co.uk
Semiras Projects Ltd. PGP public key available on request.


_______________________________________________
LinuxPLC mailing list
LinuxPLC@linuxplc.org
http://linuxplc.org/mailman/listinfo/linuxplc

On Mon, 24 Jan 2000, Jiri Baum wrote:

> On Sat, Jan 22, 2000 at 03:46:10PM +0000, Dave West wrote:
> > On Sun, 23 Jan 2000, Jiri Baum wrote:
> > > To turn the question around, how *else* are you going to implement a
> > > timer?
>
> > Dead simple count interrupts like Linux does.
>
> Not as simple as it sounds. For one thing, you're going to miss clock ticks
> and get errors unless you really know what you are doing. For another
> thing, you don't want to be updating a thousand timers from your interrupt handler.
>
> > Remember that Linux has no concept of time just an elapsed time ticker
> > that it initialises from your BIOS RTC at boot time or even from a time
> > server on the internet. Almost all PLC's I have worked with have
> > absolutely no concept of wall-time at all yet they still run lots of timers.
>
> Hmm, I suspect they do work off the concept of wall-time, it's just that
> since they are only interested in differences, they don't bother asking for
> it and just assume they were turned on at midnight 1st January AD 1.

This implies an understanding of minutes, hours, days, weeks, months, years, leap days etc. None of the PLCs I was referring to has any
knowledge, concept or idea of these things. Indeed, I have worked on projects where we designed and built embedded controllers that had timers but no concept of wall-time. I know this to be the case because I wrote all the code in 8085 asm myself.

> A very private concept of wall-time, I admit, but it suffices for the
> purposes.
>
> The other way of looking at it is that there is one interrupt-driven
> "master timer", and all the other timers work off that. In linux, the
> master timer happens to run on GMT.

No it does not. It runs on seconds since 00:00 1/1/1970 (or similar). The library routines sort that count into GMT/UTC or whatever on request and
it is not accurate to the millisecond. It may be accurate to within +/- 9 msec depending on when the timer interrupt was initialised and assuming
the clock it initialised the counter from was correct.


Dave West E-Mail: davew@hoagy.demon.co.uk
Semiras Projects Ltd. PGP public key available on request.


_______________________________________________
LinuxPLC mailing list
LinuxPLC@linuxplc.org
http://linuxplc.org/mailman/listinfo/linuxplc

On Mon, Jan 24, 2000 at 02:36:14PM +0000, Dave West wrote:
> On Mon, 24 Jan 2000, Jiri Baum wrote:
> > Hmm, I suspect they do work off the concept of wall-time, it's just
> > that since they are only interested in differences, they don't bother
> > asking for it and just assume they were turned on at midnight 1st
> > January AD 1.

> This implies an understanding of minutes, hours, days, weeks, months, years and leap days etc.

Not really, more a willful ignorance of them. In AD 1, years were reckoned Urbis Conditae, started in March, didn't have leap days, and days themselves didn't start at midnight but at sundown. Or something.

...
> > The other way of looking at it is that there is one interrupt-driven
> > "master timer", and all the other timers work off that. In linux, the
> > master timer happens to run on GMT.

> No it does not. It runs on seconds since 00:00 1/1/1970 (or similar).

Well, yes and no. It's seconds since 00:00 1/1/1970 GMT.

> The library routines sort that count into GMT/UTC or whatever on request

Yes, they sort it into hours and minutes and whatnot, but we don't need to call that library function - we can just get seconds count and use that.

(It's better to use that count rather than jiffies-since-boot, because the system time will be of the same order of magnitude throughout execution. We don't want to have the problem of losing timing accuracy when running longer, like the Patriot missiles reportedly did around Kuwait and Iraq.)

> and it is not accurate to the millisecond. It may be accurate to within
> +/- 9 msec depending on when the timer interrupt was initialised and
> assuming the clock it initialised the counter from was correct.

Hmm, what does the UTIME patch do to that? Does it make it more accurate?

We probably don't care about leap seconds, do we.

Damn, I just read the manpage and supposedly at 2100.02.28 23:59:59 the system clock will jump by a whole day.

Jiri
--
Jiri Baum <jiri@baum.com.au>
On the Internet, nobody knows if you are a @{[@{[open(0),<0>]}-1]}-line
perl script...

_______________________________________________
LinuxPLC mailing list
LinuxPLC@linuxplc.org
http://linuxplc.org/mailman/listinfo/linuxplc

On Tue, 25 Jan 2000, Jiri Baum wrote:

> On Mon, Jan 24, 2000 at 02:36:14PM +0000, Dave West wrote:
> > On Mon, 24 Jan 2000, Jiri Baum wrote:
> > > Hmm, I suspect they do work off the concept of wall-time, it's just
> > > that since they are only interested in differences, they don't bother
> > > asking for it and just assume they were turned on at midnight 1st
> > > January AD 1.
>
> > This implies an understanding of minutes, hours, days, weeks, months, years and leap days etc.
>
> Not really, more a willful ignorance of them. In AD 1, years were reckoned
> Urbis Conditae, started in March, didn't have leap days and days themselves
> didn't start at midnight but at sundown. Or something.
>
> ...
> > > The other way of looking at it is that there is one interrupt-driven
> > > "master timer", and all the other timers work off that. In linux, the
> > > master timer happens to run on GMT.
>
> > No it does not. It runs on seconds since 00:00 1/1/1970 (or similar).
>
> Well, yes and no. It's seconds since 00:00 1/1/1970 GMT.
>
> > The library routines sort that count into GMT/UTC or whatever on request
>
> Yes, they sort it into hours and minutes and whatnot, but we don't need to
> call that library function - we can just get seconds count and use that.

Exactly - it is not wall-time, it is a seconds count!

> (It's better to use that count rather than jiffies-since-boot, because the
> system time will be of the same order of magnitude throughout execution. We
> don't want to have the problem of losing timing accuracy when running
> longer, like the Patriot missiles reportedly did around Kuwait and Iraq.)

And the seconds count is incremented each time the jiffies count passes another 100 ticks.

> > and it is not accurate to the millisecond. It may be accurate to within
> > +/- 9 msec depending on when the timer interrupt was initialised and
> > assuming the clock it initialised the counter from was correct.
>
> Hmm, what does the UTIME patch do to that? Does it make it more accurate?

I don't really know; I think it has the potential to, but then who really cares? Most likely the clock you initialised from is only accurate to the
nearest second anyway.

> We probably don't care about leap seconds, do we.
>
> Damn, I just read the manpage and supposedly at 2100.02.28 23:59:59 the
> system clock will jump by a whole day.

What are you trying to say here? That the system clock will jump to March 1? March 1 is the correct next day; 2100 is not a leap year.

BTW it's worse than that. As I'm sure you are aware the current system clock will roll over to 00:00 1/1/1970 somewhere around April 2038 so who
cares what happens in 2100?

Dave West E-Mail: davew@hoagy.demon.co.uk
Semiras Projects Ltd. PGP public key available on request.


_______________________________________________
LinuxPLC mailing list
LinuxPLC@linuxplc.org
http://linuxplc.org/mailman/listinfo/linuxplc

On Tue Jan 25 04:13:21 2000 Dave West wrote...
>
>Oh, I don't know about this. Supposing the logic engine keeps track of
>when a coil changes state (including the coils of timers). Then when a
>timer coil gets turned on it can send some message to the timer engine
>telling it to start timer x. When the coil gets turned off a message of
>stop timer x is sent. When the timer engine receives these messages it
>would wake up and re-evaluate the timer expiry order and then sleep until
>the next timer expires.
>For clarity: I might code this using a FIFO for the message passing
>medium. The timer engine can then use a blocking read on the FIFO with a
>timeout slightly less than the next timer expiry. When the read times
>out, wait for the timer to finish and then re-evaluate and repeat the
>read. When the read returns with a message handle the message, test any
>timers for expiry and re-evaluate before repeating the read.

I have not bought into the inter-task communication channel outside the data tables, yet.

>The way I see things this intercommunication between task has always been
>required and there is no possible way to make this work otherwise.

No, that is _the purpose_ of the data table structures, for things like counters, timers, PID's etc. Not the only purpose, but the single most important one.

>I've had another thought. If the logic engine accesses timers via a shared
>memory segment (the timer data table) then a semaphore or something will
>be required. I believe the timer engine can watch this semaphore and if it
>has been changed by the logic engine or other process (not the timer
>engine) then it can receive a signal and do the processing outlined above.
>This is still inter process communication.

Hmm, perhaps, although my concept of this has only the shared memory libraries having access to this semaphore.

--
Stan Brown stanb@netcom.com 843-745-3154
Westvaco
Charleston SC.

_______________________________________________
LinuxPLC mailing list
LinuxPLC@linuxplc.org
http://linuxplc.org/mailman/listinfo/linuxplc

-----Original Message-----
From: Dave West <davew@hoagy.demon.co.uk>
>
>The way I see things this intercommunication between task has always been
>required and there is no possible way to make this work otherwise.
>
>I've had another thought. If the logic engine accesses timers via a shared
>memory segment (the timer data table) then a semaphore or something will
>be required. I believe the timer engine can watch this semaphore and if it
>has been changed by the logic engine or other process (not the timer
>engine) then it can receive a signal and do the processing outlined above.
>This is still inter process communication.

I think this communications should be through the shared memory like everything else.

Ron Davis

_______________________________________________
LinuxPLC mailing list
LinuxPLC@linuxplc.org
http://linuxplc.org/mailman/listinfo/linuxplc

By Johan Bengtsson on 25 January, 2000 - 2:23 pm

Actually, I don't think it will make that much of a difference; you have to check them anyway, and that can be done at the same time. I would go for increasing / decreasing each time, but it would not matter THAT much.


/Johan Bengtsson

----------------------------------------
P&L, the Academy of Automation
Box 252, S-281 23 Hässleholm SWEDEN
Tel: +46 451 49 460, Fax: +46 451 89 833
E-mail: johan.bengtsson@pol.se
Internet: http://www.pol.se/
----------------------------------------


-----Original Message-----
From: Jiri Baum <jiri@baum.com.au>

On Sat, Jan 22, 2000 at 11:35:38AM -0500, Stan Brown wrote:
> On Sat Jan 22 09:36:53 2000 Jiri Baum wrote...

> >Yes, the user only cares about elapsed time. But the only practical way
> >of implementing it is to store the start time (or the end time) and
> >compare the current time with that whenever the user asks (or on every
> >scan).

> I don't think so, since the timer execution engine will potentially
> have many timers to deal with, I think keeping up with the start
> time of all of them would be a royal pain.

No, having to increment all of them at each tick would be a royal pain.

(Unless you really know what you are doing, you'll end up missing ticks. ...<clip>

_______________________________________________
LinuxPLC mailing list
LinuxPLC@linuxplc.org
http://linuxplc.org/mailman/listinfo/linuxplc

By Johan Bengtsson on 25 January, 2000 - 3:44 pm

>Well, I'm not sure about hanging our timers off the kernel timer but we
>should look at the kernel for the best way to handle a quantity of timers.
>Basically you know which timer will expire first so you only test for it
>expiring. If that timer is disabled or a shorter timer is created/started
>then you re-evaluate which timer will expire first and test only that one.

Hmm, a list of timers sorted by when they would expire... didn't think of that one. Could be quite fast.

>To do this you need a free running clock such as the jiffies counter and
>always compare against that. Note I have not mentioned wall-time as I
>think this free running timer should start when the PLC logic is started
>and have absolutely no relation to wall-time.

Sounds like a good solution to me


/Johan Bengtsson

----------------------------------------
P&L, the Academy of Automation
Box 252, S-281 23 Hässleholm SWEDEN
Tel: +46 451 49 460, Fax: +46 451 89 833
E-mail: johan.bengtsson@pol.se
Internet: http://www.pol.se/
----------------------------------------


_______________________________________________
LinuxPLC mailing list
LinuxPLC@linuxplc.org
http://linuxplc.org/mailman/listinfo/linuxplc

On Tue, 25 Jan 2000, Stan Brown wrote:

> On Tue Jan 25 04:13:21 2000 Dave West wrote...
> >
> >Oh, I don't know about this. Supposing the logic engine keeps track of
> >when a coil changes state (including the coils of timers). Then when a
> >timer coil gets turned on it can send some message to the timer engine
> >telling it to start timer x. When the coil gets turned off a message of
> >stop timer x is sent. When the timer engine receives these messages it
> >would wake up and re-evaluate the timer expiry order and then sleep until
> >the next timer expires.
> >For clarity: I might code this using a FIFO for the message passing
> >medium. The timer engine can then use a blocking read on the FIFO with a
> >timeout slightly less than the next timer expiry. When the read times
> >out, wait for the timer to finish and then re-evaluate and repeat the
> >read. When the read returns with a message handle the message, test any
> >timers for expiry and re-evaluate before repeating the read.
> >
> I have not bought into the inter-task communication channel outside the data tables, yet.

How about the situation where a logic engine is running on one computer and an HMI on another? There has to be a communication channel; you cannot use the data table alone. A better example may be a logic engine on one computer communicating with a logic engine on another remote computer.

> >The way I see things this intercommunication between task has always been
> >required and there is no possible way to make this work otherwise.
>
> No, that is _the purpose_ of the data table structures, for things like
> counters, timers, PID's etc. Not the only purpose, but the single most
> important one.

If the data table is not a communication medium what is it? Furthermore each task accessing the data table will need to communicate the state of the data table to others. It has been said that this will be done via semaphores. Semaphores are a means of IPC (Inter Process Communication). Thus the system cannot work without a communication channel of some sort.
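For illustration only, a data-table lock of the kind being discussed could be built on a SysV semaphore - the semaphore is itself a kernel-provided IPC channel. The function names here are made up; the real interface would live inside the shared memory library.

```c
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/sem.h>

union semun { int val; };            /* Linux makes the caller define this */

/* Create a binary semaphore guarding the data table, initially free. */
int table_lock_init(void)
{
    int id = semget(IPC_PRIVATE, 1, IPC_CREAT | 0600);
    if (id >= 0) {
        union semun arg = { 1 };
        semctl(id, 0, SETVAL, arg);
    }
    return id;
}

void table_lock(int id)
{
    struct sembuf op = { 0, -1, SEM_UNDO };   /* P: take the table */
    semop(id, &op, 1);
}

void table_unlock(int id)
{
    struct sembuf op = { 0, +1, SEM_UNDO };   /* V: release the table */
    semop(id, &op, 1);
}
```

SEM_UNDO means the kernel releases the lock if a task dies while holding it, which matters for a long-running PLC.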

> >I've had another thought. If the logic engine accesses timers via a shared
> >memory segment (the timer data table) then a semaphore or something will
> >be required. I believe the timer engine can watch this semaphore and if it
> >has been changed by the logic engine or other process (not the timer
> >engine) then it can receive a signal and do the processing outlined above.
> >This is still inter process communication.
>
> Hmm, perhaps, although my concept of this has only the shared memory
> libraries having access to this semaphore.

So?
The library has to provide a function to allow the timer engine to read the initial value from the data table. It is simple enough to write a
library function such as wait_for_change(int timeout); that blocks waiting for the semaphore to show the table has updated. The timer engine has to block at some point, otherwise we have a real nasty busy-wait loop; one cannot call sleep(), as the man page says the actual time spent in sleep() is somewhat variable.

The only free-running process should be the logic engine; all others should spend most of their time blocked for I/O or data table access.
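A possible sketch of such a wait_for_change() using select() on the message FIFO - the function body is a guess at the design discussed here, not existing project code:

```c
#include <sys/select.h>
#include <unistd.h>

/* Hypothetical wait_for_change(): block on the message FIFO until either a
 * message arrives or the next timer expiry is due.
 * Returns 1 if a message is readable, 0 on timeout, -1 on error. */
int wait_for_change(int fifo_fd, long timeout_ms)
{
    fd_set rd;
    struct timeval tv;

    FD_ZERO(&rd);
    FD_SET(fifo_fd, &rd);
    tv.tv_sec  = timeout_ms / 1000;
    tv.tv_usec = (timeout_ms % 1000) * 1000;

    int r = select(fifo_fd + 1, &rd, NULL, NULL, &tv);
    if (r < 0)
        return -1;            /* error, e.g. interrupted by a signal */
    return r > 0 ? 1 : 0;     /* 1 = message pending, 0 = timed out */
}
```

Unlike sleep(), the select() timeout has sub-second resolution, and the call wakes immediately when a message arrives.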


Dave West E-Mail: davew@hoagy.demon.co.uk
Semiras Projects Ltd. PGP public key available on request.


_______________________________________________
LinuxPLC mailing list
LinuxPLC@linuxplc.org
http://linuxplc.org/mailman/listinfo/linuxplc

On Tue, 25 Jan 2000, Ron Davis wrote:

> -----Original Message-----
> From: Dave West <davew@hoagy.demon.co.uk>
> >
> >The way I see things this intercommunication between task has always been
> >required and there is no possible way to make this work otherwise.
> >
> >I've had another thought. If the logic engine accesses timers via a shared
> >memory segment (the timer data table) then a semaphore or something will
> >be required. I believe the timer engine can watch this semaphore and if it
> >has been changed by the logic engine or other process (not the timer
> >engine) then it can receive a signal and do the processing outlined above.
> >This is still inter process communication.
>
> I think this communications should be through the shared memory like
> everything else.

That was the point of my second thought. That is, through the shared memory.


Dave West E-Mail: davew@hoagy.demon.co.uk
Semiras Projects Ltd. PGP public key available on request.


_______________________________________________
LinuxPLC mailing list
LinuxPLC@linuxplc.org
http://linuxplc.org/mailman/listinfo/linuxplc

Think about this real carefully, then throw away all your thoughts and look at how the kernel handles its timers. The kernel is really elegant
about this. Before anyone says it's different, it is not: the kernel timers (scheduling, alarms etc.) all have the functionality we require.


On Tue, 25 Jan 2000 johan.bengtsson@pol.se wrote:

> Actually I don't think it will make that much of a difference, you have to check them anyway and that can be done at the same time. I would go for increasing / decreasing each time but it would not matter THAT much.

Dave West E-Mail: davew@hoagy.demon.co.uk
Semiras Projects Ltd. PGP public key available on request.


_______________________________________________
LinuxPLC mailing list
LinuxPLC@linuxplc.org
http://linuxplc.org/mailman/listinfo/linuxplc

Hmm, I should have read more of the list before replying to that last one. I thought you had missed this one.

On Tue, 25 Jan 2000 johan.bengtsson@pol.se wrote:

> >Well, I'm not sure about hanging our timers off the kernel timer but we
> >should look at the kernel for the best way to handle a quantity of timers.
> >Basically you know which timer will expire first so you only test for it
> >expiring. If that timer is disabled or a shorter timer is created/started
> >then you re-evaluate which timer will expire first and test only that one.
>
> Hmm, a sorted list of timers sorted by when they would
> expire... didn't think of that one. Could be quite fast.

Oh, it is!!!!

>
> >To do this you need a free running clock such as the jiffies counter and
> >always compare against that. Note I have not mentioned wall-time as I
> >think this free running timer should start when the PLC logic is started
> >and have absolutely no relation to wall-time.
>
> Sounds like a good solution to me

Dave West E-Mail: davew@hoagy.demon.co.uk
Semiras Projects Ltd. PGP public key available on request.


_______________________________________________
LinuxPLC mailing list
LinuxPLC@linuxplc.org
http://linuxplc.org/mailman/listinfo/linuxplc

By Curt Wuollet on 26 January, 2000 - 9:42 am

Dave West wrote:

> Think about this real carefully, then throw away all your thoughts and
> look at how the kernel handles its timers. The kernel is realy elegant
> about this. Before anyone says it's different it is not, the kernel timers
> (scheduling, alarms etc) all have the functionality we require.
>

I agree completely, I doubt that we'd be handling thousands of timers anyway.

cww


_______________________________________________
LinuxPLC mailing list
LinuxPLC@linuxplc.org
http://linuxplc.org/mailman/listinfo/linuxplc

Johan.bengtsson@pol.se:

> >Basically you know which timer will expire first so you only test for it
> >expiring. If that timer is disabled or a shorter timer is created/started
> >then you re-evaluate which timer will expire first and test only that one.

> Hmm, a sorted list of timers sorted by when they would expire... didn't
> think of that one. Could be quite fast.

You don't even need a real sort. A priority queue (also known as a "heap", as in "heap-sort") is enough and has O(log n) insertion / deletion time
with constant time to access first element.
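A toy version of such a heap of expiry times (illustrative only; removal/sift-down is omitted for brevity): O(log n) insertion, constant-time peek at the soonest expiry.

```c
/* Minimal binary min-heap of absolute expiry times (a priority queue). */
#define HEAP_MAX 64

static unsigned long heap[HEAP_MAX];
static int heap_n = 0;

/* Insert an expiry, sifting it up until the heap property holds. */
void heap_push(unsigned long expiry)
{
    int i = heap_n++;
    heap[i] = expiry;
    while (i > 0 && heap[(i - 1) / 2] > heap[i]) {
        unsigned long tmp = heap[i];
        heap[i] = heap[(i - 1) / 2];
        heap[(i - 1) / 2] = tmp;
        i = (i - 1) / 2;
    }
}

/* The soonest expiry is always at the root - constant time. */
unsigned long heap_min(void)
{
    return heap[0];
}
```

Per scan, the engine compares only heap_min() against the tick counter; the heap is touched only when timers are started, stopped or expire.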

> >To do this you need a free running clock such as the jiffies counter and
> >always compare against that. Note I have not mentioned wall-time as I
> >think this free running timer should start when the PLC logic is started
> >and have absolutely no relation to wall-time.

OK, but be careful with that - make sure you know how long the jiffies counter will last and what happens when it rolls over.
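One careful way to handle the rollover (a sketch in the style of the kernel's time_after macros, which work the same way): compare with unsigned subtraction, so the test stays correct across a counter wrap, provided intervals are shorter than half the counter range.

```c
/* Wraparound-safe "has the deadline passed?" test. Unsigned subtraction is
 * well-defined modulo the word size, so the signed reinterpretation gives
 * the right answer even when `now` has wrapped past `deadline`. */
int ticks_after_eq(unsigned long now, unsigned long deadline)
{
    return (long)(now - deadline) >= 0;
}
```

A naive `now >= deadline` would fail at the wrap; this form does not.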


Jiri
--
Jiri Baum <jiri@baum.com.au>
On the Internet, nobody knows if you are a @{[@{[open(0),<0>]}-1]}-line
perl script...

_______________________________________________
LinuxPLC mailing list
LinuxPLC@linuxplc.org
http://linuxplc.org/mailman/listinfo/linuxplc

On Tue, Jan 25, 2000 at 09:26:27AM +0000, Dave West wrote:
> On Tue, 25 Jan 2000, Jiri Baum wrote:
...
> > Damn, I just read the manpage and supposedly at 2100.02.28 23:59:59 the
> > system clock will jump by a whole day.
>
> What are you trying to say here. That the system clock will jump to March
> 1? March 1 is the correct next day 2100 is not a leap year.

Yes, but usually between 23:59:59 one day and 0:00:00 the next the system clock changes by one, not by 86400.

> BTW it's worse than that. As I'm sure you are aware the current system
> clock will roll over to 00:00 1/1/1970 somewhere around April 2038 so who
> cares what happens in 2100?

With any luck we'll be on 64 bit systems well before then.

Personally, I hope someone will change the specification to say that it takes account of *real* leap days. It should still ignore leap seconds tho.


Jiri
--
Jiri Baum <jiri@baum.com.au>
On the Internet, nobody knows if you are a @{[@{[open(0),<0>]}-1]}-line
perl script...


On Tue, Jan 25, 2000 at 09:54:16AM -0500, Ron Davis wrote:
> -----Original Message-----
> From: Dave West <davew@hoagy.demon.co.uk>

> >The way I see things this intercommunication between task has always
> >been required and there is no possible way to make this work otherwise.

> >I've had another thought. If the logic engine accesses timers via a
> >shared memory segment (the timer data table) then a semaphore or
> >something will be required. I believe the timer engine can watch this
> >semaphore and if it has been changed by the logic engine or other
> >process (not the timer engine) then it can receive a signal and do the
> >processing outlined above. This is still inter process communication.

> I think this communications should be through the shared memory like
> everything else.

Yes, everything should be through the shared memory.

But note that semaphores have to be separate. I was putting them in the We'll Do That Later category, though (besides, the kernel provides them).


Jiri
--
Jiri Baum <jiri@baum.com.au>
On the Internet, nobody knows if you are a @{[@{[open(0),<0>]}-1]}-line
perl script...


On Wed, Jan 26, 2000 at 09:40:14AM +0000, Dave West wrote:
> How about the situation where a logic engine is running on one computer
> and an HMI on another. There has to be a coomunication channel. You cannot
> use the data table alone. A better example may be a logic engine on one
> computer coomunicating with a logic engine on another remote computer.

As far as I'm concerned, this would be done by a pseudo-IO driver.

Basically, accessing inputs, outputs or internal coils on another linuxPLC box would be exactly the same as accessing the same points on any other
kind of PLC.

(In fact easier, because the linuxPLC box could be set up to emulate any number of legacy PLCs, so your legacy HMI box could run on.)


Jiri
--
Jiri Baum <jiri@baum.com.au>
On the Internet, nobody knows if you are a @{[@{[open(0),<0>]}-1]}-line
perl script...


On Thu, 27 Jan 2000, Jiri Baum wrote:

> Johan.bengtsson@pol.se:
>
> > >Basically you know which timer will expire first so you only test for it
> > >expiring. If that timer is disabled or a shorter timer is created/started
> > >then you re-evaluate which timer will expire first and test only that one.
>
> > Hmm, a sorted list of timers sorted by when they would expire... didn't
> > think of that one. Could be quite fast.
>
> You don't even need a real sort. A priority queue (also known as a "heap",
> as in "heap-sort") is enough and has O(log n) insertion / deletion time
> with constant time to access first element.
>
> > >To do this you need a free running clock such as the jiffies counter and
> > >always compare against that. Note I have not mentioned wall-time as I
> > >think this free running timer should start when the PLC logic is started
> > >and have absolutely no relation to wall-time.
>
> OK, but be careful with that - make sure you know how long the jiffies
> counter will last and what happens when it rolls over.

1 year, 132 days, 2 hours, 27 minutes and 52.95 seconds. Assuming a tick every 0.01 seconds and an unsigned int (32 bit) counter.

As for roll-over, the comparison code needs to handle it. It's not too difficult, as anybody who has implemented any start-time-plus-duration code on a 24-hour clock will tell you. Come to think of it, even a circular buffer is similar in principle to this roll-over.

Dave West E-Mail: davew@hoagy.demon.co.uk
Semiras Projects Ltd. PGP public key available on request.



On Thu, 27 Jan 2000, Jiri Baum wrote:

> On Tue, Jan 25, 2000 at 09:26:27AM +0000, Dave West wrote:
> > On Tue, 25 Jan 2000, Jiri Baum wrote:
> ...
> > > Damn, I just read the manpage and supposedly at 2100.02.28 23:59:59 the
> > > system clock will jump by a whole day.
> >
> > What are you trying to say here. That the system clock will jump to March
> > 1? March 1 is the correct next day 2100 is not a leap year.
>
> Yes, but usually between 23:59:59 one day and 0:00:00 the next the system
> clock changes by one, not by 86400.

If I understand this correctly you are saying that the internal clock will add 86400 seconds to itself at 23:59:59 on 28/2/2100 because the library functions are broken!

BTW which man page did you get this from?

Dave West E-Mail: davew@hoagy.demon.co.uk
Semiras Projects Ltd. PGP public key available on request.



On Thu, 27 Jan 2000, Jiri Baum wrote:

> On Wed, Jan 26, 2000 at 09:40:14AM +0000, Dave West wrote:
> > How about the situation where a logic engine is running on one computer
> > and an HMI on another. There has to be a coomunication channel. You cannot
> > use the data table alone. A better example may be a logic engine on one
> > computer coomunicating with a logic engine on another remote computer.
>
> As far as I'm concerned, this would be done by a pseudo-IO driver.
>
> Basically, accessing inputs, outputs or internal coils on another linuxPLC
> box would be exactly the same as accessing the same points on any other
> kind of PLC.
>
> (In fact easier, because the linuxPLC box could be set up to emulate any
> number of legacy PLCs, so your legacy HMI box could run on.)

Exactly the point. You need the pseudo-I/O driver for logic engines on different boxes anyway, so why write a new method to provide the same functionality when the two logic engines are on the same box, and why complicate a simple environment?

Dave West E-Mail: davew@hoagy.demon.co.uk
Semiras Projects Ltd. PGP public key available on request.



By Johan Bengtsson on 27 January, 2000 - 11:24 am

>> Hmm, a sorted list of timers sorted by when they would expire... didn't
>> think of that one. Could be quite fast.
>
>You don't even need a real sort. A priority queue (also known as a "heap",
>as in "heap-sort") is enough and has O(log n) insertion / deletion time
>with constant time to access first element.

Was probably like what I was thinking about :-)

>> >To do this you need a free running clock such as the jiffies counter and
>> >always compare against that. Note I have not mentioned wall-time as I
>> >think this free running timer should start when the PLC logic is started
>> >and have absolutely no relation to wall-time.
>
>OK, but be careful with that - make sure you know how long the jiffies counter will last and what happens when it rolls over.
>

That's easy enough if you make all calculations using the same register size as the original variables, always use time differences, and expect it to roll over.

instead of: (NOTE! this code is WRONG)
if (now>start+time)
...

you write:
if (now-start>time)
...

Think about what will happen when start+time is a number higher than can be represented by the selected variable. BTW, some kind of unsigned integer (any size) is probably the best to use for these kinds of calculations.

My suggestion (if this kind of code is to be implemented) is to use a fairly low number of bits, so that it wraps within hours at most. This will make bugs appear and get fixed, instead of hiding them by the fact that no one wants to debug for that long.

Like using 32 bits at usec resolution (wraps after slightly more than 1 hour); the actual resolution may be lower depending on the system.

Or, if msec resolution is to be used, use no more than 16 bits (wraps after a little more than 1 minute).



/Johan Bengtsson

----------------------------------------
P&L, the Academy of Automation
Box 252, S-281 23 Hässleholm SWEDEN
Tel: +46 451 49 460, Fax: +46 451 89 833
E-mail: johan.bengtsson@pol.se
Internet: http://www.pol.se/
----------------------------------------



By Curt Wuollet on 27 January, 2000 - 8:44 pm

Jiri Baum wrote:

> > I think this communications should be through the shared memory like everything else.
>
> Yes, everything should be through the shared memory.
>
> But note that semaphores have to be separate. I was putting them in the
> We'll Do That Later category, though (besides, the kernel provides them).

Yes, but are they visible/usable from kernel code?
cww


On Thu, 27 Jan 2000, Curt Wuollet wrote:

> > Yes, everything should be through the shared memory.
> >
> > But note that semaphores have to be separate. I was putting them in the
> > We'll Do That Later category, though (besides, the kernel provides them).
>
> Yes, but are they visible/usable from kernel code?
> cww

As in useable by RT stuff????
Otherwise why do they need to be?


Dave West E-Mail: davew@hoagy.demon.co.uk
Semiras Projects Ltd. PGP public key available on request.



On Thu, 27 Jan 2000 johan.bengtsson@pol.se wrote:

> >> Hmm, a sorted list of timers sorted by when they would expire... didn't
> >> think of that one. Could be quite fast.
> >
> >You don't even need a real sort. A priority queue (also known as a "heap",
> >as in "heap-sort") is enough and has O(log n) insertion / deletion time
> >with constant time to access first element.
>
> Was probably like what I was thinking about :-)
>
> >> >To do this you need a free running clock such as the jiffies counter and
> >> >always compare against that. Note I have not mentioned wall-time as I
> >> >think this free running timer should start when the PLC logic is started
> >> >and have absolutely no relation to wall-time.
> >
> >OK, but be careful with that - make sure you know how long the jiffies
> >counter will last and what happens when it rolls over.
>
> That's easy enough if you make all caclulations using the same
> register size as the original variables and always uses time
> differences, and expect it to roll over.
>
> instead of: (NOTE! this code is WRONG)
> if (now>start+time)
> ...
>
> you write:
> if (now-start>time)
> ...
>
> Think about what will happen when start+time is a number
> higher than can be represented by the selected variable.
> btw. some kind of unsigned integers (any size) is probably the
> best to use for these ku_nd of caclulations.
>
> My suggestion is (if this kind of code is to be implemented)
> a fairly low amount of bits is used. Like that it will wrap
> within hours at least. This will make bugs appear and be fixed
> instead of hiding them by the fact that noone wants to debug that long.
>
> Like using 32 bits and use usec resolution (sligtly more than
> 1 hour) actual resolution may be lower depending on the system.

No use, you cannot time for more than 2 hours!!!

> or if msec resolution is to be used, not use more than 16 bits
> (will wrap after a little bit more than 1 minute)
>

See above.

Basically, if the counter + offset value is double the counter's scale then your timer fails entirely. It needs to be a BIG number. For debugging our code you can use a shorter type and simply increase it to a bigger one once the algorithm is sorted. This code should never need to be revamped; as such it only needs to be written once and will never break.

Dave West E-Mail: davew@hoagy.demon.co.uk
Semiras Projects Ltd. PGP public key available on request.



By Curt Wuollet on 28 January, 2000 - 11:06 am

Dave West wrote:

> > > But note that semaphores have to be separate. I was putting them in the
> > > We'll Do That Later category, though (besides, the kernel provides them).
> >
> > Yes, but are they visible/usable from kernel code?
> > cww
>
> As in useable by RT stuff????
> Otherwise why do they need to be?

RT stuff and board drivers that need interrupts.
cww


On Thu, Jan 27, 2000 at 07:48:56PM +0000, Curt Wuollet wrote:
> Jiri Baum wrote:

> > But note that semaphores have to be separate. I was putting them in the
> > We'll Do That Later category, though (besides, the kernel provides them).

> Yes, but are they visible/usable from kernel code?

I don't know, but frankly I don't really care.

Either they can be, or the RT group will give us a wrapper function we can call instead of semop(2). Semaphores are a sufficiently basic requirement that we can assume they will exist everywhere.


Jiri
--
Jiri Baum <jiri@baum.com.au>
On the Internet, nobody knows if you are a @{[@{[open(0),<0>]}-1]}-line
perl script...


On Thu, Jan 27, 2000 at 12:49:17PM +0000, Dave West wrote:
> If I understand this correctly you are saying that the internal clock
> will add 86400 seconds to itself at 23:59:59 on 28/2/2100 because the
> library functions are broken!

> BTW which man page did you get this from?

time(2) from Linux 2.0.30, dated 9 September 1997.

It quotes this as coming from POSIX.1, particularly Annex B 2.2.2.


Maybe the intention was that by the time 2100 rolls around POSIX.1 will have been supplanted by something sane in this area.

Jiri
--
Jiri Baum <jiri@baum.com.au>
On the Internet, nobody knows if you are a @{[@{[open(0),<0>]}-1]}-line
perl script...


By Curt Wuollet on 29 January, 2000 - 12:26 am

Hi Jiri

We need to care. You can't code for something that isn't there. I think we need to find out. If we don't have them we can simulate them or use something else for synchronization. I'll do some looking, but we should have a plan B. I'd like something in the map. Drivers will need to sync, and at least some will have to run in kernel space.

Regards,

Curt




Jiri Baum wrote:
>
> On Thu, Jan 27, 2000 at 07:48:56PM +0000, Curt Wuollet wrote:
> > Jiri Baum wrote:
>
> > > But note that semaphores have to be separate. I was putting them in the
> > > We'll Do That Later category, though (besides, the kernel provides them).
>
> > Yes, but are they visible/usable from kernel code?
>
> I don't know, but frankly I don't really care.
>
> Either they can be, or the RT group will give us a wrapper function we can
> call instead of semop(2). Semaphores are a sufficiently basic requirement
> that we can assume they will exist everywhere.

Curt Wuollet:
> We need to care. You can't code for something that isn't there. I think
> we need to find out. If we don't have them we can simulate them or use
> something else for synchronization.

Semaphores are *basic*. Every environment will provide them, either itself or in the form of sample code.

You can code to a generic semaphore - it has a signal function and a wait function, and ensures that at any time the signal function had completed at least as many times as the wait function.

> I'll do some looking, but we ahould have a plan B. I'd like something in
> the map. Drivers will need to sync and at least some will have to run in
> kernel space.

I don't think a plan B *can* exist. Either you have sufficient primitives to construct a semaphore, or you can't do multiprocessing safely. I don't think there's any third possibility.

That's what I meant by "don't care". They *are* important; but I'm certain that they'll be provided, and the exact form of the function call is unimportant.

> > Semaphores are a sufficiently basic requirement that we can assume they
> > will exist everywhere.


Jiri
--
Jiri Baum <jiri@baum.com.au>
On the Internet, nobody knows if you are a @{[@{[open(0),<0>]}-1]}-line
perl script...


By Curt Wuollet on 30 January, 2000 - 12:47 pm

Hi Jiri,

A simple grep through the driver code in /usr/src/linux says that semaphores of some sort are available. My point, in professing my ignorance and asking, is that it's time to move from totally blue sky to how we implement in Linux. I didn't mean to use you personally as an example; whenever possible I'll be the dumb guy and ask questions. We have some firm constraints. Quite a bit of the stuff I see ignores those constraints. Some of what I see ignores *nix practice and philosophy completely and invents things that are already available as OS services. I think from your response that you see my point that "I don't know, I'll find out" is more useful than "I don't care". Please forgive my intrusion as well intended.

For some of this first crucial code the project has a real problem. We need people who have experience in writing to Linux at the kernel level. I have precious little of that experience; I've hacked together a driver or two that work, but I am worried that the foundation be efficient, solid, and take the best possible advantage of the OS. These are goals that we all can agree on, as they make things easier and better all around. As we go up towards the application level, we have lots of capable, if opinionated ;^), talent. I need help to address this concern. I have had few serious inquiries about the developers list. I have asked some people and got some favorable responses. We really need to sift through the list and identify, without ego or prejudice, the individuals that can "walk the walk". I, for example, have a good grasp on what I don't know and am hoping we have better talent than myself available. I have even posted on a few other lists to see if we can attract a serious kernel hacker on at least an advisory level. I ask all on the list to help find the best people for the Linux-specific layer.

All comments and suggestions welcome.

Curt Wuollet.
Wide Open Technologies.


Jiri Baum wrote:
>
> Curt Wuollet:
> > We need to care. You can't code for something that isn't there. I think
> > we need to find out. If we don't have them we can simulate them or use
> > something else for synchronization.
>
> Semaphores are *basic*. Every environment will provide them, either itself
> or in the form of sample code.
>
> You can code to a generic semaphore - it has a signal function and a wait
> function, and ensures that at any time the signal function had completed at
> least as many times as the wait function.
>
> > I'll do some looking, but we ahould have a plan B. I'd like something in
> > the map. Drivers will need to sync and at least some will have to run in
> > kernel space.
>
> I don't think a plan B *can* exist. Either you have sufficient primitives
> to construct a semaphore, or you can't do multiprocessing safely. I don't
> think there's any third possibility.
>
> That's what I meant by "don't care". They *are* important; but I'm certain
> that they'll be provided, and the exact form of the function call is unimportant. <clip>


Curt Wuollet :
> A simple grep through the driver code in /usr/src/linux says that
> semaphores of some sort are available.
...
> I think from your response that you see my point that "I don't know, I'll
> find out" is more useful than "I don't care". Please forgive my
> intrusion as well intended.

I think I should be the one apologising; "I don't care" really isn't a very good way of putting what I was trying to express.

It was the details I didn't care about, and in the overall view I was certain that semaphores are available; it can't work otherwise. So I
figured I'd use semop(2) until there's a reason to switch to something else, at which point it's a simple matter of doing a search-and-fix.

Since I don't have any experience with RT Linux or kernel drivers, I assumed that it'd be someone else who would find out that something special
has to be done for semaphores, and write appropriate wrappers.

> I have had few serious inquiries about the developers list. I have asked
> some people and got some favorable responses. We really need to sift
> through the list and identify, without ego or prejudice, the individuals
> that can "walk the walk".

I don't think there's a need at this stage, and we'd probably lose more in a list-split than we'd gain - especially now that one of the most prolific posters has left (because of standards *compliance*).


Jiri
--
Jiri Baum <jiri@baum.com.au>
On the Internet, nobody knows if you are a @{[@{[open(0),<0>]}-1]}-line
perl script...


By Curt Wuollet on 31 January, 2000 - 8:23 am

Jiri Baum wrote:
>
> Curt Wuollet :
> > A simple grep through the driver code in /usr/src/linux says that
> > semaphores of some sort are available.
> ...
> > I think from your response that you see my point that "I don't know, I'll
> > find out" is more useful than "I don't care". Please forgive my
> > intrusion as well intended.
>
> I think I should be the one apologising; "I don't care" really isn't a very
> good way of putting what I was trying to express.
>
> It was the details I didn't care about, and in the overall view I was
> certain that semaphores are available; it can't work otherwise. So I
> figured I'd use semop(2) until there's a reason to switch to something
> else, at which point it's a simple matter of doing a search-and-fix.
>
> Since I don't have any experience with RT Linux or kernel drivers, I
> assumed that it'd be someone else who would find out that something special
> has to be done for semaphores, and write appropriate wrappers.

Exactly, I'm concerned about who that someone else is.
>
> > I have had few serious inquiries about the developers list. I have asked
> > some people and got some favorable responses. We really need to sift
> > through the list and identify, without ego or prejudice, the individuals
> > that can "walk the walk".
>
> I don't think there's a need at this stage, and we'd probably lose more in
> a list-split than we'd gain - especially now that one of the most prolific
> posters has left (because of standards *compliance*).

While we don't necessarily need a list split due to reduced volume, we do need to get a handle on where our Linux expertise is, and get that discussion focused on implementation. I finally got a chance to study the stuff in the cvs archive and was well impressed. Phil, Simon, myself and the others deeply interested in low-level stuff should talk, as there are some differences. I am trying to write to the map based on the membuf stuff and will need to dig some more.

I hope we haven't lost anyone permanently; it's just that that fight^h^h^h^hdiscussion can take place later, when those issues are being addressed.
cww


By Simon Martin on 3 February, 2000 - 4:08 pm

Hi Curt,

With respect to your mail about expertise: I have about 15 years' experience in C in multitasking/multiprocessing/real-time (servo) environments. I have been involved in *NIX-based systems for about 2-3 years now. I have used pthreads and forks, and process interlocks (using pmutex, lock files, etc.).

I have no experience with the real-time linux patches.

By the way I had a wonderful holiday and am back on-line again. Next week I will be in the UK and will have more time available for this project (at
least 2 hours a day).

Read you soon.

Debian GNU User
Simon Martin
Project Manager
Isys
mailto: smartin@isys.cl



By Simon Martin on 27 March, 2000 - 7:31 pm

Hi Jiri,

I come to you cap in hand. I have just come back on-line after a rather interesting return to Chile. Do you still want/need my source code?


Debian GNU User
Simon Martin
Project Manager
Isys
mailto: smartin@isys.cl

There is a chasm of carbon and silicon the software cannot bridge


Simon Martin,

> I come to you cap in hand. I have just come back on-line after a rather
> interesting return to Chile. Do you still want/need my source code?

Oops, yes, I'm still waiting for it... (not that Real Life isn't keeping me busy, but still).


Jiri
--
Jiri Baum <jiri@baum.com.au>
Windows is not popular. Windows is *widespread*. Linux is popular.
