persistent data

Stan Brown (Thread Starter)

One of the basic concepts of a PLC is that the data table is retained upon program stop, whether that be manually stopping the scan or actually powering down the controller.

Has anyone put any thoughts into how we can handle this? In a real PLC the data table is in genuine battery-backed RAM, so there is no "save on power failure"; we unfortunately don't have that luxury. Nor can we depend upon every installation having a UPS.

Anyone know a way around this, that doesn't cause us to spend a lot of time writing the changing data table to disk? Or are we going to have to have a task for this? If so, we need to look at some of the database projects; they have similar issues with recovery logs.
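A recovery log of the kind those database projects use might look roughly like this minimal C sketch (the record layout and all names are invented for illustration; this is not LinuxPLC code): each change to the data table is appended to a log, and the log is replayed over the last snapshot on restart.

    #include <stdio.h>
    #include <stdint.h>

    typedef struct {
        uint32_t offset;   /* word offset into the data table */
        uint16_t value;    /* new value of that word */
    } log_record;

    /* Append one change.  How often you fflush()/fsync() the log decides
     * how much can be lost on a power cut - syncing every record is the
     * performance killer being worried about here. */
    void log_change(FILE *log, uint32_t offset, uint16_t value)
    {
        log_record rec = { offset, value };
        fwrite(&rec, sizeof rec, 1, log);
    }

    /* On restart, replay whatever records survived over the data table. */
    void log_replay(FILE *log, uint16_t *table, size_t nwords)
    {
        log_record rec;
        while (fread(&rec, sizeof rec, 1, log) == 1)
            if (rec.offset < nwords)
                table[rec.offset] = rec.value;
    }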

--
Stan Brown [email protected] 843-745-3154
Westvaco
Charleston SC.
 

Locke, Alan S

>On Fri Jan 14 15:50:39 2000 "Sage, Pete (IndSys, GEFanuc, Albany)" wrote...
>>Unless you synch the data on every write you will lose data if someone
>>switches the PC off. Syncing the data on every write will kill your
>>performance. A reasonable technique is to configure the shared memory as a
>>memory mapped file, this will give you persistence. Periodically you can
>>flush it to disk.
>Well, I was thinking of a process whose job it is to scan the data tables, and
>write any changes it finds to the disk files. I realize this is a performance issue,
>_but_ it is critical to the operation of the process, and it is a problem that has been
>solved by the database code writers, they can't lose data either. You would hate to
>have your savings deposit deducted from your checking account, but never credited
>to your savings account because of a computer crash, now wouldn't you :)

My understanding is that software PLC vendors have addressed this issue by using a battery-backed flash/RAM drive, and that they write the data tables to this drive every scan. As a machine integrator type, I would also expect to need to install a UPS with a software PLC installation, and to do the power-loss wiring to the PLC for an orderly shutdown.

IMHO the data tables must be saved every scan. The end user could configure the PLC to save only a portion of the data table, depending on the application, but not saving them every scan could really mess up a machine once repowered.
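For reference, the memory-mapped-file technique Pete describes in the quote above might be sketched in plain POSIX C like this (the path, table size and flush interval are invented): writes land in ordinary memory, and a periodic msync() flushes the whole table, so a power cut loses at most one flush interval.

    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define TABLE_BYTES 65536   /* size of the data table (invented) */

    int main(void)
    {
        int fd = open("/var/lib/plc/datatable.img", O_RDWR | O_CREAT, 0600);
        if (fd < 0 || ftruncate(fd, TABLE_BYTES) < 0)
            return 1;

        unsigned char *table = mmap(NULL, TABLE_BYTES,
                                    PROT_READ | PROT_WRITE,
                                    MAP_SHARED, fd, 0);
        if (table == MAP_FAILED)
            return 1;

        for (;;) {
            /* ... the logic scan reads and writes table[] here ... */
            msync(table, TABLE_BYTES, MS_ASYNC);   /* periodic flush */
            sleep(1);
        }
    }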


 
Stan Brown
On Fri Jan 14 19:03:12 2000 Locke, Alan S wrote...
>
>>On Fri Jan 14 15:50:39 2000 "Sage, Pete (IndSys, GEFanuc, Albany)" wrote...
>>>Unless you synch the data on every write you will lose data if someone
>>>switches the PC off. Syncing the data on every write will kill your
>>>performance. A reasonable technique is to configure the shared memory as a
>>>memory mapped file, this will give you persistence. Periodically you can
>>>flush it to disk.
>>Well, I was thinking of a process whose job it is to scan the data tables, and
>>write any changes it finds to the disk files. I realize this is a performance issue,
>>_but_ it is critical to the operation of the process, and it is a problem that has been
>>solved by the database code writers, they can't lose data either. You would hate to
>>have your savings deposit deducted from your checking account, but never credited
>>to your savings account because of a computer crash, now wouldn't you :)
>
>My understanding is that software PLC vendors have addressed this issue by using a battery backed up flash drive (ram) and that they write the data tables to this drive every scan. As a machine integrator type, I would also expect to need to install a UPS with a software PLC installation and to also do the power loss wiring to the PLC for orderly shutdown.

Good point. However, I wish we could come up with a better solution. Flash RAM is expensive, and I most certainly don't put all of my PLCs on UPSes.

>IMHO the data tables must be saved every scan. The end user could configure the PLC to save only a portion of the data table, depending on the application, but not saving them every scan could really mess up a machine once repowered.

Yep.

--
Stan Brown [email protected] 843-745-3154
Westvaco
Charleston SC.


 

Butler, Lawrence

Perhaps we should consider configuring which data is persistent, to minimize disk writes....

LB

> -----Original Message-----
> From: Stan Brown [SMTP:[email protected]]
>
> One of the basic concepts of a PLC is that the data table is retained
> upon program stop, whether that be manually stopping the scan or
> actually powering down the controller.
>
> Has anyone put any thoughts into how we can handle this? In a real
> PLC the data table is in genuine battery-backed RAM, so there is no
> "save on power failure"; we unfortunately don't have that luxury. Nor
> can we depend upon every installation having a UPS.
>
> Anyone know a way around this, that doesn't cause us to spend a lot
> of time writing the changing data table to disk? Or are we going to
> have to have a task for this? If so, we need to look at some of the
> database projects; they have similar issues with recovery logs.
>
 
Sage, Pete (IndSys, GEFanuc, Albany):
> >Unless you synch the data on every write you will lose data if someone
> >switches the PC off. Syncing the data on every write will kill your
> >performance. A reasonable technique is to configure the shared memory
> >as a memory mapped file, this will give you persistence. Periodically
> >you can flush it to disk.

Stan Brown:
> Well, I was thinking of a process whose job it is to scan the data
> tables, and write any changes it finds to the disk files.

The only data tables that need to be written to disk are the "internal coils", aren't they?

The place in the architecture where this fits is among the I/O drivers: just another set of points, except instead of interfacing to a PLC it'll interface to a disk file.

You'll probably lose a few seconds worth of data in a crash, but I don't think that can really be helped.

The advantage of doing it like this is that if a few seconds' loss is unacceptable, you load the battery-backed-RAM driver instead.
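As a rough illustration of that driver-shaped process (the shared table is faked with a static array here, since the project's shared-memory interface didn't exist yet; the path and sizes are invented):

    /* A PersistentData process shaped like an I/O driver: the same scan
     * loop a fieldbus driver would have, except its "bus" is a disk file. */
    #include <stdio.h>
    #include <unistd.h>

    #define TABLE_BYTES 65536

    /* Stand-in for the shared data table; the real process would attach
     * to shared memory instead. */
    static unsigned char table[TABLE_BYTES];

    int main(void)
    {
        for (;;) {
            FILE *f = fopen("/var/lib/plc/persist.img", "wb");
            if (f) {
                fwrite(table, 1, TABLE_BYTES, f);
                fflush(f);
                fsync(fileno(f));   /* force the image onto the disk */
                fclose(f);
            }
            sleep(1);   /* a crash loses at most a few seconds, as above */
        }
    }

(Rewriting the file in place like this is still vulnerable to a power cut arriving mid-write; a safer variant comes up later in the thread.)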

> I realize this is a performance issue, _but_ it is critical to the
> operation of the process, and it is a problem that has been solved
> by the database code writers, they can't lose data either.

Yes, but they don't have the real-time problem. (Well, they do, but it's not as hard as ours. Their real-time problems are measured in seconds or days.)


Jiri
--
Jiri Baum <[email protected]>
On the Internet, nobody knows if you are a @{[@{[open(0),<0>]}-1]}-line
perl script...

 
Stan Brown
On Sat Jan 15 00:42:20 2000 Butler, Lawrence wrote...
>
>Perhaps we consider configuring which data is persistent to minimize disk
>writes....

Perhaps, or perhaps we define what is "high-priority persistent": an attempt would be made to keep all the rest up to date, but the "high-priority" stuff would take precedence over the rest.

However, I am not very happy with this solution, since it adds a whole extra level of things to keep in mind when writing the application programs.

I think we need to think some more about this.

--
Stan Brown [email protected] 843-745-3154
Westvaco
Charleston SC.

 
Stan Brown
On Fri Jan 14 21:44:14 2000 Jiri Baum wrote...
>
>Sage, Pete (IndSys, GEFanuc, Albany):
>> >Unless you synch the data on every write you will lose data if someone
>> >switches the PC off. Syncing the data on every write will kill your
>> >performance. A reasonable technique is to configure the shared memory
>> >as a memory mapped file, this will give you persistence. Periodically
>> >you can flush it to disk.
>
>Stan Brown:
>> Well, I was thinking of a process whose job it is to scan the data
>> tables, and write any changes it finds to the disk files.
>
>The only data tables that need to be written to disk are the "internal
>coils", aren't they?

No, in a real PLC _all data_ is in battery backed RAM.
>
>The place in the architecture where this fits is among the I/O drivers:
>just another set of points, except instead of interfacing to a PLC it'll
>interface to a disk file.

I don't see it that way. I see a data-to-disk process, whose job is to read the data tables and keep the disk copy up to date. We can optimize this to minimize the number of disk writes by keeping track of what has changed since the last write, rather like the in-memory caching of databases.
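That change tracking might look roughly like this (block size and all names are invented for the sketch): keep a shadow copy of the table, compare block by block each pass, and rewrite only the blocks that differ.

    #include <stdio.h>
    #include <string.h>

    #define TABLE_BYTES 65536
    #define BLOCK        512

    static unsigned char shadow[TABLE_BYTES];

    void flush_changed_blocks(const unsigned char *table, FILE *f)
    {
        size_t off;
        for (off = 0; off < TABLE_BYTES; off += BLOCK) {
            if (memcmp(shadow + off, table + off, BLOCK) != 0) {
                memcpy(shadow + off, table + off, BLOCK);
                fseek(f, (long)off, SEEK_SET);
                fwrite(shadow + off, 1, BLOCK, f);
            }
        }
        fflush(f);   /* one flush per pass, not one per changed block */
    }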

>You'll probably lose a few seconds worth of data in a crash, but I don't
>think that can really be helped.
>
>The advantage of doing it like this is that if a few seconds' loss is
>unacceptable, you load the battery-backed-RAM driver instead.
>
>> I realize this is a performance issue, _but_ it is critical to the
>> operation of the process, and it is a problem that has been solved
>> by the database code writers, they can't lose data either.
>
>Yes, but they don't have the real-time problem. (Well, they do, but it's
>not as hard as ours. Their real-time problems are measured in seconds or
>days.)


--
Stan Brown [email protected] 843-745-3154
Westvaco
Charleston SC.

 

Butler, Lawrence

> -----Original Message-----
> From: Stan Brown [SMTP:[email protected]]

<snip>
> Perhaps, or perhaps we define what is "high-priority persistent":
> an attempt would be made to keep all the rest up to date, but the
> "high-priority" stuff would take precedence over the rest.
>
> However, I am not very happy with this solution, since it adds a
> whole extra level of things to keep in mind when writing the
> application programs.
>
> I think we need to think some more about this.
<snip>
Definitely requires much more thought; you don't want to get caught at 3:00 am with problems because you forgot to designate a register as persistent and the program died through a power bump.

 
> >Sage, Pete (IndSys, GEFanuc, Albany):
> >> >Unless you synch the data on every write you will lose data if someone
> >> >switches the PC off. Syncing the data on every write will kill your
> >> >performance. A reasonable technique is to configure the shared memory
> >> >as a memory mapped file, this will give you persistence. Periodically
> >> >you can flush it to disk.

> >Stan Brown:
> >> Well, I was thinking of a process whose job it is to scan the data
> >> tables, and write any changes it finds to the disk files.

Jiri Baum:
> >The only data tables that need to be written to disk are the "internal
> >coils", aren't they?

Stan Brown wrote:
> No, in a real PLC _all data_ is in battery backed RAM.

I'm not sure whether this is a disagreement or a misunderstanding...

If you mean that the other files of data (16-bit words, floats) are also saved, then that's no problem; the PersistentData driver will simply handle them, too (from its point of view it's all bits - no problem).

If you mean there's data *other* than the files to be saved, can you give an example?

> >The place in the architecture where this fits is among the I/O drivers:
> >just another set of points, except instead of interfacing to a PLC it'll
> >interface to a disk file.

> I don't see it that way. I see a data to disk process. Whose job is to
> read the data tables and keep the disk copy up to date.

How would this differ in functionality from what I've suggested?

(I'd rather minimize the number of interfaces into the core, even if it means that two things presented to the user as completely different sometimes share the same interface. And I don't see any difference between taking a bunch of bits and sending them to a PLC and taking a bunch of bits and sending them to disk.)

> We can optimize this to minimize the number of disk writes, by keeping up
> with what has changed since the last write, sort of like the in memory
> cacheing of databases.

The PLC interface will probably have that info anyway, because real PLC drivers will want to minimize bus traffic.


Jiri
--
Jiri Baum <[email protected]>
On the Internet, nobody knows if you are a @{[@{[open(0),<0>]}-1]}-line
perl script...

 
On Sun Jan 16 05:42:01 2000 Jiri Baum wrote...
>
>> >Sage, Pete (IndSys, GEFanuc, Albany):
>> >> >Unless you synch the data on every write you will lose data if someone
>> >> >switches the PC off. Syncing the data on every write will kill your
>> >> >performance. A reasonable technique is to configure the shared memory
>> >> >as a memory mapped file, this will give you persistence. Periodically
>> >> >you can flush it to disk.
>
>> >Stan Brown:
>> >> Well, I was thinking of a process whose job it is to scan the data
>> >> tables, and write any changes it finds to the disk files.
>
>Jiri Baum:
>> >The only data tables that need to be written to disk are the "internal
>> >coils", aren't they?
>
>Stan Brown wrote:
>> No, in a real PLC _all data_ is in battery backed RAM.
>
>I'm not sure whether this is a disagreement or a misunderstanding...
>
>If you mean that the other files of data (16-bit words, floats) are also
>saved, then that's no problem; the PersistentData driver will simply handle
>them, too (from its point of view it's all bits - no problem).

That's exactly what I mean.

>If you mean there's data *other* than the files to be saved, can you give an example?
>
>> >The place in the architecture where this fits is among the I/O drivers:
>> >just another set of points, except instead of interfacing to a PLC it'll
>> >interface to a disk file.
>
>> I don't see it that way. I see a data to disk process. Whose job is to
>> read the data tables and keep the disk copy up to date.
>
>How would this differ in functionality from what I've suggested?

I am a big believer in a relatively large number of simpler processes that work with each other, rather than assigning multiple tasks to one process. Easier to code, debug, and understand for the application programmers.

>(I'd rather minimize the number of interfaces into the core, even if it
>means that sometimes two things that are presented to the user as
>completely different sometimes share the same interface. And I don't see
>any difference between taking a bunch of bits and sending them to a PLC and
>taking a bunch of bits and sending them to disk.)
>
>> We can optimize this to minimize the number of disk writes, by keeping up
>> with what has changed since the last write, sort of like the in memory
>> cacheing of databases.
>
>The PLC interface will probably have that info anyway, because real PLC
>drivers will want to minimize bus traffic.

Huh, are we talking about the same thing here?

--
Stan Brown [email protected] 843-745-3154
Westvaco
Charleston SC.

 
On Sat Jan 15 22:04:50 2000 Butler, Lawrence wrote...

>> I think we need to think some more about this.
> <snip>
> Definitely requires much more thought, don't want to get caught at
>3:00 am with problems because you forgot to designate a register as
>persistent and the program dies through a power bump.

Yep, I was out looking at the process at that time this morning :-(

--
Stan Brown [email protected] 843-745-3154
Westvaco
Charleston SC.

 
On Sat, Jan 15, 2000 at 10:52:25AM -0500, Stan Brown wrote:
> On Sat Jan 15 00:42:20 2000 Butler, Lawrence wrote...
> >
> >Perhaps we consider configuring which data is persistent to minimize disk
> >writes....
>
> Perhaps, or perhaps we define what is "high priority persistent", where
> an attempt to keep all the rest up to date, would be made, but the
> "high priority" stuff would have precedence over the other.
>
> However, I am not very happy with this solutin, since it adds a whole
> extra level of things to keep in mind when writing the application
> programs.
>
> I think we need to think some more about this.

As a machine, the PLC has a specific state at all times, including I/O states as well as the application program's instruction pointer(s). If the machine state is to be preserved, then sufficient information must be stored to fully define it. To me this would require writing all changeable registers to disk on each scan, or possibly only when they change, which would be less of a performance hit than dumping all the registers to disk on every scan.

We'll know that the Linux PLC has succeeded when it can be relied upon to survive nasty power events, like (at least ideal) ordinary PLCs can.

Maybe fast, persistent memory is simply lacking in the current PC design, which really has limited need for it anyway. Perhaps an add-on board with battery-backed RAM or flash could provide this as a service. It would be a mistake to try to compensate in software for limitations that really ought to be addressed in hardware, IMHO.

--
Ken Irving
Trident Software
[email protected]


 
Mark Hutton

That's (one of) the difference(s) between a real PLC and a softPLC. A real PLC is designed for the job, hardware and firmware. A softPLC uses software to force a fit onto a general-purpose system (PC and OS); in this case not only is the PC/OS not designed for instantaneous loss of power, it is generally considered to be a no-no.

You not only have to consider the state of the data table in such a circumstance, but whether and how well Linux will reboot in these circumstances.

(In the Windows world, software has come to degrade over time because of registry corruption caused by such power-downs.)

It may be that persistence is not required; certainly the state of the I/O should be determined prior to the start of logic. (This raises another point: should the logic engine be able to run if it cannot access its assigned I/O?) A well-designed application will check the state of the machine in its initialisation (to prevent unexpected moves).

Our responsibility here is to ensure that the power-down/power-up cycle does not introduce any inherent hazards.

-----Original Message-----
From: [email protected] [mailto:[email protected]] On Behalf Of Stan Brown

On Fri Jan 14 19:03:12 2000 Locke, Alan S wrote...
<snip>
>My understanding is that software PLC vendors have addressed this issue
>by using a battery backed up flash drive (ram) and that they write the
>data tables to this drive every scan. As a machine integrator type, I
>would also expect to need to install a UPS with a software PLC
>installation and to also do the power loss wiring to the PLC for orderly
>shutdown.

Good point. However, I wish we could come up with a better solution. Flash RAM is expensive, and I most certainly don't put all of my PLCs on UPSes.

>IMHO the data tables must be saved every scan. The end user could
>configure the PLC to save only a portion of the data table, depending on
>the application, but not saving them every scan could really mess up a
>machine once repowered.

Yep.


 
> >Jiri Baum:
> >> >The only data tables that need to be written to disk are the
> >> >"internal coils", aren't they?

> >Stan Brown wrote:
> >> No, in a real PLC _all data_ is in battery backed RAM.

Jiri Baum:
> >If you mean that the other files of data (16-bit words, floats) are also
> >saved, then that's no problem; the PersistentData driver will simply
> >handle them, too (from its point of view it's all bits - no problem).

Stan Brown:
> Thats exactly what I mean.

OK. Sorry about that - my fault, really.

Jiri Baum:
> >> >The place in the architecture where this fits is among the I/O
> >> >drivers: just another set of points, except instead of interfacing to
> >> >a PLC it'll interface to a disk file.

Stan Brown:
> >> I don't see it that way. I see a data to disk process. Whose job is to
> >> read the data tables and keep the disk copy up to date.

Jiri Baum:
> >How would this differ in functionality from what I've suggested?

Stan Brown:
> I am a big believer in a relatively large number of simpler
> processes that work with each other, rather than assigning multiple
> tasks to one process. Easier to code, debug, and understand for the
> application programmers.

So am I...

I assumed that each I/O driver would be a separate process (so that you can mix and match different brands of I/O, different busses, etc).

Then the PersistentData process can simply pretend to be another I/O driver. You get all the goodies available at the I/O driver interface
without having to re-invent them all.

Stan Brown:
> >> We can optimize this to minimize the number of disk writes, by keeping up
> >> with what has changed since the last write, sort of like the in memory
> >> cacheing of databases.

05:42:01 Jiri Baum:
> >The PLC interface will probably have that info anyway, because real PLC
> >drivers will want to minimize bus traffic.

Stan Brown:
> Huh, are we talking about the same thing here?

No, not when I'm up till six in the morning :)

I meant the I/O drivers.

The I/O driver interface will probably have that info anyway, because real
I/O drivers will want to minimize bus traffic.

Sound better?


(Sometimes the I/O devices will be PLCs; I think that's how I got confused. Either PLCs that have been demoted to dumb I/O, or the PLCs that actually control the machine, with the Linux box doing HMI.)


Jiri
--
Jiri Baum <[email protected]>
On the Internet, nobody knows if you are a @{[@{[open(0),<0>]}-1]}-line
perl script...

 
On Mon Jan 17 03:56:37 2000 Mark Hutton wrote...
>
>That's (one of) the difference(s) between a real PLC and a softPLC. A real
>PLC is designed for the job, hardware and firmware. A softPLC uses software
>to force a fit onto a general purpose system (PC and OS), in this case not
>only is the PC/OS not designed for instantaneous loss of power, it is
>general considered to be a no-no.
>
>You not only have to consider the state of the data table in such a
>circumstance but wether or how well Linux will reboot in these
>circumstances.

True, but with journaling filesystems coming on line in Linux, this should become a non-issue.
>
>(in the windows world software has come to degrade over time because of
>registry corruption caused by such power downs).

So do you want to go down that road :)
>
>It may be that persistence is not required, certainly the state of the I/O
>should be determined prior to the start of logic (this raises another point,
>should the logic engine be able to run if it cannot access its assigned
>I/O?). A well designed application will check the state of the machine in
>its initialization (to prevent unexpected moves).

A good point. The I/O scanners need to be able to do an input-only scan, and then wait for the logic engine(s) to finish the prescan and first scan.

I had not thought of this :-(

Means we need a way of communicating this between tasks. Uh-oh, I feel the SharedMemoryManager() coming on :)
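One possible shape for that communication, purely as a sketch (all names are invented; no SharedMemoryManager interface exists yet): a state word in the shared segment that the tasks step through at startup.

    #include <stdatomic.h>

    enum plc_state {
        STATE_INPUT_ONLY,   /* I/O scanners update inputs, hold outputs */
        STATE_PRESCAN,      /* set by the I/O scanner once inputs are fresh */
        STATE_RUN           /* normal scanning, outputs enabled */
    };

    struct smm_header {     /* lives at the head of the shared segment */
        _Atomic int state;
    };

    /* Logic engine side: wait for the input-only scan to finish, run the
     * prescan and first scan, then release the I/O scanners to full duty. */
    void logic_engine_startup(struct smm_header *smm)
    {
        while (atomic_load(&smm->state) != STATE_PRESCAN)
            ;   /* in practice, sleep or block on a semaphore */

        /* ... prescan / first scan: initialise internal state
         *     from the freshly read inputs ... */

        atomic_store(&smm->state, STATE_RUN);
    }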

>Our responsibility here is to ensure that power down/power up cycle does not
>introduce any inherent hazards.

Absolutely!

--
Stan Brown [email protected] 843-745-3154
Westvaco
Charleston SC.

 
On Fri, 14 Jan 2000, Stan Brown wrote:

> On Fri Jan 14 19:03:12 2000 Locke, Alan S wrote...
> >
> >>On Fri Jan 14 15:50:39 2000 "Sage, Pete (IndSys, GEFanuc, Albany)" wrote...
> >>>Unless you synch the data on every write you will lose data if someone
> >>>switches the PC off. Syncing the data on every write will kill your
> >>>performance. A reasonable technique is to configure the shared memory as a
> >>>memory mapped file, this will give you persistence. Periodically you can
> >>>flush it to disk.
> >>Well, I was thinking of a process whose job it is to scan the data tables, and
> >>write any changes it finds to the disk files. I realize this is a performance issue,
> >>_but_ it is critical to the operation of the process, and it is a problem that has been
> >>solved by the database code writers, they can't lose data either. You would hate to
> >>have your savings deposit deducted from your checking account, but never credited
> >>to your savings account because of a computer crash, now wouldn't you :)
> >
> >My understanding is that software PLC vendors have addressed this issue by using a battery backed up flash drive (ram) and that they write the data tables to this drive every scan. As a machine integrator type, I would also expect to need to install a UPS with a software PLC installation and to also do the power loss wiring to the PLC for orderly shutdown.
>
> Good point. However I wish we could come up with a better solution.
> Flash RAM is expensive, and I most certainly don't put all of my PLC's on UPS'es <
> >
> >IMHO the data tables must be saved every scan. The end user could configure the PLC to save only a portion of the data table, depending on the application, but not saving them every scan could really mess up a machine once repowered.
>
> Yep.

Surely any programmer who relies on the saved state of the I/O after a non-orderly shutdown is asking for trouble. What happens to the machine if the power is cut and the operators have to manually do something to the machine to extract the product? If the machine is not put back in exactly the same state before power is restored, then the machine could at the least be damaged, or worse, injure somebody. Surely the correct programming technique is to re-initialise the machine from real inputs, ignoring any saved I/O (because you don't know if it is valid), and drive the machine to a safe startup state. For example, I would never code for battery-backed inputs, outputs, timers or counters. The only items that should be battery backed are set points, control limits and alarm limits (you may or may not battery-back alarm states, depending on the kind of automation required).

A good example is a printing machine. It should not assume that the piece of paper it was printing is still there to print on, or even in the same position, after a power interruption. If it cannot determine this from real live inputs then it should eject that piece of paper and restart on a new sheet.

As for the methods suggested so far, none are of any use. You do not know when the power is going to fail. If the data is copied to a disk or flash or whatever, then your power cut may occur during the write, and your data is then corrupt. The only way to implement this is to add a UPS that will signal a power failure and ensure sufficient time to save the state to permanent storage and then perform an orderly shutdown.
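The usual answer from the database world to the torn-write half of this (not something proposed in the thread itself) is never to overwrite the snapshot in place: write a fresh copy, fsync() it, then rename() it over the old one, since rename() replaces the file atomically on POSIX filesystems. The paths here are invented:

    #include <stdio.h>
    #include <unistd.h>

    /* After a power cut you find either the old complete image or the
     * new one, never a half-written mix.  (A truly paranoid version
     * would also fsync() the containing directory.) */
    int save_snapshot(const unsigned char *table, size_t len)
    {
        FILE *f = fopen("/var/lib/plc/persist.tmp", "wb");
        if (!f)
            return -1;
        if (fwrite(table, 1, len, f) != len) {
            fclose(f);
            return -1;
        }
        fflush(f);
        fsync(fileno(f));   /* force the data onto the medium */
        fclose(f);
        return rename("/var/lib/plc/persist.tmp", "/var/lib/plc/persist.img");
    }

This protects against corruption, not staleness; a UPS power-fail signal is still what bounds how old the saved state can be.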

We all know why systems check the disks on boot-up after a non-orderly shutdown. You do not know if your saved state survived the non-orderly shutdown, and the file system may be corrupt, or fixed by fsck or whatever, and is therefore invalid anyway.

Dave West E-Mail: [email protected]
Semiras Projects Ltd. PGP public key available on request.


 
On Mon Jan 17 07:39:40 2000 Dave West wrote...
>
>Surely any programmer that relies on the saved state of the I/O after a
>non orderly shutdown is asking for trouble.
>What happens to the machine if the power is cut and the operators have to
>manually do something to the machine to extract the product. If the
>machine is not put back in the exact same state before power is restored
>then the machine could at least be damaged or worse injure somebody.
>Surely the correct programming technique is to re-initialise the machine
>from real inputs ignoring any saved I/O (because you don't know if it is
>valid) and drive the machine to a safe startup state. For example I would
>never code for battery backed inputs, outputs, timers or counters. The
>only items that should be battery backed are set points, control limits
>and alarm limits (you may or may not battery back alarm states depending
>on the kind of automation required).
>A good example is a printing machine. It should not assume that the piece
>of paper it was printing is still there to print on or even in the same
>position after a power interruption. If it can not determine this from
>real live inputs then it should eject that piece of paper and re-start on
>a new sheet.

The issue, once again, is _not_ the I/O states. It is the non-I/O data table.

Recipes, amounts of material loaded into vessels, all sorts of required things are stored there.

>As for any of the methods suggested so far none are of any use. You do not
>know when the power is going to fail. If the data is copied to a disk or
>flash or whatever then your power cut may occur during the write and your
>data is then corrupt. The only way to implement this is to add a UPS that
>will signal a power failure and ensure sufficient time to save the state
>to permanent storage and then perform an orderly storage.

Journaling filesystems go a long way toward addressing this.

>We all know why systems check the disks on boot up after an un ordered
>shutdown. You do not know if your saved state survived the un ordered
>shutdown and the file system may be corrupt or fixed by fsck or whatever
>and is therefore invalid anyway.

--
Stan Brown [email protected] 843-745-3154
Westvaco
Charleston SC.

 
On Sun Jan 16 23:26:17 2000 Jiri Baum wrote...
>
>> >Jiri Baum:
>> >> >The only data tables that need to be written to disk are the
>> >> >"internal coils", aren't they?
>
>> >Stan Brown wrote:
>> >> No, in a real PLC _all data_ is in battery backed RAM.
>
>Jiri Baum:
>> >If you mean that the other files of data (16-bit words, floats) are also
>> >saved, then that's no problem; the PersistentData driver will simply
>> >handle them, too (from its point of view it's all bits - no problem).
>
>Stan Brown:
>> Thats exactly what I mean.
>
>OK. Sorry about that - my fault, really.

That's OK, we are coming together here; this is good.
>
>Jiri Baum:
>> >> >The place in the architecture where this fits is among the I/O
>> >> >drivers: just another set of points, except instead of interfacing to
>> >> >a PLC it'll interface to a disk file.
>
>Stan Brown:
>> >> I don't see it that way. I see a data to disk process. Whose job is to
>> >> read the data tables and keep the disk copy up to date.
>
>Jiri Baum:
>> >How would this differ in functionality from what I've suggested?
>
>Stan Brown:
>> I am a big believer in a relatively large number of simpler
>> processes that work with each other, rather than assigning multiple
>> tasks to one process. Easier to code, debug, and understand for the
>> application programmers.
>
>So am I...

Great!
>
>I assumed that each I/O driver would be a separate process (so that you can
>mix and match different brands of I/O, different busses, etc).

I am on the same wavelength with you here.
>
>Then the PersistentData process can simply pretend to be another I/O
>driver. You get all the goodies available at the I/O driver interface
>without having to re-invent them all.

Well, it's not really dealing with I/O. It's dealing with the more generic "all of the data table", including the I/O data table and the non-I/O data table.

I still don't feel like I have gotten the distinction between the two types of data table across. Am I wrong?
>
>Stan Brown:
>> >> We can optimize this to minimize the number of disk writes, by keeping up
>> >> with what has changed since the last write, sort of like the in memory
>> >> cacheing of databases.
>
>05:42:01 Jiri Baum:
>> >The PLC interface will probably have that info anyway, because real PLC
>> >drivers will want to minimize bus traffic.
>
>Stan Brown:
>> Huh, are we talking about the same thing here?
>
>No, not when I'm up till six in the morning :)
>
>I meant the I/O drivers.
>
>The I/O driver interface will probably have that info anyway, because real
>I/O drivers will want to minimize bus traffic.
>
>Sound better?

Yes.
>
>
>(Sometimes the I/O devices will be PLCs, I think that's how I got confused.
>Either PLCs that have been demoted to dumb I/O, or the PLCs that actually
>control the machine, with the linux box doing HMI.)

PLCs are not real I/O.

Real I/O is a piece of hardware with wires on it; anything else is data, even if that data came from real I/O in another processor.

--
Stan Brown [email protected] 843-745-3154
Westvaco
Charleston SC.

 
Jiri Baum:
> >Then the PersistentData process can simply pretend to be another I/O
> >driver. You get all the goodies available at the I/O driver interface
> >without having to re-invent them all.

Stan Brown:
> Well, it's not really dealing with I/O. It's dealing with the more
> generic "all of the data table", including the I/O data table and the
> non-I/O data table.

> I still don't feel like I have gotten the distinction between the two
> types of data table across. Am I wrong?

No, I understand the difference. I was just thinking that having the PersistentData driver *pretend* to be an I/O driver would save us having to invent a separate interface for it.

Since then I've changed my mind anyway, so it no longer matters.

[on a different topic]
> >(Sometimes the I/O devices will be PLCs, I think that's how I got
> >confused. Either PLCs that have been demoted to dumb I/O, or the PLCs
> >that actually control the machine, with the linux box doing HMI.)

> PLC's are not real I/O.

> Real I/o is a piece of hardware, with wires on it, anything else is
> data. Even if this data came from real I/O in another processor.

Well, if it has a serial cable on one side and wires out the other side, and doesn't do any processing, it's as good as real I/O.

The other thing is that among the I/O drivers there can be drivers reading PLCs that *are* doing processing. Those won't be real real I/O, of course - some points will be almost-real I/O (those that read the values on the wires going in and out of the PLC), while others will be definitely unreal I/O (internal coils of the PLC).

But I've been thinking too much of the SMM lately, where bits is bits, regardless of where they're from, where they're going or what they mean.


Jiri
--
Jiri Baum <[email protected]>
On the Internet, nobody knows if you are a @{[@{[open(0),<0>]}-1]}-line
perl script...

 
Dave West wrote:
>Surely any programmer that relies on the saved state of the I/O after a
>non orderly shutdown is asking for trouble.
>What happens to the machine if the power is cut and the operators have to
>manually do something to the machine to extract the product. If the
>machine is not put back in the exact same state before power is restored
>then the machine could at least be damaged or worse injure somebody.
>Surely the correct programming technique is to re-initialise the machine
>from real inputs ignoring any saved I/O (because you don't know if it is
>valid) and drive the machine to a safe startup state. For example I would
>never code for battery backed inputs, outputs, timers or counters. The
>only items that should be battery backed are set points, control limits
>and alarm limits (you may or may not battery back alarm states depending
>on the kind of automation required).
>A good example is a printing machine. It should not assume that the piece
>of paper it was printing is still there to print on or even in the same
>position after a power interruption. If it can not determine this from
>real live inputs then it should eject that piece of paper and re-start on
>a new sheet.

There are many applications that require the machine state to be saved in order to recover reasonably after a power bump: for instance, machines that don't have enough sensors to determine their state from inputs alone (a common issue with material handling systems), or machines that have a degree of autonomy and need to be able to recover without operator intervention. Even if the operator is available to assist in the power-loss recovery, it's nice to have the machine HMI prompt the operator through a recovery process based at least partly on prior state. A common solution for machines that may be changed (possibly by maintenance personnel) without the PLC's knowledge is a machine reset process.

This is definitely one of the difficult areas in control engineering, being so highly tied to the machine process and complex fault trees.

Alan Locke
Control Engineer, Boeing

"My opinions are my own and not necessarily those of my employer"

 