Johan Bengtsson
>> >Unless you synch the data on every write you will lose data if someone
>> >switches the PC off. Syncing the data on every write will kill your
>> >performance. A reasonable technique is to configure the shared memory
>> >as a memory mapped file, this will give you persistence. Periodically
>> >you can flush it to disk.
>
>Stan Brown:
>> Well, I was thinking of a process whose job it is to scan the data
>> tables, and write any changes it finds to the disk files.
>
If it is possible to configure some special (small) area to save, and to save it as often as possible, I think that is enough for most
applications; if not, buy a UPS or a battery-backed RAM card...
Save the data to a different place each time, in some kind of round-robin scheme, together with a version number with enough bits to
identify the newest version even when a wrap occurs, and some CRC scheme to identify which versions were really fully written. This way
the data may not be saved on literally every scan, but if the data is consistent for a particular scan not too far from the power
failure, that should be enough in most cases.
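
To make the idea concrete, here is a minimal sketch in C of picking the newest fully written slot after a power failure. The slot count, the field names and the choice of CRC-16 are my own assumptions, not something prescribed above; any CRC that detects a torn write would do.

#include <stdint.h>
#include <stddef.h>

#define NSLOTS 8             /* number of round-robin slots; pick to taste */

struct record {
    uint16_t version;        /* increases by one per save, wraps at 65535 */
    uint16_t crc;            /* CRC over payload[]; proves a complete write */
    uint8_t  payload[508];   /* application data, pads the slot to 512 bytes */
};

/* CRC-16-CCITT, bit by bit. */
static uint16_t crc16(const uint8_t *p, size_t n)
{
    uint16_t crc = 0xFFFF;
    while (n--) {
        crc ^= (uint16_t)(*p++) << 8;
        for (int i = 0; i < 8; i++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}

/* Serial-number arithmetic: 'a' is newer than 'b' even across a wrap,
 * provided the two versions are less than 2^15 apart. */
static int newer(uint16_t a, uint16_t b)
{
    return (int16_t)(a - b) > 0;
}

/* Scan all slots; skip any whose CRC does not match (an interrupted
 * write), and return the index of the newest valid slot, or -1. */
int newest_valid_slot(const struct record slot[NSLOTS])
{
    int best = -1;
    for (int i = 0; i < NSLOTS; i++) {
        if (crc16(slot[i].payload, sizeof slot[i].payload) != slot[i].crc)
            continue;
        if (best < 0 || newer(slot[i].version, slot[best].version))
            best = i;
    }
    return best;
}
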
Writing one sector (512 bytes), with say 12 bytes for recovery information like the CRC, version stamp and so on, will still give you
about 250 16-bit values or 4000 digital values (I don't intend this as a limit, just an example for the calculations). This should be
quite fast and probably cover most applications needing to store anything!
Can someone fill in the expected maximum time to write this amount to a hard drive under Linux?
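
As a sketch of that sector layout and of the write path (again with my own field names; pwrite() and fsync() are standard POSIX calls):

#include <stdint.h>
#include <unistd.h>

/* 512-byte sector: 12 bytes of recovery information, 500 bytes of data,
 * i.e. 250 16-bit values (or 4000 single bits), as calculated above. */
struct sector {
    uint32_t version;    /* version stamp; compare wrap-safely as sketched above */
    uint32_t crc;        /* CRC over values[]; computed and stored last */
    uint32_t reserved;   /* spare, rounds the header up to 12 bytes */
    int16_t  values[250];
};                       /* sizeof(struct sector) == 512 on typical ABIs */

/* Write one snapshot into its round-robin slot and push it to the disk. */
int save_sector(int fd, const struct sector *s, unsigned nslots)
{
    off_t off = (off_t)(s->version % nslots) * sizeof *s;

    if (pwrite(fd, s, sizeof *s, off) != (ssize_t)sizeof *s)
        return -1;
    return fsync(fd);
}

Note that the fsync() is what actually bounds the data-loss window (without it the sector can sit in the page cache indefinitely), so it is also the fsync() that will dominate whatever per-save time the measurement shows.
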
<clip>
----------------------------------------
P&L, the Academy of Automation
Box 252, S-281 23 Hässleholm SWEDEN
Tel: +46 451 49 460, Fax: +46 451 89 833
E-mail: [email protected]
Internet: http://www.pol.se/
----------------------------------------
_______________________________________________
LinuxPLC mailing list
[email protected]
http://linuxplc.org/mailman/listinfo/linuxplc