Scratch variables revisited



Mario de Sousa

Hi Jiri,

Let me just re-send a previous email of mine to try and bring the issue of arrays of linuxplc points back onto the drawing board. It seems we need to get this sorted out quickly so David can go do his thing on the IEC compiler...

Cheers,

Mario

*************** old email *******************

Jiri Baum wrote:
> (... snipped a little here ...)
>
> > I'm not sure to what extent this actually matters to the smm/gmm.
> ...
> > Yes, it can handle both situations, but I think it needs to be optimised
> > for either one. Currently each named linuxplc point takes up 48+36 bytes
> > of configuration memory, and 1 to 32 _bits_ of user memory. This ratio
> > is much too big if we are going to have every scratch variable as a named
> > linuxplc point.
>
> Actually, this ratio is probably much too big in any case... I haven't been
> looking at the confmap in much detail, I think you still know it better
> than I do; is there any easy way to reduce this?
>
> For instance, could all the point owners be listed in one place, with the
> point table just having one-byte references into the list?
>
> With some care, this and the bit & length fields might be squashed into
> sixteen bits... With bit and length having a total of 529 combinations,
> fitting into 10 bits, this would limit us to 64 distinct owners; is that
> enough?

We can probably optimize it quite a bit, but I think the most difficult part will be squashing the name of the point itself (currently 32 bytes, for a maximum name length of 31 chars). If we keep this limit, it will probably be difficult to go any lower than, say, 40 bytes per point. Remember we still need the byte-offset:bit-offset:size:owner_ptr fields, which should take up 4 bytes if we consider a maximum size of 64*4 Kbytes for the global memory (this is using 2 bytes for the byte-offset, 5 bits for the bit-offset, 5 bits for the size, and 6 bits for the owner_ptr, with a maximum of 64 distinct owners).
It probably won't be easy to reach those 40 bytes if we keep the current architecture. Currently the configuration memory manager (cmm) is used to store both the linuxplc point and the synch point configurations. To do this, the cmm is actually just a list of variable-sized chunks of memory, each chunk with a name (used for the point or synch point name) and a type (to distinguish between points and synch points). The remaining bytes of each chunk are used internally by each library (the synch library, the gmm library, any future libraries...) to store their configuration data. For the gmm this is the byte-offset:bit-offset:size fields, along with the owner.

This architecture requires that each chunk of memory have an overhead of at least 6 bytes to maintain the linked list of cmm memory chunks, plus the size and type fields of each chunk. This brings the total down to 42 bytes per point. The cmm actually works much like a specialised memory allocation function.

I think those 42 bytes are still a little large. Maybe we can go lower by using a variable number of bytes for the name instead of reserving 32 bytes. The name would start at a fixed position in the cmm memory chunk structure and end when a '\0' is encountered. The rest of the info would continue right after the '\0' instead of at the end of the 32-byte char array. This way we can easily support names longer than 31 chars, but the responsibility for using up a large amount of config memory would fall into the hands of the user. If they want a small config memory, then they must use shorter names.

Taking this idea still further, maybe we could use hashing for the names. I have never needed to use hashing myself, so I don't know how safe it is...

> > Maybe we need to come back and consider arrays or structures of linuxplc
> > points?
> Arrays - where they are actual arrays from all points of view, ie all
> elements have exactly the same properties - would definitely be useful
> where they are needed. They probably won't be very hard to implement, but
> they don't really solve the above problem.

They would solve the problem of the iec M memory. We could have a single linuxplc named array-typed point that would hold all the M memory. Remember that the iec M memory (and the others too) is just an array of 16(?) bit integers that can also be accessed as bits. With arrays we can define the whole memory with only one linuxplc point.

Granted, this means that only the iec module would have write access to this memory. That is the problem with arrays that probably needs discussing.

Mario.

--
----------------------------------------------------------------------------
Mario J. R. de Sousa
[email protected]
----------------------------------------------------------------------------

The box said it requires Windows 95 or better, so I installed Linux

_______________________________________________
LinuxPLC mailing list
[email protected]
http://linuxplc.org/mailman/listinfo/linuxplc
 