Framing on non-real-time OSes

Erik (Thread Starter)

I've heard that non-real-time operating systems (e.g. Linux, Windows) don't have 100% compliant implementations of the Modbus standard. Apparently, due to hardware limitations, these OSes can't measure the t1.5 and t3.5 framing times defined by the standard.

Exceeding t3.5 does not matter when sending packets. It does matter, however, for slaves receiving broadcast messages. In theory, a master could broadcast several frames with only t3.5 of silence between them. If a slave can't detect the t3.5 silence between frames, it will treat all the frames as one message. I've been told that the solution is to use a different mechanism based on "implied length".
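For reference, here is roughly how those intervals work out at a given baud rate. This is only a sketch in C (nothing mandates this implementation); it assumes the usual 11-bit RTU character and the fixed values the serial line spec recommends above 19200 baud:

    #include <stdint.h>

    /* Compute the Modbus/RTU inter-character (t1.5) and inter-frame (t3.5)
     * silent intervals, in microseconds, for a given baud rate.  Assumes the
     * standard 11-bit RTU character (start + 8 data + parity + stop).  Above
     * 19200 baud the serial line spec recommends fixed values instead. */
    static void rtu_silence_us(uint32_t baud, uint32_t *t1_5_us, uint32_t *t3_5_us)
    {
        if (baud > 19200) {
            *t1_5_us = 750;                              /* fixed: 750 us  */
            *t3_5_us = 1750;                             /* fixed: 1.75 ms */
        } else {
            uint32_t char_us = (11u * 1000000u) / baud;  /* one character  */
            *t1_5_us = (char_us * 3u) / 2u;              /* 1.5 characters */
            *t3_5_us = (char_us * 7u) / 2u;              /* 3.5 characters */
        }
    }
    /* At 9600 baud a character takes about 1.15 ms, so t1.5 is roughly
     * 1.7 ms and t3.5 roughly 4.0 ms. */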

If master nodes in the real world use a much longer time between frames, is that documented anywhere? Is it t10, t100, t1000? Our communication board will be performing some web server functions, which may occupy the processor for long periods of time. I want to make sure that, between the Modbus driver code and our higher-level software, we can meet whatever the unofficial standard is. I also need to make sure that our final test procedure doesn't impose unrealistically tight (or loose) timing requirements on our final system.
 
When you are talking about Modbus, you have to keep in mind that there are several different forms of it. The only version to which the timing applies is Modbus/RTU. It doesn't apply to Modbus ASCII or Modbus/TCP.

I believe the timing limits were intended to simplify the implementation of Modbus/RTU on typical simple (8-bit) serial "slaves" (servers). Many other protocols, on the other hand, need special hardware assistance to work properly.

At least some serial masters allow you to set the minimum "silent" period used. That will be product and installation specific though. You would need to tell the customer to set the minimum silent period to be greater than the longest period for which your device is unavailable. The problem with this is that it slows down communications for every message all the time, which may be unacceptable in some applications.
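To illustrate the master side of that, here is a minimal sketch of enforcing a configurable minimum silent period between frames. The millis() and uart_send() routines are hypothetical placeholders, not any particular product's API:

    #include <stdint.h>

    /* Hypothetical helpers: a monotonic millisecond counter and a blocking
     * UART transmit routine. */
    extern uint32_t millis(void);
    extern void uart_send(const uint8_t *frame, uint16_t len);

    /* Configure this to be longer than the longest period for which the
     * slowest slave on the line may be "deaf". */
    static uint32_t min_gap_ms = 50;
    static uint32_t last_tx_end_ms;

    void master_send_frame(const uint8_t *frame, uint16_t len)
    {
        /* Wait until the configured silent period has elapsed since the end
         * of the previous frame (unsigned subtraction is wrap-safe). */
        while ((uint32_t)(millis() - last_tx_end_ms) < min_gap_ms)
            ;
        uart_send(frame, len);
        last_tx_end_ms = millis();
    }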

Another approach is to simply try decoding the received message. If everything about the message matches what is supposed to be there, then you should have a complete message. Each Modbus message includes information on how long it is expected to be (number of registers, number of coils, etc.).
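As a sketch of what that "implied length" check can look like for a slave receiving RTU requests (only the common public function codes are shown; a real driver would also handle whatever vendor-specific codes it supports):

    #include <stdint.h>

    /* Given the bytes of a request received so far, return the expected total
     * frame length including the 2-byte CRC, or 0 if more bytes are needed
     * before the length can be determined. */
    static uint16_t rtu_expected_request_len(const uint8_t *buf, uint16_t have)
    {
        if (have < 2)
            return 0;                      /* need address + function code */

        switch (buf[1]) {                  /* function code */
        case 0x01: case 0x02: case 0x03:   /* read coils / inputs / registers */
        case 0x04: case 0x05: case 0x06:   /* write single coil / register */
            return 8;                      /* addr + fc + 4 data + 2 CRC */
        case 0x0F: case 0x10:              /* write multiple coils / registers */
            if (have < 7)
                return 0;                  /* byte count not received yet */
            return 9 + buf[6];             /* addr+fc+addr(2)+qty(2)+count+data+CRC */
        default:
            return 0;                      /* unknown: fall back to t3.5 framing */
        }
    }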

However, this assumes you were able to buffer the serial input even if you weren't able to process it. If you can't buffer enough characters while your system is "deaf", then you are going to lose messages and there is nothing that any protocol can do about that.
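Buffering while the processor is busy usually just means an interrupt-driven receive buffer that the protocol code drains whenever it next gets CPU time. A rough sketch (uart_rx_byte() is a placeholder for reading the received character from your UART hardware):

    #include <stdint.h>

    #define RX_BUF_SIZE 256                /* size for the longest burst expected */

    extern uint8_t uart_rx_byte(void);     /* hypothetical UART register read */

    static volatile uint8_t  rx_buf[RX_BUF_SIZE];
    static volatile uint16_t rx_head, rx_tail;

    void uart_rx_isr(void)                 /* called from the UART receive interrupt */
    {
        uint16_t next = (rx_head + 1) % RX_BUF_SIZE;
        uint8_t  c = uart_rx_byte();
        if (next != rx_tail) {             /* drop the byte if the buffer is full */
            rx_buf[rx_head] = c;
            rx_head = next;
        }
    }

    int uart_rx_get(uint8_t *c)            /* returns 1 if a byte was available */
    {
        if (rx_tail == rx_head)
            return 0;
        *c = rx_buf[rx_tail];
        rx_tail = (rx_tail + 1) % RX_BUF_SIZE;
        return 1;
    }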

Furthermore, I'm not sure what the real-time or non-real-time status of MS Windows or Linux has to do with this. Yes, MS Windows is not real-time, it only runs on PC hardware, and it is very limited in where you can apply it.

Linux runs on pretty much every 32-bit embedded platform in existence (even ones without memory managers), and it does have real-time versions. If this has some relevance to your board, then there is a pretty good chance that real-time Linux will run on it while providing the TCP/IP stack and embedded web server.
 