Deleting the TCP Buffer?

Thread Starter

bierbauch

Hi there,

I'm trying to implement a Modbus TCP server on a Linux machine. I have already implemented the connection handling (socket, bind, accept) and it works. But I have problems with flushing the TCP buffer. For example, my slave gets a message (with MBAP header):

00 01 00 00 00 06 02 03 00 00 00 01

My slave is configured as slave #1. So after receiving

00 01 00 00 00 06 02

it knows that this message isn't meant for it, so it should throw the complete message away, including the remaining bytes:

03 00 00 00 01

How can I do that? I need a concrete Unix/Linux function, macro, ...

I already tried:

tcflush( sfd, TCIOFLUSH );

Unfortunately, that does not work.
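
(As far as I can tell from the man pages, tcflush() is meant for terminal devices, so it presumably just fails on a socket. The only workaround I can think of is reading the unwanted bytes and throwing them away myself, roughly like this - just a sketch, with sfd as the accepted socket:)

#include <sys/socket.h>

/* Drain whatever is currently readable on the socket by reading
 * into a scratch buffer and discarding it. MSG_DONTWAIT makes
 * recv() return immediately once the buffer is empty. */
static void drain_socket(int sfd)
{
    char scratch[256];

    while (recv(sfd, scratch, sizeof(scratch), MSG_DONTWAIT) > 0)
        ;   /* discard */
}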

Thanks for helping...
 
Hello,

Are you using TCP_NODELAY?
(See the man pages for tcp(7) and setsockopt(2).)

You might also want to set IP_TOS to an appropriate value (see ip(7)), but it's probably neither very important nor very useful.
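
Untested sketch, with sfd as your accepted socket:

#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

/* Disable Nagle's algorithm so small Modbus replies are sent
 * immediately instead of being held back for coalescing. */
int one = 1;
if (setsockopt(sfd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one)) < 0)
    perror("setsockopt(TCP_NODELAY)");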

BTW, any reason you're implementing your own modbus server, rather than using one of the several already out there? (We have one in the MatPLC project, for instance...)

Jiri
--
Jiri Baum <[email protected]> http://www.baum.com.au/~jiri
MAT LinuxPLC project --- http://mat.sf.net --- Machine Automation Tools
 
Hi,

I managed to solve this problem myself:

with

int MessageLength = recv(fd, buffer, 262, 0);

the program gets the complete message; recv() returns the number of bytes received.

(262 is the maximum length of a Modbus message: 256 bytes plus 6 bytes for the MBAP header.)
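
For completeness, my receive path now looks roughly like this (a sketch; MY_UNIT_ID is just a placeholder for the configured slave number):

#include <sys/socket.h>

unsigned char buffer[262];
int MessageLength = recv(fd, buffer, sizeof(buffer), 0);

if (MessageLength <= 0) {
    /* 0 = connection closed by the master, < 0 = error */
} else if (buffer[6] != MY_UNIT_ID) {
    /* byte 6 of the MBAP frame is the unit identifier; the
     * whole request has already been consumed by recv(), so
     * there is nothing left to flush */
} else {
    /* process the request and send the response */
}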

But there is another problem:

For testing my Modbus slave on the Linux machine, I use Modbus Poll 3.7.2 from Witte. I'm tuning the parameters (Response Timeout, Delay Between Polls, Scan Rate) to get the best result. Remarkably, about 6 per mille of the polls result in errors (with Response Timeout = 50 ms, Delay Between Polls = 20 ms and Scan Rate = 20 ms). The error rate can be minimized by choosing Response Timeout = 600 ms. Is that really OK? It means that some responses need nearly 600 ms to reach the master...
The two systems are connected with two Ethernet lines and one switch (very little traffic). The Linux machine has a 10 MBit Ethernet adapter, the Windows PC a 100 MBit one.

Is there a reason for such a slow connection? With Modbus over a serial line (baud rate 19200), everything works fine. But I thought Modbus TCP was about ten times faster than Modbus over a serial line.

Thank you very much for helping :)
 
Friedrich Haase

Moin Mr. bierbauch,
moin all,

Flushing the TCP buffer? Why? Aren't you making your job more difficult than necessary? Just receive the entire message and forget it.

One more idea: TCP is allowed to concatenate several requests into the same segment, or to split one request into parts that arrive in different segments. I have never seen that with MODBUS/TCP, but it is possible. Just flushing the buffer could remove the wrong amount, either too little or too much.
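
A rough sketch of what I mean, reading exactly one request at a time by honouring the length field in the MBAP header (untested; the frame buffer must hold at least 260 bytes):

#include <sys/types.h>
#include <sys/socket.h>

/* Read exactly n bytes, looping over short reads.
 * Returns 0 on success, -1 on error or connection close. */
static int recv_all(int fd, unsigned char *buf, size_t n)
{
    while (n > 0) {
        ssize_t r = recv(fd, buf, n, 0);
        if (r <= 0)
            return -1;
        buf += r;
        n -= (size_t)r;
    }
    return 0;
}

/* Read one complete MODBUS/TCP frame: the 7-byte MBAP header
 * first, then as many bytes as its length field announces.
 * The length field (bytes 4-5, big endian) counts everything
 * after itself, including the unit identifier byte. */
static int recv_frame(int fd, unsigned char *frame)
{
    if (recv_all(fd, frame, 7) < 0)
        return -1;

    size_t len = ((size_t)frame[4] << 8) | frame[5];
    if (len < 1 || len > 254)
        return -1;   /* malformed or oversized frame */

    /* the unit identifier was already read with the header */
    return recv_all(fd, frame + 7, len - 1);
}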

regards
Friedrich Haase

Ing.-Büro Dr. Friedrich Haase
Consulting - Automatisierungstechnik
email [email protected]
WEB http://www.61131.com
 
Michael Griffin

In reply to bierbauch - I'm not a Modbus expert, but I can address a few points about timing in general. You have two PCs; your slave uses a Linux OS with your own Modbus software, and the master uses a Windows OS with the "Witte" Modbus master test software.

I don't know how your slave software is written, but I assume that you conduct one poll and then "sleep" for some period of time (you haven't said how long this is). The master is polling at a 20 ms rate, with a response time-out of 50 ms. You therefore have two PCs which are only polling intermittently.

Windows has a minimum scheduling period of 10 ms. That is, the shortest time that a process can "sleep" (be idle to allow other programs to run) is 10 ms. This is a *best* case assumption. In practice, a process will often be randomly delayed by 50 to 200 ms.

With Linux, 2.4 had a similar (10 ms) scheduling period. When 2.6 was introduced a couple of years ago, this was changed to 1 ms. You can of course change the scheduling period in Linux to whatever you want, or even select a different scheduling algorithm, but these are the defaults. Linux appears to be much more consistent than Windows, but it (or at least the standard versions) still isn't a real-time operating system.

What this means is that your Modbus master program (on the Windows PC) will run at best every 10 ms, and often at random longer intervals. The Modbus slave (your program on the Linux PC) will run at best every 1 ms (or 10 ms if it is using an older Linux version). The inherent OS delays of both PCs add together.

Given the above, a scan interval of 20 ms is not unexpected. Also, the Modbus master software could easily be delayed by Windows, on occasion, by several times the 50 ms time-out you wished to use.

The real limitations likely have nothing to do with communication speed, but rather with how quickly a user program running on a general-purpose PC can respond to and turn around a message. User programs are only given small time slices at intervals, to prevent them from loading down the PC. Since the program does relatively little work before yielding each time it is scheduled, the limitation is how quickly it gets another turn to run after it yields (or is forced to yield) to the scheduler.

We had a discussion on OS timing effects earlier this year under the subject "PC: Ways to do machine control under Windows". You may wish to refer to this for details on some simple experiments that I conducted at that time.

You may wish to try using a Linux PC at *both* ends, since it appears that Linux can give you a faster and more consistent response. If you want better than 1 ms, you can reconfigure the kernel accordingly. A shorter scheduling interval will result in more of the CPU being used on OS overhead, but that probably isn't a concern for a dedicated application.
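
If you wish to observe this yourself, a quick experiment on the Linux side is to request a 1 ms sleep in a loop and record how late the process actually wakes up (my own sketch, not from the earlier thread):

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec req = { 0, 1000000 };   /* ask for 1 ms */
    struct timespec t0, t1;
    double worst_ms = 0.0;
    int i;

    for (i = 0; i < 1000; i++) {
        clock_gettime(CLOCK_MONOTONIC, &t0);
        nanosleep(&req, NULL);
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double elapsed_ms = (t1.tv_sec - t0.tv_sec) * 1000.0
                          + (t1.tv_nsec - t0.tv_nsec) / 1e6;
        if (elapsed_ms > worst_ms)
            worst_ms = elapsed_ms;
    }

    /* the difference between worst_ms and 1 ms is the
     * scheduling jitter discussed above */
    printf("worst wake-up: %.3f ms for a 1 ms sleep\n", worst_ms);
    return 0;
}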
 
Lynn A Linse

Just read the full message out of the buffer, but ignore it if you like - you are creating a huge problem for yourself if you accomplish a flush. 99% of the Modbus/TCP masters in the world will PIPELINE requests, which means you could have two or more Modbus/TCP requests waiting in your TCP buffer. You may even have partial new requests building up in your TCP buffer. If you flush, you risk discarding those partial new requests. Is this within the spec? Debatable - but it's reality.

You need to faithfully pull all bytes out of your TCP buffer, message by message, or else close the socket. You cannot flush, or you risk losing "sync" with the MB/TCP message stream.

Also, you need to ANSWER all requests in Modbus/TCP, or you risk problems with existing masters. If you really don't want to answer a request to Unit Id 2, then you should return exception code 0x0A - basically "no such slave", or "no route to such a slave" in a bridge. Most mature master applications will close the socket if you don't answer a request, while immature ones could get hopelessly confused.
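
For illustration, such an exception reply can be built by echoing the first MBAP fields of the request and appending the error code (my own sketch; req points at the received frame):

#include <string.h>
#include <sys/socket.h>

static void send_exception_0x0a(int fd, const unsigned char *req)
{
    unsigned char resp[9];

    memcpy(resp, req, 4);      /* echo transaction id + protocol id */
    resp[4] = 0x00;            /* length field, high byte */
    resp[5] = 0x03;            /* length: unit id + 2 PDU bytes */
    resp[6] = req[6];          /* echo the unit identifier */
    resp[7] = req[7] | 0x80;   /* function code with the error bit set */
    resp[8] = 0x0A;            /* exception: gateway path unavailable */

    send(fd, resp, sizeof(resp), 0);
}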

Modbus/TCP is *NOT* Modbus/RTU.

- LynnL, www.digi.com
 
Hi, and thanks for all the great, helpful answers.

I now understand how to do it without flushing the TCP buffer.

But there is still a problem. From time to time, in about 1% of cases, an error occurs: in these cases the MODBUS server answers only after about 250 ms (normally it needs about 10 ms). I configured the client to wait 50 ms, so a timeout occurs. The next TCP message then contains three MODBUS requests (from the client).

Does somebody know why the server occasionally needs such a long time to respond? And how do I have to handle a TCP message containing three MODBUS requests?

Thanks a lot for helping.
 
Hello Dr. Friedrich Haase

Is it possible to write you an email (in German)? For me, it's much easier...

Thank you very much
 