Ultra High Speed Datalogging Into A DB

Thread Starter

Rayno Powell

I have a server platform running Wonderware InSQL, SQL 2005, TwinCAT SoftPLC, and W2K3 R2 SP1.
We are in the nuclear industry and have to log info at very high speed: every 3 to 5 milliseconds.

In my C# application:
Local variables hold the 17 digitals (discretes) and 24 reals (floating points). A timer ticks every 3 ms (configurable) to read the data out of the PLC, then calls the storit function (standard in the InSQL SDK) to store it into InSQL. However, InSQL stores the data at intervals of anywhere from 13 to 17 ms, even if I take the timer value up to 20 ms. I know for a fact that the PLC comms cannot be what slows this down, given the speed at which the PLC's API delivers; it has to be on the InSQL side.

Also, I get the info from the PLC as a binary stream, but am forced, due to SDK restrictions, to store the values one variable at a time to InSQL.

Has anybody done this before and wouldn't mind sharing?

Kind Regards,
Rayno
[email protected]
 
Curt Wuollet

I haven't done quite that, but I would expect you might have better luck logging with the PC directly and either shipping the values to the PLC or having it see the data in parallel. "Very fast" and "PLC" are almost opposites.

Regards

cww
 
Michael Griffin

We have discussed program timing experiments with various operating systems including MS-Windows before. You might search the archives if you are interested in the details. But briefly, if you are relying on standard MS-Windows timing capabilities you are probably wasting your time.

The standard MS-Windows NT (NT 4 to Vista) task scheduler uses a 10 to 15 ms timer "tick". That is, the timing resolution is between 10 and 15 ms (the actual value depends upon which version of MS-Windows you are using). This matches with the 13 to 17 ms you are seeing. Just because you can put any number you want into a "sleep" call doesn't mean that you are going to get exactly that time as a result. InSQL can only run when given a time slice by the scheduler.

Our experiments also showed that although these might be "typical" values, there were occasional timing "hiccups" of up to several hundred milliseconds. These seemed to be randomly generated, and did not depend upon any discernible outside influence (i.e. there is nothing you can do to avoid them). I think you will find that the 13 to 17 ms you are seeing are the best case results. If you analyse a large enough sample you probably will see similar relatively large "hiccups" in timing.
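The effect described above is easy to measure yourself. Here is a sketch in Python rather than C# (the scheduler behaviour, not the language, is what matters): request a short sleep in a loop and compare the requested interval against what the OS actually delivers.

```python
import time

# Request a nominal 3 ms sleep repeatedly and measure the actual
# interval. On a stock Windows NT-family kernel the scheduler tick
# is roughly 10-15 ms, so observed intervals will be much larger
# than requested; on other systems the overshoot is smaller.
REQUESTED_MS = 3
samples = []
prev = time.perf_counter()
for _ in range(200):
    time.sleep(REQUESTED_MS / 1000.0)
    now = time.perf_counter()
    samples.append((now - prev) * 1000.0)  # elapsed time in ms
    prev = now

print("requested:     %d ms" % REQUESTED_MS)
print("mean observed: %.2f ms" % (sum(samples) / len(samples)))
print("worst case:    %.2f ms" % max(samples))
```

The worst-case figure is the one to watch: the occasional large "hiccup" shows up there long before it affects the mean.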

You are using C#, which is going to add more timing delays of its own into the problem. Every time the garbage collector runs your program will come to a halt for some undefined period of time.

Our experiments with Linux showed timing resolution of 1 to 4 ms (depending upon the version) for a standard kernel using the standard scheduler. You can of course configure your kernel to whatever you want, and also have the choice of using several other scheduling algorithms. The timing with Linux was also much more consistent than with MS-Windows, without the large delays (there were some delays, but much much smaller).

Some soft logic PLCs actually run on an RTOS. MS-Windows is then run as a "guest" operating system on top of the RTOS. I am not familiar enough with TwinCAT to say if this is the case with that product. If TwinCAT is using an RTOS and the RTOS is capable of the necessary timing resolution, then you need some way to buffer enough data for the worst case timing before passing it to the MS-Windows OS. You'll also need to do the time stamps in TwinCAT since the MS-Windows time stamp will be useless.

I am not familiar with your application, but if the TwinCAT soft logic system is just being used to collect data from another source instead of actually controlling anything critical you might want to reconsider how you are doing things. You seem to have quite a bit running on a single PC for a difficult application. You could run a data collection application on an RTOS (or real time version of Linux) on one PC and network it to your Wonderware application on another PC. The data collection application could buffer and time stamp the data, while Wonderware could accept the data at its own pace.
 
Hi,

I am surprised you have a PLC bus with the ability to deliver values at such a high rate...

How many numerical values do you want to historize?

What is the link used between the PLC & server/bridge?

OTT
 
Hello,

I haven't ever done this exactly, but I have some background in the CS end of things that might be of some use to you.

First of all, you should probably look into how accurate your computer's timer is. I'm sure C# will let you specify that you _want_ your timer to run at 3 ms, but you should verify that it actually runs at that speed.

Assuming that you cross that hurdle, you have an architectural problem as well. Making a separate call to store data in a database every 3ms isn't realistic - the database can't respond that quickly, as it has lots of processing to do to ensure data reliability and consistency. What you probably want to do is have your 3ms timer store data to an array in the computer's memory (very quick, so that your timer can return quickly to process the next bit of data). Then run a second process, much slower (every second or so?), to take the data from that array and insert it into your database as a chunk.
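A minimal sketch of that architecture, in Python for brevity (the names `sampler` and `writer` are illustrative, not from any SDK): a fast producer queues timestamped samples in memory, and a slower consumer drains them in chunks, standing in for one bulk database insert per batch instead of one insert per sample.

```python
import queue
import threading
import time

buf = queue.Queue()  # in-memory buffer between the two rates
stored = []          # stands in for the database table

def sampler(n_samples):
    # Fast side: just timestamp and enqueue; returns immediately.
    # A real sampler would tick every few milliseconds.
    for i in range(n_samples):
        buf.put((time.perf_counter(), i))

def writer(stop):
    # Slow side: drain the queue in batches at its own pace.
    while not (stop.is_set() and buf.empty()):
        batch = []
        while not buf.empty() and len(batch) < 500:
            batch.append(buf.get())
        if batch:
            stored.extend(batch)   # one bulk insert per batch
        time.sleep(0.01)           # writer runs much slower

stop = threading.Event()
t = threading.Thread(target=writer, args=(stop,))
t.start()
sampler(2000)
stop.set()
t.join()
print(len(stored))  # every sample reaches "the database"
```

The key property is that the sampler never waits on the database; the queue absorbs the rate mismatch, and nothing is lost as long as the writer's average throughput keeps up.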

If you are really getting the data as a stream, that's great - it's already being stored in memory as the computer receives the stream. All you need to do is build up more of a buffer before you send it to the database.

Lastly, if you are finding InSQL's API limiting, consider ditching it - RDBMS systems are commonplace.

Good Luck,

Carl Gould
http://www.inductiveautomation.com
 
William Sturm

Could you log the data into a C# memory stream in "real time" and then load it into the SQL database at a later time?
 
Rayno Powell

Hi Guys,

Thanks for all the info! Yes, the Beckhoff PLC range has terminals capable of getting digital values into the soft PLC at 10 microseconds and analogs at 50 microseconds; it's called EtherCAT. And yes, TwinCAT runs in ring 0, so it decides when to free the system ticks for Windows to operate.

What's funny, though, is that in my C# program I inserted an extra timer cycling at 5 ms to get the data out of the PLC. Out of 1000 or so cycles, about 1 cycle exceeds the 1 ms cycle time by at least an extra 15 to 20 ms (sometimes much more), while the other 999 cycles stay within the millisecond. If I query the SQL DB, it looks like the app slows down a bit. As for my previously mentioned 3 ms timer: if I set it to 30 ms, the data in SQL lands almost exactly on the 30 ms stamps; only every 10th or so value is offset by an additional 3 to 10 ms. But still, if I cycle that timer at 10 ms, the time stamps on average differ by 13 to 17 ms.

I have found, though, that the InSQL part of my C# app very seldom crosses the 1 ms line. But there is a difference: in the TwinCAT part I have to rely on the .NET Convert.ToDouble() call to convert my data to the format InSQL needs. This causes that part of my program to take its time, and causes the above-mentioned 1-in-1000 error ratio.

I think I should rather split the system as Michael mentioned: have one box doing only the PLC control and another doing only the InSQL. Then also, what do you guys think of programming this in Borland C++, which will allow me to import the .NET DLL of InSQL? And if this fails, keep the hardware setup but implement the array buffering. Also, rather have TwinCAT do the physical time stamping of the data, and just read the info out into my app and pump it into InSQL as fast as InSQL can take it.

Question 1: Does .NET cascade the timers you put in? In other words, if you have two timers and both need to tick, will the first logically scanned one tick, execute, and finish before the second one is allowed to do the same?

Question 2: How does InSQL then log over 50,000 tags per second on a fairly large server?
(50,000/1,000 = 50 tags/ms.) Is it all in memory and then flushed to disk by a separate task every now and then?
 
Michael Griffin

MS-Windows is not an RTOS. This means there are no guarantees about when something will get scheduled. It appears that the underlying RTOS is giving MS-Windows a "kick" at a fairly rapid rate, which is why you are seeing your C# program scheduled at a faster rate than would be the case if MS-Windows were running on its own.

However, MS-Windows does not necessarily use that opportunity to run your program immediately. It could do something else instead (e.g. some internal housekeeping). That is the difference between an RTOS and a regular operating system. If you need something to get scheduled at a precise interval, or you need a guaranteed response time, then that software needs to be operating directly under the RTOS.

Furthermore, C# is not a real-time language. It runs on an underlying virtual machine which controls execution. The C# VM will halt your program for short periods of time while it does things like garbage collecting memory. This is not like using standard 'C' where you have more control over these things.

You mentioned that when you convert the data to double precision floating point, your C# program slows down dramatically. I would find it surprising that the conversion library would be so slow as to have this make that much difference. Rather, I suspect that calling a standard C# library function acts as a pre-emption point for the dot-Net VM, which allows the dot-Net VM to go off and do other things before returning with your answer.

I can't answer your question on C# timers. However as far as the speed of InSQL is concerned, take any database performance ratings with a very large grain of salt. Normal relational databases are rated in transactions per second (TPS). These tests are always conducted under very unrealistic conditions, and you are lucky if real life performance is within an order of magnitude of the specs.

As for whether a database operates in memory, there are genuine real time databases (used in for example the telecommunications industry) which operate entirely in RAM. However, even normal file I/O on most modern operating systems is not writing directly to the disk. The operating system will use whatever RAM is available to buffer reads and writes. If there is enough RAM, then reading and writing the same records over again repeatedly will operate in memory.
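The gap between per-record calls and buffered bulk writes can be demonstrated with any ordinary database. Here is a hypothetical comparison using in-memory SQLite (not InSQL, and the absolute numbers mean nothing; it only shows why historians batch writes rather than committing one tag at a time):

```python
import sqlite3
import time

N = 50_000
rows = [(i, float(i)) for i in range(N)]

def timed(fn):
    # Return the wall-clock time fn takes to run.
    t0 = time.perf_counter()
    fn()
    return time.perf_counter() - t0

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tags (id INTEGER, value REAL)")

# One API call per sample (what a 3 ms store-per-tick timer does):
one_at_a_time = timed(lambda: [
    con.execute("INSERT INTO tags VALUES (?, ?)", r) for r in rows])

con.execute("DELETE FROM tags")

# One bulk call for the whole buffered chunk:
bulk = timed(lambda: con.executemany(
    "INSERT INTO tags VALUES (?, ?)", rows))

print("row-at-a-time: %.3f s" % one_at_a_time)
print("bulk insert:   %.3f s" % bulk)
```

On most machines the bulk path is several times faster for the same data, because the per-call overhead is paid once per chunk instead of once per sample.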

Database speed (except for special real time databases) however refers to *average* throughput. There is no guarantee how long any individual record will take. The Wonderware InSQL database describes itself as "real time", but take these descriptions with a grain of salt unless you see some hard specs on response time.

The InSQL database seems to be actually two databases operating in parallel. There is a non-relational database for live data, plus an ordinary MS-SQL Server database for lower speed summary data. These operate in parallel, and seem to accept different data. You have to know which sub-system your data is going into before you can tell how fast it can accept it.

Finally as I said previously, I suspect you will need to time stamp and buffer the data in the RTOS until the non-real time software (MS-Windows, C#/dot-Net, database) can accept it. You may need to run the two on separate hardware, if that is possible.
 
Rayno Powell

Hi Guys,

Long time since I last read this page. I have great news: I found a way in the Beckhoff PLC to cycle through 750 analogs and 1000 digitals in 290 microseconds. All processing is done there, and then I use a special API to pull the information out and store it in a database.

Yet again - thank you all for the inputs you made.

Regards,
Rayno
 
<I have great news - I found a way in the Beckhoff PLC to cycle through 750 analogs and 1000 digitals in 290 micro seconds>

Interesting ... but let me calculate. 750 analog values with a size of 32 bits or 4 bytes (correct?) need 3000 bytes or 24000 bits to be stored. 1000 digitals can be stored in 125 bytes or 1000 bits. Altogether we have to transfer 25000 bits per cycle.

With a transmission speed of 100 Mb/s on the wire, the transfer of one bit takes 0.01 us. That means the transfer of 25000 bits then takes 250 us over the wire. We still have to add the time needed by the driver, DMA, the PCI bus and the Ethernet adapter.

However ... how can you process and store 3.125 kBytes into a database in less than 40 us using a PC?
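The back-of-envelope arithmetic above, restated as a quick check (pure calculation, no hardware involved; the constants come straight from the post):

```python
# Wire-time estimate for one cycle's payload over 100 Mb/s Ethernet.
ANALOGS, ANALOG_BITS = 750, 32      # 32-bit reals, 4 bytes each
DIGITALS, DIGITAL_BITS = 1000, 1    # 1 bit per discrete
LINK_BPS = 100_000_000              # 100 Mb/s link speed

payload_bits = ANALOGS * ANALOG_BITS + DIGITALS * DIGITAL_BITS
wire_time_us = payload_bits / LINK_BPS * 1e6

print(payload_bits)   # 25000 bits per cycle
print(wire_time_us)   # 250.0 us, before driver/DMA/PCI overhead
```

So raw wire time alone consumes 250 of the claimed 290 us, which is the point of the question: the remaining 40 us leaves no room for the PC-side processing and storage.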

Best Regards
Armin Steinhoff
 