Labview & Linux


Thread Starter

Petr Baum

Hi All,

During a recent project we developed a fairly complex control program using Labview 6i running under Microsoft Windows NT 4. The application maintains 8 TCP/IP connections (configured as clients) with continuous two-way communication, plus an extensive GUI display which is also updated continuously. It runs on a Pentium III (550 MHz) machine. During development we ran into rather serious problems with the performance of the application and with the general stability of the system. When checked, memory usage remains static, while CPU usage cycles between 2% and 40%.

In an attempt to compare operating systems, we tried running this application, using Labview for Linux, under Red Hat Linux 6.1. The code is identical on both platforms. Running under Linux, memory usage also remains static, but CPU usage is only about 1% - 2%. So far there have been no problems with stability.

Both operating systems were configured as standard workstations. Memory and CPU were monitored using the standard task managers supplied with each platform. Tests were conducted as part of the development cycle, not in the operating environment.

We are stunned by the difference in CPU usage: if our results are correct, they could easily explain the difficulties under NT. They would also indicate that under Linux we do not have to worry about streamlining the Labview code; we could add roughly ten times more similar functionality to the application instead... It really sounds too good to be true. Are we missing something? Are there better ways to compare the two operating systems with Labview? Any other comments, please?

Petr Baum [email protected]
Kevin Lodewyks [email protected]
---------------------------------------------------------------
Niksar Pty Ltd
Unit 135/45 Gilby Rd, Mount Waverley, 3149
Phone: +61-3-9558 9924 Fax: +61-3-9558 9927
====================================
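[As a side note for readers: the communication pattern described above, many client connections serviced continuously, need not cost much CPU if the process blocks waiting for socket events rather than polling. The following is a minimal sketch in plain Python (not Labview), with a hypothetical line-based "status?" protocol and a local echo server standing in for the real equipment.]

```python
import selectors
import socket
import threading

def start_echo_server(host="127.0.0.1"):
    # Stand-in for the remote equipment: echoes each message back.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, 0))          # port 0: let the OS pick a free port
    srv.listen(8)

    def handle(conn):
        with conn:
            while data := conn.recv(1024):
                conn.sendall(data)

    def serve():
        while True:
            try:
                conn, _ = srv.accept()
            except OSError:      # listening socket closed; shut down
                return
            threading.Thread(target=handle, args=(conn,), daemon=True).start()

    threading.Thread(target=serve, daemon=True).start()
    return srv.getsockname()

def poll_clients(address, n_clients=8, n_rounds=3):
    # One select loop services all client sockets; between events the
    # process sleeps in the kernel instead of burning CPU time.
    sel = selectors.DefaultSelector()
    socks = [socket.create_connection(address) for _ in range(n_clients)]
    for s in socks:
        sel.register(s, selectors.EVENT_READ)

    replies = []
    for _ in range(n_rounds):
        for s in socks:
            s.sendall(b"status?\n")      # hypothetical request message
        pending = n_clients
        while pending:
            for key, _ in sel.select(timeout=5):
                replies.append(key.fileobj.recv(1024))
                pending -= 1
    for s in socks:
        s.close()
    return replies
```

A run of `poll_clients(start_echo_server())` exchanges one message per connection per round; how closely this matches the Labview application's internals is of course an assumption.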

Curt Wuollet

Hi Petr,

Welcome to those who use Linux for the right reasons. Here's another great way to compare them that relates directly to critical control work: simply set them up side by side on Friday and go home. On Monday, see which one is still running. This can be made more interesting by the placement of wagers. Or you can have someone call you repeatedly and angrily when one goes down, to simulate actual usage.

You aren't missing anything, especially when you consider that the app was written for Windows and ported to Linux. If it were written natively for Linux you could expect even better performance. I design and build ATE on Linux that is absolutely unsupportable under Windows. The difference is striking and very, very difficult to explain to customers; they think you're sabotaging NT to sell Linux. I suspect this is why there is a Linux version of Labview: no product can look very good if the platform isn't stable.

There are a lot of people here who will tell you that if _they_ were running the NT it would be fine. I use Red Hat exactly as installed, and my phone doesn't ring.

Regards
cww

Michael Griffin

I'm not really the best person to answer this, but since you haven't had a more serious answer yet, I'll take a crack at it.

I wouldn't trust the standard CPU monitor statistics as a true indication of overall system load. They are really only intended as a rough indicator of how hard your computer is working, and the two monitors are likely not measuring the same thing in the same way. What you really have to address is two different things.

1) Real-time performance. Do you need this? If so, keep in mind that an OS can be designed for better *average* performance by sacrificing real-time characteristics. If you need real-time performance (and this often comes up unexpectedly with test equipment), then any consideration of average performance is moot. Windows NT is definitely not real time; some versions of Linux may be better in this regard. It is also possible to make a computer "feel" faster to an interactive user by giving tasks associated with user interaction higher priority while temporarily starving background or less visible tasks of CPU time. A more responsive design may be better for, say, CAD work, while a more consistent design may be better for test equipment.

2) Average performance. If you can tolerate significant fluctuations in system response, a system may be able to give better average throughput even though it is subject to greater short-term fluctuation.

Given the above, the best method I am aware of is to load down the system with your actual application and measure the performance externally. You may need to artificially increase the load to get a measurable response. For example, you could modify your application to perform a task multiple times and then measure the time required to complete it. Alternatively, if the hardware is suitable, you could measure the time to complete one test (or even specific parts of the test) accurately and precisely.
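[Side note for readers: the repeat-and-divide idea above can be sketched in a few lines. Python is used here purely for illustration; the structure carries over to any environment. Taking the minimum of several trials reduces the influence of the short-term fluctuations just discussed, since the fastest trial is the one least disturbed by background load.]

```python
import time

def time_per_iteration(task, repeats=1000):
    # Run the task many times so the total elapsed time is large
    # compared with the timer's resolution, then divide.
    start = time.perf_counter()
    for _ in range(repeats):
        task()
    return (time.perf_counter() - start) / repeats

def stable_estimate(task, repeats=1000, trials=5):
    # The minimum over several trials is the figure least affected
    # by other processes stealing CPU time mid-measurement.
    return min(time_per_iteration(task, repeats) for _ in range(trials))
```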
For example, in a complex application I worked on a while ago, I hooked an oscilloscope up to a digital output card and measured the time to complete each part of the test. I turned on a spare output as a flag to tell me when a particular part of the test began and ended. I had previously measured the extra time that setting the output took, and subtracted this from the results. I had some very difficult cycle-time targets to meet, and so needed detailed, reliable benchmarks to direct my optimisation efforts.

Sophisticated software development systems have code profilers which can do this automatically, but I'm not aware of one for Labview. A profiler itself can also disrupt critical timing, especially if your program has to react to outside events, which means that profiler results need to be taken with a grain of salt. I would suggest considering the oscilloscope method: a bit of genuine data is worth a thousand speculations or opinions.

If you decide to test your system using any of the above ideas, I think quite a few of us would like to hear the results. It would also be very interesting to hear about any measured differences in reliability (crashing or hanging) if you can find some way to test that. With test equipment (which I assume is what you are building), reliability is far more important than small differences in speed.

What I found particularly interesting, though, is that you ported your Labview application from Windows NT directly to Linux without any problems. This is very impressive and interesting in its own right. I can see situations where someone wants to develop or modify a Labview program on a desktop computer running Windows NT (because of corporate standards or other special software) while using Linux on the final target machine (which may be in service and so not available for development work).

I'm not an expert on Windows or Linux, but I hope the preceding has been of some help.
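[Side note for readers: the calibrate-then-subtract step in the oscilloscope method generalises beyond scope work. A hypothetical software analogue, in plain Python with the flag operations left as stand-ins for whatever digital-output call is actually used, might look like this.]

```python
import time

def calibrate_overhead(set_flag, clear_flag, samples=10000):
    # Time the flag operations alone, so their cost can later be
    # subtracted from the raw section measurements.
    start = time.perf_counter()
    for _ in range(samples):
        set_flag()
        clear_flag()
    return (time.perf_counter() - start) / samples

def timed_section(set_flag, clear_flag, section, overhead):
    # Bracket the section with the flag operations (the scope would
    # trigger on these edges) and subtract their measured cost.
    start = time.perf_counter()
    set_flag()
    section()
    clear_flag()
    return (time.perf_counter() - start) - overhead
```

With a real digital-output card, `set_flag`/`clear_flag` would write the spare output and the oscilloscope, not `perf_counter`, would measure the interval; the subtraction step is the same either way.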
********************** Michael Griffin London, Ont. Canada [email protected] **********************

Curt Wuollet

Now if we could just get some of the automation vendors to port their tools, I'm sure they would be pleased with the results, and it would solve a lot of resource-conflict and serial-comm problems. Think of how many of those show up on the list, along with other windowisms that reflect poorly on the vendor and raise support costs. Just a thought.

cww