Deterministic control in C++

Thread Starter

Vito Nardi

I’m looking for information about programming techniques for building a deterministic control application in C++ under Win98 or WinNT.
 
Vito,
You didn’t say what sort of determinism you’re looking for (microseconds, milliseconds, seconds, or minutes), nor what sort of protocol or driver is involved. These directly affect the type of program you’d have to write.
By control do you mean an ActiveX control?
Pete
 
Some general thoughts:
1. WINNT services are the starting point.
2. No application that has a window is suitable for control. Services run in the background.
3. Multi-threading is a must.
4. Thread synchronization is necessary.
5. WINNT is the minimum acceptable operating system.
6. Memory management is critical.
7. I/O bus interfacing lives in another service, so COMM interfaces or memory-mapped files are required to communicate between the two services (see the memory-mapped file sketch after this list).
8. PLC people don’t believe this will work.
9. Extensive flow charting and modeling are a requirement for any degree of success.
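As a rough sketch of the memory-mapped file approach in point 7 (the mapping name "ControlIoImage" and the IoImage layout are invented for illustration; a real system would also need a named mutex or event for synchronization and better error reporting):

#include <windows.h>

struct IoImage { long inputs[64]; long outputs[64]; };   // hypothetical shared I/O image

int main()
{
    // Both services open the same named mapping; the first caller creates it.
    HANDLE hMap = CreateFileMappingA(INVALID_HANDLE_VALUE, NULL, PAGE_READWRITE,
                                     0, sizeof(IoImage), "ControlIoImage");
    if (hMap == NULL)
        return 1;

    IoImage* shared = (IoImage*)MapViewOfFile(hMap, FILE_MAP_ALL_ACCESS,
                                              0, 0, sizeof(IoImage));
    if (shared == NULL)
    {
        CloseHandle(hMap);
        return 1;
    }

    shared->outputs[0] = 1;        // a write here is visible to the other service

    UnmapViewOfFile(shared);
    CloseHandle(hMap);
    return 0;
}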

Good Luck
Joe Bingham
 
Johan Bengtsson

That is easy: it isn’t possible.
Windows 98 (as well as Windows 95) cannot be made fully deterministic in any way at all.
In NT you may (by adding extra packages). I have never tested this myself, but they say it works in theory. (I have some information about that somewhere, but I got it from this list in the first place, so it may be better to get it directly from here.)
By itself, neither of these operating systems can guarantee you any timing for anything; you will generally have enough, and if that is not enough... sorry.
A high-priority thread combined with Sleep, with a calculated sleep time (using timeGetTime to work out the sleep time), is the best approach I have seen and used.
It may or may not be good enough for you, and it works for both Windows 98 and Windows NT.
What range of times are we talking about, and how necessary is it to have it really deterministic?
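(A minimal sketch of that approach, assuming a 10 ms target period; ControlStep() is just a placeholder for the actual control calculation, and timeBeginPeriod(1) is there so timeGetTime has roughly 1 ms resolution. Link with winmm.lib.)

#include <windows.h>
#include <mmsystem.h>          // timeGetTime, timeBeginPeriod

void ControlStep()
{
    // placeholder: read inputs, compute, write outputs
}

int main()
{
    const DWORD period = 10;                       // target period in ms
    SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_TIME_CRITICAL);
    timeBeginPeriod(1);                            // ask for ~1 ms timer resolution

    DWORD next = timeGetTime() + period;
    for (int cycles = 0; cycles < 1000; ++cycles)  // run ~10 s for this sketch
    {
        ControlStep();

        DWORD now = timeGetTime();
        if ((long)(next - now) > 0)                // wrap-safe "next is still in the future"
            Sleep(next - now);                     // sleep only for the time that is left
        // otherwise we are late: start the next step immediately
        next += period;
    }

    timeEndPeriod(1);
    return 0;
}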

Johan Bengtsson
P&L, the Academy of Automation
Box 252, S-281 23 Hässleholm SWEDEN
 
Emil Georgiev

Hi Vito,
Determinism isn’t a programming language (C++) issue; it depends rather on the OS involved. To some extent determinism is possible even in NT or 98. Depending on your $/time budget and the complexity of your task, you could go for a WinNT/WinCE real-time extension (take a look at http://www.vci.com/products/vci_products/vci_products.html), wait for Embedded NT, or try to solve the problem on your own using a kernel-mode driver!
Best regards
Emil Georgiev
MICONT Ltd.
 
Simon Martin

A few comments (inline)
<snip>
2. No application that has a window is suitable for control. Services run in the background.

SPM: Not necessarily
3. Multi-threading is a must

SPM: Not necessarily
4. Thread synchronization is necessary

SPM: Not necessarily
5. WINNT is the minimum acceptable operating system.

SPM: Not necessarily (QNX, RT-Linux, Proprietary)
7. I/O bus interfacing lives in another service, so COMM interfaces or memory-mapped files are required to communicate between the two services.

SPM: Not necessarily. The Win32 ReadFile command is a direct call (or as direct as you can get in WINNT)
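(For what it’s worth, a minimal sketch of that kind of direct call; the device name \\.\COM1 is only an example, and real code would configure the port and its timeouts first.)

#include <windows.h>
#include <stdio.h>

int main()
{
    // Open a device directly; "\\\\.\\COM1" is only an example device name.
    HANDLE h = CreateFileA("\\\\.\\COM1", GENERIC_READ, 0, NULL,
                           OPEN_EXISTING, 0, NULL);
    if (h == INVALID_HANDLE_VALUE)
    {
        printf("CreateFile failed: %lu\n", GetLastError());
        return 1;
    }

    char buf[64];
    DWORD bytesRead = 0;
    if (ReadFile(h, buf, sizeof(buf), &bytesRead, NULL))   // direct, synchronous read
        printf("read %lu bytes\n", bytesRead);

    CloseHandle(h);
    return 0;
}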
8. PLC people don’t believe this will work

SPM: Well specified, it will. Like any job, if you define well what you mean by deterministic, you can throw enough hardware at it to get it to work.
9. Extensive flow charting and modeling are a requirement for any degree of success.

SPM: Isn’t this true of all projects?
<snip>
 
Fred A. Putnam

This is entirely possible if the time constants of the system you are controlling are such that your control loop can be serviced fast enough and deterministically enough by the system.
Many control loops can be serviced at rates/latencies of 10 milliseconds or more. For these, you can write your control algorithm as a “C Icon” DLL and use the NT real-time multithreading, I/O, and other services provided by our LABTECH CONTROL product. Contact us for free evaluation downloads.
If your control loop needs to be serviced at higher rates, this can be achieved using real-time kernel extensions. They are provided by VenturCom - http://www.vci.com/, RadiSys - http://www.radisys.com/products/intime/, and Imagination Systems - http://www.imagination.com/. Tests by General Motors proved that these provide latencies of 100 microseconds or less; see my I&CS article on this at http://www.icsmagazine.com/soft0698.htm.
Fred 11/5
 
Johan Bengtsson

Just a small note:
It is possible to get around 10-20 ms just using a thread with a high enough priority and proper use of the Sleep command, but it will NOT be fully deterministic. Now get me right, this does NOT mean it is useless for a lot of applications, but if determinism is really important then that is not the way to do it. The other solutions mentioned are probably a lot better.
BTW, the scheduler in Windows NT switches about every 16 ms, and that means less than 16 ms is hard to reach just relying on the scheduler. (A thread not voluntarily leaving the processor, such as a low-priority worker thread, will keep the processor for about 16 ms before it is thrown out.)
BTW2, timeGetTime can be used to read the time with up to one millisecond resolution, but to really get one millisecond you have to call timeBeginPeriod (and of course eventually timeEndPeriod).
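(A small sketch that shows the effect: it measures the smallest step timeGetTime reports, first with the default timer resolution and then after timeBeginPeriod(1). The exact numbers depend on the machine and OS; the point is only that the second call is what buys you the 1 ms resolution. Link with winmm.lib.)

#include <windows.h>
#include <mmsystem.h>
#include <stdio.h>

// Report the smallest non-zero step seen in timeGetTime over ~1000 changes.
DWORD MeasureStep()
{
    DWORD last = timeGetTime(), smallest = 0xFFFFFFFF;
    for (int changes = 0; changes < 1000; )
    {
        DWORD now = timeGetTime();
        if (now != last)
        {
            if (now - last < smallest)
                smallest = now - last;
            last = now;
            changes++;
        }
    }
    return smallest;
}

int main()
{
    printf("default resolution     : %lu ms\n", MeasureStep());
    timeBeginPeriod(1);                       // request 1 ms resolution
    printf("with timeBeginPeriod(1): %lu ms\n", MeasureStep());
    timeEndPeriod(1);                         // always pair with timeBeginPeriod
    return 0;
}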

Johan Bengtsson
 
R A Peterson

You can also get deterministic control in DOS, on a Commodore 64, etc. The problem I see with the continuous debate over deterministic control is that most people really are talking about high-speed control when they say deterministic.
No form of Windows is a good choice for either perfectly deterministic or really high-speed control. They are used for it because it’s acceptable in most cases. If you really want deterministic, you realistically should just get rid of the disk drive, mouse, and whatever else needs to be serviced in like manner. Put the code into PROM, have enough RAM that it can run, and be done with it.
Real real-time operating systems cost less than Windows anyway, so why waste time with Windows at all? The reason is that most of us can tolerate the tiny little bits of non-determinism (is that a word?) that WinNT will end up with in order to get the other (maybe only perceived) benefits of using WinNT.
 
Ranjan Acharya

Your prerequisites for real-time control must consider the following:
- Forget about the DOS-wrapper OSs such as Win3X, Win95 and Win98. Consider Windows NT only at this time. Windows CE might be the way to go in the future. If you want true real-time, then look at QNX, Unix, pSOS, VxWorks or OS-9... and forget the cheesy Win32 API.
- Hard real-time event handling is deterministic and guarantees a response to every stimulus within a pre-defined amount of time. Soft real-time is not deterministic, and the delay to a stimulus may be so long that the next stimulus is missed (try running a piece of machinery off that!).
- It must survive a system crash and continue to operate in a safe manner, and it must be isolated from other applications running on the same machine. That means that your machine must not fall over dead when you have a BSOD.
- It must survive a hard-drive crash (or better still, use ROM and Flash).
- It must be based on a proven real-time engine. The control engine must have a proven track record in mission-critical applications. Not something you just got from the Web!
- According to Microsoft: “Windows NT Workstation is not a hard real-time operating system. Windows NT Workstation is a general purpose operating system that has the capability to provide very fast response times but is not as deterministic as a hard real-time system requires”. I think that this is Microsoft’s way of pushing us towards future implementations of Windows CE (the Washing Machine OS).
- However, NT is a very powerful OS. It provides good hardware and software interrupt support with the appropriate event dispatching via its Hardware Abstraction Layer (HAL). It also supports multi-processor systems. Windows NT defines 32 levels of priority for tasks, with 31 being the highest real-time priority and 0 being the lowest priority for an idle thread (see the short priority sketch after this list).
- One bane of real-time control under Windows NT is the Deferred Procedure Call, or DPC. DPCs are set up by otherwise innocuous interrupt service routines (ISRs), and an ISR can set up several. They are then processed later by the OS at its leisure, in a queue WITH NO PRIORITISATION. This means inane things such as mouse movement can really slow down your “real-time” response.
- What to do? Do nothing, like Allen-Bradley and SoftLogix (vanilla NT), OR, like other vendors, pick a wrapper for NT. Alternatives are: RadiSys’s INtime (iRMX) (Steeplechase VLC, Cutler-Hammer NetSolver); VenturCom’s RTX (Intellution Wizdom); Imagination Systems’ Hyperkernel (Nematron); InControl from Wonderware has an NT hack too.
- These wrappers come in two flavours: a real-time base that loads before NT, or a souped-up HAL. The first always seems the best to me. It means that when you see the BSOD, your control system is still running.
- We have used the AB solution for a customer on a non-critical system. The nice thing about it is that it has the look and feel of PLC-5 or SLC (of course it means having another development package too). The system works well and does not crash.
- I recommend using multiple PCs if you are heading this way. Do not let some cheap project manager make you put your SCADA, MMI and PLC functionality on one machine. Keep the PLC separate. IMHO.
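As a footnote to the priority levels mentioned above, a minimal sketch of how a Win32 program asks for the top of NT’s real-time range (REALTIME_PRIORITY_CLASS plus THREAD_PRIORITY_TIME_CRITICAL maps to priority 31). Whether that is deterministic enough is exactly what this thread is about, and it can easily starve the rest of the system:

#include <windows.h>

int main()
{
    // Move the whole process into the real-time priority class ...
    SetPriorityClass(GetCurrentProcess(), REALTIME_PRIORITY_CLASS);
    // ... and push this thread to the top of that class (priority 31).
    SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_TIME_CRITICAL);

    // time-critical work would go here

    return 0;
}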

RJ
Ranjan Acharya
Team Leader - Systems Group
Grantek Control Systems http://www.grantek.com/
 
Fred A. Putnam

> It is possible to get around 10-20 ms just using a thread with a high enough priority and proper use of the Sleep command, but it will NOT be fully deterministic. Now get me right, this does NOT mean it is useless for a lot of applications, but if determinism is really important then that is not the way to do it. The other solutions mentioned are probably a lot better.<

As is common in this type of discussion, we are clearly running into semantic difficulties. What does “fully deterministic” mean? As some may take this to mean “infinitely deterministic”, I’d like to suggest a more useful term: “deterministic enough”. Like “real-time”, I would submit that “deterministic” needs to be defined relative to the requirements of the process under control. Thus, one should always reference a time scale when discussing whether a system is sufficiently deterministic or sufficiently real-time for a given process. Many processes have time scales that are tens of seconds or even minutes long. For these, most engineers would agree that 10-20 ms is deterministic enough.

> BTW, the scheduler in Windows NT switches about every 16 ms, and that means less than 16 ms is hard to reach just relying on the scheduler. (A thread not voluntarily leaving the processor, such as a low-priority worker thread, will keep the processor for about 16 ms before it is thrown out.)<

This is incorrect as stated. The Windows NT kernel will run the highest-priority thread whenever it is ready. It does not wait until the round-robin timeslicing period for low-priority threads is up.
Fred 11/10
--
Fred A. Putnam
LiveUpdate and Labtech
 
Johan Bengtsson

> As is common in this type of discussion, we are clearly running into semantic difficulties. What does “fully deterministic” mean? As some may take this to mean “infinitely deterministic”, I’d like to suggest a more useful term: “deterministic enough”. Like “real-time”, I would submit that “deterministic” needs to be defined relative to the requirements of the process under control. Thus, one should always reference a time scale when discussing whether a system is sufficiently deterministic or sufficiently real-time for a given process. ...<

I agree completely; the difference between fast response and determinism is quite often missed. True determinism is not about HOW fast a system is, but whether a system is ALWAYS fast enough.
A while ago I tried to run a program something like this:
#include <windows.h>
#include <mmsystem.h>    // timeGetTime, timeBeginPeriod; link with winmm.lib

DWORD values[2001];      // histogram of observed loop times in ms

int main()
{
    DWORD start, current, last, diff;
    int loop2;

    timeBeginPeriod(1);
    start = current = timeGetTime();
    do
    {
        for (loop2 = 10000; loop2; loop2--)
            ;                       // do nothing 10000 times
        last = current;
        current = timeGetTime();
        diff = current - last;      // gives the # of ms between this reading and the previous one
        if (diff > 2000)
            diff = 2000;            // truncate if greater than 2000 ms, since the array ends there
        values[diff]++;
    }
    while (current - start < 600000);   // run the test for 10 minutes
    timeEndPeriod(1);
    return 0;
}
If I run this with a high enough priority it gets above even the operating system. (When I did that, the mouse stopped completely, as well as a lot of other things.)
I still needed the “truncate if greater than 2000 ms” statement if I ran the test long enough.
That basically means: this loop normally executes in well below 1 ms, it jumps up to about 16 ms sometimes, and occasionally 2000 ms is not enough.
BTW, I know this code is stupid to run since it uses up all of the processor and doesn’t let even the OS get anything, but it makes a point.

>This is incorrect as stated. The Windows NT kernel will run the highest-priority thread whenever it is ready. It does not wait until the round-robin timeslicing period for low-priority threads is up.<

I have read that too, I just don’t believe it works that way every time. I think I have read somewhere else that there is some mechanism making really low-priority threads run occasionally even when a higher-priority thread is available. The reason is to not completely starve out a low-priority thread. I am not really sure about this but will make further tests someday.
Just to make my opinion clear, in case someone has misunderstood it:
I do believe plain NT is enough for some control systems but not for all of them, and some can be handled with the other NT-based approaches (not relying on NT’s scheduler) suggested in some other responses. Windows NT’s scheduler does what it is designed to do, and that is not highly deterministic control. That doesn’t mean it is useless for all types of control!
I wouldn’t trust this in some potentially hazardous machinery, but if the result of a too-long delay between executions is a lost tenth of a second of production every second month - well then... why not?
Johan Bengtsson
P&L, the Academy of Automation
Box 252, S-281 23 Hässleholm SWEDEN
 
While much of the discussion has quickly, and legitimately, moved to the most common execution environment for C++, namely Windows NT, the question of using C++ as a language, instead of, say, C or assembler, is a different and equally interesting one.
With the fancy features of C++ such as constructors and destructors, operator overloading, virtual functions, etc., the issues of C++ efficiency and determinism are not easy. Simple, similar-looking C++ statements can have remarkably different execution times depending upon the vagaries of the language. And if the dynamic features of the language are used, where the data types are inspected at run time to determine which functions are called, determinism becomes very difficult if not impossible.
Depending upon the particular program and libraries, the simple declaration of a class in a function can cause a large amount of code to execute which creates and instantiates the class on the stack. Similarly, the destructors that run on function exit can be quite sizable. (But while this time can be long and hidden from the unsuspecting programmer, it will usually be fixed and not affect determinism. Of course, problems arise if the constructors/destructors themselves are non-deterministic, say by calling malloc() or free(), which can take a variable amount of time depending on the state of the heap.)
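(A small sketch, with an invented Message class, of the sort of hidden cost being described: the one-line declaration inside the loop runs a constructor that calls malloc() and a destructor that calls free() on every pass, even though neither call appears in the loop itself.)

#include <cstring>
#include <cstdlib>

class Message
{
public:
    Message(const char* text)            // constructor hides a heap allocation
    {
        len_ = std::strlen(text);
        buf_ = static_cast<char*>(std::malloc(len_ + 1));  // time depends on heap state
        std::memcpy(buf_, text, len_ + 1);                 // (no failure check here, which is its own problem)
    }
    ~Message() { std::free(buf_); }      // destructor hides a heap release
private:
    char*       buf_;
    std::size_t len_;
};

void ControlCycle()
{
    // Looks like a cheap local variable, but each iteration pays for
    // strlen + malloc + memcpy + free behind the scenes.
    for (int i = 0; i < 100; ++i)
    {
        Message m("setpoint updated");
        // ... use m ...
    }
}

int main() { ControlCycle(); return 0; }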
Thus, some of the fancy features of C++ should be used carefully (and maybe sparingly) in programs where performance and determinism are important. Similarly, all class libraries (which can have “hidden” behaviors) that a system uses must be known and understood. Testing of programs and inspection of the actual generated code and code paths is important until familiarity with C++ and its behaviors is gained.
Overall, with proper engineering, C++ can be used for real-time and even deterministic programming. As with any language, including C and assembler, care and understanding of the language and the entire system, including the OS, are required.
My fifth of a dime’s worth, steve
Steven B. Cliff
VP, Research & Development
Control Technology, Inc
http://www.controltechnology.com
 
Michael Griffin

<clip>
I have read that too, I just don’t believe it works that way
every time. I think I have read somewhere else that there is
some mechanism making really low-priority threads run
occasionally even when a higher-priority thread is available.
The reason is to not completely starve out a low-priority
thread.
<clip>
I believe that I also read this some years ago. If I recall correctly, the scheduling algorithm which was originally used for NT was a proprietary one. It was supposed to be very closely targeted towards giving the user the best appearance of responsiveness for the foreground tasks, while still allowing background tasks some CPU time on an irregular basis. The idea was to give the illusion of a faster computer even if CPU utilisation was thereby made a bit less efficient.
The assumptions the algorithm design was based on were of the computer being used by a single person running standard desktop applications, including CAD, etc. I don’t know whether the scheduling algorithm has since been significantly changed.
<clip>
Windows NT’s scheduler does what it is
designed to do, and that is not highly deterministic control.
That doesn’t mean it is useless for all types of control!

I wouldn’t trust this in some potentially hazardous machinery,
but if the result of a too-long delay between executions is
a lost tenth of a second of production every second month -
well then... why not?
<clip>
I think though that it is easy to get yourself into trouble with Windows NT. Nobody can really seem to give me a definitive answer on how deterministic NT actually is, other than saying “not very”. A number which is an “average”, or “most of the time when I tested it” is not very comforting. Blaming the problems on “bad network drivers” kind of misses the point.
My own experience with this particular problem has so far been limited to listening to test equipment software developers from several companies moaning about all the headaches they ran into because NT did unpredictable things to their timing for reasons that no one could explain. They hadn’t realised they would have problems with determinism until the design was too far advanced to turn back, and they ended up behind schedule and over budget as a result (and sometimes with unresolvable bugs). The decision to use Windows NT was made at the beginning of the project, instead of being the end result of detailed design analysis.
One of the problems with NT seems to be that while there are a lot of books on NT available which can help you write data entry software, I haven’t been able to find anything which is oriented towards control uses or test equipment. It’s no use having a shelf full of books if none of them answer the types of questions you are interested in.
Asking a typical office application type computer programmer (e.g. my brother) about these types of questions also seems to be a waste of time. The types of problems I need to solve are ones he never imagined existed.
If anyone knows of any good Windows NT reference material which is oriented towards control uses, I am sure other people here would appreciate learning about it. I myself may need hard information on this subject for a future project I will be working on.

Michael Griffin
London, Ont. Canada
 
With respect to the vagaries of C++, most things are resolved at compile time unless otherwise stated, so overloaded functions and operators are resolved at compile time. OK, some things are run-time by definition, like dynamic cast checking, but otherwise it should generate timings similar to plain vanilla C, depending on the efficiency of the compiler.
With respect to the use of malloc in a constructor: it is bad practice to put anything that can fail inside a constructor. The workaround is to create an init function (which I consider good) or to throw an exception (which I call a workaround, not a solution).
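(A sketch of that init-function pattern with an invented Buffer class: the constructor only does things that cannot fail, and the allocation that can fail is moved into Init(), which reports the result by return value instead of throwing.)

#include <cstdlib>
#include <cstddef>

class Buffer
{
public:
    Buffer() : data_(0), size_(0) {}          // nothing here can fail
    ~Buffer() { std::free(data_); }

    bool Init(std::size_t size)               // everything that can fail goes here
    {
        data_ = static_cast<char*>(std::malloc(size));
        if (data_ == 0)
            return false;                     // caller decides how to handle it
        size_ = size;
        return true;
    }

private:
    char*       data_;
    std::size_t size_;
};

int main()
{
    Buffer b;
    if (!b.Init(4096))
        return 1;                             // allocation failed; no exception thrown
    // ... use b ...
    return 0;
}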
I agree that knowledge of the tool used is more a key factor here than the actual language itself. I suppose with a good enough knowledge of COBOL I could write a deterministic program...
Debian GNU User
Simon Martin
Project Manager
Isys
mailto: [email protected]
 
E. Douglas Jensen

I trust that you all have read the papers on Microsoft’s web
site which report on some characterizations of NT’s timeliness
such as at http://www.research.microsoft.com/~mbj/
I don’t have time right now to explain about NT’s scheduling algorithm(s).
BTW, not all real-time control operates with microsecond and millisecond or even second timescales. Real-time computing for higher-order and higher-level control in an enterprise usually has time constraints and activity execution durations ranging across a wider spectrum, typically on the order of 10^-1 to 10^6 seconds. Each individual control loop may involve timeframes in multiple, perhaps changing, regions anywhere in that spectrum. These timeframes are normally large with respect to (at least much of) the latency magnitudes and predictabilities of the underlying infrastructure. For example, the magnitudes and predictability of OS interrupt/context switching and service latencies (which are the primary focus of traditional real-time thinking and technology) are likely to be insignificant, allowing the use of mainstream COTS products such as NT; those of the networks are more likely to have greater significance (e.g., due to large data transfers, or low bandwidth).
Doug
E. Douglas Jensen (traveling in Monterey)
[email protected]
http://www.real-time.org
 
Darwin Frerking

Robert,

1) What is the package you use?
2) What are the resolution and jitter (uncertainty) times for I/O reads and writes?

Regards,
Darwin Frerking, Control Engineering Manager
FAS Technologies - 10480 Markison Road - Dallas, TX 75238
 
Hi All

Rant ON:

One really good reason to use C and C++ for deterministic programming is so you can buy that fantastic package that emulates 4 PLCs in Sanskrit, Latin, and those other IEC 1131 languages. I wouldn’t use NT for that though. That half-hour reboot recovery is hard to do in a deterministic fashion.


Curt Wuollet
Linux Systems Engineer
Heartland Engineering Co.
 
aaah, yes, but what is the system you mentioned programmed in...

Debian GNU User
Simon Martin
Project Manager
Isys
 
Robert Trask, P.E.

1) What is the package you use?

The package is TwinCAT by Beckhoff. The full software package can be
downloaded at www.beckhoff.com and used for 30 days with no obligations. I
do not work for Beckhoff so I feel comfortable posting this to the list.
The software is only recently available in the US. It is quite amazing
stuff.

2) What are the resolution and jitter (uncertainty) times for I/O reads and writes?

The resolution depends on the I/O hardware. The jitter is also hardware
dependent and the software has an easy to access screen that displays the
jitter. On my laptop running the software with a minor control program and
a DeviceNet PCMCIA scanner, the jitter is on the order of 4-5 microseconds.

Robert Trask, P.E. [email protected]
Wilmington, NC USA
 