RTOS vs OS

Thread Starter

Praveen Mathew

What are the main differences between an RTOS and a normal OS like Unix or Windows?

Why is an RTOS preferred over these for certain applications?

Would appreciate any thoughts on this matter.

Thanks
praveen
 
Carlos O'Donell

The biggest difference is determinism. An RTOS will have a deterministic scheduler. For any given set of tasks, your process will always execute at a fixed interval of microseconds or milliseconds, exactly, and that interval stays the same from one scheduling cycle to the next.

In UNIX and Windows the schedulers are usually soft-realtime (as opposed to some hard-realtime RTOSes). Soft-realtime means that the scheduler tries to ensure your process runs every X milliseconds, but may fail to do so on occasion. Your process may also experience scheduler jitter: the actual interval between runs may be much less than X at one point and much greater than X at another. A hard-realtime RTOS will always make sure your process runs every X milliseconds by taking time away from lower priority processes.
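As a rough illustration of the difference, here is a minimal sketch (not from the original posts) of a periodic sampling loop on Linux using the POSIX interfaces: the process requests the SCHED_FIFO policy and then sleeps to an absolute deadline each cycle. The 1 ms period, the priority value of 80, and the do_sample() routine are assumptions for the example; on a stock kernel this is still only soft-realtime, it merely tells the scheduler to favour the task.

    /* Hedged sketch: periodic loop with SCHED_FIFO + absolute-time sleep.
     * Assumes Linux/POSIX and sufficient privilege (root or CAP_SYS_NICE). */
    #include <sched.h>
    #include <stdio.h>
    #include <time.h>

    #define PERIOD_NS 1000000L              /* assumed 1 ms sample period */

    static void do_sample(void) { /* placeholder: read inputs, compute, write outputs */ }

    int main(void)
    {
        struct sched_param sp = { .sched_priority = 80 };   /* assumed priority */
        if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)
            perror("sched_setscheduler");   /* falls back to normal scheduling */

        struct timespec next;
        clock_gettime(CLOCK_MONOTONIC, &next);
        for (;;) {
            do_sample();
            next.tv_nsec += PERIOD_NS;      /* advance the absolute deadline */
            if (next.tv_nsec >= 1000000000L) {
                next.tv_nsec -= 1000000000L;
                next.tv_sec += 1;
            }
            /* sleep until the absolute deadline, so run-time of do_sample() does not drift the period */
            clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
        }
        return 0;
    }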

There are various RTOS extensions to Linux, including RTAI and RTLinux. Windows XP claims to have hard-realtime by use of process priorities, but I don't have much experience there.

This is all very important if you are doing data acquisition and measurement. An RTOS allows for deterministic sampling using software.

Cheers,
Carlos.
 
Carlos,

Well -- true, but that's not all.

The "classic" difference between an RTOS such as the old ModComp Classic and DEC's RSX-11 (for those of you old enough to remember these minicomputers), and the more recent operating systems based on UNIX and including DOS and Windows in all forms, is Pre-Emption. Those older RTOS's and the minicomputers on which they were based, used hardware interrupts for all significant events
including scheduling timers and external I/O state changes. They had a large number of registers with a set devoted to each interrupt level. For example, program execution on the Modcomp used 16 registers, but it handled 16 levels of priority interrupt, so it needed a total of 256 registers. Hardware interrupts on the Modcomp did not require saving registers before execution, unless one of the levels was used for multiple sub-levels. Typically, interrupts were serviced in only a few CPU cycles, and the interrupted program resumed. Pre-emption is the ability to interrupt an operating program, including the OS itself, with a higher priority interrupt immediately.

Modern CPUs cannot do this, BUT they are now so much faster (1-3000 times) than those old minicomputers, that with efficient register block streaming, large cache memories, and today's fast memory, there is no noticeable difference between an RTOS and conventional OS EXCEPT in embedded applications. However, none of the current microcontroller architectures used for embedded systems support more than 4 vectored interrupt levels. Today's use of registers in embedded systems not based on the Intel 80xx family, tends to be more like RISC processors in which there is no dedicated set of registers that could be saved. Rather, their large number of registers are used more like a FIFO stack automatically retaining registers on interrupt. This makes the old-fashioned RTOS unnecessary.

Modern RTOSs simply make sure that a) no interrupt is ever lost, and b) no interrupt can be blocked by a lower priority process.

Determinism is simply that the maximum possible worst-case delay is known and is repeatable. Not quite good enough for an RTOS.

Dick Caro (been there -- done that!)
===========================================
Richard H. Caro, CEO
CMC Associates
2 Beth Circle, Acton, MA 01720
Tel: +1.978.635.9449 Mobile: +.978.764.4728
Fax: +1.978.246.1270
E-mail: [email protected]
Web: http://www.CMC.us
Buy my books:
http://www.isa.org/books
Automation Network Selection
Wireless Networks for Industrial Automation
http://www.spitzerandboyes.com/Product/fbus.htm
The Consumer's Guide to Fieldbus Network Equipment
for Process Control
===========================================
 
Vladimir E. Zyubin

Hello automation,

There is only one remarkable difference: some OSes are advertised as RT ones, the others as general-purpose ones.

--
Best regards.
= Vladimir E. Zyubin
= Friday, January 21, 2005, 5:43:33 PM =
 
Michael Griffin

Re: Dick Caro's reply. I have a few minor clarifications.

On January 21, 2005, Dick Caro wrote:
<clip>
> Modern RTOSs simply make sure that a) no interrupt is ever lost, and b) no
> interrupt can be blocked by a lower priority process.
>
> Determinism is simply that the maximum possible worst-case delay is known
> and is repeatable. Not quite good enough for an RTOS.
<clip>

The real difference between an RTOS and a general purpose OS is that with an RTOS the designers have taken care to ensure that the response times are known. This is not as simple as it may sound. Modern general purpose operating system kernels are very large, with several million lines of code. It can be difficult to trace through them to find all the possible sources of delay in response. An RTOS tends to be much smaller than a general purpose OS making guaranteeing the response time more practical.

As well as the difficulties in predicting response time, there are often deliberate design decisions made which affect response time. When an operating system is executing code within itself, it is often necessary to "lock" the system from switching tasks while it is in critical zones. These critical zones are sequences of code which must not be interrupted in order to avoid corrupting system data.
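To illustrate why the length of these critical zones matters, here is a small user-space analogy (not kernel code, and not from the original post): a shared buffer protected by a lock, where the slow work is deliberately done outside the locked region so other tasks are held up for as short a time as possible. The buffer size and function names are invented for the example.

    #include <pthread.h>
    #include <string.h>

    #define N 64

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static double shared_samples[N];        /* data shared between tasks */

    void update_samples(const double *fresh)
    {
        pthread_mutex_lock(&lock);          /* enter the critical zone */
        memcpy(shared_samples, fresh, sizeof shared_samples);
        pthread_mutex_unlock(&lock);        /* leave it as quickly as possible */
    }

    double average_samples(void)
    {
        double local[N], sum = 0.0;

        pthread_mutex_lock(&lock);          /* copy the data out under the lock... */
        memcpy(local, shared_samples, sizeof local);
        pthread_mutex_unlock(&lock);

        for (int i = 0; i < N; i++)         /* ...and do the slow work outside it */
            sum += local[i];
        return sum / N;
    }

A kernel faces the same trade-off with its own internal locks, except that while it holds them it may also be deferring scheduling or interrupts, which is what produces the unpredictable delays discussed here.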

The OS designers generally try to keep these critical zones short, but there are trade-offs involved. For example, designing for shorter or more predictable response times may decrease average throughput. A decrease in average performance might be considered acceptable for someone designing an embedded application, but it might be considered completely unacceptable to someone designing a large scale database system. Since general purpose operating systems are designed for the desktop and server markets, they are designed with the requirements of those markets in mind.

While an OS may or may not be *intended* for use in real time applications, whether it is in fact suitable for a particular real time situation is a matter of judgement. First you must decide what real time deadlines you must meet, and then you must decide what degree of risk of not meeting them you are willing to take. Once you know that, you can select an OS.

A practical example may make some of this clear. A common general purpose operating system is Linux. Until about a year or so ago, the standard kernel version was 2.4. This was not intended as an RTOS, but there are a number of embedded software vendors who would take the standard Linux kernel and modify it (they of course had complete access to the source code) to make it suitable for many real time applications.

The reason why standard Linux 2.4 was not considered to be "real time" is because there were long sections of code which were "locked" while executing. Making it "real time" involved removing these locks. However, the means used to do so had side effects which were unacceptable to enough people who were not interested in real time that these changes were never accepted into the mainstream code base.

The main stream of development for Linux after version 2.4 was to make it more scalable, particularly in the upwards direction. In this context, "scalable" meant being able to use it in larger multi-processor systems with less loss in efficiency. The result of this was Linux 2.6 (the current version). Making it more upwardly scalable though had an interesting side effect - they had to remove or change a lot of the internal locks (although they did so in a way that didn't have the undesirable side effects). The result is that version 2.6 tends to have a much more predictable and shorter response time.

So, does this mean that Linux is an RTOS? The answer is "no" in the sense that it isn't the design intent to be one. However, it can still be suitable for many real time applications.
 
Curt Wuollet

Hi Michael

I've been running some controller-type code and the latest kernels are indeed very good for variability and latency. To put it in perspective, you are far more likely to miss an automation event due to the heavy filtering in PLC inputs and the slow sampling rate than due to the rare long response to an interrupt. In the PLC time context, I'd say Linux is unquestionably real time. That is, in long-term tests you would have 100% on-time completion of the tasks needed to read, solve, and write at any cycle time greater than 1 msec with any practical I/O count. This is with "normal" code without special extensions, just the preemption and scheduling features in a distribution kernel.

At this time, the limitation for general automation is I/O. Random wiring, filtering, and garden-variety output circuits with long on/off times set the upper practical limit, and certainly at those speeds Linux would be real time. Since it's not practical in general automation to use controlled-Z wiring and impedance matching, I'd say it's good enough for any job you can do with a PLC. If the truth were known, I'd question whether many PLCs are "real time" even in their normal application.
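For what it's worth, the kind of long-term test I'm describing can be sketched in a few lines of C: repeat a nominal cycle and record the worst overrun seen. The 1 ms cycle and the iteration count are arbitrary assumptions, and the nanosleep() call merely stands in for the read/solve/write work of a real scan.

    #include <stdio.h>
    #include <time.h>

    #define CYCLE_NS   1000000L     /* nominal 1 ms cycle (assumed) */
    #define ITERATIONS 100000L

    static long long ns_diff(struct timespec a, struct timespec b)
    {
        return (a.tv_sec - b.tv_sec) * 1000000000LL + (a.tv_nsec - b.tv_nsec);
    }

    int main(void)
    {
        struct timespec before, after;
        struct timespec cycle = { 0, CYCLE_NS };
        long long worst = 0;

        for (long i = 0; i < ITERATIONS; i++) {
            clock_gettime(CLOCK_MONOTONIC, &before);
            nanosleep(&cycle, NULL);        /* stands in for read/solve/write */
            clock_gettime(CLOCK_MONOTONIC, &after);

            long long late = ns_diff(after, before) - CYCLE_NS;
            if (late > worst)
                worst = late;               /* track the worst-case overrun */
        }
        printf("worst-case overrun: %lld us\n", worst / 1000);
        return 0;
    }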

Regards

cww
 
Michael Griffin

Re: Curt Wuollet's comments:

Conventional PLC CPUs are not "real time". They simply offer *average* scan rates in the tens of milliseconds with deviations of a similar order. Most control applications do not require real time and average reaction times in the tens of milliseconds are more than adequate. Genuine high speed real time tasks in PLCs are typically handled by special hardware, such as counter, stepper, or servo modules. Some PLCs allow some limited code blocks to be scheduled on a timed basis, but this feature is very seldom used even on those PLCs which possess it.

The rate and repeatability of the thread timings I demonstrated in my experiments (in reply to "Re: PC: Ways to do machine control under Windows") are better than the scan rate and repeatability of a typical PLC in service
today. Any advantages that a conventional PLC may have do not lie in speed or determinism.

A typical application for a PC in the problem domain we are discussing would be a computerised test with low data acquisition rates (e.g. less than 100 Hz). A system such as this may "scan" several analogue inputs during a test
and act on the results. A PLC could perform the same task (this example was deliberately chosen to compare PC versus PLC). However, the PC offers a simpler way to provide a better operator interface, and to store and distribute the results of the tests. A PLC would "scan", while a PC program would use multiple timed threads to poll the I/O and apply the readings to some set criteria. The tests I conducted in previous messages used a threading method which is analogous to the approach a PC would use in this example.

The timing experiments I conducted were with a stock 2.4 kernel. Some improvements could have been attempted by 1) using a 2.6 kernel, 2) enabling kernel pre-emption, or 3) using a faster task switching rate (the standard is 10 ms - some people change this to 1 ms). I would expect any benefits derived from these to depend upon the application.

If we examine each of these, the question of whether to use a 2.6 kernel would be more or less moot, as this is the current version and would be used in a new application anyway. Kernel pre-emption is relevant to fast interrupt
response, but would again be more or less irrelevant to our discussion, where we are polling I/O on a constant schedule.

A faster task switch time (e.g. 1 ms instead of the standard 10 ms) might be useful for applications which need the faster thread repetition rate, but I doubt it would do anything for the worst-case deviations I mentioned in the last set of experiments (several samples of approximately 30 msec).

However, there are several special factors which play into these timing deviations which may not apply under other circumstances. The threading library used was that belonging to the Python interpreter. Using the OS threads directly (possibly the POSIX threads) may give a different result. Taking advantage of this would require either a different language (e.g. 'C'), or a different VM (versions of Python operate under other VMs, including Java - I don't know if this would make a difference).

The deviations may have been affected by the I/O operations being performed (the last set of tests included writing to a simulated log file). Another approach may involve writing to a pipe or memory-mapped file, and having another process (not just another thread) take the data and write it to the disk file.
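A minimal sketch of that pipe approach (purely illustrative, not the actual test code): the sampling process writes each reading into a pipe and a separate logging process drains the pipe to disk, so disk latency never stalls the sampling loop. The file name, record format, 25 ms scan period, and sample values are all invented for the illustration.

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int fd[2];
        if (pipe(fd) != 0) { perror("pipe"); return 1; }

        if (fork() == 0) {                      /* logging process */
            close(fd[1]);
            FILE *log = fopen("samples.log", "w");
            if (!log)
                _exit(1);
            double value;
            while (read(fd[0], &value, sizeof value) == (ssize_t)sizeof value)
                fprintf(log, "%f\n", value);    /* slow disk I/O happens here */
            fclose(log);
            _exit(0);
        }

        close(fd[0]);                           /* sampling process */
        for (int i = 0; i < 1000; i++) {
            double value = i * 0.1;             /* placeholder for a real reading */
            write(fd[1], &value, sizeof value); /* short write; logger does the rest */
            usleep(25000);                      /* 25 ms "scan", as discussed above */
        }
        close(fd[1]);                           /* logger sees EOF and exits */
        wait(NULL);
        return 0;
    }

If the pipe ever filled up the write would block, so a real implementation would need to size the buffer or check for that, but the idea is to keep the disk out of the timing path.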

The target repetition rate for the threads which scan and evaluate I/O could be set to run faster, but as you mentioned there may be no benefit to it if the I/O cannot deliver useful data faster or if the characteristic being
measured does not respond faster. Since we are comparing things operating on a PLC type time scale, a good target repetition rate would be 25 msec.

Given the above, a stock kernel with *no* performance tweaks would likely be more than adequate for most PC applications. Higher performance is available via some standard options, which would extend the application range a bit further. To operate reliably in the microsecond range, however, I believe a genuine RTOS is required.
 
Curt Wuollet

Hi Michael

It seems we diverge only in the use of interpreted languages for time critical processes. I'm really trying to get there, realizing that speeds and raw power have increased several orders of magnitude since I may have formed my biases. In fact, what amazes me is that the perception of speed lags way behind the real improvements in throughput, as software complexity (bloat) has absorbed the available cycles. Anyway, I doubt I'll code control stuff in Python, but I should take another look. And I don't believe I'll need to resort to assembler, but I'll probably still separate low level stuff from high level stuff. The wonderful thing is that it is becoming much easier to get the performance levels needed in Bonehead C on "normal" Linux. The next step will be when one can easily use hardware interrupts in userland. But that's as much philosophy as anything else. The intense interest in embedded Linux keeps making things easier and easier for things on the fringes of hard, fast realtime. I can run most DAQ cards fast enough to capture waveforms with good fidelity and crunch the numbers on stream. That's as fast as I need at the moment. And I can reliably do anything a PLC can do while I'm doing it. And maybe play DOOM :^)

Regards

cww
 
Armin Steinhoff

Hi,

>Re: Dick Caro's reply. I have a few minor clarifications.
>On January 21, 2005, Dick Caro wrote:
><clip>
> > Modern RTOSs simply make sure that a) no interrupt is ever lost, and b) no
> > interrupt can be blocked by a lower priority process.
> >
> > Determinism is simply that the maximum possible worst-case delay is known
> > and is repeatable. Not quite good enough for an RTOS.
><clip>
>
>The real difference between an RTOS and a general purpose OS is that with an
>RTOS the designers have taken care to ensure that the response times are
>known. <

Hm, I believe they take care that the processing is strictly event oriented. The response time is not important as long as the processed results are available at the deadline.

> This is not as simple as it may sound. Modern general purpose
>operating system kernels are very large, with several million lines of code.
>It can be difficult to trace through them to find all the possible sources of
>delay in response. An RTOS tends to be much smaller than a general purpose OS
>making guaranteeing the response time more practical. <

IMHO... it doesn't matter how big the kernel is. It's important how deterministically the kernel responds to events. A problem is mostly the disabling of interrupts in such big non-RTOS kernels... that means interrupt events are suppressed.

>As well as the difficulties in predicting response time, there are often
>deliberate design decisions made which affect response time. When an
>operating system is executing code within itself, it is often necessary to
>"lock" the system from switching tasks while it is in critical zones. These
>critical zones are sequences of code which must not be interrupted in order
>to avoid corrupting system data. <

Yes... and here is the big design difference between RTOS and non-RTOS!

>The OS designers generally try to keep these critical zones short, but there
>are trade-offs involved. For example, designing for shorter or more
>predictable response times may decrease average throughput. <

This depends on the 'costs' of context switching... good RTOSes allow fast and efficient context switching.

>[ clip ..]
>
>The reason why standard Linux 2.4 was not considered to be "real time" is
>because there were long sections of code which were "locked" while
>executing. Making it "real time" involved removing these locks. However, the
>means used to do so had side effects which were unacceptable to enough people
>who were not interested in real time that these changes were never accepted
>into the mainstream code base.
>
>The main stream of development for Linux after version 2.4 was to make it
>more
>scalable, particularly in the upwards direction. In this context, "scalable"
>meant being able to use it in larger multi-processor systems with less loss
>in efficiency. The result of this was Linux 2.6 (the current version). Making
>it more upwardly scalable though had an interesting side effect - they had to
>remove or change a lot of the internal locks (although they did so in a way
>that didn't have the undesirable side effects). The result is that version
>2.6 tends to have a much more predictable and shorter response time. <

True... that kernel reacts faster to events.

>So, does this mean that Linux is an RTOS? The answer is "no" in the sense
>that
>it isn't the design intent to be one. However, it can still be suitable
>for many real time applications. <

True... but the real-time performance is still not predictable and a lot of developers are 'fiddling around' to improve it.

Best Regards
Armin Steinhoff
http://www.steinhoff-automation.com
 
Armin Steinhoff

Hi All,

>Carlos,
>
>Well -- true, but that's not all.
>
>The "classic" difference between an RTOS such as the old ModComp Classic
>and DEC's RSX-11 (for those of you old enough to remember
>these minicomputers), and the more recent operating systems based on UNIX
>and including DOS and Windows in all forms, is
>Pre-Emption. Those older RTOS's and the minicomputers on which they were
>based, used hardware interrupts for all significant events
>including scheduling timers and external I/O state changes. They had a
>large number of registers with a set devoted to each
>interrupt level. For example, program execution on the Modcomp used 16
>registers, but it handled 16 levels of priority interrupt, so
>it needed a total of 256 registers. Hardware interrupts on the Modcomp did
>not require saving registers before execution, unless one
>of the levels was used for multiple sub-levels. Typically, interrupts were
>serviced in only a few CPU cycles, and the interrupted
>program resumed. Pre-emption is the ability to interrupt an operating
>program, including the OS itself, with a higher priority
>interrupt immediately. <

Preemption happens on two levels... at program level and at hardware level. Preemption at hardware level (or interrupt level) leads to interrupt nesting. It preempts interrupt service routines...

Operating programs can be preempted by the scheduler (triggered by events)... e.g. when a program with a higher priority requests the CPU.

>Modern CPUs cannot do this, <

Sorry, but that's not correct. All modern CPUs - including the x86 line - allow that kind of preemption as described above. But interrupt nesting is not supported by all RTOSes...

> BUT they are now so much faster (1-3000 times) than those old
> minicomputers, that with efficient
>register block streaming, large cache memories, and today's fast memory,
>there is no noticeable difference between an RTOS and
>conventional OS EXCEPT <

No, no... there are remarkably BIG differences!

> in embedded applications. However, none of the current microcontroller
> architectures used for embedded
>systems support more than 4 vectored interrupt levels. <

The x86 CPUs, for example, support 15 hardware interrupt levels... the CPUs of the PPC line are an exception.

> Today's use of registers in embedded systems not based on the Intel 80xx
>family, tends to be more like RISC processors in which there is no
>dedicated set of registers that could be saved. Rather, their
>large number of registers are used more like a FIFO stack automatically
>retaining registers on interrupt. This makes the
>old-fashioned RTOS unnecessary.
>
>Modern RTOSs simply make sure that a) no interrupt is ever lost, <

This depends on the device driver and the hardware interface of the device... it doesn't depend on the RTOS.

> and b) no interrupt can be blocked by a lower priority process. <

You are mixing up two things here. The execution of a 'program' can't block an interrupt. What is important is that a hardware interrupt with a lower priority should not block an interrupt with a higher (hardware) priority.

>Determinism is simply that the maximum possible worst-case delay is known
>and is repeatable. Not quite good enough for an RTOS. <

If you mean your definition of determinism... yes, then you are right :)

Best Regards
Armin Steinhoff
http://www.steinhoff-automation.com
 
Michael Griffin

On Jan 31, 2005 02:51, Armin Steinhoff wrote:
<clip>
> >The real difference between an RTOS and a general purpose OS is that with
> > an RTOS the designers have taken care to ensure that the response times
> > are known. <
>
> Hm, I believe they take care that the processing is strictly event
> oriented. The response time is not important as long as the processed
> results are available at the deadline.
<clip>

However, you do have to know whether the deadlines can in fact be met, so the response times have to be known.

> IMHO... it doesn't matter how big the kernel is. It's important how
> deterministically the kernel responds to events. A problem is mostly the
> disabling of interrupts in such big non-RTOS kernels... that means
> interrupt events are suppressed.
<clip>

The reference to the size of the kernel is with respect to how practical it is to ensure an OS behaves correctly as an RTOS. An RTOS adds design and testing criteria which are beyond what a conventional OS requires. The more code which is present in the kernel, the more difficult it is to ensure that the real time criteria have been met. Although the basic design problem is the same in either case, it is important to keep the scale of the problem manageable.
 
This has been an interesting thread I believe. May I suggest that there are at least three distinct markets for "RTOS" capabilities?

1. There is the mass produced, low value market. Cash registers, home and small office printers, etc. Production runs of 10,000 and up.

2. There is the low volume, high security market. Aerospace and military. (Automotive comes between 1 and 2, I believe.)

3. One and few off systems like custom factory automation.

The high-volume systems pare down production costs by fractions of a penny, while the factory automation systems require on-going flexibility.

I think that, all other things being equal, 1 and 2 prefer minimal kernels for the delivery system with the inconvenience (read: higher costs) of cross-development, while the factory automation systems benefit from the flexibility of self-hosted development (read: a general-purpose operating system).

All these systems require high-dependability OSs, aerospace especially so. Small kernels provide this more easily as the complexity is lower and they do not have to protect against system developer error at run-time. General-purpose OSs require rock-solid design and capabilities to ensure high-dependability.

Peter

Peter Clout
Vista Control Systems, Inc.
 
maphil philip

In an RTOS the timing behaviour is important. An RTOS is deterministic and a normal OS is non-deterministic. Contrary to a normal OS, the goal of an RTOS is to minimize complexity. Not all embedded applications need an RTOS, but by using an RTOS efficiently we can provide correctness, protection, etc.
 
Mudit Aggarwal

I understood that an RTOS should have deterministic behaviour. But what is there in an RTOS which makes it deterministic that is not there in a normal OS?

Pre-emption, low interrupt latency, and priority scheduling can exist in a normal OS also, so what exactly makes an RTOS deterministic?
 
Michael Griffin

What gives the RTOS the deterministic behaviour is how it is written. Most operating systems have "locks" preventing interruptions in "critical sections". Every part of an RTOS kernel is written so that it can be interrupted at almost any time, which requires that locked sections of code must be as few and as short as possible. This means the latency between an event and the response to it can be accurately "determined" (known).

There are also often (but not always) special scheduling calls in an RTOS which can be used to help ensure that the most critical tasks get priority over the less critical ones. Whenever the RTOS designer has to make a choice between responsiveness and efficiency, he will in most cases choose responsiveness.
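As a concrete (but hypothetical) example of such a call, here is how an application typically asks for priority scheduling using the POSIX interfaces found on Linux and on many RTOSes: the time-critical work runs in a SCHED_FIFO thread and the process memory is locked to avoid page-fault latency. The priority value and the worker function are assumptions, and on Linux this requires root or CAP_SYS_NICE.

    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>
    #include <sys/mman.h>

    static void *critical_task(void *arg)
    {
        (void)arg;
        /* the time-critical work would run here */
        return NULL;
    }

    int main(void)
    {
        pthread_attr_t attr;
        struct sched_param sp = { .sched_priority = 50 };   /* assumed priority */
        pthread_t tid;

        mlockall(MCL_CURRENT | MCL_FUTURE);     /* avoid page-fault latency */

        pthread_attr_init(&attr);
        pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
        pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
        pthread_attr_setschedparam(&attr, &sp);

        if (pthread_create(&tid, &attr, critical_task, NULL) != 0)
            perror("pthread_create");           /* e.g. insufficient privilege */
        else
            pthread_join(tid, NULL);
        return 0;
    }

On a true RTOS the equivalent call is normally guaranteed to be honoured; on a general purpose OS it is a request the kernel will try to satisfy.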

In contrast, a general purpose kernel will often be written with large sections that cannot be interrupted (locks are applied). This means there can be long (and indeterminate) periods of time for which external events must wait. Generally, no one knows how long these periods of time can be. The OS designer will almost always choose to maximise average throughput rather than responsiveness.

Having said the above, sometimes we get lucky and attempts to improve efficiency in a general purpose OS will also improve real time response. This happened several years ago when the Linux OS kernel was being changed to improve the ability to use multiple CPUs. Using dozens of CPUs efficiently requires the ability to interrupt the OS kernel in a manner more like an RTOS than a general purpose OS. Recent work on reducing power consumption (for embedded applications) has had similar effects.

The net result is that mainstream development for Linux happens to produce a result which is useful to people producing real time versions of Linux. It is expected that within a couple of years, producing a real time version of the standard Linux kernel will just require changing an option and re-compiling. Some distributions may ship the RT kernel version as an option (many currently provide alternate kernel versions with different options). At present, there are RT versions of Linux, but they have extensive internal changes (although fewer now than before) from the standard distribution.

MS-Windows is a different story. The standard MS-Windows OS is used in a much narrower range of applications than Linux is, and can't be efficiently used in very large or very small applications. For embedded use, Microsoft offers a completely different OS called "Microsoft Windows CE" which has an interface layer which acts in a manner somewhat familiar to someone who has written programs for the standard versions of MS-Windows.

The above is a very brief summary which doesn't attempt to discuss some of the other features which a specialised RTOS will offer which may make them more suitable for smaller embedded applications. Not every RTOS is suited for every RT application. However, discussing that in any detail is a subject for a book, not a short message.
 
RTOS is just a means of positioning for the target group. If we compare Windowz and QNX we do not find any valuable differences. Microkernel? OK. RT means "a microkernel architecture". That's all. Deterministic behavior? Just B.S. Where is the criterion? It is absent. Discrete.

--
Best regards,
Vladimir
 
Michael Griffin

In reply to Vladimir: An RTOS does not have to use a micro-kernel and a use of a micro-kernel does not make an OS an RTOS. Good examples of these are respectively, the RT versions of Linux which use a monolithic kernel, and Minix 3 or Hurd which have micro-kernels but are not an RTOS.

Many people consider a micro-kernel to be a good basis for an RTOS because the small size of the kernel means it is easier to verify the length of the "locked" (uninterruptable) code sections (because there is less code to review and maintain). The rest of the OS processes are pushed out to modules with lower privilege levels which can be interrupted at any time just like a user program.

Micro-kernels are also popular in RTOS designs because an RTOS is often used in small embedded systems. The modularity of the micro-kernel design makes it easier to strip it down to the bare essentials for that particular application, thereby saving EPROM and RAM.

The disadvantage of a micro-kernel is that it runs more slowly than the alternative (monolithic kernel) on typical hardware, and is more difficult to write and debug (and so tends to incorporate potential improvements more slowly). Micro-kernels are popular with theoretical computer scientists but all of the popular general purpose operating systems today use monolithic kernels (specialised ones like QNX are the exception).

The criteria for deterministic behaviour in an RTOS is that an interrupt is always serviced within a specific period of time, or that a process is always run at a specific interval. However, using an RTOS does not automatically make a complete system "deterministic". That requires proper design of the overall application, hardware, and system. The RTOS is just a tool in the toolbox of the application designer.
 
Vladimir Zyubin

> In reply to Vladimir: An RTOS does not have to use a micro-kernel
> and a use of a micro-kernel does not make an OS an RTOS. Good
> examples of these are respectively, the RT versions of Linux which
> use a monolithic kernel, and Minix 3 or Hurd which have
> micro-kernels but are not an RTOS. <

It looks like apophatic theology... definition by negation. :)

What are the RTOS features? I see no difference between QNX and Windows. Microkernel architecture only.

> Many people consider a micro-kernel to be a good basis for an RTOS
> because the small size of the kernel means it is easier to verify
> the length of the "locked" (uninterruptable) code sections (because
> there is less code to review and maintain). The rest of the OS
> processes are pushed out to modules with lower privilege levels
> which can be interrupted at any time just like a user program. <

Microkernel architecture allows us to close the question of multitasking logical parallelism entirely. We can easily share the kernel between any multicore architecture. And there is no scheduler problem: latencies, preemptive algorithms, timesharing, priorities, etc. in MCA.

> Micro-kernels are also popular in RTOS designs because an RTOS is
> often used in small embedded systems. The modularity of the
> micro-kernel design makes it easier to strip it down to the bare
> essentials for that particular application, thereby saving EPROM and
> RAM. <

And it makes it easy to share the tasks between the cores, i.e. to transform logical parallelism into physical parallelism.

> The disadvantage of a micro-kernel is that it runs more slowly than
> the alternative (monolithic kernel) on typical hardware, and is more
> difficult to write and debug (and so tends to incorporate potential
> improvements more slowly). Micro-kernels are popular with
> theoretical computer scientists but all of the popular general
> purpose operating systems today use monolithic kernels (specialised
> ones like QNX are the exception). <

Yes. Parallelism is more difficult to deal with.

> The criteria for deterministic behaviour in an RTOS is that an
> interrupt is always serviced within a specific period of time, or
> that a process is always run at a specific interval. However, using
> an RTOS does not automatically make a complete system
> "deterministic". That requires proper design of the overall
> application, hardware, and system. The RTOS is just a tool in the
> toolbox of the application designer. <

Any interrupt demands a non-zero time. In a multicore parallel system with a microkernel OS it demands a minimal time interval for handling. And it is localised, i.e. it depends on the local task structure only.

As to the word "deterministic": determinism - the philosophical doctrine that all events including human actions and choices are fully determined by
preceding events and states of affairs, and so that freedom of choice is illusory.

So, personally can make the following statement only: any digital system is deterministic by definition.

As to me, RT in our field is just a means to use logical operations with time entities: pauses, latencies, timeouts, etc. in order to synchronise control algorithm with the physical processes which are on the controlled object. In other words, any control algorithm is RT by definition. If control system has problems with synchronisation (or just demands any manipulations with priorities to be within the specification), it is just a bad designed system. IMO.

--
Best regards,
Vladimir E. Zyubin
 
Michael Griffin

In reply to Vladimir Zyubin (April 27, 2007 12:27:20 am):

VZ: Microkernel architecture allows us to close the question of
> multitasking logical parallelism entirely. We can easily share the
> kernel between any multicore architecture. And there is no scheduler
> problem: latencies, preemptive algorithms, timesharing, priorities,
> etc. in MCA.
MG: I don't believe that a microkernel inherently solves any of these, at
least not in a way that wouldn't be equally open to a monolithic kernel. There is nothing about a microkernel that makes it automatically useful with a multicore CPU (or multiprocessor system).

MG:
> > The disadvantage of a micro-kernel is that it runs more slowly than
> > the alternative (monolithic kernel) on typical hardware, and is more
> > difficult to write and debug (and so tends to incorporate potential
> > improvements more slowly). Micro-kernels are popular with
> > theoretical computer scientists but all of the popular general
> > purpose operating systems today use monolithic kernels (specialised
> > ones like QNX are the exception). <


VZ: Yes. Parallelism is more difficult to deal with.
MG: Parallelism is indeed more difficult to deal with, but the difficulty I
was referring to isn't parallelism. With a monolithic kernel, you are dealing with essentially one program (the OS kernel) and can debug it as such. With a microkernel, you are dealing with multiple cooperating programs (microkernel plus "server modules") which are operating at different CPU privilege levels, with control passing back and forth through interfaces that are intended to act as barriers between them. Standard debugging techniques don't handle this very well.

On the surface a microkernel is simpler to debug because it is a series of small modules. In practical terms though it doesn't work so well with the common CPUs available today. The user program makes a call to a "server" module which then calls the microkernel which then calls another server module which then calls the microkernel to gain access to the hardware. It is easy for the programmer to get lost in these back-and-forth calls through the interface "gateways". If the CPU hardware allowed the microkernel to delegate specific address ranges to the "server" (subsystem) modules this would be much simplified (and faster), but unfortunately that isn't the case for commodity hardware.

VZ: Any interrupt demands a non-zero time. In a multicore parallel system
> with a microkernel OS it demands a minimal time interval for handling. And it
> is localised, i.e. it depends on the local task structure only.
MG: What you are describing is asymmetric versus symmetric multi-processor
systems, not microkernel versus monolithic kernel. There are also monolithic real time systems which reserve a particular core (or processor) for real time tasks, while the operating system and non-real time tasks run on a different processor (many mobile phones work this way). This is in fact the "easy" (or at least easier) way to do "real time". It is much harder to get the same results with a single CPU, or with a symmetrical system (where all CPUs are treated equally).

VZ: So, personally can make the following statement only: any digital system
> is deterministic by definition.
>
> As to me, RT in our field is just a means to use logical operations
> with time entities: pauses, latencies, timeouts, etc. in order to
> synchronise control algorithm with the physical processes which are
> on the controlled object. In other words, any control algorithm is RT
> by definition. If control system has problems with synchronisation (or
> just demands any manipulations with priorities to be within the
> specification), it is just a bad designed system. IMO.

The difference between an RTOS and a general purpose OS is really a matter of emphasis. If you asked the designer of a general purpose OS "what is the worst case latency in your OS", they would probably answer "I don't know". It isn't something that they generally worry about unless it gets so long that someone important enough complains about it. If you ask an RTOS designer the same question, they can give you a definite answer. Keeping this number as small as possible is their entire raison d'etre.

However as I said before, using an RTOS does nothing magical by itself for an application. It is just a tool in the toolbox of the control system designer. The entire system (hardware, OS, application) has to be properly designed and selected by someone who knows what they are doing or the entire "real time" effort is a waste of time.

Most industrial applications however do *not* require an RTOS, and using an RTOS where it isn't needed adds unnecessary complexity. People often fall into the trap of thinking that "embedded" or "small" or "fast" or "reliable" are synonymous with "real time" when that manifestly isn't the case.
 
Vladimir E. Zyubin

Good day, Michael!

Saturday, Apr 27, 2007 4:13 pm, Michael Griffin wrote:
MG: I don't believe that a microkernel inherently solves any of these, at
MG: least not in a way that wouldn't be equally open to a monolithic kernel.
MG: There is nothing about a microkernel that makes it automatically useful
MG: with a multicore CPU (or multiprocessor system).

The key words are "independence of functioning" or "weakly connected functioning". Those circumstances make microkernel architecture automatically useful with a multicore system. Logical multitasking parallelism can be easily transformed into physical parallelism, but a monolithic OS cannot do this.

And it is one of the problems of parallelism: we have to deal with the so-called combinatorial explosion of complexity, which immediately appears when we try to create a set of weakly-connected heterogeneous parallel modules.

That is the answer to why supercomputer programming (so-called parallel programming) is not common practice, but just an esoteric field of programming.

VZ: Any interrupt demands a non-zero time. In a multicore parallel system
>> with a microkernel OS it demands a minimal time interval for handling. And it
>> is localised, i.e. it depends on the local task structure only.

MG: What you are describing is asymmetric versus symmetric multi-processor
MG> systems, not microkernel versus monolithic kernel. There are also monolithic
MG> real time systems which reserve a particular core (or processor) for real
MG> time tasks, while the operating system and non-real time tasks run on a
MG> different processor (many mobile phones work this way). This is in fact
MG> the "easy" (or at least easier) way to do "real time". It is much harder to
MG> get the same results with a single CPU, or with a symmetrical system (where
MG> all CPUs are treated equally).

I made the simple statement: multicore systems need no "smart" scheduler. The possibility of having a unique core for every unique task eliminates the RT problem (in the understanding many of us have in our heads).

VZ: So, personally can make the following statement only: any digital system
>> is deterministic by definition.
>>
>> As to me, RT in our field is just a means to use logical operations
>> with time entities: pauses, latencies, timeouts, etc. in order to
>> synchronise control algorithm with the physical processes which are
>> on the controlled object. In other words, any control algorithm is RT
>> by definition. If control system has problems with synchronisation (or
>> just demands any manipulations with priorities to be within the
>> specification), it is just a bad designed system. IMO.

MG> The difference between an RTOS and a general purpose OS is really a matter of
MG> emphasis. If you asked the designer of a general purpose OS "what is the
MG> worst case latency in your OS", they would probably answer "I don't know". It
MG> isn't something that they generally worry about unless it gets so long that
MG> someone important enough complains about it. If you ask an RTOS designer the
MG> same question, they can give you a definite answer. Keeping this number as
MG> small as possible is their entire raison d'etre.

IMO, it is a very disputable definition of an RTOS. For example, we could easily transform an "ordinary" OS into an "RT" one just by calculating the worst-case latency.

MG> However as I said before, using an RTOS does nothing magical by itself for an
MG> application. It is just a tool in the toolbox of the control system designer.
MG> The entire system (hardware, OS, application) has to be properly designed and
MG> selected by someone who knows what they are doing or the entire "real time"
MG> effort is a waste of time.

Yes. It has to be properly designed. One of the requirements is "the application shall not need any dirty plays with priorities". In particular, in order to provide robustness during the lifecycle (corrections of errors, upgrades, enhancements, and other modifications).

MG> Most industrial applications however do *not* require an RTOS, and using an
MG> RTOS where it isn't needed adds unnecessary complexity. People often fall
MG> into the trap of thinking that "embedded" or "small" or "fast" or "reliable"
MG> are synonymous with "real time" when that manifestly isn't the case.

I personally prefer not to use the phrase "real time" at all. The words "embedded", "small", "fast", and "reliable" look more credible and understandable.

--
Best regards,
Vladimir E. Zyubin
 