Are Deterministic and Real Time the Same?

William Sturm

I would say that deterministic and real time could be considered the same. A real-time system is synchronized with the world outside the computer. If a real-time system falls out of sync, it has failed. Notice that there is no implication of speed or response time. A real-time system could be slow, but it must be guaranteed to respond within a specified time.
 
Trevor Ousey

Deterministic means the data is received at determined time intervals; for example, with ControlNet it is the scheduled time period that will provide the data every 10 ms. This is how ControlNet was better than Ethernet and other non-deterministic networks - or so the sales pitch went.
 
James Ingraham

"Can someone help me settle a discussion about this topic?"

Nope, there's no settling it. :)

Seriously though, "it depends" really is the answer. The problem is that there's no real definition of "real-time." "Deterministic" is therefore actually a more precise term. I have heard the terms used interchangeably, though I consider this to be imprecise.

Example of "real-time" not being deterministic:
Real-time Shop Floor Control, e.g. www.realtrac.com (Full disclosure: this is my uncle's company.) In this case, "real-time" means that you can track job progress without waiting for reports. Except you don't actually know when the data will be updated in the system, so you could be a little out of date. This is generally true of all real-time ERP/accounting/etc.

-James Ingraham
Sage Automation, Inc.
 
No they're not the same.

Deterministic = happens in exactly the same timeframe each time (usually fast enough to be considered real-time, but the interval itself is basically arbitrary). Could refer to a process that checks in once every second or once every minute. You assume that if it doesn't check in, there's a problem.

Real-time = standard system functions do not create processing overhead. Basically, the controller functions independently of processing times, or the overhead is well accounted for (think of slack in a valve).
 
Jonas Berge

I think they are not the same. This is my personal understanding:

DETERMINISTIC: The maximum response time can be predicted. That is, there is no element of randomness involved. Protocols that wait a random time before retrying after a failure are not deterministic. Deterministic does not necessarily mean fast, it just means predictable. It may be predictably fast or predictably slow. Many industrial data link protocols ensure this.

REAL-TIME: No events are missed. It is a term relative to the frequency of events. If the events come fast, the bus has to be even faster. If you are controlling a missile at twice the speed of sound you must be very fast. If you are measuring movement of glaciers one sample every year may be real-time. It is a matter of selecting the bus speed relative to the process. Too much data or an inefficient protocol makes the bus cycle slower. Cable length (capacitance) is a major limiting factor for bus speed.
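To put "relative to the process" in concrete terms, here is a toy sketch (not from the thread; the numbers simply restate the glacier and motion examples above, not real requirements):

```python
def is_real_time(event_period_s, worst_case_bus_cycle_s):
    """A bus is "real time" for a given process if its worst-case cycle
    is shorter than the interval between events it must not miss."""
    return worst_case_bus_cycle_s < event_period_s

# Glacier survey: one sample a year, so even a daily bus cycle is plenty.
print(is_real_time(365 * 86400, 86400))   # True
# Fast motion loop: 1 ms events on a 10 ms bus cycle means missed events.
print(is_real_time(0.001, 0.010))         # False
```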

I also included two more terms related to this discussion:

ISOCHRONOUS: Done on a precisely periodic basis, i.e. exact same interval every bus cycle. The real-time updates of inputs and outputs are not affected by non-real-time communication such as diagnostics and operator display etc. There is a minimum of jitter in the update time (I'm talking bus cycle jitter here, not bit transmission jitter which is a different story). This is very important for PID control and even more important for motion control. There are a few data link protocols which are isochronous but most are not. Buses designed for PID loops and motion control excel here. My favorite example is Foundation fieldbus H1, but there is also PROFIBUS-DPv2 and some other motion protocols.

Note that non-isochronous does not mean random or non-deterministic. It just means there is a slight variation, jitter, and for applications such as factory automation or SCADA this is not critical.
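One way to put a number on bus-cycle jitter is to run a fixed-period loop and look at the spread of actual cycle times. A rough sketch (run on a non-real-time OS, so the figures only illustrate the idea, not isochronous-bus performance):

```python
import statistics
import time

def measure_cycle_jitter(period_s, cycles):
    """Run a fixed-period loop and report cycle jitter as the
    standard deviation of the actual cycle-to-cycle time."""
    deadline = time.monotonic()
    stamps = []
    for _ in range(cycles):
        deadline += period_s
        while time.monotonic() < deadline:
            pass                        # busy-wait until the next slot
        stamps.append(time.monotonic())
    deltas = [b - a for a, b in zip(stamps, stamps[1:])]
    return statistics.pstdev(deltas)

jitter = measure_cycle_jitter(period_s=0.005, cycles=50)
print(f"cycle jitter: {jitter * 1e6:.0f} us")
```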

SYNCHRONIZED: The communication and control algorithm are coordinated so that the input sample is taken, then communicated; the control output is computed, then communicated, and lastly actuated. This sequence of events is totally logical and you would think that all systems/buses work this way - but they don't. Most systems are "free-running", where measurement, control, actuation, and communication occur independently of one another. A synchronized system has less jitter, again critical for PID and motion control. The system must run on a schedule to achieve this, and it goes beyond the mere communication. My favorite example is Foundation fieldbus H1. I believe most DCSs have control networks that do this too - and this is one of the points where DCS and PLC are very different.
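The sample-communicate-compute-communicate-actuate sequence can be sketched in a few lines (the stand-in functions are placeholders, not any real bus API):

```python
def synchronized_cycle(sample, communicate, compute, actuate):
    """One synchronized macrocycle: each stage waits for the previous
    one, so the sample-to-actuation delay is fixed by the schedule."""
    pv = sample()              # 1. input sample is taken...
    pv = communicate(pv)       # 2. ...then communicated on the bus
    out = compute(pv)          # 3. control output is computed...
    out = communicate(out)     # 4. ...then communicated on the bus
    return actuate(out)        # 5. ...and lastly actuated

# Toy stand-ins: ideal "bus", proportional control law, pass-through actuator.
result = synchronized_cycle(sample=lambda: 42.0,
                            communicate=lambda v: v,
                            compute=lambda v: 0.5 * v,
                            actuate=lambda v: v)
print(result)   # 21.0
```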

Do not confuse synchronized communication/execution with "synchronous" (clocked) bit transmission.

To learn more about fieldbus scheduling take a look at the yellow book "Fieldbuses for Process Control: Engineering, Operation, and Maintenance" buy online: http://www.isa.org/fieldbuses

Jonas
 
Michael Griffin

In reply to Jonas Berge: "Deterministic" does not mean there is no randomness in the communications response time, it just means that the worst case response time is known. That is, it means there is an upper bound to the randomness. A "random" delay introduced as part of a recovery process in a protocol will normally have a known limit.

Furthermore people usually talk about "deterministic" in terms of how something behaves when there are no communications errors, or when there are a specified number of errors (e.g. one). If you don't put a constraint on this definition, then no protocol is "deterministic." An unlimited number of consecutive errors will result in any protocol being "non-deterministic."

Realistically, the most any network protocol can offer is a probability of delivering a message within a defined period of time.
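Michael's point about bounded randomness can be made concrete with IEEE 802.3 truncated binary exponential backoff: each individual delay is random, yet the worst-case total delay before the protocol gives up is computable in advance. A sketch, assuming the 10 Mbit/s slot time:

```python
import random

SLOT_S = 51.2e-6     # slot time at 10 Mbit/s (512 bit times)
MAX_ATTEMPTS = 16    # 802.3 discards the frame after 16 failed attempts

def backoff_slots(collisions):
    """Truncated binary exponential backoff: a random wait of
    0 .. 2^min(n, 10) - 1 slot times after the n-th collision."""
    return random.randint(0, 2 ** min(collisions, 10) - 1)

# Any single delay is random, but the worst case is known in advance:
worst_case_s = sum((2 ** min(n, 10) - 1) * SLOT_S
                   for n in range(1, MAX_ATTEMPTS))
print(f"worst-case total backoff: {worst_case_s * 1000:.1f} ms")
```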
 
Sorry Michael, Jonas was correct.

10BASE5 Ethernet (the old yellow cable, collision-based Ethernet) using CSMA/CD is non-deterministic specifically because the protocol uses statistical (random) backoff. On collision detection, the IEEE 802.3 protocol demands that both transmitters wait a random time period, then try again. There is no guarantee that the message will EVER arrive at the destination, since another collision and another random backoff may occur again. That is the classic example of non-deterministic behavior. The only way to cure the problem is to eliminate collisions by using a full-duplex switch.

By the way, CAN also uses carrier sensing, but its collision resolution is completely non-random: bitwise arbitration lets the frame with the lowest numbered identifier transmit ahead of higher numbered ones. CAN is deterministic.
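A toy simulation of CAN's bitwise arbitration (not production code) shows why there is no random element: at each identifier bit a dominant 0 beats a recessive 1, so the lowest numbered identifier wins every time.

```python
def can_arbitrate(ids, bits=11):
    """Simulate CAN bitwise arbitration over standard 11-bit identifiers.
    At each bit (MSB first) a dominant 0 beats a recessive 1; nodes that
    sent recessive but see dominant back off, until one frame remains."""
    contenders = list(ids)
    for bit in range(bits - 1, -1, -1):
        bus = min((i >> bit) & 1 for i in contenders)   # wired-AND bus level
        contenders = [i for i in contenders if (i >> bit) & 1 == bus]
    return contenders[0]

winner = can_arbitrate([0x18F, 0x0A2, 0x3FF])
print(hex(winner))   # 0xa2 -- the lowest identifier wins, every single time
```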

Dick Caro
===========================================
Richard H. Caro, CEO , Certified Automation Professional
CMC Associates
2 Beth Circle, Acton, MA 01720
Tel: +1.978.635.9449 Mobile: +1.978.764.4728
Fax: +1.978.246.1270
E-mail: [email protected]
Blog: http://DickCaro.liveJournal.com
Web: http://www.CMC.us
Buy my books:
http://www.isa.org/books
Automation Network Selection
Wireless Networks for Industrial Automation
http://www.spitzerandboyes.com/Product/fbus.htm
The Consumer's Guide to Fieldbus Network Equipment for Process Control
Buy this book and save 50% or more on your next control system!!!
===========================================
 
Vladimir Zyubin

Hello, Richard:

According to your definition, CAN cannot be deterministic either, because of its non-zero bit error rate (BER). The reasoning is just the same as for the case of Ethernet collisions.

--
Best regards,
zyubin
 
Michael Griffin

In reply to Dick Caro: I believe that if you look at my description of "determinism" in network communications again you will see that I didn't mention anything about Ethernet. Rather, I said that "deterministic" means there is a known upper bound to response, not a precisely known response.

The CAN example that you mention does have some randomness in its operating characteristics. Since errors are not predictable (are random), the arrival time of any individual message cannot be precisely predicted, only the worst case response to a single error (the upper bound). The same is true for token passing networks which must regenerate the token after a lost token error.

If you have a control algorithm that truly cannot tolerate any "randomness" in network response, then it will fail in real life application. This is why I said "realistically, the most any network protocol can offer is a probability of delivering a message within a defined period of time."

On another note, "modern" Ethernet still uses the CSMA/CD algorithm, but most present-day hardware prevents collisions from actually occurring by using store-and-forward switches in a star network configuration. The exceptions, ironically, are some of the newer proprietary Ethernet-based industrial networks which use hubs in their "real time" protocol extensions.
 
Curt Wuollet

Actually, while CAN is perhaps not absolutely deterministic, it does demonstrate a very good point. It's "good enough" to have been used in a lot of control situations. I'm fairly sure that Ethernet, at least the faster varieties, is also "good enough" and as a practical matter would do well in all but a few. Actually I would propose that anything that can resolve collisions in less than one scan or update interval renders the variability moot, when working with PLCs. Multi-axis motion control or pushing messages across a heavily loaded segment would get hairy, but that is fairly predictable. I used to worry a lot about IC and cable propagation delays and the like. In this arena, the speed differences between CMOS and Low Power Schottky TTL are irrelevant. Similarly, the real-world maximum collision resolution times are pretty much swamped by the sampling rates. Yes, theoretically an Ethernet collision may never be resolved. But at some point, the bit may have been flipped by an errant gamma particle anyway.

Regards
cww
 
Michael Griffin

In reply to Curt Wuollet: Since you mentioned I/O speed over Ethernet, I thought I would share the results of some tests I did recently for an application using an Advantech Ethernet Adam 6000 digital I/O module. The tests were intended to check the read/write speed and error rate to see what the capabilities of the hardware were.

The tests involved reading the inputs and writing to the outputs repeatedly as fast as possible. Each read or write involved sending an Adam ASCII protocol command over UDP and waiting for the reply (or acknowledge). The total time was then divided by the number of iterations to determine the average times for a read, a write, and a read + write.

The results were as follows: the average time for a read was 0.74 ms. The average time for a write was 0.71 ms. The average time for alternating read plus write was 1.45 ms. Each test was run for 100,000 iterations. There was one error detected (a bad acknowledge on a write).

The test was written in Python using the standard "socket" module and run on Linux. Comparable results were achieved when the same test was repeated on MS-Windows XP. The I/O module can also speak Modbus/TCP, but I didn't test that protocol. The tests were conducted on an isolated network through a low cost switch (which is how the application was to be deployed).
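For readers who want to reproduce this kind of measurement, here is a minimal sketch of the timing loop. It substitutes a local UDP echo thread for the I/O module, and the `$016\r` command string is only a placeholder, not the actual Adam ASCII protocol:

```python
import socket
import threading
import time

def echo_server(sock):
    """Stand-in for the I/O module: echo each request back as the reply."""
    while True:
        data, addr = sock.recvfrom(1024)
        if data == b"QUIT":
            break
        sock.sendto(data, addr)

server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
port = server.getsockname()[1]
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(1.0)
ITERATIONS = 1000
start = time.perf_counter()
for _ in range(ITERATIONS):
    client.sendto(b"$016\r", ("127.0.0.1", port))   # placeholder command
    client.recvfrom(1024)                           # wait for the reply
elapsed = time.perf_counter() - start
client.sendto(b"QUIT", ("127.0.0.1", port))
avg_ms = elapsed / ITERATIONS * 1000
print(f"average round trip: {avg_ms:.3f} ms")
```

Against real hardware on an isolated network, the per-iteration average is what Michael reports above; on loopback it mostly measures OS socket overhead.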

I didn't measure variability in message timing (real time response) because it all worked so fast that I didn't have the proper equipment to measure something that fast. My conclusion is that at least for simple PC based applications, ordinary Ethernet based I/O can work fine (at least with the right hardware).

I believe that Ethernet collisions are a red herring in this type of application. Message traffic is controlled at the application level, so there is no reason for collisions to occur. A typical design would put the I/O on an isolated network anyway, as it makes the system easier to troubleshoot and maintain. If you need a supervisory network, then just add a separate Ethernet card.

If someone is concerned about response times, they should be concentrating on what their choice of operating system and development software does to their overall application response. I/O polling times are a negligible contributor in this type of application.

In addition to the above, I did some different tests reading and writing to an RS-232 device through the PC serial port. Again, the tests were repeated on both Linux and MS-Windows XP. Briefly, when run on Linux the timing results appeared to show command replies arriving as fast as the bits can travel through the wires at that baud rate. The same tests run on MS-Windows XP however showed very large delays in handling the messages. The system required a much slower polling rate to run successfully on MS-Windows XP. This is something that people may wish to keep in mind if they want to use serial (RS-232 or RS-485) connected devices.
 
Curt Wuollet

Hi Michael,

My point, more or less exactly. Now if you were to do the same thing with a PLC, I would expect that you would find your Ethernet delays to be a long way down your list of variabilities. Many PLCs can't handle back-to-back packets at serial port speeds. Some might even drop packets on the floor due to limited buffer space between scans. TCP would need to handle that, much like the ack/nak business in serial port protocols. I'm pretty sure the randomness of collision backoff would be the least of your issues due to the way comms are handled.

Regards

cww
 
Mike McDermott

All protocols are more or less "real time". Whether you use RS-232, Ethernet/IP, DeviceNet, ControlNet, or whatever, if you press a button on an HMI, it comes on instantly in the PLC... that is real time.

Deterministic, however, means being able to determine precisely when something happens. This is where ControlNet comes into play. Let's say I have Ethernet/IP and every 6 ms I want to GUARANTEE that I check an input. Well, you can't guarantee it, because Ethernet/IP is not deterministic. Can it happen? Sure. Can you guarantee it will happen? NO.

With ControlNet you can guarantee it. Some networks like EtherCAT claim to be so fast that you can basically call them deterministic, but the fact is, they can't GUARANTEE delivery at specific intervals.
 
Curt Wuollet

I think you are a victim of hyperbole. From many years of networking experience, you can't guarantee anything with ControlNet either. All you can really assume (which is not a guarantee) is that if your time constraint is not met, you will know about it.

I can go and unplug a cable and your guarantee goes out the window. And I can provide that degree of assurance with any protocol by simply setting a watchdog. What may be important is how often you don't meet the deadline and in real terms the difference may not be anything to brag about or even significant to the application. And then the question of cost effectiveness comes into play.

Few applications are critical enough where 1 missed update is the end of the world and those should probably not use a fieldbus anyway. When the uncertainty of scan rate sampling far exceeds the uncertainty of the network, it gets pretty illogical to be splitting hairs.
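The watchdog idea above can be sketched in a few lines: it does not prevent a missed deadline, it only guarantees you find out about it. The deadline and sleep times here are arbitrary illustration values:

```python
import time

class Watchdog:
    """Count deadline overruns between successive update "feeds"."""
    def __init__(self, deadline_s):
        self.deadline_s = deadline_s
        self.last = time.monotonic()
        self.missed = 0

    def feed(self):
        now = time.monotonic()
        if now - self.last > self.deadline_s:
            self.missed += 1    # overrun: not prevented, but detected
        self.last = now

wd = Watchdog(deadline_s=0.2)
time.sleep(0.05)
wd.feed()          # on time: nothing recorded
time.sleep(0.3)
wd.feed()          # late: one missed deadline counted
print(wd.missed)   # 1
```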

Regards
cww
 
Hello,
I think you're somewhat confused.

A little thought will show that nothing happens "instantly". No matter the protocol, there will be *some* delay. The difference between "real time" and "deterministic" is that "real time" asks whether the delay is acceptable to the application, while "deterministic" asks whether the delay can be exactly computed in advance.

Obviously, if the delay is deterministic, it's much easier to see whether or not it's acceptable.

The difference between hard real-time and soft real-time is that hard real-time expends engineering effort on never missing any deadlines, while soft real-time expends engineering effort on mitigating the consequences of missed deadlines. But that's a different question...

(There's also a second meaning to the phrase "real time", which is roughly synonymous to "on-line", as opposed to batch processing. It's probably better not to use that sense, because "real time" is confusing enough as it is.)

Jiri
--
Jiri Baum <[email protected]> http://www.baum.com.au/~jiri
 
Michael Griffin

In reply to Curt Wuollet: The applications that I have heard about for "deterministic" networking all seem to involve electronic gearing (or camming) of servo drives on things like large printing presses. Most industrial applications are not "real time" and do not require a high degree of "determinism" in networking.

This is probably just as well, as most people are connecting their networks to PLCs or to PCs running an MMI or SCADA application on an MS-Windows OS. Neither of these are real time or deterministic controllers. There is a lot more hot air generated about real time and deterministic response in fieldbuses than is justified by the number of applications that actually require it.

However, real time and deterministic response in standard ethernet will become fairly common before too long due to applications in video and audio streaming (particularly with interactive media). At that point I expect consumer grade ethernet hardware to outperform the specialty fieldbuses at a small fraction of the cost.
 
Dear All:
I have a feeling Deterministic and Real Time are the same... the same nonsense - both provide endless discussions about their meanings, but they have no meaning at all, because they are not terms, they are just products of marketers' fantasy, Roland Barthes's mythologies (mythologemes), notation without any denotation.

regards,
Vladimir
 
September, Clyde

Dear Vladimir,

Interesting statement - what if events have to be time stamped? e.g. trend data as part of a petrochemical plant.

"There is a lot more hot air generated about real time and deterministic response in field buses than is justified by the number of applications that actually require it." (MG)

...So I suppose if the application does justify it (even once), then can it really be hot air?

CWS
 