Technical Article

Data Flow Programming in LabVIEW

October 01, 2021 by Seth Price

One of the most confusing concepts for new LabVIEW programmers is data flow programming. Data flow programming determines the run order of nodes, making some data available before other data.

At first glance, the exact run order of nodes may not seem important. After all, LabVIEW runs all of the nodes faster than a human can detect the difference anyway. Why does it matter which one runs first?

With file writing and instrument control, however, an incorrect run order could mean writing to a file that does not exist in memory, or accessing a port that is not yet open, causing a major runtime error. If the programmer can determine the order, the code is called “determinate”; otherwise, it is called “indeterminate.”

 

What is Data Flow Programming?

Data flow programming is a means of setting the run order in a graphical programming language. In a text-based language, the program executes lines of code sequentially, reading one line after another. Looking at a LabVIEW block diagram, the temptation is to assume the code executes left to right or top to bottom in a similar fashion, but this is not the case.

Instead, data flow programming checks which nodes are eligible to run. A node becomes eligible to run when all of its required input terminals contain data. When the virtual instrument (VI) executes, it runs any one of the eligible nodes.

Nodes are considered indeterminate if there is no reason one should run before another, meaning both of them have all of their inputs met. Sometimes this doesn’t matter, but other times, indeterminate nodes can lead to problems.
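Because LabVIEW diagrams are graphical, the scheduling rule is easiest to see in a rough text-language sketch. The Python snippet below is only an analogy, not LabVIEW code; the node names and dependency table are made up. It mimics the rule above: a node becomes eligible once all of its inputs hold data, and the runtime may pick any eligible node to run next.

import random

# Hypothetical diagram: node name -> (inputs it needs, output it produces)
nodes = {
    "A": ([], "a_out"),        # no inputs, so eligible immediately
    "B": (["a_out"], "b_out"),
    "C": (["a_out"], "c_out"),
}

available = set()   # data values produced so far
run_order = []      # the order the nodes actually ran in

while len(run_order) < len(nodes):
    eligible = [name for name, (inputs, _) in nodes.items()
                if name not in run_order
                and all(i in available for i in inputs)]
    chosen = random.choice(eligible)      # B and C are indeterminate
    run_order.append(chosen)
    available.add(nodes[chosen][1])       # the node's output is now available

print(run_order)   # always starts with A; B and C may appear in either order

Node A always runs first because it is the only eligible node, but nothing forces B before C or C before B; those two are indeterminate.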

 

Data Flow Programming Examples 

File Problems

Suppose a programmer needs to write a simple VI to open a file, write some data, and then close it.

 


Figure 1. Incorrect: The open file node will run first.  

 

After the open file node runs, both the write to file and close file nodes are eligible to run (indeterminate), as both have all required inputs met. If the close file node executes first, the write to file node may write to unallocated memory.

To correct this, the programmer should wire the open file node to the write to file node, and the write to file node to the close file node. This wiring pattern ensures that the open file node executes first, followed by the write to file node, then the close file node. The write to file node is then guaranteed to write to an actual file instead of an unallocated piece of memory, which can cause the infamous “blue screen of death” on Windows machines.

 


Figure 2. Correct: The open file node will run first. 

 

The close file node cannot run until its input is met. Therefore, the write to file node will run second, and then the close file node will run. They have to run in this order, meaning the system is determinate.
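A rough text-language analogy may help. The Python sketch below is not LabVIEW code, and the file name and helper functions are hypothetical; it simply shows how passing the file reference from one step to the next plays the same role as the wire running from open file to write to file to close file.

def open_file(path):
    return open(path, "w")            # produces the reference the next step needs

def write_to_file(f, data):
    f.write(data)
    return f                          # pass the reference on so close must wait

def close_file(f):
    f.close()

ref = open_file("measurements.txt")   # hypothetical file name
ref = write_to_file(ref, "42.0\n")
close_file(ref)                       # cannot run until write_to_file has returned

Each call cannot start until the previous call has produced the reference it needs, so the order is forced, exactly as with the correctly wired block diagram.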

Using proper wiring, the programmer can force nodes to execute in a specific order. 

 

Math Operations

A complex math problem can sometimes require a determinate run order, much like the order of operations (PEMDAS) in arithmetic. Suppose signals from four sensors are fed into a math routine that outputs a single value an operator can use for a status. This is common in environments where the actual sensor values may be part of the company’s intellectual property (IP). The operator does not need access to the actual values, but must still make control decisions.

In this hypothetical math routine, some of the sensor values are multiplied together, added, incremented, and subtracted for the equation: 

 

(((X1 * X2) + (X3 * X4)) + 1) - X4 = Answer

 

From math class, the innermost sets of parentheses must be evaluated first, then the next set of parentheses, and so on. Some order is required, but the program can perform either X1*X2 or X3*X4 first. Those two terms do not need to be determinate.

 


Figure 3. Some indeterminate elements: There is no way to determine which multiply node will execute first. 

 

After the multiply nodes have executed, the rest of the nodes are determinate and have a set run order: add, increment, subtract. As it turns out, it does not matter which multiply executes first.
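A quick numerical check makes the point; the sensor values below are made up for illustration, and Python stands in for the block diagram.

# hypothetical sensor readings
x1, x2, x3, x4 = 2.0, 3.0, 5.0, 7.0

# multiply X1*X2 first, then X3*X4
a = x1 * x2
b = x3 * x4
answer_1 = ((a + b) + 1) - x4

# multiply X3*X4 first, then X1*X2
b = x3 * x4
a = x1 * x2
answer_2 = ((a + b) + 1) - x4

print(answer_1, answer_2)   # 35.0 35.0 -- the same result either way

Either ordering of the multiplies produces the same inputs to the add node, so the add, increment, and subtract nodes see identical data and the final answer is unchanged.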

 

Forcing a Run Order with Error Clusters

An error cluster is a tool for tracking and handling errors that may occur at runtime, rather than letting the entire VI crash. For example, suppose the VI asks the operator to select a file to open, and the operator presses the “cancel” button instead. LabVIEW would treat this as an error and instantly stop the VI without properly closing files, freeing up ports used for instrumentation, or performing other cleanup. With a properly handled error cluster, a dialog box or a default option can take over instead.

One technique for forcing a certain run order is to wire the error clusters between nodes. Not only are error clusters a great way to handle unexpected errors, but they can also be passed between nodes along “beer”-colored wires, creating a data dependency that sets the run order.

 

Figure 4. The simulate signal node must run first; after that, the spectral, distortion, and tone measurement nodes are indeterminate.

 

 


Figure 5. Wiring the error clusters determines the run order, and any problems can also be handled.
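In a text language, the same idea looks like threading an error value through every call. The Python sketch below is only an analogy, not the LabVIEW API; the measurement functions and return values are invented. Each step takes the error status produced by the previous step, which both forces the run order and lets a failure skip the remaining work gracefully.

def simulate_signal(error):
    if error:                        # propagate an upstream error untouched
        return None, error
    return [0.0, 1.0, 0.5], None     # hypothetical signal data

def spectral_measurement(signal, error):
    if error:
        return None, error
    return max(signal), None         # stand-in for a real measurement

error = None
signal, error = simulate_signal(error)
peak, error = spectral_measurement(signal, error)   # must wait for the error value

if error:
    print("Handled gracefully:", error)   # dialog box or default value instead of a crash
else:
    print("Peak:", peak)

Because spectral_measurement needs the error value that simulate_signal produces, it cannot run first; the chain of error values is what makes the order determinate.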

 

Incorrect run orders can create hard-to-track bugs in the code. Because of the “random” nature of selecting from eligible nodes, it is possible to get away with sloppy coding: a piece of code may run correctly 99% of the time, but then crash the entire computer 1% of the time. To help minimize these crashes, it is best practice to use data flow wiring to force a determinate run order wherever the order matters.