PID Tuning Without a Plant Model on a Slow System

I work at a food processing plant that requires that we hold a liquid product at a certain temperature while it is travelling through piping. A Rockwell PIDE controller controls this process, which is essentially a heat exchanger that opens a valve (0-100%) to release steam, heating the product in a triple tube heater to its desired set point. If the product does not reach this set temperature by the time it reaches the end of the hold pipe, then it is sent back to the product hold tank to be recirculated.

The main problem: the current system is overheating our product by ~20 degrees F, essentially burning it, causing build-up on probes and affecting quality.

[Image: sketch of the process piping and instrumentation]

Above is a brief drawing of what the system looks like. TT-130 is the incoming product temperature, TT-132 is the product temperature after heating by the hot water, TT-135 is the hot water temperature (which is controlled by opening/closing a valve), and TT-133 is the finished product temperature (if the temperature set point is reached, release to bottles; if not, recirculate).

Currently, our PID controller is set to Kp = 4, Ki = 4, Kd = 0. Clearly there is some overshoot, and because the process is so slow and delicate, it is not in our best interest to use trial and error to figure out the best parameters. Below is a plot I created in MATLAB to try to model our plant. The empirical data is normalized (TT-132 "triple tube heater temp" / set point) and shows many oscillations. Note the time axis (in seconds): the initial rise time is approx. 10 minutes.

[Image: MATLAB plot comparing the normalized empirical data, the FOPDT model, and the ITAE PID response]

The empirical plot includes the current controller settings (p = 4, i = 4, d = 0) during a start-up period. The blue graph is a first-order-plus-deadtime (FOPDT) representation of the heat exchanger (taking 't' at 28.3% and 63.2% of the final value, then finding tau/theta to create a transfer function as the plant model). The following is the code used to produce the green plot, which is my attempt at using an ITAE PID controller alongside the calculated transfer function.

[Image: MATLAB code for the ITAE PID simulation]
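For reference, the two-point FOPDT fit described above can be sketched in a few lines of Python. This is only a sketch; the example times and the unity process gain are illustrative, not the plant's actual values:

```python
def fopdt_from_two_points(t_283, t_632, k_process=1.0):
    """Estimate FOPDT parameters for G(s) = K*exp(-theta*s)/(tau*s + 1)
    from the times at which the step response reaches 28.3% and 63.2%
    of its final value (the classic two-point method)."""
    tau = 1.5 * (t_632 - t_283)   # time constant
    theta = t_632 - tau           # apparent dead time
    return k_process, max(theta, 0.0), tau

# Illustrative example: response hits 28.3% at 200 s and 63.2% at 420 s
K, theta, tau = fopdt_from_two_points(200.0, 420.0)
```

With those illustrative times, tau = 1.5 * (420 - 200) = 330 s and theta = 420 - 330 = 90 s, which is the kind of slow, dead-time-dominated model the rest of this thread discusses.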

Obviously this response isn't exactly desirable for our system. I want to remove the overshoot (from 220 degrees F down to our set point of 200 degrees F) without the risk of using a trial-and-error method. Am I modelling this plant properly? What strategy could I employ to create an accurate plant model? Do IAE/ITAE formulas work for such a slow process? I noticed that the Ti and Td values were fairly large, while Kc was always < 1.

Any guidance on this topic would be fantastic; controls are not my strong suit.
Lots of fun! Where did you get the ITAE Controller Parameters for PID?
I have never seen formulas like those before. There is a good chance that whoever derived them did it wrong.
I notice you got some info from Matlab.
ITAE is used to find where the closed-loop poles should be by calculating coefficients for the closed-loop transfer function. The coefficients can then be scaled to move the poles farther from the origin in the s-plane to get a faster response. See the coefficients listed here: The Optimal ITAE Transfer Function for Step Input - File Exchange - MATLAB Central.

Since you are using a FOPDT, there is one time constant or pole. The integrator in your PI controller adds another pole so you have two poles. You should be using the coefficients for a 2nd order system. However, the ITAE does not compensate for dead time which is probably most of your problem. In other words, don't use ITAE tuning for FOPDT systems unless you compensate for the dead time with something like a Smith Predictor. If the dead time isn't long relative to the time constant then IMC tuning works well.
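A minimal sketch of the IMC-based PI tuning mentioned above, for a FOPDT model (Python; the model numbers are made up for illustration, and the default choice of lambda is just one common heuristic, not a universal rule):

```python
def imc_pi_tuning(K, tau, theta, lam=None):
    """IMC-based PI tuning for a FOPDT process
    G(s) = K*exp(-theta*s)/(tau*s + 1).
    lam is the desired closed-loop time constant; a conservative
    default is the larger of tau/2 and the dead time."""
    if lam is None:
        lam = max(0.5 * tau, theta)
    Kc = tau / (K * (lam + theta))  # controller gain
    Ti = tau                        # integral time, same units as tau
    return Kc, Ti

# Illustrative model: K = 1, tau = 330 s, theta = 90 s
Kc, Ti = imc_pi_tuning(1.0, 330.0, 90.0)
```

Note how Ti ends up equal to the (large) process time constant while Kc stays modest, which matches the OP's observation that Ti was large and Kc < 1 for this slow process.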

If you don't have a model then you must do system identification to determine the model. Another problem is that if the product is not going through the triple tube heater at a consistent rate then the load is always changing. The product absorbs heat. A feed forward or bias would help a lot by anticipating the required amount of steam by measuring the rate of product and its temperature going through the triple tube heater. For instance, if you have a set point of 200 degrees, a bottle that has been recirculated because it was only at 190 degrees would require less heat than a new product at 100 degrees.

This can be a problem if recirculated product is mixed with new product because the amount of heat each needs is different.
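The feed forward idea above can be sketched as an energy-balance bias on the steam valve. This is only a sketch under assumed names and constants: cp, k_valve, and the flow measurement are all hypothetical and would have to be identified on the real plant:

```python
def steam_feedforward_bias(flow_kg_s, t_in_f, t_sp_f, cp=4.0, k_valve=0.05):
    """Rough feedforward bias for the steam valve (%), proportional to
    the heat the incoming product must absorb:
        Q = m_dot * cp * (T_sp - T_in)
    cp (kJ/kg/K) and k_valve (% valve opening per kW) are illustrative
    placeholders, not identified plant values."""
    dT_c = (t_sp_f - t_in_f) * 5.0 / 9.0   # Fahrenheit difference -> kelvin
    q_kw = flow_kg_s * cp * dT_c           # required heat rate
    return max(0.0, min(100.0, k_valve * q_kw))

# A recirculated batch at 190 F needs far less bias than new product at 100 F
bias_recirc = steam_feedforward_bias(2.0, 190.0, 200.0)
bias_new = steam_feedforward_bias(2.0, 100.0, 200.0)
```

The point is only the shape of the calculation: the bias anticipates load changes from product rate and inlet temperature instead of waiting for the PID to react to them.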
You might check this.

I have a YouTube Channel "Peter Ponders PID"
A four pole ITAE example. My coefficients are about the same as Matlab's. The difference is rounding error.
Let's return to tuning later, but in all probability this loop can be helped immensely by the addition of a cascaded temperature loop on the hot water. Your master controller is the final product temperature, which provides a setpoint to a new controller that controls the hot water temperature by manipulating the steam valve. There may well be other enhancements we could make, such as feedforward from TT-130 and flow (if you have it). The cascade controller has many advantages, but a handy one I find is that you can limit the output of the master controller so the setpoint it gives the slave is just a few degrees above the final product temperature. In that way you accurately limit the temperature of the hot water to avoid product burn, which is most likely to happen at startup. What algorithm are you using with the PIDE controller, Independent or Dependent?
An I time of just 4 seconds (am I understanding that correctly?) is quite low for a large system like this.
Is the P setting 4°C or 4% of the measurement scale? Both are also pretty low. With settings that are too low you get oscillations. Experiment with larger settings and see what happens.

Technically speaking, your system doesn't contain a D-action. But when everything is set OK, you can set D to 25% of the I setting to improve responsiveness to process changes and use it for over/undershoot suppression.

For a system like this you should consider using a cascaded master-slave control. That works better with a slowly reacting process in combination with fast reacting heating (steam).

The master controller:
- measures the temperature of the product you are trying to control: the "desired product temp"
- is set to slowly reacting (large P and I) corresponding to the reaction times of the process itself
- gives the setpoint from 0.0-100.0% to the slave controller

The slave controller:
- measures the temperature of the medium you use to heat up the process: the "hot water temp"
- is set fast (small P and I), corresponding to the fast-reacting heating medium
- the slave controls the temperature of the hot water you use for heating up the process
- use a thermocouple that is not too big (so it measures fast) and make sure it is well in contact with the steam
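The master/slave split above can be sketched in Python as two PI loops in series. This is a rough illustration, not the Rockwell PIDE algorithm; all gains, limits, and temperatures below are made-up example values:

```python
class PI:
    """Minimal discrete PI controller with output clamping (illustrative)."""
    def __init__(self, kc, ti, out_min, out_max):
        self.kc, self.ti = kc, ti
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0

    def update(self, sp, pv, dt):
        err = sp - pv
        self.integral += err * dt / self.ti
        out = self.kc * (err + self.integral)
        # clamp with simple anti-windup: undo the last integration step
        if out > self.out_max:
            out, self.integral = self.out_max, self.integral - err * dt / self.ti
        elif out < self.out_min:
            out, self.integral = self.out_min, self.integral - err * dt / self.ti
        return out

# Master: slow, and its output (the hot water SP, in F) is clamped so it can
# never demand hot water more than a few degrees above the 200 F product SP
master = PI(kc=0.5, ti=600.0, out_min=150.0, out_max=205.0)
# Slave: fast, drives the steam valve 0-100 %
slave = PI(kc=2.0, ti=30.0, out_min=0.0, out_max=100.0)

hot_water_sp = master.update(sp=200.0, pv=185.0, dt=1.0)
valve_pct = slave.update(sp=hot_water_sp, pv=180.0, dt=1.0)
```

The two details worth copying are that the master's output limit bounds the hot water temperature (limiting product burn at startup), and that each clamp has anti-windup so the integrator does not wind up while saturated.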

Now we will start to tune the master and slave controllers.
We first tune the (fast) slave controller.
Consider starting up with the slave in local mode (not coupled to the master, so the slave gets its desired temperature setpoint from you; we call that running locally).
Then set the desired temperature to a value in the middle between minimum and maximum (so we can go both up and down) and let it rise to a stable value where it remains without oscillating.
You can play with the PID settings here to see if you can make it better.
You can also set it to, for example, 50°C (a value in between the minimum and maximum) and let the PID controller do some autotuning to automatically detect the correct PID settings for your process.
Now when that is OK, set it to remote mode (coupled to the master, so the master supplies the desired setpoint temperature).
Now see what happens and try to tune the master (autotuning is also a possibility if you let the set value sit in the middle between the minimum and maximum).

Now when that is all ok you can try to go to the desired temperature.

Don't forget to install maximum temperature hardware safeties in your process.

And remember: the slave must be fast (so relatively low PID setting) and the master must be set slow (so relatively high PID settings). And remember that the slave which is fast also needs a fast small sensor with not too much mass.

I already made many controls like this using 2 channel temperature controllers.
I often use this one for these purposes.
It also has automatic over- and undershoot suppression algorithms and excellent auto-tuning capabilities.
You couple it to your PLC via a fieldbus; the easiest (and cheapest) is a Modbus RTU RS-485 3-wire connection.
It measures and calculates the PID 20 times per second, so it reacts fast to changes in temperature.

best regards,
Patrick Duis
Project Engineer
You are dealing with transport delay and thermal response lag; use proportional only to avoid overshoot.
Yes, it will avoid overshoot, but the response will still be sluggish, and because temperature systems are non-integrating, the temperature will never reach the set point.
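The offset is easy to show: for a self-regulating (non-integrating) process with static gain K under proportional-only control, a setpoint step leaves a permanent error of sp_step / (1 + K*Kc). A one-function sketch with illustrative numbers:

```python
def p_only_offset(sp_step, K, Kc):
    """Steady-state error of a proportional-only loop around a
    self-regulating process with static gain K:
        err_ss = sp_step / (1 + K*Kc)
    The error never reaches zero; only integral action removes it."""
    return sp_step / (1.0 + K * Kc)

# Illustrative: a 100 F setpoint step, process gain 1, controller gain 4
# leaves a permanent 20 F offset
err = p_only_offset(100.0, 1.0, 4.0)
```

Raising Kc shrinks the offset but never eliminates it, and past some point it trades the offset for oscillation.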

The cascaded loop solution mentioned above is MUCH better. Dead times will still be a problem; one can use a Smith Predictor, but that requires an accurate model.

I have doubts about the OP's formula. Most people don't know how to use ITAE correctly. The bottles must be thought of as heat sinks; the rate and temperature of the bottles entering the heater vary the load.

I also wonder about the validity of the plot in the first post. Why the big spikes?
OK, the ITAE formula you used is based on giving minimum response time and ALWAYS yields proportional gains that are too aggressive. In my experience, when applying this formula I would reduce the theoretical gain by as much as half. However, I never use this formula, as it does not provide the robustness that you need. The best plant models are derived from open-loop step tests; use the technique shown in the attached file to determine the transfer function and the ultimate tuning. There are a myriad of techniques, but this one gives the best results for your type of process. The units are in seconds, so you would need to change to minutes to suit your PIDE units.

If you move to a cascade loop system (highly recommended), the first test would be a steam valve step versus hot water temperature. Build your slave controller and use the tuning parameters calculated. The second test would be the output of the master controller (slave SP) versus the final product temperature, which you use to tune the master controller.

Startup is managed by one of two methods: the first is to limit the output of the master controller to just a few degrees above the setpoint for your product temperature; the second is to start the system purely with the slave controller in automatic and hold the temperature to just a few degrees above the final setpoint. When the product temperature reaches setpoint, revert to cascade control.


OK, the ITAE formula you used is based on giving minimum response time and ALWAYS yields proportional gains that are too aggressive.
That is because the OP is using ITAE incorrectly. Most people do. Look at my video and the link to where Matlab calculated the ITAE coefficients; Matlab's numbers are close to mine, within round-off error. The ITAE should never be used to calculate the controller gains directly. Instead, the coefficients for each power of s are calculated. These coefficients place the closed-loop poles relative to the origin, and that is what is important. If a faster response is necessary, the poles can be scaled, or moved away from the origin, as I showed in my video. After the closed-loop pole locations are determined, THEN you calculate the controller gains to place the closed-loop poles at those locations, as shown in my video.
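To make the procedure concrete for a PI loop around a first-order plant K/(tau*s + 1) (dead time ignored, as noted above): the closed-loop characteristic equation is s^2 + ((1 + K*Kc)/tau)*s + K*Kc/(Ti*tau), and matching its coefficients to the 2nd-order ITAE polynomial s^2 + 1.4*w0*s + w0^2 gives the gains directly; scaling w0 up moves both poles away from the origin. A sketch with illustrative numbers:

```python
def itae_pi_gains(K, tau, w0):
    """Place the closed-loop poles of PI control around K/(tau*s + 1)
    on the 2nd-order ITAE polynomial s^2 + 1.4*w0*s + w0^2.
    Increasing w0 scales the poles away from the origin (faster loop).
    Dead time is ignored in this sketch."""
    Kc = (1.4 * w0 * tau - 1.0) / K   # match the s^1 coefficient
    Ti = K * Kc / (w0 ** 2 * tau)     # match the s^0 coefficient
    return Kc, Ti

# Illustrative: tau = 330 s, K = 1, chosen bandwidth w0 = 0.01 rad/s
Kc, Ti = itae_pi_gains(1.0, 330.0, 0.01)
```

The step size never enters this calculation, which is the point being made: the pole locations, not the step response of one particular test, determine the gains.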

The problem with using ITAE to calculate the controller gains directly is that the size of the step will change the values of the controller gains.

The Matlab site is accurate. They could just do a better job of explaining how to use the ITAE coefficients correctly.
I have Python examples here of how to calculate the 3rd and 4th order ITAE coefficients and how to scale/move the closed-loop poles to get the desired bandwidth.

Do I need to make a video on this?
Hey pnachtwey, I was reluctant to use the term ITAE formula as I agree with you that the OP has misapplied it. The formula he/she quoted was something akin to one proposed by Lopez many years ago and, as I said, it rarely gives any robustness in the tuning. In my attached file I gave an example of Lambda tuning, which works well if the model can be represented closely by a SOPDT model, and it tends to cancel the pole quite well. I personally use sophisticated model identification software with pole-cancelling algorithms that work extremely well, but for the majority of applications Lambda will provide an adequate solution.
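For reference, the Lambda/pole-cancellation idea for a SOPDT model can be sketched with a series-form PID (Python; the model numbers are illustrative, and choosing lambda in practice involves more judgment than this one-liner suggests):

```python
def lambda_pid_sopdt(K, tau1, tau2, theta, lam):
    """Lambda tuning sketch for a SOPDT model
        G(s) = K*exp(-theta*s) / ((tau1*s + 1)*(tau2*s + 1)),  tau1 >= tau2.
    A series-form PID cancels both process poles; lam is the desired
    closed-loop time constant (larger lam = slower but more robust)."""
    Kc = tau1 / (K * (lam + theta))  # proportional gain
    Ti = tau1                        # integral time cancels the slow pole
    Td = tau2                        # derivative time cancels the fast pole
    return Kc, Ti, Td

# Illustrative: tau1 = 330 s, tau2 = 40 s, theta = 90 s, lam = 3*theta
Kc, Ti, Td = lambda_pid_sopdt(1.0, 330.0, 40.0, 90.0, lam=270.0)
```

Note that the robustness lives in the choice of lam: picking lam several times the dead time gives the slow, well-damped response being recommended in this thread, at the cost of speed.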
Some info on tuning.
It is very important to use a recording system that logs the measured temperature as well as the temperature setpoint. This way you can see whether oscillations of the measured temperature get bigger or smaller, including frequency changes.
You can use a computerized system for it, or an ordinary paper recorder. This prevents you from going around in endless circles.
I often use the step response to get a rough PID setting to start with. First I set the set value halfway along the band and let the process stabilize before I initiate a tuning. Keep in mind that the measured temperature must be able to go both up and down, so don't tune with water boiling at 100°C; set it to 50°C for the tuning.
I often keep the D action at 0 sec at this point; that will come later, for fast reaction to sudden process variations.
After the auto-tuning with step-response I start with manually changing the P and I settings.
Double the P, let it stabilize, and see what happens. If it gets worse, use a value of 50% of the auto-detected one.
Do the same with the I.
And afterwards I always set the D to 25% of the I. That is just something I learned from 30 years of experience with PID control.
Ok, now we also have the D.
We start "playing" again with the P, I and D values (make them 2x, or 0.5x, and see what happens).
What is very important here, in order to prevent you from going around in endless circles, is writing down the PID values you change.
When you have the tuning right in the stable process, make sudden changes, for example by putting product into the medium. Normally the D action catches this.

Now something else. Sometimes you just cannot get it right because the process reacts so very slowly...
Then I use a so-called feed forward control.
Every xx seconds (make it adjustable) I look at the difference between PV and SV and adjust the SV by a small, fixed amount up or down.
I do this until I am almost at the desired set value (define a band around it). When I reach it, I switch over from this feed-forward-like control to PID to remove the last small error.
When I get outside the band, I switch back to feed forward.

Sometimes the product will not be right while feed-forwarding; when I'm controlling in that mode, I prevent new product from coming into the process until I'm back on PID.
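That mode-switching scheme can be sketched roughly in Python (names and numbers are illustrative; the real implementation would live in the PLC scan, and the step size and band would be tuned on the machine):

```python
def stepper_or_pid(pv, sv_target, sv_current, band, step):
    """Sketch of the scheme described above: outside a band around the
    target, ramp the working setpoint by a fixed step toward the target
    (the feed-forward-like mode); inside the band, hand the loop back
    to the PID, which removes the last small error."""
    if abs(pv - sv_target) > band:
        # ramp mode: fixed-size step toward the target setpoint
        if sv_current < sv_target:
            sv_current = min(sv_current + step, sv_target)
        else:
            sv_current = max(sv_current - step, sv_target)
        mode = "ramp"
    else:
        sv_current = sv_target
        mode = "pid"
    return sv_current, mode

# PV still far below target: keep ramping in fixed increments
sv, mode = stepper_or_pid(pv=150.0, sv_target=200.0, sv_current=160.0,
                          band=5.0, step=2.0)
```

The fixed increment is what keeps this mode gentle on a slow process; the PID only ever has to deal with the small residual error inside the band.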

And when you have the correct PID settings: store them in a safe place, and also write them on a sticker on the controller. Write down the units of the P, I and D as well, so a replacement controller, even of another brand, can be set correctly.

For suppressing overshoot it makes sense to calculate the PID more often per second; this way the D-action can do its job better on the over/undershoot. If you calculate only once per second, or more slowly, the overshoot/undershoot will not be handled well.
Keep it fast and tight when suppressing overshoot/undershoot.
The RKC Instrument FZ controller I mentioned above has special over and undershoot algorithms. Also the PID control is different from standard. These algorithms have been developed by RKC specifically for applications in the Semiconductor industry like CVD/PVD furnaces, optics temperature control in wafersteppers etc. But it also works great for other processes.
The FZ measures and calculates the PID 20 times per second. We even have a GZ series that measures and calculates 100 times per second. This is important, for example, with very fast reacting processes like RTP (rapid thermal processing) and RTA (rapid thermal annealing), where lamps are used for heating. It is also very important with pressure control to measure and calculate often per second.

Just some thoughts....
@patrickduis, although I agree that using a cascaded loop is best, you are suggesting trial and error.
First, everything can be calculated or estimated, but this requires a model, which the OP said he doesn't have. My advice is to get one. The inner steam loop will need an extra temperature sensor. The input to the inner steam loop from the outer loop will be a temperature, not a percent.

I do think there needs to be a feed forward, but it should be used as a bias on the outer loop. This will require an extra temperature sensor and a way to count the bottles. The OP said there is a temperature sensor to measure whether the bottle reached the desired temperature. Perhaps that sensor is good enough, and a new bottle can be assumed to enter at ambient. Like I said above, the bias needs to be proportional to the sum of the temperature differences between the bottles and the set point. Recirculated bottles don't need to be heated as much, so they impose a lower load than new bottles. This can also help compensate for dead time, but we really don't know yet because the OP hasn't responded.

I know this is extra work and extra money, but perhaps fewer bottles would need to be recirculated because of better temperature control. This would increase the throughput of the machine if recirculating bottles slows the rate of new bottles entering the system.

GrahamJ says he uses a sophisticated system identification software package to determine a model. I think these are worth the money because they can improve control and save a lot of time "twiddling" gains.

One of my favorite quotes.
Lord Kelvin said:
I often say that when you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind.
Theoretically everything can be calculated.
But in a practical situation the model is always lacking something and there can be non-linearities in the process and dead-times. Then the modelling takes much more time than just a few hours of tuning.
Sometimes you just need to be pragmatic and make a good estimate based on experience with controls.
Over the years you gain such knowledge after doing many PID tunings yourself.
I have seen many machines and over the years you develop a sense to estimate the PID settings when you see a machine behave.

Nice quote from Lord Kelvin!!!! Very nice. I'll keep that one in mind forever from now on!
Perhaps the OP could clarify, but my view is the bottles are NOT part of the load for the heater and therefore would not be a load index for any feedforward. The primary load indices for the heater will be the incoming temperature from the holding tank and/or the product flow. I believe we are in violent agreement that this loop would benefit immensely from creating a secondary controller on the hot water loop. However, without knowing much more about the product, holding times, flowrates etc., it is difficult to offer much more advice. I also like the quote from Lord Kelvin, but not as much as these ones from W. Edwards Deming: "In God we trust, all others bring data", and "Without data, you're just another person with an opinion". People seriously underestimate the power of good quality step testing to achieve a process model. Once you know the transfer function of a process, the tuning just happens!

I agree we need more OP input.
my view is the bottles are NOT part of the load for the heater
It is the bottles and their contents that must be heated to some temperature. The cooler the bottles, the greater the load. Also, the load increases with the frequency of bottles.

patrickduis said:
Theoretically everything can be calculated.
But in a practical situation the model is always lacking something and there can be non-linearities in the process and dead-times. Then the modelling takes much more time than just a few hours of tuning
NO! I fear no non-linearities or dead time. I don't fear time constants that aren't constant; I have examples of changing time constants.

One must be able to write differential equations that express the non-linearities etc. I write my own system identification, model generation and gain calculation software. I doubt you can find an off the shelf package that can handle more than the standard cases.

Differential equations rule because they are so flexible. You just need to know how to use them and have a gut feel for the system you are applying them to.

This is a video of one of our students tuning a difficult non-linear motion control system that goes over center. The student doesn't need to know the math because we make it easy. The gains and inertia change as a function of angle. The student tells the arm to move in degrees, but that gets translated into linear motion of the hydraulic cylinder.

This is where I model a non-linear valve. I tried a few different models, but approximating the non-linearities with 10 segments worked best. I used a swept sine wave; you don't use step jumps for motion control systems when moving a large mass.
It takes about 20 minutes for the software to find the best coefficients for the model on my 4th-gen i7.
Step jumps in the set point are OK if nothing is moving. Off-the-shelf software will not be able to do this.

Certainly, without further input from the OP we can keep guessing, but the original sketch shows a typical Pasteuriser arrangement where the criterion is a time/temperature relationship of the product, not the product temperature in the bottles. Indeed, it would not be uncommon for the heat in the product to be recovered and/or the product cooled before bottling (again guessing a bit here), but we can certainly agree that filling rate could be a feedforward index, which could be derived from the filler speed and/or product flow. I also agree that non-linearity is an overrated problem. I intentionally tune controllers with a high degree of robustness (gain and phase margins), which provides excellent control over the vast majority of industrial control loops. Typically, holding tubes will provide about 10 - 20 seconds of dead time, which, while significant, is not a major problem, and the application of a cascade loop on the hot water will improve matters considerably. A Smith Predictor can provide a good solution IF the process model is well defined and, more importantly, repeatable. If not, then don't even try!
pnachtwey said:
"One must be able to write differential equations that express the non-linearities etc. I write my own system identification, model generation and gain calculation software. I doubt you can find an off the shelf package that can handle more than the standard cases.
Differential equations rule because they are so flexible. You just need to know how to use them and have a gut feel for the system you are applying them to.
This is a video of one of our students tuning a difficult non-linear motion control system that goes over center. The student doesn't need to know the math because we make it easy.. The gains and inertia change as a function of angle. The student is telling the arm to move in degrees but that gets translated into linear motion of the hydraulic cylinder. "

I sense you are from an educational and theoretical background. I also learned it this way, more than 30 years ago: Laplace transforms to Z-transforms, dead-beat response calculation, etc. It is really nice to have a good model for theoretical reasons so you can calculate on it for hours. But in the real, practical world it is not used so much. However, practical step-response autotuning is used extensively by engineers in plants.

I'm just an engineer who has to solve problems quickly; there is no time from my customer for developing complex process models etc. They just need to get the machine running with the new temperature controller they bought, because the other one broke down and they didn't write down the PID settings (of course).
In the practical engineering world, things just have to be solved and work OK; the PID settings don't have to be exactly right to the last percent. And it has to be done quickly, because the machine has to run and make money again.
Only experienced people can do this quickly, as I have noticed over 30 years.
This is the data I get from a standard step test, which usually takes between 2 and 3 minutes, or a matter of seconds for fast loops. We perform a digital Laplace transform of the response to determine an accurate model of the process in the frequency domain. From the transfer function we cancel poles to achieve I or ID action, then choose P settings to give the desired stability criteria. The real irony here is that I will tune a loop with a very high degree of certainty that it will work, and I will do it in a fraction of the time taken by trial and error or any of the published methods. And I am a practitioner first and foremost who went back and learnt the theory, and when you see the two come together it is a beautiful thing. You can see that we identified a SOPDT model here.
OK, it is really nice that you can calculate the response this way, and so fast too.
But coming from a practical point of view: which P, I and D settings did you arrive at? I understand this is for one PID controller and not a cascaded system?
Tuning cascade controllers starts with finding the transfer function and tuning for setpoint response. Then simply keep moving up the food chain, remembering that the tuning of the inner loop will affect the dynamics of its outer loop, but not vice versa. Choosing a Fast, Slow or Medium response depends on a number of factors, but mainly comes down to the quality of your test data and whether your system has significant non-linearity. Non-linearities are usually managed quite well with a Slow to Medium response. My general advice on the D term is not to use it unless you have a good understanding of the process and it is NOT deadtime dominant. The irony is that the control of the majority of industrial processes is not helped by D, but if you get it wrong things turn bad very quickly.
Yes, I also thought that way when I came out of school. A normally behaving temperature process, like an oven for example, doesn't need a D-action. I have controlled many ovens with just PI controllers, without D action, because in the model most ovens just behave like an integrator; that's how I calculated it, and set it.

In the real world, however, in the factory, experienced guys who had done that tuning work for many years argued with me and showed me in practical situations that using a D action makes for a better-working production process.
There are always small disturbances that don't occur in theory or in the model, but only in practice. Think of small fluctuations in product feed, a door that is briefly opened with wind blowing into the factory, small leaks, brief noise on the measurement or control signals because somebody switched on another heavy machine, or short fluctuations in water pressure and the power supply. Sudden, short-lived things the theory and the model didn't account for.
And therefore I often (but only if needed) set a D action to about 25% of the I setting.
It just works a little bit better, as experience in practical situations has shown. The desired temperature is reached faster with less over/undershoot, and the control is just a little tighter than with PI alone. That also works, but then the control reacts a little more sluggishly to unexpected process variations.

I'm still curious of the P, I and D settings you arrived at with your calculation.