
Hi everyone, I'm a college student studying Electrical Engineering, currently working on my final-year project. My question is: What are the advantages and drawbacks of using the LQR (Linear Quadratic Regulator) design method? Does it always work? That's all. Any help that any of you might offer would be greatly appreciated. Thanks in advance.


#### Mehmet Alpay

In a nutshell, linear quadratic regulator design methods involve the determination of an input signal that will take a linear system from a given initial state x(t0) to a final state x(tf) while minimizing a quadratic cost functional. The cost functional in question is the time integral of a quadratic form in the state vector x and the input vector u, such as x^T Q x + u^T R u, where Q is a non-negative definite matrix and R is a positive definite matrix. With this basic definition in place, various flavors of the linear quadratic regulator design problem can be posed; e.g., finite horizon (tf finite), infinite horizon (tf infinite), time-varying (the system, the R and Q matrices themselves, or both), etc. Also, the final state itself may or may not contribute to the cost functional as a separate term.

The main advantage is that the optimal input signal u(t) turns out to be obtainable from full state feedback; i.e., u = Kx for some matrix K. The feedback matrix K in question is obtained by solving the Riccati equation associated with the particular LQR problem you have at hand. One of the disadvantages of the LQR controller is that obtaining an analytical solution to the Riccati equation is quite difficult in all but the simplest cases.

Nevertheless, there are quite a few numerical methods which you can apply to obtain approximate solutions (check out Matlab's Control toolbox, for
example).
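To make the numerical route concrete, here is a minimal Python sketch of the infinite-horizon LQR design described above, using SciPy's algebraic Riccati equation solver instead of MATLAB. The double-integrator plant and the identity weights are made-up illustrative choices, not anything from the posts.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative plant: a double integrator, x1' = x2, x2' = u
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)          # state weight (non-negative definite)
R = np.array([[1.0]])  # input weight (positive definite)

# Solve the algebraic Riccati equation A'P + PA - P B R^-1 B' P + Q = 0
P = solve_continuous_are(A, B, Q, R)

# Optimal state-feedback gain: u = -Kx with K = R^-1 B' P
K = np.linalg.solve(R, B.T @ P)
print("K =", K)

# The closed-loop matrix A - BK should have all eigenvalues in the
# left half-plane, i.e. the regulated system is stable.
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```

For this particular plant and these weights, the Riccati equation can also be solved by hand, giving K = [1, sqrt(3)], which is a handy sanity check on the numerical answer.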

Will the LQR always work? This is a bit vague: it depends on what you mean by "work". If you are asking whether you will always be able to obtain an LQR solving the particular optimization problem you have, the answer is "No". The reason is simple: the solution to a particular LQR problem is obtained under the implicit assumption that the desired final state is reachable from the given initial state. If this is not the case, then you cannot construct any input u(t) - let alone an optimizing one - satisfying the main requirement for the existence of a solution: that you can actually reach the final state! Even if you do manage to solve for the LQR, there is no guarantee that the resulting closed-loop system will be stable or well-behaved in any other way; e.g., a state that is "unobservable" from the point of view of the cost functional might well be going unstable (read: blowing up!) and the controller you end up with would not even know it, since that particular state had no bearing on its design - it was "unobservable", remember?

Aside from these abstract concerns, there are more practical problems with implementation:
1) Full state feedback is hard to come by: you are more than likely to have only a few output measurements from which you need to "infer" the state information via state observers. Put the resulting observer-based feedback in the context of LQR design, and things get complicated real quick!
2) The standard LQR design does not put any restrictions on the amplitude of the input signal u(t). Your optimizing input might well turn out to have amplitudes well above the signal generation/carrying capacities of your real system (read: saturation, fuses blowing, etc.)

Finally, not to put down optimal control or anything, but optimizing the system performance with respect to one single criterion (such as the quadratic cost functional you are trying to minimize in the LQR design) usually means sacrificing the overall system performance with respect to other criteria. The LQR controller will do precisely what it has been designed to do: minimize a cost metric. Whether this is enough for your design purposes is something you will have to decide as the design engineer.

Sorry for the long-winded explanation: I hope it was worth reading through. If you need more details on LQR design, I suggest the book "Optimal Control: Linear Quadratic Methods" by Brian D. O. Anderson and John B. Moore, published by Prentice Hall.

Take care & good luck with your project!

Mehmet Emin Alpay
Control Systems Engineer
ESI, Central Engineering
(503) 672 5755


#### Smith, Tony G

Advantages:
1) Stability is guaranteed if you have
   a) all of the states in the system available for feedback
   and
   b) a really good model of your system.
   In fact, not only is stability guaranteed, but the stability
   _margins_ are guaranteed.
2) The controller is automatically generated by simply
   selecting a couple of parameters (no need to do loop-shaping).

Disadvantages:
1) You cannot use experimental Bode plots alone to do loop-shaping;
   you must derive a model from the experimental data.
2) If you can't measure all of the states, you must use an observer
   to reconstruct them (LQG). Stability is still guaranteed (if
   you have a perfect model of your system), but stability margins
   may be arbitrarily small.
3) It may be difficult to get a controller that works the way you
   desire.
4) The parameters that are used to generate the controller are
   generally not directly related in an intuitive way to the
   requirements, but are rather "knobs" that you turn to get the
   desired effect.

I have used classical techniques and LQG to design motion controllers. You can (with a little effort) show that there is a very close
correspondence between the resulting controllers. In fact, there is a fairly straightforward relationship between the weights used in the LQG method and the desired closed-loop bandwidth and damping. When I do an LQG design (or any other "modern" controller), I always keep an eye on the open-loop Bode plots - just to make sure that they make sense. In the early days of modern control, it took a while for everyone to realize that they should still look for a reasonable open-loop crossover frequency. One of the easiest mistakes to make is to let the method come up with a controller that has a bandwidth that is just too high for your system to really deliver (because of unmodeled dynamics).

IMHO, there is no magic control design technique. Any linear control technique yields a filter (the compensator) in the end, and there is only so much that the filter can do. Just because the filter comes from PID, or LQG, or QFT, or H-infinity, doesn't necessarily give it special powers. Each of these techniques may provide its own insight and its own way of dealing with particular system design requirements, but it's best to always keep the fundamentals in mind - watch your crossover frequency, know when your model starts to fall apart, etc. Of course, having a little experience (i.e., screwing up a few times) helps to illuminate some of the more important points.

I'll be very interested in hearing the experiences of other list members on this subject.

Tony G. Smith
Sandia National Laboratories
505-844-8371


#### andy clegg

LQR is an optimal controller: optimal in that it is defined so as to provide the smallest possible error at its input, i.e. in one or more of the outputs of the controlled system (or 'plant'), while also minimising the control effort. Compared to LQR, a PID controller simply creates a stable system, without explicitly optimising anything (Advantage #1). LQR is also straightforward to use for multivariable systems; the design procedure is essentially the same as for single-input-single-output systems (Advantage #2).

LQR control is calculated based on a linear model of the plant under control. If the linear model represents the plant exactly, then the controller is optimal. However, if there is a mismatch due to model inaccuracy (i.e. errors in the parameters of the linear model), plant changes (e.g. changes in vehicle or machine speed, or power level in a power plant) or nonlinearities (i.e. the real system is not actually linear), then the resulting controller's performance will degrade and the system may even become unstable.

The LQR is a state feedback controller. The states of a system can have some physical meaning (e.g. velocity, acceleration), but sometimes they have no physical interpretation at all. Consequently there may be difficulty in obtaining the states to use for feedback. To get around this, another function is needed, called an observer, which estimates the values of the states. This makes the system even more complex.
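The observer idea described here can be sketched in a few lines of Python. This is a hypothetical example, not from the posts: a double-integrator plant where only position is measured, a Luenberger observer reconstructing both states, and an LQR-style gain fed the estimated state.

```python
import numpy as np
from scipy.signal import place_poles

# Illustrative plant: double integrator with only position measured
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])   # y = x1 only; x2 must be estimated

# State-feedback gain, e.g. from an LQR design (value assumed here)
K = np.array([[1.0, np.sqrt(3.0)]])

# Observer gain L: place the observer poles well to the left of the
# controller poles so the estimate converges faster than the plant.
# (Observer design is the dual problem, hence the transposes.)
L = place_poles(A.T, C.T, [-5.0, -6.0]).gain_matrix.T

def observer_step(xhat, y, u, dt):
    """One Euler step of xhat' = A xhat + B u + L (y - C xhat)."""
    xhat_dot = A @ xhat + B @ u + L @ (y - C @ xhat)
    return xhat + dt * xhat_dot
```

The control law then becomes u = -K @ xhat rather than u = -K @ x, which is exactly where the extra complexity (and the lost stability margins Tony mentions) comes in.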

> Does it always work ?

Not always. You can't simply apply an 'advanced' control design technique like LQR and expect it to work without some effort. Such techniques always require sound engineering practice to avoid problems. For example, finite word lengths of data (in digital computer implementations) can be problematic, but these problems can (almost) always be alleviated.

The main potential problem is that a 'plant' is hardly ever linear with precisely known parameters. Therefore, you have to build some robustness against parameter variations/uncertainty into your control design. Also, you may have to do some gain-scheduling or switching between individual controllers to account for changes in operating condition (e.g. aircraft speed or altitude). As noted above, the implementation of LQR controllers requires some effort.

If you want to probe deeper into the theoretical side of LQR design, implementation and (dis)advantages, there is a lot of information to be
found in several textbooks and in the IEEE Transactions on Automatic Control and other 'academic' journals, which your college library may have. Maybe your project supervisor could steer you in the right direction if you need to go into such detail.

Hope this helps.

PS Thanks to my colleague, Gerrit van der Molen, for adding his own comments to this email as well.

Andy [email protected]___

Advanced Control Technology Club, Industrial Systems and Control Ltd.,
50 George Street, Glasgow, G1 1QE Tel: (+44) 0141 553 1111
http://www.isc-ltd.com/actclub.html Fax: (+44) 0141 553 1232
______________________________________________________________________

#### PJ

Dear Mr. Mehmet Alpay and everyone on the list,

Thank you very much for your explanation regarding the LQR design method. But I'm still confused about one thing. If I've got time-domain specifications, such as rise time, max overshoot, and settling time, then I can relate them to pole locations in the s-plane. So basically I have to place the closed-loop poles at locations that satisfy the given time-domain specs. Then my question is: how can I find the values of Q and R so that they place the closed-loop poles in the right places to satisfy all the time-domain specs?
For your convenience, I explain again what LQR is: LQR (Linear Quadratic Regulator) is just an optimal design method. Its objective is to minimize the error signal and/or control effort of a control system. In other words, we have to solve
    J = (1/2) * integral( x'Qx + u'Ru ) dt
so that J is minimized (x = state vector, u = input signal).

Maybe anyone on the list can help me out.

Regards,

PJ
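The trial-and-error loop implied by PJ's question - pick Q and R, solve the LQR problem, then check where the closed-loop poles landed - can be sketched as follows. The plant and the weight sweep are illustrative assumptions, not values from the thread.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative plant
A = np.array([[0.0, 1.0],
              [0.0, -0.5]])
B = np.array([[0.0],
              [1.0]])
R = np.array([[1.0]])

results = {}
for q1 in (1.0, 10.0, 100.0):        # sweep the weight on the first state
    Q = np.diag([q1, 1.0])
    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)
    # Closed-loop poles for this choice of Q: compare against the pole
    # region implied by the rise-time / overshoot / settling-time specs.
    results[q1] = np.linalg.eigvals(A - B @ K)
    print(f"q1 = {q1:6.1f} -> closed-loop poles {results[q1]}")
```

Raising the state weight pushes the poles toward faster responses, so a few sweeps like this usually get the dominant poles into the spec region, even though there is no direct formula from (rise time, overshoot) to (Q, R).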


#### WHY DO WE USE LQG OVER H-INFINITY

<clip>
> IMHO, there is no magic control design technique. Any linear control
> technique yields a filter (the compensator) in the end, and there is only
> so much that the filter can do. Just because the filter comes from PID,
> or LQG, or QFT, or H-infinity, doesn't necessarily give it special powers.
<clip>

Why do we use LQG over H-infinity?



#### Joshua Schultz

PJ-

The gains generated by the LQR algorithm will allow you to calculate the closed-loop pole locations. You can iterate on the LQR parameters until you get what you want. However, if you are looking for specific performance criteria, I suggest you use the simpler technique of pole placement. The attractive feature of LQR is its robustness properties, not its performance properties per se. Try place() or acker() in Matlab. Just make sure that if you have a repeated pole, you offset one of them slightly; the algorithm can't do repeated poles.

-Josh
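For reference, here is what Joshua's pole-placement suggestion looks like using SciPy's counterpart to MATLAB's place(). The plant and the desired poles (roughly damping ratio 0.7, natural frequency 2 rad/s) are made-up values for illustration.

```python
import numpy as np
from scipy.signal import place_poles

# Illustrative plant
A = np.array([[0.0, 1.0],
              [0.0, -2.0]])
B = np.array([[0.0],
              [1.0]])

# Desired closed-loop poles, picked from hypothetical time-domain
# specs (damping ratio ~0.7, natural frequency ~2 rad/s)
desired = np.array([-1.4 + 1.43j, -1.4 - 1.43j])

# Compute the state-feedback gain that places the poles exactly
K = place_poles(A, B, desired).gain_matrix
print("K =", K)
print("achieved poles:", np.linalg.eigvals(A - B @ K))
```

As Joshua notes for place()/acker(), this algorithm rejects repeated poles for a single-input system, so nudge one of a repeated pair slightly before calling it.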


#### siriram

hi,

I'm working on adaptive control and I'm in need of C code for an LQR controller. Can anyone help? Please mail me if you can at [email protected]

regards,
siriram


#### V.Rajeswari

Hi,

Will anyone please tell me what the more robust technique beyond LQR is?

And if there is one, can it be applied to pitch angle control of an aircraft?


#### Moha

Hi ,

I have designed an LQG controller for my plant.

Here is the weird point: when I control this same plant using just the LQR gain that I applied inside the LQG design, the result is not as satisfactory as the one I got with the full LQG controller.

I would like to know whether there is even a 1% chance that this can happen.

I ask because LQG usually seems to perform worse than LQR, since it works with estimated states.

Just as a reminder, I also have some unobservable states in my plant!

Great Summer for all of you,

Moha


#### seryna

Hi

Can I know how to choose the best values for Q and R? Is there any specific method that can be used to get the values for Q and R?

Thanks


#### manoj

Please use Bryson's rule; that is the best rule to start with. It gets you to a reasonable starting point, and then you can do some trial-and-error tuning around that point to get smoother results.

> can i know how to choose the best value for Q and R? is there any specific
> method that can be used to get the value for Q and R?
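Bryson's rule, as manoj suggests, sets each diagonal weight to the inverse square of the largest acceptable value of the corresponding state or input. A minimal sketch, with made-up limits for illustration:

```python
import numpy as np

# Maximum acceptable excursions (illustrative values, not from the thread)
x_max = np.array([0.1, 2.0])   # largest acceptable value of each state
u_max = np.array([10.0])       # largest acceptable input amplitude

# Bryson's rule: Q_ii = 1 / x_i,max^2 and R_jj = 1 / u_j,max^2,
# so every term of x'Qx + u'Ru is of order one at its limit.
Q = np.diag(1.0 / x_max**2)
R = np.diag(1.0 / u_max**2)

print("Q =", Q)
print("R =", R)
```

These diagonal values are only a starting point; as manoj says, some trial-and-error scaling of individual entries afterwards is normal.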