Gas Turbine Exhaust Temperature Control Curve

JCP
Can somebody please explain how to determine Gas Turbine Exhaust Temperature Control Settings.

Thanks in advance.
 
This description is for GE-design heavy duty gas turbines, and does not purport to cover aero-derivative units or turbines produced by any other manufacturer.

GE uses a very detailed and complex computer program to calculate the compressor discharge pressure-biased exhaust temperature control curve that represents a constant firing temperature (the temperature of the combustion gases entering/exiting the first stage turbine nozzles). It's the firing temperature that's really being controlled by the exhaust temperature control curve, and the firing temperature represents the combustion gas temperature experienced by the hot gas path components (combustion liners, transition pieces, turbine nozzles, turbine buckets, exhaust components). So, it's important to limit and control this temperature to maximize parts life while still producing as much power as possible.
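To make the idea of a CPD-biased curve concrete, here is a minimal sketch of the usual shape of such a limit: the lesser of a flat "isothermal" ceiling and a line that falls as compressor discharge pressure rises (higher CPD means a higher expansion ratio across the turbine, so a lower exhaust temperature corresponds to the same firing temperature). All of the constants below are hypothetical illustration values, not actual GE control constants, and the real curve is far more involved, as the post explains.

```python
def exhaust_temp_limit(cpd_psig,
                       isothermal_limit=1005.0,  # deg F ceiling (hypothetical)
                       slope=-2.0,               # deg F per psig (hypothetical)
                       intercept=1250.0):        # deg F at zero CPD (hypothetical)
    """Return an allowable exhaust temperature (deg F) for a given
    compressor discharge pressure (psig).

    The limit is the lesser of a flat isothermal ceiling and a
    CPD-biased line, which is the general shape of this kind of
    control curve."""
    biased_limit = intercept + slope * cpd_psig
    return min(isothermal_limit, biased_limit)

# At low CPD the isothermal ceiling governs; at high CPD the biased line does.
print(exhaust_temp_limit(100.0))  # 1005.0 (isothermal ceiling)
print(exhaust_temp_limit(160.0))  # 930.0  (CPD-biased line)
```

The control system continuously compares measured exhaust temperature against this limit and cuts fuel back as the limit is approached, which is how a firing temperature that is never directly measured gets held roughly constant.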

The calculations include things like the expected pressure drop across the inlet air filters, the expected pressure drop due to the configuration and length of the inlet air duct, the expected pressure drop of the exhaust duct and any HRSG (Heat Recovery Steam Generator, or "boiler") connected to the exhaust, the height and configuration of the exhaust stack, the site elevation above sea level, the expected minimum and maximum ambient temperatures, the expected minimum and maximum humidity, the type of coatings used on the hot gas path components (if any), the design of the hot gas path components, the types of seals used on the turbine buckets, and the types of seals used on the turbine rotor. It's a very long list of information which must be assembled and input to the program.

All of the above are just for a conventional combustor-equipped unit *without* any kind of NOx emissions reduction (water injection, for example, or Dry Low NOx combustors, etc.). And, if the unit uses any kind of inlet cooling that must also be factored into the calculation.

The data which is used by the program has been gathered over decades and includes the results of many new unit performance tests during that time as well as laboratory data. The program is usually run over the course of several hours, sometimes days, to cover various operating conditions.

And, it's my personal belief (because I don't know exactly how the program works and why) that all of this is necessary because the parameter which is ultimately being controlled (the firing temperature, the temperature of the combustion gases entering/exiting the first stage turbine nozzles) is not measured. It's just empirical data that's used to set up the relationship of compressor discharge pressure to exhaust temperature for a given firing temperature.

Why do all of this? To maximize the life of the hot gas path parts while maximizing the power output of the turbine under normal operating conditions. (The operating conditions we're talking about are inlet pressure drops, ambient pressure changes normally experienced, ambient temperature changes normally experienced, compressor degradation due to fouling, etc.)

Sure, a number (or numbers) could be chosen that approximates a constant firing temperature or does not allow the firing temperature to exceed a certain value under "extreme" conditions, but at other times (most of the time, actually) the unit would not be producing as much power as it was capable of, because the firing temperature would not be optimized. The parts would last longer, but the overall efficiency would be lower and the power output would generally be lower.

Or, a set of numbers could be chosen that averages the firing temperature over various operating conditions, but that would mean that under some conditions the power output would be a little lower than optimal and under other conditions the power output would be higher than optimal. This would probably result in a reduction of the hot gas path parts life as opposed to maintaining a constant firing temperature regardless of operating conditions.

So, it's really not a simple process for GE-design heavy duty gas turbines, and I would suspect it's a similar process or procedure for other turbine manufacturers as well. I've seen some third-party firms that have "reverse-engineered" parameters from the published documentation provided with the original turbines, but one would have to believe that process is more or less an approximation.

I just noted (because we can't see the information until we reply to a thread) that you have checked 'Siemens' as the vendor when you posted your question. I would presume you were referring to Siemens turbines, so all of the above may not be applicable, but one would think they have something of a similar process as well.

Hope this helps!
 
Dear CSA

Thanks for your very informative explanation about our topic.

Just to add to our discussion, I am talking about Siemens Westinghouse units. Actually, we have two identical gas turbines operating with water injection for NOx control and Wet Compression for power augmentation.

We have a dry exhaust temperature control curve, and when water injection and Wet Compression are in operation, a temperature bias corresponding to the amount of additional water flow is added to the dry curve (i.e. dry exhaust temp limit + water injection bias + Wet Compression bias). The two units have identical exhaust temperature control settings; however, one unit operates with 4~5 MW higher output than the other.
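The bias arithmetic described above can be sketched in a few lines. This is only an illustration of the stated relationship (dry limit plus a bias proportional to each water flow); the gain values and flow units are made-up assumptions, not Siemens Westinghouse settings.

```python
def biased_exhaust_limit(dry_limit_c,
                         wi_flow_kg_s, wc_flow_kg_s,
                         wi_gain=2.5,   # deg C per kg/s of WI water (hypothetical)
                         wc_gain=1.5):  # deg C per kg/s of WC water (hypothetical)
    """Effective exhaust temperature limit = dry curve value
    + water injection bias + Wet Compression bias, each bias
    proportional to its water flow, as described in the post."""
    return dry_limit_c + wi_gain * wi_flow_kg_s + wc_gain * wc_flow_kg_s

# Dry limit 540 C, 4 kg/s water injection, 6 kg/s wet compression:
print(biased_exhaust_limit(540.0, 4.0, 6.0))  # 559.0
```

Extra water mass flow through the turbine lowers the exhaust temperature for the same firing temperature, which is why the allowable exhaust temperature is biased upward when the water systems are in service.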

One thing that puzzles me is that the unit with the higher load has a lower compressor discharge pressure (by approx 0.3 bar) than the unit with the lower MW output (compressor inlet pressures are almost the same). Should identical gas turbines operating side by side require identical exhaust temperature control settings, or can individual settings be changed according to actual operational dynamics?

Due to some confidentiality with this issue, can you please reply to this email add: [email protected]
 