Hello all,
I'm what you'd call a "seasoned" automation professional, but I can't explain this one. The control gurus around here have helped me before... perhaps I'm missing something really basic.
The application is a large diesel engine plant on the Canary Islands, feeding the whole island; it is the only non-renewable power source there. There are several engines, all working in speed droop. Two engines in particular are exactly the same: both mount brand-new Woodward electronic controllers with the same software and the same configuration. Mechanically, the actuator is the same, the turbo is the same, and both engines just went through on-site maintenance at the same time.
Both engines are set to work on kW-based speed droop. A transducer measures the active power generated by the genset, and a speed bias is calculated from that active power. Droop is set on both engines at 4%, rated speed at 500 rpm, so over the 0 to 100% load range we'd have a 20 rpm spread in the speed setpoint.
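For reference, the bias calculation is just a straight line; a minimal sketch (the function and variable names are mine, not the Woodward configuration names):

```python
RATED_SPEED_RPM = 500.0
DROOP = 0.04  # 4% droop -> 20 rpm across the 0..100% load range

def droop_speed_bias(load_pct: float) -> float:
    """Speed bias (rpm), negative, derived from the measured kW load."""
    return -(load_pct / 100.0) * DROOP * RATED_SPEED_RPM

# At 50% load the bias is -10 rpm, so a raised reference of 510 rpm
# sums back to exactly 500 rpm at the summing point.
```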
One of the engines works as expected. Once the breaker is closed, the speed setpoint is reset to 500 rpm and the plant AGC starts giving raise pulses to pick up load. As the speed reference goes up, the engine picks up load following the droop curve: at 50% load, for example, the speed reference is 510 rpm and the speed bias is -10 rpm. At the summing point the reference is always 500 rpm, and we are happy.
On the second engine, where everything is exactly the same hardware- and software-wise, the loading curve does something strange. The AGC gives the raise pulses, but the real power is higher than it should be for any given speed reference. For example, at 50% load, instead of a reference of 510 rpm we have 509. The speed bias from the power % calculation is working as expected (at 50% power, -10 rpm). That 1 rpm difference makes the speed reference into the PID 499 rpm, which, given the rated power, translates to 1 MW of extra load relative to where the engine should be.
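Working the numbers backwards (the plant's rated power isn't stated here; the ~20 MW figure below is an assumption inferred from the "1 rpm ≈ 1 MW" relationship above):

```python
RATED_SPEED_RPM = 500.0
DROOP = 0.04
RATED_POWER_MW = 20.0  # assumption: inferred from the 1 rpm ~ 1 MW figure

# Droop slope: full rated power is spread over 4% of rated speed (20 rpm)
mw_per_rpm = RATED_POWER_MW / (DROOP * RATED_SPEED_RPM)  # 1.0 MW per rpm

# So a 1 rpm deficit in the summed reference shifts the steady-state
# operating point by about 1 MW along the droop line.
extra_load_mw = 1.0 * mw_per_rpm
```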
Here's an example of what I mean: 511.34 is the speed reference manipulated with raise/lower, -12.5 is the rpm bias from the kW droop curve, and the output (498.84 rpm) is the reference to the speed PID.
This also causes asymmetry when a frequency disturbance is injected: if a -2 rpm step is injected, the engine goes up 1 MW instead of 2, whereas if a +2 rpm disturbance happens, the engine goes down 3 MW instead of 2.
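Under an ideal linear droop, the response to a frequency step should be symmetric. A sketch of the expected numbers, again assuming ~20 MW rated power (my inference, not stated in the configuration):

```python
RATED_SPEED_RPM = 500.0
DROOP = 0.04
RATED_POWER_MW = 20.0  # assumption, inferred from the 1 rpm ~ 1 MW figure

def expected_power_shift_mw(speed_step_rpm: float) -> float:
    """Ideal droop response: a speed drop picks up load, a speed rise sheds it."""
    return -speed_step_rpm * RATED_POWER_MW / (DROOP * RATED_SPEED_RPM)

# -2 rpm step -> +2 MW expected (we observe only +1)
# +2 rpm step -> -2 MW expected (we observe -3)
```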
All in all, we have two "identical" systems that behave differently, and I cannot explain why. My best guess is that this 1 rpm offset appears during synchronization/breaker closure, but I am not able to explain it convincingly. Any ideas?