This section on controller tuning and control loop performance stops where most books and courses on the subject begin. Too often the subject is introduced with math unfamiliar to the reader. That does not have to be the case – there are simple concepts to help those unschooled in the math understand the basics, appreciate the limitations and know what can be expected.
Not everyone needs to know about controller tuning. Many businesses, like banking and insurance, probably need no one. Other businesses, like the automotive business, probably need only a few. But that still leaves many businesses that do need to know, and you wouldn’t be reading this if you didn’t need to know. In many industries proper tuning is vital to quality, and often decisions are made to take expensive steps when better tuning might do the job. On other occasions controller tuning is the scapegoat, being blamed for problems that are not related to tuning, with the result that time and energy are spent needlessly. Meanwhile a proper solution goes unsought.
While this document will give rules for tuning, the rules themselves are only part of the picture. The “tuner” needs to know what the desired performance is and what to expect – when the system is responding as well as can be expected, and when it is not. If it is not, then the rules may not apply, or should be modified. This book teaches not only rules, but what can and cannot be expected from tuning. It also teaches some of the common pitfalls. Why do the tuning rules not seem to work sometimes? In addition, tuning is often done to fix some problem. You cannot use or fix anything unless you know how it should work, and that includes control loops.
Tuning rules presume that the desired result is a “tight” system, one that does the best job of reducing the effects of disturbances, and/or one that responds quickly to setpoint changes. This may not always be what is desired. Level controls are often deliberately detuned (made more sluggish than the tuning rules would make them), a condition referred to as averaging level control. Many loops in a plant do not have a very vital bearing on quality or other business considerations, so whether they are tuned tightly or not is not all that important.
How many new operations are started up and have all the loops on automatic for the first product made? Not many. Quite possibly not any. Usually there are at least a few loops that stay on manual for some time, sometimes even years. It is hard to argue that these loops need tight tuning.
Controller Tuning Approaches
Controller tuning is mostly science. Tuning rules are based on mathematically clean and simple models that approximate the real world.
If the real world were mathematically clean and simple, then controller tuning would be all science (provided, of course, there was agreement on what was desired from the tuning). Happily, experience (and higher math) has shown that the real world can be simplified without sacrificing enough accuracy to worry about. It is known, with a reasonable degree of certainty, when this simplification is invalid, and therefore when the rules for tuning will break down.
There are numerous publications giving tuning rules, and, as you might expect, they don’t all give exactly the same rules. This is because different criteria are used for what constitutes “proper” tuning. Different mathematically pure and simple models are used to represent the “typical” process. Don’t worry about that, certainly not at this stage. The differences are relatively small compared with what I consider realistic goals in tuning. We will not be concerned about determining settings to within 1%, and generally not within 10 or 20%. For instance, if the tuning rules determine that a controller setting should be 1.00, it doesn’t really matter if it is set for 1.01 or 1.10.
Even if set for 1.20 you would be hard pressed to see the difference in most practical cases. Determining settings within 30 to 50% is a more realistic expectation. Two specialists in tuning will almost surely come up with different settings in any given situation. They are far more likely to, indeed will almost certainly, come up with the same analysis of what may be wrong with a loop. They are less likely to agree on what the best solution is. It is rather like politics in that regard.
Controller Tuning History
No reasonably thorough writing on controller tuning would be complete without paying tribute to J. G. Ziegler and N. B. Nichols. Their contribution was a quantum leap forward in the science and/or art of tuning industrial controllers. It took perhaps 10 years or more after that before subsequent authors started to hone and refine their recommendations, but the essence of their approach has remained unscathed to this day.
Ziegler and Nichols not only brought order out of chaos, but they presented it in a simple, understandable way. They presented two ways of determining controller settings. One was based on closed-loop tests, the other on open-loop tests. They were both based on sound mathematics, though their peers did not recognise or accept it at the time. A 1991 conversation with each of them revealed that Nichols, with a mathematical bent, was primarily responsible for verifying the math of the closed-loop formulas, while Ziegler, of a more empirical bent, conceived the open loop method. Nichols then verified the mathematical validity of the open-loop approach.
For history buffs there is a book you should know about: Automatic Control, Classical Linear Theory, edited by George J. Thaler, Naval Postgraduate School. The book is one of the “Benchmark Papers in Electrical Engineering and Computer Science,” v. 7. It is a photographic reproduction of milestone papers on the math of the feedback control loop, with editorial comments on the contribution each made from a historical viewpoint. The British papers by Callender, Hartree, Porter (and Stevenson), 1936 and 1937, from which Nichols was able to confirm the formulas presented by himself and Ziegler, are also contained in it, as well as the original Ziegler and Nichols paper.
The Ziegler and Nichols paper is also included in a collection of papers on PID tuning: Reference Guide to PID Tuning, published by Control Engineering.
Control Terminology
The task of fine-tuning a controller can range from relatively straightforward to highly intricate. This process is analogous to income taxes, where many cases are uncomplicated, but some require the expertise of a specialist. Regardless, it is a necessary task that must be completed. For the more straightforward cases, which constitute roughly 80% of typical loops encountered, a set of easy-to-follow rules can be applied.
While these rules are based on sound scientific principles, applying them without knowing what is expected can result in little comprehension of the process. Each tuning experience becomes an isolated event without any overarching framework to aid in understanding or transferring knowledge from one instance to the next, or from one individual to another. One objective of these articles is to provide that framework, which enables a clear definition of the experience, making it comprehensible to the person conducting the tuning and transferable to others.
The science of control is founded on mathematics that can be daunting to most people. Fortunately, it is not necessary to understand or utilize this supporting math to grasp the governing principles. The math will be mostly or entirely omitted, and no proofs will be provided. However, there are concepts that may be unfamiliar and should be mastered. These concepts primarily relate to understanding the significance of time in control loops. Amount is also important, but not to the same extent as time. The most important concept to grasp in understanding control loops is the idea of lags. A cause triggers an effect after some delay. For instance, a control valve moves after the controller output changes, and a measured temperature changes some time after the actual temperature outside the thermowell changes. The coldest day of the year occurs after the shortest day of the year.
Not all lags are the same or have the same importance in a control loop. A significant part of this training material will be devoted to developing an understanding of where lags arise, the various types used to approximate the real world, and their relative importance. Specific words, denoting particular things in a control loop, will be added to your vocabulary.
Terminology For and Description of Controller Settings
Like all fields, the science and art of feedback control evolved before standard terminology committees were established, resulting in different terms being used to describe the same thing. The controllers we will discuss in this article have three adjustments: Proportional, Integral, and Derivative (PID). Computer-based systems may have a fourth adjustment for filter time, and sometimes have to make decisions about cycle time, though this is not considered a controller setting.
PID controllers have been in use since around 1940, and modern controllers perform the same functions, albeit with greater accuracy and sometimes additional features. The tuning rules have remained essentially unchanged over the years, though digital control systems may offer some aids. It is striking that in an era of remarkable technical progress, the PID function has remained constant. High-powered mathematics has demonstrated that the PID function is the best general-purpose function to perform the job. More sophisticated control algorithms can provide better performance when tailored to a specific process but may perform worse if the process changes. This sensitivity to changes in the process is known as robustness, with higher robustness indicating lower sensitivity. The PID algorithm strikes an excellent balance between performance and robustness.
Proportional Action
We talk about proportional action but we tend to refer to the adjustment itself as gain or proportional band. The action means that the controller output moves in proportion to the error between setpoint and controlled variable. Many terms have been used by different manufacturers to designate this action. It has been called proportional band, sensitivity and surely others. Some are reciprocals of others. For instance, gain is 100 divided by the proportional band. Figure 1.1 shows what gain does to the controller output in response to the error. For a gain of one, the output changes the same amount as the controlled variable (or the setpoint). Higher and lower gains cause greater or smaller changes in the output for the same change in the error. If the output increases as the controlled variable increases, then the controller is said to be direct acting. If it decreases as the controlled variable increases, then it is called reverse acting. The controller action is set (or checked) initially, when a controller is first put into service, and is not changed after that. The action has to be right to make the controller output go in the right direction; if it is wrong, the controlled variable will avalanche away from the setpoint.
Figure 1.1 [A proportional-only controller has a fixed relationship between error and output].
With any controller that is proportional-only (no integrating action, yet to be discussed), there has to be an error between the setpoint and the controlled variable. This error is frequently called offset. The easiest way to understand this is to look again at Figure 1.1. The only time the error is zero is when the controller output is at one particular value; if the output is at any other value, then there has to be an error to produce that output. Simply remember that there will always be an offset if the controller has no integral action. The offset may not be important enough to worry about, but it will be there. It has to be, except at one precise point on the output-versus-error curve.
Any loop will cycle if the gain is increased far enough.
Figure 1.1 is not totally accurate. Many digital controllers can be configured to have the proportional action occur only on the controlled variable, not on the error, to avoid potentially undesirable action on setpoint changes. This is a desirable option, but will not be discussed here. Also, if a controller has no automatic reset (integral) action, to be described soon, then it will usually have a manual reset (integral) adjustment. This is an adjustment that allows some manual compensation for the offset.
Figure 1.2 [A signal-flow diagram to show how manual reset may be used to shift the output for any given error].
Figure 1.2 shows how this is represented in a signal-flow diagram, and Figure 1.3 shows how this might be represented graphically. It allows for adjusting what the controller output is to be when the error is zero. It may be thought of as sliding the gain curve up or down on the graph. Manual reset permits reducing the offset at the normal operating conditions, but it does not change the basic characteristic of proportional-only control, that there will always be an offset, except at one exact point.
Figure 1.3 [Manual reset in a proportional-only controller changes the fixed relationship between error and controller output].
Any loop will cycle (become unstable) if the gain is increased far enough. The task of setting the gain is one of getting the effect you want without causing instability.
Math / Algebra
This is the first section dealing with the math / algebra of control. If you want to skip it, or any subsequent section so devoted, I encourage you to do so. There is no need to get bogged down in this and be frightened off from what I am trying to say. My intent is to make the book stand alone without reference to math. These brief sections are presented to introduce the math to those of you who might be interested, with the hope that you won’t be intimidated if you decide to read other material on the subject.
The math of a proportional-only controller is quite simple:
Output = Gain × Error + Bias     (1)
The error is the setpoint minus the controlled variable. The gain is frequently named K_c, for gain-of-the-controller. The above equation then could be written as:
Output = K_c · e + b     (2)
where:
e = error
K_c = controller gain
b = bias
We grow very tired of writing all that math (actually algebra) and very quickly forget about the bias, b, as it is rarely of concern since it does not affect dynamic performance. The proportional-only controller then has a Laplace transform or transfer function of:
Output / e = K_c     (3)
or more simply:
K_c     (4)
since the left hand side of the equation is understood to be the output divided by the input. The transfer function of any element in the control loop is the output divided by the input. In this case it is what you multiply the error by to get the output.
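To make equation (2) concrete, here is a minimal sketch of a proportional-only controller, written in Python with made-up numbers (the function name and values are mine, not from any particular controller). It shows the two things just discussed: the output is the bias plus the gain times the error, so holding the output anywhere other than the bias value requires a sustained error (the offset), and gain and proportional band are reciprocals (gain = 100 / PB).

```python
def proportional_only(setpoint, measurement, gain, bias=50.0):
    """Proportional-only controller: output = gain * error + bias.

    All values are in percent of span.  The bias is the output when the
    error is zero (what the manual reset adjustment shifts).
    """
    error = setpoint - measurement           # reverse-acting convention
    return gain * error + bias

# Gain and proportional band are reciprocals: gain = 100 / PB.
gain = 100.0 / 50.0                          # a 50% proportional band gives a gain of 2

# With zero error the output sits at the bias (50%)...
print(proportional_only(50.0, 50.0, gain))   # 50.0

# ...so holding the output anywhere else requires a sustained error (offset).
print(proportional_only(50.0, 45.0, gain))   # 60.0, which needs a 5% offset
```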
Once you get into the algebra you can satisfy yourself that these are legitimate simplifications. You should not play around very much with the algebra without learning a great deal more than this book will teach you. If you decide to learn more, you need to use another source.
Integral Action
In the earlier days of industrial automatic control the integral function was almost universally called reset. Now the more scientifically correct term integral is gaining widespread use. I tend to use them interchangeably, especially when talking as compared with writing. When referring to the adjustment the terms reset time, and reset rate are both in common use. One is the reciprocal of the other, so of course it is vital to know which one you are talking about. To say to “turn the reset up” is an ambiguous statement, because you don’t know whether the speaker is talking about reset time or reset rate. It usually means to decrease the integral time, but the phrase still leaves uncertainty. It is rather like saying to turn the air conditioner up. Does that mean to get more cooling or to turn the thermostat higher? I will use reset time or integral time when referring to the setting itself, discouraging the use of reset rate.
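Because reset time (minutes per repeat) and reset rate (repeats per minute) are reciprocals, converting one to the other is a one-line calculation. A small sketch with made-up numbers:

```python
# Reset time (minutes per repeat) and reset rate (repeats per minute)
# are reciprocals of each other.
reset_time = 2.5                  # minutes per repeat
reset_rate = 1.0 / reset_time     # 0.4 repeats per minute

# "Turning the reset up" usually means more integral action,
# i.e. a shorter reset time (equivalently, a higher reset rate).
```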
Integral action is not as easy to understand as proportional action. The graph that is often used to explain it is given as Figure 1.4, which is really for proportional-plus-integral action. Imagine a controller just by itself, not connected to a process. Then imagine that, from an initial condition for which the error is zero, an error is suddenly introduced, called a step change. The controller output will then change to a new value, and the amount of the change is arbitrarily called “A” in Figure 1.4.
After that the controller output continues to move in the same direction it went initially. It will move an amount equal to the initial amount “A” in a time that is the integral time or the reset time. The units of reset time or integral time are minutes per repeat. The reason for this terminology is illustrated in Figure 1.4, which shows that the integral time is the time to repeat the change that was due to proportional action alone. Within the physical constraints of the controller, the output will continue to change at the same rate. This change comes from integrating the error.
Figure 1.4 [A proportional-plus integral controller will integrate the error to add an amount to the output equal to the proportional change in one integral time].
So, the integral action causes the controller output to change at a rate proportional to the error. The longer the integral time the slower it changes. A controller with integral action will eventually reduce the error to zero, as the output will continue to change until there is no error. That is, this will happen if there are no continuing disturbances to require the output to continue to change, and if the manipulated variable has enough “muscle” to achieve that.
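As a rough illustration of the “repeat” idea in Figure 1.4, the sketch below simulates a proportional-plus-integral controller, not connected to any process, for exactly one integral time after a step change in error. The names, numbers and the simple Euler integration are illustrative assumptions, not any manufacturer’s algorithm.

```python
# A step change in error is applied to a PI controller that is not connected
# to a process, so the error stays constant (the situation of Figure 1.4).
gain = 2.0        # controller gain
Ti = 5.0          # integral (reset) time, minutes
dt = 0.01         # simulation step, minutes
error = 1.0       # step change in error, percent of span

integral = 0.0
for _ in range(int(Ti / dt)):        # simulate for exactly one integral time
    integral += error * dt
    output = gain * (error + integral / Ti)

# The initial proportional kick was gain * error = 2.0; after one integral
# time the integral term has added ("repeated") that same amount.
print(round(output, 2))              # 4.0
```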
Manufacturers build their integrating function to be as close to mathematically pure as they can, and they do a good job of it, whether it be one of the very first pneumatic controllers, or one of the latest digital controllers. Before the advent of digital controllers there were integral-only controllers, but they were not in widespread use. The function is the same as in a proportional-plus-integral controller, except of course there is no change in the controller output due to proportional action. The change in controller output is all from integrating the error. With essentially all digital controllers there is the option to have integral-only action. When this might be used will be discussed later.
Any loop will cycle if you reduce the integral time far enough. This is true whether the controller is proportional-plus-integral or only integral. The task of setting the integral time is one of setting it low enough but not too low.
Reset Windup
Any control loop with integral action is subject to having a problem called reset windup, or more recently, integral windup. This refers to the condition when the controller output does not have enough muscle to reduce the error to zero. Since the controller integrates this error, the output will continue to change until it reaches some limit, which may or may not be the limit of the manipulated variable. In digital controllers this is a limit set in the menu for that controller, or it may be set in the software. For electronic controllers it might be set with a manual adjustment. For pneumatic controllers the normal situation is that no provision is made to avoid windup, but that extra instrument items can be installed to combat the problem.
Not much more will be said about the reset windup problem at this point, except to say two things. One is that it is a phenomenon that does exist, and two is that the anti-windup measures seldom totally eliminate the problem. It is far better to take steps to see that the controller does not wind up in the first place than to expect the anti-windup features to keep you out of trouble. For batch processes reset windup can be an especially severe problem on start-up.
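The sketch below illustrates the windup mechanism and one simple remedy, conditional integration (stop integrating while the output sits at a limit). This is only one of several anti-windup schemes, and the names and numbers are made up for illustration.

```python
# Reset windup: the output is clamped to 0-100%, and the error is assumed to
# stay positive because the manipulated variable lacks the "muscle" to remove it.
def pi_step(error, integral, gain, Ti, dt, anti_windup):
    integral += error * dt
    output = gain * (error + integral / Ti)
    if output > 100.0:               # output has hit its upper limit
        output = 100.0
        if anti_windup:
            integral -= error * dt   # undo the integration while limited
    return output, integral

for anti_windup in (False, True):
    integral = 0.0
    for _ in range(10000):           # a long period with a sustained 2% error
        output, integral = pi_step(2.0, integral, gain=2.0, Ti=1.0,
                                   dt=0.01, anti_windup=anti_windup)
    print(anti_windup, round(integral, 1))   # False 200.0, True 48.0

# Without anti-windup the integral keeps growing ("winds up") far beyond
# anything useful, so the output would stay saturated long after the error
# finally reversed.
```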
Math / Algebra
The math (algebra) for a proportional-only controller had nothing in it relative to time. The proportional-plus-integral controller does. This introduces a new symbol, which is used in essentially all of the literature today, and that is the lower case “s”.
s = d/dt     (5)
If you didn’t know what it was before, you still don’t! The s is the derivative relative to time. If you see 1/s, this is the reciprocal of the derivative, which is the integral. Please simply accept that. The Laplace transform (transfer function) for the proportional-plus-integral controller is written like this:
K_c · (1 + 1/(T_i · s))     (6)
The K_c is the controller gain, the same one you learned of before. The T_i is the integral time (time-sub-integral). This algebra can be expanded to be:
K_c + K_c/(T_i · s)     (7)
The first term is the proportional term and the second the integral term. The algebra shows quite simply that the contribution of the integral action increases as the controller gain is increased and decreases as the integral time is increased.
As you read other literature, you will quickly recognize the math:
K_c · (1 + 1/(T_i · s))     (8)
or
K_c · (T_i · s + 1)/(T_i · s)     (9)
as a proportional-plus-integral controller.
It is worth noting that with many digital controllers the control algorithm can be set up as
K_c + 1/(T_i · s)     (10)
This is sometimes called “noninteracting” because the gain does not have a part in the integrating term. Tuning rules are based on the “interacting” algorithm for at least two reasons. First, when tuning rules were first introduced by Ziegler and Nichols, controllers were built in the interacting mode. Second, when frequency response analysis and Bode plots came along it was much easier to understand what was going on if the interacting algorithm was used.
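Assuming the noninteracting form of equation (10) is written as K_c + 1/(T_i·s), as just described, the short sketch below shows how settings can be translated between the two forms so that the controller behaves identically. The symbol names are mine, not from any particular vendor.

```python
# "Interacting" form (eqs. 6 and 7):   Kc * (1 + 1/(Ti*s)) = Kc + Kc/(Ti*s)
# "Noninteracting" form (eq. 10):      Kc + 1/(Ti_prime*s)
# The proportional terms are identical; matching the integral terms gives
# Kc/Ti = 1/Ti_prime, so Ti_prime = Ti / Kc.

Kc = 2.0              # controller gain
Ti = 5.0              # integral time in the interacting form, minutes

Ti_prime = Ti / Kc    # equivalent integral time in the noninteracting form
print(Ti_prime)       # 2.5
```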
Derivative Action
Fortunately, only a few terms have evolved over the years to refer to the function of derivative action, and the scientific term derivative seems to have held sway.
Rate (and Pre-Act, a Taylor term from about 1940) have been used. I will use derivative time and derivative action. It is mathematically the opposite of integral action, but while we might have an integral-only controller, we would never have a derivative-only controller (though we could have a proportional-plus-derivative controller, with no integral action). The reason for this is that derivative action only knows that the error is changing. It doesn’t know what the setpoint actually is, so by itself it cannot control to a setpoint.
Figure 1.5 [A proportional-plus-derivative controller will respond to a step change in error by adding a component to the proportional response that decays with time. The longer the derivative time, the longer the decay time.]
Figure 1.5 shows the step response of a proportional-plus-derivative controller. The output will peak at some value and then decay back to some lower steady state value. The amount of the steady state change is that due to the proportional action only. You might ask why the output is changing between its peak and its final steady-state value, when the error is not changing, and therefore there should be no output component due to the derivative action. This imperfect derivative action is a practical matter on two counts. One is that it is physically impossible to build a mathematically perfect derivative function, and two is that you wouldn’t want to even if you could. At this point please simply accept both points. Derivative action is deliberately imperfect but achieves most of the desirable results sought when using the derivative function.
Figure 1.6 illustrates another way of conveying what the derivative function does. This time, instead of introducing a step change in error, a ramp change is used. This is simply a change that continues at a fixed rate, rather than all at once, as for a step change. The derivative function adds to the output that would normally occur, in effect advancing the response by an amount in time equal to the derivative time, which is a result of the deliberate imperfection in the function.
Derivative action has the potential to improve performance but is unlike proportional or integral action in one important aspect. With those, it is mostly a matter of using enough but not too much. If you did not use enough there would still be beneficial action, and performance would be better than if you did not use them at all. With derivative the problem is still one of using enough but not too much; get it wrong and there may be no benefit at all, and there could be some harm. If you use just a little bit too much the troubles increase a lot faster than the benefits.
Figure 1.6 [A proportional-plus-derivative controller will respond to a ramp change in error by adding to the proportional-only response. The amount added will increase as the derivative time is increased.]
Math / Algebra
The algebra for the derivative function gets more involved than what has been presented up until now. The ideal derivative would have a transfer function of:
T_d · s     (11)
The transfer function for a proportional-plus-derivative controller would then be this:
K_c · (1 + T_d · s)     (12)
where T_d is the derivative time and s, as noted before, is the derivative function.
You already know that the derivative function is not mathematically perfect. Actually the way the algebra is written is to write it as a proportional-plus-derivative function, with the proportional part having a gain of one. Here is the transfer function typically used to describe the (proportional-plus-) derivative function:
(T_d · s + 1) / ((T_d/K_d) · s + 1)     (13)
The numerator is the ideal part and the denominator is the practical necessity. The denominator is the transfer function of a lag, which will be discussed more in the next section on filter time. The new parameter, K_d, is known as the derivative gain. It determines the height of the peak in Figure 1.5. If the derivative gain is 10, a typical figure, then the most the derivative function can magnify any rate of change is a factor of 10.
It should be remembered that usually the derivative function on a digital controller is set up to act only on the controlled variable, not on the error.
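As a rough numerical sketch of the practical derivative of equation (13), acting on the controlled variable only as just noted, the code below simulates its response to a unit step in the measurement. It uses the identity (T_d·s + 1)/(τ·s + 1) = K_d + (1 − K_d)/(τ·s + 1) when τ = T_d/K_d, plus a crude Euler approximation of the lag; everything here is illustrative, not any vendor’s algorithm.

```python
Td = 1.0         # derivative time, minutes
Kd = 10.0        # derivative gain (the typical figure mentioned above)
tau = Td / Kd    # lag time constant in the denominator of equation (13)
dt = 0.0005      # simulation step, minutes

pv = 1.0         # a unit step in the controlled variable at t = 0
lag = 0.0        # state of the first-order lag 1/(tau*s + 1)
for step in range(int(0.5 / dt) + 1):
    lag += (pv - lag) * dt / tau            # Euler update of the lag
    out = Kd * pv + (1.0 - Kd) * lag        # (Td*s + 1)/(tau*s + 1) acting on pv
    if step in (0, int(0.1 / dt), int(0.4 / dt)):
        print(round(step * dt, 2), round(out, 2))

# Roughly: t = 0   ->  ~10  (the peak, set by the derivative gain Kd)
#          t = 0.1 ->  ~4.3
#          t = 0.4 ->  ~1.2 (decaying toward 1, the proportional-only value),
# which is the behaviour sketched in Figure 1.5.
```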
Filter Time
With many digital control systems, the menu for controller settings includes a setting for filter time. It is not normally included in most published rules of tuning, because when they were written there were only analogue (non-digital) controllers around. The filter is a digital controller phenomenon and helps compensate for the small variations in reading the process variable because of sampling and because of round off errors.
Figure 1.7 [The effect of the filter time in a digital controller is to slow down the change the controller sees.]
Figure 1.7 shows what it does to the measurement; it slows it down a bit, or averages it. The gas gauge in a car is heavily filtered, so you do not see the waves generated by the motion of the car.
The task of setting the filter time is one of using as much as you dare without degrading the performance of the loop. Too long a filter time will affect controller settings and also make the controller slower to respond to disturbances. The use of it at all is likely to start a lively discussion between those who grew up without its availability and those who grew up after its availability.
Math / Algebra
The filter is just one name for a very simple and important element in control loops. It is really too early in the development of the subject to get into it here, but I will simply give you the transfer function:
1 / (T_f · s + 1)     (14)
It will get discussed more later. Notice that it has the same form as the denominator in the proportional-plus-derivative function.
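For illustration, the first-order filter of equation (14) can be implemented as the familiar discrete exponential filter. The sketch below, with hypothetical sample time and filter times, shows the trade-off: a longer filter time smooths the noise more but also makes the controller slower to see a real change.

```python
import random

def filtered(samples, Tf, dt):
    """Apply a first-order filter with time constant Tf to a list of samples."""
    y = samples[0]
    out = []
    for pv in samples:
        y += (pv - y) * dt / Tf      # simple Euler form; assumes dt is much smaller than Tf
        out.append(y)
    return out

dt = 0.01                                   # sample time, minutes
noisy = [50.0 + random.uniform(-1.0, 1.0) for _ in range(500)]

light = filtered(noisy, Tf=0.05, dt=dt)     # little smoothing, little delay
heavy = filtered(noisy, Tf=1.00, dt=dt)     # heavy smoothing, slower to respond

print(max(light[-100:]) - min(light[-100:]))   # still visibly noisy
print(max(heavy[-100:]) - min(heavy[-100:]))   # much smoother
```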
Filter Time and Derivative Action
The filter action and the derivative action are opposites for all practical purposes. They can cancel each other. A filter time of one minute will cancel a derivative time of one minute, with the result being essentially the same as if you had used neither. This is still true if the filter time is set to only half or a third of the derivative time, and it makes no sense at all to set it higher. Frequently the decision is reached to use one or the other, but not both. Derivative action bounces the controller output more than when it is not used. Filter action dampens this bouncing, but if too much is used it will degrade performance and cancel the benefit of derivative action.
Math / Algebra
This is a place where the algebra gets quite neat. The proportional-plus-derivative function has this simplified transfer function:
T_d · s + 1     (15)
The filter that has just been discussed, has the transfer function:
1 / (T_f · s + 1)     (16)
When you have a filter and the derivative function in a controller the resulting transfer function is obtained by multiplying the two together, which becomes:
(T_d · s + 1) / (T_f · s + 1)     (17)
From this you can see that if the two times are set the same, then the numerator and denominator are the same, and the whole transfer function reduces to one, which is no dynamic effect at all.
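A quick numerical check of that cancellation (illustrative only): evaluating equation (17) along s = jω shows a magnitude of exactly one at every frequency when the filter time equals the derivative time.

```python
# Equation (17), (Td*s + 1)/(Tf*s + 1), evaluated on the imaginary axis.
Td = 0.5          # derivative time, minutes
Tf = 0.5          # filter time set equal to the derivative time

for omega in (0.1, 1.0, 10.0, 100.0):     # frequency, radians per minute
    s = 1j * omega
    g = (Td * s + 1) / (Tf * s + 1)
    print(omega, abs(g))                  # magnitude is 1.0 at every frequency
```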