Nonlinear control is the branch of control theory that deals with systems that cannot be effectively controlled using linear methods. In a nonlinear system, the relationship between the inputs and outputs is not linear, which can make the system harder to control: its response may be highly sensitive to changes in the input.
Adaptive control, on the other hand, deals with systems that are subject to changes or uncertainties over time. Adaptive methods adjust the parameters of the controller in response to changes in the system or its environment.
When combined, nonlinear and adaptive control can be used to control complex systems that exhibit nonlinear behavior and are subject to changes over time. This can be useful in a wide range of applications, including robotics, aerospace, and manufacturing.
Some common techniques used in nonlinear and adaptive control include:
Model-based control: A mathematical model of the system is built and then used to design a controller that can control it effectively.
Feedback control: Measurements from the system are fed back to adjust the control inputs in real time.
Neural network control: A neural network learns the control policy for the system being controlled.
Fuzzy logic control: Fuzzy logic is used to model the behavior of the system, and the resulting model is used to design the controller.
Overall, nonlinear and adaptive control are powerful techniques for dealing with complex systems that exhibit nonlinear behavior and change over time; in combination, they make it possible to design controllers for a wide range of such systems.
Definitions and Terminology
Many industrial processes exhibit nonlinearity in various aspects, such as changes in process gain due to varying loads or buildup of dirt or coating. Additionally, the dead time of the process and the resulting period of oscillation can also vary due to factors such as transportation lag or reactor jacket conditions. To ensure that the process output is maintained at its set point, feedback control loops are designed to handle such nonlinearities and process changes.
However, when the process operates close to its performance limits, the total loop gain approaches its upper or lower limits, which can cause undamped oscillation or sluggish performance. Moreover, changes in process dead time may require revisions to the control settings. Adaptive control techniques are used to automatically detect changes in process parameters or set point and adjust the controller settings accordingly, enabling the loop to adapt to changing conditions.
Adaptation can be based on known operating conditions or measurable disturbances, which is known as feedforward adaptation or gain scheduling, or it can be based on the system’s own performance, known as feedback adaptation or self-adaptation. In the latter case, the system uses measurements of the controlled variable or tracking error to adjust the controller settings.
Steady-State and Dynamic Adaptation
Adaptive control criteria can be specified in either steady-state or transient-response scenarios. An example of steady-state adaptive control is optimizing the air-fuel ratio of a boiler to achieve maximum efficiency and minimum total losses. Dynamic adaptation, on the other hand, involves modifying tuning constants based on the damping of the controlled variable after an upset. This technique can be used to stabilize a PID-based pH control system.
Adaptive control systems automatically adjust their parameters to optimize the loop response of the controlled process. This is different from traditional control systems, where parameters are fixed. To maintain optimal performance, controller parameters should change as the process characteristics change.
Compensation for changing process characteristics can be achieved by introducing suitable nonlinear functions at the input or output of the controller. Examples include using a square-root extractor to linearize a flow-measurement signal or selecting a particular valve characteristic to offset the effect of line resistance on flow. This type of compensation is not considered adaptive control because the controller functions remain fixed.
If the process nonlinearities are compensated by changing the gain, integral, or derivative tuning settings of the PID controller or by feedback linearization, the controller is considered nonlinear. For instance, a pH controller can adjust its gain to compensate for the nonlinearities of the pH system titration curve. This approach is not strictly adaptive, but it is nonlinear.
Approaches to Adaptive Control
Adaptive control is not only a significant area of theoretical research, but it is also a practical tool for solving real-world problems. Classical adaptive control techniques have been successful in providing straightforward solutions for controlling nonlinear and time-varying systems, and many texts and articles have been published on this topic.
As processes become more complex, with the introduction of system component imperfections and non-smooth nonlinearities, there is a growing need for adaptive control systems that can provide greater robustness. This has led to the development and emergence of advanced expert and artificial intelligence techniques such as neural networks, fuzzy logic, and genetic algorithms. These techniques are being increasingly used in adaptive control applications to improve system performance and robustness.
Feedforward or Gain Scheduling
When a measurable process variable has a predictable effect on the control loop gain or a known nonlinear behavior that depends on operating conditions, compensation for its effect can be incorporated into the control system. This approach is known as gain scheduling. A prime example of the need for gain scheduling is the variation of dynamic gain with flow in pipes and other longitudinal equipment that exhibit no back-mixing. This variation is commonly observed in heat exchangers and poses a particularly severe challenge in once-through boilers.
In a once-through boiler, feed water enters the economizer tubes, then flows directly into the evaporative and superheater tubing, and finally exits as superheated steam that requires precise temperature control. Since there is no mixing, as in drum-type boilers, a significant amount of dead time exists in the temperature control loop, especially at low flow rates.
Figure 1.1a demonstrates the response of steam temperature to changes in firing rate at two different feed water flow rates. At 50% flow, the steady-state gain is twice as high as at 100% flow, as only half as much water is available to absorb the same increase in heat input. Additionally, the dead time and the dominant time constant are twice as large at 50% flow.
Figure 1.1a [Step response of steam temperature to firing rate in a once-through boiler].
Figure 1.1b [Response of the steam temperature loop to step changes in load without adaptation].
Figure 1.1b demonstrates how the variable properties affect the dynamic gain of the process. The same disturbance in load causes a greater temperature excursion at 50% flow, indicating a higher dynamic gain. The response is also twice as fast at 100% flow, and the damping differs between the two conditions, revealing the change in dynamic gain. If there were oscillations in the response curve at 100% flow, their period would be much shorter than at 50% because of the difference in dead time. At 25% flow the oscillation period would be even longer, and damping would disappear entirely.
If only proportional feedback were used in this scenario, the oscillation period would vary inversely with flow, as would the dynamic gain at that period. The only possible adjustment would be to vary the controller gain ($K_c$) in direct proportion to flow. However, with a plain proportional controller, nothing can address the increased sensitivity of the process to upsets at low flow rates.
Since reset and derivative modes are typically used to control temperature as well, their adjustment should also be considered as a function of changes in flow. The process dead time – and therefore, the period of oscillation under proportional control – varies inversely with flow. Therefore, to achieve adaptation, the reset and derivative time constants should also vary inversely with flow.
The equation for the flow-adapted three-mode controller is

$$m = K_c F\left[e + \frac{F}{T_i}\int e\,dt + \frac{T_d}{F}\,\frac{de}{dt}\right] \qquad (1)$$

where $m$ is the controller output, $e$ is the error, $F$ is the fraction of full-scale flow, and $K_c$, $T_i$, and $T_d$ are the proportional, integral, and derivative settings at full-scale flow. The equation above can be rewritten to reduce the adaptive terms to two:

$$m = K_c\left[Fe + \frac{F^2}{T_i}\int e\,dt + T_d\,\frac{de}{dt}\right] \qquad (2)$$
Other parameters can be substituted for flow in instances where the adaptation is based on variables other than flow.
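As a rough illustration, the sketch below implements Equation 1 in discrete time under stated assumptions: the class name FlowAdaptedPID, the sampling period dt, and all numeric settings are illustrative and not taken from the text.

```python
# Rough discrete-time sketch of the flow-adapted controller of Equation 1.
# kc, ti, td are the PID settings at full-scale flow; f is the measured
# fraction of full-scale flow (0 < f <= 1); dt is the sampling period.

class FlowAdaptedPID:
    def __init__(self, kc, ti, td, dt):
        self.kc, self.ti, self.td, self.dt = kc, ti, td, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement, f):
        e = setpoint - measurement
        self.integral += e * self.dt
        de = (e - self.prev_error) / self.dt
        self.prev_error = e
        # Gain varies in direct proportion to flow; reset and derivative
        # times vary inversely with flow, as in Equation 1.
        return self.kc * f * (e + (f / self.ti) * self.integral
                              + (self.td / f) * de)

# The same error is handled with different effective tuning at 100% and 50% flow:
pid = FlowAdaptedPID(kc=2.0, ti=30.0, td=5.0, dt=1.0)
print(pid.update(setpoint=500.0, measurement=490.0, f=1.0))
pid = FlowAdaptedPID(kc=2.0, ti=30.0, td=5.0, dt=1.0)
print(pid.update(setpoint=500.0, measurement=490.0, f=0.5))
```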
Dead-Band Control
Dead-band control is a nonlinear control technique that is commonly used in various applications. The dead-band action is a programmed nonlinearity that depends on either a process variable or a disturbance factor. Dead-band control is not used alone but in conjunction with proportional, integral, and derivative modes.
When using this function, a dead band is placed around the controller error, and no control action occurs unless the error exceeds the dead-band range. If the error is within the dead band, the error is set to zero.
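The dead-band action on the error can be expressed as a small preprocessing step ahead of the PID calculation; a minimal sketch follows, in which the function name and band width are assumptions for illustration only.

```python
def deadband_error(error, band):
    """Dead-band action: set the error to zero inside the band, act only outside it."""
    if abs(error) <= band:
        return 0.0
    return error  # some implementations return only the portion exceeding the band

# No control action until the error exceeds the dead band:
print(deadband_error(0.5, band=1.0))   # 0.0
print(deadband_error(2.5, band=1.0))   # 2.5
```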
Dead-band control is also used in stabilizing fast or sensitive processes, such as the guidance of missiles, where stability is achieved by avoiding changes in direction unless the limits of the control tunnel are reached.
The dead-band control technique has found applications in pH control and systems that require two control valves, one large and one small, to overcome the rangeability problem. For instance, the control scheme in Figure 1.1c shows how a dead-band controller can drive the large valve when the output of the conventional proportional and integral controller reaches a preset limit. The large valve makes rough adjustments when the error is outside of its dead band or gap.
The size of the dead band can vary widely depending on the relative size of the small (trim) valve and the desired response. Commercial controllers and computer control packages are available with the dead-band feature.
Figure 1.1c [The valve position controller (VPC) measures the opening of the small valve and prevents it from fully opening or fully closing by becoming active only when such extreme positions are being approached].
Switching Controller Gains
Variable breakpoint control is a type of nonlinear control that allows for programmed adaptation. In this form of control, the proportional gain is adjusted at specific predetermined values of the process variable or controller error (i.e., the difference between the process variable and the set point). The controller gain is set to a low value near the set point to minimize controller action at that point.
When the process variable deviates from the set point by a pre-set distance, the controller gain is increased to drive the process back to the set point faster. This type of control can also compensate for highly nonlinear process gain characteristics by adjusting the controller gain according to the process gain.
The pH control problem is a classic example of nonlinear control. The process gain of a strong acid-strong base neutralization process is illustrated in Figure 1.1d, with the control set point at pH = 7. At this point, the process has extremely high gain, making it very sensitive to changes in reagent concentration. To maintain stability, the controller gain between pH 3 and pH 9 must be kept low.
However, if the pH measurement were to exceed pH 9 or fall below pH 3, the low gain would result in a sluggish response due to the low process gain at these pH values. To address this, the controller gain is switched to a high value above pH 9 and below pH 3, as shown by the dashed line in Figure 1.1d.
Figure 1.1d [Variable breakpoint nonlinear control action is used for strong acid, strong base neutralization].
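The gain switching of Figure 1.1d can be sketched as a small lookup on the measured pH; the breakpoints at pH 3 and 9 follow the text, while the function name and the low/high gain values below are illustrative assumptions.

```python
def breakpoint_gain(ph, low_gain=0.2, high_gain=2.0, lower_bp=3.0, upper_bp=9.0):
    """Variable breakpoint gain schedule for the loop of Figure 1.1d.

    A low controller gain is used between the breakpoints (around the set
    point at pH 7), where the process gain is extremely high; a higher gain
    is switched in below pH 3 and above pH 9, where the process gain is low.
    """
    if lower_bp <= ph <= upper_bp:
        return low_gain
    return high_gain

print(breakpoint_gain(7.0))    # low gain near the set point
print(breakpoint_gain(10.5))   # high gain beyond the upper breakpoint
```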
Nonlinear controllers can be implemented in electronic hardware or DCS algorithms and can consist of more than two line segments. In sophisticated systems, the controller gains may correspond to the mirror image of the process gain curve. However, implementing this algorithm requires caution to prevent bumps or discontinuities in control action.
In surge tank level control, it is desirable to avoid reacting to small level variations by changing the flow controller set point, as doing so would upset the material balance of the downstream process. Therefore, the nonlinear controller is an ideal cascade master for this application. The controller can be adjusted to make no change in the slave set point as long as the level is between 20 and 80%, while still protecting the tank from draining or flooding.
Figure 1.1e [Nonlinear level control keeps the set point of its cascade slave (FRC) unchanged most of the time while allowing the level to float, but it still prevents flooding or draining of the surge tank].
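A possible sketch of such a nonlinear cascade master follows, assuming a hypothetical helper (surge_level_master), a 20 to 80% gap, an arbitrary proportional gain outside the gap, and a slave flow controller that manipulates the tank outflow.

```python
def surge_level_master(level_pct, flow_sp, gain=0.5, low=20.0, high=80.0):
    """Nonlinear cascade master for surge-tank level (as in Figure 1.1e).

    While the level floats between `low` and `high` percent, the slave (FRC)
    set point is left unchanged; outside that band a proportional correction
    is applied to keep the tank from flooding or draining (the slave is
    assumed to manipulate the tank outflow).
    """
    if low <= level_pct <= high:
        return flow_sp
    excess = level_pct - (high if level_pct > high else low)
    return flow_sp + gain * excess

print(surge_level_master(55.0, flow_sp=100.0))  # inside the gap: set point unchanged
print(surge_level_master(90.0, flow_sp=100.0))  # above 80%: outflow set point raised
```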
Continuous Adjustment of Controller Gains
Just as in the programmed adaptation algorithm of Equation 1, in which the gain was multiplied by (and thereby made a function of) the load variable $F$, one can also adjust the controller gain as a function of other parameters. One obvious option is to multiply the controller gain by some function of the error $e$. For example, if $K_c$ is replaced with $K_c|e|$, the controller gain becomes nonlinear, resulting in the following PID algorithm:

$$m = K_c|e|\left[e + \frac{1}{T_i}\int e\,dt - T_d\,\frac{dc}{dt}\right] \qquad (3)$$

In the above, $c$ is the measurement (controlled variable); the derivative mode acts on the measurement rather than on the error, which makes it insensitive to set-point changes. Note that this algorithm has a lower gain when the errors are small, and therefore the controller is less responsive to small errors.
This characteristic makes the nonlinear controller well suited for use in applications involving noisy measurements, such as flow or level, because instead of amplifying the noise, its response is minimal when the error is small. Naturally, it should be remembered that with large errors the controller gain rapidly gets very high and can cause stability problems and cycling unless some reasonable limit is placed on it.
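A minimal sketch of one evaluation of Equation 3 is given below, assuming the $|e|$ multiplier applies to the whole controller gain and that the caller maintains the running error integral and the measurement derivative; all names are illustrative.

```python
def error_squared_pid(e, integral_e, d_meas, kc, ti, td):
    """One evaluation of the nonlinear law of Equation 3: the gain Kc is
    multiplied by |e|, and the derivative acts on the measurement c.

    `integral_e` is the running integral of the error and `d_meas` is the
    time derivative of the measurement, both maintained by the caller.
    """
    return kc * abs(e) * (e + integral_e / ti - td * d_meas)

# Small errors produce very little action, large errors much more:
print(error_squared_pid(0.1, 0.0, 0.0, kc=1.0, ti=10.0, td=1.0))  # 0.01
print(error_squared_pid(1.0, 0.0, 0.0, kc=1.0, ti=10.0, td=1.0))  # 1.0
```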
When using "error-squared" PID algorithms, a variety of configurations is available. In one configuration, the controller gain $K_c$ is replaced with $KK_c$, where the value of the multiplier $K$ is found from

$$K = 2^{e^*/X} \qquad (4)$$

where $e^*$ is the normalized absolute value of the error ($e^*$ is between 0 and 1.0) and $X$ is a constant that sets the severity of the desired nonlinearity in the controller action.
For example, if $X$ = 14%, the multiplier $K$ will double after every 14% increase in the error. In other words, if $K$ is 2 when the error is 14%, it will be 4 when the error is 28%, and it will be 8 when the error is 42%.
The multiplier $K$ can be applied to the controller gain $K_c$ and/or to its integral time $T_i$. When both are chosen, the algorithm is particularly suited to level control.
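Equation 4 can be checked numerically with a few lines; the function name and the reproduction of the 14% example from the text are the only additions.

```python
def gain_multiplier(e_norm, x):
    """Equation 4: K = 2**(e*/X), so K doubles for every X increase in the
    normalized absolute error e*."""
    return 2.0 ** (abs(e_norm) / x)

# With X = 0.14 (14%), K doubles with every 14% increase in the error:
for e in (0.14, 0.28, 0.42):
    print(e, gain_multiplier(e, x=0.14))   # 2.0, 4.0, 8.0
```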
Gain scheduling obtained its name because it was originally used to accommodate changes in the process gain. Today it is also used, based on measurements of the operating conditions of the process, to compensate for the variations in process parameters or for known nonlinearities in the process.
Feedback Adaptation or Self Adaptation
If the source of changes in control-loop response cannot be identified or measured, feedforward adaptation or gain scheduling techniques may not be applicable. In such cases, an adaptive system must be designed based on the feedback loop’s response. This approach is known as feedback adaptive control and is more challenging because it requires accurate assessment of loop responses without prior knowledge of the nature of the disturbance input.
Figure 1.1f shows the block diagram of a feedback adaptive system. This system faces the same difficulties as those associated with implementing feedforward adaptation, as well as the added complexity of evaluating the response and determining the appropriate adjustment.
There are several feedback adaptive techniques used in the process industry. These include the self-tuning regulator, the model reference controller, and the pattern recognition adaptive controller. In the following paragraphs, we will briefly discuss each of these techniques.
Figure 1.1f [A self-adaptive system is a control loop around a control loop].
Self-Tuning Regulator
The self-tuning regulator (STR) is a type of adaptive system that encompasses various techniques. The structure of these STR systems is shown in Figure 1.1g. The figure depicts that all STRs consist of an identifier section, which estimates the process parameters, and a regulator parameter calculation section that determines the new controller parameters based on the estimated process parameters. The particular methods employed in these two blocks differentiate one STR from another. There are several types of STRs, including minimum-variance, generalized minimum-variance, detuned minimum-variance, dead-beat, and generalized pole-placement controllers.
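A minimal self-tuning regulator sketch is shown below, assuming a first-order discrete process model identified by recursive least squares and a dead-beat regulator parameter calculation. The plant values and all names are illustrative, and a practical STR would add safeguards such as estimate limits and excitation checks.

```python
import numpy as np

# Minimal self-tuning regulator sketch (structure of Figure 1.1g):
#   identifier      -> recursive least squares (RLS) estimate of a first-order
#                      model y(k+1) = a*y(k) + b*u(k)
#   parameter calc. -> dead-beat control law u = (r - a_hat*y)/b_hat
# The "true" plant values below are used only to simulate the process.

theta = np.array([0.5, 0.5])        # initial estimates of [a, b]
P = np.eye(2) * 100.0               # RLS covariance matrix
lam = 0.98                          # forgetting factor
a_true, b_true = 0.9, 0.3           # unknown plant (simulation only)
y, u, r = 0.0, 0.0, 1.0             # process output, controller output, set point

for k in range(100):
    y_next = a_true * y + b_true * u + 0.01 * np.random.randn()   # process step

    # Identifier: update the parameter estimates from the newest observation
    phi = np.array([y, u])
    gain = P @ phi / (lam + phi @ P @ phi)
    theta = theta + gain * (y_next - phi @ theta)
    P = (P - np.outer(gain, phi @ P)) / lam

    # Regulator parameter calculation: dead-beat law from the current estimates
    a_hat, b_hat = theta
    y = y_next
    u = (r - a_hat * y) / b_hat if abs(b_hat) > 1e-3 else 0.0

print("estimated [a, b]:", theta)   # should approach [0.9, 0.3]
```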
Model Reference Adaptive Controls
Model reference adaptive control (MRAC) is a self-adaptive control method that utilizes a reference model, an adjustable controller, and an adaptation mechanism. The reference model specifies the desired performance, while the adjustable controller’s performance should closely match that of the reference model. The adaptation mechanism processes the error between the reference model and the actual process and adjusts the parameters of the adjustable controller accordingly.
Figure 1.1h provides a schematic representation of how the components of a model reference controller are structured. Originally devised to solve the deterministic servo problem, MRAC has largely been developed on the basis of stability theory.
Figure 1.1h [Model reference adaptive controller].
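As a rough sketch of the MRAC idea, the classic gradient (MIT-rule) adaptation of a single feedforward gain is shown below. The plant, reference model, and adaptation rate are assumed values, and the MIT rule is only one of several adaptation mechanisms used in practice.

```python
# Minimal MRAC sketch: the MIT rule adapts a single feedforward gain theta so
# that the plant k_p*G(s) follows the reference model k_m*G(s), where
# G(s) = 1/(s + a). Forward-Euler integration; all numbers are illustrative.

a, k_p, k_m = 1.0, 2.0, 1.0      # pole and gains (k_p is unknown to the controller)
gamma, dt = 0.5, 0.01            # adaptation rate and integration step
y = y_m = theta = 0.0            # plant output, model output, adjustable gain

for step in range(20000):
    t = step * dt
    u_c = 1.0 if (t // 10) % 2 == 0 else -1.0     # square-wave command signal

    u = theta * u_c                               # adjustable controller
    y += dt * (-a * y + k_p * u)                  # plant: dy/dt = -a*y + k_p*u
    y_m += dt * (-a * y_m + k_m * u_c)            # reference model response

    e = y - y_m                                   # model-following error
    theta += dt * (-gamma * e * y_m)              # MIT rule: dtheta/dt = -gamma*e*y_m

print("adapted gain:", theta, "(ideal value k_m/k_p =", k_m / k_p, ")")
```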
Pattern Recognition Controllers
Some self-adaptive controllers do not rely on identifying or estimating discrete-time process models. Instead, they adjust their tuning by evaluating closed-loop response characteristics of the system, such as rise time, overshoot, settling time, and loop damping. Because they use a trial-and-error approach to adjust the tuning parameters and recognize the response pattern, they are called "pattern recognition" controllers. Instrument vendors offer commercially available microprocessor-based pattern recognition controllers with a limited range of allowable tuning parameter adjustments. Despite these constraints, they are gaining acceptance in operating plants.
Intelligent Adaptive Techniques
Adaptive control is often justified by the need to deal with inaccurate process information, caused by factors such as process changes, flow disturbances, and sensor noise, which can degrade control loop performance. As process complexity increases due to nonlinearities and time-varying parameters, adaptive control with intelligent identification and/or tuning can further enhance the ability to deal with unknown or time-variant dynamics. In addition, economically optimal strategies, such as multiple model adaptive control, which requires multiple models or controllers, have emerged to improve profitability.
To maintain optimal performance, the control system must continuously adapt to process changes and perform well while adapting and when set points change. In such cases, a supervised switching control strategy can be useful. This strategy involves choosing from a variety of control algorithms driven by a switching logic-based criterion, often realized using artificial intelligence or expert systems. The following paragraphs will discuss some of these strategies.
Intelligent Identification and/or Tuning
The aim of this approach is to employ intelligent architectures, such as fuzzy logic, neural networks, and genetic algorithms, to replace either the identification, control, or adaptation functions in the control scheme (refer to Figure 1.1i). As discussed in other sections of this chapter, control schemes that combine neural networks with model predictive control (MPC) can handle inaccuracies and uncertainties in the model and can use online training to enhance the model continuously.
Figure 1.1i [Intelligent identification and tuning in adaptive control].
Multiple Model Adaptive Control (MMAC)
Figure 1.1j depicts the concept of multiple model adaptive control (MMAC), which relies on a library of process models, a library of controller algorithms, and an intelligent switching logic.
The MMAC system identifies a process model in its library that closely matches the actual process and selects a control algorithm from the Candidate Controller Library that is best suited for controlling such a process. The process model is usually selected based on pattern recognition of the controlled variable (process output) from the Model Library.
To adaptively select the correct matching between the model and its corresponding controller, a switching law is applied (Logic Switching Supervisor or Transition Supervisor). The switching laws can be divided into two categories: those based on process estimation and those based on direct performance evaluation of each candidate controller.
Figure 1.1j [Adaptive control system for matching multiple process models to multiple process algorithms and selecting the best pairing in a supervised adaptive scheme].
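A compact sketch of the MMAC switching idea follows, assuming a small library of first-order models, a paired set of proportional gains, and a supervisor that selects the pairing with the smallest filtered prediction error; all numeric values are illustrative.

```python
import numpy as np

# Compact MMAC sketch (Figure 1.1j): a Model Library of first-order discrete
# models y(k+1) = a*y(k) + b*u(k), a paired Candidate Controller Library of
# proportional gains, and a supervisor that switches to the pairing whose
# model currently predicts the process output best. All values are illustrative.

models = [(0.95, 0.05), (0.80, 0.20), (0.50, 0.50)]   # candidate (a, b) pairs
controllers = [8.0, 3.0, 1.5]                         # gains paired with the models
errors = np.zeros(len(models))                        # filtered squared prediction errors

a_true, b_true = 0.80, 0.20                           # "actual" process (simulation only)
y, u, r, alpha = 0.0, 0.0, 1.0, 0.9                   # output, input, set point, filter factor

for k in range(200):
    y_next = a_true * y + b_true * u                  # process step

    # Supervisor: score every model against the newest observation
    for i, (a, b) in enumerate(models):
        errors[i] = alpha * errors[i] + (1 - alpha) * (y_next - (a * y + b * u)) ** 2
    best = int(np.argmin(errors))

    # Apply the controller paired with the best-matching model
    y = y_next
    u = controllers[best] * (r - y)

print("selected model:", models[best])   # should settle on (0.80, 0.20)
```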
Conclusions
Traditionally, the performance difference between gain scheduling and self-adaptive systems is similar to that between feedforward and feedback control.
A self-adaptive controller (like feedback) cannot make an adjustment until it encounters an unsatisfactory response, and two or more cycles must pass before an evaluation can be made on which to base an adjustment. Therefore, the adaptive loop's cycle time must be much longer than the control loop's natural period. Hence, the self-adaptive system cannot correct the present poor response; it can only prepare for the response to the next upset, assuming that the presently generated control settings will still be valid then.
In contrast, the gain scheduling system should always have the correct settings because it responds automatically to changes in process variables, similar to feedforward control. It does not need to “learn” the new process dynamics via adaptive loops.
MMAC with supervised switching has the potential for good performance and robustness over a broader range of operating conditions than traditional approaches. Still, it will take some time to mature. It is best suited for poorly understood and nonlinear processes with time-varying dynamics (dead times and time constants). The field of AI-based model adaptive control is currently evolving, and there is a lot of activity in this area.