
Control, in essence, means constructing a control law for a given plant so that the closed-loop system exhibits a desired behavior.
Broadly, the tasks of a control system fall into two categories: stabilization and tracking. The former drives the system to a desired value, while the latter keeps the output consistent with an external reference input.
Stabilization (regulation) means finding a control law for a dynamic system such that, starting from any point in a given region, the state converges to zero as time tends to infinity. If the control goal is convergence to a nonzero state, the problem can be transformed into a zero-regulation problem by working with the difference between the state and its desired value.
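As a minimal illustration (the generic dynamics $\dot{x} = f(x,u)$ and the constant desired state $x_d$ are chosen here only for concreteness), the change of variables $e = x - x_d$ converts the nonzero set-point problem into regulation of $e$ to zero:
\[
\dot{x} = f(x,u), \qquad e \triangleq x - x_d \ (x_d \ \text{constant})
\quad\Longrightarrow\quad
\dot{e} = \dot{x} = f(e + x_d,\, u),
\]
so driving $e \to 0$ is equivalent to driving $x \to x_d$.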
Tracking means finding a control law for a dynamic system such that, starting from any point in a given region, the tracking error converges to zero while all states remain bounded.
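As a simple sketch (the single integrator $\dot{x} = u$ and the gain $k$ are illustrative assumptions), consider tracking a smooth, bounded reference $x_d(t)$ with bounded derivative; the feedback law below makes the tracking error decay exponentially while the state stays bounded:
\[
\dot{x} = u, \qquad e = x - x_d(t), \qquad u = \dot{x}_d(t) - k e \ (k > 0)
\quad\Longrightarrow\quad
\dot{e} = \dot{x} - \dot{x}_d = -k e, \qquad e(t) = e(0)\,e^{-kt} \to 0.
\]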
In general, the tracking problem is harder to solve than the stabilization problem: the controller must not only keep all state variables bounded but also make the system output follow the desired output. Theoretically, however, the two problems are closely related, and stabilization can be regarded as a special case of tracking in which the desired trajectory is constant.
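Continuing the sketch above, if the reference is constant, $x_d(t) \equiv x_d$, then $\dot{x}_d = 0$ and the tracking law reduces to the pure regulation law
\[
u = -k\,(x - x_d), \qquad \dot{e} = -k e,
\]
which recovers the stabilization problem as the special case of tracking a constant trajectory.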