Optimal Control

Optimal control is the core of modern control theory.

The central problem it studies is:

Subject to given constraints, find the optimal control strategy that makes a specified performance index take its maximum or minimum value.

Introduction

Optimal control theory studies the basic conditions and synthesis methods for optimizing the performance index of a control system.

The problem can be summarized as follows:

For a controlled dynamic system or motion process, find, from a class of admissible control schemes, an optimal one that transfers the system from an initial state to a specified target state while making its performance index optimal.

Such problems arise widely in both technical and social settings.

For example, one may seek the optimal control program that minimizes fuel consumption while a spacecraft transfers from one orbit to another.

Optimal control theory was formed and developed in the mid-1950s under the impetus of space technology.

The American scholar R. Bellman proposed dynamic programming in 1957, and the Soviet scholar L.S. Pontryagin proposed the maximum principle in 1958; the two results appeared only a year apart.

Both played an important role in the formation and development of optimal control theory.

The optimal control problem for linear systems with quadratic performance indices was posed and solved by R.E. Kalman in the early 1960s.

Mathematical formulation

From a mathematical point of view, the optimal control problem can be expressed as follows:

Subject to the constraints of the equations of motion and the allowed control range, find the extremum (maximum or minimum) of the performance index, a functional whose variables are the control function and the state of motion.
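Written out, a standard statement of this problem (the symbols below, with state x, control u, admissible set U, and horizon [t_0, t_f], follow common textbook convention rather than notation from this article) is:

    \min_{u(\cdot)} \; J[u] \;=\; \varphi\bigl(x(t_f)\bigr) + \int_{t_0}^{t_f} L\bigl(x(t), u(t), t\bigr)\,dt

    \text{subject to}\quad \dot{x}(t) = f\bigl(x(t), u(t), t\bigr), \qquad x(t_0) = x_0, \qquad u(t) \in U.

Here J is the performance index functional, the differential equation is the motion-equation constraint, and U is the allowed control range.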

The main methods for solving the optimal control problem are the classical variational method (a mathematical method for finding the extrema of functionals), the maximum principle, and dynamic programming.

Optimal control has been applied to the synthesis and design of time-optimal control systems, fuel-optimal control systems, minimum-energy control systems, linear regulators, and so on.
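For the linear regulator in particular, the solution reduces to an algebraic Riccati equation. A minimal numerical sketch, assuming a double-integrator plant and illustrative weights (the matrices A, B, Q, R below are chosen for the example, not taken from this article), using SciPy:

    import numpy as np
    from scipy.linalg import solve_continuous_are

    # Double-integrator plant: x1 = position, x2 = velocity, u = acceleration.
    A = np.array([[0.0, 1.0],
                  [0.0, 0.0]])
    B = np.array([[0.0],
                  [1.0]])

    # Quadratic performance index J = integral of (x'Qx + u'Ru) dt.
    Q = np.diag([1.0, 0.1])   # state weighting
    R = np.array([[0.01]])    # control weighting

    # Solve the continuous-time algebraic Riccati equation
    # A'P + P A - P B R^{-1} B'P + Q = 0 for P.
    P = solve_continuous_are(A, B, Q, R)

    # Optimal state feedback u = -K x with K = R^{-1} B'P.
    K = np.linalg.solve(R, B.T @ P)
    print("LQR gain K =", K)

The resulting feedback law u = -Kx is the linear-quadratic regulator associated with Kalman's result above.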

A powerful mathematical tool for studying optimal control problems is the calculus of variations.

Classical variational theory can only solve problems in which the control is unconstrained.

Most problems in engineering practice, however, involve constraints on the control, and modern variational theory emerged to handle them.
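The distinction shows up in the first-order conditions. Writing the Hamiltonian as H(x, u, \lambda, t) = L(x, u, t) + \lambda^{\mathsf T} f(x, u, t) (standard notation, introduced here for illustration), the classical variational method requires stationarity with respect to the control,

    \frac{\partial H}{\partial u} = 0,

which presumes that u may be varied freely. When the control is confined to a closed set U, the optimum can lie on the boundary of U, where this stationarity condition need not hold; the methods of the next section replace it with a pointwise minimization over U.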

Research methods

Two methods are most commonly used in modern variational theory.

One is the dynamic programming method, and the other is the minimum principle.

Both can handle variational problems in which the control is constrained to a closed set, as sketched below.
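Concretely, with the Hamiltonian defined above, the minimum principle replaces the stationarity condition by a pointwise minimization over the closed admissible set,

    u^*(t) \;=\; \arg\min_{u \in U} H\bigl(x^*(t), u, \lambda(t), t\bigr),

while dynamic programming characterizes the optimal cost-to-go V(x, t) through the Hamilton-Jacobi-Bellman equation,

    -\frac{\partial V}{\partial t} \;=\; \min_{u \in U} \Bigl[ L(x, u, t) + \frac{\partial V}{\partial x} f(x, u, t) \Bigr].

Both are standard statements, given here for illustration; the minimization over U is what allows closed-set control constraints.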

It is worth pointing out that the dynamic programming method and the minimum principle are essentially analytical methods.

In addition, the variational method and the linear-quadratic control method are also analytical methods for solving optimal control problems.

Besides these analytical methods, the research methods for optimal control problems also include numerical methods and gradient-type methods.
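As an illustration of the gradient approach, here is a minimal sketch that discretizes a scalar linear-quadratic problem and improves the control sequence by plain gradient descent, with the gradient computed by an adjoint (backward) recursion. The plant coefficients, cost weights, horizon, and step size are illustrative assumptions, not values from this article:

    import numpy as np

    # Discrete scalar plant x[k+1] = a*x[k] + b*u[k] (illustrative values).
    a, b = 0.95, 0.1
    q, r, qf = 1.0, 0.1, 1.0     # stage and terminal cost weights
    N, x0 = 50, 5.0              # horizon length and initial state

    def rollout(u):
        # Simulate the state trajectory for a control sequence u[0..N-1].
        x = np.empty(N + 1)
        x[0] = x0
        for k in range(N):
            x[k + 1] = a * x[k] + b * u[k]
        return x

    def cost(x, u):
        # J = sum of q*x^2 + r*u^2 over the horizon, plus a terminal term.
        return q * np.sum(x[:-1] ** 2) + r * np.sum(u ** 2) + qf * x[-1] ** 2

    def gradient(x, u):
        # Adjoint recursion: lam[k] = dJ/dx[k]; then dJ/du[k] = 2*r*u[k] + b*lam[k+1].
        lam = 2.0 * qf * x[-1]               # lam[N]
        g = np.empty(N)
        for k in range(N - 1, -1, -1):
            g[k] = 2.0 * r * u[k] + b * lam  # uses lam[k+1]
            lam = 2.0 * q * x[k] + a * lam   # step back to lam[k]
        return g

    u = np.zeros(N)      # initial guess: no control
    step = 0.1           # fixed step size (assumed small enough for these values)
    for _ in range(500):
        u -= step * gradient(rollout(u), u)

    print("final cost:", cost(rollout(u), u))

Each iteration simulates the system forward, propagates the adjoint variable backward, and takes a descent step on the control sequence; practical numerical methods refine the same idea with better step-size rules and second-order information.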