Russian Mathematics (Iz. VUZ)
Vol. 47, No. 12, pp. 1–27, 2003
Izvestiya VUZ. Matematika
UDC 517.977
OPTIMIZATION OF DYNAMIC SYSTEMS
IN THE CLASS OF DISCRETE CONTROLS OF FINITE DEGREE
R. Gabasov, F.M. Kirillova and N.S. Pavlenok
In this paper, we consider problems of optimal control (OC) of linear systems in the class of
discrete controls that are representable as polynomials of finite degree on each time quantization
period. Taking into account the geometric constraints imposed on the coefficients of these
polynomials and on the control variables at each instant of time, we formulate these problems as
complex extremum problems. We propose techniques for calculating optimal programs and
synthesizing feedback-type OC for two classes of control actions, namely, first degree discrete
controls with a finite number of time quantization periods and finite degree controls with a single
time quantization period. These techniques are based on a dynamic implementation of a particular
linear programming (LP) method. The solutions of the OC problems are used to synthesize bounded
stabilizing feedbacks according to the moving control principle. The techniques are illustrated
with numerical examples.
1. Introduction
The mathematical theory of optimal processes [1] is based on rather broad classes of controls
which include, in particular, measurable and piecewise continuous functions. In the constructive
OC theory, which is aimed at the computer-aided solution of problems, more specific classes of
controls are used, such as discrete controls.
The latter may change only at predefined instants of time and keep constant values on the
intervals between these instants. In this paper, we call such controls discrete zero degree controls.
Their natural generalization is given by discrete finite degree controls. In general, discrete finite
degree controls with a constant quantization period are representable as follows:
$$u(t) = \sum_{j=0}^{p} u_j(t_k)\,\varphi_j(t - t_k), \quad t \in [t_k, t_{k+1}), \quad k = 0, \dots, N-1,$$
where $t_k = t_* + kh$, $h = (t^* - t_*)/N$; $\varphi_j(t)$, $t \in [0, h]$, $j = 0, \dots, p$, are given
linearly independent functions, and $u_j(t_k)$, $j = 0, \dots, p$, $k = 0, \dots, N-1$, are the parameters
of the discrete function. The basis functions $\varphi_j(t)$, $t \in [0, h]$, $j = 0, \dots, p$, are chosen
according to the specific features of the OC problem under consideration. Most often we use the
simplest basis functions $\varphi_j(t) = t^j$, $j = 0, \dots, p$.
We denote the set of corresponding controls by $U_N^p$.
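To make the structure of the class $U_N^p$ concrete, the following minimal sketch in Python evaluates a discrete control of finite degree with the monomial basis $\varphi_j(t) = t^j$. The interval, the degree, and the coefficient values are illustrative assumptions, not data from the paper.

import numpy as np

# Assumed illustrative data (not from the paper): control interval [t_*, t^*],
# N quantization periods, polynomial degree p on each period.
t_start, t_end = 0.0, 10.0
N, p = 5, 2
h = (t_end - t_start) / N            # constant quantization period
t_knots = t_start + h * np.arange(N) # t_k = t_* + k*h, k = 0, ..., N-1

# Coefficients u_j(t_k): one row per period k, one column per degree j
# (randomly generated here purely for illustration).
rng = np.random.default_rng(0)
U = rng.uniform(-1.0, 1.0, size=(N, p + 1))

def u(t):
    """Value of u(t) = sum_j u_j(t_k) * (t - t_k)**j on the period containing t."""
    k = min(int((t - t_start) // h), N - 1)  # index of the active period
    tau = t - t_knots[k]                     # local time within the period
    return sum(U[k, j] * tau ** j for j in range(p + 1))

print(u(3.7))

For p = 0 the sketch reproduces the classical piecewise constant (zero degree) controls; for p >= 1 the control is a polynomial of degree p on each quantization period.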
The extension of the classical class of zero degree discrete controls is also useful in qualitative
control theory. It is well known that the investigation of the continuous control system
$$\dot{x} = Ax + bu \qquad (A \in \mathbb{R}^{n \times n},\; b \in \mathbb{R}^{n}) \eqno(1)$$
The work was supported by the Byelorussian Foundation for Basic Research (projects nos. F03M-031,
F02R-008) and the Byelorussian government program of Basic Research "Mathematical Structures".
© 2003 by Allerton Press, Inc.