SISO, Transfer function, Youla

2  Stabilization of SISO Feedback Systems: Transfer Function approach

2.1  Notation
• R denotes the set of real numbers, C denotes the set of complex numbers, C+ denotes the complex numbers with positive real part, C̄+ denotes the complex numbers with non-negative real part, and C− denotes the complex numbers with negative real part.
• the set of polynomials with real coefficients is denoted by P. We will typically use s as the indeterminate. Examples of polynomials are s² + 3s + 2 and 4s³ − s² + 7s + 5.
• the degree of a polynomial p ∈ P, denoted ∂(p), is the highest power of s that appears. For example,

  ∂(s² + 3s + 2) = 2,    ∂(4s³ − s² + 7s + 5) = 3,    ∂(6.2) = 0.
• the set of proper, rational functions is denoted by R, and defined

  R := { n/d : n ∈ P, d ∈ P, ∂(n) ≤ ∂(d) }

The rational functions are ordered pairs of polynomials, often interpreted as fractions. Two rational functions n₁/d₁ and n₂/d₂ are equal if n₁d₂ = n₂d₁, as elements of P. Hence, from these definitions,

  1/(s + 1) = (s − 2)/((s − 2)(s + 1))
We often interpret proper, rational functions as transfer functions of causal, though not necessarily stable, linear systems. Obviously, some care must be exercised in interpreting the transfer function of the state-space system

  [ẋ₁(t)]   [ 0  1 ] [x₁(t)]   [ 0 ]
  [ẋ₂(t)] = [ 2  1 ] [x₂(t)] + [ 1 ] u(t),    y(t) = [ −2  1 ] [x₁(t); x₂(t)]

as 1/(s + 1), depending on the physical meaning of the system and the states: here C(sI − A)⁻¹B = (s − 2)/((s − 2)(s + 1)), and the cancellation at s = 2 removes an unstable mode.
• The set of proper, stable, rational functions is

  S := { n/d : n, d ∈ P, ∂(n) ≤ ∂(d), d(s) ≠ 0 ∀ s ∈ C̄+ }

We often interpret proper, stable rational functions as the transfer functions of causal, stable linear systems, hence the name. Recall, though, that 1/(s + 1) is also the transfer function of an unstable, anticausal system, with a different region of convergence. The same cancellation rules of R apply.
Note that the usual operations of addition and multiplication apply, and that the sets are closed under these operations. Within these sets, define subsets called the units of the set, which are the elements of the set whose multiplicative inverse is also in the set.

  UP := { p ∈ P : 1/p ∈ P }
  UR := { G ∈ R : 1/G ∈ R }
  US := { W ∈ S : 1/W ∈ S }

A little analysis gives that the units are

  UP = { nonzero real numbers }
  UR = { G ∈ R : G(∞) ≠ 0 }
  US = { W ∈ S : W(∞) ≠ 0, W(s₀) ≠ 0 ∀ s₀ ∈ C̄+ }
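The fraction-equality rule above (n₁d₂ = n₂d₁) can be checked mechanically. Below is a small sketch, with representation conventions of my own choosing (coefficient lists, lowest degree first; this is not from the notes), that verifies the 1/(s + 1) example:

```python
# Sketch: rational functions as (num, den) pairs of coefficient lists,
# lowest degree first.  Two fractions n1/d1 and n2/d2 are equal in R
# iff n1*d2 == n2*d1 as elements of P.

def pmul(a, b):
    """Multiply two polynomials given as coefficient lists."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def req(f, g):
    """Equality in R: cross-multiply and compare coefficients."""
    (n1, d1), (n2, d2) = f, g
    lhs, rhs = pmul(n1, d2), pmul(n2, d1)
    m = max(len(lhs), len(rhs))
    lhs += [0.0] * (m - len(lhs))
    rhs += [0.0] * (m - len(rhs))
    return all(abs(x - y) < 1e-9 for x, y in zip(lhs, rhs))

# 1/(s+1) equals (s-2)/((s-2)(s+1)) = (s-2)/(s^2 - s - 2):
f = ([1.0], [1.0, 1.0])
g = ([-2.0, 1.0], [-2.0, -1.0, 1.0])
```

Here `req(f, g)` returns True, exactly the cancellation allowed in R.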
2.2  Facts about polynomials
We begin with some facts about the polynomials:
• If a, c ∈ P, and there exists an element a_c ∈ P such that a = c·a_c, then c is called a divisor of a.
• If a, b ∈ P, and c ∈ P is a divisor of both a and b, then c is called a
common divisor of a and b.
• If d ∈ P is a common divisor of a and b, and every other common divisor c of a and b is also a divisor of d, then d is called a greatest common divisor of a and b.
• If 1 is a greatest common divisor of a and b, then a and b have no
common roots, and are called coprime.
• The set of polynomials forms a Euclidean domain; that is, if a, b ∈ P and b ≠ 0, then there exist polynomials q and r (interpreted as quotient and remainder) such that a = qb + r, and either r = 0 or ∂(r) < ∂(b).
Definition 10 A subset I ⊂ P is called an ideal if for every a, b ∈ I, the sum a + b ∈ I, and for every a ∈ I and x ∈ P, the product ax ∈ I.
Lemma 11 If I ⊂ P is an ideal, then either I = {0}, or I = {qd : q ∈ P},
where d ∈ I is a nonzero element of I of least degree.
Proof: If I = {0}, then we are done, so assume that I has nonzero elements. Let d be any nonzero element of least degree in I. This means, of course, that if r ∈ I, r ≠ 0, then ∂(d) ≤ ∂(r). Now, let a ∈ I be an arbitrary element. Since d ≠ 0, divide a by d to obtain q, r ∈ P such that a = qd + r, and either r = 0 or ∂(r) < ∂(d). Note that since d ∈ I, it must be that qd ∈ I, and also a ∈ I, therefore r = a − qd ∈ I. Hence, if r ≠ 0, this contradicts that d is a nonzero element of I of least degree; therefore, r = 0. This means that a = qd for some q ∈ P, as desired. ♯
Suppose that a, b ∈ P, and define the set
I := {ta + pb : t, p ∈ P} .
Note that if a = b = 0, then I = {0}, so assume that at least one of a or b is nonzero. It is easy to show that I is an ideal (please check this). Let d be
any nonzero element of I of least degree. Since a and b are in I, by Lemma 11, there must exist polynomials q_a, q_b ∈ P such that a = q_a·d, b = q_b·d. Hence d is a common divisor of a and b. Moreover, since d ∈ I, there must exist t_d, p_d ∈ P such that t_d·a + p_d·b = d. Next, suppose that c ∈ P is a common divisor of a and b. Then, there are polynomials a_c, b_c such that a = a_c·c, b = b_c·c. Substituting gives that d = (t_d·a_c + p_d·b_c)·c, so that in fact, c is a divisor of d. Hence, d is a greatest common divisor of the pair (a, b).
Summarizing,
Theorem 12 Let a, b ∈ P. If d ∈ P is a greatest common divisor of the
pair (a, b), then there exist t, p ∈ P such that ta + pb = d.
Also, we have
Theorem 13 Let a, b ∈ P. If a and b are coprime, then there exist t, p ∈ P
such that ta + pb = 1.
DIVISION ALGORITHM: To find the greatest common divisor of two polynomials a and b, Euclid's algorithm can be used. Assume that both polynomials are nonzero (otherwise the gcd is easy).

Divide a by b to obtain polynomials q₁ and r₁ such that a = bq₁ + r₁, where r₁ = 0 or ∂(r₁) < ∂(b). Note that a polynomial d is a common divisor of the pair (a, b) if and only if d is a common divisor of the pair (b, r₁). Hence gcd(a, b) = gcd(b, r₁).

Define q₂, r₂ by dividing b by r₁. Hence

  b = q₂r₁ + r₂

and either r₂ = 0 or ∂(r₂) < ∂(r₁). Also, gcd(r₁, r₂) = gcd(b, r₁) = gcd(a, b).

Continue iteratively, via

  ri−1 = qi+1·ri + ri+1

with ri+1 = 0 or ∂(ri+1) < ∂(ri), and gcd(ri, ri+1) = gcd(a, b).

Each time the degree of ri decreases, so for some finite γ, rγ = 0. Hence

  rγ−2 = qγ·rγ−1

and gcd(rγ−1, rγ−2) = gcd(a, b). But since rγ−1 is a divisor of rγ−2, it must be that gcd(a, b) = gcd(rγ−1, rγ−2) = rγ−1. Back-substitution into the recursive relations gives the polynomials t and p such that

  ta + pb = rγ−1
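The back-substitution step is exactly the extended Euclidean algorithm. A sketch in Python follows; the coefficient-list representation (lowest degree first) and the numeric tolerance are my own conventions, and for real polynomials the gcd is only determined up to a nonzero scale factor:

```python
# Extended Euclid for polynomials: returns (g, t, p) with t*a + p*b = g.

def trim(c):
    """Drop (near-)zero high-order coefficients; zero polynomial -> [0.0]."""
    c = list(c)
    while len(c) > 1 and abs(c[-1]) < 1e-9:
        c.pop()
    return c

def padd(a, b):
    n = max(len(a), len(b))
    return trim([(a[i] if i < len(a) else 0.0) +
                 (b[i] if i < len(b) else 0.0) for i in range(n)])

def pmul(a, b):
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return trim(out)

def pdivmod(a, b):
    """Euclidean division: (q, r) with a = q*b + r and deg(r) < deg(b)."""
    a, b = trim(a), trim(b)
    q = [0.0] * max(len(a) - len(b) + 1, 1)
    r = list(a)
    while len(r) >= len(b) and any(abs(c) > 1e-9 for c in r):
        k = len(r) - len(b)           # degree of the next quotient term
        coef = r[-1] / b[-1]
        q[k] = coef
        for j, bj in enumerate(b):    # subtract coef * s^k * b from r
            r[k + j] -= coef * bj
        r = trim(r)
    return trim(q), trim(r)

def extended_gcd(a, b):
    """Return (g, t, p) with t*a + p*b = g = gcd(a, b), up to scale."""
    r0, r1 = trim(a), trim(b)         # invariant: t*a + p*b = r
    t0, p0 = [1.0], [0.0]
    t1, p1 = [0.0], [1.0]
    while any(abs(c) > 1e-9 for c in r1):
        qq, rr = pdivmod(r0, r1)
        neg_q = [-c for c in qq]
        r0, r1 = r1, rr
        t0, t1 = t1, padd(t0, pmul(neg_q, t1))
        p0, p1 = p1, padd(p0, pmul(neg_q, p1))
    return r0, t0, p0

# a = (s+1)(s+2) = 2 + 3s + s^2,   b = (s+1)(s+3) = 3 + 4s + s^2
a = [2.0, 3.0, 1.0]
b = [3.0, 4.0, 1.0]
g, t, p = extended_gcd(a, b)
check = padd(pmul(t, a), pmul(p, b))  # should reproduce g coefficientwise
```

Running this on the example returns g proportional to s + 1, together with t, p satisfying t·a + p·b = g, which is the Bezout equation for polynomials (Theorem 12).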
2.3  Feedback structure and closed-loop stability
Consider the standard feedback structure shown below, where C, P ∈ R.
[Figure: standard feedback loop. The input u₁ enters a summing junction whose output e₁ = u₁ − Pe₂ feeds C; the input u₂ enters a second summing junction whose output e₂ = u₂ + Ce₁ feeds P]
The equations at the two summing junctions are written in the matrix form

  [u₁]   [  1   P ] [e₁]
  [u₂] = [ −C   1 ] [e₂]

which when inverted gives

  [e₁]   [ 1/(1+PC)   −P/(1+PC) ] [u₁]
  [e₂] = [ C/(1+PC)    1/(1+PC) ] [u₂]        (2.7)
Definition 14 The closed-loop system is well-posed if all of the closed-loop transfer functions are in R.
Well-posedness means that for any signals u₁ and u₂, there exist unique signals e₁ and e₂ solving the loop equations (2.7). If the closed-loop system is not well-posed, then for given signals u₁ and u₂, there are either an infinite number of solutions to the loop equations, or there are no solutions at all.
Theorem 15 The closed-loop system is well-posed if and only if the rational
function 1 + P C ∈ UR .
Corollary 16 The closed-loop system is well-posed if and only if P(s)C(s)|s=∞ ≠ −1.
Definition 17 The closed-loop system is stable if all of the closed-loop transfer functions are in S.
Remark 18 This is the correct definition. It is equivalent to the following
definition: Suppose that (AP , BP , CP , DP ) is a stabilizable and detectable
realization of P , and that (AC , BC , CC , DC ) is a stabilizable and detectable
realization of C. Let P(s) = DP + CP(sI − AP)⁻¹BP, and C(s) = DC + CC(sI − AC)⁻¹BC. Then, the closed-loop system is defined to be stable if the
“A” matrix of the closed-loop interconnection is a Hurwitz matrix. Moreover,
the Hurwitzness (or not) of the closed-loop “A” matrix depends only on P (s)
and C(s), and not on the particular realizations.
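As a quick illustration of this equivalence (the plant, realization, and controller below are my own toy choices, not from the notes), take P(s) = 1/(s − 1) with minimal realization (A, B, C, D) = (1, 1, 1, 0) and a static controller C(s) = k. The closed-loop "A" matrix is the scalar 1 − k, while the transfer-function definition examines the pole of 1/(1 + PC) = (s − 1)/(s − 1 + k); both criteria demand k > 1:

```python
# Toy check of Remark 18: Hurwitz closed-loop "A" matrix vs. all
# closed-loop transfer functions in S, for P(s) = 1/(s-1), C(s) = k.
for k in [0.5, 1.0, 2.0, 10.0]:
    A_cl = 1.0 - k          # closed-loop state matrix (scalar here)
    hurwitz = A_cl < 0      # Hurwitz: eigenvalues in the open left half plane
    pole = 1.0 - k          # pole of 1/(1+PC) = (s-1)/(s-1+k)
    tf_stable = pole < 0
    assert hurwitz == tf_stable   # the two stability notions agree
```

The realization is stabilizable and detectable (it is minimal), as the remark requires; with a hidden unstable mode the two notions could disagree.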
2.4  Parametrization of all stabilizing controllers for stable plants
Consider the special situation when the plant itself is stable, so P ∈ S. The set of all C ∈ R that result in a well-posed, stable closed-loop system is given by

  S(P) := { Q/(1 − PQ) : Q ∈ S, 1 − P(∞)Q(∞) ≠ 0 }

To see this, first suppose that C ∈ R, 1 + P(∞)C(∞) ≠ 0, and the closed-loop system is stable. Define

  Q := C/(1 + PC).

Since 1 + PC ∈ UR, it is clear that Q ∈ R. Also, Q ∈ S, since Q is in fact the transfer function from u₁ to e₂, and the closed-loop system is stable. Also, inverting the definition for Q gives that C = Q/(1 − PQ).

Conversely, let Q ∈ S be given, with 1 − P(∞)Q(∞) ≠ 0. Define C := Q/(1 − PQ). Because of the assumption on Q(∞), C is certainly proper. Also, 1 + PC = 1/(1 − QP). Since Q and P are finite at s = ∞, it is clear that 1 + PC ≠ 0 at s = ∞. The closed-loop transfer functions are

  1/(1 + PC) = 1 − QP
  C/(1 + PC) = Q
  P/(1 + PC) = P(1 − QP)

which are elements of S, since they are products and sums of P and Q, both of which are assumed to be in S.
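The identities 1/(1 + PC) = 1 − PQ and C/(1 + PC) = Q can be spot-checked numerically. A minimal sketch, where P and Q are arbitrary stable choices of mine (not from the notes):

```python
# Numeric sanity check of the stable-plant parametrization: with
# P, Q in S and C = Q/(1 - P*Q), the closed-loop maps should satisfy
# 1/(1 + P*C) = 1 - P*Q  and  C/(1 + P*C) = Q, pointwise in s.

def P(s):
    return 1.0 / (s + 1.0)       # a stable, proper plant (illustrative)

def Q(s):
    return 1.0 / (s + 2.0)       # the free Youla parameter, Q in S

def C(s):
    return Q(s) / (1.0 - P(s) * Q(s))    # the stabilizing controller

for s in [0.5 + 1j, 2.0, -0.3 + 4j]:    # sample points away from poles
    sens = 1.0 / (1.0 + P(s) * C(s))    # closed-loop sensitivity
    assert abs(sens - (1.0 - P(s) * Q(s))) < 1e-12
    assert abs(C(s) * sens - Q(s)) < 1e-12   # the u1 -> e2 map equals Q
```

The check passes at every sample point because the algebra above is an identity in s, not an approximation.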
2.5  Coprime factorizations over S
Definition 19 If N, D ∈ S, and there exist U, V ∈ S such that U N + V D =
1, then the pair (N, D) is called coprime over the set S. The equation
U N + V D = 1 is called the Bezout equation.
Remark: This notion of coprimeness is also derived from the greatest common divisor being a unit (in S). This is covered in Section 2.1 of [Vid]. Because of time constraints, we will not pursue this idea further.
Another way to write this is: given N, D ∈ S, the pair (N, D) is called coprime if there exist Ũ, Ṽ ∈ S such that the matrix

  [ N  −Ṽ ]
  [ D   Ũ ]

is a unit in S²ˣ² (i.e., it has an inverse in S²ˣ²). In fact, if the Bezout relation is satisfied, then

  [  U  V ] [ N  −V ]   [ 1  0 ]
  [ −D  N ] [ D   U ] = [ 0  1 ]
Definition 20 Given G ∈ R, suppose that N ∈ S, D ∈ S are coprime in the sense of Definition 19, and N/D = G; then the pair (N, D) is called a coprime factorization of G.
A main fact is that a coprime factorization always exists for a proper, rational transfer function.
Lemma 21 Every G ∈ R has a coprime factorization over S.
Proof: Write G(s) = a(s)/b(s), where a and b are polynomials with no common roots, so that gcd(a, b) = 1. Let n denote ∂(b). Since G is proper, ∂(a) ≤ n. Since gcd(a, b) = 1, find polynomials t and p such that t·a + p·b = 1. Let c₁ and c₂ be any n'th order polynomials, each with roots in the open left-half plane. Call c(s) = c₁(s)c₂(s). Certainly

  c·t·a + c·p·b = c        (2.8)

and for any polynomial γ, −γba + γab = 0. Adding these gives that for any polynomial γ

  (ct − γb)a + (cp + γa)b = c.

Pick γ so that ∂(ct − γb) < ∂(b) = n (divide ct by b, and call the quotient γ). Then ∂((ct − γb)a) < 2n, so ∂(cp + γa) = n. Divide equation (2.8) by c to get

  U·N + V·D = 1,  where  U := (ct − γb)/c₁,  N := a/c₂,  V := (cp + γa)/c₁,  D := b/c₂.

Note that in this construction

• U(∞) = 0
• V(∞) ≠ 0, D(∞) ≠ 0
• N(∞) = 0 if and only if G(∞) = 0.
Remark: This is one way to get a coprime factorization. Other ways, based
on state-space realizations of G(s), will be covered later.
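As a concrete instance (my own example, not worked in the notes), consider the unstable plant G(s) = 1/(s − 1). Dividing numerator and denominator by the stable polynomial s + 1 gives N = 1/(s + 1) and D = (s − 1)/(s + 1), and the constant Bezout pair U = 2, V = 1 satisfies UN + V D = (2 + s − 1)/(s + 1) = 1. A numeric spot check:

```python
# Coprime factorization over S of the unstable plant G(s) = 1/(s - 1):
#   N = 1/(s+1),  D = (s-1)/(s+1),  with Bezout elements U = 2, V = 1.

def N(s): return 1.0 / (s + 1.0)
def D(s): return (s - 1.0) / (s + 1.0)
U, V = 2.0, 1.0

def G(s): return 1.0 / (s - 1.0)

for s in [0.7 + 2j, 3.0, 1j]:
    assert abs(N(s) / D(s) - G(s)) < 1e-12         # N/D recovers G
    assert abs(U * N(s) + V * D(s) - 1.0) < 1e-12  # Bezout identity

# The RHP pole of G at s = 1 is exactly a zero of D:
assert abs(D(1.0)) < 1e-12
```

The last line anticipates the characterization of right-half-plane poles below: D vanishes precisely at the unstable pole of G.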
Lemma 22 (Characterization of right-half plane poles) Let (N, D) be a coprime factorization (over S) of G ∈ R. Suppose that s₀ ∈ C, with Re(s₀) ≥ 0. Then s₀ is a pole of G if and only if D(s₀) = 0.
Proof: ⇒ Since N and D are in S, they are both finite at s₀. If D(s₀) ≠ 0, then G(s₀) = N(s₀)/D(s₀) is finite, and s₀ is not a pole of G.

⇐ Since (N, D) is a coprime factorization, there exist elements U, V ∈ S with UN + V D = 1. Divide by D to give

  UG + V = 1/D.

If D(s₀) = 0, then the right-hand side of this equation has a pole at s₀. Both U and V are finite at s₀, hence it must be that G has a pole at s₀. ♯
Lemma 23 (Characterization of right-half plane zeros) Let (N, D) be a coprime factorization (over S) of G ∈ R. Suppose that s₀ ∈ C, with Re(s₀) ≥ 0. Then s₀ is a zero of G if and only if N(s₀) = 0.
If N and D are coprime, then the following lemma characterizes all proper
stable solutions to the linear equation XN + Y D = 0. This is used in
parametrizing all of the stabilizing controllers for a given plant.
Lemma 24 Suppose that N, D are elements of S, and are coprime, as in Definition 19. Then

  { (X, Y) : X, Y ∈ S, XN + Y D = 0 }  =  { (QD, −QN) : Q ∈ S }
Proof: It is clear that for any Q ∈ S, if we define X := QD, Y := −QN, then X, Y ∈ S, and XN + Y D = 0. Conversely, suppose that X, Y ∈ S, and XN + Y D = 0. Let U, V ∈ S be chosen to satisfy the Bezout relation for N and D. Then, we have

  [ N  −V ] [  U  V ]   [ 1  0 ]
  [ D   U ] [ −D  N ] = [ 0  1 ]

Pre-multiplying this equation by [X Y], and using XN + Y D = 0, gives

  [ 0  −XV + YU ] [  U  V ]
                  [ −D  N ]  =  [ X  Y ]

This implies that X = (XV − YU)D, and Y = −(XV − YU)N. Defining Q := XV − YU completes the proof. ♯
2.6  Parametrization of all stabilizing controllers
Theorem 25 (Stability of the closed-loop system in terms of coprime factorizations) Let P ∈ R and C ∈ R be given. Suppose that (Np, Dp) is a coprime factorization of P, and (Nc, Dc) is a coprime factorization of C. Then 1 + P(∞)C(∞) ≠ 0 if and only if Nc(∞)Np(∞) + Dc(∞)Dp(∞) ≠ 0. Furthermore, the closed-loop system is stable if and only if

  NcNp + DcDp ∈ US

Proof: By substituting the coprime factorizations, the 4 closed-loop transfer functions take on the form

  [ 1/(1+PC)    P/(1+PC) ]     [ Dc ]
  [ C/(1+PC)   PC/(1+PC) ]  =  [ Nc ] (NcNp + DcDp)⁻¹ [ Dp  Np ]

Recall that the closed-loop system is defined to be stable if all 4 of these transfer functions are elements of S.

⇐ If NcNp + DcDp ∈ US, then 1/(NcNp + DcDp) ∈ S, and all four closed-loop transfer functions are products of elements in S, and hence are stable.

⇒ Suppose that all of the closed-loop transfer functions are stable. Let (Up, Vp) and (Uc, Vc) be the elements of S which comprise the Bezout identities for the coprime factorizations of P and C. Since the closed-loop transfer functions are stable, it must be that

             [ 1/(1+PC)    P/(1+PC) ] [ Vp ]
  [ Vc  Uc ] [ C/(1+PC)   PC/(1+PC) ] [ Up ]  ∈  S

But this product simplifies to 1/(NcNp + DcDp), so the claim is true. ♯
Theorem 26 Let P ∈ R and C ∈ R be given, with 1 + P(∞)C(∞) ≠ 0. Suppose that (Np, Dp) is a coprime factorization of P. The closed-loop system is stable if and only if there exists a coprime factorization (Nc, Dc) of C such that

  NcNp + DcDp = 1.
Theorem 27 Let P ∈ R be given. Let S(P) denote the set of all proper, rational controllers C such that the closed-loop system is stable. Let (N, D) be a coprime factorization of P, and let (U, V) be the corresponding elements of the Bezout identity. Then

  S(P) = { (U + QD)/(V − QN) : Q ∈ S, (V − QN)(∞) ≠ 0 }

Remark: Every proper, rational controller C satisfying

• 1 + P(∞)C(∞) ≠ 0
• the closed-loop system is stable

has a coprime factorization C = Nc/Dc, where

  Nc = U + QD,    Dc = V − QN,

for some Q ∈ S. Furthermore, any Q ∈ S satisfying the constraint at s = ∞ defines a proper, rational controller that results in a well-posed, stable closed-loop system.
Proof: ⊃ Let Q ∈ S be given, with (V − QN)(∞) ≠ 0. Define Nc := U + QD, Dc := V − QN. Since Dc(∞) ≠ 0, we must have that C := Nc/Dc ∈ R. Also note that

  1 + PC = 1 + N(U + QD)/(D(V − QN)) = 1/(D(V − QN))

which is ≠ 0 at s = ∞. Now,

  NcN + DcD = (U + QD)N + (V − QN)D
            = UN + QDN + V D − QND
            = UN + V D
            = 1

Hence, the pair (Nc, Dc) is indeed a coprime factorization of C, and by Theorem 26, this controller results in a stable closed loop.

⊂ Suppose that C ∈ R stabilizes P. Then by Theorem 26, there exists a coprime factorization (Nc, Dc) of C such that

  NcN + DcD = 1

But we also have UN + V D = 1, so upon subtracting these equations, we are left with

  (Nc − U)N + (Dc − V)D = 0

Using Lemma 24, there must be a Q ∈ S such that

  Nc − U = QD,    Dc − V = −QN

so that Nc = U + QD, and Dc = V − QN for some Q ∈ S. ♯
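Continuing the G(s) = 1/(s − 1) example from the coprime-factorization discussion (again my own illustrative data, not from the notes), Theorem 27 says every stabilizing controller is C = (U + QD)/(V − QN) for some Q ∈ S. The sketch below spot-checks the two identities used in the proof, NcN + DcD = 1 and 1/(1 + PC) = D(V − QN):

```python
# Youla parametrization for the unstable plant P(s) = 1/(s - 1), with
#   N = 1/(s+1),  D = (s-1)/(s+1),  Bezout elements U = 2, V = 1.
# Any Q in S with (V - Q*N)(inf) != 0 gives a stabilizing controller.

def N(s): return 1.0 / (s + 1.0)
def D(s): return (s - 1.0) / (s + 1.0)
U_, V_ = 2.0, 1.0

def Q(s): return 1.0 / (s + 3.0)     # an arbitrary choice of Q in S

def P(s): return 1.0 / (s - 1.0)
def C(s): return (U_ + Q(s) * D(s)) / (V_ - Q(s) * N(s))

for s in [0.4 + 2j, 5.0, 1.5j]:
    Nc, Dc = U_ + Q(s) * D(s), V_ - Q(s) * N(s)
    # (Nc, Dc) satisfies the Bezout-style identity Nc*N + Dc*D = 1:
    assert abs(Nc * N(s) + Dc * D(s) - 1.0) < 1e-12
    # and the sensitivity is the stable product D*(V - Q*N):
    assert abs(1.0 / (1.0 + P(s) * C(s)) - D(s) * Dc) < 1e-12
```

Since D, V, Q, and N are all in S, the sensitivity D(V − QN) is stable by construction, with no need to examine the unstable pole of P directly.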
2.7  References

The best is [Vid], though it contains a lot of extraneous, yet interesting, material.

[Vid] Chapters 2, 3 and Appendix A; [SanPS] Section 3.7; [GreL] Appendix A; [Fra] Chapter 4; [SkoP] Section 4.8; [BoyB] Chapter 7; [DoyFT] Chapter ?; [ZhoDG] Section 5.4.
1. Read Section 2. In Section 2.2, you should read the bullet about polynomials being a Euclidean domain, but you can skip Definition 10, Lemma 11, and Theorem 12. However, Theorem 13 is important, and the Euclidean division algorithm, which follows, proves it (and proves Theorem 12 as well). Note that the construction of a coprime factorization (Lemma 21) is proven directly, without a "magical" state-space formula. Beyond that lemma, though, most of the ideas remain the same as in the MIMO case covered in class.
2. Suppose Re(α) ≥ 0. Which closed-loop transfer function(s) is (are)
guaranteed not to be in S if C has a pole at α and P has a zero at α? Is
it possible (find an example or proof) that the other closed-loop transfer
functions are in S? What about the case where C has a zero at α and
P has a pole at α?
3. Assume that z, p ∈ C, with Re(p) ≥ 0, Re(z) ≥ 0. Suppose that z is a zero of P(s), and p is a pole of P(s). Suppose that C is a stabilizing controller for the plant P. Show that

  1/(1 + P(s)C(s))|s=p = 0,    1/(1 + P(s)C(s))|s=z = 1,

  P(s)C(s)/(1 + P(s)C(s))|s=z = 0,    P(s)C(s)/(1 + P(s)C(s))|s=p = 1.

This can be done several ways (one way is to use the lemmas about RHP poles and zeros in coprime factorization descriptions). Also, show by example that these 4 constraints do not necessarily hold for open-loop poles and zeros that are in the open left-half plane.
4. Let s₀ ∈ C have Re(s₀) ≥ 0. Let γ ∈ C be given. Find a clean parametrization of the set

  { Q ∈ S : Q(s₀) = γ }
5. Suppose P ∈ R is stable, so P ∈ S. Show that N := P, D := 1 is a coprime factorization (over S) of P. Compare the general Youla parametrization for S(P) (Theorem 27) with the simple characterization for stable plants. Draw a block diagram of the closed-loop system, using the controller parametrization (the diagram should contain 2 copies of P and one Q). Explain intuitively why the feedback system is stable, and why this architecture would not work for unstable plants.
6. The question one should have after this section is “Why do we go to
the trouble of using the set S to do the factorizations? What’s wrong
with the polynomials, P?” So, let’s study this. Here are the things to
recognize/verify:
(a) Section 2.3 remains unchanged.
(b) Section 2.4 need not be considered.
(c) In Section 2.5, restate Definitions 19 and 20 and Lemma 21 with P replacing S everywhere. The proof of Lemma 21 is just the first 4 sentences (i.e., 2-3 lines) of the original proof.
(d) Lemmas 22 and 23 now hold for any point in the complex plane.
(e) Replace S with P everywhere in Lemma 24.
(f) Let PH be the set of "stable" polynomials, that is, those with roots in the open left-half plane. We will call them the "Hurwitz" polynomials. Hence

  PH := { p ∈ P : p(s) ≠ 0 ∀ s ∈ C̄+ }

In Section 2.6, Theorem 25 is now written in terms of coprime factorizations of P and C over P. Make 1 + P(∞)C(∞) ≠ 0 a requirement, and drop the "if and only if Nc(∞)Np(∞) + Dc(∞)Dp(∞) ≠ 0", since the right-hand side of this doesn't make sense for polynomials. Also, change the stability condition to

  NcNp + DcDp ∈ PH
(g) Theorem 26 gets dropped.
(h) Based on Theorem 25, we simply need to parametrize all polynomials Nc and Dc such that

  NcNp + DcDp ∈ PH

This is easy (using 2 free variables, λ and q), based on the Bezout identity and Lemma 24:

Theorem (to prove): Given P and a coprime factorization (Np, Dp) over P, with corresponding Bezout elements Up, Vp over P. Then

  { (Nc, Dc) : Nc, Dc ∈ P, NcNp + DcDp ∈ PH }  =  { (λUp + qDp, λVp − qNp) : λ ∈ PH, q ∈ P }

(i) What is deficient with this as a parametrization of all stabilizing controllers? Hint: Take the data from problem 16, do the polynomial factorizations, pick an arbitrary λ and q as above, and see what's (potentially) wrong with the controller you get. Is there a clean way to fix this?
7. Using the FL(J, Q) parametrization for S(P), the closed-loop system appears as

[Figure: plant P in feedback with the stabilizing controller, which is built from J with Q closed around its lower loop; exogenous inputs u₁, u₂ enter the loop]

If J is built with no unstable hidden modes (i.e., a stabilizable and detectable realization), then a question is: can the controller actually be implemented in this manner? In other words (using the results from problem ??), are all of the transfer functions indicated below in S?

[Figure: the same loop, with injection signals vP, vQ, vJ1, vJ2 and measured outputs yP, yQ, yJ1, yJ2 at the interconnections of P, J, and Q]
8. Assume that the feedback loop shown below is stable. Let TN denote the transfer function from d → e.

[Figure: unity-feedback loop with loop transfer function L, external input d entering at the summing junction, and error signal e]

Find TN in terms of L, and then invert this, to obtain L in terms of TN. Pick α > 0. At any frequency ω, show that

  |L(jω) + 1| < α  ⇒  |TN(jω)| > (1 − α)/α

In terms of the Nyquist stability criterion, what does this imply when α is very small?

Similarly, choose β > 1. Show that

  |TN(jω)| > β  ⇒  |1 + L(jω)| < 1/(β − 1)

What does this imply when β is very large?
9. Let P denote the plant, and use P to also denote its transfer function. Suppose

  P(s) = 1/(s² + 2·0.05·s + 1)

The goal is to make the closed-loop transfer functions satisfy

  Tr→y = ωn²/(s² + 2·0.75·ωn·s + ωn²),    Tn→y = −(1/(τs + 1))·Tr→y

where ωn = 3 and τ = 1/(4ωn).

(a) Design a 2-degree-of-freedom compensator to achieve this closed-loop behavior.

(b) What are the gain margins, phase margins, and associated crossover frequencies achieved with this design?

(c) Plot the 2-by-3 open-loop/closed-loop array of Bode magnitude plots.
10. Suppose that J ∈ R²ˣ²; in other words, J is a 2 × 2 matrix of real-rational, proper transfer functions. Pictorially, we draw this as

[Figure: block J with inputs w₁, w₂ and outputs z₁, z₂]

The equations implied by this picture are

  z₁ = J₁₁w₁ + J₁₂w₂
  z₂ = J₂₁w₁ + J₂₂w₂

Suppose that the signal w₂ is generated by feedback from z₂ through a transfer function Q ∈ R, so w₂ = Qz₂. This is shown below.

[Figure: J with Q closing the loop from z₂ back to w₂]

Find the transfer function from w₁ to z₁ with this loop closed. We will use the notation FL(J, Q) for this transfer function. This indicates that the lower loop of J is closed with Q (assume 1 − J₂₂(∞)Q(∞) ≠ 0).
11. Suppose that P, C ∈ R, and 1 + P(∞)C(∞) ≠ 0. Define transfer functions S and T as

  S := 1/(1 + PC),    T := PC/(1 + PC)

Find a simple relationship between S and T. Conclude from this that S ∈ S if and only if T ∈ S.
12. Let P denote the plant, and use P to also denote its transfer function. Consider the 1-degree-of-freedom architecture. Assume P(0) ≠ 0.

(a) What constraint should be placed on the Youla parameter Q so that the resulting controller has a pole at s = 0 (i.e., so that the controller has "integral" action)?
(b) If Q₁ and Q₂ both satisfy the above constraint, does a convex combination of them, i.e., αQ₁ + (1 − α)Q₂, also satisfy the constraint?
13. Let P ∈ R be given. Suppose that (N, D) is a coprime factorization of P, and the corresponding elements that make up the Bezout identity are U and V. Without loss of generality (see the existence proof of the coprime factorization), assume that V ∈ UR. Let S(P) be the set of all stabilizing controllers for P. Find J ∈ R²ˣ² such that

  S(P) = { FU(J, Q) : Q ∈ S, 1 + J₁₁(∞)Q(∞) ≠ 0 }

(HINT: J is not quite unique.)
14. Suppose that z ∈ C has Re(z) ≥ 0, and z is not a zero of P ∈ R. Give
a parametrization of all controllers C with at least one pole at s = z,
which stabilize P .
15. (You need to use the Bezout identity cleverly. Once done, your solution should be about 3 lines.) Let P ∈ R be the transfer function of a finite-dimensional, linear, time-invariant system (not necessarily stable). Find a parametrization (involving a single free parameter) of the set of all possible stable input-output pairs, which is defined as

  { (y, u) : y ∈ S, u ∈ S, y = Pu } ⊂ S²

This set is referred to as the graph of P, G(P). It is an important concept, because once a plant is stabilized, applying exponentially decaying inputs to the feedback system will produce plant input/output pairs (u, y) that are both stable. Hence, the only behavior of the plant that is observed is the stable input/output pairs that the plant can produce.

Hint #1: Use a coprime factorization of P, and the Bezout identity.

Hint #2: If P ∈ S, then it is easy; in fact

  G(P) = { (Pξ, ξ) : ξ ∈ S }

Try to generalize this.
16. Consider the standard feedback system, with P(s) = 2/(s(s − 2)).

[Figure: standard feedback loop with inputs u₁, u₂ at the two summing junctions, C and P in the loop, and y the plant output]

(a) Find the parametrization of all stabilizing controllers for P.

(b) In terms of the Q parameter, find the closed-loop transfer functions from u₁ → y and u₂ → y.

(c) Use simple results about transfer functions (final value theorem, steady-state response due to sinusoids) to find (see remark) a stabilizing controller C(s) such that the final value of u₁ − y is 0 when u₁ is a ramp and/or u₂ is a sinusoid of frequency 3 rad/sec. Remark: You do not have to explicitly find C (or Q), but you should outline how you would do so.
17. Consider a 2-degree-of-freedom design, as shown below. Here C ∈ R¹ˣ², mapping a reference input and a noisy measurement into the control action.

(a) Write the closed-loop transfer function matrix (denoted H) relating [r; d; n] to [u; yP]. It should be in terms of P, C₁ and C₂.

(b) By definition, the closed-loop system is stable if and only if H ∈ S²ˣ³. Assume (N, D) is a coprime factorization over S of P. Show that the closed-loop system is stable if and only if C₂ ∈ S(P) and there exists a Q₁ ∈ S with

  C₁/(1 + PC₂) = DQ₁,    PC₁/(1 + PC₂) = NQ₁

(c) Using Q ∈ S to parametrize C₂ ∈ S(P), express an allowable C₁ in terms of Q, Q₁ and any of N, D, U, V. Using this expression, clarify how open-loop, right-half-plane poles and zeros of P constrain certain closed-loop transfer functions. Remember that at an open-loop right-half-plane pole, UN = 1, and that at an open-loop right-half-plane zero, V D = 1.

(d) Prove that C₁ can have unstable poles, as long as they are unstable poles of C₂ as well. So, while C₁ is thought of as a feedforward controller, it can have unstable modes, as long as they are implemented within the feedback loop itself. This shows that C can be decomposed into the form C₁ = K₁Cff, C₂ = K₁K₂, where Cff ∈ S.

(e) Using Q ∈ S to parametrize C₂ ∈ S(P), and Q₁ (above) to parametrize C₁, write the closed-loop transfer function matrix H in terms of Q, Q₁ and any of N, D, U, V.

(f) Let J be defined as in problem 13. Show that C is of the form
18. Let P ∈ R be given. Suppose (N, D) is a coprime factorization of P (over S). If U ∈ US, show that (NU, DU) is a coprime factorization of P. Conversely, if (N₁, D₁) and (N₂, D₂) are both coprime factorizations of P, show that there is an element U ∈ US such that

  N₁ = N₂U,    D₁ = D₂U.

Hint: In one direction, you will need the Bezout identity.
19. This is an artificial control problem designed to highlight some issues with multiloop plants.

For a ∈ R, define

  Ra := [  cos a   sin a ]
        [ −sin a   cos a ]

For r > 0, define

  Mr := [ r   0  ]
        [ 0  1/r ]

Let φ, θ ∈ R and define the plant and controller

  P := R−φ Mr⁻¹ R−θ,    C(s) := (β/s) Rθ Mr Rφ

Consider a closed-loop system consisting of P and C in feedback (negative feedback). Show that

(a) The closed-loop output-sensitivity function satisfies

  So = (s/(s + β)) I₂

(b) The closed-loop input-complementary-sensitivity function satisfies

  Ti = −(β/(s + β)) I₂

(c) At the 1st input channel to the plant, the gain margin is ∞ (i.e., any positive multiplicative variability of gain in input channel 1 can be tolerated without loss of stability).

(d) At the 2nd input channel to the plant, the gain margin is ∞ (i.e., any positive multiplicative variability of gain in input channel 2 can be tolerated without loss of stability).

(e) The phase margin in input channel 1 is 90° and the phase margin in input channel 2 is also 90°.

(f) By proper choice of θ and φ, small simultaneous variations in the gain in input channels 1 and 2 result in a rapid deterioration of the uncertain output-sensitivity function. In this example, you may use calculations/plots in Matlab. The level of deterioration is related to the magnitude of r.