
ZOR - Mathematical Methods of Operations Research (1995) 41:89-108
Denumerable Controlled Markov Chains with Average Reward Criterion: Sample Path Optimality¹
ROLANDO CAVAZOS-CADENA²
Departamento de Estadística y Cálculo, Universidad Autónoma Agraria Antonio Narro, Buenavista 25315, Saltillo, Coahuila, México
EMMANUEL FERNÁNDEZ-GAUCHERAND³
Systems & Industrial Engineering Department, The University of Arizona, Tucson, AZ 85721, USA
Abstract: We consider discrete-time nonlinear controlled stochastic systems, modeled by controlled
Markov chains with denumerable state space and compact action space. The corresponding stochastic control problem of maximizing average rewards in the long run is studied. Departing from the
most common position which uses expected values of rewards, we focus on a sample path analysis
of the stream of states/rewards. Under a Lyapunov function condition, we show that stationary
policies obtained from the average reward optimality equation are not only average reward optimal,
but indeed sample path average reward optimal, for almost all sample paths.
Key Words: Denumerable Controlled Markov Chains, Average Reward Criterion, Sample Path
Optimality, Lyapunov Function Condition.
1 Introduction
There are numerous applications, in many different fields, of denumerable controlled Markov chain (CMC) models with an infinite planning horizon; see
Bertsekas (1987), Ephremides and Verdú (1989), Ross (1983), Stidham and
Weber (1993), Tijms (1986).
We consider the stochastic control problem of maximizing average rewards in
the long-run, for denumerable CMC. Departing from the most common position
which uses expected values of rewards, we focus on a sample path analysis of the
stream of states/actions. Under a Lyapunov function condition, we show that
¹ Research supported by a U.S.–México Collaborative Research Program funded by the National Science Foundation under grant NSF-INT 9201430, and by CONACyT-México.
² Partially supported by the MAXTOR Foundation for Applied Probability and Statistics, under grant No. 01-01-56/04-93.
³ Research partially supported by the Engineering Foundation under grant RI-A-93-10, and by a grant from the AT&T Foundation.
0340-9422/95/41:1/89-108 $2.50 © 1995 Physica-Verlag, Heidelberg
stationary policies obtained from the average reward optimality equation are
not only expected average reward optimal, but indeed sample path average
reward optimal. For a summary of similar results, but under a different set of conditions from those used here, see Arapostathis et al. (1993), Section 5.3.
The paper is organized as follows. In Section 2 we present the model. Section 3 defines the standard stochastic control problem, under an expected average reward criterion. Section 4 introduces the sample path average reward optimality criterion and states our main result. After some technical preliminaries in Section 5, our main result is proved in Section 6.
2 The Model
We study discrete-time controlled stochastic dynamical systems, modeled by
CMC described by the triplet (S, A, P), where the state space S is a denumerable
set, endowed with the discrete topology; A denotes the control or action set, a
nonempty compact subset of a metric space. Let $K := S \times A$ denote the space of state-action pairs, endowed with the product topology. The evolution of the system is governed by a collection of stochastic matrices $\{P(a) = [p_{x,y}(a)]\}_{a \in A}$, i.e., $P(a)$ is a state transition matrix, with elements $p_{x,y}(a)$.
In addition, to assess the performance of the system, a measurable⁴ (and possibly unbounded) one-stage reward function $r: K \to \mathbb{R}$ is chosen. Thus, at time $t \in \mathbb{N}_0 := \{0, 1, 2, \ldots\}$, the system is observed to be in some state, say $X_t = x \in S$, and a decision $A_t = a \in A$ is taken. Then a reward $r(x, a)$ is obtained, and by the next decision epoch $t + 1$, the state of the system will have evolved to $X_{t+1} = y$ with probability $p_{x,y}(a)$. Given a Borel space $B$, let $\mathcal{C}(B)$ denote the set of all real-valued and continuous functions on $B$. The following continuity assumptions are standard.
Assumption 2.1: For each $x, y \in S$, $p_{x,y}(\cdot) \in \mathcal{C}(A)$; furthermore, $r(\cdot, \cdot) \in \mathcal{C}(K)$. □
Remark 2.1: We are assuming that all actions in $A$ are available to the decision-maker when the system is at any given state $x \in S$; this is done with no loss of generality; see Arapostathis et al. (1993), Section 5.3, and Borkar (1991).
The available information for decision-making at time $t \in \mathbb{N}_0$ is given by the history of the process up to that time, $H_t := (X_0, A_0, X_1, A_1, \ldots, A_{t-1}, X_t)$, which is a random variable taking values in $\mathbb{H}_t$, where
⁴ Given a topological space $W$, its Borel σ-algebra will be denoted by $\mathcal{B}(W)$; measurability will always be understood as Borel measurability henceforth.
$$\mathbb{H}_0 := S, \qquad \mathbb{H}_t := \mathbb{H}_{t-1} \times (A \times S), \qquad \mathbb{H}_\infty := (S \times A)^\infty,$$
are the history spaces, endowed with their product topologies.
An admissible control policy is a (possibly randomized) rule for choosing
actions, which may depend on the entire history of the process up to the present
time ($H_t$); see Arapostathis et al. (1993) and Hernández-Lerma (1989). Thus, a policy is specified by a sequence $\pi = \{\pi_t\}_{t \in \mathbb{N}_0}$ of stochastic kernels $\pi_t$ on $A$ given $\mathbb{H}_t$, that is: a) for each $h_t \in \mathbb{H}_t$, $\pi_t(\cdot \mid h_t)$ is a probability measure on $\mathcal{B}(A)$; and b) for each $B \in \mathcal{B}(A)$, the map $h_t \mapsto \pi_t(B \mid h_t)$ is measurable. The set of all admissible policies will be denoted by $\Pi$. In our subsequent exposition, two classes of policies will be of particular interest: the stationary deterministic and the stationary randomized policies. A policy $\pi \in \Pi$ is said to be stationary deterministic if there exists a decision function $f: S \to A$ such that $A_t = f(x)$ is the action prescribed by $\pi$ at time $t$, if $X_t = x$. The set of all stationary deterministic policies is denoted by $\Pi_{SD}$. On the other hand, a policy $\pi \in \Pi$ is said to be a stationary randomized policy if there exists a stochastic kernel $\gamma$ on $A$ given $S$, such that for each $B \in \mathcal{B}(A)$, $\gamma(B \mid X_t)$ is the probability of the event $[A_t \in B]$, given $H_t = (H_{t-1}, A_{t-1}, X_t)$. The class of all stationary randomized policies is denoted by $\Pi_{SR}$; a policy $\pi \in \Pi_{SD}$ or $\pi \in \Pi_{SR}$ will be equivalently identified with the corresponding decision function $f$ or stochastic kernel $\gamma$, respectively.
Given the initial state $X_0 = x$ and a policy $\pi \in \Pi$, the corresponding state, action and history processes, $\{X_t\}$, $\{A_t\}$ and $\{H_t\}$ respectively, are random processes defined on the canonical probability space $(\mathbb{H}_\infty, \mathcal{B}(\mathbb{H}_\infty), \mathbb{P}_x^\pi)$ via the projections $X_t(h_\infty) := x_t$, $A_t(h_\infty) := a_t$ and $H_t(h_\infty) := h_t$, for each $h_\infty = (x_0, a_0, \ldots, x_t, a_t, \ldots) \in \mathbb{H}_\infty$, where $\mathbb{P}_x^\pi$ is uniquely determined; see Arapostathis et al. (1993), Bertsekas and Shreve (1978), Hinderer (1970), Hernández-Lerma (1989). The corresponding expectation operator is denoted by $\mathbb{E}_x^\pi$. The following notation will also be used in the sequel: given Borel spaces $B$ and $D$ (see Arapostathis et al. (1993)), a) $\mathbb{P}(B)$ denotes the set of all probability measures on $B$; b) $\mathbb{P}(B \mid D)$ denotes the set of all stochastic kernels on $B$ given $D$.
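To fix ideas computationally, the following minimal sketch shows one way the objects just defined can be represented on a finite truncation; the truncation size, transition matrices, reward values and policies below are hypothetical assumptions, not data from the paper.

```python
# Minimal sketch (illustrative, not from the paper): a finite truncation of the
# CMC triplet (S, A, P) together with the one-stage reward r, a stationary
# deterministic policy f, and a stationary randomized policy gamma.
import numpy as np

n_states, n_actions = 6, 2          # truncated state space and finite action set
rng = np.random.default_rng(0)

# P[a][x, y] = p_{x,y}(a): one stochastic matrix per action.
P = rng.random((n_actions, n_states, n_states))
P /= P.sum(axis=2, keepdims=True)

r = rng.normal(size=(n_states, n_actions))      # one-stage reward r(x, a)

f = rng.integers(n_actions, size=n_states)      # stationary deterministic policy f: S -> A

# Stationary randomized policy: gamma[x, a] = gamma({a} | x), a kernel on A given S.
gamma = rng.random((n_states, n_actions))
gamma /= gamma.sum(axis=1, keepdims=True)

def step(x, a):
    """One transition of the controlled chain: returns (reward, next state)."""
    return r[x, a], rng.choice(n_states, p=P[a, x])
```

The sketches that follow reuse this kind of representation to illustrate the quantities introduced later in the paper.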
3 The Stochastic Control Problem
Our interest is in measuring the performance of the system in the long run, i.e.,
after a steady state regime has been reached. A commonly used criterion for this
purpose is the expected long-run average reward, where the stochastic nature of
the stream of rewards is itself averaged by the use of expected values; see
Arapostathis et al. (1993). Thus, we have the following definition.
Expected Average Reward (EAR): The long-run expected average reward obtained by using $\pi \in \Pi$, when the initial state of the system is $x \in S$, is given by
$$J(x, \pi) := \liminf_{N \to \infty} \frac{1}{N+1}\,\mathbb{E}_x^\pi\!\left[\sum_{t=0}^{N} r(X_t, A_t)\right], \qquad (3.1)$$
and the optimal expected average reward is defined as
$$J^*(x) := \sup_{\pi \in \Pi} J(x, \pi). \qquad (3.2)$$
A policy $\pi^* \in \Pi$ is said to be EAR optimal if $J(x, \pi^*) = J^*(x)$, for all $x \in S$.
Remark 3.1: In (3.1) above, the limit superior could also have been used, instead
of the limit inferior. Although both alternatives are equivalent under some
additional conditions, see Theorem 7.2 in Cavazos-Cadena (1991), in general the
behavior of the inferior and superior limits is distinct, see pp. 181-183 in Dynkin
and Yushkevich (1979).
Remark 3.2: In choosing the limit inferior, we are embracing a conservative
philosophy, in that we choose to maximize the worst expectations for the gains.
That is, the limit inferior has the economic interpretation of guaranteed rewards,
even if the (long) horizon is not known in advance, or if the termination point
is not determined by the decision-maker. On the other hand, the limit superior
assumes a bolder, more optimistic position, in that what is being maximized is
the best expectation of the gains. This criterion is rarely considered, since it
makes sense only if the decision-maker has control over when to stop the process.
Our analysis will be carried out under the following assumption, which among
other things guarantees that the expected value in (3.1) above is well defined, and
that EAR optimal stationary policies exist.
Assumption 3.1: Lyapunov Function Condition (LFC). There exists a function $\ell: S \to [0, \infty)$ and a fixed state $z^* \in S$ such that:
(i) For each $(x, a) \in K$,
$$1 + |r(x, a)| + \sum_{y \neq z^*} p_{x,y}(a)\,\ell(y) \leq \ell(x);$$
(ii) For each $x \in S$, the mapping $f \mapsto \mathbb{E}_x^f[\ell(X_1)] = \sum_{y \in S} p_{x,y}(f(x))\,\ell(y)$ is continuous in $f \in \Pi_{SD}$;
(iii) For each $f \in \Pi_{SD}$ and $x \in S$,
$$\mathbb{E}_x^f\big[\ell(X_n)\,I[T > n]\big] \longrightarrow 0 \quad \text{as } n \to \infty,$$
where $T := \min\{m > 0 : X_m = z^*\}$ is the first passage time to state $z^*$, and $I[A]$ denotes the indicator function of the event $A$. □
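To make the drift inequality in (i) concrete, here is a minimal numerical sketch, not taken from the paper, that checks it on a truncation of a hypothetical controlled random walk with $z^* = 0$; the transition law, the reward and the candidate function $\ell$ used below are illustrative assumptions.

```python
# Minimal sketch (not from the paper): numerically checking inequality (i) of the
# Lyapunov Function Condition on a truncation of a hypothetical controlled random
# walk with state space S = {0, 1, 2, ...} and distinguished state z* = 0.
N = 200                      # truncation level used only for the numerical check
ACTIONS = [0.6, 0.7, 0.8]    # hypothetical action set: "service" probabilities
Z_STAR = 0

def p(x, y, a):
    """Birth-death transition law: up w.p. 0.5*(1-a), down w.p. a, stay otherwise."""
    if x == 0:
        return 1.0 if y == 0 else 0.0
    up, down = 0.5 * (1.0 - a), a
    return {x + 1: up, x - 1: down, x: 1.0 - up - down}.get(y, 0.0)

def r(x, a):
    return -float(x)               # holding-cost type reward (negative queue length)

def ell(x):
    return 5.0 * (x + 1) ** 2      # candidate Lyapunov function, quadratic in the state

violations = []
for x in range(N):
    for a in ACTIONS:
        lhs = 1.0 + abs(r(x, a)) + sum(p(x, y, a) * ell(y)
                                       for y in range(N + 1) if y != Z_STAR)
        if lhs > ell(x) + 1e-9:
            violations.append((x, a))

print("LFC (i) holds on the truncation:", not violations)
```

Conditions (ii) and (iii) would still have to be verified analytically; the check above only illustrates the drift inequality.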
The LFC was introduced by Foster (1953) for noncontrolled Markov Chains,
and by Hordijk (1974) for CMC, and has been extensively used in the study of
denumerable CMC with an EAR criterion; see Arapostathis et al. (1993), Section
5.2. Furthermore, Cavazos-Cadena and Hernández-Lerma (1992) have shown
its equivalence, under additional conditions, to several other stability/ergodicity
conditions on the transition law of the system. There are two main results
derived under the LFC: a) EAR optimal stationary policies are shown to exist,
and such policies can be obtained as maximizers in the EAR optimality equation
(EAROE); and b) an ergodic structure is induced in the stream of states/rewards.
We summarize these well-known results in the two lemmas below.
Lemma 3.1: Under Assumptions 2.1 and 3.1, there exist $p^* \in \mathbb{R}$ and $h: S \to \mathbb{R}$ such that the following holds:
(i) $J^*(x) = p^*$, $\forall x \in S$;
(ii) $|h(\cdot)| \leq (1 + \ell(z^*))\,\ell(\cdot)$;
(iii) The pair $(p^*, h(\cdot))$ is a (possibly unbounded) solution to the EAROE, i.e.,
$$p^* + h(x) = \sup_{a \in A}\left[r(x, a) + \sum_{y \in S} p_{x,y}(a)\,h(y)\right], \quad \forall x \in S; \qquad (3.3)$$
(iv) For each $x \in S$, the term within brackets in (3.3) is a continuous function of $a \in A$, and thus it has a maximizer $f^*(x) \in A$. Moreover, the policy $f^* \in \Pi_{SD}$ thus prescribed is EAR optimal.
Proof." For
a proof of results (i), (iii), and (iv) see Arapostathis et al. (1993),
Cavazos-Cadena and Hern~mdez-Lerma (1992), Hordijk (1974). On the other
hand, the result in (ii) is essentially contained in the proof of Theorem 5.1 in
Hordijk (1974); see especially Lemmas 5.3-5.7. However, we have not found an
explicit statement of this inequality, and thus we provide one next. As shown in
the above references, h(') can be defined as
h(x) :=
IEx
y*
1(r(X,, At) -
]
p*) , x e S ,
Lt=O
where T is as in Assumption 3.1. Observe that by iterating the equation in
Assumption 3.1 (i) we get, for each x E S,
$$\mathbb{E}_x^{f^*}\left[\sum_{t=0}^{T-1} \big(|r(X_t, A_t)| + 1\big)\right] \leq \ell(x), \quad x \in S. \qquad (3.4)$$
Therefore,
$$|h(x)| \leq \mathbb{E}_x^{f^*}\left[\sum_{t=0}^{T-1} \big(|r(X_t, A_t)| + |p^*|\big)\right] \leq (1 + |p^*|)\,\mathbb{E}_x^{f^*}\left[\sum_{t=0}^{T-1} \big(|r(X_t, A_t)| + 1\big)\right] \leq (1 + |p^*|)\,\ell(x), \qquad (3.5)$$
where the third inequality follows from (3.4). Now observe that
$$p^* = \frac{\mathbb{E}_{z^*}^{f^*}\left[\sum_{t=0}^{T-1} r(X_t, A_t)\right]}{\mathbb{E}_{z^*}^{f^*}(T)},$$
which follows from the theory of renewal reward processes (see Ross (1970)): under the action of $f^*$, successive visits to the state $z^*$ determine a renewal process; see also Hordijk (1974), pp. 41-42. Then using (3.4) with $x = z^*$, it follows that
$$|p^*| \leq \frac{\ell(z^*)}{\mathbb{E}_{z^*}^{f^*}(T)} \leq \ell(z^*),$$
since $T \geq 1$. Combining this inequality with (3.5), we obtain the desired result. □
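For intuition, on a finite truncation of the model one can approximate a solution $(p^*, h)$ of the EAROE and the maximizer $f^*$ of Lemma 3.1 (iv) numerically. The sketch below uses standard relative value iteration, which is not the construction used in the paper; the truncation size, transition matrices and rewards are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's construction): relative value
# iteration on a finite truncation of a CMC, returning approximations of p*, h(.)
# and a maximizing stationary deterministic policy f* as in Lemma 3.1 (iv).
import numpy as np

n_states, n_actions = 50, 3
rng = np.random.default_rng(0)

# Hypothetical transition matrices P[a][x, y] and rewards r[x, a].
P = rng.random((n_actions, n_states, n_states))
P /= P.sum(axis=2, keepdims=True)
r = -np.abs(rng.normal(size=(n_states, n_actions)))

h = np.zeros(n_states)
z_star = 0                                   # reference state used to anchor h(z*) = 0
for _ in range(5000):
    # One-step lookahead: Q[x, a] = r(x, a) + sum_y p_{x,y}(a) h(y).
    Q = r + np.einsum('axy,y->xa', P, h)
    h_new = Q.max(axis=1)
    p_star = h_new[z_star]                   # gain estimate; equals p* at convergence
    h_new = h_new - p_star                   # re-anchor so that h(z*) = 0
    if np.max(np.abs(h_new - h)) < 1e-10:
        h = h_new
        break
    h = h_new

f_star = Q.argmax(axis=1)                    # maximizer in the (truncated) EAROE
print("approximate p* =", p_star)
print("first few actions of f*:", f_star[:10])
```

On the full denumerable model the paper instead works with the first-passage representation of $h(\cdot)$ given in the proof above; the truncated computation is purely illustrative.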
Lemma 3.2: Let Assumption 3.1 hold.
(i) Let $x \in S$ and $\pi \in \Pi$ be arbitrary. Then:
$$\frac{1}{n+1}\,\mathbb{E}_x^\pi[\ell(X_n)] \longrightarrow 0 \quad \text{as } n \to \infty.$$
(ii) Let $x \in S$ and $\pi \in \Pi$ be arbitrary. Then, for $T$ as in Assumption 3.1,
$$1 \leq \mathbb{E}_x^\pi[T] \leq \ell(x).$$
In particular, for every stationary policy $f \in \Pi_{SD}$, the Markov chain induced by $f$ has a unique invariant distribution $q_f \in \mathbb{P}(S)$, such that
$$q_f(z^*) = \frac{1}{\mathbb{E}_{z^*}^f[T]} \geq \frac{1}{\ell(z^*)} > 0;$$
moreover, the mapping $f \mapsto q_f(z^*)$ is continuous on $f \in \Pi_{SD}$.
(iii) For each $f \in \Pi_{SD}$, the following holds:
$$\frac{1}{n+1}\,\mathbb{E}_x^f\left[\sum_{t=0}^{n} r(X_t, f(X_t))\right] \longrightarrow \sum_{y \in S} q_f(y)\,r(y, f(y)) \quad \text{as } n \to \infty,$$
and
$$\frac{1}{n+1}\sum_{t=0}^{n} r(X_t, f(X_t)) \longrightarrow \sum_{y \in S} q_f(y)\,r(y, f(y)) \quad \text{as } n \to \infty, \quad \mathbb{P}_x^f\text{-a.s.}$$
(iv) Let $\gamma \in \Pi_{SR}$. The Markov chain induced by $\gamma$ has a unique invariant distribution $q_\gamma \in \mathbb{P}(S)$, and:
$$\frac{1}{n+1}\,\mathbb{E}_x^\gamma\left[\sum_{t=0}^{n} r(X_t, A_t)\right] \longrightarrow \sum_{y \in S} q_\gamma(y)\,r^\gamma(y) \quad \text{as } n \to \infty,$$
and
$$\frac{1}{n+1}\sum_{t=0}^{n} r(X_t, A_t) \longrightarrow \sum_{y \in S} q_\gamma(y)\,r^\gamma(y) \quad \text{as } n \to \infty, \quad \mathbb{P}_x^\gamma\text{-a.s.},$$
where
$$r^\gamma(y) := \int_A r(y, a)\,\gamma(da \mid y).$$
(v) Let $W \in \mathcal{C}(K)$ be such that $|W| \leq |r| + L$, for some positive constant $L$. Then $(L+1)\,\ell(\cdot)$ is a Lyapunov function corresponding to $W$, i.e., it satisfies Assumption 3.1, with $W(\cdot, \cdot)$ taken as the one-stage reward function. Furthermore, there exist $p_W^* \in \mathbb{R}$ and $h_W: S \to \mathbb{R}$ such that:
$$p_W^* + h_W(x) = \sup_{a \in A}\left[W(x, a) + \sum_{y \in S} p_{x,y}(a)\,h_W(y)\right], \quad \forall x \in S,$$
that is, $(p_W^*, h_W(\cdot))$ is a solution to the EAROE for the CMC with reward function $W(\cdot, \cdot)$.
Remark 3.3: The result in Lemma 3.2 (i) is due to Hordijk (1974); see also Cavazos-Cadena (1992). For the results in Lemma 3.2 (ii), see Lemma 5.3 and the equivalence between conditions L1 and L5 in Cavazos-Cadena and Hernández-Lerma (1992); see also Theorem 5.8 in Arapostathis et al. (1993). On the other hand, the convergence results in (iii) and (iv) can be easily derived by using the theory of (delayed) renewal reward processes; see Theorem 3.16 and the remarks on pp. 53-54 in Ross (1970). Finally, by applying Lemma 3.1 to the reward function $W(\cdot, \cdot)$, the result in Lemma 3.2 (v) follows immediately.
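As a small illustration of the limits in Lemma 3.2 (iii), the following sketch computes, for a fixed stationary deterministic policy $f$ on a finite truncation with hypothetical data, the invariant distribution $q_f$ and the limiting average $\sum_y q_f(y)\,r(y, f(y))$; the model and the policy are illustrative assumptions, not objects from the paper.

```python
# Minimal sketch (illustrative data): invariant distribution q_f of the Markov chain
# induced by a stationary deterministic policy f on a finite truncation, and the
# long-run average reward of Lemma 3.2 (iii), sum_y q_f(y) r(y, f(y)).
import numpy as np

n_states, n_actions = 50, 3
rng = np.random.default_rng(1)
P = rng.random((n_actions, n_states, n_states))
P /= P.sum(axis=2, keepdims=True)
r = -np.abs(rng.normal(size=(n_states, n_actions)))

f = rng.integers(n_actions, size=n_states)       # an arbitrary stationary policy
P_f = P[f, np.arange(n_states), :]               # induced transition matrix

# Solve q P_f = q, sum(q) = 1, as an (overdetermined) linear system.
A = np.vstack([P_f.T - np.eye(n_states), np.ones(n_states)])
b = np.zeros(n_states + 1); b[-1] = 1.0
q_f, *_ = np.linalg.lstsq(A, b, rcond=None)

long_run_avg = q_f @ r[np.arange(n_states), f]
print("long-run average under f:", long_run_avg)
```

By the almost-sure statement in (iii), a long simulated trajectory under $f$ would produce running averages converging to the same number.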
4 Sample Path Optimality
The EAR criterion of (3.1)-(3.2) is commonly used as an approximation of
undiscounted optimization problems when the planning horizon is very long.
However, this criterion can be grossly underselective, in that the finite horizon
behavior of the stream of costs is completely neglected. Moreover, it can be the
case that EAR optimal policies not only fail to induce a desirable (long) finite
horizon performance, but that the performance actually degrades as the horizon
increases; see examples of this pathology in Flynn (1980). Thus, stronger EAR criteria have been considered; see Arapostathis et al. (1993), Cavazos-Cadena and Fernández-Gaucherand (1993), Dynkin and Yushkevich (1979), Flynn (1980), and Ghosh and Marcus (1992). Also, weighted criteria, which introduce sensitivity to both finite and asymptotic behaviour, have recently been introduced; see Fernández-Gaucherand et al. (1994) and references therein.
If there exists a bounded solution $(p, h(\cdot))$ to the optimality equation, i.e., with $h(\cdot)$ a bounded function, then EAR optimal stationary policies derived as maximizers in the optimality equation have been shown to be also (strong) average optimal and sample path optimal, i.e., the long-run average of rewards along almost all sample paths is optimal; see Arapostathis et al. (1993), Dynkin and Yushkevich (1979), Georgin (1978), and Yushkevich (1973). Undoubtedly, sample
path average reward (SPAR) optimality is a much more desirable property than
just EAR optimality, since a policy has to actually be used by the decision-maker
along nature's selected sample path. However, bounded solutions to the optimality equation necessarily impose very restrictive conditions on the ergodic
structure of the controlled chain; see Arapostathis et al. (1993), Cavazos-Cadena
(1991), and Fernández-Gaucherand et al. (1990). Under the conditions used in
this paper, the solutions to the optimality equation obtained in Lemma 3.1 are
possibly unbounded. For similar results, but under a different set of conditions
from those used here, see the summary in Arapostathis et al. (1993), Section 5.3.
After precisely defining SPAR optimality (see also Arapostathis et al. (1993))
we show in the sequel the SPAR optimality of the EAR optimal stationary
policies in Lemma 3.1 (iv).
Sample Path Average Reward (SPAR): The long-run sample path average reward obtained by $\pi \in \Pi$, when the initial state of the system is $x \in S$, is given by
$$J_S(x, \pi) := \liminf_{N \to \infty} \frac{1}{N+1} \sum_{t=0}^{N} r(X_t, A_t). \qquad (4.1)$$
A policy $\pi^* \in \Pi$ is said to be SPAR optimal if there exists a constant $\bar{p}$ such that for all $x \in S$ we have that:
$$J_S(x, \pi^*) = \bar{p}, \quad \mathbb{P}_x^{\pi^*}\text{-a.s.},$$
while, for all $\pi \in \Pi$ and $x \in S$,
$$J_S(x, \pi) \leq \bar{p}, \quad \mathbb{P}_x^{\pi}\text{-a.s.}$$
The constant $\bar{p}$ is the optimal sample path average reward.
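The sample path quantity in (4.1) can be visualized by simulating a single trajectory under a stationary policy and computing the corresponding average of rewards; the sketch below does this for hypothetical, illustrative model data.

```python
# Minimal sketch (illustrative data): sample path average
# (1/(N+1)) * sum_{t=0}^{N} r(X_t, A_t) along one simulated trajectory under a
# stationary deterministic policy, i.e., the quantity whose liminf defines J_S in (4.1).
import numpy as np

n_states, n_actions, horizon = 50, 3, 100_000
rng = np.random.default_rng(2)
P = rng.random((n_actions, n_states, n_states))
P /= P.sum(axis=2, keepdims=True)
r = -np.abs(rng.normal(size=(n_states, n_actions)))
f = rng.integers(n_actions, size=n_states)       # hypothetical stationary policy

x, total = 0, 0.0
for t in range(horizon):
    a = f[x]                                     # action prescribed by the policy
    total += r[x, a]
    x = rng.choice(n_states, p=P[a, x])          # next state ~ p_{x,.}(a)

print("sample path average over the first", horizon, "periods:", total / horizon)
```

Under the LFC, Theorem 4.1 below asserts that for the policy $f^*$ of Lemma 3.1 this average converges to $p^*$ for almost every sample path, and that no policy can do better along almost all of its sample paths.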
We present next our main result, showing the SPAR optimality of the EAR
optimal stationary policy obtained in Lemma 3.1 (iv); its proof is presented in
Section 6, after some technical preliminaries given in the next section.
Theorem 4.1: Let Assumptions 2.1 and 3.1 hold. Let $f^*$ and $p^*$ be as in Lemma 3.1. Then:
(i) $f^*$ is SPAR optimal, and $p^*$ is the optimal sample path average reward.
(ii) For all $\gamma \in \Pi_{SR}$, we have that
$$\limsup_{N \to \infty} \frac{1}{N+1} \sum_{t=0}^{N} r(X_t, A_t) \leq p^*, \quad \mathbb{P}_x^\gamma\text{-a.s.}$$
Remark 4.1: According to the above result, regardless of the initial state $x \in S$ and policy $\pi \in \Pi$ being used, with probability 1 the limit inferior of the sample average reward over $N$ periods does not exceed $p^*$, the optimal expected average reward. Moreover:
$$\lim_{N \to \infty} \frac{1}{N+1} \sum_{t=0}^{N} r(X_t, A_t) = p^*, \quad \mathbb{P}_x^{f^*}\text{-a.s.},$$
and the limit inferior and the limit superior of sample path average rewards lead to the same optimal policy and optimal value, when the optimization is restricted to policies in $\Pi_{SR}$.
5 Preliminaries
This section contains some technical results that will be used in the proof of
Theorem 4.1. The next dominated convergence result is similar to that in Proposition 18, p. 232 of Royden (1968), although the latter requires stronger assumptions.
Lemma 5.1: Let $\{v_n\} \subset \mathbb{P}(K)$ be a sequence converging weakly to $v \in \mathbb{P}(K)$. Let $R \in \mathcal{C}(K)$ be a nonnegative function such that:
$$\int_K R\,dv_n \longrightarrow \int_K R\,dv < \infty \quad \text{as } n \to \infty.$$
Then, for each $C \in \mathcal{C}(K)$ satisfying $|C(\cdot)| \leq R(\cdot)$, it follows that:
$$\int_K C\,dv_n \longrightarrow \int_K C\,dv \quad \text{as } n \to \infty.$$
Proof." Let ~ >
;~
Rdv <
exA
0 be fixed, and pick a finite set G c S such that
e ,
(5.1)
where $G^c$ stands for the complement of $G$. Since $\{v_n\}$ converges weakly to $v$, it follows that
$$\int_{G \times A} R\,dv_n \longrightarrow \int_{G \times A} R\,dv \quad \text{as } n \to \infty.$$
Then, we have that
$$\lim_{n\to\infty} \int_{G^c \times A} R\,dv_n = \lim_{n\to\infty}\left[\int_K R\,dv_n - \int_{G \times A} R\,dv_n\right] = \int_K R\,dv - \int_{G \times A} R\,dv = \int_{G^c \times A} R\,dv.$$
Therefore, there exists $M \in \mathbb{N}$ such that for all $n \geq M$,
$$\int_{G^c \times A} R\,dv_n < \varepsilon, \qquad (5.2)$$
and thus, for all $n \geq M$,
$$\left|\int_K C\,dv_n - \int_K C\,dv\right| \leq \left|\int_{G \times A} C\,dv_n - \int_{G \times A} C\,dv\right| + \int_{G^c \times A} |C|\,dv_n + \int_{G^c \times A} |C|\,dv \leq \left|\int_{G \times A} C\,dv_n - \int_{G \times A} C\,dv\right| + 2\varepsilon, \qquad (5.3)$$
where the second inequality follows from (5.1) and (5.2), together with the fact that $|C(\cdot)| \leq R(\cdot)$. On the other hand, since $\{v_n\}$ converges weakly to $v$ and $C(\cdot)$ is continuous, it follows that
$$\int_{G \times A} C\,dv_n \longrightarrow \int_{G \times A} C\,dv \quad \text{as } n \to \infty, \qquad (5.4)$$
since $G$ is a finite set. Taking the limit superior, as $n \to \infty$, in both sides of (5.3) and using (5.4), we obtain that
$$\limsup_{n \to \infty} \left|\int_K C\,dv_n - \int_K C\,dv\right| \leq 2\varepsilon,$$
and the conclusion follows since $\varepsilon > 0$ was arbitrary. □
Definition 5.1: Let $p^*_{|r|}$ be the optimal EAR associated with the reward function $|r|$, and set
$$\delta := \sup_{f \in \Pi_{SD}} \{1 - q_f(z^*)\}, \qquad (5.5)$$
where $q_f(\cdot)$ is as in Lemma 3.2; notice that $0 < \delta < 1$, by Lemma 3.2 (ii). Next, define $L \in \mathbb{R}$ as
$$L := 1 + \frac{p^*_{|r|}}{1 - \delta}, \qquad (5.6)$$
and $R \in \mathcal{C}(K)$ as
$$R(x, a) := \begin{cases} |r(x, a)| + L, & x \neq z^*,\ a \in A; \\ |r(x, a)|, & x = z^*,\ a \in A. \end{cases} \qquad (5.7)$$
The function R defined above will play an important role in the proof of
Theorem 4.1. The following result will also be very useful.
Lemma 5.2: Let $p_R^*$ be the optimal EAR corresponding to the reward function $R$. Then $p_R^* < L$.
Proof: Let $f \in \Pi_{SD}$ be an EAR optimal policy for the reward function $R$. In this case
$$p_R^* = \sum_{x \in S} q_f(x)\,R(x, f(x)) = \sum_{x \in S} q_f(x)\,|r(x, f(x))| + L \sum_{x \neq z^*} q_f(x) \leq p^*_{|r|} + L\big(1 - q_f(z^*)\big),$$
where the second equality follows from (5.7), and the inequality from the fact that $p^*_{|r|}$ is the optimal EAR corresponding to $|r(\cdot, \cdot)|$. Then (5.5) and (5.6) yield that
$$p_R^* \leq p^*_{|r|} + L\delta = L - (1 - \delta) < L,$$
since $\delta < 1$. □
Definition 5.2: The sequence of state/action empirical measures $\{v_n\} \subset \mathbb{P}(K)$ is computed as follows: for each pair of Borel sets $G \in \mathcal{B}(S)$ and $B \in \mathcal{B}(A)$, let
$$v_n(G \times B) := \frac{1}{n+1} \sum_{t=0}^{n} I[X_t \in G, A_t \in B], \quad n \in \mathbb{N}_0.$$
Remark 5.1: Notice that $\{v_n\}$ is a stochastic process, adapted to the filtration $\{\sigma(H_n, A_n)\}$, and that $v_n \in \mathbb{P}(K)$. Also, for each $W: K \to \mathbb{R}$, we have that
$$\frac{1}{n+1} \sum_{t=0}^{n} W(X_t, A_t) = \int_K W\,dv_n. \qquad (5.8)$$
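The empirical measures and identity (5.8) are straightforward to compute along a simulated trajectory; the following sketch (with hypothetical, illustrative data) builds $v_n$ as a matrix of visit frequencies and checks (5.8) for a test function $W$.

```python
# Minimal sketch (illustrative data): state/action empirical measure v_n of
# Definition 5.2, stored as an (n_states x n_actions) matrix of visit frequencies,
# together with a numerical check of identity (5.8).
import numpy as np

n_states, n_actions, n = 50, 3, 100_000
rng = np.random.default_rng(3)
P = rng.random((n_actions, n_states, n_states))
P /= P.sum(axis=2, keepdims=True)
W = rng.normal(size=(n_states, n_actions))       # arbitrary test function W: K -> R
f = rng.integers(n_actions, size=n_states)       # hypothetical stationary policy

counts = np.zeros((n_states, n_actions))
x, running_sum = 0, 0.0
for t in range(n + 1):                           # t = 0, 1, ..., n
    a = f[x]
    counts[x, a] += 1.0
    running_sum += W[x, a]
    x = rng.choice(n_states, p=P[a, x])

v_n = counts / (n + 1)                           # empirical state/action frequencies
lhs = running_sum / (n + 1)                      # (1/(n+1)) * sum_t W(X_t, A_t)
rhs = np.sum(v_n * W)                            # integral of W against v_n
print("identity (5.8) holds numerically:", np.isclose(lhs, rhs))
```

Lemma 5.3 below describes the possible limit points of $\{v_n\}$ when mass may escape to the point at infinity; Theorem 5.1 provides, along almost every trajectory, a subsequence whose limit remains in $\mathbb{P}(K)$.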
Next, let $\bar{S} := S \cup \{\infty\}$ be the one-point compactification of $S$, and observe that $v_n$ can be naturally considered as an element of $\mathbb{P}(\bar{S} \times A)$. Since $\bar{S} \times A$ is compact, $\{v_n\}$ is a tight sequence in $\mathbb{P}(\bar{S} \times A)$. The following result, summarized from Borkar (1991), Chapter 5, describes the asymptotic behavior of the sequence of empirical measures; see also Arapostathis et al. (1993), Section 5.3.
Lemma 5.3: Let $x \in S$ and $\pi \in \Pi$ be arbitrary. Then the following holds for $\mathbb{P}_x^\pi$-almost all sample paths: if $v \in \mathbb{P}(\bar{S} \times A)$ is a limit point of $\{v_n\}$, then $v$ can be written as
$$v = (1 - \alpha)\mu_1 + \alpha\mu_2, \qquad (5.9)$$
where $0 \leq \alpha \leq 1$, and $\mu_1, \mu_2 \in \mathbb{P}(\bar{S} \times A)$ satisfy the following:
(i) $\mu_1(S \times A) = 1 = \mu_2(\{\infty\} \times A)$;
(ii) $\mu_1$ can be decomposed as
$$\mu_1(\{y\} \times B) = \rho(y)\,\gamma(B \mid y), \qquad (5.10)$$
for each $y \in S$ and $B \in \mathcal{B}(A)$, where $\rho \in \mathbb{P}(S)$ and $\gamma \in \mathbb{P}(A \mid S)$;
(iii) if $\rho$ and $\gamma$ are as in (5.10), then $\rho$ is the unique invariant distribution of the Markov chain induced by $\gamma$, when $\gamma$ is viewed as a policy in $\Pi_{SR}$. Thus, we have that $\rho = q_\gamma$, using the notation in Lemma 3.2.
Now, let $R(\cdot, \cdot)$ be the function in Definition 5.1, and recall that $(L + 1)\,\ell(\cdot)$ is a Lyapunov function for $R(\cdot, \cdot)$. Let $p_R^* \in \mathbb{R}$ and $h_R: S \to \mathbb{R}$ be a solution to the EAROE corresponding to the reward function $R(\cdot, \cdot)$, i.e.,
$$p_R^* + h_R(x) = \sup_{a \in A}\left[R(x, a) + \sum_{y \in S} p_{x,y}(a)\,h_R(y)\right], \quad \forall x \in S;$$
recall that such a solution exists, by Lemma 3.1. Furthermore, by Lemma 3.2 we obtain that
$$\frac{1}{n+1}\,\mathbb{E}_x^\pi[h_R(X_n)] \longrightarrow 0 \quad \text{as } n \to \infty, \quad \forall x \in S,\ \pi \in \Pi. \qquad (5.11)$$
Next, define Mandl's discrepancy function $\Phi: K \to [0, \infty)$ by (see Arapostathis et al. (1993), Hernández-Lerma (1989))
$$\Phi(x, a) := p_R^* + h_R(x) - R(x, a) - \sum_{y \in S} p_{x,y}(a)\,h_R(y), \quad \forall (x, a) \in K.$$
Note that $\Phi(x, a) \geq 0$ for all $(x, a) \in K$, and that $\Phi(x, a) = 0$ if and only if the action $a \in A$ attains the maximum in the EAROE. With the above definitions, standard arguments show that for each $n \in \mathbb{N}_0$, $x \in S$, and $\pi \in \Pi$,
$$p_R^* + \frac{h_R(x)}{n+1} = \frac{1}{n+1}\,\mathbb{E}_x^\pi\left[\sum_{t=0}^{n} \big(R(X_t, A_t) + \Phi(X_t, A_t)\big)\right] + \frac{\mathbb{E}_x^\pi[h_R(X_{n+1})]}{n+1} = \mathbb{E}_x^\pi\left[\int_K (R + \Phi)\,dv_n\right] + \frac{\mathbb{E}_x^\pi[h_R(X_{n+1})]}{n+1},$$
where the second equality follows from (5.8). Then, we have that (5.11) yields
$$\lim_{n\to\infty} \mathbb{E}_x^\pi\left[\int_K (R + \Phi)\,dv_n\right] = p_R^*. \qquad (5.12)$$
In particular, for any policy $\gamma \in \Pi_{SR}$,
$$\sum_{y \in S} q_\gamma(y)\big(R^\gamma(y) + \Phi^\gamma(y)\big) = p_R^*, \qquad (5.13)$$
where
$$R^\gamma(y) := \int_A R(y, a)\,\gamma(da \mid y), \quad y \in S,$$
with a similar definition for $\Phi^\gamma$. The following technical result will also be used in the proof of Theorem 4.1; we present its proof in the Appendix.
Theorem 5.1: Let $x \in S$ and $\pi \in \Pi$ be arbitrary. Then for $\mathbb{P}_x^\pi$-almost all sample paths $\{X_t(h_\infty), A_t(h_\infty)\}$, $h_\infty \in \mathbb{H}_\infty$, there exists a sequence $\{n_k\} \subset \mathbb{N}$, with $n_k \to \infty$ as $k \to \infty$, such that the following holds:
(i) $\{v_{n_k}\}$ converges weakly to some $v \in \mathbb{P}(K)$;
(ii)
$$\int_K (R + \Phi)\,dv_{n_k} \longrightarrow \int_K (R + \Phi)\,dv \quad \text{as } k \to \infty. \qquad (5.14)$$
□
Remark 5.2: Note that Theorem 5.1 (i) says that $v \in \mathbb{P}(K) = \mathbb{P}(S \times A)$, a stronger assertion than $v \in \mathbb{P}(\bar{S} \times A)$.
6 Proof of the Main Result
Now we are ready to present the proof of Theorem 4.1, our main result.
Proof of Theorem 4.1:
(i) By Lemma 3.2 (iii), we have that
$$\frac{1}{n+1} \sum_{t=0}^{n} r(X_t, f^*(X_t)) \longrightarrow \sum_{y \in S} q_{f^*}(y)\,r(y, f^*(y)) = p^* \quad \text{as } n \to \infty, \quad \mathbb{P}_x^{f^*}\text{-a.s.},$$
i.e., when the system is driven by policy $f^*$, the average of the sample path rewards converges to the optimal EAR, almost surely.
Now, let $x \in S$ and $\pi \in \Pi$ be arbitrary but fixed, and let $U \subset \mathbb{H}_\infty$ be the set of sample paths along which the conclusions in Theorem 5.1 and Lemma 5.3 are valid; observe that $\mathbb{P}_x^\pi\{U\} = 1$. Select next a sample path $\{X_t(h_\infty), A_t(h_\infty)\}$, $h_\infty \in U$, and pick a sequence $\{n_k\} \subset \mathbb{N}$ as in Theorem 5.1; note that $\{n_k\}$ depends on the sample path selected. Recall that $\Phi(x, a) \geq 0$ for any $(x, a) \in K$, and that $|r| \leq R + \Phi$, by (5.7). Then by (5.14) and Lemma 5.1, we get that
$$\int_K r\,dv_{n_k} \longrightarrow \int_K r\,dv \quad \text{as } k \to \infty. \qquad (6.1)$$
On the other hand, by Lemma 5.3, $v$ can be decomposed in such a way that for all $y \in S$ and $B \in \mathcal{B}(A)$,
$$v(\{y\} \times B) = q_\gamma(y)\,\gamma(B \mid y); \quad \gamma \in \mathbb{P}(A \mid S).$$
Therefore, by Lemma 3.2 (iv),
$$\int_K r\,dv = \sum_{y \in S} q_\gamma(y) \int_A r(y, a)\,\gamma(da \mid y) = \sum_{y \in S} q_\gamma(y)\,r^\gamma(y) = \lim_{n\to\infty} \frac{1}{n+1}\,\mathbb{E}_x^\gamma\left[\sum_{t=0}^{n} r(X_t, A_t)\right],$$
and thus
$$\int_K r\,dv \leq p^*, \qquad (6.2)$$
(6.2)
since p* is the optimal expected average reward associated with the reward
function r(', .). Since
lim inf ~
n~oo
~. r(X t, At) = lim inf
/1 "l" 1 t = O
rdvn
n~oo
< lim~..~inffx rdvn~ '
Denumerable Controlled Markov Chains with AverageReward Criterion
105
then using (6.1) and (6.2) it follows that
lim inf - ,~
n + 1 t=o
r(Xt, At) <_p* .
Therefore, the conlusion follows since the sample path {Xt(ho~), At(h~)},
h~o E U, was arbitrary, and ~ { U } = 1.
(ii) This follows inmediatly from (i) and Lemma 3.2 (iii)-(iv).
[]
Appendix: Proof of Theorem 5.1
Let $x \in S$ and $\pi \in \Pi$ be arbitrary but fixed. Notice that (5.12) and Fatou's Lemma (see Royden (1968), p. 231) imply that
$$p_R^* \geq \mathbb{E}_x^\pi\left[\liminf_{n\to\infty} \int_K (R + \Phi)\,dv_n\right]. \qquad (A.1)$$
Next, let $V \subset \mathbb{H}_\infty$ be the set of sample paths along which the conclusions in Lemma 5.3 are valid, and observe that $\mathbb{P}_x^\pi\{V\} = 1$. Now select and fix arbitrarily a sample path $\{X_t(h_\infty), A_t(h_\infty)\}$, $h_\infty \in V$, and pick a sequence $\{n_k\} \subset \mathbb{N}$ such that (e.g., see Royden (1968), p. 36)
$$\lim_{k\to\infty} \int_K (R + \Phi)\,dv_{n_k} = \liminf_{n\to\infty} \int_K (R + \Phi)\,dv_n. \qquad (A.2)$$
Taking a subsequence if necessary, we can assume that $\{v_{n_k}\}$ converges weakly to $v = (1 - \alpha)\mu_1 + \alpha\mu_2 \in \mathbb{P}(\bar{S} \times A)$, where $0 \leq \alpha \leq 1$ and $\mu_1, \mu_2$ are as in Lemma 5.3. Now let $G \subset S$ be a finite set, chosen arbitrarily except that $z^* \in G$. Define
$R_G$ and $\Phi_G$ in $\mathcal{C}(\bar{S} \times A)$ as follows:
$$R_G(x, a) := \begin{cases} R(x, a), & (x, a) \in G \times A; \\ L, & (x, a) \in (\bar{S} \setminus G) \times A, \end{cases} \qquad (A.3)$$
and
$$\Phi_G(x, a) := \begin{cases} \Phi(x, a), & (x, a) \in G \times A; \\ 0, & (x, a) \in (\bar{S} \setminus G) \times A. \end{cases} \qquad (A.4)$$
Therefore, we have that
$$0 \leq R_G(x, a) + \Phi_G(x, a) \uparrow R(x, a) + \Phi(x, a),$$
as $G$ increases to $S$. Thus, by the weak convergence of $\{v_{n_k}\}$ to $v$, (A.3)-(A.4), and since $R_G + \Phi_G \in \mathcal{C}(\bar{S} \times A)$ is a bounded function, we obtain that
$$\liminf_{n\to\infty} \int_K (R + \Phi)\,dv_n = \lim_{k\to\infty} \int_K (R + \Phi)\,dv_{n_k} \geq \lim_{k\to\infty} \int_{\bar{S} \times A} (R_G + \Phi_G)\,dv_{n_k} = \int_{\bar{S} \times A} (R_G + \Phi_G)\,dv$$
$$= (1 - \alpha) \int_K (R_G + \Phi_G)\,d\mu_1 + \alpha \int_{\{\infty\} \times A} (R_G + \Phi_G)\,d\mu_2 = (1 - \alpha) \int_K (R_G + \Phi_G)\,d\mu_1 + \alpha L. \qquad (A.5)$$
Now, let $G$ increase to $S$. In this case the Monotone Convergence Theorem and (A.5) yield that
$$\lim_{k\to\infty} \int_K (R + \Phi)\,dv_{n_k} \geq (1 - \alpha) \int_K (R + \Phi)\,d\mu_1 + \alpha L. \qquad (A.6)$$
To continue, as in Lemma 5.3 (ii), let $\mu_1$ be decomposed as
$$\mu_1(\{y\} \times B) = q_\gamma(y)\,\gamma(B \mid y); \quad \gamma \in \mathbb{P}(A \mid S).$$
Then, straightforward calculations yield that
$$\int_K (R + \Phi)\,d\mu_1 = \sum_{y \in S} q_\gamma(y)\big(R^\gamma(y) + \Phi^\gamma(y)\big) = p_R^*, \qquad (A.7)$$
where the second equality follows from (5.13). Hence, combining (A.6) and (A.7), it follows that
$$\lim_{k\to\infty} \int_K (R + \Phi)\,dv_{n_k} \geq (1 - \alpha)\,p_R^* + \alpha L. \qquad (A.8)$$
Since by Lemma 5.2 $p_R^* < L$, (A.8) and (A.2) yield that
$$\liminf_{n\to\infty} \int_K (R + \Phi)\,dv_n \geq p_R^*. \qquad (A.9)$$
Note that (A.9) has been established for an arbitrary sample path $\{X_t(h_\infty), A_t(h_\infty)\}$, $h_\infty \in V$. Since $\mathbb{P}_x^\pi\{V\} = 1$, we have that (A.9) holds $\mathbb{P}_x^\pi$-almost surely; this together with (A.1) yields that
$$\liminf_{n\to\infty} \int_K (R + \Phi)\,dv_n = p_R^*, \quad \mathbb{P}_x^\pi\text{-a.s.} \qquad (A.10)$$
Hence, by (A.8) and since $p_R^* < L$, $\alpha$ above must be zero, i.e., $\{v_{n_k}\}$ converges weakly to $\mu_1 \in \mathbb{P}(S \times A)$.
Let $W$ be the set of all sample paths for which (A.10) holds; notice that $\mathbb{P}_x^\pi[W \cap V] = 1$. Then, for each sample path $\{X_t(h_\infty), A_t(h_\infty)\}$, $h_\infty \in W \cap V$, equality is attained in (A.8). Thus, choosing a sequence $\{v_{n_k}\}$ which converges weakly to $v \in \mathbb{P}(\bar{S} \times A)$ and satisfies (A.2), the following holds:
(i) $\{v_{n_k}\}$ converges weakly to $v = \mu_1 \in \mathbb{P}(S \times A)$; and
(ii)
$$\lim_{k\to\infty} \int_K (R + \Phi)\,dv_{n_k} = \liminf_{n\to\infty} \int_K (R + \Phi)\,dv_n = p_R^* = \int_K (R + \Phi)\,d\mu_1,$$
where the last equality follows from (A.7).
In conclusion, it has been shown that for any sample path $\{X_t(h_\infty), A_t(h_\infty)\}$, $h_\infty \in W \cap V$, there exists a subsequence $\{v_{n_k}\}$ satisfying the conclusions in Theorem 5.1. Since $\mathbb{P}_x^\pi\{W \cap V\} = 1$, the proof is complete. □
Acknowledgements: We want to thank the referee for several comments and corrections that helped us improve the paper. We also want to thank Professor A. A. Yushkevich for some elucidating comments in relation to Remark 3.2.
References
Arapostathis A, Borkar VS, Fernández-Gaucherand E, Ghosh MK, Marcus SI (1993) Discrete-time controlled Markov processes with an average cost criterion: A survey. SIAM J Control Optim 31:282-344
Bertsekas DP (1987) Dynamic programming: Deterministic and stochastic models. Prentice-Hall
Englewood Cliffs
Borkar VS (1991) Topics in controlled Markov chains. Pitman Research Notes in Mathematics
Series 240, Longman Scientific & Technical UK
Cavazos-Cadena R (1991) Recent results on conditions for the existence of average optimal stationary policies. Annals Operat Res 28: 3-28
Cavazos-Cadena R (1992) Existence of optimal stationary policies in average reward Markov
decision processes with a recurrent state. Appl Math Optim 26:171-194
Cavazos-Cadena R, Hernández-Lerma O (1992) Equivalence of Lyapunov stability criteria in a class
of Markov decision processes. Appl Math Optim 26:113-137
Cavazos-Cadena R, Fernández-Gaucherand E (1993) Denumerable controlled Markov chains with
strong average optimality criterion: Bounded & unbounded costs. SIE Working paper 93-15, SIE
Department The University of Arizona
Dynkin EB, Yushkevich AA (1979) Controlled Markov processes. Springer-Verlag New York
Ephremides A, Verdú S (1989) Control and optimization methods in communication networks.
IEEE Trans Automat Control 34: 930-942
Fernández-Gaucherand E, Arapostathis A, Marcus SI (1990) Remarks on the existence of solutions to the average cost optimality equation in Markov decision processes. Syst Control Lett 15:425-432
Fernández-Gaucherand E, Ghosh MK, Marcus SI (1994) Controlled Markov processes on the
infinite planning horizon: Weighted and overtaking cost criteria. ZOR: Methods and Models of
Operations Research 39: 131-155
Flynn J (1980) On optimality criteria for dynamic programs with long finite horizon. J Math Anal
Appl 76: 202-208
Foster FG (1953) On the stochastic processes associated with certain queueing processes. Ann Math
Stat 24:355-360
Georgin J-P (1978) Contrôle des chaînes de Markov sur des espaces arbitraires. Ann Inst H Poincaré, Sect B, 14:255-277
Ghosh MK, Marcus SI (1992) On strong average optimality of Markov decision processes with
unbounded costs. Operat Res Lett 11:99-104
Hernández-Lerma O (1989) Adaptive Markov control processes. Springer-Verlag New York
Hinderer K (1970) Foundations of non-stationary dynamic programming with discrete time parameters. Lect Notes Operat Res Math Syst 33, Springer-Verlag Berlin
Hordijk A (1974) Dynamic programming and Markov potential theory. Math Centre Tracts 51,
Mathematisch Centrum Amsterdam
Ross SM (1970) Applied probability models with optimization applications. Holden-Day San
Francisco
Ross SM (1983) Introduction to stochastic dynamic programming. Academic Press New York
Royden HL (1968) Real analysis, 2nd ed. Macmillan New York
Stidham S, Weber R (1993) A survey of Markov decision models for control of networks of queues.
Queueing Syst 13: 291-314
Tijms HC (1986) Stochastic modelling and analysis: A computational approach. John Wiley Chichester
Yushkevich AA (1973) On a class of strategies in general Markov decision models. Theory Prob
Applications 18: 777-779
Received: November 1993
Revised version received: July 1994