Section Midterm Practice Problems
EE278: Introduction to Statistical Signal Processing (Fall 2014)
1. Mean-square Inequality. Let X and Y be rvs with finite mean and variance. Show that for all ε > 0,

    Pr{|X − Y| > ε} ≤ E[(X − Y)²] / ε².
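This bound lends itself to a quick Monte Carlo sanity check; the sketch below uses an arbitrary pair of Gaussian rvs (an illustrative choice, not part of the problem) and compares the empirical exceedance probability against the bound.

```python
# Monte Carlo sanity check of  Pr{|X - Y| > eps} <= E[(X - Y)^2] / eps^2.
# The distributions of X and Y below are arbitrary illustrative choices.
import random

random.seed(0)
N = 100_000
eps = 1.0
exceed = 0
sq_sum = 0.0
for _ in range(N):
    x = random.gauss(0.0, 1.0)
    y = random.gauss(0.5, 2.0)
    d = x - y
    sq_sum += d * d
    exceed += abs(d) > eps

emp_prob = exceed / N              # empirical Pr{|X - Y| > eps}
bound = (sq_sum / N) / eps ** 2    # empirical E[(X - Y)^2] / eps^2
print(emp_prob, bound)
```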
2. Non-identical Law of Large Numbers. Let {Xn ; n ≥ 1} be a sequence of independent but not identically distributed rvs. Denote Sn = ∑_{i=1}^n Xi . We say that the WLLN holds for this sequence if for all ε > 0,

    lim_{n→∞} Pr{ | Sn/n − E[Sn]/n | ≥ ε } = 0.

Show that the WLLN holds if there is some constant A such that σ²_{Xn} ≤ A for all n.
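The statement can be illustrated numerically; the sketch below uses arbitrary non-identical Gaussians with uniformly bounded variances and checks that Sn/n concentrates around E[Sn]/n.

```python
# Empirical illustration of the WLLN for independent, non-identical rvs with
# uniformly bounded variances (here sigma_i^2 <= A = 4; all choices arbitrary).
import random

random.seed(1)
n = 50_000
s = 0.0
mean_sum = 0.0
for i in range(1, n + 1):
    mu_i = (-1) ** i             # non-identical means
    sigma_i = 1.0 + 1.0 / i      # std devs, so variances are at most 4
    s += random.gauss(mu_i, sigma_i)
    mean_sum += mu_i
deviation = abs(s / n - mean_sum / n)
print(deviation)  # should be small for large n
```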
3. Function of a CDF. Suppose that the rv X is continuous and has a strictly increasing CDF FX (x).
Consider another rv Y = f (X) where the function f is defined as f (x) = FX (x). Show that Y is uniformly
distributed in the interval 0 to 1.
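A numerical illustration of this probability integral transform, using X ∼ Exp(1) as an arbitrary example so that FX(x) = 1 − e^{−x}:

```python
# For continuous X with strictly increasing CDF F_X, Y = F_X(X) should look
# uniform on (0, 1).  Exp(1) is an arbitrary illustrative choice.
import math
import random

random.seed(2)
ys = [1.0 - math.exp(-random.expovariate(1.0)) for _ in range(100_000)]
mean_y = sum(ys) / len(ys)                      # uniform mean is 0.5
frac_below = sum(y < 0.25 for y in ys) / len(ys)  # uniform mass below 0.25 is 0.25
print(mean_y, frac_below)
```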
4. Gaussian Random Vectors. Let X ∼ N (0, KX ) with

         [ a   b1  c  ]
    KX = [ b1  a   b2 ] ,
         [ c   b2  a  ]

where a, b1 , b2 , c ≠ 0, b1² < a², and b2² < a².
(a) Specify the joint pdf of X1 + X2 and X1 − X2 . Are they statistically independent?
(b) Find a linear transformation A such that

                  [ 4  2 ]
    Y = AX ∼ N(0, [ 2  2 ]).

Your answer should be in terms only of the entries of KX . Hint: Note that

    [ 4  2 ]   [ 2  0 ] [ 2  0 ]T
    [ 2  2 ] = [ 1  1 ] [ 1  1 ]  .
(c) Specify the conditional pdf of [X1 X3 ]T given X2 = x as a function of x and the parameters of KX .
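The matrix factorization in the hint to part (b) is quick to verify numerically:

```python
# Verify the hint: L times its transpose equals [[4, 2], [2, 2]] for L = [[2, 0], [1, 1]].
L = [[2, 0], [1, 1]]
# prod[i][j] = sum_k L[i][k] * (L^T)[k][j] = sum_k L[i][k] * L[j][k]
prod = [[sum(L[i][k] * L[j][k] for k in range(2)) for j in range(2)] for i in range(2)]
print(prod)  # [[4, 2], [2, 2]]
```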
5. Vector Central Limit Theorem.
(a) Let X1 , X2 , . . . be a sequence of i.i.d. random variables each having a density fX . State the Central
Limit Theorem for the sequence.
(b) Let X1 , X2 , . . . , be a sequence of i.i.d. random vectors each having a joint density fX . Postulate a
natural generalization of the Central Limit Theorem from random variables to random vectors. (No
need to prove it.)
(c) The signal received over a wireless communication channel can be represented by two sums

    X1n = (1/√n) ∑_{j=1}^n Zj cos Θj    and    X2n = (1/√n) ∑_{j=1}^n Zj sin Θj ,

where Z1 , Z2 , Z3 , . . . are i.i.d. with mean µ and variance σ², and Θ1 , Θ2 , Θ3 , . . . are i.i.d. U[0, 2π] and independent of the Zj ’s. Find the distribution of [ X1n X2n ]T as n → ∞.
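A simulation can be used to check an answer to part (c): pick illustrative values (µ = σ = 1 below, an arbitrary choice), generate many realizations of X1n for moderate n, and compare its empirical mean and variance against whatever limit you derive.

```python
# Empirical mean and variance of X1n for n = 100 over M = 5000 trials,
# with Z_j ~ N(1, 1) (so mu = sigma = 1) and Theta_j ~ U[0, 2*pi].
import math
import random

random.seed(3)
M, n = 5_000, 100
x1 = []
for _ in range(M):
    s = 0.0
    for _ in range(n):
        z = random.gauss(1.0, 1.0)
        th = random.uniform(0.0, 2.0 * math.pi)
        s += z * math.cos(th)
    x1.append(s / math.sqrt(n))
mean1 = sum(x1) / M
var1 = sum(v * v for v in x1) / M - mean1 ** 2
print(mean1, var1)
```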
6. Two Coins. You are given two coins: coin 1 has bias 1/2 and coin 2 has a randomly selected bias
P ∼ U[0, 1]. You pick one of them at random and flip it twice. Let X = 1 if coin 1 is selected and X = 2 if
coin 2 is selected with pX (1) = pX (2) = 1/2. Let Yi = 1 if the outcome of flip i is heads and Yi = 0 if the
outcome is tails for i = 1, 2. Observing the outcomes of these two coin flips y1 and y2 , you wish to decide
which coin was selected. Assume that Y1 and Y2 are conditionally independent given the value of the bias
of the selected coin.
(a) Find the estimate x̂(y1 , y2 ) ∈ {1, 2} that minimizes the probability of error Pr{X ≠ x̂}. Your answer should be explicit in terms only of y1 and y2 .
(b) Find the minimum probability of error.
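If you want to verify your answers, exact enumeration over the four flip outcomes works. The sketch below uses the fact that, for coin 2, the marginal probability of a sequence with h heads is the Beta integral ∫₀¹ p^h (1 − p)^{2−h} dp = h!(2 − h)!/3!, obtained by integrating over the uniform bias.

```python
# Exact Bayes computation for the two-coin problem using rational arithmetic.
from fractions import Fraction
from math import factorial

def lik(x, y1, y2):
    """Probability of the flip sequence (y1, y2) given the selected coin x."""
    if x == 1:                  # fair coin: every 2-flip sequence has prob 1/4
        return Fraction(1, 4)
    h = y1 + y2                 # coin 2: Beta integral over the uniform bias
    return Fraction(factorial(h) * factorial(2 - h), factorial(3))

prior = Fraction(1, 2)
p_err = Fraction(0)
rule = {}
for y1 in (0, 1):
    for y2 in (0, 1):
        joint = {x: prior * lik(x, y1, y2) for x in (1, 2)}
        rule[(y1, y2)] = max(joint, key=joint.get)  # MAP decision
        p_err += min(joint.values())                # contribution to Pr{error}
print(rule)
print(p_err)
```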
7. Big Game. Stanford hires you to estimate whether it will rain during the Big Game. Checking past
data you determine that the chance of rain is 20%. You model this with the random variable R with pmf
pR (1) = 0.2,
pR (0) = 0.8,
where R = 1 means that it rains and R = 0 that it doesn’t. Your first idea is to be lazy and just use the
forecast of a certain website. Analyzing data from previous forecasts, you determine that this website is
right 70% of the time. You model this with a random variable W that satisfies
Pr {W = 1|R = 1} = 0.7,
Pr {W = 0|R = 0} = 0.7.
(a) What is your prediction given the forecast of the website (use the MAP estimate of R given W )? What
is the probability of error under your model?
Unsatisfied with the accuracy of the website, you look at the data used for the forecast (they are available
online). Surprisingly, the relative humidity of the air is not used, so you decide to incorporate it in your
prediction in the form of a random variable H.
(b) Is it more reasonable to assume that H and W are independent, or that they are conditionally independent given R? Explain why.
You assume that H and W are conditionally independent given R. More research establishes that conditioned
on R = 1, H is uniformly distributed between 0.5 and 0.7, whereas conditioned on R = 0, H is uniformly
distributed between 0.1 and 0.6. Use the MAP estimate of R given W and H as your forecast. Recall that
since R and W are discrete but H is continuous, the following equality holds:
pR|HW (r|h, w)fH|W (h|w)pW (w) = fH|RW (h|r, w)pW |R (w|r)pR (r).
(c) What is your forecast if H = 0.65 and the website predicts no rain?
(d) What is your forecast if H = 0.55 and the website predicts rain?
(e) What is the probability of error under this new model?
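A sketch for checking parts (c) and (d) numerically, directly implementing the MAP rule under the stated model (W and H conditionally independent given R, with the uniform humidity densities above). Any tie, which does not occur at these values, would go to R = 0 here.

```python
# MAP estimate of R given (W, H) for the Big Game model.
def f_h(h, r):
    """Conditional density of H given R = r."""
    if r == 1:
        return 5.0 if 0.5 <= h <= 0.7 else 0.0   # U[0.5, 0.7] when it rains
    return 2.0 if 0.1 <= h <= 0.6 else 0.0       # U[0.1, 0.6] when it doesn't

def p_w(w, r):
    """Conditional pmf of W given R = r: the website is right 70% of the time."""
    return 0.7 if w == r else 0.3

p_r = {1: 0.2, 0: 0.8}

def map_forecast(h, w):
    # Maximize p_R(r) * p_{W|R}(w|r) * f_{H|R}(h|r) over r.
    scores = {r: p_r[r] * p_w(w, r) * f_h(h, r) for r in (0, 1)}
    return max(scores, key=scores.get)

print(map_forecast(0.65, 0))  # part (c): H = 0.65, website predicts no rain
print(map_forecast(0.55, 1))  # part (d): H = 0.55, website predicts rain
```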