Review of Probability Theory (Examples):
Example 1:
Suppose we are interested in an experiment where we conduct Bernoulli trials until the occurrence of m
1's. When the m-th '1' occurs, the experiment ends. Let us define a random variable X for the number of
trials until the occurrence of m 1's. What is the distribution of the random variable X?
Solution:
We would like to evaluate P[X = k] for k = m, m+1, m+2, ...
By definition of the random variable, the m-th '1' occurs at the k-th trial. This means that in the (k-1)
trials before it, exactly (m-1) 1's have occurred. The probability of having (m-1) 1's in (k-1) trials
follows a binomial distribution:

P[(m-1) 1's in (k-1) trials] = (k-1 choose m-1) p^(m-1) (1-p)^(k-m)
Now to calculate P[X = k]: this event means that (m-1) 1's occurred in the first (k-1) trials AND the
k-th trial is a '1', and these two events are independent, so

P[X = k] = (k-1 choose m-1) p^(m-1) (1-p)^(k-m) · p

⇒ P[X = k] = (k-1 choose m-1) p^m (1-p)^(k-m),  k = m, m+1, m+2, ...
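This is the negative binomial (Pascal) distribution. As a quick numerical sanity check (a sketch, not part of the original notes; the function name is mine), the pmf above can be evaluated with Python's `math.comb` and verified to sum to 1 over k:

```python
from math import comb

def pmf_trials_until_m_ones(k, m, p):
    """P[X = k]: probability the m-th '1' occurs exactly on trial k.

    (k-1 choose m-1) * p**m * (1-p)**(k-m), valid for k = m, m+1, ...
    """
    if k < m:
        return 0.0
    return comb(k - 1, m - 1) * p**m * (1 - p) ** (k - m)

# Sanity check: probabilities over k = m, m+1, ... should sum to 1.
m, p = 3, 0.4
total = sum(pmf_trials_until_m_ones(k, m, p) for k in range(m, 500))
print(round(total, 6))  # ≈ 1.0
```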
Example 2:
Suppose two events A and B can occur and P[A], P[B] are not zero. Which combinations of Independent (I),
Not Independent (NI), Mutually Exclusive (M), and Not Mutually Exclusive (NM) are permissible? In other
words, which of the four combinations (I,M), (NI,M), (I,NM), and (NI,NM) are permissible? Construct an
example for each combination that is permissible.
Solution:
Notes:
What is mutually exclusive? Events A, B in a given experiment are mutually exclusive if there is no
intersection between them. In other words, if event A is true then event B is definitely not true.
What is independent? Events A, B in a given experiment are independent if knowing that event A has
occurred does not change the probability that event B has occurred.
i) (I,M)
This combination is NOT POSSIBLE because, by definition, mutually exclusive means that with the
occurrence of event A, the probability of occurrence of event B becomes zero (i.e., P[B|A] = 0). This
means that even though P[B] ≠ 0, knowledge of A changes this probability to zero.
Ex: Throwing a die once. Let A = {1,2}. Let B = {3,4}.
Now if event A has occurred, then for sure the outcome of the experiment does not belong to B
⇒ P[B|A] = 0 (even though we know that P[B] = 2/6 = 1/3) ⇒ independence is not possible.
ii) (NI,M)
This combination is POSSIBLE: as shown in part (i), if two events are mutually exclusive, then they have
to be dependent.
Ex: Throwing a die once. Let A={1,2}. Let B={3,4}
iii) (I,NM)
This combination is POSSIBLE. To construct an example, we need to define two events A, B such that:
P[B|A] = P[B] and P[A|B] = P[A], P[A,B] ≠ 0
Ex: Throwing a die once. Let A be the event of the outcome being smaller than or equal to 2. Let B be the
event of the outcome being even.
⇒ A = {1,2}, B = {2,4,6}
P[A] = 2/6 = 1/3
P[B] = 3/6 = 1/2
P[A,B] = 1/6
P[B|A] means we calculate the probability that event B has occurred given that event A has occurred; with
the occurrence of A, the sample space has become {1, 2}. From this set, one of the two elements
corresponds to the occurrence of B.
P[B|A] = 1/2 = P[B]
P[A|B] means we calculate the probability that event A has occurred given that event B has occurred; with
the occurrence of B, the sample space has become {2, 4, 6}. From this set, one of the three elements
corresponds to the occurrence of A.
P[A|B] = 1/3 = P[A]
Note also that P[A,B]=1/6=P[A]P[B]=(1/3)*(1/2) = 1/6 (which is also a condition for independence)
iv) (NI,NM)
This combination is POSSIBLE.
Ex: Throwing a die once. Let A={1,2}. Let B={2,3}
P[A] = 2/6 = 1/3
P[B] = 2/6 = 1/3
P[A,B] = 1/6 ≠ 0 ⇒ Not mutually exclusive
P[A|B] = 1/2 ≠ P[A]
P[B|A] = 1/2 ≠ P[B]
⇒ Not independent
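The checks in parts (iii) and (iv) can be reproduced by direct enumeration of the die's sample space. A minimal Python sketch (helper name is mine) using exact fractions:

```python
from fractions import Fraction

def probs(A, B, sample_space):
    """Exact P[A], P[B], P[A,B] for equally likely outcomes, by counting."""
    n = len(sample_space)
    pA = Fraction(len(A & sample_space), n)
    pB = Fraction(len(B & sample_space), n)
    pAB = Fraction(len(A & B & sample_space), n)
    return pA, pB, pAB

die = {1, 2, 3, 4, 5, 6}

# (I,NM): A = {1,2}, B = {2,4,6} -> independent, not mutually exclusive
pA, pB, pAB = probs({1, 2}, {2, 4, 6}, die)
print(pAB == pA * pB, pAB != 0)  # True True

# (NI,NM): A = {1,2}, B = {2,3} -> dependent, not mutually exclusive
pA, pB, pAB = probs({1, 2}, {2, 3}, die)
print(pAB == pA * pB, pAB != 0)  # False True
```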
Example 3:
A digital communication system sends two messages M=0 or M=1, with equal probabilities. A receiver
observes a voltage which can be modeled as a Gaussian random variable, X, whose PDFs conditioned on
the transmitted message are given by
f_X|M=0(x) = (1/√(2πσ²)) e^(-x²/(2σ²)),   f_X|M=1(x) = (1/√(2πσ²)) e^(-(x-1)²/(2σ²))
i) Find P[M=0|X=x] for σ² = 1
Solution:
P M  0 X  x  
P M  0 
fX M 0  x  
fX M 0  x  P M  0
fX  x 
1
2
1
2πσ2
e

x2
2σ2
fX  x   PM  0 fX M 0  x   PM  1 fX M 1  x 
1
 P M  0 X  x  
2πσ
x2
e
x2
2σ2
1
 
2
 2

1
1
1
1
e 2σ    
e
 2 
  2πσ 2
 2  2πσ 2
e
 P M  0 X  x  
e


x2
2σ2
2
x
2σ2
e

 x 12
2σ2
1
 P M  0 X  x  
1e
 P M  0 X  x  
2


 x 12
2σ2
x2
2
2
e σ
1
12
2 x 1
2
2
1e σ
 x 12
2σ2
1
For σ2  1  P M  0 X  x  
 1
 x 
2
1  e
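The closed form for σ² = 1 can be checked numerically against a direct application of Bayes' rule (a sketch, not part of the original notes; function names are mine):

```python
import math

def gaussian_pdf(x, mean, var):
    """Gaussian density with the given mean and variance."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def posterior_m0(x, p0=0.5, var=1.0):
    """P[M=0 | X=x] by direct application of Bayes' rule."""
    num = p0 * gaussian_pdf(x, 0.0, var)
    den = num + (1 - p0) * gaussian_pdf(x, 1.0, var)
    return num / den

# Compare against the closed form 1 / (1 + e^(x - 1/2)) for sigma^2 = 1.
for x in (-2.0, 0.0, 0.5, 3.0):
    closed = 1.0 / (1.0 + math.exp(x - 0.5))
    print(abs(posterior_m0(x) - closed) < 1e-12)  # True for each x
```

Note that at x = 0.5, exactly halfway between the two means, the posterior is 1/2, as symmetry suggests.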
ii) Repeat part (i) assuming P[M=0] = 1/4, P[M=1] = 3/4
Solution:
P M  0 X  x  
fX M 0  x  P M  0
fX  x 
1
 P M  0 X  x  
2πσ
x2
e
x2
2σ2
1
 
4
 2

1
1
1
3
e 2σ    
e
 4 
2
2
  2πσ
 4  2πσ
e
 P M  0 X  x  
e
 P M  0 X  x  
2



x2
2σ2
2
x
2 σ2
 x 12
 3e

 x 12
2 σ2
1
 1
 x 
 2
1  3e
Note: In general, for P[M=0] = p, P[M=1] = 1 - p, and σ² = 1:

⇒ P[M=0|X=x] = p / ( p + (1 - p) e^(x - 1/2) )
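The general formula can be verified against Bayes' rule for several priors at once, including p = 1/2 (part i) and p = 1/4 (part ii). A minimal sketch (function names are mine):

```python
import math

def gaussian_pdf(x, mean, var=1.0):
    """Gaussian density with the given mean and variance."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def posterior_m0(x, p):
    """P[M=0 | X=x] via Bayes' rule for prior P[M=0] = p, sigma^2 = 1."""
    num = p * gaussian_pdf(x, 0.0)
    den = num + (1 - p) * gaussian_pdf(x, 1.0)
    return num / den

def posterior_closed_form(x, p):
    """The general formula from the note: p / (p + (1-p) e^(x - 1/2))."""
    return p / (p + (1 - p) * math.exp(x - 0.5))

# Check agreement for several priors and observations.
for p in (0.25, 0.5, 0.9):
    for x in (-1.0, 0.0, 2.0):
        assert abs(posterior_m0(x, p) - posterior_closed_form(x, p)) < 1e-12
print("general formula verified")
```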