
CHAPTER 1
PROBABILITY
Probability:
The term probability refers to the chance of an event happening or not happening. Whenever
we use the word chance in a statement, there is an element of uncertainty in that statement. A
numerical measure of this uncertainty is provided by the theory of probability.
In the words of Morris Hamburg, “probability measures provide the decision maker in
the business and in government with the means for quantifying the uncertainties which affect
his choice of appropriate action.”
Origin:
The theory of probability has its origin in games of chance related to gambling, such as
drawing cards from a pack or throwing dice. Jerome Cardan (1501 – 1576), an Italian
mathematician, was the first man to write a book on the subject, entitled "Book on Games of
Chance", which was published after his death in 1663. The foundation of the theory of
probability was laid by the French mathematicians Blaise Pascal (1623 – 1662) and Pierre de
Fermat.
Terminology:
In order to understand the meaning and concept of probability, we must know various
terms in this context.
Random Experiment:
An experiment is called a random experiment if, when conducted repeatedly under
essentially homogeneous conditions, the result is not unique, i.e. it does not give the same result
each time. The result may be any one of the various possible outcomes.
Sample Space:
The set of all possible outcomes of an experiment is called the sample space of that
experiment and is usually denoted by S. Every outcome (element) of the sample space is called
a sample point.
Some random experiments and their sample spaces:
(i) If an unbiased coin is tossed randomly, then there are two possible outcomes for this
experiment, viz., head (H) or tail (T) up. The sample space is
S = {H, T}
(ii) When two coins are tossed simultaneously, the sample space is
S = {(H, H), (H, T), (T, H), (T, T)}
(iii) When three coins are tossed simultaneously, the sample space consists of 2 × 2 × 2,
i.e., 8 sample points as shown below:
S = {(H,H,H), (H,H,T), (H,T,H), (T,H,H), (H,T,T), (T,H,T), (T,T,H), (T,T,T)}
(iv) When a die is thrown randomly, then the sample space is
S = {1, 2, 3, 4, 5, 6}
(v) When two dice are thrown simultaneously and the sum of the points is noted, then
the sample space is
S = {2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12}
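The sample spaces above can also be enumerated programmatically. A minimal sketch using Python's standard library (the function names are ours, chosen for illustration):

```python
from itertools import product

# (ii)/(iii) Tossing n coins: the sample space is the n-fold Cartesian
# product of {H, T} with itself, giving 2^n sample points.
def coin_sample_space(n):
    return list(product("HT", repeat=n))

# (v) Two dice, noting the sum of the points: collect the distinct sums.
def dice_sum_sample_space():
    return sorted({d1 + d2 for d1, d2 in product(range(1, 7), repeat=2)})

print(len(coin_sample_space(3)))   # 8 sample points, as stated above
print(dice_sum_sample_space())     # the sums 2 through 12
```

Enumerating the sample space this way makes the "exhaustive cases" of an experiment explicit, which is useful when counting favourable outcomes later.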
Trial and Event:
Performing a random experiment is called a 'trial' and the outcome or outcomes are
termed 'events'. For instance, tossing a coin would be called a trial and the result (falling
head or tail upward) an event.
Types of Events:
Exhaustive Cases:
The total number of possible outcomes of a random experiment is called the exhaustive
cases for the experiment. For instance, in a toss of a single coin, we can get head or tail. Hence
the exhaustive number of cases is 2, because between themselves they exhaust all possible
outcomes of the random experiment.
Favourable Cases or Events:
The number of outcomes which result in the happening of a desired event is called the
favourable cases. For instance, in drawing a card from a pack of cards, the cases favourable to
getting a club are 13 and to getting an ace are 4.
Mutually Exclusive Events:
Two or more events are said to be mutually exclusive if the happening of any one of them
excludes the happening of all the others in the same experiment. Thus mutually exclusive events
are those events, the occurrence of one of which prevents the possibility of the others occurring.
Symbolically, a set of events E1, E2, …..En is mutually exclusive if Ei ∩ Ej = ∅ (i ≠ j). This means the
intersection of two events is a null set (∅).
For example, let a die be thrown once. The event E1 of getting an even number is
E1 = {2, 4, 6}
The event E2 of getting an odd number is
E2 = {1, 3, 5}
Since E1 ∩ E2 = ∅, the two events are mutually exclusive.
Equally likely events:
The outcomes are said to be equally likely if none of them is expected to occur in
preference to the others. For instance, head and tail are equally likely events in tossing an
unbiased coin.
When an unbiased die is thrown once, we may get 1 or 2 or 3 or 4 or 5 or 6. These six
events are equally likely.
Independent Events:
Events are said to be independent if occurrence of one does not affect the outcome of
any of the others. For instance, the result of the first toss of a coin does not affect the result of
successive tosses at all.
Dependent Events:
If the occurrence of one event affects the happening of the other event, then they
are said to be dependent events. For instance, the probability of drawing a king from a pack of
52 cards is 4/52. If this card is not replaced before the second draw, the probability of getting a
king again is 3/51, as there are now only 51 cards left and they contain only 3 kings.
Compound Events:
Two events are said to be compound when their occurrences are related to each other. For
example, a die is thrown once. The sample space S is
S = {1, 2, 3, 4, 5, 6}
Let one event be E1, that of getting an even digit uppermost, i.e.,
E1 = {2, 4, 6}
Let the other event be E2, that of getting a number greater than 4, i.e.,
E2 = {5, 6}
The event of getting both an even number and a number greater than 4 is
E = {6}
Clearly a compound event is an intersection of two or more events. In the present case
E = E1 ∩ E2
Complementary Events:
If E is any subset of the sample space, then its complement, denoted by Ē (read as E-bar),
contains all the elements of the sample space that are not part of E. If S denotes the sample
space, then
Ē = S – E = {all sample elements not in E}
Expressions of Probability:
Probability will always be a number between 0 and 1. If an event is certain to happen its
probability would be 1 and if it is certain that the event would not take place, then the
probability of its happening is zero.
The general rule for the happening of an event is that if an event can happen in m ways
and fail to happen in n ways, then the probability (P) of the happening of the event is given by
P = m / (m + n)
or
P = (Number of cases favourable to the occurrence of the event) / (Total number of mutually exclusive and exhaustive cases)
Probabilities can be expressed as ratios, fractions or percentages. For instance,
the probability of getting a head in a toss of a coin can be expressed as 1/2 or 0.5 or 50%.
Odds in Favour and Odds Against An Event:
If, as a result of an experiment, a of the outcomes are favourable to an event E and b of the
outcomes are against it, then the odds in favour of E are a : b, and the odds against E are b : a.
Example:
The odds against an event are 2 : 5. Find the probability of its happening.
Solution:
Odds against the event E are b : a, i.e., 2 : 5
P(Ē) = b / (b + a) = 2 / (2 + 5) = 2/7
Since P(E) + P(Ē) = 1,
P(E) = 1 – P(Ē) = 1 – 2/7 = 5/7
Hence, the probability of the happening of the event is 5/7.
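The odds-to-probability conversion in this example can be checked with a small helper. This is a sketch; the function name is ours:

```python
from fractions import Fraction

def probability_from_odds_against(b, a):
    """If the odds against an event are b : a, then P(event) = a / (a + b)."""
    return Fraction(a, a + b)

# Odds against the event are 2 : 5
p = probability_from_odds_against(2, 5)
print(p)  # 5/7, matching the worked example
```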
Approaches of Probability:
1. Classical Approach:
• This approach of defining probability is based on the assumption that all possible outcomes
(finite in number) of an experiment are mutually exclusive and equally likely.
• If a random experiment is repeated a finite number of times, out of which 'a' outcomes are in
favour of event A and 'b' outcomes are not in favour of event A, and all these possible outcomes
are mutually exclusive, collectively exhaustive and equally likely, then the probability of
occurrence of event A is defined as:
P(A) = a / (a + b) = (Number of favourable outcomes) / (Total number of possible outcomes) = c(A) / c(S)
• Since the probability of occurrence of an event is based on prior knowledge of the process
involved, this approach is often called the a priori (original) or classical approach. It implies
that there is no need to perform random experiments to find the probability of occurrence of
an event; no experimental data are required for the computation of probability.
2. Relative Frequency Approach:
• If the outcomes or events of a random experiment are not equally likely, or it is not known
whether they are equally likely, then the classical approach is not suitable for determining
the probability of a random event. For example, in cases like (i) whether a number greater
than 3 will appear when a die is rolled, or (ii) whether a lot of 100 items will contain 10
defective items, etc., it is not possible to predict the occurrence of an outcome in advance
without repetitive trials of the experiment.
• This approach of computing probability states that when a random experiment is
repeated a large number of times under identical conditions, where the trials are
independent of each other, the desired event may occur in some proportion (relative
frequency) of the trials. Thus, the probability of an event can be approximated by recording
the relative frequency with which such an event has occurred over a finite number of
repetitions of the experiment under identical conditions.
• Since the probability of an event is determined through repetitive empirical observations
of experimental outcomes, it is also known as empirical probability. A few situations to
which this approach can be applied are as follows:
1) Observing how often you win a lottery when buying tickets regularly.
2) Observing whether or not a certain traffic signal is red when you cross it.
3) Observing births and noting how often the baby is a female.
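The relative frequency idea can be illustrated with a short simulation. A minimal sketch, assuming a fair six-sided die (the function names and the fixed seed are ours):

```python
import random

def relative_frequency(event, trial, n=100_000, seed=42):
    """Estimate P(event) as the proportion of n independent trials
    in which the event occurs (the relative frequency approach)."""
    rng = random.Random(seed)
    hits = sum(event(trial(rng)) for _ in range(n))
    return hits / n

# Empirical probability that a die shows a number greater than 3.
# The classical answer is 3/6 = 0.5; the relative frequency should
# approximate it as n grows.
est = relative_frequency(lambda x: x > 3, lambda rng: rng.randint(1, 6))
print(round(est, 2))
```

The estimate fluctuates from run to run (here the seed is fixed for reproducibility), which is exactly why this approach needs a large number of repetitions.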
3. Subjective Approach:
The subjective approach of calculating probability is based on the degree of belief,
conviction and experience concerning the likelihood of occurrence of a random event. It is
a way to quantify an individual's beliefs, assessments and judgements about a random
phenomenon.
The probability assigned to the occurrence of an event may be based on just a guess or on
some idea about the relative frequency of past occurrences of the event. This approach is
used when sufficient data are not available or when the available sources of information
give differing results.
THEOREMS OF PROBABILITY
In order to solve problems involving probabilities, we require a thorough understanding
of some of the basic rules which govern the working of probabilities. They are known as
probability theorems or laws of probability.
There are two important theorems of probability which are discussed below:
(1) The Addition Theorem
(2) The Multiplication Theorem
(1) Addition Theorem:
If two events A and B are mutually exclusive (so that if one happens the other cannot
happen), the probability that any one of them will happen is the sum of the
probabilities of the happening of A and B.
Symbolically:
Probability (A or B) = Probability (A) + Probability (B)
If the probabilities of 'n' mutually exclusive events are p1, p2, p3, …, pn, then the probability
that some one of these events will happen is:
p1 + p2 + p3 + … + pn
For example:
1) If a coin is tossed, the chance of its head coming up is 1/2 and the chance of its tail coming
up is also 1/2. Hence the probability that either head or tail comes up is 1/2 + 1/2 = 1,
i.e. absolute certainty of its coming down.
2) In a bag there are 4 red balls, 3 white balls, 2 yellow balls and 1 green ball. If we take
out a ball without seeing it, find the probability that it is either a red or a
yellow ball.
Solution:
Probability of a red ball coming out = 4 / (4 + 3 + 2 + 1) = 4/10
Probability of a yellow ball coming out = 2 / (4 + 3 + 2 + 1) = 2/10
Probability of either a red or a yellow ball coming out = 4/10 + 2/10 = 6/10 = 3/5
3) A card is drawn at random from an ordinary pack of playing cards. Find the probability
that the card drawn is either a club or the king of diamonds.
Solution:
Probability of drawing a club = 13/52
Probability of drawing the king of diamonds = 1/52
Probability of drawing either a club or the king of diamonds = 13/52 + 1/52 = 14/52 = 7/26
Common Items:
If events contain any common item, then some adjustment is necessary. Under such a
case, it can be symbolically expressed as follows:
Probability (A or B) = Probability (A) + Probability (B) – Probability (A and B).
Example:
From a pack of cards, if a card is drawn, what is the probability that the card
will be either a king or a heart?
Solution:
In a pack of 52 cards, there are 4 kings and 13 hearts, and one card is common to both,
i.e., the king of hearts.
Probability of drawing a king = 4/52
Probability of drawing a heart = 13/52
Probability of drawing the king of hearts = 1/52
Probability of drawing either a king or a heart = 4/52 + 13/52 – 1/52 = 16/52 = 4/13
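The general addition rule used in this example can be written as a one-line helper and verified with exact fractions (the function name is ours):

```python
from fractions import Fraction

def p_a_or_b(p_a, p_b, p_a_and_b=Fraction(0)):
    """Addition theorem: P(A or B) = P(A) + P(B) - P(A and B).
    For mutually exclusive events the last term is zero."""
    return p_a + p_b - p_a_and_b

# Either a king or a heart from a pack of 52 cards,
# subtracting the common card (the king of hearts).
p = p_a_or_b(Fraction(4, 52), Fraction(13, 52), Fraction(1, 52))
print(p)  # 4/13, as in the worked example
```

Using `Fraction` keeps the arithmetic exact, so the result matches the hand calculation with no rounding.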
Multiplication Theorem:
• When we examine the probability of two or more events occurring successively,
the multiplication theorem or law of multiplication is used.
• This theorem states that if two events A and B are independent and can happen
simultaneously, the probability of their joint occurrence will be:
Probability (AB) = Probability (A) × Probability (B)
Probability (ABC) = Probability (A) × Probability (B) × Probability (C)
1) If we toss a coin twice, what is the probability that both times it will be head upward?
Solution:
Probability of head upward in the first toss = 1/2
Probability of head upward in the second toss = 1/2
Probability of head upward in both tosses = 1/2 × 1/2 = 1/4
2) What is the probability of throwing two 'threes' in two throws of a die?
Solution:
Probability of a three in the first throw = 1/6
Probability of a three in the second throw = 1/6
Hence, probability of throwing two threes in two throws = 1/6 × 1/6 = 1/36
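Both worked examples follow the same pattern: multiply the probabilities of the independent events. A minimal sketch (the helper name is ours):

```python
from fractions import Fraction
from functools import reduce
from operator import mul

def p_joint_independent(*probs):
    """Multiplication theorem for independent events:
    P(A and B and ...) = P(A) * P(B) * ..."""
    return reduce(mul, probs, Fraction(1))

print(p_joint_independent(Fraction(1, 2), Fraction(1, 2)))  # 1/4 (two heads)
print(p_joint_independent(Fraction(1, 6), Fraction(1, 6)))  # 1/36 (two threes)
```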
Rules of Multiplication
Statistically Independent Events:
When the occurrence of an event does not affect and is not affected by the probability of
occurrence of any other event, the event is said to be a statistically independent event. There
are three types of probabilities under statistical independence: marginal, joint and
conditional.
Marginal Probability:
A marginal or unconditional probability is the probability of the occurrence of an event. For
example, in the toss of coin, the outcome of each toss is an event that is statistically
independent of the outcomes of every other toss of the coin.
Joint Probability:
The probability of two or more independent events occurring together or in succession is called
the Joint probability. The joint probability of two or more independent events is equal to the
product of their marginal probabilities. In particular, if A and B are independent events, the
probability that both A and B will occur is given by
P(AB) = P (A∩B) = P(A) x P(B)
9
Suppose, in a toss of a coin twice, the probability that in both tosses the coin will turn up
heads is given by
P(H1H2) = P(H1) × P(H2) = 1/2 × 1/2 = 1/4
This formula is applied because the probability of H or T is not affected by any preceding
outcome, i.e. these outcomes are independent.
Conditional Probability:
The conditional probability of event A, given that event B has already occurred, is written as
P(A/B). Similarly, we may write P(B/A). The vertical bar is read as 'given', and events appearing
to the right of the bar are those that have already occurred. Two events A and B are said to be
independent if and only if P(A/B) = P(A) or P(B/A) = P(B); otherwise, the events are said to be
dependent.
Statistically Dependent Events:
When the probability of an event depends upon or is affected by the occurrence of any
other event, the events are said to be statistically dependent. There are three types of
probabilities under statistical dependence: joint, conditional and marginal.
1. Joint probability:
If A and B are dependent events, then their joint probability is no longer equal to the
product of their respective probabilities. That is, for dependent events
P(A and B) = P (A ∩ 𝐵) ≠ P(A) x P(B)
Accordingly,P(A) ≠ P(A/B) and P(B) ≠ P(B/A)
The joint probability of events A and B occurring together or in succession under
statistical dependences is given by
P(A ∩ B) = P(A) x P(B/A)
or
P(A∩B) = P(B) x P(A/B)
2. Conditional Probability:
Under statistical dependence, the conditional probability of event B, given that event A
has already occurred, is given by
P(B/A) = P(A ∩ B) / P(A)
Similarly, the conditional probability of A, given that event B has occurred, is
P(A/B) = P(A ∩ B) / P(B)
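The conditional probability formula can be checked against the dependent-events example from earlier in the chapter (drawing kings without replacement); the helper name is ours:

```python
from fractions import Fraction

def conditional(p_a_and_b, p_a):
    """Conditional probability: P(B/A) = P(A and B) / P(A)."""
    return p_a_and_b / p_a

# Drawing two kings without replacement from 52 cards:
# P(first king) = 4/52 and P(both kings) = (4/52) * (3/51),
# so P(second king / first king) should come back as 3/51.
p_first = Fraction(4, 52)
p_both = Fraction(4, 52) * Fraction(3, 51)
print(conditional(p_both, p_first))  # 3/51 = 1/17
```

Note how dividing the joint probability by the marginal recovers exactly the "3 kings out of 51 cards" reasoning given under Dependent Events.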
3. Marginal Probability:
The marginal probability of an event under statistical dependence is the same as the
marginal probability of an event under statistical independence.
The marginal probability of event A is obtained by summing its joint probabilities with all
the mutually exclusive events Bi with which it can occur:
P(A) = Σi P(A ∩ Bi), and similarly P(B) = Σi P(Ai ∩ B)
Calculation of Probability of At Least One Event:
We know that the probability of the happening of a certain event = 1. When we are required to
calculate the probability of at least one of several events, we take the help of this fact.
In the form of notations it can be expressed as follows:
If 'n' independent events E1, E2, …, En have probabilities denoted by p1, p2, …, pn respectively,
then the probability that at least one of the events would happen is:
1 – P(happening of none of the events) = 1 – (1 – p1)(1 – p2) … (1 – pn)
If the probability of the not happening of an event is denoted by 'q', then p = 1 – q.
Example: The probability that X will live up to 70 years is 3/4 and the probability that Y will
live up to 70 years is 2/3. What is the probability
(a) that both X and Y will live up to 70 years;
(b) that both die before reaching 70 years?
Solution:
(a) Probability that X will live up to 70 years = 3/4
Probability that Y will live up to 70 years = 2/3
∴ Probability that both X and Y will live up to 70 years = 3/4 × 2/3 = 6/12 or 1/2
(b) Probability that X will die before 70 years = 1 – 3/4 = 1/4
Probability that Y will die before 70 years = 1 – 2/3 = 1/3
∴ Probability that both die before 70 years = 1/4 × 1/3 = 1/12
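The "at least one" rule can be sketched directly from the complement identity above (the function name is ours):

```python
from fractions import Fraction

def p_at_least_one(*probs):
    """P(at least one of n independent events) =
    1 - P(none happens) = 1 - (1 - p1)(1 - p2)...(1 - pn)."""
    p_none = Fraction(1)
    for p in probs:
        p_none *= (1 - p)
    return 1 - p_none

# X lives to 70 with probability 3/4, Y with probability 2/3.
# P(both die) = 1/12, so P(at least one lives) = 1 - 1/12 = 11/12.
print(p_at_least_one(Fraction(3, 4), Fraction(2, 3)))  # 11/12
```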
Bayes’ Theorem
Bayes' Theorem is based on the proposition that probabilities should be revised when
new information is available. The idea of revising probabilities is a common one, because all of
us use it in daily life even though we may not have any previous experience in calculating
probabilities.
Our concern for revising probabilities arises from a need to make better use of available
information. The result is referred to as Bayes' Theorem after the Reverend Thomas Bayes
(1702 – 1761), whose essay proposing that probabilities be revised in accordance with
empirical findings was published posthumously in 1763.
The probabilities of the events are designated as P(E1), P(E2), etc., which are prior
probabilities. The conditional probabilities of a result R given each event are designated as
P(R/E1), P(R/E2), etc. The posterior probability of E1 for a particular result R of an empirical
investigation may be found from:
P(E1/R) = P(E1) P(R/E1) / [P(E1) P(R/E1) + P(E2) P(R/E2)]
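The revision of priors into posteriors can be sketched as follows. The numbers are hypothetical, chosen only to illustrate the formula (the function name is ours):

```python
from fractions import Fraction

def posterior(priors, likelihoods):
    """Bayes' Theorem: revise prior probabilities P(Ei) using the
    likelihoods P(R/Ei) of an observed result R.
    Each posterior is P(Ei) P(R/Ei) divided by the total P(R)."""
    joints = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(joints)  # P(R), summed over all the events
    return [j / total for j in joints]

# Hypothetical numbers: two equally likely events, with the result R
# three times as likely under E1 as under E2.
post = posterior([Fraction(1, 2), Fraction(1, 2)],
                 [Fraction(3, 4), Fraction(1, 4)])
print(post)  # posteriors [3/4, 1/4] - the evidence shifts belief toward E1
```

The denominator is the same for every event, so the posteriors always sum to 1; observing R merely re-weights the priors by how well each event explains the result.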
M.C.Qs.
1. If p = q = 1/2, the frequency distribution will be
(a) Perfectly symmetrical
(b) Non-perfectly symmetrical
(c) Zero
(d) 1
2. What is the mean of the Poisson Distribution?
(a) npq
(b) m
(c) np
(d) n
3. The curve of the Normal Distribution is
(a) Symmetrical
(b) Perfectly symmetrical
(c) Asymmetrical
(d) None of these
4. Type I error is denoted by
(a) α (alpha)
(b) β (beta)
(c) γ (gamma)
(d) μ (mu)
5. How do we denote the Null Hypothesis?
(a) H0
(b) H1
(c) H2
(d) None of these
ANSWERS
1. (a)  2. (c)  3. (b)  4. (a)  5. (a)