COURSE: JUST 3900
INTRODUCTORY STATISTICS
FOR CRIMINAL JUSTICE
Chapter 10:
t Test for Two Independent Samples
Instructor:
Dr. John J. Kerbs, Associate Professor
Joint Ph.D. in Social Work and Sociology
© 2013. DO NOT CITE, QUOTE, REPRODUCE, OR DISSEMINATE WITHOUT WRITTEN PERMISSION FROM THE AUTHOR: Dr. John J. Kerbs can be emailed for permission at [email protected]
Independent-Measures Designs
• Allows researchers to evaluate the mean difference between two populations using data from two separate samples.
• The identifying characteristic of the independent-measures or between-subjects design is the existence of two separate or independent samples.
• Thus, an independent-measures design can be used to test for mean differences between two distinct populations (such as men versus women) or between two different treatment conditions (such as drug versus no-drug).
[Figure: Teaching Method A vs. B]
Independent-Measures Designs (cont'd.)
• The independent-measures design is used in situations where a researcher has no prior knowledge about either of the two populations (or treatments) being compared. In particular, the population means and standard deviations are all unknown.
• Because the population variances are not known, these values must be estimated from the sample data.
The t Statistic for an Independent-Measures Research Design
• As with all hypothesis tests, the general purpose of the independent-measures t test is to determine whether the sample mean difference obtained in a research study indicates a real mean difference between the two populations (or treatments) or whether the obtained difference is simply the result of sampling error.
• Remember, if two samples are taken from the same population and are given exactly the same treatment, there will still be some difference between the sample means (this difference is called sampling error).
• The hypothesis test provides a standardized, formal procedure for determining whether the mean difference obtained in a research study is significantly greater than can be explained by sampling error.
Two Population Distributions
Hypothesis Tests and Effect Size with the Independent-Measures t Statistic
• To prepare the data for analysis, the first step is to compute the sample mean and SS (or s, or s²) for each of the two samples, as sketched below.
• The hypothesis test follows the same four-step procedure outlined in Chapters 8 and 9.
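To make that first step concrete, here is a minimal Python sketch (my own illustration, with made-up scores) of computing M and SS for one sample:

```python
# Made-up scores for one sample; M is the sample mean and SS is the
# sum of squared deviations from the mean.
scores = [93, 95, 90, 92, 96, 94, 91, 95, 93, 91]
M = sum(scores) / len(scores)
SS = sum((x - M) ** 2 for x in scores)
print(M, SS)
```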
Elements of a t Statistic: Single-Sample & Independent-Measures
• Single-sample: t = (M − µ) / sM, where sM = √(s²/n) and df = n − 1
• Independent-measures: t = ((M1 − M2) − (µ1 − µ2)) / s(M1−M2), where s(M1−M2) = √(sp²/n1 + sp²/n2) and df = df1 + df2
• Pooled variance: sp² = (SS1 + SS2) / (df1 + df2)
• NOTE: The alternative formula for pooled variance, sp² = (df1·s1² + df2·s2²) / (df1 + df2), works when you have the sample variances (s²) for the first and second samples rather than the sums of squares (SS).
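As a quick check that the two pooled-variance formulas agree, here is a minimal Python sketch using the book-example values given below (SS1 = 200, SS2 = 160, n1 = n2 = 10):

```python
# Pooled variance two ways; both should give the same answer.
SS1, SS2 = 200, 160
n1, n2 = 10, 10
df1, df2 = n1 - 1, n2 - 1

sp2_from_SS = (SS1 + SS2) / (df1 + df2)                  # from sums of squares
s2_1, s2_2 = SS1 / df1, SS2 / df2                        # sample variances s² = SS/df
sp2_from_var = (df1 * s2_1 + df2 * s2_2) / (df1 + df2)   # alternative formula
print(sp2_from_SS, sp2_from_var)                         # both print 20.0
```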
Hypothesis Testing with the Independent-Measures t Statistic
• Book example: For 10 students who watched Sesame Street (group 1) and 10 students who did not watch the show (group 2), was there a difference in their average high-school grades?
• Key information needed to run a t test for independent measures:
  n1 = 10, n2 = 10
  M1 = 93, M2 = 85
  SS1 = 200, SS2 = 160
Hypothesis Testing with the Independent-Measures t Statistic
• Step 1: State the hypotheses and select the α level. For the independent-measures test, H0 states that there is no difference between the two population means.
• For a two-tailed test, the hypotheses are as follows:
  H0: µ1 − µ2 = 0
  H1: µ1 − µ2 ≠ 0
  Select α: α = 0.01 (two-tailed)
• For a one-tailed test:
  H0: µ1 − µ2 ≤ 0
  H1: µ1 − µ2 > 0
Hypothesis Testing with the Independent-Measures t Statistic
• Step 2: Locate the critical region. The critical values for the t statistic are obtained using degrees of freedom determined by adding together the df value for the first sample and the df value for the second sample. For two samples with n = 10 in each, df is calculated as follows:
  df = df1 + df2 = (n1 − 1) + (n2 − 1) = 9 + 9 = 18
• Example: To find the critical t value for 18 df at α = 0.01 (two-tailed), we look at the t-distribution table: t = ±2.878
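If you want to verify the tabled value in software rather than in the t-distribution table, here is a minimal sketch using scipy (my own check, not part of the book example):

```python
from scipy import stats

# Two-tailed critical t for df = 18 at alpha = 0.01: put alpha/2 in each
# tail and take the upper-tail quantile.
alpha, df = 0.01, 18
t_crit = stats.t.ppf(1 - alpha / 2, df)
print(round(t_crit, 3))  # 2.878
```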
Hypothesis Testing with the Independent-Measures t Statistic
• Step 3: Compute the test statistic. The t statistic for the independent-measures design has the same structure as the single-sample t introduced in Chapter 9. However, in the independent-measures situation, all components of the t formula are doubled: there are two sample means, two population means, and two sources of error contributing to the standard error in the denominator. There are three key parts to the t-statistic calculation:
  • Part A) Calculate the pooled variance
  • Part B) Use the pooled variance to calculate the estimated standard error
  • Part C) Compute the t statistic
Hypothesis Testing with the Independent-Measures t Statistic
• Step 3 (Part A): Pooled variance
  sp² = (SS1 + SS2) / (df1 + df2) = (200 + 160) / (9 + 9) = 360/18 = 20
• Step 3 (Part B): Estimated standard error
  s(M1−M2) = √(sp²/n1 + sp²/n2) = √(20/10 + 20/10) = √4 = 2
• Step 3 (Part C): Compute the t statistic
  t = ((M1 − M2) − (µ1 − µ2)) / s(M1−M2) = ((93 − 85) − 0) / 2 = 8/2 = 4
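Putting Parts A through C together, here is a minimal Python sketch of the whole Step 3 computation, with a cross-check against scipy's pooled-variance t test (the summary statistics come from the slides; the scipy cross-check is my own addition):

```python
import math
from scipy import stats

# Book-example summary statistics.
n1, n2 = 10, 10
M1, M2 = 93, 85
SS1, SS2 = 200, 160
df1, df2 = n1 - 1, n2 - 1

sp2 = (SS1 + SS2) / (df1 + df2)       # Part A: pooled variance = 20.0
se = math.sqrt(sp2 / n1 + sp2 / n2)   # Part B: estimated standard error = 2.0
t = ((M1 - M2) - 0) / se              # Part C: t = 4.0 (H0 says µ1 - µ2 = 0)
print(t)

# Cross-check: scipy's pooled (equal_var=True) t test from summary stats.
t_check, p = stats.ttest_ind_from_stats(M1, math.sqrt(SS1 / df1), n1,
                                        M2, math.sqrt(SS2 / df2), n2,
                                        equal_var=True)
print(round(t_check, 2), p)  # t = 4.0 and its two-tailed p-value
```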
Hypothesis Testing with the Independent-Measures t Statistic
• Step 4: Make a decision. If the t-statistic ratio indicates that the obtained difference between sample means (numerator) is substantially greater than the difference expected by chance (denominator), we reject H0 and conclude that there is a real mean difference between the two populations or treatments.
• t = 4.00 > +2.878, so we reject H0 and conclude that there is a significant difference in high-school grades for those who watched Sesame Street as compared to those who did not watch the show.
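The decision rule itself is a one-line comparison; a minimal sketch with the example's values:

```python
# Two-tailed decision: reject H0 when |t| exceeds the critical value.
t, t_crit = 4.00, 2.878
print(abs(t) > t_crit)  # True -> reject H0 at alpha = 0.01
```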
Measuring Effect Size for the Independent-Measures t
• Effect size for the independent-measures t is measured in the same way that we measured effect size for the single-sample t in Chapter 9.
• Specifically, you can compute an estimate of Cohen's d, or you can compute r² to obtain a measure of the percentage of variance accounted for by the treatment effect.
Measuring Effect Size for the Independent-Measures t
• Compute an estimated Cohen's d:
  Est. Cohen's d = (M1 − M2) / √(sp²) = (93 − 85) / √20 = 8 / 4.47 = 1.79
• Magnitude of d (evaluation of effect size):
  d = 0.2: Small effect (mean difference around 0.2 standard deviations)
  d = 0.5: Medium effect (mean difference around 0.5 standard deviations)
  d = 0.8: Large effect (mean difference around 0.8 standard deviations)
Percent of Variance Explained as Measured by r²
• Compute r² = t² / (t² + df) = 4² / (4² + 18) = 16 / (16 + 18) = 16/34 = 0.47, or 47%
Evaluation of Effect Size
• r² = 0.01 (0.01 × 100 = 1%): Small effect
• r² = 0.09 (0.09 × 100 = 9%): Medium effect
• r² = 0.25 (0.25 × 100 = 25%): Large effect
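Both effect-size measures are quick to compute from the test results; a minimal Python sketch with the example's values:

```python
import math

# Example values: M1 = 93, M2 = 85, pooled variance = 20, t = 4, df = 18.
M1, M2, sp2 = 93, 85, 20
t, df = 4.0, 18

d = (M1 - M2) / math.sqrt(sp2)   # estimated Cohen's d = 8/4.47 ≈ 1.79 (large)
r2 = t**2 / (t**2 + df)          # r² = 16/34 ≈ 0.47, or 47% of variance (large)
print(round(d, 2), round(r2, 2))
```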
Removing Treatment Effects
• NOTE: We added 4 points to the scores of students who did not watch Sesame Street.
• NOTE: We subtracted 4 points from the scores of students who watched Sesame Street.
• Shifting each group by half of the 8-point mean difference removes the treatment effect, so any remaining difference between the samples reflects only sampling error.
Confidence Intervals and Hypothesis Tests
• Calculate a 95% confidence interval for the Sesame Street example. For a two-tailed critical value with df = 18 and α = 0.05, tcrit = ±2.101.
  µ1 − µ2 = (M1 − M2) ± t·s(M1−M2) = (93 − 85) ± 2.101(2) = 8 ± 4.202 = 3.798 to 12.202
• NOTE: Because 0 is not in the 95% confidence interval, a mean difference of 0 is not among the plausible values for µ1 − µ2. In other words, the value of 0 is rejected with 95% confidence, which is the same as rejecting H0 with α = 0.05.
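A minimal Python sketch of the same interval, using scipy only to fetch the critical t (values from the example):

```python
from scipy import stats

M1, M2, se, df = 93, 85, 2.0, 18
t_crit = stats.t.ppf(0.975, df)     # two-tailed critical t ≈ 2.101
lo = (M1 - M2) - t_crit * se
hi = (M1 - M2) + t_crit * se
print(round(lo, 3), round(hi, 3))   # ≈ 3.798 to 12.202
```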
Sample Variance and Sample Size
• Standard error is positively related to sample variance (larger variance leads to larger standard error).
• Standard error is inversely related to sample size (larger sample sizes lead to smaller standard error).
• Larger variance produces a smaller t statistic and reduces the likelihood of a significant finding.
  • Larger variance also produces smaller measures of effect size.
• Larger samples produce larger values for the t statistic and increase the likelihood of rejecting H0.
  • Sample size has no effect on Cohen's d and only a small influence on r². (See the numeric sketch below.)
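A quick numeric illustration of both relationships (the alternative variance and sample-size values are my own, chosen to make the pattern obvious):

```python
import math

# Estimated standard error for the independent-measures t.
def se(sp2, n1, n2):
    return math.sqrt(sp2 / n1 + sp2 / n2)

print(se(20, 10, 10))   # baseline from the example: 2.0
print(se(80, 10, 10))   # 4x the variance    -> double the standard error (4.0)
print(se(20, 40, 40))   # 4x the sample size -> half the standard error (1.0)
```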
Homogeneity of Variance Assumption
• Most hypothesis tests usually work reasonably well even if some of the underlying assumptions are violated. The one notable exception is the assumption of homogeneity of variance for the independent-measures t test.
  • The assumption requires that the two populations from which the samples are obtained have equal variances.
  • It is necessary in order to justify pooling the two sample variances and using the pooled variance in the calculation of the t statistic.
Homogeneity of Variance Assumption
• If the assumption is violated, then the t statistic contains two questionable values: (1) the value for the population mean difference, which comes from the null hypothesis, and (2) the value for the pooled variance.
• The problem is that you cannot determine which of these two values is responsible for a t statistic that falls in the critical region.
• In particular, you cannot be certain that rejecting the null hypothesis is correct when you obtain an extreme value for t.
Homogeneity of Variance Assumption
• If the two sample variances appear to be substantially different, you should use Hartley's F-max test to determine whether or not the homogeneity assumption is satisfied.
• If homogeneity of variance is violated, Box 10.2 presents an alternative procedure for computing the t statistic that does not involve pooling the two sample variances.
Homogeneity of Variance Assumption
• Hartley's F-max test
  • Step 1: Compute the sample variance for each separate sample (s² = SS/df).
  • Step 2: Compute F-max = s²(largest) / s²(smallest).
  • Step 3: Find the critical F-max value:
    • k = number of separate samples (here, 2)
    • df = n − 1 for each sample (the Hartley test assumes all samples are the same size)
    • Select an alpha (α) level for the critical F-max value; homogeneity tests use larger alpha levels such as 0.05 or 0.01.
  • Step 4: If the computed F-max < the critical F-max, there is no evidence that the homogeneity of variance assumption has been violated.
• NOTE: If the F-max test rejects the hypothesis of equal variances, or if you suspect that the homogeneity of variance assumption is not justified, you should not compute an independent-measures t statistic using pooled variance. In such cases, use the alternative formula for the t statistic on the next slide.
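A minimal sketch of the four steps in Python, using the book example's SS values (SS1 = 200, SS2 = 160, n = 10); the critical F-max itself still comes from the table:

```python
SS = {"group1": 200, "group2": 160}
n = 10                              # Hartley's test assumes equal sample sizes

# Step 1: sample variance for each sample, s² = SS/df.
variances = {g: ss / (n - 1) for g, ss in SS.items()}

# Step 2: F-max = largest variance / smallest variance.
f_max = max(variances.values()) / min(variances.values())
print(round(f_max, 2))  # 1.25

# Steps 3-4: with k = 2 samples and df = n - 1 = 9, look up the critical
# F-max for the chosen alpha; if f_max is smaller, there is no evidence
# that the homogeneity of variance assumption has been violated.
```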
Homogeneity of Variance Assumption
• Alternative formula for the t statistic
  • NOTE: This alternative formula does not pool the sample variances and does not require the homogeneity of variance assumption.
  • Step 1: Calculate the standard error using the two separate sample variances, as in Equation 10.1:
    s(M1−M2) = √(s1²/n1 + s2²/n2)
  • Step 2: The value of degrees of freedom for the t statistic is adjusted using the following equation:
    df = (s1²/n1 + s2²/n2)² / [ (s1²/n1)² / (n1 − 1) + (s2²/n2)² / (n2 − 1) ]
  • Decimal values for df should be rounded down to the next lower integer. Lowering the df pushes the boundaries of the critical region farther out, which makes the test more demanding.
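A minimal sketch of this alternative (non-pooled) t statistic in Python; the two samples are made up purely to illustrate, and the scipy call is a cross-check (scipy computes the same statistic but keeps the fractional df rather than rounding down):

```python
import math
import statistics
from scipy import stats

# Made-up samples; the second has a visibly larger spread.
a = [12, 15, 11, 18, 14, 16, 13, 17, 15, 14]
b = [10, 22, 8, 25, 12, 20, 9, 23, 11, 21]

va = statistics.variance(a) / len(a)   # s1²/n1
vb = statistics.variance(b) / len(b)   # s2²/n2

se = math.sqrt(va + vb)                # Step 1: non-pooled standard error
df = (va + vb) ** 2 / (va**2 / (len(a) - 1) + vb**2 / (len(b) - 1))
df = math.floor(df)                    # Step 2: adjusted df, rounded down

t = (statistics.mean(a) - statistics.mean(b)) / se
print(round(t, 3), df)

# Cross-check with scipy's non-pooled test (equal_var=False).
print(stats.ttest_ind(a, b, equal_var=False))
```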