
PREFACE
2002 NSAF Sample Design is the second report in a series describing the methodology of the
2002 National Survey of America’s Families (NSAF). The NSAF is part of the Assessing the
New Federalism project at the Urban Institute, conducted in partnership with Child Trends. Data
collection for the NSAF was conducted by Westat.
The NSAF is a major household survey focusing on the economic, health, and social
characteristics of children, adults under the age of 65, and their families. During the third round
of the survey in 2002, interviews were conducted with over 40,000 families, yielding information
on over 100,000 people. The NSAF sample is representative of the nation as a whole and of 13
states, and therefore has an unprecedented ability to measure differences between states.
About the Methodology Series
This series of reports has been developed to provide readers with a detailed description of the
methods employed to conduct the 2002 NSAF. The 2002 series of reports includes:

No. 1: An overview of the NSAF sample design, data collection techniques, and estimation methods
No. 2: A detailed description of the NSAF sample design for both telephone and in-person interviews
No. 3: Methods employed to produce estimation weights and the procedures used to make state and national estimates for Snapshots of America’s Families
No. 4: Methods used to compute and results of computing sampling errors
No. 5: Processes used to complete the in-person component of the NSAF
No. 6: Collection of NSAF papers
No. 7: Studies conducted to understand the reasons for nonresponse and the impact of missing data
No. 8: Response rates obtained (taking the estimation weights into account) and methods used to compute these rates
No. 9: Methods employed to complete the telephone component of the NSAF
No. 10: Data editing procedures and imputation techniques for missing variables
No. 11: User’s guide for public use microdata
No. 12: 2002 NSAF questionnaire
About This Report
Report No. 2 describes the sample design for the 2002 NSAF. As in previous rounds of the
survey, the 2002 NSAF sample consists of a random digit dial (RDD) telephone sample
supplemented by an area probability sample of nontelephone households. While the
nontelephone samples for previous NSAF rounds were both nationally and state representative,
the 2002 NSAF used only a nationally representative nontelephone sample. The report covers
both the telephone and nontelephone sample designs, adjustments made to the sample design
during the field period, within-household sampling procedures, and achieved sample sizes.
For More Information
For more information about the National Survey of America’s Families, contact:
Assessing the New Federalism
Urban Institute
2100 M Street, NW
Washington, DC 20037
E-mail: [email protected]
Web site: http://anf.urban.org/nsaf
Adam Safir and Tim Triplett
CONTENTS

Chapter

1  OVERVIEW

2  PRINCIPAL FEATURES OF SAMPLE DESIGN BY ROUND
   2.1  The Survey
   2.2  Survey Components
   2.3  Number of Completed Interviews
   2.4  Projected Effective Sample Size

3  RANDOM DIGIT DIAL HOUSEHOLD SAMPLING
   3.1  Sampling Telephone Numbers
   3.2  Subsampling Households
   3.3  Subsampling Adult-Only Households
   3.4  Subsampling High-Income Households
   3.5  Household Sampling Revisions during Data Collection
   3.6  Achieved Response and Eligibility Rates

4  AREA SAMPLE
   4.1  First-Stage Sampling
   4.2  Second-Stage Sampling
        4.2.1  Exclusion of Block Groups with High Telephone Coverage Rates
        4.2.2  Segment Stratification and Selection
   4.3  Chunk Selection
   4.4  Achieved Response and Eligibility Rates

5  WITHIN-HOUSEHOLD SAMPLING AND ACHIEVED SAMPLE SIZES
   5.1  Sampling Children
   5.2  Sample Selection of Other Adults in Households with Children
   5.3  Sample Selection of Adults from Adult-Only Households
   5.4  Achieved Sample Sizes and Response Rates

6  CONCLUSION

REFERENCES

Tables

2-1   Number of Completed Interviews by Round, Sample Type, and Interview Type
2-2   Projected Design Effects for the RDD Sample, by Interview Type and Study Area
2-3   Projected Effective Sample Sizes for the RDD Sample, by Interview Type and Study Area
3-1   Assumed Proportion of Households, by Household Type and Poverty Status
3-2   Assumed Misclassification Rates, by Income Categories
3-3   Assumed Residential and Response Rates
3-4   Subsampling or Household Retention Rates
3-5   Income Levels for Determining Less than 200 Percent of the Poverty Level
3-6   Revised Subsampling Rates for Households with Children Screening as High-Income, by Release Group
3-7   Revised Subsampling Rates for Adult-Only Households, by Release Group
3-8   Reserve Sample Released, by Study Area
3-9   Screening for Residential Status and Presence of Children
3-10  Subsampling Screener Refusals and Response Rates
3-11  Outcomes of Household Screening of Telephone Households
3-12  Outcomes of Income Screening of Telephone Households with Children
3-13  Outcomes of Income Screening of Adult-Only Telephone Households
4-1   Number of Round 3 Primary Sampling Units
4-2   Maximum Telephone Service Rates Allowed in Covered Block Groups
4-3   Segment Counts, by Planned and Unplanned Chunking (Including Sample Supplement)
4-4   Outcomes of Area Listing
4-5   Outcomes of Area Prescreening and Screening
5-1   Proportion of Random Digit Dial Adult-Only Households with Three Other Adults under Age 65 in which Just One Adult Is Selected
5-2   Within-Household Sampling and Extended Interviews of Children in Telephone Low-Income Households
5-3   Within-Household Sampling and Extended Interviews of Children in All Subsampled Telephone Households
5-4   Within-Household Sampling and Extended Interviews of Other Adults in Subsampled Telephone Households with Children
5-5   Subsampling and Extended Interviews of Adults in Subsampled Adult-Only Telephone Households
5-6   Sources of Adult Telephone Interviews
5-7   Within-Household Sampling and Extended Interviews of Children in Nontelephone Low-Income Households
5-8   Within-Household Sampling and Extended Interviews of Children in All Nontelephone Households
5-9   Within-Household Sampling and Extended Interviews of Other Adults in Nontelephone Households with Children
5-10  Subsampling and Extended Interviews of Adults in Adult-Only Nontelephone Households
5-11  Sources of Adult Nontelephone Interviews
5-12  Within-Household Sampling and Extended Interviews of Children in Telephone and Nontelephone Low-Income Households
5-13  Within-Household Sampling and Extended Interviews of Children in All Telephone and Nontelephone Households
5-14  Within-Household Sampling and Extended Interviews of Other Adults in Telephone and Nontelephone Households with Children
5-15  Subsampling and Extended Interviews of Adults in Adult-Only Telephone and Nontelephone Households
5-16  Sources of Adult Telephone and Nontelephone Interviews

Figures

2-1   Study Areas
2-2   Sampling Frame Inclusions and Exclusions
3-1   Household Subsampling Operations
1. OVERVIEW
This report describes the sample design for the 2002 National Survey of America’s Families
(NSAF). This survey is the third round of the NSAF, and the objective is to estimate both the
characteristics of households and persons in 2002 and changes in those characteristics since the
1997 and 1999 NSAF. While the designs for all three rounds of the NSAF are similar, several
important differences exist. The first survey in 1997 was a dual-frame survey of both households
with telephones and those without telephones developed to serve as a baseline for evaluating
changes over time. The Round 2 survey was designed to improve estimates of change between
1997 and 1999 by retaining a substantial portion of the Round 1 sample. Analysis of the Round 2
data showed that the design changes did not improve the precision for estimates of change
between rounds as well as expected. Furthermore, the retention of a portion of the sample
resulted in additional operational and design complications.
Based on the findings from Round 2, the sample design for Round 3 was developed to be similar
to the Round 1 design, in the sense that the sample was mainly independent of the sample from
previous rounds. However, the sample design for Round 3 did include important modifications
from the previous rounds’ designs that were intended to reduce data collection costs. The most
important design change was the reduction of the sample size for nontelephone households in the
study areas. This change also has important implications for the estimation strategy, which is
discussed in 2002 NSAF Sample Estimation Survey Weights, Report No. 3.
This report describes the sample design and how it relates to the designs from previous rounds. It
also provides the details needed to appreciate the considerations that went into the decisions that
resulted in the features of this large and complex survey.
Chapter 2 summarizes the survey goals and the sampled units, and introduces its two major
components, the telephone and in-person surveys. One of the main objectives of Chapter 2 is to
describe the similarities and differences between the Round 3 sample design and the designs for
the previous rounds. The remaining chapters focus primarily on the Round 3 design. Chapter 3
describes the random digit dial (RDD) telephone sample design and the sampling of households
in the telephone component. The subsampling procedures for households without children and
for high-income households are included in this chapter. Chapter 4 gives a detailed account of
the sampling for the in-person survey component. It discusses the changes in the sampling
needed to move from a sample for each study area to an overall national sample for this
component of the survey. Chapter 5 presents the methods used to sample children and adults
from within the sampled households. It contains tables on the number of sampled and
interviewed persons from the survey. Chapter 6 provides some concluding remarks.
2. PRINCIPAL FEATURES OF SAMPLE DESIGN BY ROUND
The samples for all three rounds of NSAF—Round 1 in 1997, Round 2 in 1999, and Round 3 in
2002—have similar designs. A sample design report (Report No. 2) similar to this one is available for
each earlier survey. These reports describe the details of the sample design for the specific
survey. In this chapter we discuss the design features for Round 3 in relation to the features from
the previous rounds. We focus on how the Round 3 design differs from that used in Round 1 and
Round 2. The specifics of the Round 3 design are given in subsequent chapters.
2.1 The Survey
The NSAF collected information on the economic, health, and social dimensions of the well-being of children, adults under the age of 65, and their families in 13 states and the balance of the
nation. The Urban Institute selected these study areas (see figure 2-1) in 1996 prior to the first
survey because they represent a broad range of fiscal capacity, child well-being, and approaches
to government programs. Data were also collected in the balance of the nation to permit
estimates for the United States as a whole.
Figure 2-1. Study Areas
Alabama
California
Colorado
Florida
Massachusetts
Michigan
Minnesota
Mississippi
New Jersey
New York
Texas
Washington
Wisconsin
Balance of nation
In Round 1 and Round 2, Milwaukee County in Wisconsin was a separate study area that had its
own sample. A separate sample from the balance of Wisconsin was selected in these rounds to
produce estimates for the entire state of Wisconsin by combining Milwaukee and the balance of
Wisconsin. In Round 3, the separate Milwaukee County study area was eliminated and the entire
state of Wisconsin was treated as a single study area, as shown in figure 2-1.
The primary goal of the survey in all three rounds was to obtain social and economic information
about children in low-income households (those with incomes below 200 percent of the poverty
threshold), since the impact of New Federalism was likely to be greatest on these children.
Secondary goals included obtaining similar data on children in higher-income households, plus
adults under age 65 (with and without children).
With few exceptions, the decision was made to limit the survey to children, adults, and families
living in regular housing. Figure 2-2 explains the concept of regular housing through examples
of specific inclusions and exclusions. Although one impact of New Federalism could be the
displacement of persons from regular housing, including the population that lives outside of regular housing was considered infeasible within the survey context. The elderly population was also excluded. College students were enumerated at their parents’ homes. Most of these inclusions and exclusions are typical of those made in other household surveys. For example, the Current Population Survey (CPS) has essentially the same rules (U.S. Bureau of the Census 2000). The major difference is that the CPS includes military personnel living on post with families but excludes those living in noninstitutional group quarters.

Figure 2-2. Sampling Frame Inclusions and Exclusions

Inclusions
Houses, apartments, and mobile homes occupied by individuals, families, multiple families, or extended families where at least one occupant is under the age of 65
Houses, apartments, and mobile homes occupied by multiple unrelated persons, provided that the number of unrelated persons is less than nine and at least one occupant is under the age of 65
People in workers’ dormitories and camps
Military personnel living on post with their families, as well as military personnel living off post with or without their families

Included Persons in Excluded Structures
People living temporarily away from home were enumerated at their usual residences. This includes college students in dormitories, patients in hospitals, vacationers, business travelers, snowbirds, and so on. Structures that were expected to primarily include only such people were excluded

Exclusions
The institutionalized population. Examples of institutions include prisons, jails, juvenile detention facilities, psychiatric hospitals and residential treatment programs, and nursing homes for the disabled and aged
Noninstitutional group quarters, including communes, monasteries, convents, group homes for the mentally or physically disabled, shelters, halfway houses, dormitories, and dwelling units with nine or more unrelated persons
The homeless
People in transient hotel/motel rooms, tents, recreational vehicles, trailers, and other similar temporary arrangements
Military barracks and ships
2.2 Survey Components
The sample for each round of NSAF has two separate components: an RDD survey of
households with telephones, and an area survey of households without telephones. The RDD
component provides a cost-effective method to collect the desired data on a large number of
households for each study area and nationally. The area component enables the survey to cover
households without telephones. The area component is important in this survey because low-income households are more highly concentrated among nontelephone households than in the entire universe of households. For example, Giesbrecht et al. (1996) estimated from CPS data that about 20 percent of families in poverty live in households without a telephone and that about 10 percent of families with one child 3 years old or under have no telephone. Even though these results are out of date given the changes in technology in the past 5 to 10 years, the only recently published data on the relationship between poverty and having a telephone in the household are from the 2000 Census; they are discussed below. Even these data do not distinguish between landline
and wireless telephones. After the area sample was selected, households were screened to find
households without telephones. Only these nontelephone households were interviewed from the
area sample.
When Round 1 was designed, the dual-frame approach to this type of problem was relatively
new and data to optimize various aspects of the design were not available. Despite the lack of
data, it was clear that the dual-frame approach produced more precise estimates than a pure area-sampling approach of the same cost. In addition, it produced less biased estimates than a pure
RDD approach of the same cost. Waksberg et al. (1997) describe some of the options that were
considered early in the process of designing the Round 1 survey.
The Round 2 sample design used data from Round 1 to improve survey efficiency. The research
supporting revisions to the Round 2 design is described in chapter 3 of 1999 NSAF Sample
Design, Report No. 2. The primary emphasis in Round 2 shifted from producing estimates of the
current level (the Round 1 design objective) to estimating changes from 1997 to 1999. As a
result, a substantial proportion of the sample from Round 1 was retained and included in the
Round 2 sample. For the RDD sample, the proportion of the sample of telephone numbers
retained depended on the outcome of the Round 1 interview. Round 1 telephone numbers that
were residential and cooperative were sampled at higher rates than those numbers that were
either nonresidential or uncooperative. In addition to retaining a portion of the Round 1 sample,
telephone numbers not in existence in Round 1 were sampled to provide complete coverage of
telephone households in 1999. The Round 2 area sample was largely the same as used in
Round 1, but an additional sample from the balance of the nation was added to improve the
precision of national estimates.
Analysis of the Round 2 data found that the expected improvements in the precision of change
estimates due to retaining a portion of the Round 1 sample were not as large as had been
expected. The reduction in variances for change estimates depended on retaining a substantial
fraction of the Round 1 households and the characteristics of the retained households being
highly positively correlated across time. The design assumptions regarding both of these key
factors were somewhat optimistic. In the component of the sample in which an interview was
completed in Round 1 and the telephone number was retained for Round 2, the proportion of
identical households was less than assumed. In addition, even when the same household
responded in both rounds, the correlation for important estimates was lower than expected.
Given these Round 2 findings, it was decided that it would be unwise to retain a sample of
telephone numbers from the previous rounds for Round 3. Retaining telephone numbers has its
own costs in terms of introducing differential sampling fractions that decrease the precision of
the estimates and lower response rates. Report No. 8 in the 1999 series showed the screener
response rate in the retained sample was about 2 or 3 points lower than the response rate in the
newly sampled numbers from the same sampling frame. This result is consistent with findings from panel
surveys (see Kalton, Kasprzyk, and McMillen, 1989). We examine the 2002 response rates in
Report No. 8 and provide a more direct assessment of the effect of retaining telephone numbers
in the sample there. The retained sample also added some complexity operationally. Thus, it was
decided that the Round 3 telephone sample should be independent of the samples selected in
Round 1 and Round 2. Although the sample for Round 3 was selected independently, the
allocation of the sample to the study areas and the sampling procedures were very similar to
those used in Round 1.
On the other hand, the design for the Round 3 area sample was very different from that used in
the previous rounds. In Round 1 and Round 2, separate area samples were selected in each study
area and for the balance of the nation. The area and RDD samples were combined to produce
estimates of both telephone and nontelephone households for each study area and nationally.
Because of the cost of screening a large sample of households to find nontelephone households,
the sample size for the area sample was relatively small in each of the study areas. The small
sample size of nontelephone households caused some instability in the estimates of the precision
for the study areas (see 1997 Report No. 4 for more details). At the national level, the sample
size was sufficiently large that the variance estimates were reliable.
These concerns about the area sample for producing reliable estimates of nontelephone
households at the study area level led to research to find a lower-cost alternative. Ferraro and
Brick (2001) studied various methods for adjusting a sample of telephone households to account
for the undercoverage of households without telephones. This research found that an approach
called “modified poststratification” had better statistical properties for NSAF study areas than
previously considered approaches. In addition, new data from the 2000 Census of Population
showed the percentage of households without telephones was much smaller than reported in the
CPS for the same time period. The census estimated that only 2.4 percent of households did not have a telephone, compared with the 4.9 percent reported in the CPS for the same period.
Based on cost, stability of the variance estimates, new research on statistical adjustment methods,
and lower estimates of the percentage of households without telephones, the sample design for
the area sample for Round 3 was changed significantly from that used in previous rounds. The
Round 3 design included a sufficient sample to produce reliable national estimates of all
households, using the area sample to represent households without telephones. At the same time,
the area sample size for the study areas was reduced, and the plan was to rely on the modified
poststratification approach for estimating characteristics of all households for the study areas.
Thus, the sample size for the area sample and the cost of collecting these data were significantly
reduced. Tables later in this chapter show the changes in sample size at the study area level and
overall.
The other basic features of the sample design were very consistent across the three rounds of data
collection. In all three rounds, costs were reduced through the use of screener-based subsampling
of households contacted in the RDD component. In this approach the RDD screening interview
includes a very short income question. Those households that reported no children in the household or reported incomes above 200 percent of the poverty threshold were subsampled. The extended
interview has more detailed and reliable income questions for those included in the subsample.
In Round 1, the inconsistency between the responses to the short and detailed versions of the
income questions was greater than anticipated, and the subsampling rates were revised in Round 2 to account for this difference. In Round 3, subsampling rates similar to those
suggested by the Round 2 research (see 1999 NSAF Sample Design, Report No. 2) were used.
In all the rounds and across both the RDD and the area samples, the number of household
members that could be sampled and interviewed was limited. The main reason to impose these
limits was to reduce the respondent burden for the household as a whole. Even if there were
several children under age 6 in a household, only one was randomly selected. Similarly, only one
child age 6 to 17 was sampled in a household. The most knowledgeable adult (MKA) in the
household for the child was interviewed about the sample child. During the MKA interview,
additional data were collected about the MKA and about the MKA’s spouse/partner, if that
person was living in the same household. The MKA provided all the data about the
spouse/partner. Generally, every question about the MKA was repeated with reference to the
spouse/partner. However, some questions on health insurance and health care usage were asked
about only one of the two. The appropriate person targeted for these questions was randomly
assigned as either the MKA or the spouse/partner. Some questions were asked only about the
MKA, related to feelings, religious activities, and opinions. These items were not repeated for
the spouse/partner because proxy response did not seem sufficiently valid or reliable, and
because self-response on these few questions was operationally impractical. All these rules for
subsampling persons within sample households were the same for all three rounds.
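As a simple illustration of these within-household selection rules, the sketch below selects at most one focal child in each age group and randomly assigns the health insurance and health care usage questions to either the MKA or the spouse/partner. This is a minimal sketch of the logic described above, not NSAF production code, and the function and argument names (select_within_household, children_under_6, and so on) are hypothetical.

import random

def select_within_household(children_under_6, children_6_to_17, mka_has_spouse,
                            rng=random.Random()):
    """Illustrative within-household selection: at most one focal child under
    age 6 and one focal child age 6 to 17 are drawn at random, and the health
    insurance/health care usage questions are randomly targeted at either the
    MKA or the spouse/partner when a spouse/partner is present."""
    selection = {
        "focal_child_under_6": rng.choice(children_under_6) if children_under_6 else None,
        "focal_child_6_to_17": rng.choice(children_6_to_17) if children_6_to_17 else None,
        "health_module_target": rng.choice(["MKA", "spouse/partner"]) if mka_has_spouse else "MKA",
    }
    return selection

# Example: a household with three children under age 6 and one child age 6 to 17.
print(select_within_household(["child A", "child B", "child C"], ["child D"], True))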
Two other within-household subsampling steps were used in all three rounds of data collection.
Other adults in households with children (adults who were not the MKA of any children in the
household) and adults in adult-only households were subsampled. The rules for the subsampling
were complex and are described in detail in the 1997 NSAF Sample Design, Report No. 2. Self-response was required for sample adults. During the interview with a sample adult, additional
data were collected about the sample adult’s spouse/partner, if they were living in the same
household. As in the MKA interview, the data were always collected by asking the sample
person to respond for the spouse/partner. No attempt was made to collect these data directly from
the spouse of a sample adult. As in the MKA interview, some questions were asked only about
the sample adult, related to feelings, religious activities, and opinions.
In all three rounds of NSAF, the sample design set in place at the beginning of the survey was
revised somewhat as data on the outcomes of the data collection became available. For example,
the expected sample sizes depended on assumptions about residency rates, response rates,
poverty rates, and other parameters. Since the observed rates differed from the assumed rates,
sometimes revisions in sampling rates or the number of sampled units had to be implemented as
the data collection proceeded. For the first two rounds, the adjustments that took place are
documented in the previous sample design reports. In Round 3, the most notable changes that took
place during the field period involved releasing more telephone numbers than originally planned
due to lower-than-expected residency and response rates, and using a refusal subsampling
procedure for the RDD sample to speed up the data collection and reduce costs slightly. These
changes to the original sample design are discussed in later chapters, along with the other more
minor changes.
2.3 Number of Completed Interviews
The number of completed interviews for each round of NSAF is given in table 2-1. The table
shows the number of completed interviews by type of sample (RDD or area sample) and by type
of interview (various types of adult interviews and child interviews). Note that one MKA
interview can provide data on up to two sample children and that there were a few MKA
interviews with parents under the age of 18. In addition to the total number of interviews, the
table also shows the range of the number of completed interviews in each study area and in the
balance of the nation by round.
The table shows that the number of completed RDD interviews is about the same for all three
rounds. The big difference is in the number of completed area interviews, where the number
completed in Round 3 is lower than the previous rounds. This difference is particularly obvious
in the study areas. This difference is a consequence of the sample design change described
earlier.
Table 2-1. Number of Completed Interviews by Round, Sample Type, and Interview Type

                                           Total                          Range for Study Areas               Balance of the Nation
Type of Sampled Interview       Round 1  Round 2  Round 3     Round 1      Round 2      Round 3       Round 1  Round 2  Round 3

RDD sample
  All adults                     46,621   45,025   43,133    2,347–3,771  2,085–3,746  1,933–3,798     4,913    6,168    7,251
  Adult MKAs                     27,248   29,054   28,208    1,431–2,025  1,309–2,382  1,254–2,196     2,669    3,610    4,603
  Other adults in households
    with children                 2,407    3,054    2,872      102–218      104–279      101–310         225      496      521
  Adults in households
    without children             16,966   13,917   12,053      651–1,562    342–1,354    466–1,346     2,019    2,062    2,127
  Children under age 6           12,067   11,990   12,088      613–914      515–958      500–941       1,179    1,462    1,987
  Low-income children
    under age 6                   7,246    4,923    5,331      398–539      258–407      264–491         746      633      873
  Children age 6 to 17           21,210   22,841   21,864    1,103–1,557  1,056–1,881  1,001–1,708     2,064    2,872    3,557
  Low-income children
    age 6 to 17                  11,813    8,560    8,887      624–844      470–627      475–779       1,182    1,097    1,501

Area sample
  All adults                      1,682    1,678      648       36–299       33–236         2–90          191      326      320
  Adult MKAs                        915      876      294       18–150       11–128         0–45          120      178      143
  Other adults in households
    with children                   121      113       46         2–28         1–15          0–6            7       31       24
  Adults in households
    without children                646      689      308       15–121        19–92         1–38           64      117      153
  Children under age 6              548      525      180         9–94         8–72         0–32           69      104       84
  Low-income children
    under age 6                     497      463      149         6–87         7–63         0–28           66       86       64
  Children age 6 to 17              615      582      199       13–105         6–91         0–26           80      119      103
  Low-income children
    age 6 to 17                     550      509      168       11–98          6–80         0–23           69       95       84
2.4 Projected Effective Sample Size
In surveys like NSAF that sample study areas and specific subgroups of the population at
different rates, the number of completed interviews does not give a full and accurate picture of
the precision of the estimates. Sampling with differential probabilities generally reduces the
precision of estimates aggregated over groups with different rates, even though it improves the
precision for those subgroups sampled at higher rates. For example, households in the study
areas are sampled at higher rates than are households in the balance of the nation, and the
precision of national estimates is lower than it would be if the rates for the groups were identical.
Of course, the higher sampling rates in the study areas provide reliable estimates for these areas
that would not be possible otherwise.
In designing the sample, the losses in efficiency of the samples due to the differential sampling
rates were taken into account in determining the needed sample sizes. The other source of
efficiency losses in typical household surveys is clustering of sampled households in geographic
areas. List-assisted RDD surveys do not cluster households, so the only clustering of sampled
households in the NSAF is for nontelephone households. Since the nontelephone households are
a small proportion of the total sample, we developed our projections of the precision based only
on the RDD sample. The design effect or deff (the ratio of the variance of an estimate under the
actual design to what would be obtained with a simple random sample of the same size) is one
method of accounting for the efficiency of the sample. Another way of thinking about the design
effect is that it is the inverse of the efficiency of the sample, so that a sample with a deff of 2 is
equivalent to a simple random sample of half the sample size. Design effects were estimated
from the samples for the previous two rounds and were reported in the corresponding Report No.
4 for the round. These average deffs were instrumental in projecting design effects for Round 3.
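To make the relationship between the design effect and the effective sample size concrete, the short sketch below applies the standard definition (effective sample size equals the number of completed interviews divided by the deff). It is a simple illustration, not the program used to produce the NSAF projections, and the numbers in the example are hypothetical.

def effective_sample_size(n_completed, deff):
    """Effective sample size: the simple random sample size that would give the
    same variance as the actual design (completed interviews divided by deff)."""
    return n_completed / deff

# A sample with a deff of 2 is equivalent to a simple random sample of half its size.
print(effective_sample_size(1000, 2.0))    # 500.0
print(effective_sample_size(1500, 1.25))   # 1200.0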
The primary goal of the sample design was to obtain approximately the same number of low-income child interviews as achieved in Round 2, with a secondary objective of obtaining about
the same number of low-income adult interviews. Since the efficiency of the sample is directly
tied to the rates used for subsampling the different groups, keeping the subsampling rates as
consistent as possible with the optimal rates developed in Round 2 while still completing about
the same number of interviews was also important. Other factors also affected the sample design.
The sample size allocated to the Milwaukee study area could now be used in other areas because
Milwaukee was dropped as a stand-alone study area for Round 3. The sample size for this area was
reallocated to California, the balance of the nation, and Michigan to improve the precision for
these areas and the national estimates.
Once the sample sizes were determined, projected design effects and effective sample sizes for
the key interview groups could be computed. The interview groups are children, other adults
(who are not MKAs), and all adults. Since the survey is critically interested in people in families
with incomes less than 200 percent of the poverty level, table 2-2 has the projected deffs for both
low-income and all persons. Using these projected design effects and an additional adjustment
due to other losses of efficiency in the sample design, the effective sample sizes for each study
area and for the nation were computed. These projected effective RDD sample sizes are shown in
table 2-3 for all groups.
The projected design effects and effective sample size shown in tables 2-2 and 2-3 were
computed prior to the start of data collection. Thus, changes in the design that occurred during
data collection and the actual sample sizes achieved are not reflected in these tables. The average
design effects based on the data collected in Round 3 are given in Report No. 4.
The next two chapters describe the procedures used to select the RDD and area sample of
households.
Table 2-2. Projected Design Effects for the RDD Sample, by Interview Type and Study Area

                        Children             Other Adults            All Adults
Study Area        Low-Income    All     Low-Income    All     Low-Income    All
Alabama              1.08      1.11        1.47      1.37        1.52      1.58
California           1.03      1.04        1.49      1.33        1.43      1.52
Colorado             1.05      1.04        1.59      1.32        1.73      1.82
Florida              1.04      1.04        1.59      1.39        1.73      1.88
Massachusetts        1.04      1.03        1.51      1.27        1.51      1.61
Michigan             1.04      1.03        1.54      1.30        1.42      1.47
Minnesota            1.05      1.04        1.38      1.19        1.32      1.30
Mississippi          1.08      1.12        1.37      1.31        1.46      1.58
New Jersey           1.05      1.04        1.43      1.19        1.73      1.72
New York             1.04      1.04        1.50      1.28        1.60      1.72
Texas                1.05      1.07        1.34      1.22        1.55      1.58
Washington           1.05      1.05        1.42      1.22        1.53      1.60
Wisconsin            1.02      1.02        1.33      1.17        1.32      1.37
Bal. of nation       1.06      1.06        1.68      1.44        1.50      1.60
National             1.99      2.11        2.76      2.56        3.34      3.14
Table 2-3. Projected Effective Sample Sizes for the RDD Sample, by Interview Type and Study Area

                        Children               Other Adults             All Adults
Study Area        Low-Income     All     Low-Income     All     Low-Income      All
Alabama               663       1,322        207         508        593        1,207
California            845       2,006        235         703        734        1,720
Colorado              634       1,892        135         472        440        1,236
Florida               662       1,525        137         411        459        1,012
Massachusetts         594       1,990        165         652        498        1,546
Michigan              665       1,986        209         732        629        1,804
Minnesota             562       1,935        247         995        635        2,216
Mississippi           690       1,242        172         426        583        1,080
New Jersey            619       2,258        113         587        405        1,530
New York              747       1,692        150         500        532        1,198
Texas                 802       1,619        179         491        599        1,269
Washington            629       1,813        160         581        504        1,422
Wisconsin             563       1,660        203         730        564        1,647
Bal. of nation      1,699       4,177        518       1,546      1,588        3,729
National            5,225      12,854      1,025       3,643      3,960       11,337
3. RANDOM DIGIT DIAL HOUSEHOLD SAMPLING
This chapter describes the sample design and implementation for the RDD component of the
survey. The first section describes the sampling of telephone numbers using the list-assisted sample
design for each study area and the balance of the nation. The second section describes the overall
plan for subsampling households. The third and fourth sections go into detail about the main
subsampling procedures, subsampling adult-only households and high-income households,
respectively. The fifth section describes changes to the sampling parameters made during the
data collection period based on monitoring the progress of the earlier stages of the sample.
The final section of the chapter presents tables on the sample sizes achieved using the methods
discussed.
3.1 Sampling Telephone Numbers
The RDD sample for all three rounds of NSAF used a list-assisted approach to select the sample
of telephone households. These households were screened to identify low-income households
with children and other households of interest as described later in this chapter. Casady and
Lepkowski (1993) describe list-assisted sampling, and Tucker, Lepkowski, and Piekarski (2002) provide a recent update on the application of this method. In list-assisted sampling, the set of all
possible residential telephone numbers is divided into 100-banks. Each 100-bank contains the
100 telephone numbers with the same first eight digits (i.e., the identical area code, telephone
prefix, and first two of the last four digits of the telephone number). The frame consists of all
100-banks with at least one residential number listed in a published telephone directory. Any
household telephone number that is not in these 100-banks is not covered by the sample. A
simple random or a systematic sample of telephone numbers is selected from this frame.
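The sketch below illustrates the list-assisted construction described above: the frame is the union of all numbers in 100-banks containing at least one listed residential number, and a random sample is drawn from that frame. It assumes listed numbers are available as 10-digit strings and is an illustration of the general method only, not the vendor software used for the NSAF; the function names are hypothetical.

import random

def hundred_bank(number10):
    """The 100-bank is identified by the first eight digits: area code, prefix,
    and the first two of the last four digits."""
    return number10[:8]

def build_frame(listed_residential_numbers):
    """Frame = every telephone number in a 100-bank that contains at least one
    directory-listed residential number."""
    banks = {hundred_bank(n) for n in listed_residential_numbers}
    return [bank + f"{i:02d}" for bank in sorted(banks) for i in range(100)]

def sample_frame(frame, sample_size, rng=random.Random(2002)):
    """Simple random sample of telephone numbers from the list-assisted frame."""
    return rng.sample(frame, sample_size)

# Hypothetical listed numbers; the first two fall in the same 100-bank.
listed = ["2025550123", "2025550177", "3135559901"]
frame = build_frame(listed)                 # 2 banks x 100 numbers = 200 numbers
print(len(frame), sample_frame(frame, 5))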
List-assisted RDD sampling is now the standard sampling procedure for telephone surveys. The
key advantages of this method relative to the Mitofsky-Waksberg method (Waksberg 1978) are
that the sample is unclustered, and the full sample of telephone numbers can be released to
interviewers without the sequential impediment in the Mitofsky-Waksberg method. A
disadvantage of list-assisted RDD is a small amount of undercoverage due to excluding
households in 100-banks with no listed households. Studies of the undercoverage due to this
exclusion have shown that only a small percentage of households are excluded, and households
excluded from the frame are not very different from those that are included (Brick et al. 1995;
Tucker et al. 2002). Together, these attributes indicate the undercoverage bias from excluding
100-banks with no listed households is small.
Once the sample was selected from the 100-banks, two steps were used to improve the working
residential rate and thereby reduce costs. The first procedure eliminated numbers listed only in
the yellow pages. The second procedure used a vendor to dial each telephone number automatically to eliminate many nonworking numbers. If a tritone signal was detected, the number was classified as a nonworking number and was never dialed by a Westat interviewer. Both steps were
used in all three rounds of NSAF data collection. The procedures were enhanced by the vendor
over time, and a larger percentage of numbers was classified as nonresidential or business in
Round 3 compared to earlier rounds. When the vendor’s new procedures were introduced,
Westat examined the accuracy of the results by comparing the outcomes from the vendor’s
process with those from a recently conducted survey (the 2001 National Household Education
Survey). Based on this study, Westat obtains the results of the purging from the vendor and
specifies which cases will not be dialed so that less than 1 percent of the purged numbers are
residential.
The sample was selected all at once, when the newest quarterly update of the sampling frame
was available in December 2001. The sample was then assigned to waves for data collection as
described in section 3.5. Consideration was given to sampling for later releases from the next
quarterly frame update that was expected by March 2002. However, the entire sample was
selected at once because most of the sample was planned to be in process before a sample from
the new frame could be prepared. Furthermore, the differences in the frames by quarter are
typically very small. Because of the magnitude of the number of telephone numbers in the
sample, the purging of the sample of telephone numbers was done in five batches, each
containing about 100,000 telephone numbers. The dates the batches were purged were January
7–21, January 14–28, January 22–February 5, January 28–February 11, and February 4–18. The
order of the batches corresponded to the waves of releasing the numbers.
Westat interviewers dialed any telephone number not eliminated by the tritone and yellow page
purges to determine its working residential status. When a residential telephone number was
reached, the interviewer asked about the age composition and income of the household. These
questions were used to subsample households, as discussed in the next section.
3.2 Subsampling Households
As in Rounds 1 and 2, households were subsampled for the NSAF interview using subsampling
rates that depended on whether there were children in the household and whether the household
income was below 200 percent of the poverty threshold. Specifically, households with children
under 18 and incomes below 200 percent of the poverty threshold were subsampled at 100
percent (in other words, they were retained with certainty). Households in which all members were 65 and older were excluded (none were retained). All other households were retained for the
interview with subsampling rates greater than zero and less than one. The rationale and
procedures for the subsampling are the same as in previous rounds. Essentially, the subsampling
rates were developed to obtain the desired sample sizes and effective sample sizes for the
targeted subgroups needed for analysis. The rationale is discussed in more detail in the 1997 and
1999 sample design reports.
Two distinct subsampling steps were implemented. The first subsampled households with no
children. The second subsampled households with and without children if the reported income
was above 200 percent of the poverty threshold or if the response to the income screener question
was missing. Thus, four strata with different subsampling fractions were created, as illustrated in
figure 3-1. The two subsampling steps are discussed in more detail in the next sections.
Figure 3-1. Household Subsampling Operations

Children present?
  Yes → Low income?
          Yes → Keep
          No  → Subsample
  No  → Anyone under age 65?
          No  → Drop
          Yes → Subsample; then, for retained households: Low income?
                  Yes → Keep
                  No  → Subsample
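The decision logic in figure 3-1 can also be written out as a short sketch. The function below is an illustration of the four-strata logic only, not the CATI implementation; the argument and key names are hypothetical, and the example uses the Alabama retention rates shown in table 3-4.

import random

def screening_disposition(has_children, anyone_under_65, reported_low_income,
                          rates, rng=random.Random()):
    """Apply the figure 3-1 logic: low-income households with children are kept
    with certainty, all-elderly households are dropped, and the remaining
    households are retained with probability equal to the applicable
    subsampling rate. reported_low_income is the screener report
    (True, False, or None when the income item is missing)."""
    def retain(rate):
        return rng.random() < rate

    if has_children:
        if reported_low_income:
            return "keep"                                   # retained with certainty
        key = "child_unknown" if reported_low_income is None else "child_above_poverty"
        return "keep" if retain(rates[key]) else "drop"
    if not anyone_under_65:
        return "drop"                                       # all members 65 or older
    if not retain(rates["adult_only_household"]):           # adult-only subsampling step
        return "drop"
    if reported_low_income:
        return "keep"
    key = "adult_unknown" if reported_low_income is None else "adult_above_poverty"
    return "keep" if retain(rates[key]) else "drop"

# Alabama rates from table 3-4 (the key names are hypothetical labels for the columns).
alabama = {"child_above_poverty": 0.507, "child_unknown": 0.750,
           "adult_only_household": 0.643, "adult_above_poverty": 0.255,
           "adult_unknown": 0.397}
print(screening_disposition(True, True, False, alabama))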
The subsampling rates were computed based on a variety of assumptions. Most assumptions
were computed using data from Round 2. The key assumptions used to compute the subsampling
rates are as follows:
Type of household—Table 3-1 gives the proportion of child, adult, and elderly households
estimated from Round 2.
Low-income rates—Table 3-1 also gives the estimated proportion of households that are
poor (less than 200 percent of the poverty threshold), nonpoor, and unknown, for both
child and adult households. These proportions are the expected rates for households that
respond to the screener income item. To be conservative, the Round 2 rates were used for Round 3 despite the recent economic downturn.
Misclassification by income—The rates of households being misclassified by income are
shown in table 3-2. The false negative rate is the proportion of households that report high
income when they are actually low income. The false positive rate is the proportion of
households that report low income when they are actually high-income. The proportions in
the table are the observed (weighted) rates from Round 2 and are discussed more fully later
in the chapter.
Residential rates—The assumed residency rates are given in the first column of table 3-3.
The rates are three points less than the residency rates estimated from Round 2. Lower
rates were used because the residency rate has been decreasing rather steeply during this
time period. The ring–no answer (NA) and answering machine (NM) cases are treated as
nonresidential for this purpose since they do not result in completed interviews.
Response rates—The assumed response rates are shown in the remaining columns of
table 3-3. The screener response rates are 2 percentage points less than the estimated
response rate from Round 2, again without any allocation for NA and NM cases. We also
assumed the adult and child extended response rates would be 2 percentage points less than
the observed Round 2 rates. Once again, the lower rates were assumed because response
rates to most RDD surveys have been decreasing in recent years and the Round 2 rates
were some of the highest response rates we have seen in RDD surveys in the past three
years.
The rates for subsampling adult-only households and households by income were derived using
these assumptions. These subsampling or retention rates are shown in table 3-4. The required
number of completed screeners and sampled telephone numbers was then derived
from these assumptions.
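The sketch below shows, in simplified form, how assumptions of this kind translate into the required number of telephone numbers and the expected yield of child extended interviews. It ignores income misclassification and unknown-income cases, the function names are hypothetical, and the example rates are merely of the size shown in tables 3-1, 3-3, and 3-4; it is not the planning calculation actually used.

def required_telephone_numbers(target_screeners, residency_rate, screener_response_rate):
    """Telephone numbers needed to yield a target number of completed screeners,
    given the assumed working residential rate and screener response rate."""
    return target_screeners / (residency_rate * screener_response_rate)

def expected_child_extended_interviews(completed_screeners, p_child_household,
                                       p_screen_poor, retention_above_poverty,
                                       child_extended_response_rate):
    """Rough expected child extended interviews: child households screening as
    poor are kept with certainty, the rest are retained at the subsampling
    rate, and retained households respond at the extended-interview rate."""
    child_households = completed_screeners * p_child_household
    retained = child_households * (p_screen_poor + (1 - p_screen_poor) * retention_above_poverty)
    return retained * child_extended_response_rate

print(round(required_telephone_numbers(10_000, 0.40, 0.74)))            # about 33,784 numbers
print(round(expected_child_extended_interviews(10_000, 0.36, 0.28, 0.60, 0.78)))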
Table 3-1. Assumed Proportion of Households, by Household Type and Poverty Status

                       Household Type             Child Households–Screening      Adult-Only Households–Screening
Study Area       Child    Adult    Elderly       Poor     Nonpoor    Unknown        Poor     Nonpoor    Unknown
Alabama          0.366    0.421     0.213       0.339      0.617      0.045        0.227      0.696      0.077
California       0.389    0.418     0.193       0.306      0.628      0.066        0.171      0.751      0.078
Colorado         0.376    0.462     0.163       0.233      0.725      0.041        0.136      0.810      0.055
Florida          0.324    0.410     0.266       0.311      0.631      0.057        0.167      0.762      0.071
Massachusetts    0.337    0.457     0.206       0.203      0.735      0.062        0.123      0.801      0.077
Michigan         0.367    0.427     0.205       0.239      0.706      0.054        0.151      0.788      0.061
Minnesota        0.361    0.441     0.198       0.202      0.761      0.037        0.130      0.825      0.045
Mississippi      0.373    0.397     0.230       0.390      0.557      0.053        0.235      0.701      0.064
New Jersey       0.366    0.434     0.200       0.183      0.763      0.054        0.100      0.839      0.061
New York         0.350    0.437     0.213       0.309      0.632      0.059        0.159      0.774      0.067
Texas            0.397    0.415     0.188       0.348      0.601      0.051        0.179      0.752      0.069
Washington       0.368    0.459     0.174       0.241      0.712      0.047        0.139      0.807      0.055
Wisconsin        0.351    0.434     0.214       0.253      0.703      0.044        0.148      0.796      0.057
Bal. of nation   0.360    0.432     0.208       0.271      0.684      0.044        0.172      0.769      0.060
Total            0.363    0.430     0.207       0.279      0.671      0.050        0.165      0.771      0.064
Table 3-2. Assumed Misclassification Rates, by Income Categories

                              Child Households                      Adult-Only Households
                    False      False      Unknown to        False      False      Unknown to
Study Area         Positive   Negative    Low-Income       Positive   Negative    Low-Income
Alabama              0.166      0.139       0.315            0.321      0.142       0.385
California           0.195      0.109       0.597            0.358      0.129       0.385
Colorado             0.249      0.118       0.462            0.362      0.112       0.427
Florida              0.216      0.149       0.541            0.380      0.128       0.455
Massachusetts        0.218      0.086       0.401            0.354      0.073       0.325
Michigan             0.245      0.105       0.503            0.360      0.106       0.275
Minnesota            0.269      0.104       0.358            0.373      0.098       0.401
Mississippi          0.184      0.142       0.536            0.322      0.120       0.510
New Jersey           0.267      0.083       0.442            0.473      0.085       0.329
New York             0.176      0.138       0.562            0.332      0.118       0.205
Texas                0.179      0.148       0.547            0.298      0.158       0.521
Washington           0.198      0.086       0.361            0.282      0.086       0.243
Wisconsin            0.200      0.119       0.332            0.320      0.110       0.286
Bal. of nation       0.195      0.121       0.495            0.327      0.104       0.279
Total                0.182      0.123       0.537            0.337      0.111       0.332
Table 3-3. Assumed Residential and Response Rates

                   Residency      Screener Response     Adult Extended        Child Extended
Study Area         Rate (%)           Rate (%)         Response Rate (%)     Response Rate (%)
Alabama              45.6               80.3                 77.4                  78.7
California           39.5               69.6                 70.3                  72.9
Colorado             38.5               75.1                 76.0                  80.9
Florida              39.9               72.3                 71.8                  74.6
Massachusetts        43.4               69.3                 72.1                  76.7
Michigan             39.1               74.1                 75.0                  79.9
Minnesota            39.6               80.3                 82.9                  84.5
Mississippi          45.9               80.3                 77.9                  81.2
New Jersey           40.0               65.7                 67.5                  72.8
New York             41.9               66.8                 69.0                  72.9
Texas                37.4               74.6                 74.0                  77.9
Washington           39.0               74.7                 77.6                  82.2
Wisconsin            40.9               79.5                 82.8                  83.0
Bal. of nation       40.1               76.7                 77.1                  80.6
Total                40.1               74.3                 75.0                  78.5
Table 3-4. Subsampling or Household Retention Rates

                      Child Households                   Adult-Only Households
                    Above        Unknown                        Above        Unknown
Study Area         Poverty       Income          Adult         Poverty       Income
Alabama             0.507         0.750          0.643          0.255         0.397
California          0.652         0.750          0.803          0.247         0.471
Colorado            0.629         0.750          0.611          0.224         0.424
Florida             0.636         0.750          0.607          0.220         0.432
Massachusetts       0.646         0.750          0.708          0.236         0.429
Michigan            0.654         0.750          0.859          0.243         0.406
Minnesota           0.619         0.750          0.877          0.295         0.435
Mississippi         0.498         0.750          0.579          0.288         0.400
New Jersey          0.606         0.750          0.534          0.264         0.429
New York            0.651         0.750          0.600          0.257         0.388
Texas               0.576         0.750          0.550          0.312         0.402
Washington          0.593         0.750          0.584          0.280         0.402
Wisconsin           0.750         0.750          0.777          0.327         0.408
Bal. of nation      0.570         0.750          0.838          0.206         0.419

3.3 Subsampling Adult-Only Households
Since one goal of the survey was to estimate characteristics of all adults under age 65 and of the
subset of these adults who lived in households without children, the same procedure for
subsampling adult-only households used in the previous rounds was also used in Round 3.
Together with the subsampling of persons within households with children, these subsampling
procedures provide a sufficient sample size for making reliable estimates for these adults. As in
previous rounds, the MKAs and their spouse/partners are called “Option A” adults. Other adults
are called “Option B” adults. “Option B” adults who live in households with children are called
“Option B stragglers.” Statistics about nonelderly adults are formed using data from the “Option
A” interviews about the MKAs and their spouses and using data from the “Option B” interviews
about all other adults under age 65.1 The group of all other adults under age 65 consists of
nonelderly adults in adult-only households and some nonelderly adults in households with
children. In households with children, an adult was eligible for sampling if the adult did not have
any children under age 18 living in the household and if the adult had not already been identified
as the MKA for a focal child or as the spouse/partner of an MKA.
1 “Option A” interviews were administered to the MKA about the focal child(ren). They also obtained income, earnings, health insurance, and other information about the MKA and his/her spouse/partner. The “Option B” interview obtained the same information about the sample adult and his/her spouse/partner as the “Option A” interview—only questions about children were missing from the “Option B” version of the questionnaire.
The target sample sizes for adults were set for the combined group of “Option B” adults and
“Option B” stragglers. The targets for each study area were approximately equal to the observed
sample sizes in Round 2. Since Round 3 had a much smaller sample of nontelephone households than previous rounds, the target sample sizes were close to the
number of telephone interviews from Round 2.
The retention rates for adult-only households in the RDD sample are shown in table 3-4. The
retention rates ranged from 53 percent to 88 percent across the study areas. These rates are
generally higher than they were in earlier rounds in order to reduce the variability in the
estimates due to subsampling adults at differential rates. The research supporting this approach is
described in 1999 NSAF Sample Design, Report No. 2.
The subsampling was implemented by loading the retention rate table into the computer-assisted
telephone interviewing (CATI) system. Each telephone number was randomly assigned as either
a “child-only” household or a “child and adult” household before the case was loaded. Households subsampled as “child and adult” were classified as “adult-only” households if there were no children under age 18 present but at least one person was under the age of 65.
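A minimal sketch of this pre-loading assignment is shown below. The dictionary of retention rates and the function name are hypothetical, and the sketch does not reproduce the CATI system itself; it only illustrates the random flag attached to each telephone number before the case is loaded.

import random

def preassign_case(study_area, adult_retention_rates, rng=random.Random()):
    """Before loading, randomly flag a telephone number as a 'child and adult'
    case (the adult-only path is pursued if no children are found) or a
    'child-only' case, using the study area's adult-only retention rate."""
    keep_adult_path = rng.random() < adult_retention_rates[study_area]
    return "child and adult" if keep_adult_path else "child-only"

# Illustrative rates from the Adult column of table 3-4 (range 0.534 to 0.877).
rates = {"Alabama": 0.643, "Minnesota": 0.877, "New Jersey": 0.534}
print(preassign_case("Alabama", rates))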
3.4 Subsampling High-Income Households
The same procedure used in the previous rounds was used in Round 3 to subsample households
by income. All households with children and all subsampled adult-only households were asked a simple question about household income. Those households reporting income above
200 percent of the poverty threshold were subsampled. The income levels for determining 200
percent of poverty are shown in table 3-5. The subsampling rates are given in table 3-4 for both
types of households.
Table 3-5. Income Levels for Determining Less than 200 Percent of the Poverty Level

Household Size by Person    Without Children    With Children
1                               $18,300            $18,300a
2                               $23,500            $24,200
3                               $27,500            $28,300
4                               $36,200            $35,800
5                               $43,700            $42,400
6                               $50,200            $47,900
7                               $57,800            $54,500
8                               $64,600            $60,500
9+                              $77,700            $71,400

a. This type of household can occur only if an emancipated minor is living alone.
After asking whether there were children in the household, the interviewer asked whether
household income was above or below the 200-percent-of-poverty level given in table 3-5. The household
was then either retained or subsampled out, depending on the response. As in the previous
rounds, we expected some household responses to the simple income item to be incorrect
compared with what they would say in response to the income items in the extended interview.
This misclassification error introduces an additional design effect on low-income households
with children (the household subsampling is only done for the RDD sample of households). The
benefit of asking the question and taking the loss in the effective sample size is that it sharply
reduces costs for completing the extended interviews.
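A sketch of this income screening step follows. The thresholds are those listed in table 3-5; the function name and the treatment of the exact boundary (income at the threshold counted as low income) are assumptions made for illustration, and the sketch does not reproduce the screener wording or the CATI logic.

# 200-percent-of-poverty screening levels from table 3-5, by household size.
LEVELS_WITHOUT_CHILDREN = {1: 18_300, 2: 23_500, 3: 27_500, 4: 36_200, 5: 43_700,
                           6: 50_200, 7: 57_800, 8: 64_600, 9: 77_700}
LEVELS_WITH_CHILDREN = {1: 18_300, 2: 24_200, 3: 28_300, 4: 35_800, 5: 42_400,
                        6: 47_900, 7: 54_500, 8: 60_500, 9: 71_400}

def screens_low_income(household_size, reported_income, has_children):
    """True if the reported income is at or below the 200-percent-of-poverty
    level for the household's size (households of nine or more use the 9+ row)."""
    levels = LEVELS_WITH_CHILDREN if has_children else LEVELS_WITHOUT_CHILDREN
    return reported_income <= levels[min(household_size, 9)]

# A four-person household with children reporting $30,000 screens as low income.
print(screens_low_income(4, 30_000, True))    # True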
The switching rates for households that do and do not report low income on the screener are
functions of the false negative and false positive error rates. The false negative and false positive
rates that were observed in Round 2 and assumed to apply to Round 3 are given in table 3-2.
Report No. 2 in the 1999 series gives a full discussion of all these issues. The subsampling rates
that were designed to be close to the optimum values for Round 2 were fully implemented for
Round 3.
3.5 Household Sampling Revisions during Data Collection
As noted in section 3.2, the sample design depended on a number of assumptions. When data
collection began, the outcomes were monitored and used to project eventual sample yields. If the
revised projections differed substantially from the target, the sampling rates and number of
telephone numbers could be adjusted to be consistent with the new projections. The same
monitoring process was used in both Round 1 and Round 2. In both rounds the monitoring led to
revisions in the sampling procedures during data collection.
We assumed that changes in sampling parameters might be needed and designed the sampling to
facilitate making changes during the data collection period. The sample for each study area was
divided into a main sample and a reserve sample. Within the main sample, cases were further partitioned into release groups. The overall sample of telephone numbers was selected, and then the telephone numbers were assigned randomly to the main and reserve samples for each study area independently. Within the main sample, all telephone numbers in a study area were randomly assigned to one of 103 release groups. Each release group had 5,000 telephone numbers and was a microcosm of the main sample. The only exception was the last release group (103), which was smaller because it contained the remainder of the sample. Release groups 1
through 60 were used for a variety of methodological experiments. To support the analysis of
these experiments, it was decided that no revisions in the sampling or other procedures would be
implemented in the first 60 release groups.
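One simple way to form such release groups is sketched below in Python; the dealing scheme is an assumption for illustration, not the production procedure.

import random

def assign_release_groups(area_numbers, n_groups=103, seed=2002):
    """Randomly partition one study area's main-sample telephone numbers into
    release groups, each a random microcosm of that area's sample.

    The area's numbers are shuffled and dealt into the groups in turn;
    combining same-numbered groups across study areas then yields the
    national release groups described in the text.
    """
    rng = random.Random(seed)
    shuffled = list(area_numbers)
    rng.shuffle(shuffled)
    groups = {g: [] for g in range(1, n_groups + 1)}
    for i, number in enumerate(shuffled):
        groups[i % n_groups + 1].append(number)
    return groups

groups = assign_release_groups([f"202-555-{i:04d}" for i in range(1000)])
print(len(groups[1]), len(groups[103]))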
Relatively early in the data collection period we observed several deviations from the assumed
rates. The proportion of households with children that was low-income differed from the
assumed rate in several study areas. Similarly, the proportion of adult-only households by
income differed from the assumed rate. These differences resulted in deviations from the
expected number of interviews for specific types of households. The deviations from the
assumed rates were concentrated in specific study areas rather than being uniformly distributed
over the entire sample. To deal with the differences, the subsampling rates were revised in later
release groups to achieve sample yields that were closer to the targets.
Monitoring the data collection process also found that the residency rates and response rates for the screener were lower than assumed. The consequence of these deviations from the
assumptions was the completion of fewer screener interviews than planned and consequently the
sampling of a smaller number of people for the extended interviews. The magnitude of the
deviations of the residency and response rates from the assumed rates varied by study area. The
losses in sample yield due to these lower rates were modest in most study areas, but in some
study areas the expected consequences on the sample yield were substantial and changes were
necessary. The obvious way to deal with the lower-than-expected residency and response rates
was to release reserve sample in selected study areas. However, releasing reserve sample would
have some disadvantages, such as increasing data collection costs, extending the data collection
period, and delaying the release of estimates from the survey.
To mitigate the cost and time implications associated with releasing more sample telephone
numbers, a subsampling approach was instituted. The plan was to subsample telephone numbers
that resulted in an initial refusal at the screener. A random subsample of telephone numbers with screener refusals was selected, and only those subsampled were included in refusal conversion efforts. The screener refusal cases that were not subsampled were excluded from subsequent data
collection efforts. The plan included a provision for weighting the subsampled refusal cases to
account for all cases that were eligible for subsampling for the preparation of estimates and
response rates. The subsampling was limited to screener refusal cases and did not apply to
extended interview work.
Data collection costs were reduced because expenses related to refusal cases could be avoided
for cases that were not subsampled for conversion. When a household refuses the screener, the
interviewer records demographic information about the refusing respondent and the respondent’s
reasons for refusing to participate. Interviewers also rate the strength of the refusal as mild, firm,
or hostile. In Round 3, up to two refusal conversion attempts were made for refusing households
at the screener level. In addition, Telephone Research Center (TRC) supervisors reviewed all
cases coded as hostile to determine whether that designation was merited. Cases rated as hostile that supervisors judged to be inappropriately coded were recoded and were then eligible to be released for an additional conversion attempt. Truly hostile (profane or abusive) refusal cases were never released for conversion. Before attempting to convert a screener refusal, an express mailing was sent to the household if an address was available, adding further to the cost of refusal conversion.
The reduction in field time was realized by implementing the subsampling so that screeners
released later in the field period were not subsampled for refusal conversion, while those
released earlier were converted. The details on implementing the subsampling to achieve this
goal are given below. Thus, the refusal subsampling allowed the sample to be worked more efficiently while still including all of the appropriate scheduling procedures (including hold periods for refusal cases). The usual practice of retaining a smaller interviewer workforce to cover the calls to late refusals was not needed to the same extent, and the data collection period could end more quickly.
Since the telephone numbers had been randomly assigned to the release groups prior to data
collection, these release groups were well suited for the subsampling approach. Release groups 1
to 82 and release group 103 were designated for subsampling (i.e., refusal conversions were
scheduled for all initial screener refusals in these groups). Telephone numbers in release groups
83 to 102 were designated as not subsampled. The same designation was used for all study areas
and for the balance of the nation.
The number of release groups included in the subsample was determined by evaluating the effect
on the sample yield and the increase in variance in the estimates due to weighting the refusal
conversion cases for the subsampling. The sample yield obtained by subsampling was
determined using the initial screener refusal rate observed in the early waves of the Round 3 data
collection. Since only about 20 percent of the full sample of telephone numbers was excluded by
the subsampling, the weighting factor on the retained screener initial refusal cases was not very
large. The increase in variance due to the weighting was expected to have a negligible effect
(less than 2 percent) on the variance of the estimates.
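As a rough illustration (our own approximation, not a formula from the report), the relative increase in variance from weighting the retained refusal cases can be gauged with the standard unequal-weighting effect,

\[
\mathrm{deff}_w \;\approx\; 1 + \mathrm{cv}^2(w) \;=\; \frac{n\sum_{i=1}^{n} w_i^{2}}{\left(\sum_{i=1}^{n} w_i\right)^{2}},
\]

where the w_i are the screener weights. Because only the converted-refusal completes carry the extra factor of roughly 1/0.8, the coefficient of variation of the weights remains small, which is consistent with the expected increase of less than 2 percent.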
Since the revised data collection plan had two components (changes in the subsampling rates
dealing with screening by income and the introduction of refusal subsampling), the
implementation of the two components was coordinated. The goal was to use the same
subsampling rate by income for the cases irrespective of the subsampling for refusal conversion.
To accomplish this goal, the average rate of subsampling by income was computed for release
groups 61 through 82 and 103 (those subsampled for refusal conversion). This subsampling rate
was applied to telephone numbers in release groups 83 through 102 and any reserve sample.
The revised subsampling rates for households with children that were classified as high-income
in the screener are given in table 3-6. The first column shows the subsampling rate applied for
release groups 61 through 82 plus 103, and the second column shows the rate for release groups
83 through 102 plus any reserve sample. The bold entries are the only ones that changed from the
corresponding planned rates given in table 3-4. Table 3-7 gives the revised subsampling rates for adult-only households by release group. The bolded entries indicate revisions from the rates applied in release groups 1 through 60.
Table 3-8 gives the number of reserve telephone numbers released in the study areas where the
reserve was needed. In total, the sample of telephone numbers increased by 42,411 numbers.
This represented an increase of about 8 percent over the originally planned sample size of
telephone numbers.
Table 3-6.
Revised Subsampling Rates for Households with Children Screening as High-Income, by Release Group

Study Area        Release Groups 61–82 and 103    Release Groups 83–102
Alabama           0.507                           0.507
California        0.000                           0.473
Colorado          0.629                           0.629
Florida           0.100                           0.490
Massachusetts     0.000                           0.468
Michigan          0.654                           0.654
Minnesota         0.619                           0.619
Mississippi       0.300                           0.447
New Jersey        0.000                           0.440
New York          0.500                           0.610
Texas             0.576                           0.576
Washington        0.593                           0.593
Wisconsin         0.750                           0.750
Bal. of nation    0.570                           0.570
Table 3-7.
Revised Subsampling Rates for Adult-Only Households, by Release Group

                  Subsampling Rate for Adult-Only        Subsampling Rate for Adult-Only
                  Households prior to Income Item        High-Income Households
                  Release Groups      Release Groups     Release Groups      Release Groups
Study Area        61–82 and 103       83–102             61–82 and 103       83–102
Alabama           0.643               0.643              0.255               0.255
California        0.803               0.803              0.247               0.247
Colorado          0.611               0.611              0.224               0.224
Florida           0.400               0.550              0.220               0.220
Massachusetts     0.500               0.650              0.236               0.236
Michigan          0.400               0.733              0.459               0.302
Minnesota         0.877               0.877              0.295               0.295
Mississippi       0.579               0.579              0.288               0.288
New Jersey        0.100               0.415              0.500               0.329
New York          0.600               0.600              0.257               0.257
Texas             0.550               0.550              0.020               0.231
Washington        0.420               0.539              0.280               0.280
Wisconsin         0.415               0.676              0.315               0.323
Bal. of nation    0.350               0.704              0.400               0.260
Table 3-8.
Reserve Sample Released, by Study Area

Study Area        Number Released
Total             42,411
Colorado          6,700
Florida           2,952
Massachusetts     2,322
Michigan          4,589
Minnesota         7,498
New Jersey        6,324
Wisconsin         6,608
Bal. of nation    5,418

3.6 Achieved Response and Eligibility Rates
The RDD household screening component for Round 3 is summarized in table 3-9. The sample
of 556,651 telephone numbers includes the 42,411 telephone numbers released from the reserve
sample. The table shows the numbers for each study area and the total across all the study areas
and the balance of the nation.
Overall, about 54 percent of the sample phone numbers were determined to be nonworking or nonresidential, and another 9 percent were never resolved. The never-resolved numbers are those that had only ring–no answer or answering machine outcomes, despite repeated attempts to reach someone at the number. The survival method (see Brick et al. 2002 or Report No. 2 for 1999) was used to estimate the residential status of the never-resolved numbers. About 32 percent of these numbers were estimated to be residential and the rest nonresidential. The percentage
varies by study area. Using the survival method allocation approach for the never-resolved
numbers, the estimated residency rate for the nation is 39 percent (this is a weighted number that
accounts for the population of the study area and is discussed in more detail in Report No. 4).
The last column of the table gives the number of households that responded to the questionnaire
item on the presence of children in the household.
Since screeners that were initially refused were subsampled, the screener response rate cannot be
computed by dividing the number of completed screeners by the number of residential numbers.
The response rate must account for the subsampling. The details of the computations and other
discussion of response rates are given in Report No. 7, but essentially the completed screeners
that were subsampled were also weighted by the inverse of the subsampling rate in the
computations. Table 3-10 shows the number of sampled telephone numbers that were initial
refusal screener cases and the number of these that were subsampled. As can be seen, about 20 to 25 percent of the initial refusal cases in each study area were not subsampled. The last column of the
table shows the estimated screener response rate. This response rate is based on weighting cases
by the inverse of their probability of selection, including the refusal subsampling probability.
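A minimal sketch (not the exact computation in Report No. 7) of a screener response rate that weights completes obtained from subsampled refusals by the inverse of the refusal subsampling rate is shown below; all counts in the usage line are hypothetical.

def weighted_screener_response_rate(completes_no_refusal, completes_after_refusal,
                                    refusal_subsampling_rate, estimated_residential):
    """Weight completes from converted (subsampled) refusals up by the inverse
    of the subsampling rate, then divide by the estimated residential numbers."""
    weighted_completes = (completes_no_refusal
                          + completes_after_refusal / refusal_subsampling_rate)
    return weighted_completes / estimated_residential

# Hypothetical inputs: 6,000 completes with no initial refusal, 400 completes
# from converted refusals, 76 percent of refusals subsampled, 9,336 residential.
print(round(weighted_screener_response_rate(6000, 400, 0.76, 9336), 3))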
Table 3-9.
Screening for Residential Status and Presence of Children

                              Verified Nonworking           No Contact   Only Answering                 Known and
                  Released    or Nonresidentiala            (after 14    Machine        Known           Imputed        Residency    Complete Age
Study Area        Sample      Number        Rate (%)        Calls)       Contacts       Residential     Residentialb   Ratec (%)    Screening
Alabama           22,608      11,766        52.0            1,165        341            9,336           9,934          43.9         6,400
California        37,554      19,610        52.2            3,249        682            14,009          15,314         40.8         8,901
Colorado          41,180      23,840        57.9            3,077        649            13,614          14,534         35.3         9,008
Florida           36,637      19,665        53.7            2,629        802            13,537          14,697         40.1         8,364
Massachusetts     42,142      20,666        49.0            3,203        758            17,512          18,706         44.4         10,465
Michigan          40,858      23,159        56.7            2,816        623            14,258          15,298         37.4         9,534
Minnesota         41,035      24,792        60.4            2,015        444            13,783          14,702         35.8         9,747
Mississippi       19,325      9,591         49.6            1,022        206            8,506           8,903          46.1         5,947
New Jersey        60,479      29,038        48.0            7,145        1,241          23,051          25,979         43.0         13,136
New York          35,108      19,044        54.2            2,689        590            12,782          13,728         39.1         7,518
Texas             29,653      17,330        58.4            1,926        369            10,028          10,715         36.1         6,476
Washington        34,137      19,699        57.7            2,129        473            11,835          12,732         37.3         8,329
Wisconsin         30,927      16,670        53.9            1,977        357            11,923          12,712         41.1         8,563
Bal. of nation    85,008      47,153        55.5            5,395        1,289          31,168          33,378         39.3         21,115
Total             556,651     302,023       54.3            40,437       8,824          205,342         221,333        39.4         133,503

a. These include those telephone numbers that were classified as nonresidential on the basis of the automated tritone detection and White Page matching, as well as those verified by interviewers.
b. Survival analysis was used to allocate phones that were never answered and those where the only contact was with an answering machine by residency status.
c. The percent of released sample that was either verified or imputed to be working phone numbers for private residences.
Table 3-10.
Subsampling Screener Refusals and Response Rates

                                Initial Refusals                                        Screener
                  Known                         Not              Subsampled             Response Rate
Study Area        Residential   Subsampled      Subsampled       (%)                    (%)
Alabama           9,336         3,533           798              81.6                   68.2
California        14,009        5,382           1,303            80.5                   61.9
Colorado          13,614        4,376           2,032            68.3                   68.8
Florida           13,537        5,258           1,631            76.3                   61.6
Massachusetts     17,512        7,075           2,173            76.5                   60.8
Michigan          14,258        4,707           1,752            72.9                   67.8
Minnesota         13,783        3,780           1,835            67.3                   72.8
Mississippi       8,506         3,144           694              81.9                   70.6
New Jersey        23,051        8,799           3,251            73.0                   55.5
New York          12,782        5,342           1,254            81.0                   58.3
Texas             10,028        4,265           909              82.4                   64.3
Washington        11,835        4,291           949              81.9                   69.1
Wisconsin         11,923        3,043           1,592            65.7                   73.9
Bal. of nation    31,168        10,692          3,279            76.5                   67.9
Total             205,342       73,687          23,452           75.9                   65.4

Note: Response rates are weighted and account for refusal subsampling and use the survival allocation method for handling unknown residency numbers.
Table 3-11 shows the results of screening households by age. The screening was used to classify
households into three categories:
Households with children under 18;
Households with no children but at least one person under the age of 65; and
Households where everyone was 65 or older.
The first category included the very small number of households that had no adults but did have
emancipated minors (persons under 18). The largest overall sampling rates were applied to households with children; consequently, the sample size requirements for this group determined the size of the sample needed for screening. Overall, 37 percent of the screened sample of
households had children. All households with children were retained for income screening.
Adult-only households, those with someone under the age of 65 but with no children, constituted
44 percent of the sampled households. As described in section 3.3, adult-only households were
subsampled prior to income screening. The average retention rate for adult-only households at
this stage was 63 percent. Households with only elderly members were 20 percent of screened
households. Since these households were out of the scope of the survey, they were immediately eliminated as ineligible.
Table 3-12 shows the results of the next stage of screening, income screening, for telephone
households with children. Of households with children, 95 percent answered the income screening question, 28 percent reported low income (at or below 200 percent of the poverty threshold), and 68 percent reported income above 200 percent of the poverty threshold. All the low-income households were retained, and 56 percent of
the others with known income were retained. For the households that did not respond to the
income items, 69 percent were retained.
Table 3-13 shows comparable information for adult-only telephone households retained for
income screening. The income question was answered by 94 percent of these households; 16 percent reported low income, and all of these low-income households were retained. About 25 percent of those households that responded but did not report low
income were retained. Finally, 35 percent of those with unknown income were retained.
Combining the subsampling of households with children and households with adults only who
were retained for income screening, 42 percent of all households that screened with income
above 200 percent of the poverty threshold were retained for extended interviews.
Table 3-11.
Outcomes of Household Screening of Telephone Households

                                Households with Children       Adult-Only Households (under 65)                                        Households with Only Adults 65 and Over
                  Total Age     Number         Percent of      Number         Percent of      Retained for        Retention           Number         Percent of
Study Area        Screeners                    Screeners (%)                  Screeners (%)   Income Screening    Rate (%)                           Screeners (%)
Alabama           6,400         2,244          35.1            2,842          44.4            1,773               62.4                1,314          20.5
California        8,901         3,702          41.6            3,775          42.4            2,968               78.6                1,424          16.0
Colorado          9,008         3,367          37.4            4,209          46.7            2,508               59.6                1,432          15.9
Florida           8,364         2,738          32.7            3,564          42.6            1,871               52.5                2,062          24.7
Massachusetts     10,465        3,670          35.1            4,684          44.8            3,021               64.5                2,111          20.2
Michigan          9,534         3,452          36.2            4,110          43.1            3,010               73.2                1,972          20.7
Minnesota         9,747         3,523          36.1            4,298          44.1            3,737               86.9                1,926          19.8
Mississippi       5,947         2,201          37.0            2,472          41.6            1,380               55.8                1,274          21.4
New Jersey        13,136        4,922          37.5            5,726          43.6            2,265               39.6                2,488          18.9
New York          7,518         2,705          36.0            3,335          44.4            1,902               57.0                1,478          19.7
Texas             6,476         2,654          41.0            2,723          42.0            1,453               53.4                1,099          17.0
Washington        8,329         3,079          37.0            3,782          45.4            2,036               53.8                1,468          17.6
Wisconsin         8,563         2,942          34.4            3,865          45.1            2,608               67.5                1,756          20.5
Bal. of nation    21,115        7,482          35.4            9,445          44.7            6,498               68.8                4,188          19.8
Total             133,503       48,681         36.5            58,830         44.1            37,030              62.9                25,992         19.5
Table 3-12.
Outcomes of Income Screening of Telephone Households with Children

                                                                          Not Low-Income Households                 Households with Unknown Income           Total Sample
                  Households       Complete Income   Low-Income                                       Selection                                Selection     Households
Study Area        with Children    Screening         Householdsa     Identified    Selected   Rate (%)    Identified    Selected   Rate (%)    with Children
Alabama           2,244            2,160             836             1,324         703        53.1        84            56         66.7        1,595
California        3,702            3,496             1,247           2,249         1,042      46.3        206           145        70.4        2,434
Colorado          3,367            3,226             784             2,442         1,566      64.1        141           103        73.0        2,453
Florida           2,738            2,596             913             1,683         817        48.5        142           96         67.6        1,826
Massachusetts     3,670            3,471             731             2,740         1,250      45.6        199           138        69.3        2,119
Michigan          3,452            3,297             876             2,421         1,587      65.6        155           110        71.0        2,573
Minnesota         3,523            3,383             735             2,648         1,654      62.5        140           99         70.7        2,488
Mississippi       2,201            2,094             909             1,185         545        46.0        107           69         64.5        1,523
New Jersey        4,922            4,645             983             3,662         1,622      44.3        277           195        70.4        2,800
New York          2,705            2,534             789             1,745         1,096      62.8        171           113        66.1        1,998
Texas             2,654            2,510             922             1,588         918        57.8        144           97         67.4        1,937
Washington        3,079            2,964             812             2,152         1,272      59.1        115           70         60.9        2,154
Wisconsin         2,942            2,824             716             2,108         1,569      74.4        118           84         71.2        2,369
Bal. of nation    7,482            7,132             2,145           4,987         2,857      57.3        350           237        67.7        5,239
Total             48,681           46,332            13,398          32,934        18,498     56.2        2,349         1,612      68.6        33,508

a. According to the simple screener question.
Table 3-13.
Outcomes of Income Screening of Adult-Only Telephone Households

                  Retained         Complete      Low-Income          Not Low-Income Households                Households with Unknown Income          Total Sample
                  for Income       Income        Householdsa                                      Selection                               Selection   Adult-Only
Study Area        Screening        Screening     Identified and Selected   Identified   Selected Rate (%)     Identified   Selected      Rate (%)     Households
Alabama           1,773            1,642         415                       1,227        331      27.0         131          49            37.4         795
California        2,968            2,784         478                       2,306        547      23.7         184          53            28.8         1,078
Colorado          2,508            2,362         352                       2,010        444      22.1         146          50            34.2         846
Florida           1,871            1,736         351                       1,385        282      20.4         135          48            35.6         681
Massachusetts     3,021            2,820         384                       2,436        565      23.2         201          75            37.3         1,024
Michigan          3,010            2,831         449                       2,382        650      27.3         179          71            39.7         1,170
Minnesota         3,737            3,558         514                       3,044        910      29.9         179          62            34.6         1,486
Mississippi       1,380            1,285         382                       903          251      27.8         95           33            34.7         666
New Jersey        2,265            2,108         253                       1,855        547      29.5         157          54            34.4         854
New York          1,902            1,753         313                       1,440        346      24.0         149          51            34.2         710
Texas             1,453            1,338         271                       1,067        223      20.9         115          40            34.8         534
Washington        2,036            1,931         302                       1,629        454      27.9         105          31            29.5         787
Wisconsin         2,608            2,465         364                       2,101        638      30.4         143          47            32.9         1,049
Bal. of nation    6,498            6,092         1,156                     4,936        1,095    22.2         406          143           35.2         2,394
Total             37,030           34,705        5,984                     28,721       7,283    25.4         2,325        807           34.7         14,074

a. According to the simple screener question.
4. AREA SAMPLE
This chapter describes the in-person or area sample that is designed to yield a national sample of
households without current telephone service. Unlike in 1997 and 1999, only a national area
sample was selected for 2002. Since the design of the area sample for 2002 was a modification of
the sample design used in 1999 (which was a modification of the 1997 design), the sections that
follow briefly explain key features of the earlier design as an introduction to the 2002 design.
More details on the sample design for the area sample for 1997 and 1999 are in Report No. 2 for
each of those years.
The first section of this chapter covers selecting the primary sampling units (PSUs) for 2002 by subsampling PSUs from the 1999 sample. The second section describes the sampling of segments. The third section reviews the subsampling of segments using “chunks,” which are compact subsets of segments. The last section summarizes the outcomes of these sampling procedures as they were implemented in the field.
The sampling procedures are those typically used in area probability samples, with two
exceptions. The first special procedure was the elimination from the sampling frame of blocks in
census block groups (BGs) with very high telephone service rates. This was done to lower the
cost of data collection even though it was realized that this would leave a certain group of
households unrepresented. The second special procedure concerned sampling of dwelling units
(DUs) within segments. Traditionally, a constant expected number of DUs is sampled from all
sample segments after a preliminary listing operation. In this survey, listing and screening were
carried out simultaneously so that the number of DUs varied by segment. Segments with a large
number of expected DUs were thus “chunked” so that only a portion of the selected segment was
listed and prescreened. Briefly, “chunking” involves obtaining a rough count of DUs within a
segment dividing the segment into “chunks” of roughly the same number of DUs. One “chunk”
is then selected for listing and prescreening. These two special procedures are described in more
detail in the sections that follow.
4.1 First-Stage Sampling
When the sample of PSUs was selected originally, the process was divided into four distinct phases. The first was to define the PSUs in terms of geographic units. Starting with a standard set
of PSU definitions used by Westat as the sampling frame for many surveys, one modification
was made to optimize the definitions for NSAF. The second phase was to decide how many
PSUs to select in each targeted state and in the balance of the nation. The third phase was to
stratify the PSUs to reduce between-PSU variance as much as possible for statistics of interest.
The fourth phase was to actually draw the sample PSUs from the strata. These phases are each
described here in terms of the original sampling and the subsampling for 2002.
PSU Definition. Since the 2002 sample of PSUs is a subsample of those sampled in 1999, the
definitions of the PSUs were identical in all three rounds. This section summarizes those
definitions. The standard Westat PSUs were formed in 1991. These PSUs were defined to follow
several rules. Each metropolitan statistical area (MSA) defined by the Census Bureau in 1990
was generally defined as a separate PSU. This procedure has the effect of minimizing between-PSU variance while adding only modestly to within-PSU travel costs. The between-PSU
variance was reduced compared to a plan that established separate PSUs for the central cities and
the suburbs, because heterogeneity is maximized within PSUs rather than across them. The
within-PSU travel cost was not much higher despite leaving the central cities and suburbs
together because of the generally high-quality transportation networks in metropolitan areas.
Number of Sample PSUs. The number of sample PSUs was reduced from the previous rounds
because the 2002 area sample is a national sample only rather than also providing study area
estimates. In consultation with the Urban Institute, the total number of PSUs was reduced so that
only approximately half the total number of PSUs from 1999 were retained. To increase the
efficiency of the sample and reduce the variability in the sampling rates for households without
telephones, all the PSUs in the balance of the nation were retained (37), and the number of
sampled PSUs in the study areas was reduced.
The allocation of the sample was determined by first setting the total number of PSUs for the sample, using the proportion of 1999 eligible nontelephone households in the balance of the nation as a guide. The estimated number of 1999 eligible nontelephone
households by study area is shown in column one of table 4-1. Because the balance of the nation
had 56 percent of the nontelephone population in 1999, a total of 66 PSUs (37/0.56) was chosen
so that the number of PSUs could be allocated roughly proportional to size by study area. Thus,
29 PSUs were to be subsampled from the 96 PSUs for 1999 in the study areas.
The next step was allocating these additional 29 PSUs to the study areas proportional to the
percent of 1999 eligible nontelephone households. This allocation is shown in column three.
Because some sites such as Milwaukee and Massachusetts have such small nontelephone
populations, the proportional allocation did not assign a full PSU to these study areas. To have the study areas serve as strata, at least one PSU had to be sampled from each study area. The number of PSUs shown in column four uses the proportional allocation but requires at least one
PSU in each study area. This revision increased the number of sampled PSUs from 66 to 68. The
number of PSUs sampled in Round 2 is shown in column five for comparison.
Before we selected the subsample of PSUs using this allocation, we investigated the effects the
allocation would have on the variability in the weights for the nontelephone households by
examining the average segment weights. The distribution of average segment weights within
each study area indicated a great deal of variability in the weights, with the main problems
arising because of the supplemental sample of segments selected in Round 1 for six study areas
(Colorado, Florida, Massachusetts, New Jersey, New York, and Washington). In particular, the
average segment weight in Florida and New York was much higher than the average weight. To reduce the variability in the weights, the number of allocated PSUs in Round 3 was revised. Since Texas had a low average segment weight and a relatively large number of allocated PSUs, the allocation for Texas was decreased by two PSUs and the allocations for Florida and New York were each increased by one. The final number of PSUs allocated to each study area is shown in column six.

Table 4-1.
Number of Round 3 Primary Sampling Units

                   1999 Eligible     Percent      Proportional      Number of     Number of
                   Nontelephone      by Study     Allocation        Round 3       Round 2       Final
Study Area         Households        Area         of PSUs           PSUs          PSUs          Allocation
Alabama            60,879            2.9          1.9               2             10            2
Bal. Wisconsina    19,105            0.9          0.6               1             8             1
California         199,315           9.6          6.3               6             6             6
Colorado           15,168            0.7          0.5               1             6             1
Florida            70,550            3.4          2.2               2             8             3
Mass.              10,702            0.5          0.3               1             4             1
Michigan           57,664            2.8          1.8               2             6             2
Milwaukeea         7,831             0.4          0.2               1             1             1
Minnesota          15,182            0.7          0.5               1             7             1
Miss.              54,050            2.6          1.7               2             12            2
New Jersey         30,663            1.5          1.0               1             6             1
New York           66,053            3.2          2.1               2             7             3
Texas              265,790           12.8         8.4               8             9             6
Bal. of nation     1,167,908         56.3         37.0              37            37            37
Washington         35,028            1.7          1.1               1             6             1
Total U.S.         2,075,889         100.0        66                68            133           68
Focal Sites        907,980           43.7         29                31            96            31

a. Balance of Wisconsin and Milwaukee are shown separately but are combined into one site in the 2002 designs.
Stratification. Since the 2002 sample was a subsample of the Round 2 PSUs, the 1999 PSU
stratification was maintained. The following is a short description of the stratification procedure.
The general idea of stratification is to group similar PSUs into the same stratum and then select
just one or two PSUs per stratum. This procedure reduces between-PSU variance for most
statistics, particularly those related to the statistics used in the grouping. If enough information
related to the outcomes is available for stratification purposes, having a large number of strata
can increase the precision of point estimates. However, forming the maximum possible number
of strata makes estimation of variances more difficult. Given the small number of PSUs for each
state in the NSAF, the original sample design called for creating the maximum possible number
of strata, and then sampling one PSU per stratum. For efficiency, strata are formed so that they
are of nearly equal size in terms of population, whenever possible. When some PSUs are larger
than the average population per desired stratum, these PSUs are selected with certainty and
called self-representing (SR), or certainty, PSUs. These procedures were carried out for the
original stratification of PSUs in Round 1.
In Round 3, the subsampling of PSUs from the study areas reduced the effectiveness of most of
the stratification from the earlier rounds. Two notable exceptions are for the entire balance of the
nation sample of PSUs and for California. In these two study areas, the entire sample of PSUs
was retained and thus all the previous stratification was carried forward into Round 3. In the
other study areas, the study area was considered a separate stratum and a small subsample of
PSUs was selected. Thus, much of the stratification of the PSUs within the study area used in the
previous rounds was of little utility in these study areas.
Selecting PSUs. When the original sample of PSUs was selected, a probability proportional to size (PPS) selection method was used, with size defined as the population in the eligible BGs in the strata. Since the goal of the subsampling was to retain this proportionality, the subsample of PSUs was selected with equal probability from all the originally sampled PSUs in the study area.
Prior to subsampling, the PSUs were sorted in order of PSU weight (the inverse of PSU
probability of selection within each stratum). The sort order was alternating (ascending,
descending) across strata or study areas. The sort was used to control the variability in the sizes
of the subsampled PSUs, thereby reducing the chance of obtaining a subsample that had predominantly large- or small-sample PSUs. The last step was to select the PSUs with equal
probability within the study area stratum.
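A minimal Python sketch of this subsampling step follows, assuming a serpentine (alternating ascending/descending) sort by PSU weight and an equal-probability systematic draw; the data structures and take size are illustrative.

import random

def serpentine_sort(psus_by_stratum):
    """Sort PSUs by weight within each stratum, alternating ascending and
    descending order across strata to control the size mix of the subsample."""
    ordered = []
    for i, stratum in enumerate(psus_by_stratum):
        ordered.extend(sorted(stratum, key=lambda p: p["weight"], reverse=(i % 2 == 1)))
    return ordered

def equal_probability_systematic(ordered_psus, n_take, rng):
    """Select n_take PSUs with equal probability using a systematic draw
    over the serpentine-ordered list."""
    interval = len(ordered_psus) / n_take
    start = rng.random() * interval
    return [ordered_psus[int(start + k * interval)] for k in range(n_take)]

rng = random.Random(7)
strata = [[{"id": "A1", "weight": 3.1}, {"id": "A2", "weight": 1.4}, {"id": "A3", "weight": 2.2}],
          [{"id": "B1", "weight": 5.0}, {"id": "B2", "weight": 2.7}]]
print([p["id"] for p in equal_probability_systematic(serpentine_sort(strata), 2, rng)])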
4.2 Second-Stage Sampling
The second stage was the sampling of segments from the subsampled PSUs. Since all the sampled segments were retained in the subsampled PSUs, only a short description of the 1999
second-stage sampling is given here. One exception was the sampling of segments in
Milwaukee, and this procedure is described in detail later in this section.
As mentioned previously, two special procedures were used in the sampling of segments for this
survey. The first concerned the exclusion of areas with very high rates of telephone coverage.
This exclusion had little effect on segment formation and selection rules. It is discussed in
section 4.2.1. The second special procedure was to screen compact chunks without listing. This
feature had a major effect on segment formation and selection rules. In surveys where there will
be subsampling within each segment to yield a uniform household sample size for all segments,
it is fairly straightforward to decide on a minimum size for each segment and on how many
segments to select from each sample PSU. For NSAF, natural variation in block size made it
advantageous to allow the minimum segment size and the number of sample segments to vary by
PSU while keeping the number of sample households nearly uniform across PSUs. Section 4.2.2
focuses on how the segments were defined and on the determination of the number of segments
to select in each PSU. The selection procedures are described in 4.2.3.
4.2.1 Exclusion of Block Groups with High Telephone Coverage Rates
Extending coverage to every nontelephone household was judged as an expensive undertaking
with uncertain benefits. The 1990 Decennial Census showed that there are BGs where virtually
every household had a telephone. The details from the 2000 Decennial Census were not available at the time of sampling for 2002, but those data show nontelephone households to be even rarer. The prospect of having to screen hundreds or even thousands of households in some areas
to find a single nontelephone household seemed to be carrying the concept of coverage for every
nontelephone household to an unreasonable extreme. Accordingly, the decision was made to
restrict the area sample to BGs where the percentage of households with a phone in 1990 was
below 92 to 98 percent, with the exact limit varying from state to state. Table 4-2 shows the
cutoffs by state. In addition to the exclusion of blocks in BGs with high telephone coverage, all
blocks with zero year-round housing as of 1990 were excluded.
Table 4-2.
Maximum Telephone Service Rates Allowed in Covered Block Groups

                  Maximum Telephone
                  Service Rate Allowed     Nontelephone
                  in Covered BGs           Households         BGs Excluded     All Households
Study Area        (%)                      Excluded (%)       (%)              Excluded (%)
Alabama           95                       7.3                37.4             40.9
California        98                       7.3                59.2             59.0
Colorado          97                       8.8                56.6             57.9
Florida           97                       9.1                48.0             54.6
Massachusetts     98                       9.4                70.5             70.4
Michigan          97                       9.8                56.1             59.9
Minnesota         98                       9.1                57.6             60.7
Mississippi       92                       9.7                34.5             35.9
New Jersey        98                       5.6                68.2             66.8
New York          97                       7.5                58.7             58.7
Texas             95                       7.7                42.2             45.3
Washington        98                       6.1                53.3             53.7
Wisconsin         98                       9.2                56.7             59.5
Bal. of nation    97                       8.0                54.9             57.1

4.2.2 Segment Stratification and Selection
The segments were stratified by size in the original sampling procedure. Size was defined as the
ratio of the number of year-round housing units in the segment to the desired chunk size, c, for
the PSU. The two cutpoints in the stratification were 0.75 and 10.0. This means that a low
stratum was established for segments with DU counts more than 25 percent below the desired
chunk size, a high stratum was established for segments with 10 or more chunks, and a middle
stratum was left for all the remaining segments in the PSU.
Within each stratum, the segments were sorted by the segment-level telephone coverage rate.
(This was computed as the average telephone rate in 1990 for census BGs intersecting the
segment.) The sort order was reversed in the middle stratum so that small segments with high
nontelephone rates were close to medium segments with high nontelephone rates and so that
medium segments with low nontelephone rates were close to large segments with low
nontelephone rates. A systematic PPS sample was then drawn where the measure of size for a
segment was its assigned chunk count. Sampling was independent across PSUs, both non-self-representing (NSR) and SR.
Segments were sampled PPS to the number of chunks assigned to the segment. When a chunk
was subsampled within a segment, the chunk was selected PPS to the number of housing units in
the chunk (based on the quick count) in Round 3. The chunks were created of equal size
whenever feasible. In Round 3, chunks were sampled with equal probabilities.
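The size stratification and the systematic PPS draw can be sketched as follows; the cutpoints 0.75 and 10.0 come from the text above, while the record layout and random start are simplifying assumptions.

import random

def size_stratum(year_round_dus, chunk_size_c):
    """Assign a segment to the low, middle, or high size stratum using the
    ratio of its year-round DU count to the PSU's desired chunk size c."""
    ratio = year_round_dus / chunk_size_c
    if ratio < 0.75:
        return "low"
    if ratio >= 10.0:
        return "high"
    return "middle"

def systematic_pps(segments, n_sample, rng):
    """Systematic PPS selection of segments, where each segment's measure of
    size is its assigned chunk count."""
    total = sum(s["chunks"] for s in segments)
    interval = total / n_sample
    start = rng.random() * interval
    picks, cum, k = [], 0.0, 0
    for seg in segments:
        cum += seg["chunks"]
        while k < n_sample and start + k * interval < cum:
            picks.append(seg)
            k += 1
    return picks

rng = random.Random(1999)
segments = [{"id": i, "chunks": c} for i, c in enumerate([1, 3, 1, 6, 2])]
print([s["id"] for s in systematic_pps(segments, 2, rng)])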
As mentioned earlier, the segments in the subsampled PSUs were retained for the 2002 sample
except for Milwaukee. The Milwaukee and balance-of-Wisconsin study areas were combined
into a single study area (Wisconsin). Since Milwaukee was the only nonstate area in the original
sample, it had been divided into segments (without the intermediate sampling of PSUs) and a
sample of 50 segments had been selected. Outside of Milwaukee, PSUs were sampled first and then about eight segments per PSU were selected on average. With the new unified
Wisconsin study area and the national design in Round 3, it was neither practical nor necessary
to retain all 50 segments for Milwaukee. A subsample of 20 segments of the 50 segments
sampled from Milwaukee was selected and retained for Round 3. To control the sample of
nontelephone households, the 50 segments were partitioned into two strata, based on the percent
of nontelephone households found in each segment in 1999. The two strata represent a high density (more than 4 percent nontelephone households) and a low density of nontelephone households. An equal probability sample was drawn from each stratum. A subsample of 7
segments was selected from the low eligibility stratum and 13 from the high eligibility stratum.
4.3 Chunk Selection
The number of chunks associated with each sampled segment was the same as the number used
in 1999. The method of associating the number to the segment is described in section 4.2.2 of
Report No. 2 for 1999. The only difference is that in previous rounds the number depended on
information from the Decennial Census, while in Round 3 the number was a function of the
chunk selection in Round 2. Essentially, if the interviewer’s initial count of the number of DUs
in the segment fell outside a range based on Round 2 listing, special instructions were required.
If the initial count for the segment contained fewer than 60 DUs, the entire segment was listed. If the initial count for the segment contained more than 60 DUs, the interviewer would call the home office. If the segment had been chunked in Round 2 and the count was within 20 percent of the 1999 count, the same chunking instructions were used as in 1999. If the count was not within 20 percent, the segment was rechunked (divided into chunks based on the new counts) and a new chunk was selected.
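A minimal sketch of these decision rules follows, with the interviewer's call to the home office and the office's decision collapsed into a single function; the function and argument names are illustrative.

def chunking_instruction(initial_du_count, chunked_in_round2=False, round2_du_count=None):
    """Return the handling rule for a segment based on the interviewer's
    initial DU count and, when available, the Round 2 (1999) information."""
    if initial_du_count < 60:
        return "list entire segment"
    if not chunked_in_round2 or round2_du_count is None:
        return "call the home office for special instructions"
    if abs(initial_du_count - round2_du_count) <= 0.20 * round2_du_count:
        return "reuse the 1999 chunking instructions"
    return "rechunk the segment and select a new chunk"

print(chunking_instruction(45))                              # small segment: list it all
print(chunking_instruction(90, True, round2_du_count=80))    # within 20 percent of 1999
print(chunking_instruction(120, True, round2_du_count=80))   # rechunk and reselect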
A few segments were identified as single-chunk segments but were discovered, during cruising,
to have grown considerably since 1999 due to new construction. Rather than screen many more
DUs than planned (most of which would be fairly new and would thus likely have phone
service), the decision was made to chunk some of these. Also, there were segments where
chunking was planned, but growth made it advisable to form more chunks than originally
planned. Lastly, in a few segments so much housing had been demolished that the planned
chunking was not carried out. Table 4-3 provides some information on planned and unplanned
chunking.
Table 4-3.
Segment Counts, by Planned and Unplanned Chunking (Including Sample Supplement)

                  Chunking     Chunking      More Chunking   Less Chunking   Chunking Not    Chunking Not
                  Done As      Expected but  Done than       Done than       Expected but    Expected and    Total
Study Area        Expected     Not Done      Expected        Expected        Done Anyway     Not Done        Segments
Alabama           1            0             0               1               1               15              18
California        24           2             5               5               1               29              66
Colorado          2            1             2               4               0               6               15
Florida           7            0             1               1               3               21              33
Massachusetts     0            1             0               1               0               7               9
Michigan          5            0             1               0               1               11              18
Minnesota         1            0             0               0               0               6               7
Mississippi       2            0             2               0               2               7               13
New Jersey        13           1             0               0               0               10              24
New York          33           0             3               1               1               7               45
Texas             14           0             0               0               1               35              50
Washington        7            1             4               0               1               7               20
Wisconsin         9            0             1               0               0               19              29
Bal. of nation    64           5             10              1               4               151             235
Total             182          11            29              14              15              331             582

4.4 Achieved Response and Eligibility Rates
The results of the area listing operation are shown in table 4-4. Because Round 3 includes only a national sample and no reliable site-level estimates, the tables that were shown at the site level in this chapter of previous reports are shown only nationally for Round 3. Over 18,000 households were listed, and 89 percent were occupied. This is slightly less than half the number of households listed in Round 2.
Table 4-4.
Outcomes of Area Listing

Category       Listed Households    Occupied Households    Rate
Total, U.S.    18,299               16,305                 89.1%
The next step in the survey was to prescreen and screen the households to determine if they were
eligible (i.e., there was no phone in the household and at least one occupant was under age 65).
The prescreening attempted to determine eligibility, and the screening was the initial screener
interview. If the respondent to the prescreener reported that there was no phone and at least one
occupant under age 65, then the interviewer immediately segued into the cell phone procedures
so that the screener and extended interviews could be conducted by telephone interviewers at the
TRC in Maryland. However, at some DUs, the information on the presence of a telephone may
have come from a teenager who was not eligible to make the cell phone call to the TRC. In
addition, there were instances where a qualified respondent was too busy to participate. In such
cases, the interviewer returned later for the screener interview. In most DUs, there was no time
lapse between prescreening and screening.
The results of the prescreening and screening operations are shown in table 4-5. Response on the
prescreener level (phone ownership and age only) was very high, at 99 percent. At the screener
level, it was 84.5 percent. The response rate for the screener in Round 2 was 80 percent. The
overall eligibility rate was 4.5 percent, which is slightly lower than the 4.8 percent eligibility rate
in Round 2. This means that roughly 22 households had to be contacted to find a single
household without a phone (and with at least one adult under 65).
Table 4-5.
Outcomes of Area Prescreening and Screening

                                          Response                   Eligibility    Households Eligible for
Category (Total, U.S.)    Complete        Rate           Eligible    Rate           Within-Household Subsampling
Prescreening              16,109          98.8%          730a        4.5%           NA
Screening                 616             84.5%          721b        4.4%           607

a. No telephone in household and at least one occupant under age 65.
b. Nine households switched from reporting no telephone and at least one occupant under age 65 at the prescreener to having either a phone or only elderly residents at the screener.
5. WITHIN-HOUSEHOLD SAMPLING AND ACHIEVED SAMPLE SIZES
In both the RDD and area components, a sample of people living in the sampled households was
selected to reduce the burden of participation for the household and to improve response rates.
Different methods were used to sample children, adults in households with children, and adults
in households without children. These methods were the same as those used in Round 1 and
Round 2. The first three sections of this chapter describe the sampling methods. The fourth
section summarizes the outcomes of sampling people within households.
5.1 Sampling Children
After selecting the households, children under 18 were sampled from the selected households for
the child sample. If a sampled household had exactly one child, then that child was always
selected. In households with more than one child under 18, either one child or two children were
selected. One child was randomly selected from all the children 5 years old or younger.
Similarly, one child was randomly sampled from all children between the ages of 6 and 17 years.
Thus, in households with both children under 6 years old and children 6 and older, two children
were selected. The procedures were the same for households sampled from the RDD and the area
components.
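A minimal sketch of this within-household selection rule is given below; the household representation and random draws are illustrative assumptions.

import random

def sample_children(children_ages, rng):
    """Select the child sample for one household: one child at random from
    those age 5 or younger and one at random from those age 6 to 17."""
    younger = [i for i, age in enumerate(children_ages) if age <= 5]
    older = [i for i, age in enumerate(children_ages) if 6 <= age <= 17]
    selected = []
    if younger:
        selected.append(rng.choice(younger))
    if older:
        selected.append(rng.choice(older))
    return selected  # positions of the sampled children in the roster

rng = random.Random(3)
print(sample_children([2, 9, 15], rng))  # one child under 6 and one age 6 to 17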
The interviewer asked the MKA about each sample child. The MKA was most often the mother
or father of the child, but people with other relationships to the child were also interviewed if
they were most knowledgeable about a sample child.2 During the interview about the child,
questions were also asked about the MKA and his/her spouse/partner if the spouse/partner also
lived within the household. The MKA responded by proxy for all items about the spouse/partner.
However, some questions were asked only about the MKA and others were randomly asked
about only the MKA or the spouse/partner when both were present. The MKA or the
spouse/partner was randomly chosen as the subject for the items with equal probability. The
strategy was applied uniformly in the RDD and area components. As mentioned earlier, these
rules were the same as those used in Rounds 1 and 2.
5.2 Sample Selection of Other Adults in Households with Children
If the household was subsampled for adults as discussed in chapter 3, then a sample of adults
under age 65 (other than the MKA and the MKA’s spouse/partner) was selected from households
with children. The telephone numbers designated as being “in sample” if they were adult-only
households were also designated for sampling other adults. The procedure was used if the
household contained both children and other adults. Adults in the other-adult category include adult siblings, grandparents, aunts, uncles, and other relatives of sample children, boarders, and live-in servants. The purpose of interviewing some of these adults was to ensure their representation in estimates about the entire group of all adults under the age of 65.

2 If a person under the age of 18 was identified as a focal child, but did not have an MKA or the MKA was the spouse or unmarried partner of this person, then this individual was considered an emancipated minor and regarded as an adult in the interview. There were 26 such interviews in Round 3.
The sample selection of other adults was performed at the close of the MKA interview so that
relationship data were available on all the adults in the household. To be eligible, the adults had
to be under age 65 and not be the parent of a child under age 18 living in the household. Ideally,
the eligibility rule would have included only adults who were not potential MKAs or potential
MKA spouse/partners for nonsample children under age 18 living in the household. However,
determining this would have involved asking detailed questions for each child in the household
about who was the MKA for the child and which adults in the household might be
spouse/partners of the potential MKAs. Such an approach would have been too burdensome and
any small biases that were induced by the procedure used were deemed acceptable. The biases involved multiple chances of selection for nonparent MKAs and no chance of selection for non-MKA parents. To clarify this last point, a parent had zero chance of selection if he/she resided
with his/her child but was not viewed by the household respondent as the MKA or as the
spouse/partner of the MKA. Such situations are exceedingly rare, so any bias was of little
consequence.
To reduce the variability in the sampling rates of other adults in households with large numbers
of adults, the sample size for other adults in a household depended upon the number of such
adults present. If there were only one or two, then one of these adults was randomly selected. If
there were three or more present, then two were randomly selected.
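A minimal sketch of the other-adult selection rule follows, assuming the roster of eligible other adults has already been determined; the names are illustrative.

import random

def sample_other_adults(eligible_other_adults, rng):
    """Select other adults in a household with children: one if one or two
    are eligible, two if three or more are eligible."""
    if not eligible_other_adults:
        return []
    n_take = 1 if len(eligible_other_adults) <= 2 else 2
    return rng.sample(eligible_other_adults, n_take)

rng = random.Random(11)
print(sample_other_adults(["grandparent", "adult sibling", "boarder"], rng))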
Once the random selection of other adults was completed, the sampled adults were interviewed.
During each such interview, data were collected about both the sample adult and his/her
spouse/partner if the spouse/partner also lived in the household.3 All this information was
collected by proxy through the sample adult. Questions on health care insurance and usage were
asked about both the selected adult and the spouse/partner, but proxy responses from the selected
adult were accepted for both of them. As with the MKA interview, some questions were asked
only of the sample adult.
5.3 Sample Selection of Adults from Adult-Only Households
Sampling methods used for adults under age 65 in adult-only households were similar to those
used for other adults in households with children. A sample of either one or two adults was
selected depending upon how many adults were present. Unlike other adults in households with
children, the within-household subsampling rule was different for RDD households with three
adults under age 65 than in corresponding area households. A decision was made based on a
random number whether to sample one or two in the RDD component. The probabilities for
deciding whether to take one or two adults from adult-only RDD households with three adults varied by study area and are given in table 5-1. These probabilities were jointly set with the subsampling rate for adult-only telephone households in order to achieve targeted sample sizes for adults from adult-only households discussed in chapter 3. The rates are the same as used in Rounds 1 and 2. Note that the rates do not vary by household income.

3 Since the sample selection was done at the close of the MKA interview, it was not possible to accidentally select both an adult and his/her spouse/partner and then ask them both about themselves and each other.
Table 5-1.
Proportion of RDD Adult-Only Households with Three Other Adults under Age 65 in which Just One Adult Is Selected

Study Area        Proportion
Alabama           0.54
California        0.65
Colorado          0.49
Florida           0.51
Massachusetts     0.56
Michigan          0.53
Minnesota         0.53
Mississippi       0.68
New Jersey        0.62
New York          0.65
Texas             0.56
Washington        0.53
Wisconsin         0.61
Bal. of nation    0.53
In the area component, two adults were always selected when the household had three adults.
The rules for households with other numbers of adults did not vary by component. As with the
sample of other adults from households with children, data were collected from the sampled
adult about the spouse/partner of each sample adult if that adult lived in the same household. The
only exceptions were the set of questions that were asked only of the sampled adult.
Since no household relationship data were available at the time of sampling in these households (in households with children the data were available), it was possible to select both an adult and the spouse of that adult. If the relationship data revealed that a sampled adult was the spouse/partner of another sampled adult, then the interview for that adult was automatically deleted.
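A minimal sketch of the RDD adult-only selection rule follows, assuming the table 5-1 proportions are held in a lookup and that households with four or more adults under 65 yield two sampled adults (as with other adults in households with children); the dictionary entries are illustrative.

import random

# Proportion of three-adult households in which just one adult is selected
# (illustrative entries; the full set of proportions appears in table 5-1).
ONE_ADULT_PROPORTION = {"Alabama": 0.54, "Mississippi": 0.68, "Bal. of nation": 0.53}

def sample_adults_rdd(adults_under_65, study_area, rng):
    """Select adults in an RDD adult-only household: one adult if one or two
    are present, a randomized one or two if exactly three are present, and
    two otherwise."""
    n = len(adults_under_65)
    if n <= 2:
        n_take = 1
    elif n == 3:
        n_take = 1 if rng.random() < ONE_ADULT_PROPORTION[study_area] else 2
    else:
        n_take = 2
    return rng.sample(adults_under_65, n_take)

rng = random.Random(65)
print(sample_adults_rdd(["adult A", "adult B", "adult C"], "Alabama", rng))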
5.4 Achieved Sample Sizes and Response Rates
This section gives a series of tables showing the outcomes of the within-household sampling
operations for children and adults by survey component (RDD and area) and by household
income (low and overall). The tables give achieved nominal sample sizes and response rates by
site. Effective sample sizes will be smaller due to design effects that arise from clustering in the
area sample and the variability of the sampling rates and weights. Effective sample sizes will be
given in the 2002 NSAF Variance Estimation, Report No. 4. Some groups that were sampled at
lower rates than other groups are as follows:
Households without phones,
Households with screener income above 200 percent of the poverty threshold,
Households outside of the study areas,
Children in households with multiple children in the same age range,
Adults in households without children, and
Other adults in households with children and multiple adults besides the MKA and spouse
(e.g., grown children living with parents and young siblings).
The first five tables (tables 5-2 through 5-6) give the sample sizes from the RDD component. The first two tables are sample sizes of children and the remaining three are sample sizes of adults. The next five tables (tables 5-7 through 5-11) are for the area component. The same pattern is followed, with the first two tables in the set giving child sample sizes and the other three tables giving adult sample sizes. The last five tables (tables 5-12 through 5-16) give the combined RDD
and area sample sizes.
The response rates reported in the tables are the simple unweighted ratios of the number of
interviewed persons to the number of eligible sampled persons. These rates reflect the
operational aspects of interviewing for different groups. Response rates that include differential
sampling rates and combine the screener and extended rates are discussed in detail in Report No. 8.
Tables 5-2 and 5-3 show that in the RDD sample, children under age 6 were selected at a higher
rate than those 6 to 17 years old. This is a direct result of the subsampling rules that specified that only one child in each age range was to be sampled per household. Since the mean number of
children per household is greater for 6- to 17-year-olds, the subsampling rate is lower for this
group. The response rate in table 5-2 is about the same as the response rate in table 5-3,
indicating the extended interview response rates were relatively constant by income level. This
pattern is consistent with previous rounds of the NSAF.
Tables 5-4 and 5-5 give the results for the sampling and extended interview response rates for
other adults (adults who are not MKAs or their spouse/partner) in telephone households. Nearly
80 percent of the other adults listed in households with children were sampled, but only about 60
percent in households without children were sampled. The extended response rate was higher in
adult-only households than in households with children. The lower rate in households with children may have been a consequence of having more difficulty locating and interviewing other adults (many of whom were older siblings still living at home) in those households.
is often possible to continue the interview since the screener respondent may be the sampled
adult, and this tends to increase the response rate in these households.
Table 5-6 shows that 65 percent of the adult interviews were conducted with the MKA of the
sampled child (28,208 of 43,158). Only 28 percent of the adult interviews were conducted with
adults in households without children (12,053), and the remaining 7 percent were other adults in
households with children (2,872).
Tables 5-7 to 5-11 are the corresponding counts for nontelephone households. In general, the
results are very similar to those noted earlier for the telephone sample. The main difference is
that the extended response rates are much higher—for children the response rates are about 95
percent, for other adults in households with children the rates are around 75 percent, and for
adults in adult-only households the rates are about 94 percent. Again, the likely reason for the
lower rates for other adults in households with children has to do with locating the sampled
person to conduct the interview. Table 5-11 shows that the percent of interviews with adults in
households without children is higher than in the RDD sample, and the percent of MKA
interviews is lower.
Tables 5-12 to 5-16 combine the telephone and nontelephone samples, producing counts for the
entire effort. While these tables are valuable in the sense that they give the overall counts from
Round 3, they provide less information about the interviewing operations than the previous
tables because they combine units with different attributes. For example, the extended response
rates in these tables are very similar to those in the telephone tables because the sample size from
the RDD sample was so much larger that it dominates the combined rates. Thus, these tables do
not provide much insight into the survey operations.
Table 5-2.
Within-Household Sampling and Extended Interviews of Children in Telephone Low-Income Households

Children under Age 6

                                            Average Selection                               Response
Study Area        Listed      Selected      Rate (%)             Interviewed    Ineligible  Rate (%)
Alabama           556         398           71.6                 336            0           84
California        910         642           70.5                 491            0           76
Colorado          597         414           69.3                 337            0           81
Florida           571         401           70.2                 309            3           78
Massachusetts     465         336           72.3                 264            0           79
Michigan          649         435           67.0                 365            0           84
Minnesota         499         342           68.5                 282            0           82
Mississippi       570         405           71.1                 331            0           82
New Jersey        645         464           71.9                 350            0           75
New York          536         390           72.8                 305            0           78
Texas             697         483           69.3                 409            0           85
Washington        626         419           66.9                 351            0           84
Wisconsin         549         366           66.7                 328            0           90
Bal. of nation    1,401       1,001         71.4                 873            0           87
Total             9,271       6,496         70.1                 5,331          3           82

Children Age 6 to 17

                                            Average Selection                               Response
Study Area        Listed      Selected      Rate (%)             Interviewed    Ineligible  Rate (%)
Alabama           1,168       677           58.0                 574            0           85
California        1,955       1,015         51.9                 779            0           77
Colorado          1,186       624           52.6                 530            1           85
Florida           1,336       735           55.0                 545            1           74
Massachusetts     1,122       603           53.7                 475            0           79
Michigan          1,386       708           51.1                 595            0           84
Minnesota         1,149       578           50.3                 502            0           87
Mississippi       1,320       742           56.2                 607            0           82
New Jersey        1,420       796           56.1                 596            0           75
New York          1,123       634           56.5                 505            0           80
Texas             1,356       744           54.9                 620            0           83
Washington        1,175       628           53.4                 542            0           86
Wisconsin         1,128       570           50.5                 516            0           91
Bal. of nation    3,283       1,712         52.1                 1,501          0           88
Total             20,107      10,766        53.5                 8,887          2           83
Table 5-3. Within-Household Sampling and Extended Interviews of Children in All Subsampled Telephone Households

Children under Age 6

| Study Area | Listed | Selected | Average Selection Rate (%) | Interviewed | Ineligible | Response Rate (%) |
|---|---|---|---|---|---|---|
| Alabama | 891 | 667 | 74.9 | 564 | 0 | 85 |
| California | 1,591 | 1,169 | 73.5 | 909 | 0 | 78 |
| Colorado | 1,486 | 1,084 | 72.9 | 909 | 0 | 84 |
| Florida | 1,007 | 749 | 74.4 | 596 | 3 | 80 |
| Massachusetts | 1,255 | 910 | 72.5 | 726 | 0 | 80 |
| Michigan | 1,552 | 1,115 | 71.8 | 935 | 0 | 84 |
| Minnesota | 1,449 | 1,046 | 72.2 | 898 | 0 | 86 |
| Mississippi | 849 | 625 | 73.6 | 500 | 0 | 80 |
| New Jersey | 1,671 | 1,231 | 73.7 | 941 | 0 | 76 |
| New York | 1,228 | 916 | 74.6 | 712 | 0 | 78 |
| Texas | 1,212 | 879 | 72.5 | 732 | 0 | 83 |
| Washington | 1,340 | 946 | 70.6 | 813 | 0 | 86 |
| Wisconsin | 1,383 | 974 | 70.4 | 866 | 0 | 89 |
| Bal. of nation | 3,107 | 2,284 | 73.5 | 1,987 | 0 | 87 |
| Total | 20,021 | 14,595 | 72.9 | 12,088 | 3 | 83 |

Children Age 6 to 17

| Study Area | Listed | Selected | Average Selection Rate (%) | Interviewed | Ineligible | Response Rate (%) |
|---|---|---|---|---|---|---|
| Alabama | 2,109 | 1,280 | 60.7 | 1,073 | 0 | 84 |
| California | 3,442 | 1,923 | 55.9 | 1,499 | 0 | 78 |
| Colorado | 3,264 | 1,916 | 58.7 | 1,621 | 1 | 85 |
| Florida | 2,430 | 1,456 | 59.9 | 1,123 | 1 | 77 |
| Massachusetts | 2,841 | 1,661 | 58.5 | 1,309 | 0 | 79 |
| Michigan | 3,644 | 2,044 | 56.1 | 1,708 | 0 | 84 |
| Minnesota | 3,385 | 1,937 | 57.2 | 1,686 | 0 | 87 |
| Mississippi | 2,090 | 1,239 | 59.3 | 1,001 | 0 | 81 |
| New Jersey | 3,701 | 2,197 | 59.4 | 1,681 | 0 | 77 |
| New York | 2,569 | 1,535 | 59.8 | 1,230 | 0 | 80 |
| Texas | 2,644 | 1,557 | 58.9 | 1,264 | 0 | 81 |
| Washington | 2,866 | 1,686 | 58.8 | 1,443 | 0 | 86 |
| Wisconsin | 3,265 | 1,871 | 57.3 | 1,669 | 0 | 89 |
| Bal. of nation | 7,104 | 4,119 | 58.0 | 3,557 | 0 | 86 |
| Total | 45,354 | 26,421 | 58.3 | 21,864 | 2 | 83 |
Table 5-4. Within-Household Sampling and Extended Interviews of Other Adults in Subsampled Telephone Households with Children

Low-Income Telephone Households with Children

| Study Area | Listed Other Adults | Selected | Average Selection Rate (%) | Interviewed | Ineligible | Response Rate (%) |
|---|---|---|---|---|---|---|
| Alabama | 142 | 118 | 83.1 | 69 | 1 | 59 |
| California | 432 | 310 | 71.8 | 185 | 7 | 61 |
| Colorado | 128 | 94 | 73.4 | 61 | 0 | 65 |
| Florida | 137 | 105 | 76.6 | 71 | 0 | 68 |
| Massachusetts | 91 | 74 | 81.3 | 43 | 0 | 58 |
| Michigan | 160 | 128 | 80.0 | 88 | 2 | 70 |
| Minnesota | 156 | 119 | 76.3 | 90 | 1 | 76 |
| Mississippi | 134 | 116 | 86.6 | 70 | 0 | 60 |
| New Jersey | 152 | 110 | 72.4 | 51 | 3 | 48 |
| New York | 169 | 130 | 76.9 | 70 | 4 | 56 |
| Texas | 146 | 111 | 76.0 | 81 | 1 | 74 |
| Washington | 113 | 88 | 77.9 | 62 | 1 | 71 |
| Wisconsin | 107 | 81 | 75.7 | 58 | 0 | 72 |
| Bal. of nation | 396 | 309 | 78.0 | 222 | 2 | 72 |
| Total | 2,463 | 1,893 | 76.9 | 1,221 | 22 | 65 |

All Telephone Households with Children

| Study Area | Listed Other Adults | Selected | Average Selection Rate (%) | Interviewed | Ineligible | Response Rate (%) |
|---|---|---|---|---|---|---|
| Alabama | 249 | 209 | 83.9 | 132 | 1 | 63 |
| California | 711 | 515 | 72.4 | 310 | 11 | 62 |
| Colorado | 320 | 253 | 79.1 | 168 | 0 | 66 |
| Florida | 255 | 200 | 78.4 | 127 | 0 | 64 |
| Massachusetts | 299 | 235 | 78.6 | 141 | 1 | 60 |
| Michigan | 423 | 333 | 78.7 | 243 | 4 | 74 |
| Minnesota | 468 | 378 | 80.8 | 278 | 4 | 74 |
| Mississippi | 196 | 167 | 85.2 | 101 | 0 | 60 |
| New Jersey | 379 | 290 | 76.5 | 158 | 5 | 55 |
| New York | 370 | 283 | 76.5 | 166 | 4 | 59 |
| Texas | 280 | 217 | 77.5 | 154 | 1 | 71 |
| Washington | 265 | 215 | 81.1 | 148 | 2 | 69 |
| Wisconsin | 362 | 288 | 79.6 | 225 | 0 | 78 |
| Bal. of nation | 886 | 700 | 79.0 | 521 | 4 | 75 |
| Total | 5,463 | 4,283 | 78.4 | 2,872 | 37 | 68 |

Note: Other adults are adults who are neither the MKA of a sample child nor the spouse of such a person, nor the parent of a child under age 18 in the household.
Table 5-5. Subsampling and Extended Interviews of Adults in Subsampled Adult-Only Telephone Households

Low-Income Adult-Only Households

| Study Area | Listed Adults | Selected | Average Selection Rate (%) | Interviewed | Ineligible | Response Rate (%) |
|---|---|---|---|---|---|---|
| Alabama | 735 | 463 | 63.0 | 350 | 11 | 77 |
| California | 916 | 569 | 62.1 | 402 | 14 | 72 |
| Colorado | 624 | 392 | 62.8 | 306 | 6 | 79 |
| Florida | 619 | 389 | 62.8 | 297 | 6 | 78 |
| Massachusetts | 734 | 453 | 61.7 | 313 | 10 | 71 |
| Michigan | 747 | 490 | 65.6 | 367 | 10 | 76 |
| Minnesota | 866 | 564 | 65.1 | 472 | 16 | 86 |
| Mississippi | 667 | 427 | 64.0 | 321 | 12 | 77 |
| New Jersey | 519 | 304 | 58.6 | 197 | 6 | 66 |
| New York | 592 | 368 | 62.2 | 263 | 11 | 74 |
| Texas | 515 | 314 | 61.0 | 241 | 9 | 79 |
| Washington | 552 | 338 | 61.2 | 263 | 8 | 80 |
| Wisconsin | 615 | 400 | 65.0 | 328 | 6 | 83 |
| Bal. of nation | 1,997 | 1,275 | 63.8 | 1,046 | 22 | 83 |
| Total | 10,698 | 6,746 | 63.1 | 5,166 | 147 | 78 |

All Adult-Only Households

| Study Area | Listed Adults | Selected | Average Selection Rate (%) | Interviewed | Ineligible | Response Rate (%) |
|---|---|---|---|---|---|---|
| Alabama | 1,438 | 879 | 61.1 | 667 | 17 | 77 |
| California | 2,089 | 1,262 | 60.4 | 875 | 24 | 71 |
| Colorado | 1,530 | 928 | 60.7 | 722 | 16 | 79 |
| Florida | 1,247 | 755 | 60.5 | 569 | 15 | 77 |
| Massachusetts | 1,996 | 1,193 | 59.8 | 818 | 31 | 70 |
| Michigan | 2,145 | 1,294 | 60.3 | 985 | 24 | 78 |
| Minnesota | 2,716 | 1,655 | 60.9 | 1,346 | 40 | 83 |
| Mississippi | 1,204 | 753 | 62.5 | 578 | 22 | 79 |
| New Jersey | 1,762 | 1,023 | 58.1 | 684 | 31 | 69 |
| New York | 1,381 | 844 | 61.1 | 592 | 24 | 72 |
| Texas | 1,000 | 604 | 60.4 | 466 | 16 | 79 |
| Washington | 1,504 | 882 | 58.6 | 679 | 26 | 79 |
| Wisconsin | 1,936 | 1,177 | 60.8 | 945 | 20 | 82 |
| Bal. of nation | 4,401 | 2,674 | 60.8 | 2,127 | 56 | 81 |
| Total | 26,349 | 15,923 | 60.4 | 12,053 | 362 | 77 |
Table 5-6. Sources of Adult Telephone Interviews

| Study Area | MKA for Sample Child | Spouse of MKA | Emancipated Minors | Spouse of Emancipated Minor | Other Adults in Household with Children: Interviewed | Other Adults in Household with Children: Spouse | Adults in Household without Children: Interviewed | Adults in Household without Children: Spouse | Total Adult Extended Interviews |
|---|---|---|---|---|---|---|---|---|---|
| Alabama | 1,357 | 916 | 2 | 0 | 132 | 12 | 667 | 288 | 2,158 |
| California | 1,941 | 1,432 | 6 | 0 | 310 | 51 | 875 | 341 | 3,132 |
| Colorado | 2,113 | 1,660 | 2 | 1 | 168 | 11 | 722 | 317 | 3,005 |
| Florida | 1,454 | 1,038 | 4 | 3 | 127 | 10 | 569 | 261 | 2,154 |
| Massachusetts | 1,708 | 1,243 | 1 | 0 | 141 | 11 | 818 | 336 | 2,668 |
| Michigan | 2,196 | 1,630 | 0 | 0 | 243 | 30 | 985 | 429 | 3,424 |
| Minnesota | 2,173 | 1,711 | 1 | 0 | 278 | 13 | 1,346 | 642 | 3,798 |
| Mississippi | 1,254 | 815 | 0 | 0 | 101 | 7 | 578 | 255 | 1,933 |
| New Jersey | 2,184 | 1,611 | 1 | 0 | 158 | 22 | 684 | 302 | 3,027 |
| New York | 1,606 | 1,126 | 3 | 1 | 166 | 15 | 592 | 230 | 2,367 |
| Texas | 1,620 | 1,218 | 0 | 0 | 154 | 25 | 466 | 202 | 2,240 |
| Washington | 1,862 | 1,414 | 0 | 0 | 148 | 10 | 679 | 317 | 2,689 |
| Wisconsin | 2,137 | 1,687 | 0 | 0 | 225 | 14 | 945 | 463 | 3,307 |
| Bal. of nation | 4,603 | 3,391 | 5 | 2 | 521 | 61 | 2,127 | 963 | 7,256 |
| Total | 28,208 | 20,892 | 25 | 7 | 2,872 | 292 | 12,053 | 5,346 | 43,158 |
Table 5-7. Within-Household Sampling and Extended Interviews of Children in Nontelephone Low-Income Households

Children under Age 6

| Study Area | Listed | Selected | Average Selection Rate (%) | Interviewed | Ineligible | Response Rate (%) |
|---|---|---|---|---|---|---|
| Alabama | 6 | 5 | 83.3 | 5 | 0 | 100 |
| California | 25 | 15 | 60.0 | 15 | 0 | 100 |
| Colorado | 0 | 0 | 0.0 | 0 | 0 | 0 |
| Florida | 14 | 10 | 71.4 | 9 | 0 | 90 |
| Massachusetts | 4 | 3 | 75.0 | 3 | 0 | 100 |
| Michigan | 16 | 6 | 37.5 | 6 | 0 | 100 |
| Minnesota | 0 | 0 | 0.0 | 0 | 0 | 0 |
| Mississippi | 8 | 5 | 62.5 | 5 | 0 | 100 |
| New Jersey | 2 | 1 | 50.0 | 1 | 0 | 100 |
| New York | 3 | 3 | 100.0 | 2 | 0 | 67 |
| Texas | 57 | 30 | 52.6 | 28 | 0 | 93 |
| Washington | 1 | 1 | 100.0 | 1 | 0 | 100 |
| Wisconsin | 18 | 10 | 55.6 | 10 | 0 | 100 |
| Bal. of nation | 102 | 68 | 66.7 | 64 | 0 | 94 |
| Total | 256 | 157 | 61.3 | 149 | 0 | 95 |

Children Age 6 to 17

| Study Area | Listed | Selected | Average Selection Rate (%) | Interviewed | Ineligible | Response Rate (%) |
|---|---|---|---|---|---|---|
| Alabama | 12 | 8 | 66.7 | 8 | 0 | 100 |
| California | 39 | 15 | 38.5 | 15 | 0 | 100 |
| Colorado | 4 | 2 | 50.0 | 2 | 0 | 100 |
| Florida | 10 | 5 | 50.0 | 4 | 0 | 80 |
| Massachusetts | 5 | 3 | 60.0 | 3 | 0 | 100 |
| Michigan | 13 | 6 | 46.2 | 6 | 0 | 100 |
| Minnesota | 0 | 0 | 0.0 | 0 | 0 | 0 |
| Mississippi | 24 | 9 | 37.5 | 8 | 0 | 89 |
| New Jersey | 0 | 0 | 0.0 | 0 | 0 | 0 |
| New York | 5 | 3 | 60.0 | 3 | 0 | 100 |
| Texas | 49 | 25 | 51.0 | 23 | 0 | 92 |
| Washington | 6 | 3 | 50.0 | 2 | 0 | 67 |
| Wisconsin | 24 | 10 | 41.7 | 10 | 0 | 100 |
| Bal. of nation | 168 | 88 | 52.4 | 84 | 0 | 95 |
| Total | 359 | 177 | 49.3 | 168 | 0 | 95 |
Table 5-8. Within-Household Sampling and Extended Interviews of Children in All Nontelephone Households

Children under Age 6

| Study Area | Listed | Selected | Average Selection Rate (%) | Interviewed | Ineligible | Response Rate (%) |
|---|---|---|---|---|---|---|
| Alabama | 7 | 6 | 85.7 | 6 | 0 | 100 |
| California | 25 | 15 | 60.0 | 15 | 0 | 100 |
| Colorado | 0 | 0 | 0.0 | 0 | 0 | 0 |
| Florida | 15 | 11 | 73.3 | 10 | 0 | 91 |
| Massachusetts | 4 | 3 | 75.0 | 3 | 0 | 100 |
| Michigan | 16 | 6 | 37.5 | 6 | 0 | 100 |
| Minnesota | 0 | 0 | 0.0 | 0 | 0 | 0 |
| Mississippi | 9 | 6 | 66.7 | 6 | 0 | 100 |
| New Jersey | 2 | 1 | 50.0 | 1 | 0 | 100 |
| New York | 4 | 4 | 100.0 | 3 | 0 | 75 |
| Texas | 65 | 34 | 52.3 | 32 | 0 | 94 |
| Washington | 2 | 2 | 100.0 | 2 | 0 | 100 |
| Wisconsin | 21 | 12 | 57.1 | 12 | 0 | 100 |
| Bal. of nation | 135 | 89 | 65.9 | 84 | 0 | 94 |
| Total | 305 | 189 | 62.0 | 180 | 0 | 95 |

Children Age 6 to 17

| Study Area | Listed | Selected | Average Selection Rate (%) | Interviewed | Ineligible | Response Rate (%) |
|---|---|---|---|---|---|---|
| Alabama | 12 | 8 | 66.7 | 8 | 0 | 100 |
| California | 39 | 15 | 38.5 | 15 | 0 | 100 |
| Colorado | 4 | 2 | 50.0 | 2 | 0 | 100 |
| Florida | 14 | 7 | 50.0 | 6 | 0 | 86 |
| Massachusetts | 5 | 3 | 60.0 | 3 | 0 | 100 |
| Michigan | 13 | 6 | 46.2 | 6 | 0 | 100 |
| Minnesota | 0 | 0 | 0.0 | 0 | 0 | 0 |
| Mississippi | 25 | 10 | 40.0 | 9 | 0 | 90 |
| New Jersey | 0 | 0 | 0.0 | 0 | 0 | 0 |
| New York | 6 | 4 | 66.7 | 4 | 0 | 100 |
| Texas | 52 | 28 | 53.8 | 26 | 0 | 93 |
| Washington | 8 | 5 | 62.5 | 4 | 0 | 80 |
| Wisconsin | 28 | 13 | 46.4 | 13 | 0 | 100 |
| Bal. of nation | 201 | 107 | 53.2 | 103 | 0 | 96 |
| Total | 407 | 208 | 51.1 | 199 | 0 | 96 |
Table 5-9. Within-Household Sampling and Extended Interviews of Other Adults in Nontelephone Households with Children

Low-Income Nontelephone Households with Children

| Study Area | Listed Other Adults | Selected | Average Selection Rate (%) | Interviewed | Ineligible | Response Rate (%) |
|---|---|---|---|---|---|---|
| Alabama | 0 | 0 | 0.0 | 0 | 0 | 0 |
| California | 9 | 7 | 77.8 | 6 | 0 | 86 |
| Colorado | 0 | 0 | 0.0 | 0 | 0 | 0 |
| Florida | 1 | 1 | 100.0 | 1 | 0 | 100 |
| Massachusetts | 0 | 0 | 0.0 | 0 | 0 | 0 |
| Michigan | 0 | 0 | 0.0 | 0 | 0 | 0 |
| Minnesota | 0 | 0 | 0.0 | 0 | 0 | 0 |
| Mississippi | 5 | 4 | 80.0 | 3 | 0 | 75 |
| New Jersey | 0 | 0 | 0.0 | 0 | 0 | 0 |
| New York | 3 | 2 | 66.7 | 0 | 0 | 0 |
| Texas | 9 | 8 | 88.9 | 6 | 0 | 75 |
| Washington | 1 | 1 | 100.0 | 1 | 0 | 100 |
| Wisconsin | 3 | 3 | 100.0 | 3 | 0 | 100 |
| Bal. of nation | 32 | 25 | 78.1 | 19 | 0 | 76 |
| Total | 63 | 51 | 81.0 | 39 | 0 | 76 |

All Nontelephone Households with Children

| Study Area | Listed Other Adults | Selected | Average Selection Rate (%) | Interviewed | Ineligible | Response Rate (%) |
|---|---|---|---|---|---|---|
| Alabama | 0 | 0 | 0.0 | 0 | 0 | 0 |
| California | 9 | 7 | 77.8 | 6 | 0 | 86 |
| Colorado | 0 | 0 | 0.0 | 0 | 0 | 0 |
| Florida | 1 | 1 | 100.0 | 1 | 0 | 100 |
| Massachusetts | 0 | 0 | 0.0 | 0 | 0 | 0 |
| Michigan | 0 | 0 | 0.0 | 0 | 0 | 0 |
| Minnesota | 0 | 0 | 0.0 | 0 | 0 | 0 |
| Mississippi | 5 | 4 | 80.0 | 3 | 0 | 75 |
| New Jersey | 0 | 0 | 0.0 | 0 | 0 | 0 |
| New York | 3 | 2 | 66.7 | 0 | 0 | 0 |
| Texas | 9 | 8 | 88.9 | 6 | 0 | 75 |
| Washington | 1 | 1 | 100.0 | 1 | 0 | 100 |
| Wisconsin | 6 | 5 | 83.3 | 5 | 0 | 100 |
| Bal. of nation | 43 | 33 | 76.7 | 24 | 0 | 73 |
| Total | 77 | 61 | 79.2 | 46 | 0 | 75 |

Note: Other adults are adults who are neither the MKA of a sample child nor the spouse of such a person, nor the parent of a child under age 18 in the household.
Table 5-10. Subsampling and Extended Interviews of Adults in Adult-Only Nontelephone Households

Low-Income Adult-Only Nontelephone Households

| Study Area | Listed Adults | Selected | Average Selection Rate (%) | Interviewed | Ineligible | Response Rate (%) |
|---|---|---|---|---|---|---|
| Alabama | 17 | 11 | 64.7 | 10 | 0 | 91 |
| California | 17 | 13 | 76.5 | 13 | 0 | 100 |
| Colorado | 8 | 8 | 100.0 | 8 | 0 | 100 |
| Florida | 14 | 11 | 78.6 | 11 | 0 | 100 |
| Massachusetts | 6 | 4 | 66.7 | 4 | 0 | 100 |
| Michigan | 4 | 3 | 75.0 | 3 | 0 | 100 |
| Minnesota | 1 | 1 | 100.0 | 1 | 0 | 100 |
| Mississippi | 17 | 12 | 70.6 | 12 | 0 | 100 |
| New Jersey | 1 | 1 | 100.0 | 1 | 0 | 100 |
| New York | 3 | 2 | 66.7 | 2 | 0 | 100 |
| Texas | 43 | 27 | 62.8 | 24 | 1 | 92 |
| Washington | 6 | 3 | 50.0 | 2 | 1 | 100 |
| Wisconsin | 23 | 17 | 73.9 | 16 | 1 | 100 |
| Bal. of nation | 176 | 119 | 67.6 | 109 | 2 | 93 |
| Total | 336 | 232 | 69.0 | 216 | 5 | 95 |

All Adult-Only Nontelephone Households

| Study Area | Listed Adults | Selected | Average Selection Rate (%) | Interviewed | Ineligible | Response Rate (%) |
|---|---|---|---|---|---|---|
| Alabama | 20 | 14 | 70.0 | 13 | 0 | 93 |
| California | 30 | 21 | 70.0 | 21 | 0 | 100 |
| Colorado | 9 | 9 | 100.0 | 9 | 0 | 100 |
| Florida | 19 | 14 | 73.7 | 13 | 1 | 100 |
| Massachusetts | 10 | 7 | 70.0 | 7 | 0 | 100 |
| Michigan | 8 | 5 | 62.5 | 4 | 0 | 80 |
| Minnesota | 9 | 6 | 66.7 | 5 | 0 | 83 |
| Mississippi | 20 | 14 | 70.0 | 14 | 0 | 100 |
| New Jersey | 1 | 1 | 100.0 | 1 | 0 | 100 |
| New York | 5 | 4 | 80.0 | 4 | 0 | 100 |
| Texas | 60 | 41 | 68.3 | 38 | 1 | 95 |
| Washington | 8 | 5 | 62.5 | 4 | 1 | 100 |
| Wisconsin | 31 | 23 | 74.2 | 22 | 1 | 100 |
| Bal. of nation | 257 | 172 | 66.9 | 153 | 4 | 91 |
| Total | 487 | 336 | 69.0 | 308 | 8 | 94 |
Table 5-11. Sources of Adult Nontelephone Interviews

| Study Area | MKA for Sample Child | Spouse of MKA | Emancipated Minors | Spouse of Emancipated Minor | Other Adults in Household with Children: Interviewed | Other Adults in Household with Children: Spouse | Adults in Household without Children: Interviewed | Adults in Household without Children: Spouse | Total Adult Extended Interviews |
|---|---|---|---|---|---|---|---|---|---|
| Alabama | 10 | 2 | 0 | 0 | 0 | 0 | 13 | 3 | 23 |
| California | 22 | 14 | 0 | 0 | 6 | 1 | 21 | 1 | 49 |
| Colorado | 2 | 0 | 0 | 0 | 0 | 0 | 9 | 0 | 11 |
| Florida | 13 | 7 | 0 | 0 | 1 | 0 | 13 | 2 | 27 |
| Massachusetts | 5 | 0 | 0 | 0 | 0 | 0 | 7 | 0 | 12 |
| Michigan | 9 | 2 | 0 | 0 | 0 | 0 | 4 | 2 | 13 |
| Minnesota | 0 | 0 | 0 | 0 | 0 | 0 | 5 | 1 | 5 |
| Mississippi | 12 | 2 | 0 | 0 | 3 | 0 | 14 | 4 | 29 |
| New Jersey | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 2 |
| New York | 6 | 3 | 0 | 0 | 0 | 0 | 4 | 0 | 10 |
| Texas | 45 | 26 | 1 | 1 | 6 | 0 | 38 | 12 | 90 |
| Washington | 5 | 3 | 0 | 0 | 1 | 0 | 4 | 1 | 10 |
| Wisconsin | 21 | 3 | 0 | 0 | 5 | 0 | 22 | 6 | 48 |
| Bal. of nation | 143 | 86 | 0 | 0 | 24 | 2 | 153 | 41 | 320 |
| Total | 294 | 148 | 1 | 1 | 46 | 3 | 308 | 73 | 649 |
Table 5-12. Within-Household Sampling and Extended Interviews of Children in Telephone and Nontelephone Low-Income Households

Children under Age 6

| Study Area | Listed | Selected | Average Selection Rate (%) | Interviewed | Ineligible | Response Rate (%) |
|---|---|---|---|---|---|---|
| Alabama | 562 | 403 | 71.7 | 341 | 0 | 85 |
| California | 935 | 657 | 70.3 | 506 | 0 | 77 |
| Colorado | 597 | 414 | 69.3 | 337 | 0 | 81 |
| Florida | 585 | 411 | 70.3 | 318 | 3 | 78 |
| Massachusetts | 469 | 339 | 72.3 | 267 | 0 | 79 |
| Michigan | 665 | 441 | 66.3 | 371 | 0 | 84 |
| Minnesota | 499 | 342 | 68.5 | 282 | 0 | 82 |
| Mississippi | 578 | 410 | 70.9 | 336 | 0 | 82 |
| New Jersey | 647 | 465 | 71.9 | 351 | 0 | 75 |
| New York | 539 | 393 | 72.9 | 307 | 0 | 78 |
| Texas | 754 | 513 | 68.0 | 437 | 0 | 85 |
| Washington | 627 | 420 | 67.0 | 352 | 0 | 84 |
| Wisconsin | 567 | 376 | 66.3 | 338 | 0 | 90 |
| Bal. of nation | 1,503 | 1,069 | 71.1 | 937 | 0 | 88 |
| Total | 9,527 | 6,653 | 69.8 | 5,480 | 3 | 82 |

Children Age 6 to 17

| Study Area | Listed | Selected | Average Selection Rate (%) | Interviewed | Ineligible | Response Rate (%) |
|---|---|---|---|---|---|---|
| Alabama | 1,180 | 685 | 58.1 | 582 | 0 | 85 |
| California | 1,994 | 1,030 | 51.7 | 794 | 0 | 77 |
| Colorado | 1,190 | 626 | 52.6 | 532 | 1 | 85 |
| Florida | 1,346 | 740 | 55.0 | 549 | 1 | 74 |
| Massachusetts | 1,127 | 606 | 53.8 | 478 | 0 | 79 |
| Michigan | 1,399 | 714 | 51.0 | 601 | 0 | 84 |
| Minnesota | 1,149 | 578 | 50.3 | 502 | 0 | 87 |
| Mississippi | 1,344 | 751 | 55.9 | 615 | 0 | 82 |
| New Jersey | 1,420 | 796 | 56.1 | 596 | 0 | 75 |
| New York | 1,128 | 637 | 56.5 | 508 | 0 | 80 |
| Texas | 1,405 | 769 | 54.7 | 643 | 0 | 84 |
| Washington | 1,181 | 631 | 53.4 | 544 | 0 | 86 |
| Wisconsin | 1,152 | 580 | 50.3 | 526 | 0 | 91 |
| Bal. of nation | 3,451 | 1,800 | 52.2 | 1,585 | 0 | 88 |
| Total | 20,466 | 10,943 | 53.5 | 9,055 | 2 | 83 |
Table 5-13. Within-Household Sampling and Extended Interviews of Children in All Telephone and Nontelephone Households

Children under Age 6

| Study Area | Listed | Selected | Average Selection Rate (%) | Interviewed | Ineligible | Response Rate (%) |
|---|---|---|---|---|---|---|
| Alabama | 898 | 673 | 74.9 | 570 | 0 | 85 |
| California | 1,616 | 1,184 | 73.3 | 924 | 0 | 78 |
| Colorado | 1,486 | 1,084 | 72.9 | 909 | 0 | 84 |
| Florida | 1,022 | 760 | 74.4 | 606 | 3 | 80 |
| Massachusetts | 1,259 | 913 | 72.5 | 729 | 0 | 80 |
| Michigan | 1,568 | 1,121 | 71.5 | 941 | 0 | 84 |
| Minnesota | 1,449 | 1,046 | 72.2 | 898 | 0 | 86 |
| Mississippi | 858 | 631 | 73.5 | 506 | 0 | 80 |
| New Jersey | 1,673 | 1,232 | 73.6 | 942 | 0 | 76 |
| New York | 1,232 | 920 | 74.7 | 715 | 0 | 78 |
| Texas | 1,277 | 913 | 71.5 | 764 | 0 | 84 |
| Washington | 1,342 | 948 | 70.6 | 815 | 0 | 86 |
| Wisconsin | 1,404 | 986 | 70.2 | 878 | 0 | 89 |
| Bal. of nation | 3,242 | 2,373 | 73.2 | 2,071 | 0 | 87 |
| Total | 20,326 | 14,784 | 72.7 | 12,268 | 3 | 83 |

Children Age 6 to 17

| Study Area | Listed | Selected | Average Selection Rate (%) | Interviewed | Ineligible | Response Rate (%) |
|---|---|---|---|---|---|---|
| Alabama | 2,121 | 1,288 | 60.7 | 1,081 | 0 | 84 |
| California | 3,481 | 1,938 | 55.7 | 1,514 | 0 | 78 |
| Colorado | 3,268 | 1,918 | 58.7 | 1,623 | 1 | 85 |
| Florida | 2,444 | 1,463 | 59.9 | 1,129 | 1 | 77 |
| Massachusetts | 2,846 | 1,664 | 58.5 | 1,312 | 0 | 79 |
| Michigan | 3,657 | 2,050 | 56.1 | 1,714 | 0 | 84 |
| Minnesota | 3,385 | 1,937 | 57.2 | 1,686 | 0 | 87 |
| Mississippi | 2,115 | 1,249 | 59.1 | 1,010 | 0 | 81 |
| New Jersey | 3,701 | 2,197 | 59.4 | 1,681 | 0 | 77 |
| New York | 2,575 | 1,539 | 59.8 | 1,234 | 0 | 80 |
| Texas | 2,696 | 1,585 | 58.8 | 1,290 | 0 | 81 |
| Washington | 2,874 | 1,691 | 58.8 | 1,447 | 0 | 86 |
| Wisconsin | 3,293 | 1,884 | 57.2 | 1,682 | 0 | 89 |
| Bal. of nation | 7,305 | 4,226 | 57.9 | 3,660 | 0 | 87 |
| Total | 45,761 | 26,629 | 58.2 | 22,063 | 2 | 83 |
Table 5-14. Within-Household Sampling and Extended Interviews of Other Adults in Telephone and Nontelephone Households with Children

Low-Income Households with Children

| Study Area | Listed Other Adults | Selected | Average Selection Rate (%) | Interviewed | Ineligible | Response Rate (%) |
|---|---|---|---|---|---|---|
| Alabama | 142 | 118 | 83.1 | 69 | 1 | 59 |
| California | 441 | 317 | 71.9 | 191 | 7 | 62 |
| Colorado | 128 | 94 | 73.4 | 61 | 0 | 65 |
| Florida | 138 | 106 | 76.8 | 72 | 0 | 68 |
| Massachusetts | 91 | 74 | 81.3 | 43 | 0 | 58 |
| Michigan | 160 | 128 | 80.0 | 88 | 2 | 70 |
| Minnesota | 156 | 119 | 76.3 | 90 | 1 | 76 |
| Mississippi | 139 | 120 | 86.3 | 73 | 0 | 61 |
| New Jersey | 152 | 110 | 72.4 | 51 | 3 | 48 |
| New York | 172 | 132 | 76.7 | 70 | 4 | 55 |
| Texas | 155 | 119 | 76.8 | 87 | 1 | 74 |
| Washington | 114 | 89 | 78.1 | 63 | 1 | 72 |
| Wisconsin | 110 | 84 | 76.4 | 61 | 0 | 73 |
| Bal. of nation | 428 | 334 | 78.0 | 241 | 2 | 73 |
| Total | 2,526 | 1,944 | 77.0 | 1,260 | 22 | 66 |

All Households with Children

| Study Area | Listed Other Adults | Selected | Average Selection Rate (%) | Interviewed | Ineligible | Response Rate (%) |
|---|---|---|---|---|---|---|
| Alabama | 249 | 209 | 83.9 | 132 | 1 | 63 |
| California | 720 | 522 | 72.5 | 316 | 11 | 62 |
| Colorado | 320 | 253 | 79.1 | 168 | 0 | 66 |
| Florida | 256 | 201 | 78.5 | 128 | 0 | 64 |
| Massachusetts | 299 | 235 | 78.6 | 141 | 1 | 60 |
| Michigan | 423 | 333 | 78.7 | 243 | 4 | 74 |
| Minnesota | 468 | 378 | 80.8 | 278 | 4 | 74 |
| Mississippi | 201 | 171 | 85.1 | 104 | 0 | 61 |
| New Jersey | 379 | 290 | 76.5 | 158 | 5 | 55 |
| New York | 373 | 285 | 76.4 | 166 | 4 | 59 |
| Texas | 289 | 225 | 77.9 | 160 | 1 | 71 |
| Washington | 266 | 216 | 81.2 | 149 | 2 | 70 |
| Wisconsin | 368 | 293 | 79.6 | 230 | 0 | 78 |
| Bal. of nation | 929 | 733 | 78.9 | 545 | 4 | 75 |
| Total | 5,540 | 4,344 | 78.4 | 2,918 | 37 | 68 |

Note: Other adults are adults who are neither the MKA of a sample child nor the spouse of such a person, nor the parent of a child under age 18 in the household.
Table 5-15. Subsampling and Extended Interviews of Adults in Adult-Only Telephone and Nontelephone Households

Low-Income Adult-Only Households

| Study Area | Listed Adults | Selected | Average Selection Rate (%) | Interviewed | Ineligible | Response Rate (%) |
|---|---|---|---|---|---|---|
| Alabama | 752 | 474 | 63.0 | 360 | 11 | 78 |
| California | 933 | 582 | 62.4 | 415 | 14 | 73 |
| Colorado | 632 | 400 | 63.3 | 314 | 6 | 80 |
| Florida | 633 | 400 | 63.2 | 308 | 6 | 78 |
| Massachusetts | 740 | 457 | 61.8 | 317 | 10 | 71 |
| Michigan | 751 | 493 | 65.6 | 370 | 10 | 77 |
| Minnesota | 867 | 565 | 65.2 | 473 | 16 | 86 |
| Mississippi | 684 | 439 | 64.2 | 333 | 12 | 78 |
| New Jersey | 520 | 305 | 58.7 | 198 | 6 | 66 |
| New York | 595 | 370 | 62.2 | 265 | 11 | 74 |
| Texas | 558 | 341 | 61.1 | 265 | 10 | 80 |
| Washington | 558 | 341 | 61.1 | 265 | 9 | 80 |
| Wisconsin | 638 | 417 | 65.4 | 344 | 7 | 84 |
| Bal. of nation | 2,173 | 1,394 | 64.2 | 1,155 | 24 | 84 |
| Total | 11,034 | 6,978 | 63.2 | 5,382 | 152 | 79 |

All Adult-Only Households

| Study Area | Listed Adults | Selected | Average Selection Rate (%) | Interviewed | Ineligible | Response Rate (%) |
|---|---|---|---|---|---|---|
| Alabama | 1,458 | 893 | 61.2 | 680 | 17 | 78 |
| California | 2,119 | 1,283 | 60.5 | 896 | 24 | 71 |
| Colorado | 1,539 | 937 | 60.9 | 731 | 16 | 79 |
| Florida | 1,266 | 769 | 60.7 | 582 | 16 | 77 |
| Massachusetts | 2,006 | 1,200 | 59.8 | 825 | 31 | 71 |
| Michigan | 2,153 | 1,299 | 60.3 | 989 | 24 | 78 |
| Minnesota | 2,725 | 1,661 | 61.0 | 1,351 | 40 | 83 |
| Mississippi | 1,224 | 767 | 62.7 | 592 | 22 | 79 |
| New Jersey | 1,763 | 1,024 | 58.1 | 685 | 31 | 69 |
| New York | 1,386 | 848 | 61.2 | 596 | 24 | 72 |
| Texas | 1,060 | 645 | 60.8 | 504 | 17 | 80 |
| Washington | 1,512 | 887 | 58.7 | 683 | 27 | 79 |
| Wisconsin | 1,967 | 1,200 | 61.0 | 967 | 21 | 82 |
| Bal. of nation | 4,658 | 2,846 | 61.1 | 2,280 | 60 | 82 |
| Total | 26,836 | 16,259 | 60.6 | 12,361 | 370 | 78 |
Table 5-16. Sources of Adult Telephone and Nontelephone Interviews

| Study Area | MKA for Sample Child | Spouse of MKA | Emancipated Minors | Spouse of Emancipated Minor | Other Adults in Household with Children: Interviewed | Other Adults in Household with Children: Spouse | Adults in Household without Children: Interviewed | Adults in Household without Children: Spouse | Total Adult Extended Interviews |
|---|---|---|---|---|---|---|---|---|---|
| Alabama | 1,367 | 918 | 2 | 0 | 132 | 12 | 680 | 291 | 2,181 |
| California | 1,963 | 1,446 | 6 | 0 | 316 | 52 | 896 | 342 | 3,181 |
| Colorado | 2,115 | 1,660 | 2 | 1 | 168 | 11 | 731 | 317 | 3,016 |
| Florida | 1,467 | 1,045 | 4 | 3 | 128 | 10 | 582 | 263 | 2,181 |
| Massachusetts | 1,713 | 1,243 | 1 | 0 | 141 | 11 | 825 | 336 | 2,680 |
| Michigan | 2,205 | 1,632 | 0 | 0 | 243 | 30 | 989 | 431 | 3,437 |
| Minnesota | 2,173 | 1,711 | 1 | 0 | 278 | 13 | 1,351 | 643 | 3,803 |
| Mississippi | 1,266 | 817 | 0 | 0 | 104 | 7 | 592 | 259 | 1,962 |
| New Jersey | 2,185 | 1,611 | 1 | 0 | 158 | 22 | 685 | 302 | 3,029 |
| New York | 1,612 | 1,129 | 3 | 1 | 166 | 15 | 596 | 230 | 2,377 |
| Texas | 1,665 | 1,244 | 1 | 1 | 160 | 25 | 504 | 214 | 2,330 |
| Washington | 1,867 | 1,417 | 0 | 0 | 149 | 10 | 683 | 318 | 2,699 |
| Wisconsin | 2,158 | 1,690 | 0 | 0 | 230 | 14 | 967 | 469 | 3,355 |
| Bal. of nation | 4,746 | 3,477 | 5 | 2 | 545 | 63 | 2,280 | 1,004 | 7,576 |
| Total | 28,502 | 21,040 | 26 | 8 | 2,918 | 295 | 12,361 | 5,419 | 43,807 |
6. CONCLUSION
The primary goal of this report has been to describe the features of the sampling procedures used
in the 2002 NSAF. Because this is the third round of data collection for the NSAF, we have also
pointed out how the Round 3 design is similar to the designs used in the previous rounds and
where it contains a number of important changes.
The design for the survey is complicated because it uses a dual-frame approach with a sample of
telephone households selected in each study area combined with a sample of nontelephone
households. While a dual-frame design was used in Round 1 and Round 2, a very important
change in the design in Round 3 was the elimination of separate nontelephone samples in each
study area. The nontelephone sample in Round 3 was designed to produce national estimates of
nontelephone households that could be combined with the RDD sample to produce national
estimates of all households. The study area estimates will be based entirely on the RDD sample,
using methods described in 2002 NSAF Sample Estimation Survey Weights, Report No. 3.
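The weighting procedures themselves are described in Report No. 3, but the logic of combining the two samples can be sketched simply: telephone and nontelephone households are non-overlapping groups, so a national estimate adds the weighted contributions of the RDD and nontelephone cases, while study-area estimates rely on the RDD cases alone. The Python sketch below is only an illustration of that structure, with a hypothetical record layout; it is not the NSAF estimation procedure.

```python
# Rough illustration of dual-frame combination; not the NSAF weighting method.
# Each record is a dict with a final weight and an analysis variable "y";
# the weights are assumed to already reflect selection probabilities and
# nonresponse adjustments within each frame.

def weighted_total(records):
    """Weighted total of the characteristic y over a list of sample records."""
    return sum(r["weight"] * r["y"] for r in records)

def national_total(rdd_records, nontelephone_records):
    """National estimate: RDD (telephone) cases plus nontelephone cases."""
    return weighted_total(rdd_records) + weighted_total(nontelephone_records)

def study_area_total(rdd_records):
    """Study-area estimate: RDD cases only, with study-area weights."""
    return weighted_total(rdd_records)
```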
Another very important change in the sample design for Round 3 was the combining of the
Milwaukee, Wisconsin, and the balance-of-Wisconsin study areas into one study area for all of
Wisconsin.
As discussed in the report, many of the procedures used in Round 3 were based on experience
from the earlier rounds. The data from Round 1 were used to optimize the sample design for
Round 2, and those research results were carried forward directly to Round 3. In addition, the
earlier data collection efforts provided the parameter estimates that served as the initial
assumptions in designing the Round 3 sample. Estimates of response rates, residency rates,
nontelephone eligibility rates, and incoming switching are just a few of the parameters derived
from the earlier rounds.
As in previous rounds, the data collection effort was monitored and deviations from the assumed
rates were tracked. Based on these tracking data, changes were made to the sample during the
data collection period: the sample size for the RDD component was increased in some study
areas, and the sampling rates were modified as deemed necessary. One procedure introduced in
the RDD sample for Round 3 as a result of this monitoring was refusal subsampling. Under this
procedure, telephone numbers that resulted in refusals at the screener level were subsampled, and
only the retained telephone numbers were included in refusal conversion efforts.
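As an illustration of the mechanics of refusal subsampling, the sketch below retains a random subset of screener-level refusals for conversion attempts; the retention rate shown is hypothetical, and in practice the weights of the retained numbers must be adjusted upward to represent the refusals that are dropped.

```python
# Illustrative sketch of refusal subsampling; the retention rate is hypothetical.
import random

def subsample_refusals(refusal_numbers, retention_rate, seed=2002):
    """Randomly retain a subset of screener-refusal telephone numbers.
    Only the retained numbers are sent back for refusal conversion;
    their weights are later inflated by 1 / retention_rate."""
    rng = random.Random(seed)
    return [number for number in refusal_numbers if rng.random() < retention_rate]

# Example: keep roughly half of the screener refusals for conversion attempts.
retained = subsample_refusals(["202-555-0101", "202-555-0102"], retention_rate=0.5)
```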
The sampling procedures used in Round 3 have important implications both for producing
Round 3 estimates and for estimating change from 1997 or 1999 to 2002. In particular, the
elimination of the nontelephone samples in the study areas complicates analysis, because two sets
of weights are required in Round 3: one for producing national estimates and another for
producing study-area estimates. These estimation issues are addressed in 2002 NSAF Sample
Estimation Survey Weights, Report No. 3, and 2002 NSAF Variance Estimation, Report No. 4.
REFERENCES
Methodology References
Brick, M. Forthcoming. 2002 NSAF Sample Estimation Survey Weights. Methodology
Report No. 3.
Brick, M. Forthcoming. 2002 NSAF Variance Estimation. Methodology Report No. 4.
Brick, M. Forthcoming. 2002 NSAF Response Rates and Methods Evaluation.
Methodology Report No. 7.
Flores-Cervantes, Ismael, J. Michael Brick, and Ralph DiGaetano. 1999. 1997 NSAF
Variance Estimation. Methodology Report No. 4.
Judkins, David, Gary Shapiro, J. Michael Brick, Ismael Flores-Cervantes, David Ferraro,
Teresa Strickler, and Joseph Waksberg. 1999. 1997 NSAF Sample Design. Methodology
Report No. 2.
Judkins, David, J. Michael Brick, Pam Broene, David Ferraro, and Teresa Strickler. 2001.
1999 NSAF Sample Design. Methodology Report No. 2.
General References
Brick, J. Michael, J. Montaquila, and Fritz Scheuren. 2002. “Estimating Residency Rates
for Undetermined Telephone Numbers.” Public Opinion Quarterly 66: 18–39.
Brick, J. Michael, J. Waksberg, D. Kulp, and A. Starer. 1995. “Bias in List-Assisted
Telephone Surveys.” Public Opinion Quarterly 59(2): 218–35.
Casady, R., and J. Lepkowski. 1993. “Stratified Telephone Survey Designs.” Survey
Methodology 19: 103–13.
Ferraro, D., and J. Michael Brick. 2001. “Weighting for Nontelephone Households in RDD
Surveys.” Proceedings of the Survey Research Methods Section of the American Statistical
Association [CD-ROM], Alexandria, VA: American Statistical Association.
Giesbrecht, L.H., D.W. Kulp, and A.W. Starer. 1996. “Estimating Coverage Bias in RDD
Samples with Current Population Survey Data.” Proceedings of the Survey Research
Methods Section, American Statistical Association (503–8).
Kalton, G., D. Kasprzyk, and D. McMillen. 1989. “Nonsampling Errors in Panel Surveys.”
In Panel Surveys, edited by Kasprzyk, Duncan, Kalton, and Singh (249–70). New York:
John Wiley and Sons.
Tucker, C., J.M. Lepkowski, and L. Piekarski. 2002. “The Current Efficiency of List-Assisted Telephone Sampling Designs.” Public Opinion Quarterly 66: 321–38.
Waksberg, J. 1978. “Sampling Methods for Random Digit Dialing.” Journal of the
American Statistical Association 73: 40–46.
Waksberg, J., J.M. Brick, G. Shapiro, I. Flores-Cervantes, and B. Bell. 1997. “Dual-Frame
RDD and Area Sample for Household Survey with Particular Focus on Low-Income
Population.” Proceedings of the Survey Research Methods Section of the American
Statistical Association (713–18).
U.S. Bureau of the Census. 2000. Technical Paper 63, Current Population Survey: Design
and Methodology. http://www.bls.census.gov/cps/tp/tp63.htm.