Syllabus and Reading List

Causal Inference without Experiments:
Empirical Strategies and Examples
PhD course at the NHH Norwegian School of Economics
Lecturer: Gordon Dahl (UC San Diego)
Dates: August 3-7, 2015
Overview: This course will discuss various cutting-edge strategies for obtaining causal estimates
without an experiment, with examples from labor, public finance, health, and education. Methods
that allow for selection on observables as well as selection on unobservables will be covered.
The course will explore the pros and cons of using panel data, social experiments, regression
discontinuity, register data, simulated instrumental variables, and other methods to arrive at
causal estimates. An emphasis will be placed on current best practices, with empirical examples
ranging from the evaluation of social safety net programs, to the effect of educational policy
reforms, to the identification of peer effects.
This course is intended to be both more and less than a course in applied econometrics. It is “less”
in that we will not concentrate too much on formal derivations of estimators. Instead, we will
focus on the properties of various methods and how to implement them. It is “more” than a
course in applied econometrics in that, for each technique, we will study empirical examples in
considerable detail. The goal is to provide a practical guide to the key advantages and
disadvantages of each approach.
Reading list: Below is a preliminary reading list, which is subject to change. We will likely not
be able to cover all subject areas in full detail. Readings marked with a * should be read in
advance of the course if possible; other readings provide additional information and examples.
A. Causal Inference with and without Experiments
*LaLonde, Robert J. (1986), "Evaluating the Econometric Evaluations of Training Programs with
Experimental Data," American Economic Review, 76(4).
Angrist, Joshua D., Guido W. Imbens and Donald B. Rubin (1996), “Identification of Causal
Effects Using Instrumental Variables,” Journal of the American Statistical Association,
91(434).
Angrist, Joshua and Alan Krueger (1999), "Empirical Strategies in Labor Economics," in the
Handbook of Labor Economics, Vol. 3A, O. Ashenfelter and D. Card, eds. Amsterdam:
Elsevier Science.
Duflo, Esther, Rachel Glennerster, and Michael Kremer (2007), “Using Randomization in
Development Economics Research: A Toolkit,” Centre for Economic Policy Research,
Discussion Paper No. 6059.
Muralidharan, Karthik and Venkatesh Sundararaman (2011), “Teacher Performance Pay:
Experimental Evidence from India,” Journal of Political Economy, 119(1).
B. Panel Data, Fixed Effects and Difference-in-Differences
*Bharadwaj, Prashant (2015), “The Impact of Changes in Marriage Law – Implications for
Fertility and School,” Journal of Human Resources, forthcoming.
Abadie, Alberto, Alexis Diamond, and Jens Hainmueller (2010), “Synthetic Control Methods for
Comparative Case Studies: Estimating the Effect of California's Tobacco Control Program,”
Journal of the American Statistical Association, vol. 105, no. 490, 493-505.
Ashenfelter, Orley and Alan Krueger (1994), "Estimates of the Economic Return to Schooling
from a New Sample of Twins", American Economic Review, 84(5), 1157-1173.
Bertrand, Marianne, Esther Duflo, and Sendhil Mullainathan (2004), “How Much Should We
Trust Differences-in-Differences Estimates?” Quarterly Journal of Economics, 119:1, 249-275.
Black, Sandra E. and Philip E. Strahan (2001), “The Division of Spoils: Rent-Sharing and
Discrimination in a Regulated Industry,” American Economic Review, 91(4), 814-831.
Card, David (1990), “The Impact of the Mariel Boatlift on the Miami Labor Market”, Industrial
and Labor Relations Review, 43:245-257.
Cameron, A. Colin and Douglas L. Miller (2015), "A Practitioner's Guide to Cluster-Robust
Inference," Journal of Human Resources, forthcoming, Spring 2015.
C. Propensity Score and Matching
*Imbens, Guido (2014), "Matching Methods in Practice," NBER WP 19959.
Dehejia, Rajeev H. and Sadek Wahba (1999), “Causal Effects in Nonexperimental Studies:
Reevaluating the Evaluation of Training Programs,” Journal of the American Statistical
Association, December, 94:448, 1053-1062.
Rosenbaum, Paul and Donald Rubin (1983), “The Central Role of the Propensity Score in
Observational Studies for Causal Effects,” Biometrika 70:1, 41-55.
Smith, Jeffrey and Petra Todd (2001), “Reconciling Conflicting Evidence on the Performance of
Propensity Score Matching Methods,” American Economic Review, May, 91:2, 112-118.
D. Selection Correction
*Kirkeboen, Lars, Edwin Leuven, and Magne Mogstad (2014), "Field of Study, Earnings, and
Self-Selection," NBER WP 20816.
Dahl, Gordon (2002), “Mobility and the Return to Education: Testing a Roy Model with Multiple
Markets,” Econometrica, Vol. 70, No. 6, pp. 2367-2420.
Heckman, James (1976), “The Common Structure of Statistical Models of Truncation, Sample
Selection and Limited Dependent Variables and a Simple Estimator for Such Models”,
Annals of Economic and Social Measurement 5:475-492.
Lee, David S. (2009), “Training, Wages, and Sample Selection: Estimating Sharp Bounds on
Treatment Effects,” Review of Economic Studies, 76: 1071-1102.
E. Control Functions
*Heckman, James and Salvador Navarro-Lozano (2004) “Using Matching, Instrumental
Variables, and Control Functions to Estimate Economic Choice Models,” The Review of
Economics and Statistics, February 2004, 86:1, 30-57.
Imbens, Guido and Jeffrey Wooldridge (2009) “Recent Developments in the Econometrics of
Program Evaluation,” Journal of Economic Literature, 47:1, 5-86.
F. IV and Simulated IV
*Dahl, Gordon and Lance Lochner (2012), “The Impact of Family Income on Child
Achievement: Evidence from Changes in the Earned Income Tax Credit,” American
Economic Review, Vol. 102, No. 5, pp. 1927-1956.
Angrist, Joshua (1990), "Lifetime Earnings and the Vietnam Era Draft Lottery: Evidence from
Social Security Records," American Economic Review, 80:3.
Angrist, Joshua and Alan B. Krueger (1991), "Does Compulsory School Attendance Affect
Schooling and Earnings?" Quarterly Journal of Economics, 106, pp. 979-1014.
Bound, John, David Jaeger and Regina Baker (1995), "Problems with Instrumental Variables
Estimation when the Correlation Between the Instruments and the Endogenous Explanatory
Variables is Weak," Journal of the American Statistical Association, 90, pp. 443-450.
G. Social Experiments
*Cullen, Julie, Brian Jacob and Steven Levitt (2006), “The Effect of School Choice on
Participants: Evidence from Randomized Lotteries,” Econometrica, 74(5), pp. 1191-1230.
*Duflo, Esther and Emmanuel Saez (2003), “The Role of Information and Social Interactions in
Retirement Decisions: Evidence from a Randomized Experiment,” Quarterly Journal of
Economics, pp. 815-42.
Burtless, Gary (1995), “The Case for Randomized Field Trials in Economic and Policy
Research,” Journal of Economic Perspectives, 9(2), pp. 63-84.
Carrell, Scott, Bruce Sacerdote, and James West (2013), “From Natural Variation to Optimal
Policy? The Importance of Endogenous Peer Group Formation,” Econometrica, 81(3), 855-882.
Heckman, James, Robert LaLonde, and Jeff Smith (1999), “The Economics and Econometrics
of Active Labor Market Programs,” Handbook of Labor Economics, Vol. 3A, O. Ashenfelter
and D. Card, eds. Amsterdam: North Holland, pp. 1865-2097.
H. Regression Discontinuity
*Dahl, Gordon, Katrine Løken and Magne Mogstad (2014), “Peer Effects in Program
Participation,” American Economic Review, Vol. 104, No. 7, pp. 2049-2074.
Hahn, Jinyong, Petra Todd and Wilbert Van der Klaauw (2001), “Identification and Estimation of
Treatment Effects with a Regression-Discontinuity Design,” Econometrica, 69(1), 201-209.
Imbens, Guido and Thomas Lemieux (2007), “Regression Discontinuity Designs: A Guide to
Practice,” NBER Technical Working Paper 337.
Lee, David (2008), “Randomized Experiments from Non-random Selection in U.S. House
Elections,” Journal of Econometrics, 142:2, 675-697.
Lee, David and David Card (2008), “Regression Discontinuity Inference with Specification
Error,” Journal of Econometrics, 142:2, 655-674.