The Difference-in-Difference (DID) technique comes from the field of econometrics, but the logic behind the technique was used as far back as the 1850s by John Snow and is sometimes referred to as a 'controlled before-and-after study' in the social sciences.
DID is a quasi-experimental design that uses longitudinal data from treatment and control groups to obtain an appropriate counterfactual result to estimate a causal effect. DID is typically used to gauge the impact of a particular intervention or treatment (e.g., law-making, policy-making, or large-scale program implementation) by comparing changes in outcomes over time between a population participating in a program (the intervention group) and a population that is not (the control group).
Figure 1. Difference-in-difference estimate, graphic explanation
DID is used in observational settings in which exchangeability between the treatment and control groups cannot be assumed. DID relies on a weaker exchangeability assumption, i.e., that in the absence of treatment, the unobserved differences between the treatment and control groups are constant over time. Difference-in-differences is therefore a useful technique when randomization at the individual level is not possible. DID requires pre- and post-intervention data, such as cohort or panel data (data on individuals over time) or repeated cross-sectional data (at the individual or group level). The approach removes biases in post-intervention comparisons between the treatment and control groups that could result from permanent differences between those groups, as well as biases in comparisons over time within the treatment group that could result from trends due to other causes of the outcome.
Causal effect (Y^(a=1) − Y^(a=0))
DID is usually used to estimate the treatment effect on the treated (the causal effect among the exposed), although with stronger assumptions the technique can be used to estimate the average treatment effect (ATE), i.e., the causal effect in the population. Further information can be found in the article by Lechner 2011.
In order to estimate a causal effect, three assumptions must hold: exchangeability, positivity, and the stable unit treatment value assumption (SUTVA)1. DID estimation also requires the following:
Intervention independent of the outcome at the start of the study (allocation of the intervention was not determined by outcome)
Treatment / intervention and control groups show parallel outcome trends (see below for details)
Composition of intervention and comparison groups is stable for repeated cross-sectional design (part of SUTVA)
No spillover effects (part of SUTVA)
Parallel trend assumption
The parallel trend assumption is the most critical of the above four assumptions for ensuring the internal validity of DID models, and it is the most difficult to fulfill. It requires that, in the absence of treatment, the difference between the "treatment" and "control" groups remain constant over time. Although there is no statistical test of this assumption, visual inspection is useful when you have observations over many time points. It has also been suggested that the shorter the time period examined, the more likely the assumption is to hold. Violation of the parallel trend assumption leads to a biased estimate of the causal effect.
Fulfillment of the parallel trend assumption 2
Violation of the parallel trend assumption 3
Y = β0 + β1 * [time] + β2 * [intervention] + β3 * [time * intervention] + β4 * [covariates] + ε
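The interaction coefficient β3 in the regression above is the DID estimate itself: in the saturated two-group, two-period case with no covariates, it equals the change in the treated group minus the change in the control group. The following minimal sketch, using hypothetical group means (not from any real study), shows that arithmetic directly:

```python
# Minimal sketch of the 2x2 difference-in-differences arithmetic,
# using hypothetical outcome means for illustration only.

def did_estimate(treat_pre, treat_post, control_pre, control_post):
    """DID estimate: change in the treated group minus change in the
    control group. In the saturated model (no covariates) this equals
    the interaction coefficient beta3 in
    Y = b0 + b1*time + b2*intervention + b3*(time*intervention) + e."""
    return (treat_post - treat_pre) - (control_post - control_pre)

# Hypothetical group means before/after an intervention:
treat_pre, treat_post = 10.0, 18.0      # treated group rises by 8
control_pre, control_post = 9.0, 12.0   # control group rises by 3 (background trend)

effect = did_estimate(treat_pre, treat_post, control_pre, control_post)
print(effect)  # 5.0: the 8-point change minus the 3-point background trend
```

Note how the control group's 3-point change serves as the counterfactual trend that is subtracted from the treated group's observed change.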
Strengths and limitations
Strengths:
- Can estimate a causal effect from observational data when the assumptions are met
- Can use both individual-level and group-level data
- Comparison groups can start at different levels of the outcome (DID focuses on changes rather than absolute values)
- Accounts for change due to factors other than the intervention

Limitations:
- Requires baseline data and a non-intervention group
- Cannot be used if the intervention allocation was determined by the outcome
- Cannot be used if the comparison groups have different outcome trends (Abadie 2005 proposed a solution)
- Cannot be used if the composition of the groups is not stable before/after the intervention
Recommended course of action
Make sure the outcome trend did not influence the treatment/intervention allocation
Collect more data points before and after the intervention to assess the parallel trend assumption
Use a linear probability model to aid interpretability
Examine the composition of the population in the treatment/intervention and control groups before and after the intervention
Use robust standard errors to account for autocorrelation between pre/post observations on the same individual
Perform subanalyses to see whether the intervention had similar or different effects on components of the outcome
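One simple way to act on the advice about collecting more pre-intervention data points is to compare the pre-period outcome slopes of the two groups: if the slopes are similar, the parallel trend assumption is more plausible. This is a rough numeric sketch with hypothetical pre-period means (the years, values, and near-zero threshold are all illustrative assumptions, not a formal test):

```python
# Rough numeric check of the parallel-trend assumption: compare the
# pre-intervention outcome slopes of the treated and control groups.
# All numbers below are hypothetical, for illustration only.

def slope(xs, ys):
    """Ordinary least-squares slope of ys regressed on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

years = [2008, 2009, 2010, 2011]       # pre-intervention periods
treated = [10.1, 11.0, 12.1, 12.9]     # treated-group outcome means per year
control = [8.9, 10.0, 10.9, 12.0]      # control-group outcome means per year

gap = slope(years, treated) - slope(years, control)
print(round(gap, 3))  # small gap (here about -0.07) -> pre-trends look roughly parallel
```

A visual plot of the two series over the pre-period, as suggested above, remains the standard diagnostic; the slope comparison merely puts a number on what the plot shows.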
Epi6 presentation in class April 30, 2013
1. Rubin, DB. Randomization Analysis of Experimental Data: The Fisher Randomization Test. Journal of the American Statistical Association. 1980.
3. Ashenfelter, Orley. Estimating the Effect of Training Programs on Earnings. Review of Economics and Statistics. 1978.
Textbooks & Chapters
Angrist, J. & Pischke, J. Mostly Harmless Econometrics: Chapter 5.2 (pp. 169-182)
The World Bank, Impact Evaluation in Practice: Chapter 6.
This publication gives a very accessible overview of DID estimation from a health program evaluation perspective. It also includes a section on best practices for each of the methods described.
Bertrand, M., Duflo, E., & Mullainathan, S. How much should we trust estimates of differences in differences? Quarterly journal of economics. 2004.
Cao, Zhun, et al. Difference-in-difference and instrumental variable approaches. An alternative and supplement to propensity score matching for estimating treatment effects. CER Issue Brief: 2011.
Lechner, Michael. Estimating causal effects using difference-in-difference methods. Economics, University of St. Gallen. 2011.
Norton, Edward C. Interaction Terms in Logit and Probit Models. UNC at Chapel Hill. AcademyHealth. 2004.
Abadie, Alberto. Semiparametric Difference-in-Differences Estimators. Review of Economic Studies. 2005.
This article discusses the parallel trend assumption in detail and suggests a weighting method for DID when the parallel trend assumption may not apply.
Examples of generalized linear regression:
- Branas, Charles C. et al. A difference-in-differences analysis of the health, safety and greening of empty urban spaces. American Journal of Epidemiology. 2011.
- Harman, Jeffrey et al. Changes in per member per month expenditures after implementation of Florida's Medicaid reform demonstration. Health Services Research. 2011.
- Wharam, Frank et al. Emergency department use and subsequent hospitalizations among members of a high-deductible health plan. JAMA. 2007.
Examples of logistic regressions:
- Bendavid, Eran, et al. HIV development aid and adult mortality in Africa. JAMA. 2012
- Carlo, Waldemar A. et al. Training in newborn care and perinatal mortality in developing countries. NEJM. 2010.
- Man, Gerry. The Impact of Cost-Sharing on Access to Childless Adult Care. Health Services Research. 2010.
- King, Marissa, et al. Medical School Gift Restriction Policies and Physician Prescribing of Commercial Psychotropic Drugs: Difference-in-Difference Analysis. BMJ. 2013.
- Li, Rui, et al. Self-monitoring of blood glucose before and after Medicare expansion among patients with diabetes who do not use insulin. AJPH. 2008.
- Ryan, Andrew et al. The effect of Phase 2 of the Premier Hospital Quality Incentive Demonstration on incentive payments to hospitals caring for disadvantaged patients. Health Services Research. 2012.
Examples of linear probability:
- Bradley, Cathy et al. Surgery wait times and specialty services for insured and uninsured breast cancer patients: does hospital safety net status matter? HSR: Health Services Research. 2012.
- Monheit, Alan et al. How have state policies to expand dependent coverage affected the health insurance status of young adults? HSR: Health Services Research. 2011.
- Afendulis, Christopher et al. The Influence of Medicare Part D on Hospital Admission Rates. Health Services Research. 2011.
- Domino, Marisa. Increasing time costs and copayments for prescription drugs: an analysis of policy changes in a complex environment. Health Services Research. 2011.
- Card, David and Alan Krueger. Minimum Wages and Employment: A Case Study of the Fast-Food Industry in New Jersey and Pennsylvania. The American Economic Review. 1994.
- Di Tella, Rafael and Schargrodsky, Ernesto. Do Police Reduce Crime? Estimates Using the Allocation of Police Forces After a Terrorist Attack. American Economic Review. 2004.
- Galiani, Sebastian et al. Water for Life: The Impact of the Privatization of Water Services on Child Mortality. Journal of Political Economy. 2005.
Statistical software (example R and Stata code)
National Bureau of Economic Research
What's New in Econometrics? Summer Institute 2007.
Lecture 10: Differences-in-Differences
Lecture notes and video recordings mainly focusing on the theory and mathematical assumptions of the difference-in-differences technique and its extensions.