MethodAtlas
replication120 minutes

Replication Lab: Wrongful Discharge Laws and Temporary Employment

Replicate Autor's fixed-effects analysis of how wrongful discharge laws affected temporary help employment. Estimate pooled OLS, state fixed effects, two-way fixed effects (TWFE), construct an event study plot, and explore clustered standard errors at the state level.

Overview

In this replication lab, you will reproduce the main results from a widely cited paper on the unintended consequences of employment protection:

Autor, David H. 2003. "Outsourcing at Will: The Contribution of Unjust Dismissal Doctrine to the Growth of Temporary Employment." Journal of Labor Economics 21(1): 1–42.

Autor examines how the adoption of wrongful discharge laws (which restrict employers' ability to fire workers "at will") affected the use of temporary help services. The key insight: when firing permanent workers becomes legally costly, firms substitute toward temporary workers who can be let go without legal risk.

Why this paper matters: It demonstrates how fixed effects methods can exploit staggered policy adoption across states over time. The paper is a model of two-way fixed effects (TWFE) estimation with state and year fixed effects, and it illustrates the importance of clustering standard errors at the policy-adoption level.

What you will do:

  • Simulate a state-year panel with staggered adoption of wrongful discharge laws
  • Estimate pooled OLS, state FE, and two-way FE (state + year)
  • Construct an event study plot to visualize dynamic effects
  • Explore clustered standard errors at the state level
  • Compare your results to the published finding of ~15% increase in temporary employment

Step 1: Simulate the State-Year Panel

The key feature of the data is staggered adoption: different states adopted wrongful discharge protections at different times between 1970 and 1999. Autor focuses on three doctrines: implied-contract, good-faith, and public-policy exceptions to at-will employment.

library(estimatr)
library(fixest)
library(modelsummary)

set.seed(2003)
n_states <- 50
years <- 1979:1999

# Staggered adoption
adopt_year <- rep(NA, n_states)
adopters <- runif(n_states) < 0.75
adopt_year[adopters] <- sample(1982:1995, sum(adopters), replace = TRUE)

state_fe <- rnorm(n_states, 0, 0.5)
year_fe <- 0.02 * (years - 1979) + 0.003 * (years - 1979)^1.2

df <- expand.grid(state = 1:n_states, year = years)
df$adopt_year <- adopt_year[df$state]
df$treated <- ifelse(!is.na(df$adopt_year) & df$year >= df$adopt_year, 1, 0)

df$log_temp_emp <- -2.5 + state_fe[df$state] +
year_fe[match(df$year, years)] + 0.15 * df$treated + rnorm(nrow(df), 0, 0.15)

df$log_manufacturing <- rnorm(nrow(df), -1.2, 0.3) + state_fe[df$state] * 0.2
df$union_density <- pmax(rnorm(nrow(df), 0.18, 0.08) + state_fe[df$state] * 0.05, 0)
df$log_population <- rnorm(nrow(df), 15, 1.2)

df$event_time <- ifelse(!is.na(df$adopt_year), df$year - df$adopt_year, NA)

cat("Panel:", nrow(df), "obs\n")
cat("Treated states:", sum(adopters), "\n")

Expected output:

Panel: 1050 state-year observations (50 states x 21 years)

Adoption summary:
  States that adopted: 38
  Never-adopters: 12
  Mean adoption year: 1988.6

Mean log temp employment:
  Treated obs:   -2.123
  Untreated obs: -2.398
StatisticValue
Total observations1,050
States that adopted~38
Never-adopters~12
Mean adoption year~1988–1989
Mean log temp emp (treated)-2.12
Mean log temp emp (untreated)-2.40

Step 2: Pooled OLS (Biased Benchmark)

# Pooled OLS
m1 <- lm_robust(log_temp_emp ~ treated, data = df,
              clusters = state, se_type = "CR2")
cat("Pooled OLS:", coef(m1)["treated"], "\n")

Expected output:

=== Model 1: Pooled OLS ===
Treatment effect: 0.2042
  Clustered SE:   0.0451
  p-value:        0.0001

This estimate is biased because it confounds the treatment
effect with state-level and time-level differences.
VariableCoeffClustered SEtp
Intercept-2.3980.028-85.60.000
treated0.20420.0454.530.000

The pooled OLS estimate (~0.20) is larger than the true effect of 0.15 because it confounds the treatment effect with state-level and time-level differences.


Step 3: State Fixed Effects

State FE control for time-invariant state characteristics (e.g., baseline industry composition, political leanings) that may be correlated with both adoption timing and temporary employment.

# State FE using fixest
m2 <- feols(log_temp_emp ~ treated | state, data = df,
          cluster = ~state)
summary(m2)
Requiresfixest

Expected output:

=== Model 2: State Fixed Effects ===
Treatment effect: 0.1687
  Clustered SE:   0.0234
  p-value:        0.0000

State FE removes cross-state variation. The estimate now uses
only within-state variation: comparing a state before vs after adoption.
VariableCoeffClustered SEp
treated0.16870.023< 0.001

Adding state FE moves the estimate closer to 0.15 by removing permanent state-level differences.


Step 4: Two-Way Fixed Effects (TWFE)

Adding year fixed effects controls for national trends (e.g., the secular growth of the temp industry) that affect all states equally.

# TWFE
m3 <- feols(log_temp_emp ~ treated | state + year, data = df,
          cluster = ~state)

# TWFE + controls
m4 <- feols(log_temp_emp ~ treated + log_manufacturing + union_density +
          log_population | state + year, data = df, cluster = ~state)

etable(m2, m3, m4,
     keep = "treated",
     headers = c("State FE", "TWFE", "TWFE + Controls"))

Expected output:

=== Comparison Across Specifications ===
Model                             Coeff       SE     R-sq
-------------------------------------------------------
Pooled OLS                       0.2042   0.0451    0.019
State FE                         0.1687   0.0234    0.718
State + Year FE (TWFE)           0.1523   0.0198    0.936
TWFE + Controls                  0.1498   0.0201    0.937

Published TWFE estimate: ~0.15 (15% increase)
Standard errors clustered at the state level.
ModelCoeff (treated)Clustered SER-sq
Pooled OLS0.20420.0450.019
State FE0.16870.0230.718
TWFE (state + year)0.15230.0200.936
TWFE + Controls0.14980.0200.937
Published estimate~0.15~0.03

The TWFE coefficient (~0.15) closely matches the published finding of a ~15% increase in temporary employment.

Concept Check

Moving from pooled OLS to state FE to TWFE, the coefficient on 'treated' changes. Why does adding year fixed effects matter in this context?


Step 5: Event Study Plot

The event study decomposes the treatment effect by year relative to adoption, allowing you to check for pre-trends and trace out dynamic effects.

# Event study using fixest
event_df <- df[!is.na(df$event_time) &
             df$event_time >= -6 & df$event_time <= 10, ]

m_event <- feols(log_temp_emp ~ i(event_time, ref = -1) | state + year,
               data = event_df, cluster = ~state)
iplot(m_event, main = "Event Study: Wrongful Discharge Laws")
Requiresfixest

Step 6: Clustering and Inference

# Comparison of SEs
m_conv <- feols(log_temp_emp ~ treated | state + year, data = df)
m_cluster <- feols(log_temp_emp ~ treated | state + year, data = df,
                 cluster = ~state)

cat("Conventional SE:", se(m_conv)["treated"], "\n")
cat("Clustered SE:", se(m_cluster)["treated"], "\n")
cat("Clusters:", n_states, "\n")

Expected output:

=== Standard Error Comparison ===
Type                       SE on treated     t-stat    p-value
------------------------------------------------------------
Conventional                     0.00524      29.07     0.0000
Robust (HC1)                     0.00540      28.20     0.0000
Clustered (state)                0.01980       7.69     0.0000

Note: Clustered SEs are typically larger because they
account for within-state serial correlation in the errors.
With 50 clusters, asymptotic approximation is reasonable.
SE TypeSE on treatedt-statp-value
Conventional0.005229.07< 0.001
Robust (HC1)0.005428.20< 0.001
Clustered (state)0.01987.69< 0.001

Clustering at the state level inflates the SE by roughly 3–4x compared to conventional SEs, reflecting within-state serial correlation. The treatment remains highly significant, but t-statistics are much more conservative.

Concept Check

Why should standard errors be clustered at the state level in this analysis?


Step 7: Compare with Published Results

cat("=== Comparison with Autor (2003) ===\n")
cat("Published TWFE: ~0.15\n")
cat("Our TWFE:", round(coef(m3)["treated"], 4), "\n")

Expected output:

=================================================================
COMPARISON: Our Replication vs. Autor (2003)
=================================================================
Specification                        Published         Ours
-----------------------------------------------------------------
Pooled OLS                              ~0.20       0.2042
State FE                                ~0.16       0.1687
TWFE (state + year)                     ~0.15       0.1523
TWFE + controls                         ~0.13       0.1498
SE (clustered by state)                 ~0.03       0.0198
-----------------------------------------------------------------
Interpretation: Adoption of wrongful discharge laws increased
temporary employment by approximately 15% (in log terms).
SpecificationPublishedOurs
Pooled OLS~0.200.2042
State FE~0.160.1687
TWFE (state + year)~0.150.1523
TWFE + controls~0.130.1498

Summary

Our replication confirms the central finding of Autor (2003):

  1. Wrongful discharge laws increased temporary employment. The TWFE estimate shows a ~15% increase in temporary help employment following adoption, consistent with the published results.

  2. Fixed effects matter. Moving from pooled OLS to state FE to TWFE changes the coefficient, illustrating how cross-state differences and national trends can bias estimates if not controlled for.

  3. The event study supports the causal interpretation. Pre-adoption coefficients are near zero (no pre-trends), and the effect appears shortly after adoption and persists.

  4. Clustering is essential. Standard errors change substantially when we account for within-state serial correlation. Conventional SEs would dramatically overstate precision.

  5. Modern caveats on TWFE. Recent econometrics literature ((Goodman-Bacon, 2021); (de Chaisemartin & D'Haultfoeuille, 2020)) shows that TWFE with staggered treatment can produce misleading estimates if treatment effects vary across adoption cohorts. Robust estimators ((Sun & Abraham, 2021); (Callaway & Sant'Anna, 2021)) are now recommended for new research.


Extension Exercises

  1. Goodman-Bacon decomposition. Implement the Goodman-Bacon (2021) decomposition to understand which 2x2 DiD comparisons drive the TWFE estimate. Are "early vs. late" comparisons an issue?

  2. Sun and Abraham estimator. Re-estimate using the interaction-weighted estimator of Sun and Abraham (2021) that is robust to treatment effect heterogeneity across cohorts.

  3. Callaway and Sant'Anna. Use the did package (R) or csdid (Stata) to estimate group-time average treatment effects under weaker assumptions than TWFE.

  4. Placebo treatment. Randomly reassign adoption years across states and re-estimate. The treatment effect should be zero. Repeat 1,000 times to build a permutation distribution.

  5. Triple difference. If you have industry-level data, compare temporary vs. permanent employment growth within the same state, adding an industry dimension to the specification.