When you cannot run an A/B test, synthetic control is a practical way to estimate causal impact from observational data. But it leans on assumptions that are easy to state and hard to verify, and most prior theory effectively assumes those assumptions keep holding after the intervention, even though the model is learned only from pre-intervention data.
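To make the mechanics concrete, here is a minimal sketch of the classical synthetic-control recipe: fit convex weights over untreated "donor" units so the weighted combination tracks the treated unit before the intervention, then use those weights to project a post-intervention counterfactual. The toy data, variable names, and solver choice below are illustrative assumptions, not anything taken from the paper.

```python
# Minimal synthetic-control sketch on toy data (illustrative, not the paper's method).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy panel: 20 pre-intervention periods, 10 post, 5 donor units.
T_pre, T_post, n_donors = 20, 10, 5
donors_pre = rng.normal(size=(T_pre, n_donors)).cumsum(axis=0)
donors_post = donors_pre[-1] + rng.normal(size=(T_post, n_donors)).cumsum(axis=0)

true_weights = np.array([0.5, 0.3, 0.2, 0.0, 0.0])
effect = 2.0  # true post-intervention lift we hope to recover
treated_pre = donors_pre @ true_weights + rng.normal(scale=0.1, size=T_pre)
treated_post = donors_post @ true_weights + effect + rng.normal(scale=0.1, size=T_post)

def fit_weights(X_pre, y_pre):
    """Find convex weights (non-negative, summing to 1) that make the donor
    combination track the treated unit over the pre-intervention window."""
    n = X_pre.shape[1]
    loss = lambda w: np.sum((y_pre - X_pre @ w) ** 2)
    res = minimize(
        loss,
        x0=np.full(n, 1.0 / n),
        bounds=[(0.0, 1.0)] * n,
        constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
        method="SLSQP",
    )
    return res.x

w = fit_weights(donors_pre, treated_pre)
counterfactual_post = donors_post @ w                # what "no intervention" would have looked like
att = np.mean(treated_post - counterfactual_post)    # average effect on the treated unit
print("estimated weights:", np.round(w, 2))
print("estimated effect:", round(att, 2), "(true:", effect, ")")
```

The key point the article stresses is that the weights are learned entirely on pre-intervention data, so everything downstream rests on those fitted relationships continuing to hold afterwards.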
This article explains a newer, non-parametric view of synthetic control that does not rely on linear factor models and instead grounds identifiability in invariant causal mechanisms, framed explicitly in Pearl’s structural causal model language. It then tackles the uncomfortable question practitioners actually face: what if there are latent causes or proxies you did not include? The paper proposes a general sensitivity analysis framework that bounds the potential bias from these kinds of violations and demonstrates it on both simulated and real datasets.
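The paper's actual sensitivity framework is not reproduced here, but the flavor of a bias bound can be illustrated generically: pick a sensitivity parameter that caps how much an omitted latent cause could shift the counterfactual, then report the range of effect estimates consistent with that cap. The parameter name, the additive form of the bound, and the placeholder estimate below are assumptions for illustration only, not the paper's construction.

```python
# Generic sensitivity-interval illustration (NOT the paper's framework).
def sensitivity_interval(att_hat: float, delta: float) -> tuple[float, float]:
    """Worst-case bounds on the effect if an unmodeled latent cause could
    shift the counterfactual by at most +/- delta on average."""
    return att_hat - delta, att_hat + delta

att_hat = 2.0  # placeholder: plug in the estimate from the fit above
for delta in (0.0, 0.5, 1.0, 2.5):
    lo, hi = sensitivity_interval(att_hat, delta)
    flips = "sign could flip" if lo < 0.0 < hi or hi < 0.0 < lo else "sign is stable"
    print(f"delta={delta}: effect in [{lo:.2f}, {hi:.2f}] ({flips})")
```

The practical reading is the same as in any sensitivity analysis: how large would the violation have to be before the qualitative conclusion changes?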
If you use synthetic control in the real world and want to know when to trust the estimate and how wrong it could be when assumptions bend, the full piece is worth the click.