Refutation of DML IRM inference

This notebook explains the refutation tests implemented in Causalis. The DGP is taken from the benchmarking notebook.

Result

Treatment share ≈ 0.2036
Ground-truth ATE from the DGP: 0.913
Ground-truth ATT from the DGP: 1.567

Result
   y         d    tenure_months  avg_sessions_week  spend_last_month  premium_user  urban_resident
0  0.689404  0.0  12.130544      4.056687           181.570607        0.0           0.0
1  3.045282  0.0  19.586560      1.671561           182.793598        0.0           0.0
2  7.173595  1.0  39.455103      5.452889           125.185708        1.0           1.0
3  1.926216  0.0  26.327693      5.051629           4.932905          0.0           1.0
4  1.225088  0.0  35.042771      4.933996           23.577407         0.0           0.0

Inference

Result

ATE estimate: 0.9917276396749556
p-value: 0.0
95% CI: (0.869543879249174, 1.1139114001007373)
Ground-truth ATE from the DGP: 0.913

As we can see, the estimate is accurate and the CI bounds include the ground-truth ATE. In real life we cannot compare the estimate with the truth, so we need to check its robustness: run tests on the assumptions and answer questions about the research design.

Overlap

What “overlap/positivity” means

Binary treatment $T \in \{0,1\}$: for all confounder values $x$ in your target population,

$$0 < e(x) := P(T = 1 \mid X = x) < 1,$$

often strengthened to strong positivity: there exists an $\varepsilon > 0$ such that

$$\varepsilon \le e(x) \le 1 - \varepsilon \quad \text{almost surely.}$$

Why it matters

  • Identification: Overlap + unconfoundedness are the two pillars that identify causal effects from observational data. Without overlap, the effect is not identified — you must extrapolate or model-specify what never occurs.

  • Estimation stability: IPW/DR estimators use weights

    $$w_1 = \frac{D}{e(X)}, \qquad w_0 = \frac{1 - D}{1 - e(X)}.$$

    If $e(X)$ is near 0 or 1, these weights explode, causing huge variance and fragile estimates.

  • Target population: With trimming or restriction, you may change who the effect describes — e.g., ATE on the region of common support, not on the full population.

Here is a summary of the overlap diagnostics

Result
    metric               value      flag
0   edge_0.01_below      0.000000   GREEN
1   edge_0.01_above      0.000000   GREEN
2   edge_0.02_below      0.077300   YELLOW
3   edge_0.02_above      0.000400   YELLOW
4   KS                   0.511643   RED
5   AUC                  0.835125   YELLOW
6   ESS_treated_ratio    0.247034   YELLOW
7   ESS_control_ratio    0.327069   GREEN
8   tails_w1_q99/med     38.676284  YELLOW
9   tails_w0_q99/med     20.575638  YELLOW
10  ATT_identity_relerr  0.177229   RED
11  clip_m_total         0.023600   YELLOW
12  calib_ECE            0.018453   GREEN
13  calib_slope          0.889332   GREEN
14  calib_intercept      -0.106806  GREEN

edge_mass

edge_0.01_below, edge_0.01_above, edge_0.02_below, edge_0.02_above are the shares of units whose propensity falls below $\varepsilon$ or above $1 - \varepsilon$, for $\varepsilon = 0.01$ and $\varepsilon = 0.02$.

To keep in mind: DML IRM clips propensities to the interval $[0.02, 0.98]$.

Large shares are dangerous for estimation because they make the IPW weights explode.

Flags in Causalis:

For $\varepsilon = 0.01$: YELLOW if either side $\ge 0.02$ (2%), RED if $\ge 0.05$ (5%).

For $\varepsilon = 0.02$: YELLOW if either side $\ge 0.05$ (5%), RED if $\ge 0.10$ (10%).

Result

{'share_below_001': 0.0, 'share_above_001': 0.0, 'share_below_002': 0.0773, 'share_above_002': 0.0004, 'min_m': 0.01, 'max_m': 0.99}

Result

{'share_below_001_D1': 0.0, 'share_above_001_D0': 0.0, 'share_below_002_D1': 0.010805500982318271, 'share_above_002_D0': 0.00025113008538422905}
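
A minimal sketch of the edge-mass computation, assuming m is an array of estimated propensities (illustrative names, not the Causalis API):

```python
import numpy as np

def edge_mass(m, eps):
    """Shares of units with propensity below eps or above 1 - eps."""
    m = np.asarray(m, dtype=float)
    return {"share_below": float(np.mean(m < eps)),
            "share_above": float(np.mean(m > 1 - eps))}

# Toy check: uniform propensities put ~eps of mass in each edge
rng = np.random.default_rng(0)
m = rng.uniform(size=10_000)
print(edge_mass(m, 0.01), edge_mass(m, 0.02))
```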

KS - Kolmogorov–Smirnov statistic

Here KS is the two-sample Kolmogorov–Smirnov statistic comparing the distributions of the propensities for treated vs control:

$$D = \max_t \big|\hat F_A(t) - \hat F_B(t)\big|$$

Interpretation:

  • $D = 0$: identical distributions (perfect overlap).
  • $D = 1$: complete separation (no overlap).
  • Your value KS = 0.5116 means there exists a threshold $t$ such that the share of treated with $m \le t$ differs from the share of controls with $m \le t$ by ~51 percentage points. That’s why it’s flagged RED (the thresholds mark RED when $D > 0.35$): treatment assignment is highly predictable from covariates ⇒ poor overlap / strong confounding risk.
Result

0.5116427657267132
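
A minimal sketch of the two-sample KS statistic on propensity scores, assuming m_treated and m_control hold the scores in each arm (scipy.stats.ks_2samp computes the same quantity):

```python
import numpy as np

def ks_statistic(m_treated, m_control):
    """Max vertical gap between the two empirical CDFs, evaluated at all
    pooled sample points (where the gap attains its maximum)."""
    a = np.sort(np.asarray(m_treated, dtype=float))
    b = np.sort(np.asarray(m_control, dtype=float))
    grid = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, grid, side="right") / len(a)
    cdf_b = np.searchsorted(b, grid, side="right") / len(b)
    return float(np.max(np.abs(cdf_a - cdf_b)))
```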

AUC

Probability definition (most intuitive)

$$\text{AUC} = \Pr\big(s^+ > s^-\big) + \tfrac12 \Pr\big(s^+ = s^-\big),$$

where $s^+$ is a score from a random positive and $s^-$ from a random negative. So AUC is the fraction of all $n_1 n_0$ positive–negative pairs that are correctly ordered by the score (ties get half-credit).

Rank / Mann–Whitney formulation

  1. Rank all scores together (ascending). If there are ties, assign average ranks within each tied block.

  2. Let $R_1$ be the sum of ranks for the positives.

  3. Compute the Mann–Whitney $U$ statistic for positives:

    $$U_1 = R_1 - \frac{n_1(n_1+1)}{2}.$$

  4. Convert to AUC by normalizing:

    $$\boxed{\text{AUC} = \frac{U_1}{n_1 n_0} = \frac{R_1 - \frac{n_1(n_1+1)}{2}}{n_1 n_0}}$$

    This is exactly what the function returns (with stable sorting and tie-averaged ranks).

ROC-integral view (equivalent)

If $\text{TPR}(t)$ and $\text{FPR}(t)$ trace the ROC curve as the threshold $t$ moves,

$$\text{AUC} = \int_0^1 \text{TPR}\big(\text{FPR}^{-1}(u)\big)\,du,$$

i.e., the geometric area under the ROC.

Properties you should remember

  • Range: $0 \le \text{AUC} \le 1$; 0.5 = random ranking; 1 = perfect separation.
  • Symmetry: $\text{AUC}(s, y) = 1 - \text{AUC}(s, 1-y)$.
  • Monotone invariance: any strictly increasing transform $f$ leaves AUC unchanged (only ranks matter).
  • Ties: averaged ranks ⇒ the $\tfrac12 \Pr(s^+ = s^-)$ term is added automatically.

In the propensity/overlap context

  • A higher AUC means treatment (D) is more predictable from covariates (bad for overlap/positivity).
  • For good overlap you actually want AUC close to 0.5.
Result

0.8351248965136829
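
A minimal sketch of the rank / Mann–Whitney AUC described above (a generic implementation, not the Causalis function itself):

```python
import numpy as np
from scipy.stats import rankdata

def auc_rank(scores, labels):
    """AUC via tie-averaged ranks: U_1 / (n1 * n0)."""
    ranks = rankdata(scores)              # 1-based ranks, ties averaged
    pos = np.asarray(labels).astype(bool)
    n1, n0 = int(pos.sum()), int((~pos).sum())
    u1 = ranks[pos].sum() - n1 * (n1 + 1) / 2
    return u1 / (n1 * n0)
```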

ESS_treated_ratio

Weights used

For ATE-style IPW, the treated-arm weights are

$$w_i = \frac{D_i}{m_i} \quad\Rightarrow\quad w_i = \begin{cases} 1/m_i, & D_i = 1 \\ 0, & D_i = 0 \end{cases}$$

so on the treated subset $\{i : D_i = 1\}$ the weights are simply $1/m_i$.

Effective sample size (ESS)

Given the treated-arm weights $w_1, \ldots, w_{n_1}$ (only for $D = 1$),

$$\mathrm{ESS} = \frac{\left(\sum_{i=1}^{n_1} w_i\right)^2}{\sum_{i=1}^{n_1} w_i^2}.$$

This is exactly what _ess(w) computes.

  • If all treated weights are equal, ESS $= n_1$ (full efficiency).
  • If a few weights dominate, ESS drops (information concentrated in few units).

The reported metric

$$\boxed{\mathrm{ESS}_{\text{treated ratio}} = \frac{\mathrm{ESS}}{n_1} = \frac{\left(\sum_i w_i\right)^2}{n_1 \sum_i w_i^2}}$$

This lies in $(0, 1]$. Near 1 ⇒ well-behaved weights; near 0 ⇒ severe instability.

Why it reflects overlap

When propensities $m_i$ approach 0 for treated units, the weights $1/m_i$ explode → large CV → low ESS_treated_ratio. Hence this metric is a direct, quantitative read on how much usable information remains in the treated group after IPW.

Result

{'ess_w1': 502.960611352384, 'ess_w0': 2604.778026253534, 'ess_ratio_w1': 0.2470336990925265, 'ess_ratio_w0': 0.32706906407000674}
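
A minimal sketch of the ESS computation, assuming m and d are the estimated propensities and the treatment indicator (illustrative, not the Causalis _ess itself):

```python
import numpy as np

def ess(w):
    """Kish effective sample size: (sum w)^2 / sum(w^2)."""
    w = np.asarray(w, dtype=float)
    return w.sum() ** 2 / (w ** 2).sum()

# Treated-arm ATE weights are 1/m_i on the treated subset
rng = np.random.default_rng(0)
m = rng.uniform(0.05, 0.95, size=5_000)
d = rng.binomial(1, m)
w1 = 1.0 / m[d == 1]
print(ess(w1) / (d == 1).sum())   # ESS_treated_ratio, in (0, 1]
```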

tails_w1_q99/med

$$\boxed{\texttt{tails\_w1\_q99/med} = \frac{Q_{0.99}(W_1)}{\mathrm{median}(W_1)}}$$

Interpretation

  • It’s a tail-heaviness index for treated weights: how large the 99th-percentile weight is relative to a typical (median) weight.
  • Scale-invariant: if you re-scale weights (e.g., Hájek normalization), both numerator and denominator scale equally, so the ratio is unchanged.
  • Bigger ⇒ heavier right tail ⇒ more variance inflation for IPW (since the variance depends on large $w_i^2$). It typically coincides with a low $\mathrm{ESS}_{\text{treated ratio}}$.

Edge cases & thresholds

  • If $\mathrm{median}(W_1) = 0$ or undefined, the ratio is not meaningful (the code returns “NA” in that case; with positive treated weights this is rare).
  • Defaults: YELLOW if any of $\{q_{95}/\mathrm{med},\ q_{99}/\mathrm{med},\ q_{99.9}/\mathrm{med},\ \max/\mathrm{med}\}$ exceeds 10; RED if any exceeds 100. tails_w1_q99/med is one of these checks, focusing specifically on the 99th percentile.

Quick example

If $\mathrm{median}(W_1) = 1.2$ and $Q_{0.99}(W_1) = 46.8$, then

$$\texttt{tails\_w1\_q99/med} = \frac{46.8}{1.2} \approx 39,$$

indicating heavy tails and a likely unstable ATE IPW.

Result

{'w1': {'q50': 2.585563809098619, 'q95': 26.04879283279811, 'q99': 100.0, 'max': 100.0, 'median': 2.585563809098619}, 'w0': {'q50': 1.073397908573178, 'q95': 1.6626662464619888, 'q99': 22.085846285748335, 'max': 99.99999999999991, 'median': 1.073397908573178}}

ATT_identity_relerr

With estimated propensities $m_i = \hat m(X_i)$ and $D_i \in \{0,1\}$:

  • Left-hand side (controls’ odds sum): $\text{LHS} = \sum_{i=1}^n (1-D_i)\,\dfrac{m_i}{1-m_i}$.
  • Right-hand side (treated count): $\text{RHS} = \sum_{i=1}^n D_i = n_1$.

If $\hat m \approx m$ and overlap is OK, LHS $\approx$ RHS.

You report the relative error:

$$\boxed{\texttt{ATT\_identity\_relerr} = \frac{\big|\mathrm{LHS} - \mathrm{RHS}\big|}{\mathrm{RHS}}}$$

(when $n_1 > 0$; otherwise it is set to $\infty$).

How to read the number

  • Small relerr (e.g., $\le 5\%$) ⇒ propensities are reasonably calibrated (especially on the control side) and ATT weights won’t be wildly off in total mass.
  • Large relerr ⇒ possible miscalibration of $\hat m$ (e.g., over/underestimation for controls), poor overlap (many controls with $m_i \to 1$ inflating $m_i/(1-m_i)$), or clipping/trimming effects.

Your default flags (same as in the code):

  • GREEN if relerr $\le 0.05$
  • YELLOW if $0.05 <$ relerr $\le 0.10$
  • RED if relerr $> 0.10$

Quick intuition

The term $m/(1-m)$ is the odds of treatment. Summing it over controls should reconstruct the treated count. If it doesn’t, either the odds are off (propensity miscalibration) or the data lack support where you need it—both are red flags for ATT-IPW stability.

Result

{'lhs_sum': 2396.83831663893, 'rhs_sum': 2036.0, 'rel_err': 0.17722903567727402}
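
A minimal sketch of the identity check, assuming m and d are the estimated propensities and the treatment indicator (illustrative names):

```python
import numpy as np

def att_identity_relerr(m, d):
    """|sum over controls of odds m/(1-m) - treated count| / treated count."""
    m = np.asarray(m, dtype=float)
    d = np.asarray(d, dtype=int)
    lhs = np.sum((1 - d) * m / (1 - m))
    rhs = d.sum()
    return float("inf") if rhs == 0 else float(abs(lhs - rhs) / rhs)
```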

clip_m_total

See edge_mass above: clip_m_total is the total share of propensities affected by clipping (lower + upper).

Result

{'m_clip_lower': 0.0235, 'm_clip_upper': 0.0001, 'g_clip_share': nan}

calib_ECE, calib_slope, calib_intercept

calib_ECE = 0.018 (GREEN)

Math: with 10 equal-width bins,

$$\text{ECE} = \sum_{k=1}^{10} \frac{n_k}{n}\,\big|\bar y_k - \bar p_k\big|$$

(weighted average gap between observed rate $\bar y_k$ and mean prediction $\bar p_k$ per bin). Result: ~1.8% average miscalibration → overall probabilities track outcomes well. Note the biggest bin error is in 0.5–0.6 (abs_error ≈ 0.162), but that bin is tiny (95/10,000), so ECE stays low.

calib_slope (β) = 0.889 (GREEN)

Math (logistic recalibration):

$$\Pr(D = 1 \mid p) = \sigma\big(\alpha + \beta\,\mathrm{logit}(p)\big).$$

Interpretation: $\beta < 1$ ⇒ predictions are a bit over-confident (too extreme); the optimal calibration slightly flattens them toward 0.5.

calib_intercept (α) = −0.107 (GREEN)

Math: same model as above; $\alpha$ is a vertical shift on the log-odds scale. Interpretation: negative $\alpha$ nudges probabilities downward overall (the model is, on average, a bit high), consistent with bins like 0.5–0.6 where $\bar p_k > \bar y_k$.

All three fall well within your GREEN thresholds, so calibration looks solid despite minor mid-range overprediction.
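
A minimal sketch of the binned ECE, assuming p holds predicted propensities and y the observed treatment labels (illustrative, not the Causalis routine):

```python
import numpy as np

def ece(p, y, n_bins=10):
    """Expected calibration error over equal-width bins on [0, 1]."""
    p = np.asarray(p, dtype=float)
    y = np.asarray(y, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    total = 0.0
    for k in range(n_bins):
        lo, hi = edges[k], edges[k + 1]
        # include 1.0 in the last bin
        in_bin = (p >= lo) & (p <= hi) if k == n_bins - 1 else (p >= lo) & (p < hi)
        if in_bin.any():
            total += in_bin.mean() * abs(y[in_bin].mean() - p[in_bin].mean())
    return total
```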

Result

{'n': 10000, 'n_bins': 10, 'auc': 0.8351248965136829, 'brier': 0.10778483728183921, 'ece': 0.018452696253043466, 'reliability_table': bin lower upper count mean_p frac_pos abs_error 0 0 0.0 0.1 5089 0.044279 0.054431 0.010152 1 1 0.1 0.2 1724 0.148308 0.172274 0.023965 2 2 0.2 0.3 1171 0.245443 0.231426 0.014017 3 3 0.3 0.4 626 0.341419 0.316294 0.025125 4 4 0.4 0.5 251 0.440675 0.382470 0.058205 5 5 0.5 0.6 95 0.541182 0.378947 0.162235 6 6 0.6 0.7 107 0.649163 0.588785 0.060378 7 7 0.7 0.8 214 0.754985 0.785047 0.030061 8 8 0.8 0.9 427 0.851461 0.859485 0.008024 9 9 0.9 1.0 296 0.932652 0.888514 0.044139, 'recalibration': {'intercept': -0.10680601474031537, 'slope': 0.8893319945661962}, 'flags': {'ece': 'GREEN', 'slope': 'GREEN', 'intercept': 'GREEN'}, 'thresholds': {'ece_warn': 0.1, 'ece_strong': 0.2, 'slope_warn_lo': 0.8, 'slope_warn_hi': 1.2, 'slope_strong_lo': 0.6, 'slope_strong_hi': 1.4, 'intercept_warn': 0.2, 'intercept_strong': 0.4}}

Result
   bin  lower  upper  count  mean_p    frac_pos  abs_error
0  0    0.0    0.1    5089   0.044279  0.054431  0.010152
1  1    0.1    0.2    1724   0.148308  0.172274  0.023965
2  2    0.2    0.3    1171   0.245443  0.231426  0.014017
3  3    0.3    0.4    626    0.341419  0.316294  0.025125
4  4    0.4    0.5    251    0.440675  0.382470  0.058205
5  5    0.5    0.6    95     0.541182  0.378947  0.162235
6  6    0.6    0.7    107    0.649163  0.588785  0.060378
7  7    0.7    0.8    214    0.754985  0.785047  0.030061
8  8    0.8    0.9    427    0.851461  0.859485  0.008024
9  9    0.9    1.0    296    0.932652  0.888514  0.044139

Score

We need these score-based refutation tests to:

  • Catch overfitting/leakage: The out-of-sample moment check verifies that the AIPW score averages to ~0 on held-out folds using fold-specific θ and nuisances. If this fails, your effect can be an artifact of leakage or overfit learners rather than a real signal.
  • Verify Neyman orthogonality in practice: The Gateaux-derivative tests (orthogonality_derivatives) check that small, targeted perturbations to the nuisances (g₀, g₁, m) don’t move the score mean. Large |t| values flag miscalibration (e.g., biased propensity or outcome models) that breaks the orthogonality protection DML relies on.
  • Assess finite-sample stability: The influence diagnostics reveal heavy tails (p99/median, kurtosis) and top-influential points. Spiky ψ implies high variance and sensitivity—often due to near-0/1 propensities, poor overlap, or outliers.
  • ATTE-specific risks: For ATT/ATTE, only g₀ and m matter in the score. The added overlap metrics and trim curves show how reliant your estimate is on scarce, high-m controls—common failure mode for ATT.
Result
   metric            value          flag
0  se_plugin         6.233980e-02   NA
1  psi_p99_over_med  2.374779e+01   RED
2  psi_kurtosis      3.032000e+02   RED
3  max_|t|_g1        4.350018e+00   RED
4  max_|t|_g0        2.076780e+00   YELLOW
5  max_|t|_m         1.030583e+00   GREEN
6  oos_tstat_fold    -2.552943e-15  GREEN
7  oos_tstat_strict  -2.461798e-15  GREEN

psi_p99_over_med

  • Let $\psi_i$ be the per-unit influence value (EIF score) for your estimator. We look at magnitudes $a_i \equiv |\psi_i|$.

  • Define the 99th percentile and the median of these magnitudes:

    $$q_{0.99} \equiv \operatorname{Quantile}_{0.99}(a_1,\dots,a_n), \qquad m \equiv \operatorname{median}(a_1,\dots,a_n).$$

  • The metric is the scale-free tail ratio:

    $$\boxed{\texttt{psi\_p99\_over\_med} = \frac{q_{0.99}}{m}}$$

Why this works (brief):

  • Uses $|\psi_i|$ to ignore sign (only tail size matters).
  • Dividing by the median makes it scale-invariant and robust to a few large values.
  • Large values ($\gg 1$) mean a small fraction of observations dominates the uncertainty (heavy tails → unstable SE).

Quick read:

  • $\approx 1$–$5$: tails tame/stable
  • $\gtrsim 10$: caution (heavy tails)
  • $\gtrsim 20$: likely unstable; check overlap, trim/clamp propensities, or robustify learners.
Result

{'se_plugin': 0.06233979878689177, 'kurtosis': 303.1999961346597, 'p99_over_med': 23.747788009323845, 'top_influential': i psi m res_t res_c 0 6224 205.671733 0.010000 2.062391 0.0 1 1915 -180.280657 0.012198 -2.205898 -0.0 2 215 -163.974979 0.013644 -2.221508 -0.0 3 9389 131.757805 0.010585 1.393678 0.0 4 868 -101.111026 0.014727 -1.489752 -0.0 5 2741 -96.602896 0.024285 -2.345406 -0.0 6 1993 83.941140 0.028412 2.404237 0.0 7 9894 -82.585292 0.011016 -0.907962 -0.0 8 2350 70.269293 0.029162 2.063988 0.0 9 1499 70.103947 0.022092 1.576351 0.0}
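
A minimal sketch of the tail ratio, assuming psi holds the per-unit influence values:

```python
import numpy as np

def psi_p99_over_med(psi):
    """q99 / median of |psi|: scale-free heavy-tail index."""
    a = np.abs(np.asarray(psi, dtype=float))
    med = np.median(a)
    return float("nan") if med == 0 else float(np.quantile(a, 0.99) / med)
```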

psi_kurtosis

  • Let $\psi_i$ be the per-unit influence values and define centered residuals

    $$\tilde\psi_i \equiv \psi_i - \bar\psi, \qquad \bar\psi \equiv \frac{1}{n}\sum_{i=1}^n \psi_i.$$

  • Sample variance (with Bessel correction):

    $$s^2 \equiv \frac{1}{n-1}\sum_{i=1}^n \tilde\psi_i^2.$$

  • Sample 4th central moment:

    $$\hat\mu_4 \equiv \frac{1}{n}\sum_{i=1}^n \tilde\psi_i^4.$$

  • The reported metric (raw kurtosis, not excess):

    $$\boxed{\texttt{psi\_kurtosis} = \frac{\hat\mu_4}{s^4}}$$

Interpretation (quick):

  • Normal reference $\approx 3$ (excess kurtosis $= 0$).
  • Much larger ⇒ heavier tails / more extreme $\psi_i$ outliers.
  • Rules of thumb used in the diagnostics: $\ge 10$ = caution, $\ge 30$ = severe.
Result

{'se_plugin': 0.06233979878689177, 'kurtosis': 303.1999961346597, 'p99_over_med': 23.747788009323845, 'top_influential': i psi m res_t res_c 0 6224 205.671733 0.010000 2.062391 0.0 1 1915 -180.280657 0.012198 -2.205898 -0.0 2 215 -163.974979 0.013644 -2.221508 -0.0 3 9389 131.757805 0.010585 1.393678 0.0 4 868 -101.111026 0.014727 -1.489752 -0.0 5 2741 -96.602896 0.024285 -2.345406 -0.0 6 1993 83.941140 0.028412 2.404237 0.0 7 9894 -82.585292 0.011016 -0.907962 -0.0 8 2350 70.269293 0.029162 2.063988 0.0 9 1499 70.103947 0.022092 1.576351 0.0}
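
A minimal sketch matching the definitions above (raw kurtosis with a Bessel-corrected variance), assuming psi holds the influence values:

```python
import numpy as np

def psi_kurtosis(psi):
    """Raw (non-excess) kurtosis: mu4_hat / s^4."""
    psi = np.asarray(psi, dtype=float)
    c = psi - psi.mean()
    s2 = (c ** 2).sum() / (len(psi) - 1)   # Bessel-corrected variance
    mu4 = (c ** 4).mean()                  # 4th central moment
    return float(mu4 / s2 ** 2)

rng = np.random.default_rng(0)
print(psi_kurtosis(rng.normal(size=100_000)))   # ~3 for Gaussian psi
```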

max_|t|_g1, max_|t|_g0, max_|t|_m

We work with a basis of functions

$$\{h_b(X)\}_{b=0}^{B-1} \quad \text{(columns of } X_{\text{basis}}\text{; } h_0 \equiv 1 \text{ is the constant).}$$

Let $m_i^\tau \equiv \mathrm{clip}(m_i, \tau, 1-\tau)$ be the clipped propensity (guards against division by zero).

ATE case

For each basis function $b$, form a sample mean (a Gateaux-derivative estimator) and its standard error, then compute a t-statistic; finally take the maximum absolute value across bases.


$g_1$ direction

$$\widehat d_{g_1,b} = \frac{1}{n}\sum_{i=1}^n h_b(X_i)\Big(1 - \frac{D_i}{m_i^\tau}\Big), \qquad \mathrm{se}(\widehat d_{g_1,b}) = \frac{\operatorname{sd}\!\left[h_b(X_i)\left(1 - \frac{D_i}{m_i^\tau}\right)\right]}{\sqrt{n}},$$

$$t_{g_1,b} = \frac{\widehat d_{g_1,b}}{\mathrm{se}(\widehat d_{g_1,b})}, \qquad \boxed{\max_{|t|_{g_1}} = \max_b |t_{g_1,b}|}.$$

$g_0$ direction

$$\widehat d_{g_0,b} = \frac{1}{n}\sum_{i=1}^n h_b(X_i)\Big(\frac{1-D_i}{1-m_i^\tau} - 1\Big), \qquad \mathrm{se}(\widehat d_{g_0,b}) = \frac{\operatorname{sd}\!\left[h_b(X_i)\left(\frac{1-D_i}{1-m_i^\tau} - 1\right)\right]}{\sqrt{n}},$$

$$t_{g_0,b} = \frac{\widehat d_{g_0,b}}{\mathrm{se}(\widehat d_{g_0,b})}, \qquad \boxed{\max_{|t|_{g_0}} = \max_b |t_{g_0,b}|}.$$

$m$ direction

$$S_i \equiv \frac{D_i(Y_i - g_{1,i})}{(m_i^\tau)^2} + \frac{(1-D_i)(Y_i - g_{0,i})}{(1 - m_i^\tau)^2},$$

$$\widehat d_{m,b} = -\frac{1}{n}\sum_{i=1}^n h_b(X_i) S_i, \qquad \mathrm{se}(\widehat d_{m,b}) = \frac{\operatorname{sd}\!\left[h_b(X_i) S_i\right]}{\sqrt{n}},$$

$$t_{m,b} = \frac{\widehat d_{m,b}}{\mathrm{se}(\widehat d_{m,b})}, \qquad \boxed{\max_{|t|_{m}} = \max_b |t_{m,b}|}.$$

Interpretation: under Neyman orthogonality, each derivative mean $\widehat d_{\bullet,b}$ should be approximately zero, so all $|t_{\bullet,b}|$ should be small. Large $\max_{|t|}$ values flag miscalibration of the corresponding nuisance.


ATTE / ATT case

Let $p_1 = \mathbb{E}[D]$ and define the odds $o_i = m_i^\tau / (1 - m_i^\tau)$.

  • The $g_1$ derivative is identically zero:

    $$\Rightarrow\quad \max_{|t|_{g_1}} = 0.$$

  • $g_0$ direction

    $$\widehat d_{g_0,b} = \frac{1}{n}\sum_i h_b(X_i)\,\frac{(1-D_i)\,o_i - D_i}{p_1}, \qquad t_{g_0,b} = \frac{\widehat d_{g_0,b}}{\mathrm{se}(\widehat d_{g_0,b})}, \qquad \max_{|t|_{g_0}} = \max_b |t_{g_0,b}|.$$

  • $m$ direction

    $$\widehat d_{m,b} = -\frac{1}{n}\sum_i h_b(X_i)\,\frac{(1-D_i)(Y_i - g_{0,i})}{p_1(1 - m_i^\tau)^2}, \qquad \max_{|t|_{m}} = \max_b |t_{m,b}|.$$

Rule of thumb: $\max_{|t|} \lesssim 2$ is “okay”; larger values indicate orthogonality breakdown — fix by recalibrating that nuisance, changing learners or features, or trimming.
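
A minimal sketch of one of these checks (the ATE $g_1$ direction), assuming X_basis is the basis matrix with a constant column and d, m are the treatment indicator and propensities (illustrative, not the Causalis orthogonality_derivatives API):

```python
import numpy as np

def max_abs_t_g1(X_basis, d, m, tau=0.02):
    """For each basis column h_b, t-test the mean of h_b(X) * (1 - D/m_tau)
    against zero; return the max |t| across columns."""
    m_tau = np.clip(np.asarray(m, dtype=float), tau, 1 - tau)
    core = 1.0 - np.asarray(d, dtype=float) / m_tau
    n = len(core)
    t_stats = []
    for b in range(X_basis.shape[1]):
        v = X_basis[:, b] * core
        t_stats.append(v.mean() / (v.std(ddof=1) / np.sqrt(n)))
    return float(np.max(np.abs(t_stats)))
```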

Result
   basis  d_g1       se_g1     t_g1       d_g0      se_g0     t_g0      d_m        se_m      t_m
0  0      -0.233097  0.053585  -4.350018  0.036084  0.017458  2.066835  0.314777   3.739844  0.084168
1  1      -0.012467  0.058847  -0.211863  0.029152  0.025305  1.152026  0.598770   3.992048  0.149991
2  2      0.021350   0.060963  0.350206   0.038320  0.022736  1.685394  -5.950311  5.773734  -1.030583
3  3      0.125716   0.061772  2.035176   0.047856  0.023043  2.076780  2.428545   5.321692  0.456348
4  4      0.007767   0.047830  0.162379   0.052762  0.029293  1.801146  -1.800507  2.686426  -0.670224
5  5      0.007035   0.054763  0.128462   0.002395  0.015985  0.149811  2.012890   3.491102  0.576577

oos_tstat_fold, oos_tstat_strict

Here’s the math behind the two OOS (out-of-sample) moment t-stats used in the diagnostics. Assume K-fold cross-fitting with held-out index sets $I_k$ (of size $n_k$) and complements $R_k$.


Step 1 — Leave-fold-out $\hat\theta_{-k}$

For the moment condition $\mathbb{E}[\psi_a(W)\,\theta + \psi_b(W)] = 0$, the leave-fold-out estimate used on fold $k$ is

$$\hat\theta_{-k} = -\frac{\bar\psi_{b,R_k}}{\bar\psi_{a,R_k}}, \qquad \bar\psi_{\cdot,R_k} = \frac{1}{|R_k|}\sum_{i\in R_k}\psi_{\cdot}(W_i).$$

Step 2 — Held-out scores on fold $k$

Define the fold-specific held-out score for $i \in I_k$:

$$\psi_i^{(k)} = \psi_b(W_i) + \psi_a(W_i)\,\hat\theta_{-k}.$$

Compute per-fold mean and variance:

$$\bar\psi_k = \frac{1}{n_k}\sum_{i\in I_k}\psi_i^{(k)}, \qquad s_k^2 = \frac{1}{n_k-1}\sum_{i\in I_k}\big(\psi_i^{(k)} - \bar\psi_k\big)^2.$$

OOS t-stat diagnostics


oos_tstat_fold

A fold-aggregated, variance-weighted t-statistic:

$$\boxed{\texttt{oos\_tstat\_fold} = \frac{\sum_{k=1}^K n_k\,\bar\psi_k}{\sqrt{\sum_{k=1}^K n_k\,s_k^2}}}$$

Intuition: averages fold means and scales by a fold-pooled standard error.


oos_tstat_strict

A “strict” t-stat using every held-out observation directly:

$$N = \sum_{k=1}^K n_k, \qquad \bar\psi_{\text{all}} = \frac{1}{N}\sum_{k=1}^K\sum_{i\in I_k}\psi_i^{(k)}, \qquad s_{\text{all}}^2 = \frac{1}{N-1}\sum_{k=1}^K\sum_{i\in I_k}\big(\psi_i^{(k)} - \bar\psi_{\text{all}}\big)^2,$$

$$\boxed{\texttt{oos\_tstat\_strict} = \frac{\bar\psi_{\text{all}}}{s_{\text{all}}/\sqrt{N}}}$$

Intuition: computes a single overall mean and standard error across all held-out scores (often slightly more conservative).


Interpretation

Under a valid design and correct cross-fitting (so that $\mathbb{E}[\psi] = 0$ out-of-sample), both statistics are approximately standard normal:

$$\text{two-sided p-value} \approx 2\big(1 - \Phi(|t|)\big).$$

Values near $0$ indicate that the moment condition holds out of sample. Large $|t|$ suggests overfitting, leakage, or nuisance miscalibration.
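
A minimal sketch of the strict variant, assuming psi_a_folds and psi_b_folds are lists of per-fold arrays of the score components (illustrative names):

```python
import numpy as np

def oos_tstat_strict(psi_a_folds, psi_b_folds):
    """Re-estimate theta from the other folds, score the held-out fold,
    then t-test the pooled held-out scores against zero."""
    K = len(psi_a_folds)
    scores = []
    for k in range(K):
        a_rest = np.concatenate([psi_a_folds[j] for j in range(K) if j != k])
        b_rest = np.concatenate([psi_b_folds[j] for j in range(K) if j != k])
        theta_k = -b_rest.mean() / a_rest.mean()        # leave-fold-out theta
        scores.append(psi_b_folds[k] + psi_a_folds[k] * theta_k)
    s = np.concatenate(scores)
    return float(s.mean() / (s.std(ddof=1) / np.sqrt(len(s))))
```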

Result

{'fold_results': fold n psi_mean psi_var 0 0 2500 -0.002503 37.561660 1 1 2500 0.100558 54.360122 2 2 2500 0.068724 31.028728 3 3 2500 -0.166779 32.522161, 'tstat_fold_agg': -2.5529434141490394e-15, 'pvalue_fold_agg': 0.999999999999998, 'tstat_strict': -2.461798420221801e-15, 'pvalue_strict': 0.999999999999998, 'interpretation': 'Near 0 indicates moment condition holds.'}

SUTVA

Result

1) Are your clients (units) independent?
2) Do you measure confounders, treatment, and outcome in the same intervals?
3) Do you measure confounders before treatment and the outcome after?
4) Do you have a consistent label of treatment, such that a person who does not receive the treatment has label 0?

These assumptions are statistically untestable; we need the research design to justify them.

Unconfoundedness

Result
   metric                   value     flag
0  balance_max_smd          0.144968  YELLOW
1  balance_frac_violations  0.200000  YELLOW

balance_max_smd

For each covariate $X_j$, the (weighted) standardized mean difference is

$$\mathrm{SMD}_j = \frac{\big|\mu_{1j} - \mu_{0j}\big|}{\sqrt{\tfrac{1}{2}\big(\sigma_{1j}^2 + \sigma_{0j}^2\big)}}.$$

Group means and variances are computed under the IPW weights implied by your estimand:

  • ATE: $w_{1i} = \tfrac{D_i}{\hat m_i}$, $w_{0i} = \tfrac{1-D_i}{1-\hat m_i}$
  • ATTE: $w_{1i} = D_i$, $w_{0i} = (1-D_i)\tfrac{\hat m_i}{1-\hat m_i}$

(If normalize=True, each weight vector is divided by its mean.)

Weighted means and variances:

$$\mu_{gj} = \frac{\sum_i w_{gi} X_{ij}}{\sum_i w_{gi}}, \qquad \sigma_{gj}^2 = \frac{\sum_i w_{gi}(X_{ij} - \mu_{gj})^2}{\sum_i w_{gi}}, \qquad g \in \{0,1\}.$$

Special cases in the code:

  • If both variances are $\approx 0$ and $|\mu_{1j} - \mu_{0j}| \approx 0$ ⇒ $\mathrm{SMD}_j = 0$
  • If both variances are $\approx 0$ but the means differ ⇒ $\mathrm{SMD}_j = \infty$
  • If the denominator is $\approx 0$ otherwise ⇒ $\mathrm{SMD}_j = \mathrm{NaN}$

Then

$$\textbf{balance\_max\_smd} = \max_j \mathrm{SMD}_j,$$

implemented as a nanmax over the vector of $\mathrm{SMD}_j$. NaNs are ignored; if any feature produced $\infty$, the max is $\infty$.

balance_frac_violations

Let the SMD threshold be $\tau$ (default $0.10$). Define the set of finite SMDs:

$$\mathcal{J} = \{\, j : \mathrm{SMD}_j \text{ is finite} \,\}.$$

Then the fraction of violations is

$$\textbf{balance\_frac\_violations} = \frac{1}{|\mathcal{J}|}\sum_{j\in\mathcal{J}} \mathbf{1}\{\mathrm{SMD}_j \ge \tau\}.$$

So it’s the share of covariates whose weighted SMD exceeds the threshold, computed only over finite SMDs (NaN / Inf are excluded from the denominator).


Quick interpretation

  • Smaller is better. A common rule of thumb is $\mathrm{SMD} \le 0.10$.
  • balance_max_smd tells you the worst residual imbalance across covariates.
  • balance_frac_violations tells you how many covariates (as a fraction) still exceed the chosen threshold.
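
A minimal sketch of the weighted SMD for one covariate under ATE-style weights, assuming x, d, m_hat are a covariate, the treatment indicator, and the estimated propensities (illustrative names):

```python
import numpy as np

def weighted_smd(x, d, m_hat):
    """Weighted standardized mean difference under ATE IPW weights."""
    x, d, m_hat = (np.asarray(a, dtype=float) for a in (x, d, m_hat))
    w1 = d / m_hat                  # zero for controls
    w0 = (1 - d) / (1 - m_hat)      # zero for treated
    def wmean_wvar(w):
        mu = np.average(x, weights=w)
        return mu, np.average((x - mu) ** 2, weights=w)
    mu1, v1 = wmean_wvar(w1)
    mu0, v0 = wmean_wvar(w0)
    return abs(mu1 - mu0) / np.sqrt(0.5 * (v1 + v0))

# balance_max_smd = nanmax over covariates;
# balance_frac_violations = mean(SMD_j >= 0.10) over finite SMDs.
```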

Sensitivity analysis

1) sensitivity_analysis: bias-aware CI

Goal. Start from your estimator $\hat\theta$ with sampling standard error $se$. Allow a controlled amount of worst-case hidden confounding through three knobs $(cf_y, cf_d, \rho)$. Inflate the uncertainty by an additive “max bias”.


Step A — Sampling part

  • Point estimate $\hat\theta$, standard error $se$, and $z_\alpha$ for level $1-\alpha$.
  • Usual sampling CI:

    $$[\,\hat\theta - z_\alpha\,se,\ \hat\theta + z_\alpha\,se\,].$$

Step B — Confounding geometry

The code pulls sensitivity elements from the fitted IRM:

  • $\sigma^2$: the asymptotic variance of the estimator’s EIF (so that $se = \sqrt{\sigma^2}$ in the module’s normalization).

  • $m_\alpha(i) \ge 0$: per-unit weight for the outcome channel (how outcome-model misspecification moves the EIF).

  • $r(i)$ (“riesz_rep”): per-unit weight for the treatment channel (how propensity-model misspecification moves the EIF).

We turn the user’s sensitivity knobs into a quadratic budget for adversarial confounding:

$$\begin{aligned} a_i &:= \sqrt{2\,m_\alpha(i)}, \\ b_i &:= \begin{cases} |r(i)|, & \text{(default, worst-case sign)} \\ r(i), & \text{(if use\_signed\_rr=True)} \end{cases} \\ \text{base}_i &= a_i^2\,cf_y + b_i^2\,cf_d + 2\,\rho\,\sqrt{cf_y\,cf_d}\,a_i b_i \;\ge 0, \\ \nu^2 &:= \mathbb{E}_n[\text{base}_i]. \end{aligned}$$

  • $cf_y \ge 0$: strength of the unobserved outcome disturbance
  • $cf_d \ge 0$: strength of the unobserved treatment disturbance
  • $\rho \in [-1,1]$: their correlation

This $\nu^2$ is a dimensionless bias multiplier — how sensitive the EIF is to those perturbations.


Step C — Max bias and intervals

Two equivalent forms appear in the code:

$$\text{max\_bias} = \sqrt{\sigma^2\,\nu^2} = \big(\sqrt{\nu^2}\big)\,se.$$

Then the module reports:

  • Confounding bounds for $\theta$:

    $$[\,\hat\theta - \text{max\_bias},\ \hat\theta + \text{max\_bias}\,].$$

  • Bias-aware CI (sampling + confounding, worst-case additive):

    $$\big[\,\hat\theta - (\text{max\_bias} + z_\alpha\,se),\ \hat\theta + (\text{max\_bias} + z_\alpha\,se)\,\big].$$

(So you’re adding sampling error and the adversarial bias linearly for a conservative envelope.)


Notes & edge handling

  • Numeric PSD clamping ensures $\text{base}_i \ge 0$; $\rho$ is clipped to $[-1,1]$.

  • If $cf_y = cf_d = 0$, then $\nu^2 = 0$ and the bias-aware CI collapses to the sampling CI.

  • Internally, a delta-method IF for $\text{max\_bias}$ is

    $$\psi_{\text{max}}(i) = \frac{\sigma^2\,\psi_{\nu^2}(i) + \nu^2\,\psi_{\sigma^2}(i)}{2\,\text{max\_bias}},$$

    matching $\text{max\_bias} = \sqrt{\sigma^2\nu^2}$ (used for coherent summaries).
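
A minimal sketch of the bias-aware interval construction; the numbers reproduce the Result shown further below:

```python
import numpy as np
from scipy.stats import norm

def bias_aware_ci(theta, se, nu2, level=0.95):
    """Sampling CI widened additively by max_bias = sqrt(nu2) * se."""
    z = norm.ppf(0.5 + level / 2)
    max_bias = np.sqrt(nu2) * se
    half = max_bias + z * se
    return theta - half, theta + half

print(bias_aware_ci(0.9917276396749556, 0.06233979878689177, 1.0352732234146504))
# -> (0.8061..., 1.1773...), matching 'bias_aware_ci' in the Result below
```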


2) sensitivity_benchmark: calibrating $(cf_y, cf_d, \rho)$ from omitted covariates

Goal. Pick a set $Z$ of candidate “omitted” covariates (the benchmarking_set). Refit a short IRM that excludes $Z$ and compare it to the long (original) model. Use how well $Z$ explains residual variation to derive plausible $(cf_y, cf_d, \rho)$.


Step A — Long vs short estimates

  • Long: $\hat\theta_{\text{long}}$ (original model).
  • Short: $\hat\theta_{\text{short}}$ (drop $Z$, same learners/hyperparameters).
  • Report $\Delta = \hat\theta_{\text{long}} - \hat\theta_{\text{short}}$.

Step B — Residuals from the long model

Let $(g_1, g_0, \hat m)$ be the outcome and propensity learners:

$$r_y := Y - \big(D g_1 + (1-D) g_0\big), \qquad r_d := D - \hat m.$$

These are the EIF’s outcome and treatment residual components.


Step C — How much of each residual does $Z$ explain?

Regress $r_y$ on $Z$ and $r_d$ on $Z$ (unweighted OLS; the ATT case uses ATT weights):

  • Obtain $R^2_y$ and $R^2_d$.

  • Convert to signal-to-noise ratios (the “strength” of the confounding channels):

    $$cf_y = \frac{R^2_y}{1 - R^2_y}, \qquad cf_d = \frac{R^2_d}{1 - R^2_d}.$$

    (These are the same $R^2 / (1 - R^2)$ maps used in modern partial-$R^2$ robustness frameworks.)

Compute the correlation between the fitted pieces from those two regressions:

$$\rho = \operatorname{corr}\!\big(\widehat r_y(Z),\ \widehat r_d(Z)\big),$$

weighted for ATT when applicable, then clipped to $[-1,1]$.


Outputs

A one-row DataFrame (indexed by the treatment name) with

$$\{\, cf_y,\ cf_d,\ \rho,\ \hat\theta_{\text{long}},\ \hat\theta_{\text{short}},\ \Delta \,\}.$$

You can pass $(cf_y, cf_d, \rho)$ straight into sensitivity_analysis to get the associated bias-aware interval. Intuitively, this calibrates how strong the hidden confounding would need to be, using a concrete, observed proxy $Z$.


How to read them together

  1. Use sensitivity_benchmark with a plausible omitted set $Z$ to derive $(cf_y, cf_d, \rho)$ and observe the actual estimate shift $\Delta$.

  2. Plug those $(cf_y, cf_d, \rho)$ into sensitivity_analysis to get:

    $$\text{max\_bias} = \sqrt{\nu^2}\,se, \qquad \text{Bias-aware CI} = \hat\theta \pm (\text{max\_bias} + z_\alpha\,se).$$

Small $cf$ values (or $\rho \approx 0$) ⇒ tiny $\nu^2$ ⇒ bias-aware CI near the sampling CI. Large $cf$ values and $|\rho| \approx 1$ widen it, reflecting stronger plausible hidden confounding.
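
A minimal sketch of the $R^2 \to cf$ map from Step C, useful for passing hand-picked strengths into sensitivity_analysis (cf_from_r2 is a hypothetical helper, not a Causalis function):

```python
def cf_from_r2(r2):
    """Signal-to-noise strength cf = R^2 / (1 - R^2)."""
    return r2 / (1.0 - r2)

# e.g., an omitted set Z explaining 1% of the residual variation
print(cf_from_r2(0.01))   # ~0.0101
```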

Result

{'theta': 0.9917276396749556, 'se': 0.06233979878689177, 'level': 0.95, 'z': 1.959963984540054, 'sampling_ci': (0.869543879249174, 1.1139114001007373), 'theta_bounds_confounding': (0.9282979061474285, 1.0551573732024828), 'bias_aware_ci': (0.8061141457216469, 1.1773411336282644), 'max_bias': 0.06342973352752714, 'sigma2': 1.0690078122954034, 'nu2': 1.0352732234146504, 'params': {'cf_y': 0.01, 'cf_d': 0.01, 'rho': 1.0, 'use_signed_rr': False}}

Result
   cf_y      cf_d          rho   theta_long  theta_short  delta
d  0.000001  1.951733e-08  -1.0  0.991728    1.064098     -0.07237