
Testing Monotonicity in a Finite Population

Authors
  • Jiafeng Chen
  • Jonathan Roth
  • Jann Spiess

Abstract

We consider the extent to which we can learn from a completely randomized experiment whether all individuals have treatment effects that are weakly of the same sign, a condition we call monotonicity. From a classical sampling perspective, it is well-known that monotonicity is not falsifiable. By contrast, we show from the design-based perspective -- in which the units in the population are fixed and only treatment assignment is stochastic -- that the distribution of treatment effects in the finite population (and hence whether monotonicity holds) is formally identified. We argue, however, that the usual definition of identification is unnatural in the design-based setting because it imagines knowing the distribution of outcomes over different treatment assignments for the same units. We thus evaluate the informativeness of the data by the extent to which it enables frequentist testing and Bayesian updating. We show that frequentist tests can have nontrivial power against some alternatives, but power is generically limited. Likewise, we show that there exist (non-degenerate) Bayesian priors that never update about whether monotonicity holds. We conclude that, despite the formal identification result, the ability to learn about monotonicity from data in practice is severely limited.
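The design-based logic described in the abstract — potential outcomes fixed, only the treatment assignment stochastic — is the setting of a classical Fisher randomization test. The sketch below is purely illustrative and is not the paper's test of monotonicity: it fixes a hypothetical finite population of six units with made-up potential outcomes `y0` and `y1`, draws one completely randomized assignment, and computes an exact randomization p-value for the sharp null of zero effect for every unit by enumerating all possible assignments.

```python
import itertools
import numpy as np

# Hypothetical finite population of N = 6 units with FIXED potential outcomes.
# In the design-based perspective, these are constants; only the assignment
# of which units receive treatment is random.
y0 = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])          # outcomes if untreated
y1 = y0 + np.array([0.5, 1.0, 0.0, 2.0, 0.5, 1.5])      # weakly positive effects

N, n_treated = len(y0), 3
rng = np.random.default_rng(0)

# One realized assignment from the completely randomized design.
treated = rng.choice(N, size=n_treated, replace=False)
d = np.zeros(N, dtype=bool)
d[treated] = True
observed = np.where(d, y1, y0)                           # observed outcomes
obs_stat = observed[d].mean() - observed[~d].mean()      # difference in means

# Randomization distribution of the statistic under the sharp null of no
# effect: every counterfactual outcome is imputed by the observed one, and
# we re-randomize over all C(6, 3) = 20 possible assignments.
stats = []
for combo in itertools.combinations(range(N), n_treated):
    dd = np.zeros(N, dtype=bool)
    dd[list(combo)] = True
    stats.append(observed[dd].mean() - observed[~dd].mean())

# Exact one-sided p-value: share of assignments with a statistic at least
# as large as the one observed.
p_value = float(np.mean(np.array(stats) >= obs_stat))
```

Because only the assignment is random, the p-value is exact over the 20 possible assignments; no sampling model for the units is invoked, which is the key contrast with the classical sampling perspective discussed in the abstract.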

Suggested Citation

  • Jiafeng Chen & Jonathan Roth & Jann Spiess, 2025. "Testing Monotonicity in a Finite Population," Papers 2512.25032, arXiv.org, revised Jan 2026.
  • Handle: RePEc:arx:papers:2512.25032

    Download full text from publisher

    File URL: http://arxiv.org/pdf/2512.25032
    File Function: Latest version
    Download Restriction: no

    References listed on IDEAS

    1. Xinran Li & Peng Ding, 2017. "General Forms of Finite Population Central Limit Theorems with Applications to Causal Inference," Journal of the American Statistical Association, Taylor & Francis Journals, vol. 112(520), pages 1759-1769, October.
    2. Charles F. Manski, 1997. "Monotone Treatment Response," Econometrica, Econometric Society, vol. 65(6), pages 1311-1334, November.
    3. Ashesh Rambachan & Jonathan Roth, 2020. "Design-Based Uncertainty for Quasi-Experiments," Papers 2008.00602, arXiv.org, revised Jun 2025.
    4. Brendan Kline & Matthew A. Masten, 2025. "Finite Population Identification and Design-Based Sensitivity Analysis," Papers 2504.14127, arXiv.org, revised Mar 2026.
    5. Imbens, Guido W & Angrist, Joshua D, 1994. "Identification and Estimation of Local Average Treatment Effects," Econometrica, Econometric Society, vol. 62(2), pages 467-475, March.
    6. Alberto Abadie & Susan Athey & Guido W. Imbens & Jeffrey M. Wooldridge, 2020. "Sampling‐Based versus Design‐Based Uncertainty in Regression Analysis," Econometrica, Econometric Society, vol. 88(1), pages 265-296, January.
    7. Neil Christy & Amanda Ellen Kowalski, 2024. "Counting Defiers: A Design-Based Model of an Experiment Can Reveal Evidence Beyond the Average Effect," Papers 2412.16352, arXiv.org, revised Mar 2026.
    8. James J. Heckman & Jeffrey Smith & Nancy Clements, 1997. "Making The Most Out Of Programme Evaluations and Social Experiments: Accounting For Heterogeneity in Programme Impacts," The Review of Economic Studies, Review of Economic Studies Ltd, vol. 64(4), pages 487-535.
9. Peng Ding & Luke W. Miratrix, 2019. "Model‐free causal inference of binary experimental data," Scandinavian Journal of Statistics, Danish Society for Theoretical Statistics; Finnish Statistical Society; Norwegian Statistical Association; Swedish Statistical Association, vol. 46(1), pages 200-214, March.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Neil Christy & Amanda Ellen Kowalski, 2024. "Counting Defiers: A Design-Based Model of an Experiment Can Reveal Evidence Beyond the Average Effect," Papers 2412.16352, arXiv.org, revised Mar 2026.
    2. Vishal Kamat, 2017. "Identifying the Effects of a Program Offer with an Application to Head Start," Papers 1711.02048, arXiv.org, revised Aug 2023.
    3. Ashesh Rambachan & Jonathan Roth, 2020. "Design-Based Uncertainty for Quasi-Experiments," Papers 2008.00602, arXiv.org, revised Jun 2025.
4. Mogstad, Magne & Torgovitsky, Alexander, 2024. "Instrumental variables with unobserved heterogeneity in treatment effects," Handbook of Labor Economics, Elsevier.
    5. Sung Jae Jun & Sokbae Lee, 2023. "Identifying the Effect of Persuasion," Journal of Political Economy, University of Chicago Press, vol. 131(8), pages 2032-2058.
    6. Manski, Charles F., 2000. "Identification problems and decisions under ambiguity: Empirical analysis of treatment response and normative analysis of treatment choice," Journal of Econometrics, Elsevier, vol. 95(2), pages 415-442, April.
    7. Daniel Ober-Reynolds, 2023. "Estimating Functionals of the Joint Distribution of Potential Outcomes with Optimal Transport," Papers 2311.09435, arXiv.org.
    8. Yuehao Bai & Shunzhuang Huang & Sarah Moon & Andres Santos & Azeem M. Shaikh & Edward J. Vytlacil, 2024. "Inference for Treatment Effects Conditional on Generalized Principal Strata using Instrumental Variables," Papers 2411.05220, arXiv.org, revised Nov 2025.
    9. Masayuki Sawada, 2019. "Noncompliance in randomized control trials without exclusion restrictions," Papers 1910.03204, arXiv.org, revised Jun 2021.
    10. Sungwon Lee, 2020. "Identification and Confidence Regions for Treatment Effect and its Distribution under Stochastic Dominance," Working Papers 2011, Nam Duck-Woo Economic Research Institute, Sogang University (Former Research Institute for Market Economy).
    11. Lee, Ji Hyung & Park, Byoung G., 2023. "Nonparametric identification and estimation of the extended Roy model," Journal of Econometrics, Elsevier, vol. 235(2), pages 1087-1113.
    12. Neil Christy & A. E. Kowalski, 2024. "Starting Small: Prioritizing Safety over Efficacy in Randomized Experiments Using the Exact Finite Sample Likelihood," Papers 2407.18206, arXiv.org.
    13. Jonathan Roth & Pedro H. C. Sant’Anna, 2023. "Efficient Estimation for Staggered Rollout Designs," Journal of Political Economy Microeconomics, University of Chicago Press, vol. 1(4), pages 669-709.
14. Clément de Chaisemartin & Antoine Deeb, 2024. "Estimating treatment-effect heterogeneity across sites, in multi-site randomized experiments with few units per site," Papers 2405.17254, arXiv.org, revised Dec 2024.
    15. Molinari, Francesca, 2020. "Microeconometrics with partial identification," Handbook of Econometrics, in: Steven N. Durlauf & Lars Peter Hansen & James J. Heckman & Rosa L. Matzkin (ed.), Handbook of Econometrics, edition 1, volume 7, chapter 0, pages 355-486, Elsevier.
    16. Zeyang Yu, 2024. "A Binary IV Model for Persuasion: Profiling Persuasion Types among Compliers," Papers 2411.16906, arXiv.org, revised Jul 2025.
    17. Michael Lechner, 2002. "Mikroökonometrische Evaluation arbeitsmarktpolitischer Massnahmen," University of St. Gallen Department of Economics working paper series 2002 2002-20, Department of Economics, University of St. Gallen.
    18. Charles F. Manski, 1999. "Statistical Treatment Rules for Heterogeneous Populations: With Application to Randomized Experiments," NBER Technical Working Papers 0242, National Bureau of Economic Research, Inc.
    19. Jeffrey Smith, 2000. "A Critical Survey of Empirical Methods for Evaluating Active Labor Market Policies," Swiss Journal of Economics and Statistics (SJES), Swiss Society of Economics and Statistics (SSES), vol. 136(III), pages 247-268, September.
    20. Yue Fang & Geert Ridder, 2025. "The Exact Variance of the Average Treatment Effect Estimator in Cluster Randomized Controlled Trials," Papers 2511.05801, arXiv.org, revised Dec 2025.

    More about this item

    NEP fields

    This paper has been announced in the following NEP Reports:

    Statistics


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:arx:papers:2512.25032. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: arXiv administrators (email available below). General contact details of provider: http://arxiv.org/ .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.