Printed from https://ideas.repec.org/a/sae/medema/v45y2025i8p1025-1033.html

Investigating Bias in the Evaluation Model Used to Evaluate the Effect of Breast Cancer Screening: A Simulation Study

Authors
  • Eeva-Liisa Røssell

    (Department of Public Health, Aarhus University, Aarhus C, Denmark
    Steno Diabetes Center Aarhus, Aarhus University Hospital, Aarhus N, Denmark)

  • Jakob Hansen Viuff

    (Department of Clinical Epidemiology, Aarhus University, Aarhus N, Denmark)

  • Mette Lise Lousdal

    (Department of Clinical Epidemiology, Aarhus University, Aarhus N, Denmark)

  • Henrik Støvring

    (Department of Public Health, Aarhus University, Aarhus C, Denmark
    Steno Diabetes Center Aarhus, Aarhus University Hospital, Aarhus N, Denmark)

Abstract

Background. Observational studies are used to evaluate the effect of breast cancer screening programs, but their validity depends on the study design used. One such design is the evaluation model, which extends follow-up beyond the screening program only for women diagnosed with breast cancer during the program. However, to avoid lead-time bias, the inclusion of risk time should be based on screening invitation, not breast cancer diagnosis. The aim of this study was to investigate potential bias induced by the evaluation model.

Methods. We used large-scale simulated datasets to investigate the evaluation model. Simulation model parameters for age-dependent breast cancer incidence, survival, breast cancer mortality, and all-cause mortality were obtained from Norwegian registries. Data were restricted to women aged 48 to 90 y and to a period before screening implementation, 1986 to 1995. Simulation parameters were estimated for each of 2 periods (1986–1990 and 1991–1995). In the simulated datasets, 50% of women were randomly assigned to screening and 50% were not. Simulation scenarios varied the magnitude of the screening effect and the level of overdiagnosis. For each scenario, we applied 2 study designs, the evaluation model and ordinary incidence-based mortality, to estimate breast cancer mortality rates in the screening and nonscreening groups. For each design, these rates were compared to assess potential bias.

Results. In scenarios with no screening effect and no overdiagnosis, the evaluation model estimated 6% to 8% reductions in breast cancer mortality due to lead-time bias. The bias increased with overdiagnosis.

Conclusions. The evaluation model was biased by lead time, especially in scenarios with overdiagnosis. Thus, attempting to capture more of the screening effect with the evaluation model comes at the risk of introducing bias.
Highlights

  • The validity of observational studies of breast cancer screening programs depends on their study design being able to eliminate lead-time bias.
  • The evaluation model has been used to evaluate breast cancer screening in recent studies but bases follow-up on breast cancer diagnosis, which may introduce lead-time bias.
  • We used large-scale simulated datasets to compare study designs used to evaluate screening.
  • We found that the evaluation model was biased by lead time and estimated reductions in breast cancer mortality even in scenarios with no screening effect.
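The lead-time mechanism described in the abstract can be sketched in a small null-scenario simulation. This is an illustration only: the parameters (uniform diagnosis times, exponential survival, a fixed 2-y mean lead time, 15 y of follow-up) are assumptions for exposition, not the authors' registry-based values, and both designs are simplified to count the same deaths, differing only in when person-time starts.

```python
import random

random.seed(1)

N = 200_000        # women per arm (illustrative)
FOLLOW_UP = 15.0   # years of follow-up after invitation (assumed)
LEAD_TIME = 2.0    # lead time gained by screen detection (assumed)

def simulate_arm(screened):
    """Return (deaths, py_eval, py_ibm) for one arm.

    Each woman gets a clinical diagnosis time and a survival time after
    clinical diagnosis. Screening only advances the diagnosis (lead time);
    it does NOT change the date of death -> a null scenario, no true effect.
    """
    deaths = 0
    py_eval = 0.0  # person-years counted from diagnosis (evaluation model)
    py_ibm = 0.0   # person-years counted from invitation (incidence-based)
    for _ in range(N):
        t_clin = random.uniform(0.0, FOLLOW_UP)   # clinical diagnosis time
        surv = random.expovariate(1.0 / 8.0)      # survival after clinical dx
        t_death = t_clin + surv
        t_dx = max(0.0, t_clin - LEAD_TIME) if screened else t_clin
        end = min(t_death, FOLLOW_UP)
        if t_death <= FOLLOW_UP:
            deaths += 1
        py_eval += end - t_dx   # follow-up starts at diagnosis
        py_ibm += end           # follow-up starts at invitation (t = 0)
    return deaths, py_eval, py_ibm

d0, e0, i0 = simulate_arm(screened=False)
d1, e1, i1 = simulate_arm(screened=True)

# Rate ratios (screening / control); 1.0 means no apparent effect.
rr_eval = (d1 / e1) / (d0 / e0)
rr_ibm = (d1 / i1) / (d0 / i0)
print(f"evaluation-model rate ratio:     {rr_eval:.3f}")  # < 1: spurious benefit
print(f"incidence-based mortality ratio: {rr_ibm:.3f}")   # ~ 1: unbiased
```

Because screening advances diagnosis but not death, the screened arm accrues extra person-time between screen detection and the counterfactual clinical diagnosis. Starting follow-up at diagnosis (evaluation model) inflates the denominator and yields a rate ratio below 1 despite zero true effect, whereas starting follow-up at invitation (incidence-based mortality) yields a ratio near 1. The magnitude of the spurious reduction here is exaggerated relative to the paper's 6% to 8% because the assumed lead time is large relative to the follow-up window.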

Suggested Citation

  • Eeva-Liisa Røssell & Jakob Hansen Viuff & Mette Lise Lousdal & Henrik Støvring, 2025. "Investigating Bias in the Evaluation Model Used to Evaluate the Effect of Breast Cancer Screening: A Simulation Study," Medical Decision Making, vol. 45(8), pages 1025-1033, November.
  • Handle: RePEc:sae:medema:v:45:y:2025:i:8:p:1025-1033
    DOI: 10.1177/0272989X251352570

    Download full text from publisher

    File URL: https://journals.sagepub.com/doi/10.1177/0272989X251352570
    Download Restriction: no

    File URL: https://libkey.io/10.1177/0272989X251352570?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item


    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.