Printed from https://ideas.repec.org/a/nat/natcom/v16y2025i1d10.1038_s41467-025-61116-2.html

A foundational model for in vitro fertilization trained on 18 million time-lapse images

Authors

Listed:
  • Suraj Rajendran

(Weill Cornell Medicine of Cornell University
    Weill Cornell Medicine)

  • Eeshaan Rehani

    (Weill Cornell Medicine of Cornell University
    Weill Cornell Medicine
    Cornell University)

  • William Phu

    (Weill Cornell Medicine of Cornell University
    Weill Cornell Medicine)

  • Qiansheng Zhan

    (Weill Cornell Medicine)

  • Jonas E. Malmsten

    (Weill Cornell Medicine)

  • Marcos Meseguer

    (IVIRMA Valencia
    Instituto de Investigación Sanitaria La Fe (IIS La Fe))

  • Kathleen A. Miller

    (IVF Florida Reproductive Associates)

  • Zev Rosenwaks

    (Weill Cornell Medicine)

  • Olivier Elemento

    (Weill Cornell Medicine of Cornell University
    Weill Cornell Medicine)

  • Nikica Zaninovic

    (Weill Cornell Medicine)

  • Iman Hajirasouliha

    (Weill Cornell Medicine of Cornell University
    Weill Cornell Medicine)

Abstract

Embryo assessment in in vitro fertilization (IVF) involves multiple tasks—including ploidy prediction, quality scoring, component segmentation, embryo identification, and timing of developmental milestones. Existing methods address these tasks individually, leading to inefficiencies due to high costs and lack of standardization. Here, we introduce FEMI (Foundational IVF Model for Imaging), a foundation model trained on approximately 18 million time-lapse embryo images. We evaluate FEMI on ploidy prediction, blastocyst quality scoring, embryo component segmentation, embryo witnessing, blastulation time prediction, and stage prediction. FEMI attains an area under the receiver operating characteristic curve (AUROC) > 0.75 for ploidy prediction using only image data—significantly outpacing benchmark models. It achieves higher accuracy than both traditional and deep-learning approaches for overall blastocyst quality and its subcomponents. Moreover, FEMI performs strongly in embryo witnessing, blastulation-time prediction, and stage prediction. Our results demonstrate that FEMI can leverage large-scale, unlabelled data to improve predictive accuracy in several embryology-related tasks in IVF.

Suggested Citation

  • Suraj Rajendran & Eeshaan Rehani & William Phu & Qiansheng Zhan & Jonas E. Malmsten & Marcos Meseguer & Kathleen A. Miller & Zev Rosenwaks & Olivier Elemento & Nikica Zaninovic & Iman Hajirasouliha, 2025. "A foundational model for in vitro fertilization trained on 18 million time-lapse images," Nature Communications, Nature, vol. 16(1), pages 1-15, December.
  • Handle: RePEc:nat:natcom:v:16:y:2025:i:1:d:10.1038_s41467-025-61116-2
    DOI: 10.1038/s41467-025-61116-2

    Download full text from publisher

    File URL: https://www.nature.com/articles/s41467-025-61116-2
    File Function: Abstract
    Download Restriction: no

    File URL: https://libkey.io/10.1038/s41467-025-61116-2?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    References listed on IDEAS

    1. Suraj Rajendran & Matthew Brendel & Josue Barnes & Qiansheng Zhan & Jonas E. Malmsten & Pantelis Zisimopoulos & Alexandros Sigaras & Kwabena Ofori-Atta & Marcos Meseguer & Kathleen A. Miller & David H, 2024. "Automatic ploidy prediction and quality assessment of human blastocysts using time-lapse imaging," Nature Communications, Nature, vol. 15(1), pages 1-10, December.
    2. Michael Moor & Oishi Banerjee & Zahra Shakeri Hossein Abad & Harlan M. Krumholz & Jure Leskovec & Eric J. Topol & Pranav Rajpurkar, 2023. "Foundation models for generalist medical artificial intelligence," Nature, Nature, vol. 616(7956), pages 259-265, April.
    3. Jun Ma & Yuting He & Feifei Li & Lin Han & Chenyu You & Bo Wang, 2024. "Segment anything in medical images," Nature Communications, Nature, vol. 15(1), pages 1-9, December.
    4. Yukun Zhou & Mark A. Chia & Siegfried K. Wagner & Murat S. Ayhan & Dominic J. Williamson & Robbert R. Struyven & Timing Liu & Moucheng Xu & Mateo G. Lozano & Peter Woodward-Court & Yuka Kihara & Andre, 2023. "A foundation model for generalizable disease detection from retinal images," Nature, Nature, vol. 622(7981), pages 156-163, October.
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Senliang Lu & Yehang Chen & Yuan Chen & Peijun Li & Junqi Sun & Changye Zheng & Yujian Zou & Bo Liang & Mingwei Li & Qinggeng Jin & Enming Cui & Wansheng Long & Bao Feng, 2025. "General lightweight framework for vision foundation model supporting multi-task and multi-center medical image analysis," Nature Communications, Nature, vol. 16(1), pages 1-16, December.
    2. Weijian Huang & Cheng Li & Hong-Yu Zhou & Hao Yang & Jiarun Liu & Yong Liang & Hairong Zheng & Shaoting Zhang & Shanshan Wang, 2024. "Enhancing representation in radiography-reports foundation model: a granular alignment algorithm using masked contrastive learning," Nature Communications, Nature, vol. 15(1), pages 1-12, December.
    3. Pengcheng Qiu & Chaoyi Wu & Xiaoman Zhang & Weixiong Lin & Haicheng Wang & Ya Zhang & Yanfeng Wang & Weidi Xie, 2024. "Towards building multilingual language model for medicine," Nature Communications, Nature, vol. 15(1), pages 1-15, December.
    4. Meng Wang & Tian Lin & Aidi Lin & Kai Yu & Yuanyuan Peng & Lianyu Wang & Cheng Chen & Ke Zou & Huiyu Liang & Man Chen & Xue Yao & Meiqin Zhang & Binwei Huang & Chaoxin Zheng & Peixin Zhang & Wei Chen , 2025. "Enhancing diagnostic accuracy in rare and common fundus diseases with a knowledge-rich vision-language model," Nature Communications, Nature, vol. 16(1), pages 1-17, December.
    5. Cosmin I. Bercea & Benedikt Wiestler & Daniel Rueckert & Julia A. Schnabel, 2025. "Evaluating normative representation learning in generative AI for robust anomaly detection in brain imaging," Nature Communications, Nature, vol. 16(1), pages 1-10, December.
    6. Maksim Makarenko & Arturo Burguete-Lopez & Qizhou Wang & Silvio Giancola & Bernard Ghanem & Luca Passone & Andrea Fratalocchi, 2024. "Hardware-accelerated integrated optoelectronic platform towards real-time high-resolution hyperspectral video understanding," Nature Communications, Nature, vol. 15(1), pages 1-12, December.
    7. Li Zhang & Basu Jindal & Ahmed Alaa & Robert Weinreb & David Wilson & Eran Segal & James Zou & Pengtao Xie, 2025. "Generative AI enables medical image segmentation in ultra low-data regimes," Nature Communications, Nature, vol. 16(1), pages 1-22, December.
    8. Thiers, Fabio A. & Lucy, Kimberly, 2024. "A Distinct Approach to Clinical GenAI Oversight," OSF Preprints vm6zy, Center for Open Science.
    9. Fasheng Xu & Jing Hou & Wei Chen & Karen Xie, 2025. "Generative AI and Organizational Structure in the Knowledge Economy," Papers 2506.00532, arXiv.org.
    10. Oded Rotem & Tamar Schwartz & Ron Maor & Yishay Tauber & Maya Tsarfati Shapiro & Marcos Meseguer & Daniella Gilboa & Daniel S. Seidman & Assaf Zaritsky, 2024. "Visual interpretability of image-based classification models by generative latent space disentanglement applied to in vitro fertilization," Nature Communications, Nature, vol. 15(1), pages 1-19, December.
    11. Jingbo Liu & Fan Jiang & Shinichi Tashiro & Shujun Chen & Manabu Tanaka, 2025. "A physics-informed and data-driven framework for robotic welding in manufacturing," Nature Communications, Nature, vol. 16(1), pages 1-18, December.
    12. Mylene W. M. Yao & Elizabeth T. Nguyen & Matthew G. Retzloff & L. April Gago & John E. Nichols & John F. Payne & Barry A. Ripps & Michael Opsahl & Jeremy Groll & Ronald Beesley & Gregory Neal & Jaye A, 2025. "Machine learning center-specific models show improved IVF live birth predictions over US national registry-based model," Nature Communications, Nature, vol. 16(1), pages 1-14, December.
    13. Wasfieh Nazzal & Karl Thurnhofer-Hemsi & Ezequiel López-Rubio, 2024. "Improving Medical Image Segmentation Using Test-Time Augmentation with MedSAM," Mathematics, MDPI, vol. 12(24), pages 1-22, December.
    14. Marc Schmitt & Pantelis Koutroumpis, 2025. "Cyber Shadows: Neutralizing Security Threats with AI and Targeted Policy Measures," Papers 2501.09025, arXiv.org, revised Jan 2025.
    15. Zhaochang Yang & Ting Wei & Ying Liang & Xin Yuan & RuiTian Gao & Yujia Xia & Jie Zhou & Yue Zhang & Zhangsheng Yu, 2025. "A foundation model for generalizable cancer diagnosis and survival prediction from histopathological images," Nature Communications, Nature, vol. 16(1), pages 1-16, December.
    16. Yujin Oh & Sangjoon Park & Hwa Kyung Byun & Yeona Cho & Ik Jae Lee & Jin Sung Kim & Jong Chul Ye, 2024. "LLM-driven multimodal target volume contouring in radiation oncology," Nature Communications, Nature, vol. 15(1), pages 1-14, December.
    17. Erik Cuevas & Alberto Luque & Fernando Vega & Daniel Zaldívar & Jesús López, 2024. "Social influence dynamics for image segmentation: a novel pixel interaction approach," Journal of Computational Social Science, Springer, vol. 7(3), pages 2613-2642, December.
    18. Chuang Niu & Qing Lyu & Christopher D. Carothers & Parisa Kaviani & Josh Tan & Pingkun Yan & Mannudeep K. Kalra & Christopher T. Whitlow & Ge Wang, 2025. "Medical multimodal multitask foundation model for lung cancer screening," Nature Communications, Nature, vol. 16(1), pages 1-16, December.
    19. Zhou, Wuping & Xu, Chunchun & Zhang, Lanyue & Fu, Hongqiao & Jian, Weiyan, 2025. "Behaviours and drivers of diagnosis-related group upcoding in China: A mixed-methods study," Social Science & Medicine, Elsevier, vol. 366(C).
    20. Li, Guanglei & Wang, Guohao & Luo, Tengqi & Hu, Yuxiao & Wu, Shouyuan & Gong, Guanghui & Song, Chenchen & Guo, Zhiling & Liu, Zhengguang, 2024. "SolarSAM: Building-scale photovoltaic potential assessment based on Segment Anything Model (SAM) and remote sensing for emerging city," Renewable Energy, Elsevier, vol. 237(PA).

    More about this item

    Statistics

    Access and download statistics

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:nat:natcom:v:16:y:2025:i:1:d:10.1038_s41467-025-61116-2. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing (email available below). General contact details of provider: http://www.nature.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.