
A vision–language foundation model for precision oncology

Author

Listed:
  • Jinxi Xiang (Stanford University School of Medicine)
  • Xiyue Wang (Stanford University School of Medicine)
  • Xiaoming Zhang (Stanford University School of Medicine)
  • Yinghua Xi (Stanford University School of Medicine)
  • Feyisope Eweje (Stanford University School of Medicine)
  • Yijiang Chen (Stanford University School of Medicine)
  • Yuchen Li (Stanford University School of Medicine)
  • Colin Bergstrom (Stanford University School of Medicine)
  • Matthew Gopaulchan (Stanford University School of Medicine)
  • Ted Kim (Stanford University School of Medicine)
  • Kun-Hsing Yu (Harvard Medical School)
  • Sierra Willens (Stanford University School of Medicine)
  • Francesca Maria Olguin (Stanford University School of Medicine)
  • Jeffrey J. Nirschl (Stanford University School of Medicine)
  • Joel Neal (Stanford University School of Medicine)
  • Maximilian Diehn (Stanford University School of Medicine)
  • Sen Yang (Stanford University School of Medicine)
  • Ruijiang Li (Stanford University School of Medicine; Stanford Institute for Human-Centered Artificial Intelligence)

Abstract

Clinical decision-making is driven by multimodal data, including clinical notes and pathological characteristics. Artificial intelligence approaches that can effectively integrate multimodal data hold significant promise in advancing clinical care [1,2]. However, the scarcity of well-annotated multimodal datasets in clinical settings has hindered the development of useful models. In this study, we developed the Multimodal transformer with Unified maSKed modeling (MUSK), a vision–language foundation model designed to leverage large-scale, unlabelled, unpaired image and text data. MUSK was pretrained on 50 million pathology images from 11,577 patients and one billion pathology-related text tokens using unified masked modelling. It was further pretrained on one million pathology image–text pairs to efficiently align the vision and language features. With minimal or no further training, MUSK was tested in a wide range of applications and demonstrated superior performance across 23 patch-level and slide-level benchmarks, including image-to-text and text-to-image retrieval, visual question answering, image classification and molecular biomarker prediction. Furthermore, MUSK showed strong performance in outcome prediction, including melanoma relapse prediction, pan-cancer prognosis prediction and immunotherapy response prediction in lung and gastro-oesophageal cancers. MUSK effectively combined complementary information from pathology images and clinical reports and could potentially improve diagnosis and precision in cancer therapy.
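The abstract describes a two-stage pretraining recipe: unified masked modelling on large unpaired image and text corpora, followed by pretraining on image–text pairs to align the vision and language features. As context for readers unfamiliar with vision–language alignment, below is a minimal NumPy sketch of a generic symmetric (CLIP-style) contrastive objective, which is one common way such alignment is done. This is purely illustrative: it is not taken from the paper, it may differ from MUSK's actual alignment objective, and all function names, shapes and inputs are hypothetical.

    import numpy as np

    def l2_normalize(x, axis=-1, eps=1e-8):
        # Scale each row to unit length so dot products become cosine similarities.
        return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

    def cross_entropy(logits, labels):
        # Softmax cross-entropy with a max-shift for numerical stability.
        logits = logits - logits.max(axis=1, keepdims=True)
        log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(labels)), labels].mean()

    def symmetric_contrastive_loss(image_emb, text_emb, temperature=0.07):
        # image_emb, text_emb: (batch, dim) outputs of a vision encoder and a text
        # encoder for a batch of paired images and report snippets (hypothetical inputs).
        image_emb = l2_normalize(image_emb)
        text_emb = l2_normalize(text_emb)
        logits = image_emb @ text_emb.T / temperature   # pairwise similarity matrix
        labels = np.arange(logits.shape[0])             # matching pairs lie on the diagonal
        # Average the image-to-text and text-to-image classification losses.
        return 0.5 * (cross_entropy(logits, labels) + cross_entropy(logits.T, labels))

    # Toy usage with random embeddings standing in for encoder outputs.
    rng = np.random.default_rng(0)
    loss = symmetric_contrastive_loss(rng.normal(size=(8, 256)), rng.normal(size=(8, 256)))
    print(round(float(loss), 4))

Each image embedding is trained to score highest against its own caption (and vice versa) among all items in the batch, which pulls paired image and text features together in a shared space; the masked-modelling stage described in the abstract is a separate, unpaired pretraining step not shown here.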

Suggested Citation

  • Jinxi Xiang & Xiyue Wang & Xiaoming Zhang & Yinghua Xi & Feyisope Eweje & Yijiang Chen & Yuchen Li & Colin Bergstrom & Matthew Gopaulchan & Ted Kim & Kun-Hsing Yu & Sierra Willens & Francesca Maria Ol, 2025. "A vision–language foundation model for precision oncology," Nature, Nature, vol. 638(8051), pages 769-778, February.
  • Handle: RePEc:nat:nature:v:638:y:2025:i:8051:d:10.1038_s41586-024-08378-w
    DOI: 10.1038/s41586-024-08378-w

    Download full text from publisher

    File URL: https://www.nature.com/articles/s41586-024-08378-w
    File Function: Abstract
    Download Restriction: Access to the full text of the articles in this series is restricted.

    File URL: https://libkey.io/10.1038/s41586-024-08378-w?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to a copy you can access through your library subscription.
    ---><---

    As access to this document is restricted, you may want to search for a different version of it.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:nat:nature:v:638:y:2025:i:8051:d:10.1038_s41586-024-08378-w. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. Registering links your profile to this item and also allows you to accept potential citations to this item that we are uncertain about.

    We have no bibliographic references for this item. You can help add them by using this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing (email available below). General contact details of provider: http://www.nature.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.