Printed from https://ideas.repec.org/a/nat/natcom/v16y2025i1d10.1038_s41467-025-62060-x.html

Large-vocabulary forensic pathological analyses via prototypical cross-modal contrastive learning

Authors

Listed:
  • Chen Shen

    (Xi’an Jiaotong University)

  • Chunfeng Lian

    (Xi’an Jiaotong University
    Pazhou Lab (Huangpu))

  • Wanqing Zhang

    (Xi’an Jiaotong University)

  • Fan Wang

    (Xi’an Jiaotong University)

  • Jianhua Zhang

    (Academy of Forensic Science)

  • Shuanliang Fan

    (Xi’an Jiaotong University)

  • Xin Wei

    (Xi’an Jiaotong University)

  • Gongji Wang

    (Xi’an Jiaotong University)

  • Kehan Li

    (Xi’an Jiaotong University)

  • Hongshu Mu

    (Xian’yang Public Security Bureau)

  • Hao Wu

    (Xi’an Jiaotong University)

  • Xinggong Liang

    (Xi’an Jiaotong University)

  • Jianhua Ma

    (Pazhou Lab (Huangpu)
    Xi’an Jiaotong University)

  • Zhenyuan Wang

    (Xi’an Jiaotong University)

Abstract

Forensic pathology plays a vital role in determining the cause and manner of death through macroscopic and microscopic post-mortem examinations. However, the field faces challenges such as variability in outcomes, labor-intensive processes, and a shortage of skilled professionals. This paper introduces SongCi, a visual-language model tailored for forensic pathology. Leveraging advanced prototypical cross-modal self-supervised contrastive learning, SongCi improves the accuracy, efficiency, and generalizability of forensic analyses. Pre-trained and validated on a large multi-center dataset comprising over 16 million high-resolution image patches, 2,228 vision-language pairs from post-mortem whole slide images, gross key findings, and 471 unique diagnostic outcomes, SongCi demonstrates superior performance over existing multi-modal models and computational pathology foundation models in forensic tasks. It matches experienced forensic pathologists’ capabilities, significantly outperforms less experienced practitioners, and offers robust multi-modal explainability.
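The core idea named in the abstract — prototypical cross-modal contrastive learning — combines two standard ingredients: pooling many patch embeddings from a whole slide image into a compact visual representation via learned prototypes, and aligning that representation with a text embedding through a symmetric InfoNCE contrastive loss. The sketch below is an illustrative NumPy rendition of those two generic building blocks, not SongCi's actual implementation; all function names, the soft-assignment pooling scheme, and the temperature value are assumptions for exposition.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Scale rows to unit length so dot products become cosine similarities."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def prototype_aggregate(patch_embs, prototypes):
    """Pool per-patch embeddings (P, D) into one slide-level vector (D,).

    Patches are soft-assigned to K prototypes (K, D); the prototype-weighted
    sums are then averaged. This is one common prototypical pooling scheme,
    used here purely as an illustration.
    """
    sims = l2_normalize(patch_embs) @ l2_normalize(prototypes).T   # (P, K)
    weights = np.exp(sims) / np.exp(sims).sum(axis=1, keepdims=True)
    slide_emb = (weights.T @ patch_embs).mean(axis=0)              # (D,)
    return l2_normalize(slide_emb)

def cross_modal_infonce(img_embs, txt_embs, temperature=0.07):
    """Symmetric InfoNCE over a batch of N matched image/text embeddings.

    Matched pairs sit on the diagonal of the similarity matrix; the loss
    pushes each image toward its own text and away from the other N-1.
    """
    img = l2_normalize(img_embs)
    txt = l2_normalize(txt_embs)
    logits = img @ txt.T / temperature                             # (N, N)
    labels = np.arange(len(img))

    def cross_entropy(lg):
        lg = lg - lg.max(axis=1, keepdims=True)                    # stability
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # Average the image-to-text and text-to-image directions.
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
```

In a pre-training loop of this shape, `prototype_aggregate` would produce the image-side embedding for each slide, and `cross_modal_infonce` would be minimized over batches pairing those embeddings with encoded gross findings or diagnostic text; perfectly aligned pairs drive the loss toward zero, while unrelated pairs keep it near log N.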

Suggested Citation

  • Chen Shen & Chunfeng Lian & Wanqing Zhang & Fan Wang & Jianhua Zhang & Shuanliang Fan & Xin Wei & Gongji Wang & Kehan Li & Hongshu Mu & Hao Wu & Xinggong Liang & Jianhua Ma & Zhenyuan Wang, 2025. "Large-vocabulary forensic pathological analyses via prototypical cross-modal contrastive learning," Nature Communications, Nature, vol. 16(1), pages 1-20, December.
  • Handle: RePEc:nat:natcom:v:16:y:2025:i:1:d:10.1038_s41467-025-62060-x
    DOI: 10.1038/s41467-025-62060-x

    Download full text from publisher

    File URL: https://www.nature.com/articles/s41467-025-62060-x
    File Function: Abstract
    Download Restriction: no

    File URL: https://libkey.io/10.1038/s41467-025-62060-x?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:nat:natcom:v:16:y:2025:i:1:d:10.1038_s41467-025-62060-x. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item and to accept potential citations to this item that we are uncertain about.

    We have no bibliographic references for this item. You can help add them by using this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing (email available below). General contact details of provider: http://www.nature.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.