Printed from https://ideas.repec.org/a/plo/pdig00/0001248.html

Leveraging deep learning to infer continuous predictions from ordinal labels in medical imaging

Authors

Listed:
  • Katharina V Hoebel
  • Andréanne Lemay
  • John Peter Campbell
  • Susan Ostmo
  • Michael F Chiang
  • Christopher P Bridge
  • Matthew D Li
  • Praveer Singh
  • Aaron S Coyner
  • Jayashree Kalpathy-Cramer

Abstract

In clinical medicine, variables like disease severity are often categorized into discrete ordinal labels such as normal/mild/moderate/severe. However, these labels, commonly used to train and evaluate disease severity prediction models, simplify an underlying continuous severity spectrum. Continuous scores can detect small changes in severity over time more sensitively. We introduce a deep learning-based approach that predicts continuously valued variables from medical images using only discrete ordinal labels during model development. We evaluated this approach using three medical imaging datasets: disease severity prediction for retinopathy of prematurity and knee osteoarthritis, and breast density prediction from mammograms. Deep learning models were trained with discrete labels, and model outputs were transformed into continuous scores. These were then compared against detailed expert severity assessments, which exceeded the granularity of the training labels. Our study explored conventional and Monte Carlo dropout multi-class classification, ordinal classification, regression, and twin models. We found that models incorporating the ordinal nature of the training labels significantly outperformed conventional multi-class classification. Notably, continuous scores from ordinal classification and regression models demonstrated a higher correlation with expert severity rankings and lower mean squared errors than multi-class models. The application of Monte Carlo dropout further enhanced the prediction accuracy of continuously valued scores, aligning closely with the continuous target variable. Our findings confirm that accurate continuous scores can be learned from discrete ordinal labels using deep learning, offering a robust method that effectively bridges the gap between discrete and continuous data across various image analysis tasks.

Author summary

Physicians often describe disease severity using categories like mild, moderate, or severe. However, disease severity exists on a continuous scale, with small differences that the commonly used broad categories cannot capture. This can make it harder to track changes over time. Our study systematically assesses how deep learning models can be trained to predict more precise scores for disease severity from images, even when they are trained using only simple discrete severity labels. We tested our approach on three distinct prediction tasks in medical imaging: retinopathy of prematurity, knee osteoarthritis, and breast density. We found that models that respect the inherent ordinal structure of the training labels generate more precise continuous scores, closely aligning with expert assessments. Furthermore, incorporating Monte Carlo dropout further improved the accuracy of these predictions. In summary, our findings show that the gap between categorical ordinal labels and the continuous nature of disease progression can be closed, enabling more sensitive assessments of disease severity. Ultimately, providing more detailed automatic assessments of disease severity could improve clinical decision-making by allowing earlier detection of disease deterioration and more personalized treatment planning.
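The core idea described above — mapping a model's discrete ordinal outputs onto a continuous severity score — can be illustrated with a minimal sketch. This is not the authors' implementation; the function names are hypothetical, and it assumes a classifier that emits softmax probabilities over ordinal grades. One common reading of such an approach is to take the probability-weighted expectation over the grade indices, and to average that score over stochastic forward passes when Monte Carlo dropout is used:

```python
import numpy as np

def continuous_score_from_probs(probs):
    """Map per-class probabilities over ordinal grades 0..K-1 to a
    continuous score via the probability-weighted expectation.
    Illustrative only; the paper evaluates several strategies
    (ordinal classification, regression, twin models)."""
    probs = np.asarray(probs, dtype=float)
    grades = np.arange(probs.shape[-1])  # ordinal grade indices 0, 1, ..., K-1
    return float(probs @ grades)

def mc_dropout_score(prob_samples):
    """Average the continuous score over T stochastic forward passes
    (Monte Carlo dropout), one hedged sketch of how MC dropout can
    refine the continuous prediction."""
    return float(np.mean([continuous_score_from_probs(p) for p in prob_samples]))

# Example: softmax output for grades normal/mild/moderate/severe (0-3)
p = [0.05, 0.15, 0.60, 0.20]
score = continuous_score_from_probs(p)  # 1.95, between "mild" and "moderate"
```

The expectation trick is what lets a model trained only on discrete labels emit values between grades: a probability mass split across neighboring grades yields a fractional score rather than a hard class.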

Suggested Citation

  • Katharina V Hoebel & Andréanne Lemay & John Peter Campbell & Susan Ostmo & Michael F Chiang & Christopher P Bridge & Matthew D Li & Praveer Singh & Aaron S Coyner & Jayashree Kalpathy-Cramer, 2026. "Leveraging deep learning to infer continuous predictions from ordinal labels in medical imaging," PLOS Digital Health, Public Library of Science, vol. 5(4), pages 1-17, April.
  • Handle: RePEc:plo:pdig00:0001248
    DOI: 10.1371/journal.pdig.0001248

    Download full text from publisher

    File URL: https://journals.plos.org/digitalhealth/article?id=10.1371/journal.pdig.0001248
    Download Restriction: no

    File URL: https://journals.plos.org/digitalhealth/article/file?id=10.1371/journal.pdig.0001248&type=printable
    Download Restriction: no

    File URL: https://libkey.io/10.1371/journal.pdig.0001248?utm_source=ideas
    LibKey link: If access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:plo:pdig00:0001248. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    We have no bibliographic references for this item. You can help add them by using this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: digitalhealth (email available below). General contact details of provider: https://journals.plos.org/digitalhealth .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.