Author
Listed:
- Junhyeok Lee
- Hyungjin Chung
- Minseok Suh
- Jeong-Hoon Lee
- Kyu Sung Choi
Abstract
Deep learning (DL) models are widely adopted in biomedical imaging, where image segmentation is increasingly recognized as a quantitative tool for extracting clinically meaningful information. However, model performance critically depends on dataset size and training configuration, including model capacity. Traditional sample size estimation methods are inadequate for DL due to its reliance on high-dimensional data and its nonlinear learning behavior. To address this gap, we propose a DL-specific framework to estimate the minimal dataset size required for stable segmentation performance. We validate this framework across two distinct clinical tasks: colorectal polyp segmentation from 2D endoscopic images (Kvasir-SEG) and glioma segmentation from 3D brain MRIs (BraTS 2020). We trained residual U-Nets, a simple yet foundational architecture, across 200 configurations for Kvasir-SEG and 40 configurations for BraTS 2020, varying data subsets (2%–100% for the 2D task and 5%–100% for the 3D task). In both tasks, performance metrics such as the Dice Similarity Coefficient (DSC) consistently improved with increasing data and depth, but gains invariably plateaued beyond approximately 80% data usage. The best configuration for polyp segmentation (6 layers, 100% data) achieved a DSC of 0.86, while the best for brain tumor segmentation reached a DSC of 0.79. Critically, we introduce a surrogate modeling pipeline using Long Short-Term Memory (LSTM) networks to predict these performance curves. A simple uni-directional LSTM model accurately forecasted the final DSC with low mean absolute error across both tasks. These findings demonstrate that segmentation performance can be reliably estimated with lightweight models, suggesting that collecting a moderate amount of high-quality data is often sufficient for developing clinically viable DL models. Our framework provides a practical, empirical method for optimizing resource allocation in medical AI development.
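To make the surrogate idea in the abstract concrete, the following is a minimal sketch (not the authors' code) of a uni-directional LSTM that maps a partial performance curve, i.e., DSC values observed at increasing data fractions, to a prediction of the final DSC. The architecture size, hyperparameters, and synthetic learning curve below are illustrative assumptions only.

```python
# Illustrative sketch of an LSTM surrogate for forecasting final segmentation DSC
# from a partial data-fraction vs. DSC curve. All values here are placeholders.
import torch
import torch.nn as nn

class DSCSurrogate(nn.Module):
    def __init__(self, hidden_size=32):
        super().__init__()
        # Each timestep carries (data_fraction, observed_DSC).
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):
        # x: (batch, timesteps, 2); use the last hidden state to predict final DSC.
        _, (h_n, _) = self.lstm(x)
        return torch.sigmoid(self.head(h_n[-1])).squeeze(-1)

if __name__ == "__main__":
    # Toy saturating learning curve standing in for real per-subset DSC measurements.
    fractions = torch.tensor([0.02, 0.05, 0.10, 0.20, 0.40])
    dsc = 0.86 * (1 - torch.exp(-4.0 * fractions))
    seq = torch.stack([fractions, dsc], dim=-1).unsqueeze(0)   # (1, 5, 2)

    model = DSCSurrogate()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
    target = torch.tensor([0.86])                              # DSC at 100% data

    for _ in range(200):                                       # tiny training loop
        optimizer.zero_grad()
        loss = nn.functional.l1_loss(model(seq), target)       # MAE, as reported
        loss.backward()
        optimizer.step()

    print(f"predicted final DSC: {model(seq).item():.3f}")
```

In practice such a surrogate would be fit on many (configuration, partial curve, final DSC) triples from the training sweeps and then queried for unseen configurations; this toy single-curve fit only shows the input/output shape of the idea.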
Suggested Citation
Junhyeok Lee & Hyungjin Chung & Minseok Suh & Jeong-Hoon Lee & Kyu Sung Choi, 2025.
"Deep learning for deep learning performance: How much data is needed for segmentation in biomedical imaging?,"
PLOS ONE, Public Library of Science, vol. 20(12), pages 1-16, December.
Handle:
RePEc:plo:pone00:0339064
DOI: 10.1371/journal.pone.0339064