Authors
Listed:
- Eman Daraghmi
(Department of Computer Science, Palestine Technical University Kadoorie, Jaffa Street, Tulkarm 9993400, Palestine)
- Lour Atwe
(Department of Computer Science, Palestine Technical University Kadoorie, Jaffa Street, Tulkarm 9993400, Palestine)
- Areej Jaber
(Department of Computer Science, Palestine Technical University Kadoorie, Jaffa Street, Tulkarm 9993400, Palestine)
Abstract
This study presents a comprehensive comparative evaluation of three transformer-based models, PEGASUS, BART, and T5 (Small and Base variants), for abstractive text summarization. The evaluation spans three benchmark datasets: CNN/DailyMail (long-form news articles), XSum (extreme single-sentence summaries of BBC articles), and SAMSum (conversational dialogues). Each dataset poses distinct challenges in length, style, and domain, enabling a robust assessment of the models' capabilities. All models were fine-tuned under controlled experimental settings on filtered and preprocessed subsets, with token length limits applied to maintain consistency and prevent truncation. Summary quality was measured with ROUGE-1, ROUGE-2, and ROUGE-L scores, and efficiency metrics such as training time were also recorded. An additional qualitative assessment was conducted through expert human evaluation of fluency, relevance, and conciseness. Results indicate that PEGASUS achieved the highest ROUGE scores on CNN/DailyMail, BART excelled on XSum and SAMSum, and the T5 models, particularly T5-Base, narrowed the performance gap with the larger models while retaining efficiency advantages over PEGASUS and BART. These findings highlight the trade-off between summary quality and computational cost and offer practical guidance on model scaling: T5-Small favors lightweight efficiency, while T5-Base delivers stronger accuracy without excessive resource demands.
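For readers who want a concrete picture of the evaluation style the abstract describes, the minimal Python sketch below scores one generated summary per model family with ROUGE-1, ROUGE-2, and ROUGE-L. It assumes the Hugging Face transformers and evaluate packages (with rouge_score installed); the checkpoint names, the sample article, and the reference summary are illustrative stand-ins, not the fine-tuned models or filtered dataset subsets used in the paper.

```python
import evaluate
from transformers import pipeline

# One public checkpoint per model family compared in the study.
# These are stand-ins: the paper fine-tuned its own variants on
# filtered, preprocessed subsets of each dataset.
CHECKPOINTS = {
    "PEGASUS": "google/pegasus-cnn_dailymail",
    "BART": "facebook/bart-large-cnn",
    "T5-Small": "t5-small",
}

rouge = evaluate.load("rouge")

# Toy single-document example; the study aggregates over test splits.
article = (
    "The city council approved a new transit plan on Monday, allocating "
    "funds to expand bus routes and add bike lanes across downtown over "
    "the next two years."
)
reference = (
    "City council approves two-year transit plan expanding bus routes "
    "and bike lanes."
)

for name, ckpt in CHECKPOINTS.items():
    summarizer = pipeline("summarization", model=ckpt)
    # truncation=True caps the input at the model's maximum length,
    # mirroring the paper's token-length limits that keep inputs
    # consistent and prevent mid-text truncation from skewing results.
    prediction = summarizer(article, truncation=True)[0]["summary_text"]
    scores = rouge.compute(predictions=[prediction], references=[reference])
    print(name, {k: round(scores[k], 4) for k in ("rouge1", "rouge2", "rougeL")})
```

In the study itself, ROUGE would be aggregated over each dataset's full test split rather than a single example, with training time logged alongside to capture the efficiency side of the trade-off.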
Suggested Citation
Eman Daraghmi & Lour Atwe & Areej Jaber, 2025.
"A Comparative Study of PEGASUS, BART, and T5 for Text Summarization Across Diverse Datasets,"
Future Internet, MDPI, vol. 17(9), pages 1-33, August.
Handle:
RePEc:gam:jftint:v:17:y:2025:i:9:p:389-:d:1736648