Author
Listed:
- Yu Feng
(Beijing Renhe Information Technology Co., Ltd.
Key Laboratory of Digital Publishing and Total Process Management of Scientific and Technical Journals)
- Wenkang An
(Beijing Renhe Information Technology Co., Ltd.)
- Hao Wang
(Beijing Renhe Information Technology Co., Ltd.)
- Zhen Yin
(Beijing Renhe Information Technology Co., Ltd.
Key Laboratory of Digital Publishing and Total Process Management of Scientific and Technical Journals)
Abstract
The exponential growth of scientific literature presents a significant challenge for researchers to efficiently access and synthesize key information. Automatic summarization techniques have become essential for addressing this issue, enabling researchers to quickly grasp core content and key findings. However, the complexity and domain-specific nature of scientific texts demand high accuracy and contextual depth, which remain challenging for existing summarization models. This paper introduces a hierarchical summarization framework that integrates contrastive learning, document section classification, customized prompt-based summarization, and Chain-of-Thought (CoT) structured reasoning. Our approach first utilizes contrastive learning to enhance section classification, ensuring accurate content segmentation. Based on this classification, section-specific prompts are designed to generate targeted summaries, which are subsequently refined and aggregated through a CoT-based reasoning process to improve coherence and informativeness. We evaluate our method on the Sci-Summary dataset, comprising 20,000 scientific articles across multiple disciplines and languages. Experimental results demonstrate that our approach outperforms state-of-the-art baseline models, achieving notable improvements in ROUGE scores, BertScore, and evaluation using GPT-4o models (G-Eval). Furthermore, the results highlight the framework’s ability to preserve factual accuracy, enhance coherence, and improve the interpretability of generated summaries. These findings underscore the potential of our method in advancing scientific literature summarization, offering a scalable and effective solution for automated knowledge extraction in research domains.
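The abstract describes a three-stage pipeline: classify each section of a paper, generate a section-specific summarization prompt, then aggregate the per-section summaries with Chain-of-Thought reasoning. The sketch below is purely illustrative and not the authors' implementation (the full text is restricted); the keyword-based classifier stands in for their contrastive-learning model, and all templates and function names are hypothetical.

```python
# Illustrative sketch of the hierarchical pipeline from the abstract:
# (1) classify each section, (2) build a section-specific prompt,
# (3) chain the per-section summaries into a CoT aggregation prompt.
# The keyword scorer is a toy stand-in for the contrastive classifier.

SECTION_KEYWORDS = {
    "introduction": ["motivation", "background", "we propose"],
    "methods": ["algorithm", "dataset", "training", "model"],
    "results": ["accuracy", "rouge", "outperforms", "table"],
    "conclusion": ["future work", "in summary", "we conclude"],
}

PROMPT_TEMPLATES = {
    "introduction": "Summarize the problem and motivation:\n{text}",
    "methods": "Summarize the proposed method step by step:\n{text}",
    "results": "Summarize the key findings and metrics:\n{text}",
    "conclusion": "Summarize the conclusions and implications:\n{text}",
}

def classify_section(text: str) -> str:
    """Toy section classifier: score each label by keyword hits."""
    lowered = text.lower()
    scores = {
        label: sum(kw in lowered for kw in kws)
        for label, kws in SECTION_KEYWORDS.items()
    }
    return max(scores, key=scores.get)

def build_prompts(sections):
    """Pair each section with the prompt template for its predicted label."""
    prompts = []
    for section in sections:
        label = classify_section(section)
        prompts.append((label, PROMPT_TEMPLATES[label].format(text=section)))
    return prompts

def build_cot_aggregation_prompt(section_summaries):
    """CoT-style aggregation: reason over each part, then unify."""
    steps = "\n".join(
        f"Step {i}: {s}" for i, s in enumerate(section_summaries, 1)
    )
    return (
        "Think through each section summary in order, then write one "
        f"coherent overall summary:\n{steps}\nFinal summary:"
    )
```

In a full system, `classify_section` would be replaced by the contrastively trained classifier and the prompts sent to an LLM; here the functions only show how the pieces compose.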
Suggested Citation
Yu Feng & Wenkang An & Hao Wang & Zhen Yin, 2025.
"Enhancing scientific literature summarization via contrastive learning and chain-of-thought prompting,"
Scientometrics, Springer; Akadémiai Kiadó, vol. 130(8), pages 4773-4799, August.
Handle:
RePEc:spr:scient:v:130:y:2025:i:8:d:10.1007_s11192-025-05397-w
DOI: 10.1007/s11192-025-05397-w
Download full text from publisher
As access to this document is restricted, you may want to search for a different version of it.
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:spr:scient:v:130:y:2025:i:8:d:10.1007_s11192-025-05397-w. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.
We have no bibliographic references for this item. You can help add them by using this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic, or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing (email available below). General contact details of provider: http://www.springer.com .
Please note that corrections may take a couple of weeks to filter through the various RePEc services.