Printed from https://ideas.repec.org/a/dba/jsppaa/v2y2026i3p38-49.html

Comparative Empirical Evaluation of Hallucination Mitigation Strategies in LLM-Based Text Generation

Authors

Listed:
  • Xu, Shuyang
  • Li, Minhao
  • Zhao, Fanyi

Abstract

Large language models (LLMs) have achieved remarkable performance across natural language tasks, yet their tendency to generate factually incorrect content, commonly termed hallucination, remains a critical barrier to deployment in high-stakes domains. Two dominant families of mitigation strategies have emerged: retrieval-augmented generation (RAG) approaches that ground outputs in external knowledge, and prompting-based approaches that leverage self-verification without external retrieval. While both families have demonstrated promising results individually, no systematic comparative evaluation exists across standardized benchmarks under unified conditions. This paper presents a comparative empirical analysis of hallucination mitigation strategies spanning four RAG variants (Naive RAG, Self-RAG, Corrective RAG, FLARE) and three prompting-based methods (Chain-of-Verification, self-consistency decoding, self-contradiction detection) evaluated on five public benchmarks: TruthfulQA, HaluEval, FActScore, FELM, and RAGBench. Drawing exclusively from published experimental results, the analysis reveals that advanced RAG strategies achieve 10-25 percentage-point improvements in factual precision over naive baselines, while prompting-based methods offer competitive performance on reasoning-intensive tasks without retrieval infrastructure. Task-dependent performance patterns emerge: knowledge-intensive factoid tasks favor retrieval augmentation, whereas logical consistency tasks benefit from self-verification prompting. A practical decision matrix is derived to guide practitioners in selecting appropriate strategies based on task characteristics and resource constraints.
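The task-dependent pattern reported in the abstract (retrieval augmentation for knowledge-intensive factoid tasks, self-verification prompting for logical-consistency tasks, prompting as the fallback when retrieval infrastructure is unavailable) can be sketched as a toy decision rule. This is a minimal hypothetical illustration of the kind of decision matrix the paper derives; the task-category names, the function name, and the retrieval-infrastructure fallback are assumptions, not the paper's actual matrix.

```python
def recommend_strategy(task_type: str, has_retrieval_infra: bool) -> str:
    """Toy decision rule distilled from the abstract's reported patterns.

    All category names and the fallback ordering here are assumptions
    for illustration, not the paper's published decision matrix.
    """
    # Knowledge-intensive factoid tasks favor retrieval augmentation,
    # provided retrieval infrastructure is available.
    if task_type == "knowledge_intensive_factoid" and has_retrieval_infra:
        return "advanced RAG (e.g. Self-RAG, Corrective RAG)"
    # Logical-consistency tasks benefit from self-verification prompting.
    if task_type == "logical_consistency":
        return "self-verification prompting (e.g. Chain-of-Verification)"
    # Without retrieval infrastructure, prompting-based methods remain
    # competitive on reasoning-intensive tasks, per the abstract.
    return "prompting-based mitigation (e.g. self-consistency decoding)"

print(recommend_strategy("knowledge_intensive_factoid", True))
print(recommend_strategy("logical_consistency", False))
```

A real decision matrix would also weigh resource constraints such as latency budgets and index maintenance cost, which the abstract names only at a high level.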

Suggested Citation

  • Xu, Shuyang & Li, Minhao & Zhao, Fanyi, 2026. "Comparative Empirical Evaluation of Hallucination Mitigation Strategies in LLM-Based Text Generation," Journal of Sustainability, Policy, and Practice, Pinnacle Academic Press, vol. 2(3), pages 38-49.
  • Handle: RePEc:dba:jsppaa:v:2:y:2026:i:3:p:38-49

    Download full text from publisher

    File URL: https://pinnaclepubs.com/index.php/jspp/article/view/710/683
    Download Restriction: no


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:dba:jsppaa:v:2:y:2026:i:3:p:38-49. See general information about how to correct material in RePEc.


    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Joseph Clark. General contact details of provider: https://pinnaclepubs.com/index.php/JSPP.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.