Author
Listed:
- Shen, Haijie
- Devaraj, Madhavi
Abstract
This paper investigates the integration of Retrieval-Augmented Generation (RAG) systems with Large Language Models (LLMs) to improve the explainability, reliability, and adaptability of sentiment analysis across various domains, including finance, healthcare, education, and customer service. Traditional LLMs, while powerful in many general tasks, often struggle to meet the specific requirements of sentiment analysis in domain-specific contexts. The proposed framework aims to address these challenges by combining RAG with LLMs, enhancing their capacity to generate more accurate and explainable sentiment interpretations. A key innovation of the framework is its incorporation of knowledge graphs, which significantly improve the interpretability of sentiment analysis results. This ensures that the analysis not only reflects the sentiment of the text but also provides a clear rationale behind the model's conclusions. Furthermore, the paper discusses critical challenges such as data quality, adversarial robustness, and privacy protection, which are particularly important when applying sentiment analysis models to sensitive domains. Data quality is a concern due to the potential for noise and bias in training data, which can negatively impact model performance. Adversarial robustness refers to the model's ability to withstand manipulative inputs designed to deceive the system, a crucial aspect in applications like financial forecasting and healthcare diagnostics. Privacy protection is equally important, as these models often handle sensitive personal data, making it essential to integrate mechanisms that ensure compliance with data privacy regulations and safeguard user confidentiality. Experimental results demonstrate that the RAG-based approach not only improves the transparency of sentiment analysis models but also maintains high accuracy in identifying and interpreting sentiment across different domains. 
The findings indicate that this method is especially beneficial for creating trustworthy AI systems capable of explaining their decisions in a way that is understandable to humans. This makes the RAG-based framework a key technology for building AI applications that are both transparent and reliable. By addressing the most pressing issues surrounding model explainability and robustness, this approach sets the stage for more responsible deployment of AI systems in real-world applications, ensuring they meet both technical and ethical standards. Ultimately, the integration of RAG with LLMs holds great promise for enhancing the transparency, fairness, and effectiveness of AI-driven sentiment analysis, providing a more solid foundation for AI deployment in critical sectors where trust and accountability are paramount.
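The paper itself does not publish an implementation, but the retrieval-then-generate pattern it describes can be illustrated with a minimal, self-contained sketch. Everything here is an assumption for illustration: the tiny in-memory knowledge base, the bag-of-words retriever (a stand-in for the paper's knowledge-graph retrieval), and the helper names `retrieve` and `build_prompt` are all hypothetical, and the final prompt would be sent to an LLM in a real system.

```python
# Illustrative sketch of retrieval-augmented sentiment prompting.
# The knowledge base, retriever, and function names are assumptions,
# not the authors' implementation.
from collections import Counter

KNOWLEDGE_BASE = [
    "In finance, 'bearish' expresses a negative outlook on asset prices.",
    "In healthcare reviews, 'adverse event' signals a negative experience.",
    "In education feedback, 'engaging' usually conveys positive sentiment.",
]

def tokenize(text):
    # Lowercase and strip simple punctuation for word matching.
    return [t.lower().strip(".,'") for t in text.split()]

def score(query, doc):
    # Bag-of-words overlap as a crude stand-in for a real retriever
    # (the paper's framework uses knowledge graphs instead).
    q, d = Counter(tokenize(query)), Counter(tokenize(doc))
    return sum((q & d).values())

def retrieve(query, k=1):
    # Return the k knowledge-base entries most similar to the query.
    ranked = sorted(KNOWLEDGE_BASE, key=lambda doc: score(query, doc),
                    reverse=True)
    return ranked[:k]

def build_prompt(text):
    # Prepend retrieved domain knowledge so the LLM can ground and
    # explain its sentiment judgment, as the abstract describes.
    context = "\n".join(retrieve(text))
    return (f"Context:\n{context}\n\n"
            f"Classify the sentiment of the following text and explain "
            f"your reasoning, citing the context:\n{text}")

prompt = build_prompt("Analysts turned bearish on the stock after earnings.")
print(prompt)
```

Because the retrieved context names the domain-specific cue ("bearish" as a negative finance term), the downstream LLM can point to it when justifying its classification, which is the explainability mechanism the abstract emphasizes.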
Suggested Citation
Shen, Haijie & Devaraj, Madhavi, 2025.
"Research on the Application of RAG and LLMs in Explainable Sentiment Analysis,"
GBP Proceedings Series, Scientific Open Access Publishing, vol. 17, pages 403-412.
Handle:
RePEc:axf:gbppsa:v:17:y:2025:i::p:403-412