Author
Yuyan Wang & Pan Li & Minmin Chen
Abstract
Modern recommender systems use machine learning (ML) models to predict consumer preferences based on consumption history. Although these “black-box” models achieve impressive predictive performance, they often suffer from a lack of transparency and explainability. While explainable AI research suggests a tradeoff between explainability and predictive performance, we demonstrate that combining large language models (LLMs) with deep neural networks (DNNs) can improve both. We propose LR-Recsys, which augments state-of-the-art DNN-based recommender systems with LLMs’ reasoning capabilities. LR-Recsys introduces a contrastive-explanation generator that leverages LLMs to produce human-readable positive explanations (why a consumer might like a product) and negative explanations (why they might not). These explanations are embedded via a fine-tuned AutoEncoder and combined with consumer and product features as inputs to the DNN to produce the final predictions. Beyond offering explainability, LR-Recsys also improves learning efficiency and predictive accuracy. To understand why, we provide insights using high-dimensional multi-environment learning theory. Statistically, we show that LLMs are equipped with better knowledge of the important variables driving consumer decision-making, and that incorporating such knowledge can improve the learning efficiency of ML models. Extensive experiments on three real-world recommendation datasets demonstrate that the proposed LR-Recsys framework consistently outperforms state-of-the-art black-box and explainable recommender systems, achieving a 3–14% improvement in predictive performance. This performance gain could translate into millions of dollars in annual revenue if deployed on leading content recommendation platforms today. Our additional analysis confirms that these gains come mainly from LLMs’ strong reasoning capabilities rather than from their external domain knowledge or summarization skills. LR-Recsys thus offers an effective approach to combining LLMs with traditional DNNs, two of the most widely used ML models today: we show that LLMs can improve both the explainability and the predictive performance of traditional DNNs through their reasoning capability. Beyond improving recommender systems, our findings highlight the value of contrastive explanations for understanding consumer preferences and for guiding the managerial strategies of online platforms. These explanations provide actionable insights for consumers, sellers, and platforms, helping to build trust, optimize product offerings, and inform targeting strategies.
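To make the pipeline described in the abstract concrete, the following is a minimal PyTorch sketch of how such an architecture could be wired together: LLM-generated positive and negative explanation embeddings are compressed by an autoencoder and concatenated with consumer and product features before a DNN produces the preference score. All class names, dimensions, and layer choices here (ExplanationAutoEncoder, LRRecsysScorer, 768-dimensional text embeddings, the MLP sizes) are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only -- the paper's actual architecture, fine-tuning
# procedure, and training objective may differ.
import torch
import torch.nn as nn


class ExplanationAutoEncoder(nn.Module):
    """Compresses LLM-generated explanation text embeddings into a compact latent space."""

    def __init__(self, text_dim: int = 768, latent_dim: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(text_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim)
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, text_dim)
        )

    def forward(self, x):
        z = self.encoder(x)           # latent explanation embedding
        return z, self.decoder(z)     # reconstruction used for autoencoder training


class LRRecsysScorer(nn.Module):
    """DNN scoring a (consumer, product) pair from features plus embeddings of
    the positive ("why they might like it") and negative ("why they might not")
    contrastive explanations."""

    def __init__(self, feat_dim: int, latent_dim: int = 64):
        super().__init__()
        self.autoencoder = ExplanationAutoEncoder(latent_dim=latent_dim)
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 2 * latent_dim, 128), nn.ReLU(), nn.Linear(128, 1)
        )

    def forward(self, consumer_product_feats, pos_expl_emb, neg_expl_emb):
        z_pos, _ = self.autoencoder(pos_expl_emb)
        z_neg, _ = self.autoencoder(neg_expl_emb)
        x = torch.cat([consumer_product_feats, z_pos, z_neg], dim=-1)
        return torch.sigmoid(self.mlp(x)).squeeze(-1)  # predicted preference score


# Toy usage: a batch of 4 pairs with 32-dim features and 768-dim explanation embeddings.
scores = LRRecsysScorer(feat_dim=32)(
    torch.randn(4, 32), torch.randn(4, 768), torch.randn(4, 768)
)
```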
Suggested Citation
Wang, Yuyan & Li, Pan & Chen, Minmin, 2025. "The Blessing of Reasoning: LLM-Based Contrastive Explanations in Black-Box Recommender Systems," Research Papers 4234, Stanford University, Graduate School of Business.
Handle: RePEc:ecl:stabus:4234