Abstract
To address the feature sparsity, weak generalization, and high computational cost that traditional recommendation systems face in few-shot cold-start scenarios, this paper proposes LLM-RecLite, a lightweight recommendation algorithm based on large language models (LLMs). As digital platforms increasingly rely on personalized content delivery, mitigating the cold-start problem remains critical for user retention. LLM-RecLite first performs domain adaptation of a lightweight LLM with parameter-efficient fine-tuning, specifically QLoRA, bridging the semantic gap between general-purpose linguistic representations and the recommendation task without prohibitive training cost. Second, a hierarchical prompt template integrates historical user-item interactions with content features, enabling semantic reasoning under strictly few-shot conditions. Finally, a knowledge distillation mechanism transfers the reasoning capability of the larger model to a much smaller inference model, so that the system meets the low-latency requirements of real-time recommendation. Experiments on two public datasets, MovieLens-1M and Amazon Beauty, show that, compared with traditional cold-start algorithms and mainstream LLM-based recommendation frameworks, LLM-RecLite improves NDCG@10 by 18.3% and 9.7%, respectively, while increasing inference speed by 4.2 times. The approach balances recommendation accuracy and computational efficiency, offering a feasible and scalable solution for few-shot cold-start recommendation in resource-constrained industrial settings.
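The first component described above, QLoRA-based domain adaptation of a lightweight LLM, can be sketched with the Hugging Face transformers and peft libraries. This is a minimal illustration only: the backbone model name, target modules, and LoRA hyperparameters below are assumptions, since the abstract does not disclose the exact configuration used by LLM-RecLite.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Hypothetical lightweight backbone; the paper's actual base model is not specified.
BASE_MODEL = "Qwen/Qwen2-1.5B-Instruct"

# 4-bit NF4 quantization keeps the frozen backbone small during adaptation (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# Low-rank adapters are the only trainable parameters; rank and target modules are illustrative.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

In a setup like this, only the low-rank adapter weights are updated during fine-tuning, which is what keeps the domain-adaptation cost low relative to full fine-tuning of the backbone.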