Authors
- Sayed Akif Hussain
- Chen Qiu-shi
- Syed Amer Hussain
- Syed Atif Hussain
- Asma Komal
- Muhammad Imran Khalid
Abstract
This study proposes a novel hybrid deep learning framework that integrates a Large Language Model (LLM) with a Transformer architecture for stock price forecasting. The research addresses a critical theoretical gap in existing approaches that empirically combine textual and numerical data without a formal understanding of their interaction mechanisms. We conceptualise a prompt-based LLM as a mathematically defined signal generator, capable of extracting directional market sentiment and an associated confidence score from financial news. These signals are then dynamically fused with structured historical price features through a noise-robust gating mechanism, enabling the Transformer to adaptively weigh semantic and quantitative information. Empirical evaluations demonstrate that the proposed Hybrid LLM-Transformer model significantly outperforms a Vanilla Transformer baseline, reducing the Root Mean Squared Error (RMSE) by 5.28% (p = 0.003). Moreover, ablation and robustness analyses confirm the model's stability under noisy conditions and its capacity to maintain interpretability through confidence-weighted attention. The findings provide both theoretical and empirical support for a paradigm shift from empirical observation to formalised modelling of LLM-Transformer interactions, paving the way toward explainable, noise-resilient, and semantically enriched financial forecasting systems.
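The abstract describes, but does not formally specify, the confidence-weighted gating step that fuses the LLM's signals with the price features. The sketch below is one plausible minimal reading in PyTorch: the LLM's (direction, confidence) pair and the price features are each projected into the model dimension, and a learned sigmoid gate, scaled by the confidence score, controls how much of the semantic signal is mixed into the Transformer's input. All names, dimensions, and the specific gating form (ConfidenceGatedFusion, price_dim, d_model) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ConfidenceGatedFusion(nn.Module):
    """Illustrative confidence-weighted gate blending LLM sentiment with price features.

    Hypothetical sketch: the paper's exact formulation is not given in the abstract.
    """

    def __init__(self, price_dim: int, d_model: int):
        super().__init__()
        self.price_proj = nn.Linear(price_dim, d_model)  # project numeric price features
        self.sent_proj = nn.Linear(2, d_model)           # (direction, confidence) -> d_model
        self.gate = nn.Sequential(nn.Linear(2 * d_model, d_model), nn.Sigmoid())

    def forward(self, price_feats, direction, confidence):
        # price_feats: (batch, seq, price_dim); direction, confidence: (batch, seq, 1)
        p = self.price_proj(price_feats)
        s = self.sent_proj(torch.cat([direction, confidence], dim=-1))
        g = self.gate(torch.cat([p, s], dim=-1))
        # Scaling the gate by the LLM's confidence down-weights the semantic
        # channel on low-confidence (noisy) news, so it cannot swamp the
        # quantitative features -- one way to realise "noise-robust" gating.
        return p + confidence * g * s

# Minimal usage with dummy tensors.
fusion = ConfidenceGatedFusion(price_dim=5, d_model=64)
price = torch.randn(8, 30, 5)                   # 30-day window of 5 price features
direction = torch.tanh(torch.randn(8, 30, 1))   # LLM sentiment signal in [-1, 1]
conf = torch.sigmoid(torch.randn(8, 30, 1))     # LLM confidence score in [0, 1]
fused = fusion(price, direction, conf)          # (8, 30, 64), input to a Transformer encoder
```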
Suggested Citation
Sayed Akif Hussain & Chen Qiu-shi & Syed Amer Hussain & Syed Atif Hussain & Asma Komal & Muhammad Imran Khalid, 2026. "Improving Financial Forecasting with a Synergistic LLM-Transformer Architecture: A Hybrid Approach to Stock Price Prediction," Papers 2601.02878, arXiv.org.
Handle: RePEc:arx:papers:2601.02878