
One More Question is Enough: Expert Question Decomposition (EQD) Model for Domain Quantitative Reasoning

Authors

Listed:
  • Mengyu Wang
  • Sotirios Sabanis
  • Miguel de Carvalho
  • Shay B. Cohen
  • Tiejun Ma

Abstract

Domain-specific quantitative reasoning remains a major challenge for large language models (LLMs), especially in fields requiring expert knowledge and complex question answering (QA). In this work, we propose Expert Question Decomposition (EQD), an approach designed to balance the use of domain knowledge with computational efficiency. EQD is built on a two-step fine-tuning framework and guided by a reward function that measures the effectiveness of generated sub-questions in improving QA outcomes. It requires only a few thousand training examples and a single A100 GPU for fine-tuning, with inference time comparable to zero-shot prompting. Beyond its efficiency, EQD outperforms state-of-the-art domain-tuned models and advanced prompting strategies. We evaluate EQD in the financial domain, characterized by specialized knowledge and complex quantitative reasoning, across four benchmark datasets. Our method consistently improves QA performance by 0.6% to 10.5% across different LLMs. Our analysis reveals an important insight: in domain-specific QA, a single supporting question often provides greater benefit than detailed guidance steps.
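The abstract's pipeline can be sketched in a few lines: a decomposer model proposes a single supporting sub-question, that sub-question is answered first, and its answer is supplied as context when answering the original question. This is a minimal illustration only; the `ask` function is a hypothetical stand-in for a real LLM call, and the paper's actual prompts, two-step fine-tuning, and reward function are not reproduced here.

```python
# EQD-style inference sketch (assumptions: `ask` is a placeholder for an
# LLM call; the real system uses a fine-tuned decomposer and tuned prompts).

def ask(prompt: str) -> str:
    """Stand-in for an LLM call; replace with a real model/API client."""
    return f"[model answer to: {prompt[:40]}...]"

def eqd_answer(question: str) -> str:
    # Step 1: the decomposer proposes ONE supporting sub-question,
    # rather than a full chain of detailed guidance steps.
    sub_q = ask(f"Propose one supporting question that would help answer: {question}")
    # Step 2: answer the supporting sub-question first.
    sub_a = ask(sub_q)
    # Step 3: answer the original question, conditioned on the sub-answer.
    return ask(f"Context: {sub_q} -> {sub_a}\nNow answer: {question}")

print(eqd_answer("What is the company's net profit margin for FY2023?"))
```

With a real model behind `ask`, the only extra cost over zero-shot prompting is two additional short generations, which is consistent with the abstract's claim of near zero-shot inference time.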

Suggested Citation

  • Mengyu Wang & Sotirios Sabanis & Miguel de Carvalho & Shay B. Cohen & Tiejun Ma, 2025. "One More Question is Enough: Expert Question Decomposition (EQD) Model for Domain Quantitative Reasoning," Papers 2510.01526, arXiv.org.
  • Handle: RePEc:arx:papers:2510.01526

    Download full text from publisher

    File URL: http://arxiv.org/pdf/2510.01526
    File Function: Latest version
    Download Restriction: no


    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Xuewen Han & Neng Wang & Shangkun Che & Hongyang Yang & Kunpeng Zhang & Sean Xin Xu, 2024. "Enhancing Investment Analysis: Optimizing AI-Agent Collaboration in Financial Research," Papers 2411.04788, arXiv.org.
    2. Liyuan Chen & Shuoling Liu & Jiangpeng Yan & Xiaoyu Wang & Henglin Liu & Chuang Li & Kecheng Jiao & Jixuan Ying & Yang Veronica Liu & Qiang Yang & Xiu Li, 2025. "Advancing Financial Engineering with Foundation Models: Progress, Applications, and Challenges," Papers 2507.18577, arXiv.org.
    3. Qilong Wu & Xiaoneng Xiang & Hejia Huang & Xuan Wang & Yeo Wei Jie & Ranjan Satapathy & Ricardo Shirota Filho & Bharadwaj Veeravalli, 2024. "SusGen-GPT: A Data-Centric LLM for Financial NLP and Sustainability Report Generation," Papers 2412.10906, arXiv.org.
    4. Jiaxin Liu & Yixuan Tang & Yi Yang & Kar Yan Tam, 2025. "Evaluating and Aligning Human Economic Risk Preferences in LLMs," Papers 2503.06646, arXiv.org, revised Sep 2025.
    5. Chiu, I-Chan & Hung, Mao-Wei, 2025. "Finance-specific large language models: Advancing sentiment analysis and return prediction with LLaMA 2," Pacific-Basin Finance Journal, Elsevier, vol. 90(C).
    6. Lars Hornuf & David J. Streich & Niklas Töllich, 2025. "Making GenAI Smarter: Evidence from a Portfolio Allocation Experiment," CESifo Working Paper Series 11862, CESifo.
    7. Yixuan Liang & Yuncong Liu & Neng Wang & Hongyang Yang & Boyu Zhang & Christina Dan Wang, 2024. "FinGPT: Enhancing Sentiment-Based Stock Movement Prediction with Dissemination-Aware and Context-Enriched LLMs," Papers 2412.10823, arXiv.org, revised Jun 2025.
    8. Haohang Li & Yupeng Cao & Yangyang Yu & Shashidhar Reddy Javaji & Zhiyang Deng & Yueru He & Yuechen Jiang & Zining Zhu & Koduvayur Subbalakshmi & Guojun Xiong & Jimin Huang & Lingfei Qian & Xueqing Pe, 2024. "INVESTORBENCH: A Benchmark for Financial Decision-Making Tasks with LLM-based Agent," Papers 2412.18174, arXiv.org.
    9. Yuqi Nie & Yaxuan Kong & Xiaowen Dong & John M. Mulvey & H. Vincent Poor & Qingsong Wen & Stefan Zohren, 2024. "A Survey of Large Language Models for Financial Applications: Progress, Prospects and Challenges," Papers 2406.11903, arXiv.org.
    10. Zhizhuo Kou & Holam Yu & Junyu Luo & Jingshu Peng & Xujia Li & Chengzhong Liu & Juntao Dai & Lei Chen & Sirui Han & Yike Guo, 2024. "Automate Strategy Finding with LLM in Quant Investment," Papers 2409.06289, arXiv.org, revised Nov 2025.
    11. Shijie Han & Jingshu Zhang & Yiqing Shen & Kaiyuan Yan & Hongguang Li, 2025. "FinSphere, a Real-Time Stock Analysis Agent Powered by Instruction-Tuned LLMs and Domain Tools," Papers 2501.12399, arXiv.org, revised Jul 2025.
    12. Christian Fieberg & Lars Hornuf & Maximilian Meiler & David J. Streich, 2025. "Using Large Language Models for Financial Advice," CESifo Working Paper Series 11666, CESifo.
    13. David Kuo Chuen Lee & Chong Guan & Yinghui Yu & Qinxu Ding, 2024. "A Comprehensive Review of Generative AI in Finance," FinTech, MDPI, vol. 3(3), pages 1-19, September.
    14. Qianggang Ding & Haochen Shi & Jiadong Guo & Bang Liu, 2024. "TradExpert: Revolutionizing Trading with Mixture of Expert LLMs," Papers 2411.00782, arXiv.org, revised May 2025.
    15. Fernando Spadea & Oshani Seneviratne, 2025. "Aligning Language Models with Investor and Market Behavior for Financial Recommendations," Papers 2510.15993, arXiv.org.
    16. Zhiyu Cao & Zachary Feinstein, 2024. "Large Language Model in Financial Regulatory Interpretation," Papers 2405.06808, arXiv.org, revised Jul 2024.
    17. Congluo Xu & Zhaobin Liu & Ziyang Li, 2025. "FinArena: A Human-Agent Collaboration Framework for Financial Market Analysis and Forecasting," Papers 2503.02692, arXiv.org.
    18. Jean Lee & Nicholas Stevens & Soyeon Caren Han & Minseok Song, 2024. "A Survey of Large Language Models in Finance (FinLLMs)," Papers 2402.02315, arXiv.org.
    19. Baptiste Lefort & Eric Benhamou & Beatrice Guez & Jean-Jacques Ohana & Ethan Setrouk & Alban Etienne, 2025. "FinMarBa: A Market-Informed Dataset for Financial Sentiment Classification," Papers 2507.22932, arXiv.org.
    20. Ali Elahi & Fatemeh Taghvaei, 2024. "Combining Financial Data and News Articles for Stock Price Movement Prediction Using Large Language Models," Papers 2411.01368, arXiv.org.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:arx:papers:2510.01526. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact the arXiv administrators. General contact details of provider: http://arxiv.org/.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.