
Construction of Domain-specified Japanese Large Language Model for Finance through Continual Pre-training

Authors

Listed:
  • Masanori Hirano
  • Kentaro Imajo

Abstract

Large language models (LLMs) are now widely used in various fields, including finance. However, no Japanese financial-specific LLM has been proposed yet. Hence, this study aims to construct a Japanese financial-specific LLM through continual pre-training. Before tuning, we constructed Japanese financial-focused datasets for continual pre-training. As the base model, we employed a Japanese LLM that achieved state-of-the-art performance on Japanese financial benchmarks among models in the 10-billion-parameter class. After continual pre-training on these datasets, the tuned model outperformed the original model on the Japanese financial benchmarks. Moreover, a comparison of outputs reveals that the tuned model's answers tend to be better than the original model's in both quality and length. These findings indicate that domain-specific continual pre-training is effective for LLMs as well. The tuned model is publicly available on Hugging Face.

Suggested Citation

  • Masanori Hirano & Kentaro Imajo, 2024. "Construction of Domain-specified Japanese Large Language Model for Finance through Continual Pre-training," Papers 2404.10555, arXiv.org.
  • Handle: RePEc:arx:papers:2404.10555

    Download full text from publisher

    File URL: http://arxiv.org/pdf/2404.10555
    File Function: Latest version
    Download Restriction: no


    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Thanos Konstantinidis & Giorgos Iacovides & Mingxue Xu & Tony G. Constantinides & Danilo Mandic, 2024. "FinLlama: Financial Sentiment Classification for Algorithmic Trading Applications," Papers 2403.12285, arXiv.org.
    2. Yinheng Li & Shaofei Wang & Han Ding & Hang Chen, 2023. "Large Language Models in Finance: A Survey," Papers 2311.10723, arXiv.org.
    3. Lezhi Li & Ting-Yu Chang & Hai Wang, 2023. "Multimodal Gen-AI for Fundamental Investment Research," Papers 2401.06164, arXiv.org.
    4. Xiao-Yang Liu & Guoxuan Wang & Hongyang Yang & Daochen Zha, 2023. "FinGPT: Democratizing Internet-scale Data for Financial Large Language Models," Papers 2307.10485, arXiv.org, revised Nov 2023.
    5. Frank Xing, 2024. "Designing Heterogeneous LLM Agents for Financial Sentiment Analysis," Papers 2401.05799, arXiv.org.
    6. Seppälä, Timo & Mucha, Tomasz & Mattila, Juri, 2023. "Beyond AI, Blockchain Systems, and Digital Platforms: Digitalization Unlocks Mass Hyper-Personalization and Mass Servitization," ETLA Working Papers 106, The Research Institute of the Finnish Economy.
7. Zhaofeng Zhang & Banghao Chen & Shengxin Zhu & Nicolas Langrené, 2024. "From attention to profit: quantitative trading strategy based on transformer," Papers 2404.00424, arXiv.org.
    8. Wentao Zhang & Lingxuan Zhao & Haochong Xia & Shuo Sun & Jiaze Sun & Molei Qin & Xinyi Li & Yuqing Zhao & Yilei Zhao & Xinyu Cai & Longtao Zheng & Xinrun Wang & Bo An, 2024. "A Multimodal Foundation Agent for Financial Trading: Tool-Augmented, Diversified, and Generalist," Papers 2402.18485, arXiv.org, revised Feb 2024.
    9. Haoqiang Kang & Xiao-Yang Liu, 2023. "Deficiency of Large Language Models in Finance: An Empirical Examination of Hallucination," Papers 2311.15548, arXiv.org.
    10. Mamalis, Marios & Kalampokis, Evangelos & Karamanou, Areti & Brimos, Petros & Tarabanis, Konstantinos, 2023. "Can Large Language Models Revolutionalize Open Government Data Portals? A Case of Using ChatGPT in statistics.gov.scot," OSF Preprints 9b35z, Center for Open Science.
    11. Claudia Biancotti & Carolina Camassa, 2023. "Loquacity and visible emotion: ChatGPT as a policy advisor," Questioni di Economia e Finanza (Occasional Papers) 814, Bank of Italy, Economic Research and International Relations Area.
    12. Zihan Dong & Xinyu Fan & Zhiyuan Peng, 2024. "FNSPID: A Comprehensive Financial News Dataset in Time Series," Papers 2402.06698, arXiv.org.
    13. Alejandro Lopez-Lira & Yuehua Tang, 2023. "Can ChatGPT Forecast Stock Price Movements? Return Predictability and Large Language Models," Papers 2304.07619, arXiv.org, revised Sep 2023.
    14. Hongyang Yang & Xiao-Yang Liu & Christina Dan Wang, 2023. "FinGPT: Open-Source Financial Large Language Models," Papers 2306.06031, arXiv.org.
    15. Eric Fischer & Rebecca McCaughrin & Saketh Prazad & Mark Vandergon, 2023. "Fed Transparency and Policy Expectation Errors: A Text Analysis Approach," Staff Reports 1081, Federal Reserve Bank of New York.
    16. Dat Mai, 2024. "StockGPT: A GenAI Model for Stock Prediction and Trading," Papers 2404.05101, arXiv.org, revised Apr 2024.

