
AlphaAgents: Large Language Model based Multi-Agents for Equity Portfolio Constructions

Author

Listed:
  • Tianjiao Zhao
  • Jingrao Lyu
  • Stokes Jones
  • Harrison Garber
  • Stefano Pasquali
  • Dhagash Mehta

Abstract

The field of artificial intelligence (AI) agents is evolving rapidly, driven by the capabilities of Large Language Models (LLMs) to autonomously perform and refine tasks with human-like efficiency and adaptability. In this context, multi-agent collaboration has emerged as a promising approach, enabling multiple AI agents to work together to solve complex challenges. This study investigates the application of role-based multi-agent systems to support stock selection in equity research and portfolio management. We present a comprehensive analysis performed by a team of specialized agents and evaluate their stock-picking performance against established benchmarks under varying levels of risk tolerance. Furthermore, we examine the advantages and limitations of employing multi-agent frameworks in equity analysis, offering critical insights into their practical efficacy and implementation challenges.
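The abstract describes a role-based setup in which specialized agents each analyze a stock and a coordinating step aggregates their views under a given risk tolerance. As a rough illustration only, the Python sketch below shows one way such a pipeline could be organized; the agent roles (fundamental, sentiment), the scoring scale, and the disagreement-penalizing aggregation rule are assumptions made for this example and are not taken from the paper.

# Hypothetical sketch of a role-based multi-agent stock-selection loop.
# Roles, scoring, and risk-tolerance handling are illustrative assumptions,
# not the paper's actual implementation.
from dataclasses import dataclass
from statistics import mean
from typing import Callable, Dict, List

@dataclass
class AgentOpinion:
    agent: str      # role name, e.g. "fundamental"
    ticker: str
    score: float    # in [-1, 1]: -1 = strong sell, +1 = strong buy
    rationale: str

def fundamental_agent(ticker: str, data: Dict) -> AgentOpinion:
    # Placeholder logic; in practice this role would prompt an LLM with filings/financials.
    score = 0.5 if data.get("pe_ratio", 30) < 20 else -0.2
    return AgentOpinion("fundamental", ticker, score, "valuation screen on P/E")

def sentiment_agent(ticker: str, data: Dict) -> AgentOpinion:
    # Placeholder logic; in practice this role would summarize news/transcripts via an LLM.
    score = data.get("news_sentiment", 0.0)
    return AgentOpinion("sentiment", ticker, score, "aggregate news sentiment")

def coordinator(opinions: List[AgentOpinion], risk_tolerance: float) -> float:
    # A risk-averse setting (low risk_tolerance) down-weights stocks on which
    # the agents disagree; this aggregation rule is an assumption for illustration.
    scores = [o.score for o in opinions]
    disagreement = max(scores) - min(scores)
    return mean(scores) - (1.0 - risk_tolerance) * disagreement

def pick_stocks(universe: Dict[str, Dict],
                agents: List[Callable[[str, Dict], AgentOpinion]],
                risk_tolerance: float,
                top_n: int = 2) -> List[str]:
    # Rank every ticker by the coordinator's aggregated score and keep the top picks.
    ranked = sorted(
        universe,
        key=lambda t: coordinator([a(t, universe[t]) for a in agents], risk_tolerance),
        reverse=True,
    )
    return ranked[:top_n]

if __name__ == "__main__":
    universe = {
        "AAA": {"pe_ratio": 15, "news_sentiment": 0.4},
        "BBB": {"pe_ratio": 35, "news_sentiment": 0.7},
        "CCC": {"pe_ratio": 18, "news_sentiment": -0.3},
    }
    print(pick_stocks(universe, [fundamental_agent, sentiment_agent], risk_tolerance=0.3))

In this sketch, varying risk_tolerance changes how strongly agent disagreement is penalized, which is one simple way to emulate the paper's idea of evaluating stock picks under different risk-tolerance levels.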

Suggested Citation

  • Tianjiao Zhao & Jingrao Lyu & Stokes Jones & Harrison Garber & Stefano Pasquali & Dhagash Mehta, 2025. "AlphaAgents: Large Language Model based Multi-Agents for Equity Portfolio Constructions," Papers 2508.11152, arXiv.org.
  • Handle: RePEc:arx:papers:2508.11152

    Download full text from publisher

    File URL: http://arxiv.org/pdf/2508.11152
    File Function: Latest version
    Download Restriction: no

    References listed on IDEAS

    1. Yuqi Nie & Yaxuan Kong & Xiaowen Dong & John M. Mulvey & H. Vincent Poor & Qingsong Wen & Stefan Zohren, 2024. "A Survey of Large Language Models for Financial Applications: Progress, Prospects and Challenges," Papers 2406.11903, arXiv.org.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Xuewen Han & Neng Wang & Shangkun Che & Hongyang Yang & Kunpeng Zhang & Sean Xin Xu, 2024. "Enhancing Investment Analysis: Optimizing AI-Agent Collaboration in Financial Research," Papers 2411.04788, arXiv.org.
    2. Liyuan Chen & Shuoling Liu & Jiangpeng Yan & Xiaoyu Wang & Henglin Liu & Chuang Li & Kecheng Jiao & Jixuan Ying & Yang Veronica Liu & Qiang Yang & Xiu Li, 2025. "Advancing Financial Engineering with Foundation Models: Progress, Applications, and Challenges," Papers 2507.18577, arXiv.org.
    3. Arnav Grover, 2025. "FinRLlama: A Solution to LLM-Engineered Signals Challenge at FinRL Contest 2024," Papers 2502.01992, arXiv.org.
    4. Jizhou Wang & Xiaodan Fang & Lei Huang & Yongfeng Huang, 2025. "TaxAgent: How Large Language Model Designs Fiscal Policy," Papers 2506.02838, arXiv.org.
    5. Tianyu Zhou & Pinqiao Wang & Yilin Wu & Hongyang Yang, 2024. "FinRobot: AI Agent for Equity Research and Valuation with Large Language Models," Papers 2411.08804, arXiv.org.
    6. Hoyoung Lee & Wonbin Ahn & Suhwan Park & Jaehoon Lee & Minjae Kim & Sungdong Yoo & Taeyoon Lim & Woohyung Lim & Yongjae Lee, 2025. "THEME: Enhancing Thematic Investing with Semantic Stock Representations and Temporal Dynamics," Papers 2508.16936, arXiv.org, revised Aug 2025.
    7. Muhammed Golec & Maha AlabdulJalil, 2025. "Interpretable LLMs for Credit Risk: A Systematic Review and Taxonomy," Papers 2506.04290, arXiv.org, revised Jun 2025.
    8. Shanyan Lai, 2025. "Asset Pricing in Pre-trained Transformer," Papers 2505.01575, arXiv.org, revised May 2025.
    9. Yoontae Hwang & Yaxuan Kong & Stefan Zohren & Yongjae Lee, 2025. "Decision-informed Neural Networks with Large Language Model Integration for Portfolio Optimization," Papers 2502.00828, arXiv.org.
    10. Shijie Han & Jingshu Zhang & Yiqing Shen & Kaiyuan Yan & Hongguang Li, 2025. "FinSphere, a Real-Time Stock Analysis Agent Powered by Instruction-Tuned LLMs and Domain Tools," Papers 2501.12399, arXiv.org, revised Jul 2025.
    11. Joel R. Bock, 2024. "Generating long-horizon stock "buy" signals with a neural language model," Papers 2410.18988, arXiv.org.
    12. Haochen Luo & Yuan Zhang & Chen Liu, 2025. "EFS: Evolutionary Factor Searching for Sparse Portfolio Optimization Using Large Language Models," Papers 2507.17211, arXiv.org.
    13. Felix Drinkall & Janet B. Pierrehumbert & Stefan Zohren, 2024. "Forecasting Credit Ratings: A Case Study where Traditional Methods Outperform Generative LLMs," Papers 2407.17624, arXiv.org, revised Jan 2025.
    14. Alejandro Lopez-Lira & Jihoon Kwon & Sangwoon Yoon & Jy-yong Sohn & Chanyeol Choi, 2025. "Bridging Language Models and Financial Analysis," Papers 2503.22693, arXiv.org.
    15. Zonghan Wu & Junlin Wang & Congyuan Zou & Chenhan Wang & Yilei Shao, 2025. "Towards Competent AI for Fundamental Analysis in Finance: A Benchmark Dataset and Evaluation," Papers 2506.07315, arXiv.org.
    16. Yuzhe Yang & Yifei Zhang & Yan Hu & Yilin Guo & Ruoli Gan & Yueru He & Mingcong Lei & Xiao Zhang & Haining Wang & Qianqian Xie & Jimin Huang & Honghai Yu & Benyou Wang, 2024. "UCFE: A User-Centric Financial Expertise Benchmark for Large Language Models," Papers 2410.14059, arXiv.org, revised Feb 2025.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:arx:papers:2508.11152. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: arXiv administrators. General contact details of provider: http://arxiv.org/.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.