Printed from https://ideas.repec.org/p/arx/papers/2511.02458.html

Prompting for Policy: Forecasting Macroeconomic Scenarios with Synthetic LLM Personas

Author

Listed:
  • Giulia Iadisernia
  • Carolina Camassa

Abstract

We evaluate whether persona-based prompting improves Large Language Model (LLM) performance on macroeconomic forecasting tasks. Using 2,368 economics-related personas from the PersonaHub corpus, we prompt GPT-4o to replicate the ECB Survey of Professional Forecasters across 50 quarterly rounds (2013-2025). We compare the persona-prompted forecasts against the panel of human experts across four target variables (HICP, core HICP, GDP growth, unemployment) and four forecast horizons. We also compare the results against 100 baseline forecasts without persona descriptions to isolate the effect of the persona. We report two main findings. First, GPT-4o and human forecasters achieve remarkably similar accuracy levels, with differences that are statistically significant yet practically modest. Our out-of-sample evaluation on 2024-2025 data shows that GPT-4o maintains competitive forecasting performance on unseen events, though with notable differences from the in-sample period. Second, our ablation experiment reveals no measurable forecasting advantage from persona descriptions, suggesting these prompt components can be omitted to reduce computational costs without sacrificing accuracy. Our results provide evidence that GPT-4o can achieve competitive forecasting accuracy even on out-of-sample macroeconomic events when provided with relevant context data, while revealing that diverse prompts produce remarkably homogeneous forecasts compared to human panels.
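The setup described in the abstract (an optional persona line prepended to a forecasting prompt, with accuracy compared across panels) can be sketched as follows. This is a minimal illustration, not the paper's code: the prompt wording, the toy forecast numbers, and the helper names `build_prompt` and `rmse` are all assumptions made for exposition.

```python
import math
from typing import Optional, Sequence


def build_prompt(persona: Optional[str], context: str,
                 target: str, horizon: str) -> str:
    """Assemble a forecasting prompt. The persona line is optional,
    mirroring the paper's ablation (persona vs. no-persona baseline).
    Wording here is illustrative, not the paper's actual template."""
    parts = []
    if persona:
        parts.append(f"You are {persona}.")
    parts.append(context)
    parts.append(f"Provide a point forecast for euro-area {target}, {horizon} ahead.")
    return "\n".join(parts)


def rmse(forecasts: Sequence[float], outturns: Sequence[float]) -> float:
    """Root-mean-squared error of a panel of point forecasts
    against realized values."""
    assert len(forecasts) == len(outturns)
    return math.sqrt(
        sum((f - o) ** 2 for f, o in zip(forecasts, outturns)) / len(forecasts)
    )


# Toy illustration with made-up numbers (not the paper's data):
persona_panel = [2.1, 2.0, 2.2, 1.9]   # persona-prompted forecasts
baseline_panel = [2.0, 2.1, 2.1, 2.0]  # no-persona baseline forecasts
outturns = [2.0, 2.0, 2.0, 2.0]        # realized HICP inflation

print(build_prompt("a central bank economist", "Recent data: ...", "HICP", "one year"))
print(rmse(persona_panel, outturns), rmse(baseline_panel, outturns))
```

Comparing the two RMSE values across many survey rounds is, in essence, the paper's ablation test of whether the persona line adds any accuracy.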

Suggested Citation

  • Giulia Iadisernia & Carolina Camassa, 2025. "Prompting for Policy: Forecasting Macroeconomic Scenarios with Synthetic LLM Personas," Papers 2511.02458, arXiv.org.
  • Handle: RePEc:arx:papers:2511.02458

    Download full text from publisher

    File URL: http://arxiv.org/pdf/2511.02458
    File Function: Latest version
    Download Restriction: no


    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Eric Hitz & Mingmin Feng & Radu Tanase & Ren'e Algesheimer & Manuel S. Mariani, 2025. "The amplifier effect of artificial agents in social contagion," Papers 2502.21037, arXiv.org, revised Mar 2025.
    2. Koji Takahashi & Joon Suk Park, 2025. "Generative AI for Surveys on Payment Apps: AIs' View on Privacy and Technology," IMES Discussion Paper Series 25-E-13, Institute for Monetary and Economic Studies, Bank of Japan.
    3. Hui Chen & Antoine Didisheim & Luciano Somoza & Hanqing Tian, 2025. "A Financial Brain Scan of the LLM," Papers 2508.21285, arXiv.org.
    4. repec:osf:osfxxx:r3qng_v1 is not listed on IDEAS
    5. Matthew O. Jackson & Qiaozhu Me & Stephanie W. Wang & Yutong Xie & Walter Yuan & Seth Benzell & Erik Brynjolfsson & Colin F. Camerer & James Evans & Brian Jabarian & Jon Kleinberg & Juanjuan Meng & Se, 2025. "AI Behavioral Science," Papers 2509.13323, arXiv.org.
    6. George Gui & Seungwoo Kim, 2025. "Leveraging LLMs to Improve Experimental Design: A Generative Stratification Approach," Papers 2509.25709, arXiv.org.
    7. Jeon, June & Kim, Lanu & Park, Jaehyuk, 2025. "The ethics of generative AI in social science research: A qualitative approach for institutionally grounded AI research ethics," Technology in Society, Elsevier, vol. 81(C).
    8. repec:osf:osfxxx:udz28_v2 is not listed on IDEAS
    9. Erkan Gunes & Christoffer Koch Florczak, 2025. "Replacing or enhancing the human coder? Multiclass classification of policy documents with large language models," Journal of Computational Social Science, Springer, vol. 8(2), pages 1-20, May.
    10. Aliya Amirova & Theodora Fteropoulli & Nafiso Ahmed & Martin R Cowie & Joel Z Leibo, 2024. "Framework-based qualitative analysis of free responses of Large Language Models: Algorithmic fidelity," PLOS ONE, Public Library of Science, vol. 19(3), pages 1-33, March.
    11. Sugat Chaturvedi & Rochana Chaturvedi, 2025. "Who Gets the Callback? Generative AI and Gender Bias," Papers 2504.21400, arXiv.org.
    12. Haoyi Zhang & Tianyi Zhu, 2025. "Neither Consent nor Property: A Policy Lab for Data Law," Papers 2510.26727, arXiv.org, revised Jan 2026.
    13. Anne Lundgaard Hansen & Seung Jung Lee, 2025. "Financial Stability Implications of Generative AI: Taming the Animal Spirits," Papers 2510.01451, arXiv.org.
    14. Hua Li & Qifang Wang & Ye Wu, 2025. "From Mobile Media to Generative AI: The Evolutionary Logic of Computational Social Science Across Data, Methods, and Theory," Mathematics, MDPI, vol. 13(19), pages 1-17, September.
    15. Ben Weidmann & Yixian Xu & David J. Deming, 2025. "Measuring Human Leadership Skills with Artificially Intelligent Agents," Papers 2508.02966, arXiv.org.
    16. Navid Ghaffarzadegan & Aritra Majumdar & Ross Williams & Niyousha Hosseinichimeh, 2024. "Generative agent‐based modeling: an introduction and tutorial," System Dynamics Review, System Dynamics Society, vol. 40(1), January.
    17. Seung Jung Lee & Anne Lundgaard Hansen, 2025. "Financial Stability Implications of Generative AI: Taming the Animal Spirits," Finance and Economics Discussion Series 2025-090, Board of Governors of the Federal Reserve System (U.S.).
    18. Ayato Kitadai & Yusuke Fukasawa & Nariaki Nishino, 2025. "Bias-Adjusted LLM Agents for Human-Like Decision-Making via Behavioral Economics," Papers 2508.18600, arXiv.org.
    19. Paola Cillo & Gaia Rubera, 2025. "Generative AI in innovation and marketing processes: A roadmap of research opportunities," Journal of the Academy of Marketing Science, Springer, vol. 53(3), pages 684-701, May.
    20. Seo, Jibeom & Kim, Beom Jun, 2025. "Opinion dynamics model of collaborative learning," Physica A: Statistical Mechanics and its Applications, Elsevier, vol. 672(C).
    21. Sanchaita Hazra & Bodhisattwa Prasad Majumder & Tuhin Chakrabarty, 2025. "AI Safety Should Prioritize the Future of Work," Papers 2504.13959, arXiv.org, revised Jul 2025.
    22. Darija Barak & Miguel Costa-Gomes, 2025. "Humans expect rationality and cooperation from LLM opponents in strategic games," Papers 2505.11011, arXiv.org.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:arx:papers:2511.02458. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: arXiv administrators (email available below). General contact details of provider: http://arxiv.org/ .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.