Printed from https://ideas.repec.org/p/fip/fedlwp/96478.html

Artificial Intelligence and Inflation Forecasts

Author

  • Miguel Faria-e-Castro
  • Fernando Leibovici

Abstract

We explore the ability of Large Language Models (LLMs) to produce in-sample conditional inflation forecasts during the 2019-2023 period. We use a leading LLM (Google AI's PaLM) to produce distributions of conditional forecasts at different horizons and compare these forecasts to those of a leading source, the Survey of Professional Forecasters (SPF). We find that LLM forecasts generate lower mean-squared errors overall in most years, and at almost all horizons. LLM forecasts exhibit slower reversion to the 2% inflation anchor.
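The paper's headline comparison rests on mean-squared forecast error by horizon. A minimal sketch of that metric, using made-up numbers that are not the paper's data:

```python
import numpy as np

def mse(forecasts, realized):
    """Mean-squared forecast error between a forecast path and realized values."""
    forecasts = np.asarray(forecasts, dtype=float)
    realized = np.asarray(realized, dtype=float)
    return float(np.mean((forecasts - realized) ** 2))

# Illustrative annual figures (percent); all values are hypothetical.
realized = [1.8, 1.2, 4.7, 8.0, 4.1]       # stand-in for realized inflation, 2019-2023
llm_forecasts = [2.0, 1.5, 4.0, 7.0, 4.5]  # hypothetical LLM conditional forecasts
spf_forecasts = [2.1, 1.9, 2.8, 5.5, 3.5]  # hypothetical SPF forecasts

# A lower MSE for the LLM series would mirror the paper's finding.
print(mse(llm_forecasts, realized))
print(mse(spf_forecasts, realized))
```

In the paper this comparison is run at several forecast horizons; the sketch above shows a single horizon only.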

Suggested Citation

  • Miguel Faria-e-Castro & Fernando Leibovici, 2023. "Artificial Intelligence and Inflation Forecasts," Working Papers 2023-015, Federal Reserve Bank of St. Louis, revised 26 Feb 2024.
  • Handle: RePEc:fip:fedlwp:96478
    DOI: 10.20955/wp.2023.015

    Download full text from publisher

    File URL: https://doi.org/10.20955/wp.2023.015
    File Function: Full text
    Download Restriction: no

    File URL: https://libkey.io/10.20955/wp.2023.015?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    References listed on IDEAS

    1. John J. Horton, 2023. "Large Language Models as Simulated Economic Agents: What Can We Learn from Homo Silicus?," NBER Working Papers 31122, National Bureau of Economic Research, Inc.; also Papers 2301.07543, arXiv.org.
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Chen Gao & Xiaochong Lan & Nian Li & Yuan Yuan & Jingtao Ding & Zhilun Zhou & Fengli Xu & Yong Li, 2024. "Large language models empowered agent-based modeling and simulation: a survey and perspectives," Humanities and Social Sciences Communications, Palgrave Macmillan, vol. 11(1), pages 1-24, December.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Kirshner, Samuel N., 2024. "GPT and CLT: The impact of ChatGPT's level of abstraction on consumer recommendations," Journal of Retailing and Consumer Services, Elsevier, vol. 76(C).
    2. Hui Chen & Antoine Didisheim & Luciano Somoza & Hanqing Tian, 2025. "A Financial Brain Scan of the LLM," Papers 2508.21285, arXiv.org.
    3. Elif Akata & Lion Schulz & Julian Coda-Forno & Seong Joon Oh & Matthias Bethge & Eric Schulz, 2025. "Playing repeated games with large language models," Nature Human Behaviour, Nature, vol. 9(7), pages 1380-1390, July.
    4. Nir Chemaya & Daniel Martin, 2024. "Perceptions and detection of AI use in manuscript preparation for academic journals," PLOS ONE, Public Library of Science, vol. 19(7), pages 1-16, July.
    5. Lijia Ma & Xingchen Xu & Yong Tan, 2024. "Crafting Knowledge: Exploring the Creative Mechanisms of Chat-Based Search Engines," Papers 2402.19421, arXiv.org.
    6. Ali Goli & Amandeep Singh, 2023. "Exploring the Influence of Language on Time-Reward Perceptions in Large Language Models: A Study Using GPT-3.5," Papers 2305.02531, arXiv.org, revised Jun 2023.
    7. Evangelos Katsamakas, 2024. "Business models for the simulation hypothesis," Papers 2404.08991, arXiv.org.
    8. Yuan Gao & Dokyun Lee & Gordon Burtch & Sina Fazelpour, 2024. "Take Caution in Using LLMs as Human Surrogates: Scylla Ex Machina," Papers 2410.19599, arXiv.org, revised Jan 2025.
    9. Umberto Collodel, 2025. "Interpreting the Interpreter: Can We Model post-ECB Conferences Volatility with LLM Agents?," Papers 2508.13635, arXiv.org, revised Oct 2025.
    10. Jiaxin Liu & Yixuan Tang & Yi Yang & Kar Yan Tam, 2025. "Evaluating and Aligning Human Economic Risk Preferences in LLMs," Papers 2503.06646, arXiv.org, revised Sep 2025.
    11. George Gui & Seungwoo Kim, 2025. "Leveraging LLMs to Improve Experimental Design: A Generative Stratification Approach," Papers 2509.25709, arXiv.org.
    12. Christoph Engel & Max R. P. Grossmann & Axel Ockenfels, 2023. "Integrating machine behavior into human subject experiments: A user-friendly toolkit and illustrations," Discussion Paper Series of the Max Planck Institute for Research on Collective Goods 2024_01, Max Planck Institute for Research on Collective Goods.
    13. Yiting Chen & Tracy Xiao Liu & You Shan & Songfa Zhong, 2023. "The emergence of economic rationality of GPT," Proceedings of the National Academy of Sciences, Proceedings of the National Academy of Sciences, vol. 120(51), pages 2316205120-, December.
    14. Jiafu An & Difang Huang & Chen Lin & Mingzhu Tai, 2024. "Measuring Gender and Racial Biases in Large Language Models," Papers 2403.15281, arXiv.org.
    15. Aliya Amirova & Theodora Fteropoulli & Nafiso Ahmed & Martin R Cowie & Joel Z Leibo, 2024. "Framework-based qualitative analysis of free responses of Large Language Models: Algorithmic fidelity," PLOS ONE, Public Library of Science, vol. 19(3), pages 1-33, March.
    16. Fulin Guo, 2023. "GPT in Game Theory Experiments," Papers 2305.05516, arXiv.org, revised Dec 2023.
    17. Zareh Asatryan & Carlo Birkholz & Friedrich Heinemann, 2025. "Evidence-based policy or beauty contest? An LLM-based meta-analysis of EU cohesion policy evaluations," International Tax and Public Finance, Springer;International Institute of Public Finance, vol. 32(2), pages 625-655, April.
    18. Fabio Motoki & Valdemar Pinho Neto & Victor Rodrigues, 2024. "More human than human: measuring ChatGPT political bias," Public Choice, Springer, vol. 198(1), pages 3-23, January.
    19. Hua Li & Qifang Wang & Ye Wu, 2025. "From Mobile Media to Generative AI: The Evolutionary Logic of Computational Social Science Across Data, Methods, and Theory," Mathematics, MDPI, vol. 13(19), pages 1-17, September.
    20. Ben Weidmann & Yixian Xu & David J. Deming, 2025. "Measuring Human Leadership Skills with Artificially Intelligent Agents," Papers 2508.02966, arXiv.org.

    More about this item


    JEL classification:

    • C45 - Mathematical and Quantitative Methods - - Econometric and Statistical Methods: Special Topics - - - Neural Networks and Related Topics
    • C53 - Mathematical and Quantitative Methods - - Econometric Modeling - - - Forecasting and Prediction Models; Simulation Methods
    • E31 - Macroeconomics and Monetary Economics - - Prices, Business Fluctuations, and Cycles - - - Price Level; Inflation; Deflation
    • E37 - Macroeconomics and Monetary Economics - - Prices, Business Fluctuations, and Cycles - - - Forecasting and Simulation: Models and Applications


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:fip:fedlwp:96478. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item and to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Scott St. Louis (email available below). General contact details of provider: https://edirc.repec.org/data/frbslus.html .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.