
Scaling Point-in-Time Language Models

Authors

  • Bryan T. Kelly

    (Yale SOM; AQR Capital Management, LLC; National Bureau of Economic Research (NBER))

  • Semyon Malamud

    (École Polytechnique Fédérale de Lausanne (EPFL); Centre for Economic Policy Research (CEPR); Swiss Finance Institute)

  • Johannes Schwab

    (École Polytechnique Fédérale de Lausanne (EPFL))

  • Teng Andrea Xu

    (AQR Capital Management, LLC; École Polytechnique Fédérale de Lausanne (EPFL))

Abstract

Large language models trained on unrestricted internet corpora inevitably embed information from the future, introducing lookahead bias that compromises the validity of backtests and causal inference in finance and the social sciences. Point-in-time language models, trained exclusively on text available up to each calendar date, eliminate this leakage by construction, but existing efforts typically produce models that lag well behind their unconstrained counterparts. We show that this performance gap can be substantially narrowed through scale. Training decoder-only transformers with up to 4 billion parameters on 1 trillion chronologically filtered tokens from FineWeb, we construct a sequence of monthly model checkpoints spanning 2013-2024. Across a range of common-sense reasoning and language understanding benchmarks, our models approach the performance of leading open-weight models of comparable size (e.g., Gemma-3-4B and LLaMA-7B) trained on temporally unrestricted data, although a performance gap remains on several tasks. Instruction fine-tuning via LoRA further improves downstream usability. We release the complete pipeline, including dataset construction, training infrastructure, and evaluation code, to enable reproducible point-in-time language modeling and to support research applications that require strict temporal validity. Models are available on Hugging Face and code is available on GitHub.
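
To make the chronological-filtering step concrete, the following is a minimal Python sketch, not the authors' released pipeline: it streams the public FineWeb release and keeps only documents dated before a given cutoff. The dataset identifier and the "date" metadata field follow the public Hugging Face release of FineWeb; the cutoff date and everything else are illustrative assumptions.

    from datasets import load_dataset

    CUTOFF = "2016-01-01"  # hypothetical checkpoint date (ISO format)

    # Stream FineWeb and keep only documents dated before the cutoff.
    # The "date" field is assumed from the public FineWeb schema; comparing
    # ISO-8601 date strings lexicographically matches chronological order.
    fineweb = load_dataset("HuggingFaceFW/fineweb", split="train", streaming=True)
    point_in_time = fineweb.filter(lambda doc: doc["date"] < CUTOFF)

    for doc in point_in_time.take(3):  # peek at a few retained documents
        print(doc["date"], doc["text"][:80])

A similarly hedged sketch of the LoRA instruction-tuning step, using the PEFT library: the checkpoint path, rank, and target modules below are placeholder defaults, not the paper's configuration.

    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, TaskType, get_peft_model

    # Load a (hypothetical) point-in-time base checkpoint.
    base = AutoModelForCausalLM.from_pretrained("path/to/pit-checkpoint")

    # Attach low-rank adapters to the attention projections; only these
    # small adapter matrices are trained, keeping the base weights frozen.
    lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                      target_modules=["q_proj", "v_proj"],
                      task_type=TaskType.CAUSAL_LM)
    model = get_peft_model(base, lora)
    model.print_trainable_parameters()  # adapter weights only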

Suggested Citation

  • Bryan T. Kelly & Semyon Malamud & Johannes Schwab & Teng Andrea Xu, 2026. "Scaling Point-in-Time Language Models," Swiss Finance Institute Research Paper Series 26-37, Swiss Finance Institute.
  • Handle: RePEc:chf:rpseri:rp2637

    Download full text from publisher

    File URL: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6681860
    Download Restriction: no
