
Can adversarial attacks by large language models be attributed?

Authors

  • Manuel Cebrian
  • Andres Abeliuk
  • Jan Arne Telle

Abstract

Attributing outputs from Large Language Models (LLMs) in adversarial settings—such as cyberattacks and disinformation campaigns—presents significant challenges that are likely to grow in importance. We approach this attribution problem from both a theoretical and an empirical perspective, drawing on formal language theory (identification in the limit) and data-driven analysis of the expanding LLM ecosystem. By modeling an LLM's set of possible outputs as a formal language, we analyze whether finite samples of text can uniquely pinpoint the originating model. Our results show that, under mild assumptions of overlapping capabilities among models, certain classes of LLMs are fundamentally non-identifiable from their outputs alone. We delineate four regimes of theoretical identifiability: (1) an infinite class of deterministic (discrete) LLM languages is not identifiable (Gold's classical result from 1967); (2) an infinite class of probabilistic LLMs is also not identifiable (by extension of the deterministic case); (3) a finite class of deterministic LLMs is identifiable (consistent with Angluin's tell-tale criterion); and (4) even a finite class of probabilistic LLMs can be non-identifiable (we provide a new counterexample establishing this negative result).

Complementing these theoretical insights, we quantify the explosion in the number of plausible model origins (the hypothesis space) for a given output in recent years. Even under conservative assumptions (each open-source model fine-tuned on at most one new dataset), the count of distinct candidate models doubles approximately every 0.5 years, and allowing multi-dataset fine-tuning combinations yields doubling times as short as 0.28 years. This combinatorial growth, alongside the extraordinary computational cost of brute-force likelihood attribution across all models and potential users, renders exhaustive attribution infeasible in practice. Our findings highlight an urgent need for new strategies and proactive governance to mitigate the risks posed by unattributable, adversarial use of LLMs as their influence continues to expand.

Author summary: When AI-generated attacks—from disinformation to cyberattacks—occur, can we reliably trace them back to their originating language model? This paper establishes theoretical limits, showing that in realistic settings, attributing outputs to specific large language models is provably impossible, even with unlimited data. Empirically, we quantify the explosive growth in the number of plausible model origins, demonstrating how quickly attribution becomes infeasible in practice. These combined results have stark implications for cybersecurity, misinformation mitigation, and AI governance.
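
As background, the identifiability regimes above rest on two standard notions from the learning-theory literature. The compact statements below are textbook definitions, not formalism quoted from the paper.

    % Gold (1967): a learner \phi identifies a class \mathcal{L} in the limit
    % from text if, on every presentation t of every L in \mathcal{L}, its
    % hypotheses eventually lock onto a correct index for L:
    \[
      \forall L \in \mathcal{L},\ \forall \text{texts } t \text{ for } L:\quad
      \exists n_0\ \forall n \ge n_0:\ \phi(t[1{:}n]) = \phi(t[1{:}n_0])
      \ \text{ and }\ L_{\phi(t[1{:}n_0])} = L.
    \]
    % Angluin (1980): an indexed family \mathcal{L} is identifiable in the
    % limit from positive text iff every L \in \mathcal{L} has a finite
    % tell-tale set D_L \subseteq L such that no L' \in \mathcal{L}
    % satisfies D_L \subseteq L' \subsetneq L.

Regime (4) states that the probabilistic analogue of the finite-class positive result fails; the paper's new counterexample is not reproduced here.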
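
The quoted doubling times follow from exponential fits to counts of plausible candidate models over time. The Python sketch below shows the underlying arithmetic; the counts and the combinatorial bounds are illustrative assumptions for exposition, not figures or formulas taken from the paper.

    import math

    # Hypothetical candidate-model counts at two points in time
    # (placeholder values, not the paper's data).
    n_t1 = 10_000        # candidates at time t1
    n_t2 = 40_000        # candidates one year later
    years_elapsed = 1.0

    # For exponential growth N(t) = N0 * 2**(t / T), two observations
    # give the doubling time T = dt * ln(2) / ln(N2 / N1).
    doubling_time = years_elapsed * math.log(2) / math.log(n_t2 / n_t1)
    print(f"doubling time: {doubling_time:.2f} years")  # 0.50 for these values

    # One simple way to bound the hypothesis space: m base models, each
    # fine-tuned on at most one of d datasets, gives m * (1 + d) candidates;
    # allowing arbitrary dataset combinations gives m * 2**d, a far
    # faster-growing count (in the spirit of, not identical to, the paper's
    # multi-dataset analysis).
    m, d = 100, 20
    print(f"single-dataset bound: {m * (1 + d):,}")   # 2,100
    print(f"multi-dataset bound:  {m * 2**d:,}")      # 104,857,600

A doubling time of 0.5 years corresponds to roughly a fourfold increase in candidates per year; at 0.28 years the multiplier is closer to twelvefold.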

Suggested Citation

  • Manuel Cebrian & Andres Abeliuk & Jan Arne Telle, 2026. "Can adversarial attacks by large language models be attributed?," PLOS Complex Systems, Public Library of Science, vol. 3(2), pages 1-21, February.
  • Handle: RePEc:plo:pcsy00:0000085
    DOI: 10.1371/journal.pcsy.0000085

Download full text from publisher

    File URL: https://journals.plos.org/complexsystems/article?id=10.1371/journal.pcsy.0000085
    Download Restriction: no

    File URL: https://journals.plos.org/complexsystems/article/file?id=10.1371/journal.pcsy.0000085&type=printable
    Download Restriction: no

    File URL: https://libkey.io/10.1371/journal.pcsy.0000085?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to a version of this item that you can access through your library subscription.

