Printed from https://ideas.repec.org/a/taf/amstat/v80y2026i1p164-176.html

A Comparison of DeepSeek and other LLMs

Author

Listed:
  • Tianchen Gao
  • Jiashun Jin
  • Zheng Tracy Ke
  • Gabriel Moryoussef

Abstract

Recently, DeepSeek has been the focus of attention in and beyond the AI community. An interesting problem is how DeepSeek compares to other large language models (LLMs). There are many tasks an LLM can perform, and in this article we use the task of predicting an outcome from a short text as the basis for comparison. We consider two settings: an authorship classification setting and a citation classification setting. In the first, the goal is to determine whether a short text was written by a human or by an AI. In the second, the goal is to classify a citation into one of four types based on its textual content. For each experiment, we compare DeepSeek with four popular LLMs: Claude, Gemini, GPT, and Llama. We find that, in terms of classification accuracy, DeepSeek outperforms Gemini, GPT, and Llama in most cases, but underperforms Claude. We also find that DeepSeek is comparatively slower than the others but cheap to use, while Claude is much more expensive than all the others. Finally, we find that, in terms of similarity, the outputs of DeepSeek are most similar to those of Gemini and Claude (and among all five LLMs, Claude and Gemini have the most similar outputs). We also present a fully labeled dataset that we collected ourselves, and propose a recipe for using the LLMs together with a recent dataset, MADStat, to generate new datasets. The datasets in our article can serve as benchmarks for future studies of LLMs.
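The two comparison measures described above, per-model classification accuracy and pairwise output similarity, can be sketched as follows. This is an illustrative sketch, not the authors' code; the model names and predictions below are hypothetical placeholders, and agreement rate is used as one simple way to quantify how similar two models' outputs are.

```python
# Illustrative sketch (assumptions: toy labels and predictions, not the
# paper's data). Given gold labels and each model's predictions on the
# same texts, compute per-model accuracy and pairwise agreement.

def accuracy(preds, labels):
    """Fraction of predictions matching the gold labels."""
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def agreement(preds_a, preds_b):
    """Fraction of texts on which two models give the same prediction."""
    return sum(a == b for a, b in zip(preds_a, preds_b)) / len(preds_a)

# Hypothetical gold labels for the authorship task (human vs. AI)
labels = ["human", "ai", "ai", "human", "ai"]

# Hypothetical predictions from two of the five models
outputs = {
    "DeepSeek": ["human", "ai", "ai", "ai", "ai"],
    "Claude":   ["human", "ai", "ai", "human", "ai"],
}

for name, preds in outputs.items():
    print(f"{name}: accuracy = {accuracy(preds, labels):.2f}")
print(f"agreement = {agreement(outputs['DeepSeek'], outputs['Claude']):.2f}")
```

In practice one would compute the agreement for every pair of models, giving a similarity matrix over the five LLMs.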

Suggested Citation

  • Tianchen Gao & Jiashun Jin & Zheng Tracy Ke & Gabriel Moryoussef, 2026. "A Comparison of DeepSeek and other LLMs," The American Statistician, Taylor & Francis Journals, vol. 80(1), pages 164-176, January.
  • Handle: RePEc:taf:amstat:v:80:y:2026:i:1:p:164-176
    DOI: 10.1080/00031305.2025.2611010

    Download full text from publisher

    File URL: http://hdl.handle.net/10.1080/00031305.2025.2611010
    Download Restriction: Access to full text is restricted to subscribers.

    File URL: https://libkey.io/10.1080/00031305.2025.2611010?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As access to this document is restricted, you may want to search for a different version of it.



    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.