
Automated Test Generation Using Large Language Models

Author

Listed:
  • Marcin Andrzejewski

    (GenerativeAI Academic Research Team (GART), Capgemini Insights & Data, 54-202 Wroclaw, Poland
    These authors contributed equally to this work.)

  • Nina Dubicka

    (GenerativeAI Academic Research Team (GART), Capgemini Insights & Data, 54-202 Wroclaw, Poland
    These authors contributed equally to this work.)

  • Jędrzej Podolak

    (GenerativeAI Academic Research Team (GART), Capgemini Insights & Data, 54-202 Wroclaw, Poland)

  • Marek Kowal

    (GenerativeAI Academic Research Team (GART), Capgemini Insights & Data, 54-202 Wroclaw, Poland)

  • Jakub Siłka

    (Faculty of Applied Mathematics, Silesian University of Technology, 44-100 Gliwice, Poland)

Abstract

This study explores the potential of generative AI, specifically Large Language Models (LLMs), for automating unit test generation in Python 3.13. We analyze tests, both those created by programmers and those generated by LLMs, for fifty source code cases. Our main focus is on how the choice of model, the difficulty of the source code, and the prompting strategy influence the quality of the generated tests. The results show that AI models can help automate test creation for simple code, but their effectiveness decreases for more complex tasks. We introduce an embedding-based similarity analysis to assess how closely AI-generated tests resemble human-written ones, revealing that AI outputs often lack semantic diversity. The study also highlights the potential of AI models for rapid test prototyping, which can significantly speed up the software development cycle. However, further customization and training of the models on specific use cases are needed to achieve greater precision. Our findings provide practical insights into integrating LLMs into software testing workflows and emphasize the importance of prompt design and model selection.
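The embedding-based similarity analysis mentioned in the abstract can be illustrated with a minimal sketch: embed a human-written test and an AI-generated test as vectors, then compare them with cosine similarity. This is an assumption-laden illustration, not the paper's actual pipeline — the embedding model the authors used is not specified here, so the example vectors below are hypothetical placeholders.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings of a human-written test and an AI-generated test.
# In practice these would come from a sentence-embedding model applied
# to the test source code.
human_test_vec = [0.12, 0.80, 0.33]
ai_test_vec = [0.10, 0.75, 0.40]

print(round(cosine_similarity(human_test_vec, ai_test_vec), 3))
```

A high similarity across many AI-generated tests relative to a diverse human-written set would support the abstract's observation that AI outputs often lack semantic diversity.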

Suggested Citation

  • Marcin Andrzejewski & Nina Dubicka & Jędrzej Podolak & Marek Kowal & Jakub Siłka, 2025. "Automated Test Generation Using Large Language Models," Data, MDPI, vol. 10(10), pages 1-20, September.
  • Handle: RePEc:gam:jdataj:v:10:y:2025:i:10:p:156-:d:1761793

    Download full text from publisher

    File URL: https://www.mdpi.com/2306-5729/10/10/156/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/2306-5729/10/10/156/
    Download Restriction: no


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jdataj:v:10:y:2025:i:10:p:156-:d:1761793. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

We have no bibliographic references for this item. You can help add them by using this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.