Author
Listed:
- Yanet Sáez Iznaga
(Dectech, Rua Circular Norte do Parque Industrial e Tecnológico de Évora, Lote 2, 7005-841 Évora, Portugal
These authors contributed equally to this work.)
- Luís Rato
(VISTA Lab, ALGORITMI Research Center/LASI, University of Évora, 7000-671 Évora, Portugal)
- Pedro Salgueiro
(VISTA Lab, ALGORITMI Research Center/LASI, University of Évora, 7000-671 Évora, Portugal)
- Javier Lamar León
(VISTA Lab, ALGORITMI Research Center/LASI, University of Évora, 7000-671 Évora, Portugal
These authors contributed equally to this work.)
Abstract
This work investigates the use of large language models (LLMs) to enhance automation in software testing, with a particular focus on generating high-quality, context-aware test scripts from natural language descriptions, addressing both text-to-code and text+code-to-code generation tasks. The Codestral Mamba model was fine-tuned using a proposed method for integrating LoRA matrices into its architecture, enabling efficient domain-specific adaptation and positioning Mamba as a viable alternative to Transformer-based models. The model was trained and evaluated on two datasets: the CONCODE/CodeXGLUE benchmark and the proprietary TestCase2Code dataset. Through structured prompt engineering, the system was optimized to generate syntactically valid and semantically meaningful code for test cases. Experimental results demonstrate that the proposed methodology successfully enables the automatic generation of code-based test cases using LLMs. In addition, this work reports secondary benefits, including improvements in test coverage, automation efficiency, and defect detection compared to traditional manual approaches. The integration of LLMs into the software testing pipeline also showed potential for reducing time and cost while enhancing developer productivity and software quality. The findings suggest that LLM-driven approaches can be effectively aligned with continuous integration and deployment workflows. This work contributes to the growing body of research on AI-assisted software engineering and offers practical insights into the capabilities and limitations of current LLM technologies for testing automation.
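A minimal sketch of the kind of LoRA adaptation the abstract describes, assuming the Hugging Face Transformers/PEFT stack; the checkpoint name, adapter rank, target projection modules, and prompt format are illustrative assumptions, not details taken from the paper.

```python
# Sketch: attach LoRA adapters to a Mamba-based code model and prompt it for a
# test case. Model id and target modules are assumptions, not the paper's setup.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_id = "mistralai/Mamba-Codestral-7B-v0.1"  # assumed public checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Low-rank adapters on the mixer projections (hypothetical module selection);
# only these small matrices are trained during domain-specific fine-tuning.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["in_proj", "out_proj", "x_proj", "dt_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # confirms most weights stay frozen

# Structured prompt for a text-to-code test-generation query (illustrative format).
prompt = (
    "### Description:\nReturn the maximum of two integers.\n"
    "### Test case:\n"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```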
Suggested Citation
Yanet Sáez Iznaga & Luís Rato & Pedro Salgueiro & Javier Lamar León, 2025.
"Integrating Large Language Models into Automated Software Testing,"
Future Internet, MDPI, vol. 17(10), pages 1-25, October.
Handle:
RePEc:gam:jftint:v:17:y:2025:i:10:p:476-:d:1774373