Printed from https://ideas.repec.org/a/gam/jftint/v17y2025i9p412-d1745033.html

GPT-4.1 Sets the Standard in Automated Experiment Design Using Novel Python Libraries

Author

Listed:
  • Nuno Fachada

    (Copelabs, Lusófona University, Campo Grande, 376, 1749-024 Lisboa, Portugal
    Center of Technology and Systems (UNINOVA-CTS) and Associated Lab of Intelligent Systems (LASI), 2829-516 Caparica, Portugal)

  • Daniel Fernandes

    (Copelabs, Lusófona University, Campo Grande, 376, 1749-024 Lisboa, Portugal)

  • Carlos M. Fernandes

    (Copelabs, Lusófona University, Campo Grande, 376, 1749-024 Lisboa, Portugal
    Center of Technology and Systems (UNINOVA-CTS) and Associated Lab of Intelligent Systems (LASI), 2829-516 Caparica, Portugal)

  • Bruno D. Ferreira-Saraiva

    (Copelabs, Lusófona University, Campo Grande, 376, 1749-024 Lisboa, Portugal
    CICANT, Lusófona University, Campo Grande, 376, 1749-024 Lisboa, Portugal)

  • João P. Matos-Carvalho

    (Center of Technology and Systems (UNINOVA-CTS) and Associated Lab of Intelligent Systems (LASI), 2829-516 Caparica, Portugal
    LASIGE and Departamento de Informática, Faculdade de Ciências, University of Lisbon, Campo Grande, 1749-016 Lisboa, Portugal)

Abstract

Large language models (LLMs) have advanced rapidly as tools for automating code generation in scientific research, yet their ability to interpret and use unfamiliar Python APIs for complex computational experiments remains poorly characterized. This study systematically benchmarks a selection of state-of-the-art LLMs in generating functional Python code for two increasingly challenging scenarios: conversational data analysis with the ParShift library, and synthetic data generation and clustering using pyclugen and scikit-learn. Both experiments use structured, zero-shot prompts specifying detailed requirements but omitting in-context examples. Model outputs are evaluated quantitatively for functional correctness and prompt compliance over multiple runs, and qualitatively by analyzing the errors produced when code execution fails. Results show that only a small subset of models consistently generate correct, executable code. GPT-4.1 achieved a 100% success rate across all runs in both experimental tasks, whereas most other models succeeded in fewer than half of the runs, with only Grok-3 and Mistral-Large approaching comparable performance. In addition to benchmarking LLM performance, this approach helps identify shortcomings in third-party libraries, such as unclear documentation or obscure implementation bugs. Overall, these findings highlight current limitations of LLMs for end-to-end scientific automation and emphasize the need for careful prompt design, comprehensive library documentation, and continued advances in language model capabilities.
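The quantitative evaluation described above (executing each model's generated script repeatedly and measuring the fraction of runs that succeed) can be sketched with standard-library Python. This is an illustrative harness, not the authors' actual code: the `run_generated_script` helper, the model names, and the run counts are all placeholder assumptions.

```python
import subprocess
import sys
from collections import defaultdict

def run_generated_script(path: str, timeout: int = 60) -> bool:
    """Execute an LLM-generated Python script in a subprocess.

    A run counts as successful only if the script exits with code 0
    within the timeout (hypothetical success criterion).
    """
    try:
        proc = subprocess.run(
            [sys.executable, path], capture_output=True, timeout=timeout
        )
        return proc.returncode == 0
    except subprocess.TimeoutExpired:
        return False

def success_rates(runs):
    """Aggregate per-model success rates from (model, succeeded) records."""
    totals = defaultdict(int)
    wins = defaultdict(int)
    for model, ok in runs:
        totals[model] += 1
        wins[model] += int(ok)
    return {m: wins[m] / totals[m] for m in totals}

# Hypothetical run log: 4 runs per model (values are illustrative only,
# not the paper's reported results).
log = [
    ("gpt-4.1", True), ("gpt-4.1", True), ("gpt-4.1", True), ("gpt-4.1", True),
    ("other-model", True), ("other-model", False),
    ("other-model", False), ("other-model", True),
]
print(success_rates(log))  # per-model fraction of successful runs
```

A real harness would additionally check prompt compliance (e.g. expected output files or plots) and capture stderr for the qualitative error analysis the abstract mentions.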

Suggested Citation

  • Nuno Fachada & Daniel Fernandes & Carlos M. Fernandes & Bruno D. Ferreira-Saraiva & João P. Matos-Carvalho, 2025. "GPT-4.1 Sets the Standard in Automated Experiment Design Using Novel Python Libraries," Future Internet, MDPI, vol. 17(9), pages 1-28, September.
  • Handle: RePEc:gam:jftint:v:17:y:2025:i:9:p:412-:d:1745033

    Download full text from publisher

    File URL: https://www.mdpi.com/1999-5903/17/9/412/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/1999-5903/17/9/412/
    Download Restriction: no


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jftint:v:17:y:2025:i:9:p:412-:d:1745033. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows your profile to be linked to this item. It also allows you to accept potential citations to this item that we are uncertain about.

We have no bibliographic references for this item. You can help add them by using this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.