
Risks of AI scientists: prioritizing safeguarding over autonomy

Author

Listed:

  • Xiangru Tang (Yale University)
  • Qiao Jin (National Institutes of Health)
  • Kunlun Zhu (Mila-Quebec AI Institute)
  • Tongxin Yuan (Shanghai Jiao Tong University)
  • Yichi Zhang (Yale University)
  • Wangchunshu Zhou (OPPO Research Institute)
  • Meng Qu (Mila-Quebec AI Institute)
  • Yilun Zhao (Yale University)
  • Jian Tang (Mila-Quebec AI Institute)
  • Zhuosheng Zhang (Shanghai Jiao Tong University)
  • Arman Cohan (Yale University)
  • Dov Greenbaum (Reichman University; Yale University)
  • Zhiyong Lu (National Institutes of Health)
  • Mark Gerstein (Yale University)

Abstract

AI scientists powered by large language models have demonstrated substantial promise in autonomously conducting experiments and facilitating scientific discoveries across various disciplines. Alongside these capabilities, however, such agents introduce novel vulnerabilities that require careful consideration for safety, and these vulnerabilities have so far received little comprehensive exploration. This perspective examines the vulnerabilities of AI scientists, shedding light on the potential risks associated with their misuse and emphasizing the need for safety measures. We begin with an overview of the risks inherent to AI scientists, taking into account user intent, the specific scientific domain, and the potential impact on the external environment. We then explore the underlying causes of these vulnerabilities and provide a scoping review of the limited existing work. Based on this analysis, we propose a triadic framework involving human regulation, agent alignment, and an understanding of environmental feedback (agent regulation) to mitigate the identified risks. Finally, we highlight the limitations and challenges of safeguarding AI scientists and advocate for the development of improved models, robust benchmarks, and comprehensive regulations.
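The triadic framework in the abstract lends itself to a simple control-flow picture: each agent action passes an alignment check, then human sign-off, and its environmental feedback is inspected before the agent proceeds. Below is a minimal sketch in Python; all names (safeguarded_step, ActionReview, the deny-list contents) are illustrative assumptions for this page, not an implementation from the paper.

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class ActionReview:
        action: str
        approved: bool
        reason: str

    def check_alignment(action: str) -> bool:
        # Agent alignment: a stand-in content filter. A real system would
        # use a trained safety classifier, not a hypothetical deny-list.
        banned = ("synthesize nerve agent", "bypass biosafety")
        return not any(term in action.lower() for term in banned)

    def safeguarded_step(action: str,
                         execute: Callable[[str], str],
                         approve: Callable[[str], bool]) -> ActionReview:
        if not check_alignment(action):           # agent alignment
            return ActionReview(action, False, "blocked by alignment check")
        if not approve(action):                   # human regulation
            return ActionReview(action, False, "denied by human reviewer")
        result = execute(action)                  # act on the environment
        if "error" in result.lower():             # agent regulation: inspect
            # environmental feedback before letting the agent continue
            return ActionReview(action, False, "halted on feedback: " + result)
        return ActionReview(action, True, result)

    # Usage: a benign action with an auto-approving reviewer.
    review = safeguarded_step("measure absorbance of sample A",
                              execute=lambda a: "ok: 0.42 AU",
                              approve=lambda a: True)
    print(review)

In practice the approve callback would route high-risk actions to a human reviewer, and the feedback check would use domain-specific anomaly detection rather than string matching.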

Suggested Citation

  • Xiangru Tang & Qiao Jin & Kunlun Zhu & Tongxin Yuan & Yichi Zhang & Wangchunshu Zhou & Meng Qu & Yilun Zhao & Jian Tang & Zhuosheng Zhang & Arman Cohan & Dov Greenbaum & Zhiyong Lu & Mark Gerstein, 2025. "Risks of AI scientists: prioritizing safeguarding over autonomy," Nature Communications, Nature, vol. 16(1), pages 1-11, December.
  • Handle: RePEc:nat:natcom:v:16:y:2025:i:1:d:10.1038_s41467-025-63913-1
    DOI: 10.1038/s41467-025-63913-1
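    For readers unfamiliar with the identifiers above: the RePEc handle is resolved by RePEc services, while the DOI resolves through the public doi.org redirector to the publisher's page. A minimal sketch of the latter (standard DOI behaviour, nothing specific to IDEAS):

        # Resolve the DOI via the doi.org redirector; urlopen follows the
        # HTTP redirect, so geturl() reports the final publisher URL.
        from urllib.request import urlopen

        with urlopen("https://doi.org/10.1038/s41467-025-63913-1") as resp:
            print(resp.geturl())  # ends up at nature.com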

    Download full text from publisher

    File URL: https://www.nature.com/articles/s41467-025-63913-1
    File Function: Abstract
    Download Restriction: no

    File URL: https://libkey.io/10.1038/s41467-025-63913-1?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    References listed on IDEAS

    1. Murray Shanahan & Kyle McDonell & Laria Reynolds, 2023. "Role play with large language models," Nature, Nature, vol. 623(7987), pages 493-498, November.
    2. Karan Singhal & Shekoofeh Azizi & Tao Tu & S. Sara Mahdavi & Jason Wei & Hyung Won Chung & Nathan Scales & Ajay Tanwani & Heather Cole-Lewis & Stephen Pfohl & Perry Payne & Martin Seneviratne & Paul G, 2023. "Publisher Correction: Large language models encode clinical knowledge," Nature, Nature, vol. 620(7973), pages 19-19, August.
    3. Daniil A. Boiko & Robert MacKnight & Ben Kline & Gabe Gomes, 2023. "Autonomous chemical research with large language models," Nature, Nature, vol. 624(7992), pages 570-578, December.
    4. Junyi Wu & Shari Shang, 2020. "Managing Uncertainty in AI-Enabled Decision Making and Achieving Sustainability," Sustainability, MDPI, vol. 12(21), pages 1-17, October.
    5. Xu, Zhaoyi & Saleh, Joseph Homer, 2021. "Machine learning for reliability engineering and safety applications: Review of current status and future opportunities," Reliability Engineering and System Safety, Elsevier, vol. 211(C).
    6. Karan Singhal & Shekoofeh Azizi & Tao Tu & S. Sara Mahdavi & Jason Wei & Hyung Won Chung & Nathan Scales & Ajay Tanwani & Heather Cole-Lewis & Stephen Pfohl & Perry Payne & Martin Seneviratne & Paul G, 2023. "Large language models encode clinical knowledge," Nature, Nature, vol. 620(7972), pages 172-180, August.
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Chen Gao & Xiaochong Lan & Nian Li & Yuan Yuan & Jingtao Ding & Zhilun Zhou & Fengli Xu & Yong Li, 2024. "Large language models empowered agent-based modeling and simulation: a survey and perspectives," Humanities and Social Sciences Communications, Palgrave Macmillan, vol. 11(1), pages 1-24, December.
    2. Maxime Griot & Coralie Hemptinne & Jean Vanderdonckt & Demet Yuksel, 2025. "Large Language Models lack essential metacognition for reliable medical reasoning," Nature Communications, Nature, vol. 16(1), pages 1-10, December.
    3. Ching-Nam Hang & Pei-Duo Yu & Roberto Morabito & Chee-Wei Tan, 2024. "Large Language Models Meet Next-Generation Networking Technologies: A Review," Future Internet, MDPI, vol. 16(10), pages 1-29, October.
    4. Arslon Ruziboev & Dilmurod Turimov & Jiyoun Kim & Wooseong Kim, 2025. "Multiclass Classification of Sarcopenia Severity in Korean Adults Using Machine Learning and Model Fusion Approaches," Mathematics, MDPI, vol. 13(18), pages 1-22, September.
    5. Chao-Chun Hsu & Ziad Obermeyer & Chenhao Tan, 2025. "A machine learning model using clinical notes to identify physician fatigue," Nature Communications, Nature, vol. 16(1), pages 1-10, December.
    6. Ali Nemati & Mohammad Assadi Shalmani & Qiang Lu & Jake Luo, 2025. "Benchmarking Large Language Models from Open and Closed Source Models to Apply Data Annotation for Free-Text Criteria in Healthcare," Future Internet, MDPI, vol. 17(4), pages 1-27, March.
    7. Yang Zhao & Pu Wang & Yibo Zhao & Hongru Du & Hao Frank Yang, 2025. "SafeTraffic Copilot: adapting large language models for trustworthy traffic safety assessments and decision interventions," Nature Communications, Nature, vol. 16(1), pages 1-17, December.
    8. Ofir Ben Shoham & Nadav Rappoport, 2024. "CPLLM: Clinical prediction with large language models," PLOS Digital Health, Public Library of Science, vol. 3(12), pages 1-15, December.
    9. Sheng Wang & Fangyuan Zhao & Dechao Bu & Yunwei Lu & Ming Gong & Hongjie Liu & Zhaohui Yang & Xiaoxi Zeng & Zhiyuan Yuan & Baoping Wan & Jingbo Sun & Yang Wu & Lianhe Zhao & Xirun Wan & Wei Huang & Ta, 2025. "LINS: A general medical Q&A framework for enhancing the quality and credibility of LLM-generated responses," Nature Communications, Nature, vol. 16(1), pages 1-20, December.
    10. Venkat Ram Reddy Ganuthula & Krishna Kumar Balaraman, 2025. "The Paradox of Professional Input: How Expert Collaboration with AI Systems Shapes Their Future Value," Papers 2504.12654, arXiv.org.
    11. Cheng-Yi Li & Kao-Jung Chang & Cheng-Fu Yang & Hsin-Yu Wu & Wenting Chen & Hritik Bansal & Ling Chen & Yi-Ping Yang & Yu-Chun Chen & Shih-Pin Chen & Shih-Jen Chen & Jiing-Feng Lirng & Kai-Wei Chang & , 2025. "Towards a holistic framework for multimodal LLM in 3D brain CT radiology report generation," Nature Communications, Nature, vol. 16(1), pages 1-14, December.
    12. Kevin Wu & Eric Wu & Kevin Wei & Angela Zhang & Allison Casasola & Teresa Nguyen & Sith Riantawan & Patricia Shi & Daniel Ho & James Zou, 2025. "An automated framework for assessing how well LLMs cite relevant medical references," Nature Communications, Nature, vol. 16(1), pages 1-10, December.
    13. Tingmingke Lu, 2025. "Maximum Hallucination Standards for Domain-Specific Large Language Models," Papers 2503.05481, arXiv.org.
    14. Zheng, Shuwen & Pan, Kai & Liu, Jie & Chen, Yunxia, 2024. "Empirical study on fine-tuning pre-trained large language models for fault diagnosis of complex systems," Reliability Engineering and System Safety, Elsevier, vol. 252(C).
    15. van Kolfschooten, Hannah & van Oirschot, Janneke, 2024. "The EU Artificial Intelligence Act (2024): Implications for healthcare," Health Policy, Elsevier, vol. 149(C).
    16. Soroosh Tayebi Arasteh & Tianyu Han & Mahshad Lotfinia & Christiane Kuhl & Jakob Nikolas Kather & Daniel Truhn & Sven Nebelung, 2024. "Large language models streamline automated machine learning for clinical studies," Nature Communications, Nature, vol. 15(1), pages 1-12, December.
    17. Zhou, Zhen & Gu, Ziyuan & Qu, Xiaobo & Liu, Pan & Liu, Zhiyuan & Yu, Wenwu, 2024. "Urban mobility foundation model: A literature review and hierarchical perspective," Transportation Research Part E: Logistics and Transportation Review, Elsevier, vol. 192(C).
    18. Jean Ogier du Terrail & Quentin Klopfenstein & Honghao Li & Imke Mayer & Nicolas Loiseau & Mohammad Hallal & Michael Debouver & Thibault Camalon & Thibault Fouqueray & Jorge Arellano Castro & Zahia Ya, 2025. "FedECA: federated external control arms for causal inference with time-to-event data in distributed settings," Nature Communications, Nature, vol. 16(1), pages 1-22, December.
    19. Hossam A. Gabber & Omar S. Hemied, 2024. "Domain-Specific Large Language Model for Renewable Energy and Hydrogen Deployment Strategies," Energies, MDPI, vol. 17(23), pages 1-25, December.
    20. Chaoyi Wu & Xiaoman Zhang & Ya Zhang & Hui Hui & Yanfeng Wang & Weidi Xie, 2025. "Towards generalist foundation model for radiology by leveraging web-scale 2D&3D medical data," Nature Communications, Nature, vol. 16(1), pages 1-22, December.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:nat:natcom:v:16:y:2025:i:1:d:10.1038_s41467-025-63913-1. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item and to accept potential citations to this item that we are uncertain about.

If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references, in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing (email available below). General contact details of provider: http://www.nature.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.