Published
December 9, 2025
Author(s)
Mengdi Wang, Zaixi Zhang, Amrit Singh Bedi, Alvaro Velasquez, Stephanie Guerra, Sheng Lin-Gibson, Le Cong, Megan Blewett, Yuanhao Qu, Jian Ma, Eric Xing, George Church, Souradip Chakraborty
Abstract
The rapid adoption of generative AI (GenAI) in biotechnology offers immense potential but also raises serious safety concerns. AI models for protein engineering, genome editing, and molecular synthesis can be misused to enhance viral virulence, design toxins, or modify human embryos, while ethical and policy discussions lag behind technological advances. This Correspondence calls for proactive, built-in, AI-native safeguards within GenAI tools. With further research and development, emerging AI safety technologies (watermarking, alignment, anti-jailbreak methods, and unlearning) can complement governance policies and provide scalable biosecurity solutions. We also stress the global community's role in researching, developing, testing, and implementing these measures to ensure the responsible deployment of GenAI in biotechnology.
Keywords
generative AI, biosecurity
Citation
Wang, M., Zhang, Z., Bedi, A., Velasquez, A., Guerra, S., Lin-Gibson, S., Cong, L., Blewett, M., Qu, Y., Ma, J., Xing, E., Church, G. and Chakraborty, S. (2025), A Call for Built-In Biosecurity Safeguards for Generative AI Tools, Nature Biotechnology (Accessed December 10, 2025)
Issues
If you have any questions about this publication or are having problems accessing it, please contact [email protected].