NIST - National Institute of Standards and Technology

A Call for Built-In Biosecurity Safeguards for Generative AI Tools

Published
December 9, 2025

Author(s)

Mengdi Wang, Zaixi Zhang, Amrit Singh Bedi, Alvaro Velasquez, Stephanie Guerra, Sheng Lin-Gibson, Le Cong, Megan Blewett, Yuanhao Qu, Jian Ma, Eric Xing, George Church, Souradip Chakraborty

Abstract

The rapid adoption of generative AI (GenAI) in biotechnology offers immense potential but also raises serious safety concerns. AI models for protein engineering, genome editing, and molecular synthesis can be misused to enhance viral virulence, design toxins, or modify human embryos, while ethical and policy discussions lag behind technological advances. This Correspondence calls for proactive, built-in, AI-native safeguards within GenAI tools. With further research and development, emerging AI safety technologies such as watermarking, alignment, anti-jailbreak methods, and unlearning can complement governance policies and provide scalable biosecurity solutions. We also stress the global community's role in researching, developing, testing, and implementing these measures to ensure the responsible deployment of GenAI in biotechnology.
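The Correspondence itself does not give implementation details, but as one concrete illustration of the "watermarking" safeguard named above, the sketch below shows a green-list statistical watermark detector in the style published for language-model text, adapted here to amino-acid tokens. The key, green-list fraction, and threshold are hypothetical assumptions for illustration, not part of the publication.

    # Illustrative sketch only: a green-list watermark detector adapted to
    # amino-acid sequences. All names, keys, and parameters are hypothetical.
    import hashlib
    import math

    AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # 20 standard residues as the token vocabulary
    GREEN_FRACTION = 0.5                   # fraction of the vocabulary marked "green" per step
    SECRET_KEY = "example-watermark-key"   # hypothetical key shared by generator and detector

    def green_list(prev_token: str) -> set[str]:
        """Deterministically partition the vocabulary using the previous token and the key."""
        scored = []
        for aa in AMINO_ACIDS:
            digest = hashlib.sha256(f"{SECRET_KEY}|{prev_token}|{aa}".encode()).hexdigest()
            scored.append((int(digest, 16), aa))
        scored.sort()
        cutoff = int(len(AMINO_ACIDS) * GREEN_FRACTION)
        return {aa for _, aa in scored[:cutoff]}

    def watermark_z_score(sequence: str) -> float:
        """z-score for the count of 'green' residues; large values suggest a watermarked output."""
        hits, total = 0, 0
        for prev, curr in zip(sequence, sequence[1:]):
            total += 1
            if curr in green_list(prev):
                hits += 1
        if total == 0:
            return 0.0
        expected = GREEN_FRACTION * total
        variance = total * GREEN_FRACTION * (1 - GREEN_FRACTION)
        return (hits - expected) / math.sqrt(variance)

    if __name__ == "__main__":
        # Unwatermarked sequences should score near z ~ 0; sequences generated with the
        # matching green-list bias would score well above a chosen threshold (e.g. z > 4).
        print(watermark_z_score("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"))

A generator using the same key would bias sampling toward each step's green list, so the detector can flag model outputs without access to the model itself, which is what makes such safeguards "built-in" rather than purely policy-based.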
Citation
Nature Biotechnology
Pub Type
Journals

Keywords

generative AI, biosecurity

Citation

Wang, M., Zhang, Z., Bedi, A., Velasquez, A., Guerra, S., Lin-Gibson, S., Cong, L., Blewett, M., Qu, Y., Ma, J., Xing, E., Church, G. and Chakraborty, S. (2025), A Call for Built-In Biosecurity Safeguards for Generative AI Tools, Nature Biotechnology (Accessed December 10, 2025)

Issues

If you have any questions about this publication or are having problems accessing it, please contact [email protected].
