Microsoft Corporation

10/02/2025 | Press release | Distributed by Public on 10/02/2025 13:25

Researchers find — and help fix — a hidden biosecurity threat

Proteins are the engines and building blocks of biology - powering how organisms adapt, think and function. AI is helping scientists design new protein structures from amino acid sequences, opening doors to new therapies and cures.

But with that power also comes serious responsibility: Many of these tools are open source and could be susceptible to misuse.

To understand the risk, Microsoft scientists showed how open-source AI protein design (AIPD) tools could be harnessed to generate thousands of synthetic versions of a specific toxin - altering its amino acid sequence while preserving its structure and potentially its function. The experiment, done by computer simulation, revealed that most of these redesigned toxins might evade screening systems used by DNA synthesis companies.

That discovery exposed a blind spot in biosecurity and ultimately led to the creation of a collaborative, cross-sector effort dedicated to making DNA screening systems more resilient to AI advances. Over the course of 10 months, the team worked discreetly and rapidly to address the risk, formulating and applying new biosecurity "red-teaming" processes to develop a "patch" that was distributed globally to DNA synthesis companies. Their peer-reviewed paper, published in Science on Oct. 2, details their initial findings and the subsequent actions that strengthened global biosecurity safeguards.

Eric Horvitz, chief scientific officer of Microsoft and project lead, explains more about what this all means:

In the simplest terms, what question did your study set out to answer, and what did you find?

I set out with Bruce Wittmann, a senior applied bioscientist on my team, to answer the question, "Could today's cutting-edge AI protein design tools be used to redesign toxic proteins to preserve their structure - and potentially their function - while evading detection by current screening tools?" The answer to that question was yes, they could.

The second question was, "Could we design methods and a systematic study that would enable us to work quickly and quietly with key stakeholders to update or patch those screening tools to make them more AI resilient?" Thanks to the study and efforts of dedicated collaborators, we can now say yes.

What does your research reveal about the limitations of current biosecurity systems, and how vulnerable are we today?

We found that screening software and processes were inadequate at detecting "paraphrased" versions of concerning protein sequences. AI-powered protein design is one of the most exciting, fast-paced areas of AI right now, but that speed also raises concerns about potential malevolent uses of AIPD tools. Following the launch of the Paraphrase Project, we believe that we've come quite far in characterizing and addressing the initial concerns in a relatively short period of time.

There are multiple ways in which AI could be misused to engineer biology - including areas beyond proteins. We expect these challenges to persist, so there will be a continuing need to identify and address emerging vulnerabilities. We hope our study provides guidance on methods and best practices that others can adapt or build on. This includes adapting methods from cybersecurity emergency response scenarios and developing "red-teaming" techniques for AI in biology - simulating both attacker and defender roles to iteratively test, evade and improve detection of AI-generated threats.

What surprised you the most about your findings?

There were several surprises along the way. It was surprising to see how effectively a cross-sector team could come together so quickly and collaborate so closely at speed, forming a cohesive group that met regularly for months. We recognized the risks, aligned on an approach, adapted to a series of findings and committed to the process and effort until we developed and distributed a fix.

We were also surprised - and inspired - by the power of widely available AIPD tools in the biological sciences, not just for predicting protein structure but for enabling custom protein design. AI protein design tools are making this work easier and more accessible. That accessibility lowers the barrier of expertise required, accelerating progress in biology and medicine - but may also increase the risk of misuse. I expect some of the biggest wins of AI will come in the life sciences and health, but our study highlights why we must stay proactive, diligent and creative in managing risks.

Can you explain why everyday people should care about AI being used in biology? What are the benefits, and what are the real-world risks?

I think it's important that everybody understands the power and promise of these AI tools, considering both their incredible potential to enable game-changing breakthroughs in biology and medicine and our collective responsibility to ensure that they benefit society rather than cause harm.

Being able to identify and design new protein structures opens pathways to understanding biology more deeply: how our cells operate at the foundations of health, wellness and disease - and how to develop new cures and therapies. Some of the earliest applications involved proteins added to laundry detergents, optimized to remove stains. More recently, progress has shifted toward sophisticated efforts to custom-build proteins for specific biological functions such as new antidotes for counteracting snake venom.

These paradigm-shifting advances will likely lead, in our lifetimes, to breakthroughs such as slowing or curing cancers, addressing immune diseases, improving therapies, unlocking biological mysteries and detecting and mitigating health threats before they spread. At the same time, these tools can be exploited in harmful ways. That's why it's critical to pair innovation with safeguards: proactive technical advances like those we focused on in our work, regulatory oversight and informed citizens.

What do you want the wider public to take away from your study? Should we be concerned, optimistic or both?

Almost all major scientific advances are "dual use" - they offer profound benefits but also carry risk. It's important to shield against the dangers while harnessing the benefits - especially in AI for biology and medicine, where the potential for progress in health is enormous.

Our study shows that it's possible to invest simultaneously in innovation and safeguards. By building guardrails, policies and technical defenses, we can help to ensure that people and society benefit from AI's promise while reducing the risk of harmful misuse. This dual approach doesn't just apply to biology - it's a framework for how humanity should invest in managing AI advances across disciplines and domains.

Lead image: Researchers discovered it was possible to preserve the active sites of the protein (illustrated by the letters K E S) while rewriting the amino acid sequence.
