ISPI - Istituto per gli Studi di Politica Internazionale

February 5, 2026 | Press release | Archived content

State Hacking in the Age of Artificial Intelligence

Cyber operations have become an established tool of statecraft. Over the past two decades, states have used hacking to steal intellectual property, spy on foes and allies alike, and disrupt critical infrastructure. State-sponsored hacking has been a central element of interstate rivalry, enabling espionage and disruption below the threshold of armed conflict, and it has also become an important part of military operations, as seen in Ukraine. Such operations typically involve intelligence collection, signaling, and preparation for kinetic conflict. Their effectiveness rests on secrecy, plausible deniability, and the ability to deceive targets, and they require extensive human skill to identify vulnerabilities, write code, and respond to defensive measures.

Recent advances in artificial intelligence (AI) are fundamentally transforming the nature of these operations. AI alters the labor-intensive character of state hacking by automating many of its tasks. State actors increasingly use AI to enhance the scale, speed, and autonomy of their operations, making the already difficult task of attribution in cyberspace even harder. The emergence of AI-assisted hacking signals a transition with enormous implications for international stability, the rule of law, and governance.

State-Sponsored AI Attacks: The Anthropic Case

A large-scale cyberattack conducted by AI occurred in September 2025. The group GTG-1002, suspected of ties to the Chinese government, leveraged Anthropic's Claude Code in what is widely seen as the first espionage campaign carried out predominantly by AI. Rather than merely assisting human operators, the program independently executed core tasks, including system reconnaissance and data theft, against roughly 30 targets across the United States and allied countries. Through deliberate manipulation, Claude was transformed into a program capable of navigating internal networks and extracting information with limited direct human control.

This is striking in part because, until recently, there were few known cases of hacking groups using AI in their operations. One notable example is Exotic Lily, a criminal group that specialized in breaching networks for clients as an initial access broker. In 2021, it sent over 5,000 malicious emails a day to 650 targets around the world. It did so in part by using AI-generated images to create fake LinkedIn profiles, lending its emails an air of legitimacy, and by exploiting flaws in Microsoft software.

This time is different, because the GTG-1002 attack involves a state as a key perpetrator. AI-led cyberattacks had, in fact, been predicted some time ago: Bruce Schneier, for instance, foresaw the arrival of "AI hackers." Developments like this suggest that AI is no longer just a technology for cyber defense or penetration testing, but an operational actor that executes complex offensive tasks. As previous work has shown, when that happens, AI can have devastating consequences for international security, disrupting the traditional balance of state power and generating unintended consequences in cyberspace.

AI in State Hacking: Capabilities, Risks, and Open Questions

AI is transforming state hacking. It reduces the marginal cost of cyber operations while helping hackers replicate attacks across numerous targets at machine speed. In particular, AI systems capable of code generation and self-corrective learning allow state actors to conduct large-scale campaigns with great efficiency. This development has profound implications. If AI can identify vulnerabilities faster and more reliably than human operators, then states' ability to carry out cyber operations will depend more on access to advanced AI systems than on human expertise. For state actors, the implications of these capabilities are enormous; as Anthony King highlighted in his 2025 work, AI's data-processing capacity enables military and cyber commanders to shape the battlespace at unprecedented levels of detail and speed.

Government demand for these capabilities has fueled an arms race in which states seek advantage in cyber and military operations. The arms race is intense because states that fail to integrate AI into their postures risk falling behind, while those that do may gain strategic advantage. According to Alex Karp, Palantir CEO, states reluctant to embrace AI will be "punished," as their rival powers would use it to gain a stronger bargaining position.

The integration of AI into state hacking raises several questions that demand attention. The first concerns human autonomy and control. AI-driven cyber operations may rely on "agentic" systems capable of executing sequences of tasks without continuous human supervision. While this enhances efficiency, it also introduces risks of unintended behavior, escalation, and loss of control. Many existing safety systems are insufficient to prevent misuse by malicious actors. More broadly, there is a risk that AI systems will act in ways their designers did not fully anticipate.

A second question is how AI-driven state hacking challenges established notions of deterrence and escalation control in cyberspace. On the one hand, automated anomaly detection, predictive vulnerability analysis, and rapid response systems can enhance resilience against large-scale attacks. On the other hand, cyber deterrence relies on a combination of defensive denial and punishment, both of which depend on attribution and deliberate decision-making; yet defensive systems can be overwhelmed by automated cyber operations that generate large volumes of activity at machine speed.

There are other risks. For instance, a state may use AI to conduct a probing cyber operation; unless carefully calibrated, such an operation might be misinterpreted by defenders as preparation for an attack, increasing the risk of escalation. The speed and opacity of AI-driven operations may thus undermine crisis stability. As cyber operations become faster and more autonomous, the window for de-escalation narrows, increasing the likelihood of miscalculation.

Gaps in International Law

The rise of AI-enabled state hacking exposes gaps in existing international legal frameworks. Back in 2021, before the arrival of the "AI age," experts showed that international law was too weak to deter cyberattacks, especially those launched by nonstate actors. In the AI age, these defects have only grown, largely because the international system is not equipped to address AI's impact or to help states resolve the challenges it generates. The international community would do well to put new ideas into practice, including the inclusive and humane framework that Denise Garcia has proposed for governing the militarized use of AI.

Moving forward, there are many questions to confront. Who bears responsibility for AI systems that carry out cyberattacks? Who decides what thresholds should trigger lawful self-defense or retaliation? And how should the international community address the rapid changes occurring in the AI and cyber domains? Addressing these challenges will require cooperation and coordination among states, firms, and international organizations.

ISPI - Istituto per gli Studi di Politica Internazionale published this content on February 5, 2026, and is solely responsible for the information contained herein.