ISPI - Istituto per gli Studi di Politica Internazionale

02/05/2026 | Press release | Archived content

The role of cyber proxies in cyber threat intelligence

In the contemporary theatre of twenty-first-century geopolitics, a new class of actors has taken center stage in cyberspace: the so-called cyber-proxy groups. These entities move in the digital undergrowth that lies between states, criminal organizations, and ideological movements, exploiting the global interconnectedness of networks to project power far beyond their physical borders. In doing so, they contribute to transforming cyberspace into a strategic domain.

The notion of proxy warfare is as old as organized conflict itself, as states have long relied on intermediaries - be they militias, privateers, or paramilitary formations - to advance their interests while limiting the political and material costs of direct confrontation. In the cyber domain, which constitutes a relatively new but now consolidated battlefield, cyber-proxy groups represent the digital evolution of this practice. They have become a cornerstone of modern hybrid warfare doctrines, in which diplomatic pressure, information operations, economic coercion, and cyber acts are deliberately blended. Within this framework, the traditional boundary between war and peace, or between internal security and external aggression, is intentionally blurred, making it harder to identify when a situation has escalated into open conflict.

These actors are, therefore, not merely conventional criminal groups motivated solely by financial gain. Rather, they increasingly function as instruments of foreign policy and strategic influence, enabling states to conduct espionage, sabotage, disruption of critical infrastructure, and large-scale influence operations in the information space. Activities that, if carried out overtly by regular armed forces or official government agencies, might be interpreted as an armed attack or an act of war can, when delegated to cyber-proxies, be reframed as deniable incidents, criminal acts, or the work of loosely affiliated "patriotic hackers". In this way, states exploit the inherent opacity of cyberspace to test red lines, probe adversaries' defenses, and shape the strategic environment while attempting to avoid the legal and diplomatic consequences of direct attribution.

From a legal and political standpoint, cyber-proxy groups operate in a persistent gray zone. Their formal independence from state structures - whether real or merely asserted - provides sponsoring governments with the most valuable asset in an era of great-power competition: plausible deniability, which is the ability to credibly contest or obfuscate responsibility for a cyber operation. By outsourcing or informally tolerating hostile activities to such intermediaries, states seek to preserve strategic ambiguity, complicate attribution, and reduce the risk of large-scale escalation, all while continuing to exert meaningful pressure on adversaries through the cyber domain. As a result, cyber-proxy groups have become emblematic of the broader logic of contemporary hybrid conflict, where influence, disruption, and coercion increasingly replace open armed confrontation.

Definition of Cyber Proxy Groups

A Cyber Proxy is a non-state threat actor that conducts offensive cyber operations[1] on behalf of, or in support of, the strategic objectives of a state threat actor, without a formal, state-sanctioned military or intelligence affiliation.

The nature of "proxy" is defined by three fundamental technical and strategic attributes:

  1. Non-state affiliation: the actors are not official members of the armed forces or intelligence services of a state. They may include:
    • Patriotic hacker collectives: ideologically aligned groups that act out of a sense of nationalist duty;
    • Cybercriminal syndicates: criminal organizations primarily motivated by financial gain, which are exploited by a state for their technical capabilities, logistical infrastructure (e.g., money laundering networks), or access to specific black markets;
    • Hacktivist movements: politically motivated groups whose objectives may temporarily align with a state's foreign policy. The state may provide them with intelligence or resources to strike a common adversary;
    • Private sector security firms: "cyber-mercenary" companies that develop and sell sophisticated offensive tools and "hack-for-hire" services to governmental clients.
  2. Asymmetric, objective-driven relationship: the state (the principal) and the proxy (the agent) enter into a transactional relationship involving an exchange of benefits:
    • The state provides funding, intelligence (target lists, vulnerabilities), sophisticated tooling (zero-day exploits, customized malware), secure command and control infrastructure (C2), and protection from domestic or international law enforcement;
    • The proxy provides technical expertise, pre-existing operational infrastructure (botnets, network access), a layer of separation between the operation and the state, and an established criminal or "underground" persona[2] that can obfuscate attribution.
  3. Plausible deniability[3]: the primary purpose of a state using a proxy is to create a gap in attribution, whereby it can deny any knowledge and responsibility for an attack. This is achieved through:
    • Separation of Infrastructure: the proxy uses its own or third-party infrastructure, not government-owned servers;
    • Operational Secrecy: the relationship between the two parties is managed through intermediaries and segregated communication channels;
    • False Flag Operations: the proxy can deliberately use tools, techniques, or languages associated with other groups (e.g., a Russian group using tools and comments in Chinese language) to further confuse analysts. Thus, the relationship between a state and the proxy group is deliberately ambiguous to grant the state a degree of plausible deniability.

Cyber proxy groups can be considered the digital equivalent of the "little green men" - unmarked military personnel deployed in hybrid operations - in that both represent threat actors whose state affiliation is deliberately ambiguous, allowing the sponsoring state to achieve strategic objectives while operating below the threshold that would typically justify a conventional military response or a unified international sanctions regime. One of the clearest laboratories for the integration of cyber proxies into a hybrid warfare strategy is the current war in Ukraine, where cyber operations preceded, accompanied, and followed kinetic military maneuvers on the ground.

In the complex ecosystem of modern cyber threat intelligence, cyber proxy groups represent one of the most sophisticated and challenging elements for analysis and attribution.

The need for states to conduct offensive operations in cyberspace while maintaining a degree of separation that permits plausible deniability has created a situation where the line between state, parastate, and criminal actors has progressively blurred, requiring analysts to develop increasingly sophisticated methodologies to understand the actual dynamics at play.

Attribution Activities of Cyber Proxy Groups: Technical Attribution vs Political Attribution

The distinction between technical attribution and political attribution represents one of the fundamental concepts for understanding the complexity of analyzing cyber proxy groups. These two approaches, while interconnected, operate on different levels and require distinct methodologies, skills, and considerations:

  1. Technical attribution focuses on the forensic analysis of technical indicators left during a cyber operation. This process includes detailed examination of the infrastructure used, analysis of the malware employed, study of the techniques, tactics, and procedures (TTP) adopted, and identification of recurring behavioral patterns. Technical analysts work with concrete data such as Indicators of Compromise (IoC) and analysis of peculiarities in attack implementation, aiming to construct a technical profile of the attacker based on verifiable and reproducible digital evidence. Technical attribution is based on different levels of confidence. At the lowest level, analysts can identify similarities between different campaigns based on infrastructure overlap or code reuse. At intermediate levels, distinctive behavioral patterns may emerge, such as activity times that correlate with specific time zones or the recurring use of particular evasion techniques. At the highest level, elements such as code comments in specific languages, cultural references embedded in malware, or characteristic programming errors can provide more direct clues about the attackers' origin.
  2. Political attribution, on the other hand, operates on a strategic and geopolitical plane. This process considers the broader context in which attacks occur, including strategic motivations, geopolitical objectives, timing relative to international events, and alignment with specific national interests. Political attribution requires a deep understanding of geopolitical dynamics, the cyber capabilities of states, the doctrines they have adopted in cyber offensive operations, and finally, the organizational structures of intelligence services. Analysts working on political attribution must consider elements such as who benefits from the attack, what information was stolen and its strategic value, how the operation fits into broader campaigns of influence or espionage, and which states have the technical capabilities and motivations to conduct such operations. This type of attribution often requires intelligence from multiple sources, including HUMINT, SIGINT, and open-source analysis, to construct a complete picture.
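The lowest tiers of technical attribution described above can be sketched in a few lines of code. The example below computes infrastructure overlap between two campaigns' IoC sets and tests how well activity timestamps fit a hypothesised operator time zone. All names and data here are invented for illustration; real pivoting relies on far richer telemetry, and a high "workday fit" is only a weak behavioural clue, never proof of origin.

```python
from datetime import datetime, timezone

def ioc_overlap(campaign_a: set, campaign_b: set) -> float:
    """Jaccard similarity between two campaigns' IoC sets
    (IP addresses, domains, file hashes)."""
    if not campaign_a or not campaign_b:
        return 0.0
    return len(campaign_a & campaign_b) / len(campaign_a | campaign_b)

def workday_fit(timestamps_utc, utc_offset_hours: int) -> float:
    """Fraction of events falling inside a 09:00-18:00 local
    workday under a hypothesised UTC offset."""
    local_hours = [(t.hour + utc_offset_hours) % 24 for t in timestamps_utc]
    return sum(1 for h in local_hours if 9 <= h < 18) / len(local_hours)

# Hypothetical IoC sets from two observed campaigns
campaign_1 = {"203.0.113.7", "update-check.example", "d41d8cd9"}
campaign_2 = {"203.0.113.7", "cdn-sync.example", "d41d8cd9"}
overlap = ioc_overlap(campaign_1, campaign_2)  # 2 shared of 4 total -> 0.5
```

In practice an analyst would sweep `workday_fit` across all plausible UTC offsets and treat the best-fitting offset as one datapoint among many, to be weighed against the possibility of deliberately shifted operating hours.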

The tension between these two approaches becomes particularly evident when analyzing cyber proxy groups: technical attribution might indicate the use of infrastructure and tools that cannot be tied to specific threat actors, while political attribution might reveal a choice of targets and timing that suggests state strategic direction. This discrepancy is not accidental; it is often the deliberate result of obfuscation strategies.

Figure 1 - The different levels of attribution

Source: author's elaboration

Main Complexities in Attribution of Attacks by Cyber Proxy Groups

The attribution of attacks to cyber proxy groups represents one of the most complex challenges in modern cyber threat intelligence. This complexity derives from the intrinsically stratified and deliberately obscured nature of these operations, specifically designed to resist analysis and identification. In particular, four different levels of complexity in attribution can be distinguished:

  1. Fundamental Technical Complexities:
    • Advanced anti-forensics constitutes the first significant barrier. Cyber-proxy groups implement sophisticated techniques designed to eliminate or falsify digital evidence. This includes memory and disk wiping, timestomping to alter temporal metadata, selective log and artifact deletion, and use of in-memory techniques that leave no persistent traces;
    • The growing complexity of infrastructure poses an additional, major challenge. Modern cyber proxy operations use globally distributed infrastructure that exploits multiple layers of obfuscation. Analysts find themselves having to trace connections through concatenated VPNs, cascading SOCKS proxies, compromised Tor exit nodes, and infrastructure-as-a-service acquired with false identities. In practice, at each step, complexity and the potential for loss of visibility increase;
    • Tool Reuse and Sharing introduce substantial ambiguity into malware-based attribution, since any given sample observed in an attack may be reused by unrelated actors. In some cases, the malware may have been stolen or leaked from other groups and repurposed to support false flag operations; in others, threat actors may rely on commercial frameworks acquired through front companies, or on modular components assembled from a variety of publicly or privately available sources. This "democratization" of offensive tools means that the presence of a particular malware no longer necessarily indicates a particular actor.
  2. Complexities in Intelligence and Analysis:
    • The paradox of False Flag operations represents a fundamental epistemological challenge. When attackers deliberately insert misleading indicators, analysts face a dilemma: every piece of evidence could be genuine or deliberately constructed. In this context of analysis, it happens that:
      • Indicators that appear "too obvious" themselves become suspicious, leveraging the intuition that if something is too easy to read, it is likely deceptive, and therefore obvious clues can be safely left in place because analysts will tend to discard them. This logic produces a spiral of second-guessing[4] that paralyzes analysis;
      • Other sophistications used in false flags are the insertion of credible but misleading linguistic errors, the use of infrastructure in specific countries to suggest origin, operational patterns that mimic known groups, and controlled leaks of "evidence" that support false narratives.
    • The Intelligence Gap constitutes a structural barrier to analysis conducted primarily by private companies. Indeed, definitive attribution often requires HUMINT-type intelligence on organizational structures, SIGINT on internal communications, economic information on financial flows, etc., which is often classified, compartmentalized, or simply not available. Thus, private sector analysts operate with limited visibility. In reality, even government agencies face significant gaps, especially for operations conducted by non-allied states;
    • The Temporal Analysis Challenge is another aspect of complexity in analysis that emerges from the prolonged and fragmented nature of modern campaigns. Cyber proxy groups can:
      • Operate with deliberate pauses of months or years between phases;
      • Use infrastructure with extremely brief lifecycles;
      • Coordinate activities across time zones to obscure origin;
      • Modify TTPs between campaigns to avoid correlation.
    • This anti-analysis approach exploits the general difficulty any analyst team faces in maintaining focus on specific events over long periods.
  3. Organizational and Geopolitical Complexities:
    • Political pressure and bias profoundly influence the attribution process. Analysts operate in an environment where:
      • Political pressures can drive premature or predetermined attributions, and national biases influence the interpretation of evidence.
      • Competing narratives[5] from different states create an informational "fog of war". This manipulation of the analytical process is particularly problematic when attribution has significant geopolitical consequences.
    • The fragmentation of the security community severely hinders the effective aggregation of intelligence. Different stakeholders often hold only partial insights into the same phenomenon, yet commercial interests, legal constraints, and a lack of mutual trust prevent meaningful information exchange. As a result, many information-sharing initiatives stagnate or fail outright due to misaligned incentives and pervasive concerns over legal liability.
    • Attribution shopping is an emerging problem in which victims or other interested parties actively seek out the attribution narrative that best aligns with their own interests. This dynamic can manifest through:
      • Selecting vendors or analysts who are known to hold a particular bias;
      • Cherry picking evidence that supports a preferred narrative while downplaying or ignoring conflicting data;
      • Commissioning multiple, parallel analyses until a desired conclusion is produced;
      • Selectively amplifying and publicizing attributions that are favorable, while sidelining less convenient assessments.
  4. Methodological and Analytical Complexities:
    • The issue of assigning confidence levels remains a persistent challenge in any attribution effort. Analysts must carefully balance:
      • The pressure to deliver definitive assessments despite inherent uncertainty, and the need to communicate these limitations clearly to non-technical decision-makers;
      • The requirement to produce timely, actionable intelligence, which can come at the expense of analytical rigor, for instance, having to run Analysis of Competing Hypotheses[6] (ACH) with only fragmentary or limited evidence.
    • Standards for attribution and its methodologies remain fragmented, resting only on common high-level frameworks (e.g., the Attribution Triangle Framework). There is no consensus on:
      • What types of evidence are sufficient for different levels of attribution;
      • How to weigh technical evidence against contextual evidence;
      • Mechanisms for peer review and validation.

As a result of these misalignments, different organizations can reach different conclusions even when working from the same data.

    • The problem of "negative proof" is especially acute. Demonstrating that a given group was not responsible for an operation is often practically impossible, and this inherent impossibility is routinely exploited by accused states, which can always argue that the available digital evidence was deliberately fabricated or manipulated to frame them.
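The Analysis of Competing Hypotheses mentioned above can be sketched minimally: each piece of evidence is rated against each hypothesis as consistent, inconsistent, or not applicable, and the method favours the hypothesis with the fewest inconsistencies rather than the one with the most confirmations. The matrix below is purely illustrative, not real attribution data.

```python
# Minimal ACH sketch: rate evidence against each hypothesis and
# rank hypotheses by ascending inconsistency count.
CONSISTENT, INCONSISTENT, NA = "C", "I", "N/A"

def ach_rank(matrix: dict) -> list:
    """matrix: hypothesis -> {evidence_id: rating}.
    Returns (hypothesis, inconsistency_count) pairs, least
    inconsistent first -- ACH's tentative 'survivor'."""
    scores = {
        h: sum(1 for r in ratings.values() if r == INCONSISTENT)
        for h, ratings in matrix.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1])

# Hypothetical, illustrative matrix (invented evidence items)
matrix = {
    "H1: criminal group acting alone": {
        "E1: targets align with state interests": INCONSISTENT,
        "E2: commodity malware used": CONSISTENT,
        "E3: no monetisation observed": INCONSISTENT,
    },
    "H2: state-directed proxy": {
        "E1: targets align with state interests": CONSISTENT,
        "E2: commodity malware used": NA,
        "E3: no monetisation observed": CONSISTENT,
    },
}
ranking = ach_rank(matrix)  # H2 survives with 0 inconsistencies
```

With fragmentary evidence, as noted above, the ranking is fragile: a single planted or misread indicator can flip it, which is precisely why confidence levels must travel with the conclusion.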

All the challenges that have just been discussed carry highly significant repercussions on operational and response activities. As the complexity of cyber proxy group attribution activities continues to grow with technological and geopolitical evolution, the future success of these activities will depend on the security community's capacity to adapt rapidly, collaborate effectively, and accept that in the era of cyber proxies, ambiguity is a feature to be managed.

State Management of Groups: Nation-state vs Cyber Proxy Group

The fundamental distinction between directly managed nation-state groups and cyber proxies lies in the degree of control, accountability, and formal separation from official state structures.

Directly managed nation-state groups: they operate as formally integrated units within military or intelligence structures. They receive direct input, operate with allocated state budgets, follow formal chains of command, and their members are often government employees with appropriate security clearances. Their operations are planned and approved through formal bureaucratic processes, with direct oversight and control by state authorities. Examples include dedicated military cyber units or specialized divisions within intelligence agencies. The management of these groups follows established military or intelligence protocols. Operations are planned with clear strategic objectives, defined rules of engagement (RoE), and deconfliction[7] mechanisms to prevent interference with other state operations. Personnel receive formal training, have access to advanced technological resources developed internally, and operate from secure government facilities.

Figure 2 - Deconfliction layers

Source: author's elaboration

Cyber proxy groups: they exist in a liminal space characterized by formal separation but informal control. These groups can take various forms, such as private contractors working on commission, criminal groups receiving protection in exchange for services, ideologically aligned hacktivists receiving indirect support, or front commercial entities masking intelligence operations. Coordination occurs through multiple layers of intermediation, such as the commonly used "digital dead drops"[8], which allow information exchange without direct contact, and coded communication protocols.

Financial control represents a critical but vulnerable vector. Cyber proxy groups use cryptocurrencies with mixing services to obscure financial flows, shell companies in opaque jurisdictions to channel funds, and commodity trading or online gambling as money laundering mechanisms. Some more sophisticated groups have also developed internal economies based on criminal services that generate self-financing, thereby reducing dependence on traceable state funds.

The management of knowledge and technical capabilities requires balancing effective sharing with compartmentalization. Tools and exploits are distributed through encrypted repositories with granular access, controlled underground marketplaces that mask state transfers as criminal transactions, and deliberate "tool leaks" that permit distribution while maintaining deniability. Training occurs through anonymous online platforms, seemingly public technical documents with steganographic messages, and remote mentoring through "digital personas"[9].

In the context of cyber threat intelligence, a cutout is a person, entity, system, or infrastructural resource that serves as an intermediary between the primary actor (e.g., an APT group) and the target or other operational nodes, with the objective of protecting the identity or direct responsibility of the primary actor. In the technical-operational context, a cutout is often an element of technical or logistical infrastructure: VPS acquired from third parties or via stolen credit cards and used for C2 relay, compromised bots used to proxy traffic to the real C2 infrastructure, Tor exit nodes, no-log VPNs or reverse proxies, or CDN abuse (e.g., via Cloudflare, Akamai) to shield the real backend.

How states use cyber proxies

To illustrate the layered relationships among a state, its nation-state group, a cyber proxy group, and a cutout intermediary, consider a simplified, hypothetical scenario drawn from documented patterns in cyber operations (such as those observed in state-sponsored espionage campaigns).

In this model:

  • The state (e.g., a major power) sets strategic objectives, such as intelligence gathering on foreign infrastructure, but avoids direct involvement to maintain plausible deniability.
  • The nation-state group (e.g., an elite APT unit within military intelligence) develops tools, tactics, and initial targeting data, then delegates execution to external actors to obscure direct ties.
  • The cyber proxy group (e.g., a loosely affiliated hacker collective with ideological or financial motivations) receives indirect guidance via the cutout, conducting the actual offensive operations like network infiltration or data exfiltration using adapted tools from the nation-state group.
  • The Cutout (e.g., a third-party service provider, shell company, or encrypted communication broker) acts as the intermediary, handling resource transfers (e.g., funding or malware kits) and operational instructions without explicit state affiliation, further complicating attribution chains.
  • The target is the final objective of the cyber operation.
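The layered structure just described can be expressed as a minimal data model showing where external visibility typically ends. All actor names below are invented placeholders; the only point being illustrated is that forensic evidence gathered from the victim's side usually reaches the proxy and perhaps the cutout, while the sponsoring layers behind them remain opaque.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    role: str                 # "state", "nation-state group", "cutout", "proxy", "target"
    visible_to_victim: bool   # can victim-side forensics evidence this layer?

# Hypothetical chain mirroring the bullet points above
CHAIN = [
    Node("State X", "state", False),
    Node("APT unit", "nation-state group", False),
    Node("Shell-company broker", "cutout", True),
    Node("Hacker collective", "proxy", True),
    Node("Targeted organisation", "target", True),
]

def observable_chain(chain: list) -> list:
    """Walk back from the target and collect the layers an
    external analyst can actually evidence, stopping at the
    first opaque layer."""
    seen = []
    for node in reversed(chain):
        if not node.visible_to_victim:
            break
        seen.append(node.name)
    return seen
```

Here `observable_chain(CHAIN)` stops at the cutout: the victim can document the proxy and the broker, but the attribution chain breaks before reaching the APT unit or the state, which is exactly the gap plausible deniability is designed to create.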

Figure 3 - Example of relationships among a state, its nation-state group, a cyber proxy group, and a cutout intermediary

Source: author's elaboration

This structure exemplifies how states project influence through deniable layers, blurring responsibility while achieving geopolitical aims. Naturally, the diagram shown in Figure 3 illustrates only a simplified scenario; in practice, far more intricate configurations can arise, in which the relationships between the same actors and the operational infrastructures they employ become significantly more complex.

The following figure shows a scenario where the "deniability" of the sponsoring state is achieved through different services and technologies:

Figure 4 - State-sponsored cyber proxy ecosystem and operational relationships diagram

Source: author's elaboration

Figure 4 provides a concrete illustration of the main dynamics within a typical state cyber proxy ecosystem. In this example, the political and military leadership sit at the apex of the structure, defining the overarching national strategic objectives. These objectives are then translated into operational priorities by the primary intelligence services, which both retain their own organic cyber capabilities and cultivate relationships with proxy groups. The lead intelligence service coordinates these activities through a dedicated center that functions as a central hub for planning, deconfliction, and the allocation of resources across operations.

The intermediate layer is crucial for maintaining plausible separation. Government contractors provide advanced technical capabilities while maintaining formal separation from state structures. Seemingly independent think tanks and research institutes develop offensive capabilities under the cover of security research. Universities with specialized programs serve as recruitment pipelines, identifying and cultivating talent that can be directed toward proxy groups. Handlers operate as cutouts, maintaining separation between state control and proxy operations.

The supporting infrastructure completes and reinforces the entire ecosystem, enabling it to operate at scale and with resilience. Bulletproof hosting providers underpin the durability of offensive infrastructure by offering hosting services that remain resistant to takedown efforts and law-enforcement pressure. Money laundering networks process and obfuscate payments, preserving financial anonymity for the actors involved. Recruitment channels continuously identify, vet, and attract new talent into the ecosystem. Dedicated development teams design, build, and maintain offensive tools, which are then circulated, adapted, and reused across the broader ecosystem.

Figure 5 - Real-world example of the Dual-Purpose Model use of the APT41 threat actor

Source: author's elaboration

Figure 5 represents a real-world example: the state-sponsored threat actor APT41 (also known as Double Dragon, Barium, or the Winnti umbrella), a cyber proxy group sponsored by the Chinese state. It operates as a semi-autonomous entity, carrying out both:

  • Cyber espionage and sabotage operations on behalf of the Ministry of State Security (MSS);
  • Criminal activities for personal profit, including intrusions into online gaming platforms, theft of virtual currency, and ransomware deployment.

This dual role - state-directed and criminal - is the defining characteristic of a cyber proxy.

Russian model for managing cyber proxy groups

Each of the major nation-states (Russia, China, North Korea, and Iran) - whose threat actors are continuously monitored - has developed a distinct model for managing proxy groups.

A particularly relevant case study is the model adopted by the Russian Federation, which has developed one of the most sophisticated and extensive cyber proxy ecosystems in the global threat landscape. This ecosystem is characterized by a complex web of relationships among state intelligence services, military units, cybercriminal organizations, and private contractors. It represents a strategic evolution from traditional state-sponsored cyber operations toward a more "nuanced approach", leveraging the capabilities and resources of non-state actors while maintaining strategic control and plausible deniability.

The organizational structure of Russian cyber proxy operations is built around three primary intelligence and military entities:

  • the Main Intelligence Directorate (GRU);
  • the Foreign Intelligence Service (SVR);
  • the Federal Security Service (FSB).

Each of these organizations has developed distinct approaches to proxy relationships, reflecting their different operational mandates, target sets, and strategic objectives.

  • The GRU - a military intelligence service - has primarily focused on cultivating relationships with cybercriminal groups and private contractors for operations targeting military and defense-related objectives.
  • The SVR - responsible for foreign intelligence collection - has emphasized long-term penetration operations conducted via technically advanced proxy groups.
  • The FSB - with responsibilities for domestic security - has developed proxy relationships mainly for operations targeting domestic dissidents and foreign entities operating within Russia's sphere of influence.

The relationship between Russian state actors and cybercriminal groups has evolved significantly over the past decade, shifting from what researchers have termed "passive tolerance" to "active management." This evolution has taken the form of the so-called "Dark Covenant model," a sophisticated proxy-management approach that grants cybercriminal groups "controlled impunity" in exchange for operational support and alignment with state objectives. Under the Dark Covenant model, cybercriminal groups are allowed to conduct financially motivated operations with minimal interference from Russian law enforcement, provided they refrain from targeting Russian entities and remain available for state-directed operations when requested. This arrangement offers several strategic advantages for the Russian state, including access to advanced technical capabilities, established operational infrastructure, and the ability to conduct operations with enhanced plausible deniability. For the cybercriminal groups, in turn, the agreement provides protection from law enforcement action as well as access to intelligence and resources that enhance their operational capabilities.

The implementation of the Dark Covenant model involves sophisticated coordination mechanisms between state actors and cybercriminal groups. Intelligence services provide targeting information, technical resources, and operational protection to cybercriminal groups, in exchange for conducting specific operations or providing access to compromised networks. This coordination is typically mediated through intermediary organizations, which provide operational security and compartmentalization while maintaining strategic control over high-level objectives.

Plausible deniability: the strategic cornerstone of proxy operations

Plausible deniability in the cyber domain rests on the inherently anonymous and borderless nature of cyberspace. Unlike traditional kinetic operations, where responsibility can often be inferred from tangible physical evidence, in cyberspace achieving definitive attribution may be technically unattainable or may require levels of capability, resources, and access to sensitive information that lie beyond what most victim organizations or states can realistically muster.

For states, the strategy of plausible deniability offers multiple advantages: it avoids direct diplomatic consequences, preserves the stability of bilateral relations even during offensive operations, allows adversary defenses to be tested without formal escalation, and offers operational flexibility without the constraints that international law places on traditional military operations.

This framework, however, requires meticulous planning and significant investment. It is not simply a matter of denying involvement after the fact, but of structuring operations from the start so that definitive attribution is technically and politically problematic. This compels the creation of multiple layers of obfuscation, compartmentalization of information, and the development of credible alternative narratives.

In summary, the aspects that states exploit to achieve plausible deniability are the following:

  • Technical obfuscation represents the first and most fundamental layer of plausible deniability. This dimension comprises all techniques used to hide, obscure, or falsify technical indicators that could lead to attribution.
    • Infrastructure obfuscation constitutes the basis of technical obfuscation. Cyber proxy groups use complex and stratified infrastructure designed to resist forensic analysis. This includes the use of concatenated commercial VPNs to mask traffic origin, compromise of legitimate servers in non-collaborative jurisdictions to host command and control, use of bulletproof hosting services that resist law enforcement requests, and frequent rotation of infrastructure to limit the observation window. Advanced techniques include the use of fast-flux DNS to rapidly change IP addresses associated with malicious domains, domain fronting to hide C2 traffic behind legitimate CDN services, and abuse of legitimate cloud services to blend in with normal traffic. More sophisticated groups implement "living off the land" strategies, using exclusively legitimate tools and services to reduce footprint;
    • TTP obfuscation concerns the deliberate manipulation of tactics, techniques, and procedures (TTPs) to confuse attribution. Cyber proxy groups adopt various strategies, such as deliberate emulation of known groups' TTPs for false flag operations, randomization of non-critical elements of operations to avoid recognizable patterns, use of publicly available tools or tools stolen from other groups to obscure origin, and modification of timing and operational patterns to avoid correlations.
      • A particularly sophisticated aspect is "tradecraft pollution", the deliberate introduction of false technical indicators designed to lead analysts toward erroneous attributions. This can include inserting foreign-language strings into malware, using falsified compilation timestamps that suggest different time zones, or implementing techniques characteristic of other known groups;
      • Malware management itself becomes an exercise in balancing operational effectiveness and deniability. Groups use multiple packers and crypters to obscure code, implement sophisticated anti-analysis mechanisms, remove or falsify metadata, and use supply chain compromise to distribute malware through apparently legitimate channels.
  • Operational compartmentalization represents the second fundamental pillar of plausible deniability. This organizational principle, derived from best practices in traditional intelligence, assumes particularly elaborate and technologically advanced forms in the cyber context.

Operational compartmentalization manifests at multiple organizational and technical levels. At the strategic level, operations are segmented into independent cells with limited knowledge of the overall picture. Each cell knows only its own part of the mission, without visibility into ultimate objectives or principals. This "need-to-know" model limits the risk that compromise of a single element can expose the entire operation. At the tactical level, compartmentalization extends to technical expertise. Separate teams manage malware development, infrastructure acquisition, initial access operations, lateral movement, and data exfiltration.

This separation not only improves operational efficiency through specialization but also creates natural barriers to complete reconstruction of operations by analysts. Compartmentalization is also reflected in the technical architecture of operations. Cyber proxy groups implement rigorous infrastructure segmentation, with dedicated servers for different attack phases and absence of direct connections between critical components.

They use separate communication channels for command and control, data exfiltration, and operational coordination. They implement "kill switch" mechanisms that allow selective deactivation of parts of the operation without compromising other components. A crucial aspect is temporal compartmentalization. Operations are structured in discrete phases with deliberate pauses between phases to complicate correlation. Moreover, different teams can operate in separate time windows, creating the appearance of multiple uncoordinated actors. This temporal fragmentation is particularly effective against detection systems that rely on patterns of continuous activity.

The management of operational identities represents a critical element of compartmentalization. Operators use separate digital personas for different operations, with credible backstories and digital footprints constructed over time. These identities are maintained rigorously separate, with dedicated devices, separate Internet connections, and operational protocols that prevent cross-contamination.

  • The third pillar of plausible deniability resides in information operations and disinformation campaigns that accompany and follow cyber operations. This dimension perhaps represents the most significant evolution in cyber proxy strategies in recent years, transforming attribution from a technical problem into a battle for narrative control. Information operations in the context of cyber proxies operate on different timelines and through different vectors:
    • Pre-operation, narratives are carefully engineered and cultivated to lay the groundwork for future deniability. This involves fabricating false personas of ostensibly independent or criminal hackers, constructing credible backstories for proxy groups, and seeding disinformation designed to complicate subsequent attribution efforts. Sophisticated actors also maintain an active and sustained presence on underground forums, social media, and various communication platforms in order to build credibility, embed themselves within relevant ecosystems, and cultivate relationships that can be leveraged in later influence operations;
    • During operations, disinformation campaigns pursue several complementary objectives. False or contradictory claims issued by multiple actors are used to muddy the waters of attribution. Carefully curated leaks of accurate but contextually misleading technical details steer investigators toward dead ends. Coordinated influence operations on social media amplify narratives that reinforce deniability. Meanwhile, the manipulation of traditional and online media through seemingly independent outlets helps create echo chambers that recycle and solidify these preferred narratives over time;
    • Post-operation, information operations intensify to control public narrative. This includes coordinated campaigns to discredit accurate attributions, promotion of alternative theories through "useful idiots" and influencers, creation of "evidence" supporting alternative narratives, and exploitation of cognitive biases and political polarization to fragment consensus on attribution.
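The infrastructure-obfuscation layer described under the first pillar has defensive counterparts in CTI practice. As a minimal sketch, fast-flux behavior can be surfaced from passive-DNS-style records by flagging domains that cycle through many distinct IP addresses with short TTLs; the record format, domain names, and thresholds below are illustrative assumptions, not vetted detection rules.

```python
# Hedged sketch: flagging possible fast-flux domains from passive-DNS-style
# records. Record format, domains, and thresholds are illustrative assumptions.
from collections import defaultdict

# (domain, resolved_ip, ttl_seconds) tuples, e.g. from a passive DNS feed
records = [
    ("update-check.example", "203.0.113.5", 120),
    ("update-check.example", "198.51.100.7", 90),
    ("update-check.example", "192.0.2.44", 60),
    ("static-cdn.example", "203.0.113.9", 86400),
]

def flag_fast_flux(records, min_distinct_ips=3, max_avg_ttl=300):
    """Flag domains with many distinct IPs and short TTLs (fast-flux traits)."""
    ips = defaultdict(set)
    ttls = defaultdict(list)
    for domain, ip, ttl in records:
        ips[domain].add(ip)
        ttls[domain].append(ttl)
    flagged = []
    for domain in ips:
        avg_ttl = sum(ttls[domain]) / len(ttls[domain])
        if len(ips[domain]) >= min_distinct_ips and avg_ttl <= max_avg_ttl:
            flagged.append(domain)
    return flagged

print(flag_fast_flux(records))  # ['update-check.example']
```

A real detection would typically also weigh the ASN and geographic diversity of the returned addresses, which fast-flux networks tend to maximize.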

The integration of cyber operations with information operations generates powerful reinforcing effects, as technical intrusions are deliberately paired with narrative manipulation. Stolen data is selectively leaked to sustain pre-defined storylines, while subtle alterations to exfiltrated documents inject hard-to-detect disinformation into the public domain. The timing of these releases is carefully synchronized with key geopolitical events to maximize psychological impact and sow confusion among target audiences, and technical false flags are amplified through coordinated influence campaigns that further distort attribution and perception.
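The effect of tradecraft pollution on naive attribution can be illustrated with a toy scoring sketch. The forgeability weights below are invented for illustration; they encode only the qualitative point made above, that easily planted indicators (language strings, timestamps) should count for less than hard-to-fake ones (infrastructure overlaps, operator errors).

```python
# Hedged sketch: why "tradecraft pollution" degrades naive attribution.
# Forgeability weights are illustrative assumptions, not empirical values:
# indicators that are easy to plant should contribute less to a verdict.
FORGEABILITY = {            # 0.0 = hard to fake, 1.0 = trivially planted
    "language_strings": 0.9,
    "compile_timestamps": 0.8,
    "reused_malware": 0.6,
    "c2_infrastructure": 0.3,
    "operator_errors": 0.1,
}

def attribution_score(indicators):
    """Sum indicator strengths, discounted by how easily each can be forged."""
    return sum((1.0 - FORGEABILITY[kind]) * strength
               for kind, strength in indicators)

# Many easily planted indicators pointing at "Group A"...
polluted = [("language_strings", 1.0), ("compile_timestamps", 1.0)]
# ...versus fewer but harder-to-fake indicators pointing at "Group B".
robust = [("c2_infrastructure", 1.0), ("operator_errors", 1.0)]

print(round(attribution_score(polluted), 2))  # 0.3
print(round(attribution_score(robust), 2))    # 1.6
```

The design choice mirrors analytic practice: two plantable artifacts together carry less evidential weight than a single operator error, which is exactly the asymmetry false-flag operators try to exploit.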

Cyber proxy classification: the Maurer model

Tim Maurer's classification model represents the first systematic and empirically grounded framework for categorizing relationships between states and non-state cyber actors. The model is based on two orthogonal dimensions:

1) The state-proxy relationship dimension, which identifies three distinct models along a spectrum of state control:

  • Delegation: formal contractual relationships where the state exercises direct control through specific tasking, ongoing supervision, and financial compensation. The proxy is effectively an extension of the state apparatus.
  • Orchestration: semi-formal relationships where the state provides general strategic direction, resources, or support, but the proxy maintains significant operational autonomy. The link is often based on ideological alignment.
  • Sanctioning: the state is aware of the proxy's activities, which benefit state interests, but deliberately refrains from intervening, providing passive support through inaction (sanctuary, non-prosecution).

2) The technical-operational sophistication dimension, which classifies actors along a six-tier scale from Tier I to VI:

  • Tier VI: Cyber superpowers (e.g., NSA, GCHQ, Unit 8200, China's MSS, Russia's FSB/GRU)
  • Tier V: Major state APT units and elite contractors (APT28, APT29, APT41, etc.)
  • Tier IV: Sophisticated proxies, professional contractors, academic research nexuses, advanced cybercriminals
  • Tier III: Organized cybercrime and hacktivist groups
  • Tier II: Basic cybercriminals
  • Tier I: Script kiddies (minimal capabilities, no strategic impact)

Figure 6 - Proxy Classification using Maurer framework

Source: author's elaboration

Within this classification, a fundamental trade-off becomes evident: control and deniability are inversely correlated. This is reflected in how the state operates:

  • In Delegation, the state maintains high control but low deniability;
  • In Orchestration, it maintains medium control and medium deniability;
  • In Sanctioning, it has low control but high deniability.

In this context, the concept of a "Sweet Spot" denotes the combination of characteristics that maximizes operational utility for the state sponsor by balancing the control-deniability trade-off. The Sweet Spot corresponds to the Tier IV × Orchestration intersection.
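The trade-off and the Sweet Spot can be expressed in a toy model. The ordinal control/deniability levels and the capability term below are illustrative assumptions, chosen only to reproduce the qualitative result stated above, namely that Tier IV x Orchestration comes out on top.

```python
# Hedged sketch of the control-deniability trade-off in Maurer's framework.
# Numeric levels are illustrative ordinal values, not taken from the source model.
RELATIONSHIPS = {
    # relationship: (control, deniability) on an illustrative 1-3 scale
    "delegation":    (3, 1),   # high control, low deniability
    "orchestration": (2, 2),   # medium control, medium deniability
    "sanctioning":   (1, 3),   # low control, high deniability
}

def utility(relationship, tier):
    """Toy utility: control x deniability, scaled by a capability term.
    Assumes (for illustration only) that proxy value peaks at Tier IV:
    below it, capability is too low; above it, the state could field the
    capability itself, eroding the rationale for a proxy."""
    control, deniability = RELATIONSHIPS[relationship]
    capability = max(min(tier, 4) - abs(tier - 4), 0)  # peaks at Tier IV
    return control * deniability * capability

best = max(((r, t) for r in RELATIONSHIPS for t in range(1, 7)),
           key=lambda rt: utility(*rt))
print(best)  # ('orchestration', 4): the Tier IV x Orchestration "Sweet Spot"
```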

Empirical observations of APT groups documented by MITRE ATT&CK (which catalogs 14 tactics and over 200 techniques observed in cyber operations) show the following distribution:

  • Tier I-II actors: use 1-10 techniques, mainly focused on Initial Access and Execution tactics
  • Tier III-IV actors: use 10-40 techniques, covering 6-10 ATT&CK tactics
  • Tier V-VI actors: use 40-100+ techniques, covering 12-14 tactics, and include rare/advanced techniques
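The distribution above can be turned into a rough banding heuristic. The band edges follow the approximate ranges quoted from the MITRE ATT&CK observations; a real tiering assessment would also weigh technique rarity and tradecraft quality, not raw counts.

```python
# Hedged sketch: rough tier banding from observed ATT&CK technique/tactic
# counts. Band edges mirror the approximate ranges quoted in the text;
# this is an illustration, not an operational capability assessment.
def estimate_tier_band(num_techniques, num_tactics):
    """Map observed technique/tactic counts to the tier bands quoted above."""
    if num_techniques >= 40 and num_tactics >= 12:
        return "Tier V-VI"
    if num_techniques >= 10 and num_tactics >= 6:
        return "Tier III-IV"
    return "Tier I-II"

print(estimate_tier_band(5, 2))    # Tier I-II
print(estimate_tier_band(25, 8))   # Tier III-IV
print(estimate_tier_band(80, 13))  # Tier V-VI
```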

As states increasingly resort to cyber proxies to pursue strategic objectives while avoiding escalation, the inadequacy of existing legal and regulatory frameworks becomes apparent. In order to address this issue, technical innovation will be necessary, as will international dialogue to redefine responsibilities within the cyber domain.

[1] Offensive Cyber Operations (OCO) refer to actions deliberately designed to penetrate, degrade, destroy, or manipulate an adversary's information systems, networks, and digital assets in order to obtain strategic, tactical, or operational advantages. Unlike defensive operations or "passive" exploitation operations (e.g., CNE - Computer Network Exploitation), OCO fall within the category of CNA - Computer Network Attack, according to DoD/NATO taxonomy. In the CTI context, OCO constitute the most advanced expression of the offensive capabilities of state or state-sponsored actors, and their study is essential for technical and strategic attribution, analysis of APT capability and intent, prediction of future campaigns or attacks on critical infrastructure, and triage of high-impact incidents.

[2] In the context of cyber operations and threat intelligence, a persona refers to a carefully constructed digital identity or public profile that a threat actor assumes to conceal their true nature, origin, or intentions. This fabricated identity typically includes a documented history of activities, attributed nicknames, associated accounts across multiple platforms, and a consistent pattern of behavior that aligns with a specific criminal or activist narrative. By maintaining an established persona - often spanning years of underground activities or public attribution - cyber proxies and state-sponsored actors can deflect attribution from their true sponsors, as investigators and security researchers may incorrectly attribute ongoing operations to the historical persona rather than recognizing the shift in operational patterns or the involvement of a state actor. A persona thus serves as a crucial misdirection tool in hybrid cyber operations, allowing actors to exploit existing reputational associations and law enforcement blind spots.

[3] Plausible Deniability refers to a state's ability to credibly disavow responsibility for hostile actions carried out on its behalf by non-state actors. It is maintained through operational separation, use of third-party infrastructure, intermediaries, and deliberate attribution confusion, allowing states to achieve strategic objectives while avoiding direct accountability and escalation risks.

[4] Second-guessing is an analytical phenomenon in which an intelligence analyst refrains from formulating or sharing a definitive assessment (e.g., on attribution, intent, or capability) for fear of being wrong, often deferring conclusions, softening language, or assuming excessively cautious and indecisive positions.

[5] Competing narratives are rival accounts - often conveyed by analysts, states, CTI vendors, or media - that describe the same cyber event differently, sometimes with divergent informational purposes.

[6] Analysis of Competing Hypotheses (ACH) is a structured analytic technique in which an analyst systematically compares multiple, mutually exclusive hypotheses against the available evidence, focusing on disproving less likely explanations in order to identify the hypothesis that best fits the observed facts.
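A minimal sketch of the ACH procedure, with invented hypotheses and evidence: each hypothesis is scored by the number of evidence items inconsistent with it, and the least-contradicted hypothesis survives.

```python
# Hedged sketch of an ACH-style matrix. Hypotheses and evidence items are
# invented for illustration; ACH scores hypotheses by the evidence that
# CONTRADICTS them, rather than the evidence that merely fits them.
EVIDENCE = {
    # evidence item: {hypothesis: "C" consistent / "I" inconsistent / "N" neutral}
    "malware reuses Group A toolkit":    {"Group A": "C", "Group B": "C", "false flag": "C"},
    "C2 infra overlaps Group B ops":     {"Group A": "I", "Group B": "C", "false flag": "N"},
    "timestamps match Group A timezone": {"Group A": "C", "Group B": "I", "false flag": "C"},
}

def inconsistency_counts(evidence):
    """Count, per hypothesis, how many evidence items are inconsistent with it."""
    hypotheses = {h for marks in evidence.values() for h in marks}
    return {h: sum(marks.get(h) == "I" for marks in evidence.values())
            for h in hypotheses}

scores = inconsistency_counts(EVIDENCE)
# The hypothesis with the FEWEST inconsistencies best fits the evidence.
print(min(scores, key=scores.get))  # false flag
```

Note how the toy matrix also echoes the tradecraft-pollution discussion above: indicators that are consistent with several hypotheses at once do nothing to discriminate between them.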

[7] In a state context with cyber proxies, deconfliction is the process by which a state intelligence service ensures that cyber offensive (or clandestine) activities conducted by a proxy group do not interfere, collide with, or compromise ongoing operations (by other units or the state itself), sensitive assets of strategic interest, or other allied actors or undercover structures.

[8] From an operational perspective, a digital dead drop is an indirect communication technique in which a threat actor leaves a message, payload, cryptographic key, or command in a predefined virtual location, which another actor can access subsequently, without direct interaction between sender and recipient. Examples of dead drops can be uploads of encrypted files to public GitHub repositories, commands embedded in metadata or comments on YouTube videos, etc.

[9] A digital persona is a fabricated identity, complete with digital identifiers (names, accounts, behaviors, technical and social attributes), designed to interact credibly online for the purpose of offensive operations (cyber, disinformation, influence), infiltration of closed or clandestine communities, cover for C2 or exfiltration activities, and masking in phishing, social engineering, or credential harvesting campaigns, etc.

ISPI - Istituto per gli Studi di Politica Internazionale published this content on February 05, 2026, and is solely responsible for the information contained herein. Distributed via Public Technologies (PUBT), unedited and unaltered, on February 10, 2026 at 09:35 UTC.