When North Korea’s estimated gains from cybercrime exceeded its total legitimate international trade in 2022, something fundamental shifted in the nature of strategic competition.[1] Pyongyang generated $1.7 billion through cyber operations while its entire legitimate foreign trade amounted to only $1.59 billion. This is not supplementary revenue or criminal opportunism; it is a primary mechanism of state finance that funds weapons programmes and regime stability.
Yet this development barely registered in mainstream security studies literature. The reason reveals a deeper problem: adversaries now compete strategically by deliberately exploiting the boundaries between categories that International Security Studies treats as distinct. Cyber operations succeed precisely because they operate simultaneously as economic extraction, cognitive manipulation, and potential kinetic threat, defying classification within inherited taxonomies of war, crime, espionage, and influence. The field’s failure to recognise this boundary exploitation as a coherent form of strategic competition leaves it unable to analyse how states actually generate and contest power in the digital age.
The Clausewitzian Checkpoint
Thomas Rid performed a valuable service in his 2012 article “Cyber War Will Not Take Place.”[2] He punctured inflated rhetoric around “cyber Pearl Harbors” by demonstrating that cyber operations don’t meet classical Clausewitzian criteria for warfare. They lack intrinsic violence, don’t compel through force, and haven’t fundamentally altered political orders through armed conflict. This deflation of threat inflation was intellectually rigorous and necessary.
But Rid’s framework created an unintended consequence. Many scholars concluded that if cyber operations aren’t war, they aren’t strategically significant. This logic fails when confronted with the evidence. As Lucas Kello argued in his 2013 response, cyber operations are “expanding the range of possible harm and outcomes between the concepts of war and peace” with profound implications for national and international security.[3] The problem isn’t that cyber operations constitute war in the traditional sense. The problem is that adversaries have discovered they can achieve strategic objectives by deliberately operating in the definitional gaps our frameworks create.
Consider what cyber operations actually are: sustained, below-threshold activities that extract resources, manipulate cognition, or degrade systems in ways that cumulatively influence state power and political outcomes without triggering the recognition mechanisms that would mobilise conventional responses. This positive definition reveals why inherited categories so often fail: cyber operations do not fall between war and crime by accident. They are designed to exploit that boundary.
Economic Extraction as Strategic Competition
The classification of state-sponsored cyber operations as “cybercrime” represents a fundamental category error. Widely cited estimates place global cybercrime costs at approximately $9.5 trillion annually for 2024.[4] These figures carry methodological challenges and conflate direct losses with indirect economic effects. Yet even if they are treated as indicative of order of magnitude rather than as precise valuations, the scale remains extraordinary: it exceeds Japan’s entire GDP and dwarfs global military spending.
More importantly, conservative estimates suggest that state-sponsored operations account for 15 to 20 percent of total cybercrime costs, which, against a $9.5 trillion baseline, implies roughly $1.4 to $1.9 trillion in annual state-directed activity.[5] This is many times the entire US intelligence budget and several multiples of China’s official military spending. These figures indicate systematic extraction at a scale fundamentally incompatible with treating such activity as mere crime.
The real insight concerns returns on investment. Realists pay attention to power and cost ratios. A state gaining billion-dollar effects from million-dollar inputs should trigger theoretical reassessment. North Korea reportedly spends tens of millions to gain billions, producing returns that conventional military operations cannot match. Russia tolerates or directly controls ransomware ecosystems that generate massive revenue while maintaining plausible deniability. China conducts industrial-scale intellectual property theft that allows it to leapfrog decades of research and development investment.
This isn’t “crime” in any meaningful sense, and yet neither is it war. It belongs to a family of coercive economic statecraft that includes sanctions, financial leverage, and strategic resource control. The field has conceptual machinery for thinking about these tools. What’s missing is recognition that cyber extraction sits within this lineage. When states systematically extract resources from other states without triggering traditional conflict responses, they’re engaging in economic warfare conducted through digital means. The boundary between legitimate economic competition, sanctions, illicit finance, and cyber theft has been deliberately blurred by adversaries who understand that this ambiguity protects them from response.
The 2017 NotPetya attack illustrates this boundary exploitation perfectly. Widely attributed to Russian military intelligence, it caused an estimated $10 billion in global damage, disrupted Maersk’s worldwide shipping operations, paralysed pharmaceutical manufacturing, and crippled critical infrastructure across Ukraine.[6] Was this economic warfare? Sabotage? Organised crime? The answer is that it was designed to be all of these simultaneously, making classification impossible and response difficult. Because it operated through boundary exploitation rather than traditional violence, it failed to generate a sustained policy response despite extraordinary economic dislocation.
Cognitive Operations and the Active Measures Tradition
The cognitive dimension represents boundary exploitation of a different order. What Andreas Krieg terms “subversion”, or the strategic weaponization of narratives to exploit socio-psychological vulnerabilities and erode sociopolitical consensus, isn’t unprecedented.[7] It builds directly on the Soviet tradition of active measures during the Cold War. The KGB’s disinformation campaigns, front organisations, and cultivation of influence networks were designed to achieve precisely what contemporary information operations attempt: reflexive control whereby adversaries voluntarily make decisions predetermined by the subverting party.
What has changed is scale and effectiveness. The contemporary information environment’s infrastructural vulnerabilities make such operations far more potent than their Cold War predecessors. Russian information operations around the 2016 US election, Brexit-related influence campaigns, and subsequent efforts across European democracies represent sustained attempts to manipulate democratic processes. Whether individually decisive or not, their cumulative effect has been to inject pervasive uncertainty into democratic deliberation.
Krieg’s framework is particularly useful because it recognises that effective subversion requires high levels of both orchestration and mobilisation.[8] Successful operations coordinate across multiple domains: cultivating academic surrogates who provide legitimacy, deploying media networks that amplify messages, and ultimately mobilising real-world activism or policy changes. This isn’t merely spreading disinformation online. It’s cognitive warfare that operates simultaneously in the informational space, the expert-policy nexus, and physical civil society mobilisation.
Here again we see deliberate boundary exploitation. Is this espionage? Influence operations? Public diplomacy? Domestic political activism? The answer is that it’s designed to be all of these, making it impossible to counter with tools designed for any single category. The operations succeed not despite their categorical ambiguity but because of it. International Security Studies has failed not because cyber-enabled cognitive operations are unprecedented, but because the field hasn’t updated its frameworks for thinking about covert influence in an age where scale, speed, and micro-targeting have fundamentally changed the strategic payoff.
Threshold Manipulation as Structural Feature
The attribution problem receives significant attention in cyber security literature, as do the challenges of compressed decision timelines. These are real difficulties. But the more profound challenge is threshold manipulation itself. Adversaries deliberately operate below levels that would trigger Article 5 commitments or justify conventional military responses. Each individual operation seems manageable, falling into the category of crime, espionage, or influence activity rather than warfare. Yet the cumulative strategic effect potentially exceeds many conventional conflicts.
This isn’t an implementation problem. It’s a structural feature of how cyber competition operates. States are competing precisely by exploiting the cognitive thresholds that anchor our definitions of conflict. Russia can conduct NotPetya through proxies with sufficient ambiguity that attribution takes months and response remains constrained. China can exfiltrate terabytes of intellectual property through operations that blend state intelligence services, military contractors, and “patriotic” hackers. North Korea can steal billions through operations routed through multiple jurisdictions with cryptocurrency payouts. Each operation maintains just enough deniability to complicate response, just enough restraint to avoid crossing recognised warfare thresholds.
This threshold manipulation undermines every inherited category of International Security Studies. It’s not that cyber operations are hard to classify. It’s that they’re designed to exploit the classification system itself. The boundaries between war and peace, state and non-state action, economic and military competition, legitimate influence and hostile subversion aren’t natural categories adversaries accidentally fall between. They’re strategic terrain adversaries deliberately occupy because they understand these boundaries constrain Western responses.
Recognising What We’re Not Seeing
The policy stakes extend beyond academic taxonomy. We’re not experiencing Pearl Harbor-equivalent events annually in some literal sense. We’re experiencing something potentially more destabilising: systematic strategic competition that bypasses the recognition mechanisms that would trigger mobilisation. Our threat-recognition systems are calibrated to identify territorial aggression, military build-ups, and kinetic attacks. They’re not calibrated to identify systematic resource extraction masquerading as crime, cognitive operations masquerading as activism, or infrastructure compromise masquerading as espionage.
The result is systematic under-response to strategic challenges of the first order. We’re not mobilising resources, not building institutional capacity, not developing deterrent strategies, not coordinating international responses, because these events don’t trip the conceptual wires that would classify them as national security emergencies. Meanwhile, adversaries conduct what amounts to unrestricted economic and cognitive warfare whilst maintaining that they’re merely engaged in “law enforcement matters,” “intelligence operations,” or “civil society activism.”
We’ve constrained ourselves through definitional conservatism. We’ve accepted that if it’s not “war,” it doesn’t merit war-level responses, while adversaries operate under no such constraints. They’ve discovered they can achieve strategic objectives through boundary exploitation, and our theoretical frameworks leave us unable to recognise, analyse, or respond effectively.
Toward Frameworks Adequate to Strategic Reality
What would it mean for International Security Studies to take cyber operations seriously? It requires recognising that the boundaries between economic, cognitive, and kinetic competition have been deliberately collapsed by adversaries who understand that this collapse protects them from response.
This means developing frameworks that can analyse strategic competition operating across these domains simultaneously. It means recognising that when North Korea funds its regime primarily through cyber theft, when Russia conducts operations that are simultaneously economic warfare and intelligence collection, and when China’s influence campaigns blend state direction with autonomous proxy action, these are not anomalies. They are exemplars of how strategic competition now operates.
It means accepting that “security” in conditions of radical technological change looks sufficiently different that inherited categories have limited explanatory power. The field needs concepts and metrics for assessing threats that are diffuse, cumulative, and often only apparent in retrospect. It needs frameworks that can recognise systematic resource extraction, cognitive manipulation, and infrastructure compromise as coherent forms of strategic competition rather than unrelated criminal, intelligence, and influence activities.
Most fundamentally, it means understanding that the definitional boundaries the field has inherited are not neutral analytical categories. They’re strategic terrain that adversaries compete by exploiting. Until International Security Studies recognises boundary exploitation itself as the central feature of contemporary cyber competition, the field will remain unable to provide the analytical guidance policymakers urgently need.
Bibliography
Buchanan, Ben. The Hacker and the State: Cyber Attacks and the New Normal of Geopolitics. Cambridge, MA: Harvard University Press, 2020.
“Cost of Cybercrime.” Cybersecurity Ventures, 2024. https://cybersecurityventures.com/cybercrime-damages-6-trillion-by-2021/.
Greenberg, Andy. “The Untold Story of NotPetya, the Most Devastating Cyberattack in History.” Wired, August 22, 2018. https://www.wired.com/story/notpetya-cyberattack-ukraine-russia-code-crashed-the-world/.
Kello, Lucas. “The Meaning of the Cyber Revolution: Perils to Theory and Statecraft.” International Security 38, no. 2 (Fall 2013): 7-40.
Krieg, Andreas. Subversion: The Strategic Weaponization of Narratives. Washington, DC: Georgetown University Press, 2023.
Rid, Thomas. “Cyber War Will Not Take Place.” Journal of Strategic Studies 35, no. 1 (February 2012): 5-32.
Notes
[1] Ben Buchanan, The Hacker and the State: Cyber Attacks and the New Normal of Geopolitics (Cambridge, MA: Harvard University Press, 2020), 47-48.
[2] Thomas Rid, “Cyber War Will Not Take Place,” Journal of Strategic Studies 35, no. 1 (February 2012): 5-32.
[3] Lucas Kello, “The Meaning of the Cyber Revolution: Perils to Theory and Statecraft,” International Security 38, no. 2 (Fall 2013): 8.
[4] “Cost of Cybercrime,” Cybersecurity Ventures, 2024, https://cybersecurityventures.com/cybercrime-damages-6-trillion-by-2021/. These figures aggregate direct theft, business disruption, and opportunity costs, and should be treated as indicative of order of magnitude rather than precise valuations.
[5] Kello, “Meaning of the Cyber Revolution,” 24-26.
[6] Andy Greenberg, “The Untold Story of NotPetya, the Most Devastating Cyberattack in History,” Wired, August 22, 2018, https://www.wired.com/story/notpetya-cyberattack-ukraine-russia-code-crashed-the-world/.
[7] Andreas Krieg, Subversion: The Strategic Weaponization of Narratives (Washington, DC: Georgetown University Press, 2023), 89.
[8] Krieg, Subversion, 101-103.