
The Missing Adversary: What the UK’s Cyber Security Strategy Leaves Out

The UK Government Cyber Security Strategy 2022–2030 is 84 pages long. It mentions offensive cyber capability once, in a subordinate clause, in a focus box, on page 49. The remaining 83 pages describe risk management, asset discovery, vulnerability reporting, supply chain assurance, incident response, workforce development, and the adoption of the Cyber Assessment Framework across the public sector. It is thorough, competent, and well-structured. It is also not a strategy.

This is not a drafting failure. The document cannot contain the thing that would make it genuinely strategic, because that thing — the reciprocal logic of state cyber competition, including the UK’s own offensive posture and the adversary intent it provokes — is precisely what politics dictates a public document must leave out. If you are a practitioner building your cyber defences from this document, that structural gap has consequences you need to understand.

Strategy requires an adversary

A strategy, in any serious usage, is a theory of how applying your means produces a desired effect on an adversary pursuing their own ends. It demands that you understand not just what you are defending, but who is attacking, what they want, and why your organisation matters to them. The UK’s cyber security strategy does none of this. Its five objectives (manage risk, protect, detect, respond, develop skills) are the components of a security operations programme. While they do describe how to administer resilience, they do not link it to the strategic intentions of the actors generating the threat.

The document’s aim is that all government organisations should be resilient to known vulnerabilities and attack methods by 2030. That is a maturity target, and a reasonable one. But a maturity target is to strategy what vehicle maintenance is to a campaign plan. It ensures your equipment works. It does not tell you where to concentrate your forces or why.

In practice, if you are a CISO implementing the Cyber Assessment Framework, you are sizing your defences against a threat profile. But the threat profile describes adversary capabilities and methods, not adversary objectives. It tells you what is being done to you. It cannot tell you why, because answering that question honestly would require the UK government to describe a two-sided strategic contest in which it is an active participant, not a passive recipient. And that is not politically possible.

The adversaries have strategies. You don’t.

The actors referenced obliquely throughout the document — never named, often gestured at — do operate with genuine strategic intent, even if their cyber operations vary enormously in sophistication and purpose. North Korea’s cyber theft programme serves regime survival by generating hard currency that sanctions deny through conventional channels. China’s operations span industrial espionage, domestic surveillance, and the suppression of external dissent, each serving distinct but related political objectives. Russia employs cyber operations as one component of a broader approach to degrading institutional coherence in adversary states. These are necessarily simplified descriptions of complex state behaviours, but they share a common feature: in each case, cyber operations are means directed toward stable, rational and identifiable political ends.

The UK’s published strategy, by contrast, presents the threat environment as something to be endured and managed rather than understood as a competitive interaction. The result is a document that describes the defensive half of a strategic relationship as though it were the whole picture.

Your threat model is incomplete, and the framework cannot fix it

The Cyber Assessment Framework (CAF) provides tiered profiles matched to threat levels, and organisations assess against the profile that corresponds to their function’s criticality. This is sound in principle. In practice, however, it produces a uniform defensive posture calibrated to adversary capability rather than adversary intent. And the difference between being scanned by an automated botnet and being targeted by a state actor pursuing a specific intelligence requirement is not a difference of degree. It is a difference of kind. The botnet will move on when it encounters adequate defences. The state actor will adapt, persist, and find another way in, because the objective driving the operation has not changed. Achieving your CAF profile outcomes addresses the first situation. It does not reliably address the second, because the second is shaped by strategic logic your threat profile does not and cannot capture.

The question you need to answer, which the strategy will not answer for you, is: what does my adversary want from me specifically? Not what tools they use. Not what vulnerabilities they exploit. What political or economic objective my organisation’s compromise would serve.

Identify your centre of gravity

This is ultimately a question about your own centre of gravity — the asset, function, or data set whose compromise would cause disproportionate harm and which a strategically motivated adversary would therefore prioritise.

If you are a government department managing classified defence procurement, your centre of gravity is not your email server. It is the data that reveals UK capability trajectories to a competitor. If you manage health infrastructure, your centre of gravity shifts depending on context: during a crisis, it is service continuity; in normal operations, it is patient data at population scale. If you regulate energy, it is the dependency mapping that would allow an adversary to identify cascading failure points across the national grid.

Once you identify what a strategically motivated adversary would actually want from you, your defensive posture should concentrate around that. Not distribute itself uniformly to meet a standardised profile. Compliance with the CAF is necessary. But the gap between compliance and genuine strategic defence is precisely the space the document’s unwritten section would have occupied — the section that explains who is coming for you, what they want, and what that means for where you put your resources.

Reading the silence

The Government Cyber Security Strategy is not a bad document produced by people who don’t understand strategy. It is a public document produced under conditions that make honest strategic disclosure impossible. You cannot publish a candid account of reciprocal cyber competition without exposing capabilities, revealing intelligence sources, and acknowledging that the UK’s own operations shape the threat environment its departments face. The genre of ‘published national cyber strategy’ is therefore structurally evasive: it must present administrable resilience in place of the adversarial logic that would make it genuinely strategic, because that logic is classified, diplomatically sensitive, or both.

For practitioners, the implication is not that you should disregard the strategy. Implement the CAF. Build shared capabilities. Invest in your workforce and your detection capacity. But do not mistake the document for a complete account of the strategic environment you operate in. It is the publicly sayable portion of a larger contest whose most important dynamics, the ones that determine why you are being targeted and what your adversary considers worth taking, cannot appear in a document with an ISBN.

The unwritten section is the one that matters most. Plan as though it exists.

Beyond the War/Crime Binary and Why International Security Studies Must Reckon with Cyber Operations

When North Korea’s estimated gains from cybercrime exceeded its total legitimate international trade in 2022, something fundamental shifted in the nature of strategic competition.[1] Pyongyang generated $1.7 billion through cyber operations while earning only $1.59 billion from legal exports. This is not supplementary revenue or criminal opportunism. It is a primary mechanism of state finance, funding weapons programmes and regime stability.

Yet this development barely registered in mainstream security studies literature. The reason reveals a deeper problem in how adversaries now compete strategically by deliberately exploiting the boundaries between categories that International Security Studies treats as distinct. Cyber operations succeed precisely because they operate simultaneously as economic extraction, cognitive manipulation, and potential kinetic threat, defying classification within inherited taxonomies of war, crime, espionage, and influence. The field’s failure to recognise this boundary exploitation as a coherent form of strategic competition leaves it unable to analyse how states actually generate and contest power in the digital age.

The Clausewitzian Checkpoint

Thomas Rid performed a valuable service in his 2012 article “Cyber War Will Not Take Place.”[2] He punctured inflated rhetoric around “cyber Pearl Harbors” by demonstrating that cyber operations don’t meet classical Clausewitzian criteria for warfare. They lack intrinsic violence, don’t compel through force, and haven’t fundamentally altered political orders through armed conflict. This deflation of threat inflation was intellectually rigorous and necessary.

But Rid’s framework created an unintended consequence. Many scholars concluded that if cyber operations aren’t war, they aren’t strategically significant. This logic fails when confronted with the evidence. As Lucas Kello argued in his 2013 response, cyber operations are “expanding the range of possible harm and outcomes between the concepts of war and peace” with profound implications for national and international security.[3] The problem isn’t that cyber operations constitute war in the traditional sense. The problem is that adversaries have discovered they can achieve strategic objectives by deliberately operating in the definitional gaps our frameworks create.

Consider what cyber operations actually are: sustained, below-threshold activities that extract resources, manipulate cognition, or degrade systems in ways that cumulatively influence state power and political outcomes without triggering the recognition mechanisms that would mobilise conventional responses. This positive definition reveals why inherited categories fail: cyber operations do not fall between war and crime by accident. They are designed to exploit that boundary.

Economic Extraction as Strategic Competition

The classification of state-sponsored cyber operations as “cybercrime” represents a fundamental category error. Widely cited estimates place global cybercrime costs at approximately $9.5 trillion annually for 2024.[4] These figures carry methodological challenges and conflate direct losses with indirect economic effects. Yet even treating them as indicative of order of magnitude rather than precise valuations, the scale remains extraordinary. This exceeds Japan’s entire GDP and dwarfs global military spending.

More importantly, conservative estimates suggest that state-sponsored operations account for 15 to 20 percent of total cybercrime costs, representing perhaps $1.5 to $2 trillion in annual state-directed activity.[5] This is roughly double the entire US intelligence budget and comparable to China’s official military spending. These figures indicate systematic extraction at a scale fundamentally incompatible with treating such activity as mere crime.
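The scale claim here is back-of-envelope arithmetic, and it is worth making explicit. A minimal sketch, using only the estimates quoted in the text (the $9.5 trillion global figure and the 15 to 20 percent state-sponsored share, both cited estimates rather than verified data):

```python
# Back-of-envelope arithmetic for the extraction estimates quoted above.
# Inputs are the text's cited estimates, not verified figures.
global_cybercrime_cost = 9.5e12  # ~$9.5 trillion annually (2024 estimate)
state_share_low, state_share_high = 0.15, 0.20  # assumed state-sponsored share

low = global_cybercrime_cost * state_share_low    # ~$1.4 trillion
high = global_cybercrime_cost * state_share_high  # ~$1.9 trillion

print(f"State-directed extraction: ${low / 1e12:.1f} to ${high / 1e12:.1f} trillion per year")
```

The result, roughly $1.4 to $1.9 trillion a year, matches the text’s “$1.5 to $2 trillion” only as an order of magnitude, which is the only sense in which the underlying estimates can responsibly be read.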

The real insight concerns returns on investment. Realists pay attention to power and cost ratios. A state gaining billion-dollar effects from million-dollar inputs should trigger theoretical reassessment. North Korea reportedly spends tens of millions to gain billions, producing returns that conventional military operations cannot match. Russia tolerates or directly controls ransomware ecosystems that generate massive revenue while maintaining plausible deniability. China conducts industrial-scale intellectual property theft that allows it to leapfrog decades of research and development investment.

This isn’t “crime” in any meaningful sense and yet neither is it war. It belongs to a family of coercive economic statecraft that includes sanctions, financial leverage, and strategic resource control. The field has conceptual machinery for thinking about these tools. What’s missing is recognition that cyber extraction sits within this lineage. When states systematically extract resources from other states without triggering traditional conflict responses, they’re engaging in economic warfare conducted through digital methodologies. The boundary between legitimate economic competition, sanctions, illicit finance, and cyber theft has been deliberately blurred by adversaries who understand that this ambiguity protects them from response.

The 2017 NotPetya attack illustrates this boundary exploitation perfectly. Widely attributed to Russian military intelligence, it caused an estimated $10 billion in global damage, disrupted Maersk’s worldwide shipping operations, paralysed pharmaceutical manufacturing, and crippled critical infrastructure across Ukraine.[6] Was this economic warfare? Sabotage? Organised crime? The answer is that it was designed to be all of these simultaneously, making classification impossible and response difficult. Because it operated through exploitation rather than traditional violence, it failed to generate sustained policy response despite extraordinary economic dislocation.

Cognitive Operations and the Active Measures Tradition

The cognitive dimension represents boundary exploitation of a different order. What Andreas Krieg terms “subversion”, or the strategic weaponization of narratives to exploit socio-psychological vulnerabilities and erode sociopolitical consensus, isn’t unprecedented.[7] It builds directly on the Soviet tradition of active measures during the Cold War. The KGB’s disinformation campaigns, front organisations, and cultivation of influence networks were designed to achieve precisely what contemporary information operations attempt: reflexive control whereby adversaries voluntarily make decisions predetermined by the subverting party.

What has changed is scale and effectiveness. The contemporary information environment’s infrastructural vulnerabilities make such operations far more potent than their Cold War predecessors. Russian information operations around the 2016 US election, Brexit-related influence campaigns, and subsequent efforts across European democracies represent sustained attempts to manipulate democratic processes. Whether individually decisive or not, their cumulative effect has been to inject pervasive uncertainty into democratic deliberation.

Krieg’s framework is particularly useful because it recognises that effective subversion requires high levels of both orchestration and mobilisation.[8] Successful operations coordinate across multiple domains: cultivating academic surrogates who provide legitimacy, deploying media networks that amplify messages, and ultimately mobilising real-world activism or policy changes. This isn’t merely spreading disinformation online. It’s cognitive warfare that operates simultaneously in the informational space, the expert-policy nexus, and physical civil society mobilisation.

Here again we see deliberate boundary exploitation. Is this espionage? Influence operations? Public diplomacy? Domestic political activism? The answer is that it’s designed to be all of these, making it impossible to counter with tools designed for any single category. The operations succeed not despite their categorical ambiguity but because of it. International Security Studies has failed not because cyber-enabled cognitive operations are unprecedented, but because the field hasn’t updated its frameworks for thinking about covert influence in an age where scale, speed, and micro-targeting have fundamentally changed the strategic payoff.

Threshold Manipulation as Structural Feature

The attribution problem receives significant attention in cyber security literature, as do the challenges of compressed decision timelines. These are real difficulties. But the more profound challenge is threshold manipulation itself. Adversaries deliberately operate below levels that would trigger Article 5 commitments or justify conventional military responses. Each individual operation seems manageable, falling into the category of crime, espionage, or influence activity rather than warfare. Yet the cumulative strategic effect potentially exceeds many conventional conflicts.

This isn’t an implementation problem. It’s a structural feature of how cyber competition operates. States are competing precisely by exploiting the cognitive thresholds that anchor our definitions of conflict. Russia can conduct NotPetya through proxies with sufficient ambiguity that attribution takes months and response remains constrained. China can exfiltrate terabytes of intellectual property through operations that blend state intelligence services, military contractors, and “patriotic” hackers. North Korea can steal billions through operations routed through multiple jurisdictions with cryptocurrency payouts. Each operation maintains just enough deniability to complicate response, just enough restraint to avoid crossing recognised warfare thresholds.

This threshold manipulation undermines every inherited category of International Security Studies. It’s not that cyber operations are hard to classify. It’s that they’re designed to exploit the classification system itself. The boundaries between war and peace, state and non-state action, economic and military competition, legitimate influence and hostile subversion aren’t natural categories adversaries accidentally fall between. They’re strategic terrain adversaries deliberately occupy because they understand these boundaries constrain Western responses.

Recognising What We’re Not Seeing

The policy stakes extend beyond academic taxonomy. We’re not experiencing Pearl Harbor-equivalent events annually in some literal sense. We’re experiencing something potentially more destabilising: systematic strategic competition that bypasses the recognition mechanisms that would trigger mobilisation. Our threat-recognition systems are calibrated to identify territorial aggression, military build-ups, and kinetic attacks. They’re not calibrated to identify systematic resource extraction masquerading as crime, cognitive operations masquerading as activism, or infrastructure compromise masquerading as espionage.

The result is systematic under-response to strategic challenges of the first order. We’re not mobilising resources, not building institutional capacity, not developing deterrent strategies, not coordinating international responses, because these events don’t trip the conceptual wires that would classify them as national security emergencies. Meanwhile, adversaries conduct what amounts to unrestricted economic and cognitive warfare whilst maintaining that they’re merely engaged in “law enforcement matters,” “intelligence operations,” or “civil society activism.”

We’ve constrained ourselves through definitional conservatism. We’ve accepted that if it’s not “war,” it doesn’t merit war-level responses, while adversaries operate under no such constraints. They’ve discovered they can achieve strategic objectives through boundary exploitation, and our theoretical frameworks leave us unable to recognise, analyse, or respond effectively.

Toward Frameworks Adequate to Strategic Reality

What would it mean for International Security Studies to take cyber operations seriously? It requires recognising that the boundaries between economic, cognitive, and kinetic competition have been deliberately collapsed by adversaries who understand that this collapse protects them from response.

This means developing frameworks that can analyse strategic competition operating across these domains simultaneously. It means recognising that when North Korea funds its regime primarily through cyber theft, when Russia conducts operations that are simultaneously economic warfare and intelligence operations, and when China’s influence campaigns blend state direction with autonomous proxy action, these are not edge cases. They are exemplars of how strategic competition now operates.

It means accepting that “security” in conditions of radical technological change looks sufficiently different that inherited categories have limited explanatory power. The field needs concepts and metrics for assessing threats that are diffuse, cumulative, and often only apparent in retrospect. It needs frameworks that can recognise systematic resource extraction, cognitive manipulation, and infrastructure compromise as coherent forms of strategic competition rather than unrelated criminal, intelligence, and influence activities.

Most fundamentally, it means understanding that the definitional boundaries the field has inherited are not neutral analytical categories. They are strategic terrain, and adversaries compete by exploiting them. Until International Security Studies recognises boundary exploitation itself as the central feature of contemporary cyber competition, the field will remain unable to provide the analytical guidance policymakers urgently need.


Bibliography

Buchanan, Ben. The Hacker and the State: Cyber Attacks and the New Normal of Geopolitics. Cambridge, MA: Harvard University Press, 2020.

“Cost of Cybercrime.” Cybersecurity Ventures, 2024. https://cybersecurityventures.com/cybercrime-damages-6-trillion-by-2021/.

Greenberg, Andy. “The Untold Story of NotPetya, the Most Devastating Cyberattack in History.” Wired, August 22, 2018. https://www.wired.com/story/notpetya-cyberattack-ukraine-russia-code-crashed-the-world/.

Kello, Lucas. “The Meaning of the Cyber Revolution: Perils to Theory and Statecraft.” International Security 38, no. 2 (Fall 2013): 7-40.

Krieg, Andreas. Subversion: The Strategic Weaponization of Narratives. Washington, DC: Georgetown University Press, 2023.

Rid, Thomas. “Cyber War Will Not Take Place.” Journal of Strategic Studies 35, no. 1 (February 2012): 5-32.


Notes

[1] Ben Buchanan, The Hacker and the State: Cyber Attacks and the New Normal of Geopolitics (Cambridge, MA: Harvard University Press, 2020), 47-48.

[2] Thomas Rid, “Cyber War Will Not Take Place,” Journal of Strategic Studies 35, no. 1 (February 2012): 5-32.

[3] Lucas Kello, “The Meaning of the Cyber Revolution: Perils to Theory and Statecraft,” International Security 38, no. 2 (Fall 2013): 8.

[4] “Cost of Cybercrime,” Cybersecurity Ventures, 2024, https://cybersecurityventures.com/cybercrime-damages-6-trillion-by-2021/. These figures aggregate direct theft, business disruption, and opportunity costs, and should be treated as indicative of order of magnitude rather than precise valuations.

[5] Kello, “Meaning of the Cyber Revolution,” 24-26.

[6] Andy Greenberg, “The Untold Story of NotPetya, the Most Devastating Cyberattack in History,” Wired, August 22, 2018, https://www.wired.com/story/notpetya-cyberattack-ukraine-russia-code-crashed-the-world/.

[7] Andreas Krieg, Subversion: The Strategic Weaponization of Narratives (Washington, DC: Georgetown University Press, 2023), 89.

[8] Krieg, Subversion, 101-103.
