Thoughts, reflections and experiences

Precision Reconsidered: Russia’s Shift from Guided Missiles to Mass Bombardment

Before the full-scale invasion, Russia positioned itself as a modern power able to cripple an adversary’s decision-cycle with carefully targeted precision-guided munitions. Three years of war in Ukraine have punctured that story. The Kremlin now relies on volume rather than accuracy, trading the prestige of “surgical” strikes for the blunt attrition of drone swarms and repurposed air-defence missiles. What follows traces that transition and asks what it does to Western analytical assumptions about technology, ethics and power.

From Surgical Imagery to Saturation Practice

Pentagon tallies show that Russian forces fired more than 1,100 guided missiles in the first month of the invasion, yet many exploded in apartment blocks rather than command nodes (Department of Defense, 2022). Domestic production never kept pace. By mid-2023 Russian factories were turning out roughly 60 new cruise missiles a month, a fraction of operational demand (Williams, 2023).

Facing depletion, Moscow shifted firepower architecture. S-300 surface-to-air missiles were redirected at ground targets, increasing miss distances, while Iranian-designed Shahed drones began to pad nightly salvos (Army Recognition, 2024). Guided-missile launches fell steadily while drone use soared, reaching an estimated total of 4,000 deployed by the first quarter of 2025 (Atalan and Jensen, 2025). The identity of the high-tech precision striker gave way to the practicalities of magazine depth and industrial capacity.

Implications for Western Analysis

Western security discourse long treated accuracy as a twin proof of technical mastery and ethical restraint (Schmitt and Widmar, 2014; Wilson, 2020). Russia’s practice weakens both pillars. Norms endure through consistent observance and recognition; when a major power claims the vocabulary of precision while accepting wide error margins, the social meaning of accuracy erodes (Tannenwald, 2017).

The episode therefore offers a methodological caution. Counting missiles without attending to their symbolic weight risks analytical short-sightedness. The shift towards low-cost saturation munitions signals a recalibration of Russian strategic identity and alters the deterrence calculus of adversaries who must now defend against continuous drone attrition rather than episodic cruise-missile raids. Civilian resilience, alliance solidarity and arms-control expectations all pivot on how quickly that new reality is understood.

In Summary

Russia’s move from precision-guided missiles to mass bombardment is more than a supply-chain story. It marks the point where an identity built on technological finesse buckled under material constraint, transforming both the battlefield and the normative landscape around it. Analysts tracking future conflicts would do well to remember that weapons categories are not only hardware inventories but carriers of meaning, and that meaning can shift faster than production lines.


Bibliography

Army Recognition 2024. ‘Russia Repurposes S-300 Surface-to-Air Missiles for Ground Attacks Against Kharkiv’, 5 January.

Atalan, Y. and Jensen, B. 2025. Drone Saturation: Russia’s Shahed Campaign. CSIS Brief, 13 May.

Department of Defense 2022. ‘Pentagon Press Secretary John F. Kirby Holds a Press Briefing’, 21 March.

Schmitt, M. and Widmar, E. 2014. ‘On Target: Precision and Balance in the Contemporary Law of Targeting’. Journal of National Security Law and Policy, 7(3).

Tannenwald, N. 2017. ‘How Strong Are the Nuclear Taboo and the Chemical Weapons Ban?’ The Washington Quarterly, 40(1), 79–98.

Williams, I. 2023. ‘Russia Isn’t Going to Run Out of Missiles’. CSIS Analysis, 28 June.

Wilson, N. 2020. ‘The Ambiguities of Precision Warfare’, Intimacies of Remote Warfare commentary, 12 June. 

From Theory to the Trenches: Introducing “Cyber Centres of Gravity”

The nature of warfare is in constant flux. Clausewitz’s timeless insight that war is a “contest of wills” remains central, yet the means by which this contest is waged are transforming. Traditionally, Centres of Gravity (CoGs) were often seen as physical entities: armies, capitals, industrial capacity. The thinking was that neutralising these would cripple an adversary’s warfighting ability. However, it’s crucial to recognise, as scholars like Echevarria highlight, that Clausewitz himself acknowledged non-material CoGs, such as national will. The concept isn’t entirely new, but modern interpretations significantly expand upon it, especially in the context of cyberspace.

Today, the pervasive nature of information networks prompts us to consider what this means for strategic targeting. What happens when the critical vulnerabilities lie not just in the physical domain, but in an enemy’s belief systems, the legitimacy of their leadership, or their very grasp of shared reality? This is where exploring an emerging concept – what this article terms “Cyber Centres of Gravity” (Cyber CoGs) – becomes vital for contemporary military strategists. While “Cyber CoG” as a distinct term is still evolving and not yet firmly established in formal doctrine (which tends to use adjacent terms like cognitive targets or information influence objectives, as noted by analysts like Pawlak), its exploration helps us grapple with these new strategic challenges. Ignoring these intangible, yet increasingly critical, aspects in our information-saturated world could represent a significant strategic blind spot.

Understanding “Cyber CoGs”

So, what might a “Cyber CoG” entail? It can be conceptualised as a critical source of an adversary’s moral or political cohesion, their collective resolve, or a foundational element of their operative reality-construct that underpins their ability or will to resist your strategic objectives. The key idea is that significant degradation of such a “Cyber CoG,” predominantly through cyber-enabled means, could fundamentally unravel an enemy’s capacity or desire to continue a conflict, perhaps by altering their perception of the strategic landscape.

This isn’t merely about disrupting networks or servers, though such actions might play a role. A true “Cyber CoG,” in this conceptualisation, is intrinsically linked to these deeper wellsprings of an enemy’s will, cohesion, or their understanding of reality. If an operation doesn’t aim to decisively alter the strategic balance by impacting these moral, political, or epistemic foundations, it’s more likely an operational objective than an attack on a strategic “Cyber CoG”.

Clausewitz identified the CoG as “the hub of all power and movement, on which everything depends”. In an age increasingly defined by information, this hub can often be found in the cognitive and informational realms. When societal “passion” can be manipulated through digital narratives, when a military’s operating environment is shaped by perception as much as by physical friction, and when governmental “reason” is threatened by the decay of a shared factual basis, cyberspace becomes an increasingly central domain in shaping strategic outcomes. While kinetic, economic, and geopolitical power still hold immense, often primary, sway in high-stakes confrontations (a point Gartzke’s work on the “Myth of Cyberwar” reminds us to consider), the cyber domain offers potent avenues to contest the very “reality” upon which an adversary’s will is constructed. Here, strategic success may rely less on physical destruction and more on the ability to influence or disrupt an adversary’s cognitive and narrative environments.

Identifying Potential “Cyber CoGs”: A Framework for Analysis

Pinpointing these potential “Cyber CoGs” requires a nuanced analytical approach, considering factors such as:

  1. Strategic Relevance: Does the potential target truly sustain the enemy’s will to fight or their core strategic calculus? This involves looking at national cohesion, public legitimacy, dominant narratives, key alliances, or shared assumptions underpinning their strategy. Its degradation should aim to undermine their strategic purpose or resolve.
  2. Cyber Primacy in Effect: Can cyber-enabled operations offer a uniquely effective, or significantly complementary, method for impacting this CoG, especially when compared or combined with kinetic, economic, or diplomatic levers? Some intangible CoGs may be less susceptible to physical attack but highly vulnerable to informational strategies.
  3. Potential for Decisive Influence: Is the intended effect of targeting the “Cyber CoG” likely to be decisive, whether through an irreversible loss of trust (e.g., in institutions or information), a critical breakdown in a foundational narrative, or a fundamental, lasting shift in the adversary’s perception of their strategic environment? It could also be a cumulative effect, eroding coherence and resolve over time.
  4. Linkage to Moral and Political Dimensions (Clausewitzian Character): Is the “Cyber CoG” intrinsically tied to the enemy’s unity, cohesion, will to resist, or the shared narratives defining their interests and threats? It’s not just a system or infrastructure but is linked to the collective spirit or governing principles.
  5. Strategic Viability and Responsibility: Can the proposed operation be conducted with a rigorous assessment of attribution risks, potential for unintended escalation, and broader second-order societal effects? This includes careful consideration of evolving international norms and legal frameworks.
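One way to make the five criteria concrete is a simple screening rubric. The sketch below is purely illustrative: the criterion keys, the 0–3 scale, and the “floor on every criterion” rule are assumptions of mine, not doctrine, and real CoG analysis is qualitative and contested rather than numeric.

```python
from dataclasses import dataclass

# Hypothetical screening rubric for the five criteria discussed above.
# Criterion names and the 0-3 scoring scale are illustrative assumptions,
# not established doctrine.
CRITERIA = [
    "strategic_relevance",
    "cyber_primacy",
    "decisive_influence",
    "moral_political_linkage",
    "viability_responsibility",
]

@dataclass
class Candidate:
    name: str
    scores: dict  # criterion -> score from 0 (absent) to 3 (strong)

    def is_plausible_cog(self, floor: int = 2) -> bool:
        # A candidate only qualifies if it clears the floor on EVERY
        # criterion: a high score on one axis cannot compensate for
        # failing another (e.g. decisive but strategically irresponsible).
        return all(self.scores.get(c, 0) >= floor for c in CRITERIA)

candidate = Candidate(
    name="adversary trust in official information channels",
    scores={
        "strategic_relevance": 3,
        "cyber_primacy": 3,
        "decisive_influence": 2,
        "moral_political_linkage": 3,
        "viability_responsibility": 2,
    },
)
print(candidate.name, "->", candidate.is_plausible_cog())
```

The conjunctive rule reflects the argument of the section: a target that fails the viability-and-responsibility test, however decisive, should not be treated as a legitimate strategic “Cyber CoG”.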

Implications for Military Planners

Strategically engaging potential “Cyber CoGs” would necessitate evolving current approaches:

  • Integrated Intelligence: Identifying and understanding these “Cyber CoGs” demands a deep, multidisciplinary intelligence effort, fusing technical insights with profound cultural, political, cognitive, and narrative analysis. This requires collaboration between experts in fields like anthropology, sociology, political science, and data science to map the ‘human terrain’ and ‘narrative architecture’.
  • Dynamic and Adaptive Campaigning: Operations targeting “Cyber CoGs” are unlikely to be single events. Influencing moral cohesion or perceived reality is a complex, interactive process involving continuous adaptation to feedback loops, narrative shifts, and adversary countermeasures. The aim is often cognitive degradation or displacement, subtly altering the adversary’s decision-making calculus over time.
  • Strategic, Not Just Tactical, Focus: While drawing on tools from traditional information warfare or psychological operations, the concept of “Cyber CoGs” pushes for a more strategically ambitious focus on these Clausewitzian centers of power, wherever they may reside. When a CoG itself is located in the moral, political, or epistemic domains, cyber-enabled operations can become a key component of strategic engagement.

Navigating the Ethical and Legal Landscape

The capacity to strategically influence an adversary’s societal beliefs and perceived reality carries a profound ethical burden and operates within a complex legal landscape. Responsible statecraft demands a deliberate moral calculus, especially in the ambiguous “grey zone”. The Tallinn Manual 2.0, for instance, provides detailed interpretations of how international law applies to cyber operations, including complex issues around sovereignty, non-intervention, and due diligence. Operations that aim to alter perception or manipulate societal beliefs can brush up against these established and evolving legal interpretations. Pursuing strategic goals through such means requires careful navigation to avoid widespread societal disruption or unintended consequences that could undermine international order. There is also the risk of “blow-back,” where the methods used externally could erode internal democratic norms if not carefully managed.

Integrating New Concepts into Strategic Thinking

The future of conflict is undeniably intertwined with the contested terrains of perception, belief, and societal cohesion. Exploring concepts like “Cyber Centres of Gravity” can help us theorise and analyse these critical nodes of will, unity, and perceived reality. This endeavor is less about new technologies and more about refining our understanding of strategy itself: to influence an adversary’s will or alter their perceived reality to achieve strategic aims, through means that are proportionate, precise, and adapted to the evolving character of modern conflict.

Failing to adapt our thinking, to build the necessary multidisciplinary approaches, and to foster the institutional agility to operate in this transformed strategic landscape is a risk to our future strategic effectiveness.

Selected Bibliography

  • Brittain-Hale, Angela. “Clausewitzian Theory of War in the Age of Cognitive Warfare.” The Defense Horizon Journal (2023): 1–19.
  • Clausewitz, Carl von. On War. Edited and translated by Michael Howard and Peter Paret. Princeton, NJ: Princeton University Press, 1976.
  • Echevarria, Antulio J. “Clausewitz’s Center of Gravity: Changing Our Warfighting Doctrine—Again!” Carlisle, PA: Strategic Studies Institute, 2002.
  • Gartzke, Erik. “The Myth of Cyberwar: Bringing War in Cyberspace Back Down to Earth.” International Security 38, no. 2 (2013): 41–73.
  • Krieg, Andreas. Subversion: The Strategic Weaponization of Narratives. London: Routledge, 2023.
  • Lin, Herbert, and Jackie Kerr. “On Cyber-Enabled Information/Influence Warfare and Manipulation.” Center for International Security and Cooperation, Stanford University, 2017.
  • Pawlak, Patryk. “Cognitive Warfare: Between Psychological Operations and Narrative Control.” EUISS Brief, 2022.
  • Schmitt, Michael N., ed. Tallinn Manual 2.0 on the International Law Applicable to Cyber Operations. Cambridge: Cambridge University Press, 2017.

Is GCAP a Necessary Investment in UK Air Power Sovereignty, or a High-Risk Gamble?

The United Kingdom’s commitment to the Global Combat Air Programme (GCAP), in partnership with Italy and Japan, represents the most significant defence investment decision of this generation. Faced with an increasingly contested and volatile world and the limitations of current air power assets against proliferating advanced threats, the UK seeks a sixth-generation capability intended to secure air dominance and strategic advantage well into the mid-21st century. This analysis contends that while the strategic desire for GCAP is understandable, particularly the drive for sovereign capability, its necessity hinges critically on unproven technological assumptions, optimistic cost and schedule projections, and a specific view of future warfare that may not materialise. Therefore, continued UK participation should be contingent on meeting stringent, pre-defined cost, schedule, and capability gateways, with failure triggering consolidation or cancellation.

Defining the Sixth-Generation Ambition

GCAP aims to deliver more than just a replacement for the RAF’s Eurofighter Typhoon; it embodies a conceptual leap towards a ‘system of systems.’ The envisioned capability includes a core manned stealth platform (‘Tempest’) acting as a command node, integrated with uncrewed Collaborative Combat Aircraft (CCAs or ‘loyal wingmen’), all connected via a resilient ‘combat cloud’. Key technological differentiators include advanced AI for data fusion and decision support, next-generation sensors providing unprecedented situational awareness (such as the developmental ISANKE/ICS suite), adaptive engines offering performance flexibility, and an open systems architecture for rapid upgrades. This technological ambition, pursued trilaterally under dedicated governmental (GIGO) and industrial joint venture structures headquartered in the UK, aims to deliver not just an aircraft, but a step-change in air combat capability by its ambitious 2035 target date. However, this vision immediately flags a core vulnerability: the entire concept is critically dependent on secure, high-bandwidth connectivity that is a prime target for adversary electronic warfare and cyber-attacks.

Strategic Rationale

GCAP is positioned as essential for UK grand strategy, aligning with the Integrated Review’s goals of technological advantage, global power projection (including the Indo-Pacific tilt), and contributing high-end capability to NATO. A primary driver is the pursuit of national sovereignty – defined as “Freedom of Action” and “Freedom of Modification” – avoiding dependence on allies, particularly the US. Past experiences, such as reported US control over integrating certain UK weapons onto the F-35 platform, fuel this desire for independent control over critical capabilities.

Yet, this pursuit of sovereignty within a deeply collaborative international programme creates inherent tensions. True freedom of action requires open technology sharing between partners, potentially conflicting with national industrial interests or security concerns, as highlighted by recent Italian ministerial comments about UK reluctance on tech access. Furthermore, the incorporation of some US subsystems – for example, advanced Gallium Nitride (GaN) transmitter modules crucial for next-generation radar and electronic warfare systems, which often fall under strict US export controls – could still subject GCAP to US ITAR restrictions. This would potentially negate the desired export freedom and sovereignty regardless of trilateral agreements. The strategic question is whether the immense premium paid for national control via GCAP outweighs the proven capability and interoperability benefits of alternatives, like an expanded F-35 fleet.

Military Utility

The core military case for GCAP rests on its ability to operate in the most highly contested environments anticipated post-2035, specifically penetrating and dismantling advanced Integrated Air Defence Systems (IADS). This high-end SEAD/DEAD mission is presented as a capability gap that existing platforms cannot fill. Enhanced range, beneficial for UK global deployments, is another selling point. However, the likelihood of the UK needing to conduct such demanding missions unilaterally is debatable.

Many analysts question whether the cost justifies such a niche capability. Could upgraded Typhoons (contingent on successful ECRS Mk2 radar integration) and the existing F-35 fleet, armed with next-generation stand-off missiles and supported by more numerous, cheaper drones, achieve strategically sufficient effects against likely threats? While GCAP promises the ultimate air dominance tool – a bespoke rapier for peer conflict – the UK might derive better overall utility from a more flexible, affordable mix of capabilities resembling a Swiss Army knife.

Costs

Transparency on GCAP’s ultimate cost remains elusive. The UK has committed £2 billion initially and budgeted £12 billion over the next decade, while partner estimates suggest a total programme investment potentially exceeding €40 billion by 2035 merely to reach initial production. Unit fly-away cost estimates are highly speculative but frequently placed in the £150-£250 million range per core aircraft – significantly higher than the F-35B. This excludes the substantial costs of developing and procuring the necessary CCA fleets – with public estimates for ‘loyal wingman’ concepts varying widely, typically between £5 million and £25 million per drone – plus ground infrastructure, and network hardening.

Illustrative Unit Cost Impact (UK Share – Hypothetical 100 core aircraft buy):

  • @ £150m/unit: £15 billion procurement
  • @ £200m/unit: £20 billion procurement
  • @ £250m/unit: £25 billion procurement

(Illustrative procurement costs for the core platform only, excluding R&D share, CCA costs, and lifetime support.)

This level of expenditure inevitably forces stark choices. Within defence, it competes directly with funding for the Royal Navy, the Army’s modernisation, and crucial investments in space and cyber domains. Outside defence, this sum dwarfs spending on critical public services. The opportunity cost is immense, demanding certainty that GCAP delivers uniquely essential capability unavailable through less expensive means.
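The illustrative figures above are simple multiplication, and the sketch below reproduces them, then adds an equally hypothetical tranche for the ‘loyal wingman’ drones the text prices at £5 million to £25 million each. The 100-aircraft buy comes from the table; the 200-drone fleet size is invented purely for illustration, not an official planning figure.

```python
# Reproduces the illustrative unit-cost table above and bolts on a
# notional CCA tranche. All figures are planning assumptions from the
# text, or invented here for illustration; none are official costs.

CORE_AIRCRAFT = 100               # hypothetical UK buy, as in the table
UNIT_COSTS_M = [150, 200, 250]    # £m per core aircraft, as in the table

for unit in UNIT_COSTS_M:
    total_bn = CORE_AIRCRAFT * unit / 1000
    print(f"@ £{unit}m/unit: £{total_bn:.0f} billion procurement")

# The text quotes £5m-£25m per 'loyal wingman'; a notional 200-drone
# fleet therefore adds roughly £1bn-£5bn on top of the core buy.
cca_low = 200 * 5 / 1000
cca_high = 200 * 25 / 1000
print(f"Notional 200 CCAs: £{cca_low:.0f}-£{cca_high:.0f} billion extra")
```

Even on the cheapest line of the table, in other words, the core buy alone rivals several years of the entire RAF equipment budget before a single drone, radar upgrade, or line of combat-cloud software is paid for.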

Industrial Strategy vs. Economic Reality

The argument for GCAP often leans heavily on industrial benefits: sustaining the UK’s sovereign combat air sector, supporting tens of thousands of high-skilled jobs, driving R&D, and enabling exports. Partnering with Italy and Japan is key to achieving the scale necessary for viability. However, large defence programmes create path dependency, making it politically difficult to cancel or curtail the programme even if strategic or financial justifications weaken. The programme must deliver genuine value for money, not just serve as industrial life support.

Technological Risk

GCAP is predicated on successfully mastering multiple cutting-edge technologies concurrently, presenting significant risk. Key areas include:

  • Adaptive Engines: Achieving a mature, reliable variable-cycle engine certified for flight by the required date remains a major hurdle, with full demonstrator engines yet to complete testing. Risk: High
  • AI/Autonomy: Developing certifiable AI for mission-critical functions and effective human-machine teaming is technologically complex and ethically challenging. Integrating this seamlessly with CCA control adds layers of difficulty. Risk: High
  • Stealth & Materials: Achieving next-generation broadband stealth requires advanced materials and manufacturing techniques still scaling up. Risk: Medium
  • Networking & Software: Creating a secure, resilient, interoperable ‘combat cloud’ integrating systems from three nations is the highest risk area, prone to delays and vulnerabilities. Risk: Very High

Failure or significant delay in any one of these critical paths will derail the entire programme or force capability compromises that undermine its rationale. The F-35’s protracted software development provides a stark warning.

Systemic Vulnerabilities and Integration Challenges

The network-centric ‘system of systems’ concept, while powerful in theory, is inherently vulnerable. The reliance on continuous data flow makes the combat cloud a prime target for jamming, cyber-attack, and kinetic strikes against space assets. Ensuring resilience requires costly hardening measures often excluded from baseline programme costs. Integrating GCAP effectively with legacy UK platforms (Typhoon, F-35B) and wider NATO systems presents significant technical hurdles, particularly regarding secure data-link compatibility. Furthermore, the parallel, nationally-led development of CCAs creates a major integration risk – ensuring these vital adjuncts are ready, affordable, and fully interoperable by 2035 is far from guaranteed.

Failure Scenarios

While outright cancellation carries severe consequences – a major capability gap as Typhoons (whose operational life depends on successful upgrades) retire, industrial collapse, and irreparable diplomatic damage – significant delays also pose serious threats. A slip of 2-5 years past the 2035 IOC would necessitate costly life-extension programmes for the Typhoon fleet, potentially overlap awkwardly with F-35B support cycles, and could force a reconsideration of procuring land-based F-35As for the RAF to bridge the gap. Such delays would inevitably inflate overall programme costs and erode partner confidence, risking a slow collapse.

A Framework for Managing the Risks

Given the immense stakes and inherent uncertainties, the UK requires clear decision points and off-ramps for GCAP. Continued investment should be conditional:

  1. Sovereignty Definition: Explicitly define the specific sovereign modification and action freedoms GCAP must deliver (beyond F-35 limitations) and verify these are achievable without ITAR constraints on core systems.
  2. Budgetary Ceiling & Trade-offs: Establish a firm ceiling for the UK’s total R&D and procurement contribution, linked to clear decisions in the upcoming Strategic Defence Review on which other capabilities will be curtailed or cancelled to fund it.
  3. Performance Gates & Kill-Switch: Define non-negotiable technical milestones (e.g., successful demonstrator flight by 2027/28, integrated core systems test by 2030) and cost/schedule thresholds. A breach beyond a pre-agreed margin (e.g., 20% cost overrun or 2-year schedule slip by 2028-2030) should trigger an automatic review with consolidation or cancellation as default options unless compelling justification for continuation is presented.
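The kill-switch in point 3 is, at bottom, a decision rule, and the sketch below encodes the two thresholds the text quotes (a 20% cost overrun and a two-year schedule slip). The function, its inputs, and the returned labels are illustrative assumptions of mine, not a description of any actual MoD review process.

```python
# Sketch of the performance-gate 'kill-switch' described above.
# The thresholds (20% cost overrun, 2-year slip) come from the text;
# everything else is an illustrative assumption.

COST_OVERRUN_LIMIT = 0.20   # fraction above the agreed budget ceiling
SCHEDULE_SLIP_LIMIT = 2     # years past the agreed milestone date

def gate_decision(cost_overrun: float, slip_years: float,
                  milestones_met: bool) -> str:
    """Return the default programme action at a pre-agreed review gate."""
    breached = (cost_overrun > COST_OVERRUN_LIMIT
                or slip_years > SCHEDULE_SLIP_LIMIT
                or not milestones_met)
    # A breach triggers review with consolidation/cancellation as the
    # DEFAULT outcome; continuation must be positively justified.
    if breached:
        return "review: consolidate or cancel by default"
    return "continue"

print(gate_decision(cost_overrun=0.10, slip_years=1, milestones_met=True))
print(gate_decision(cost_overrun=0.25, slip_years=0, milestones_met=True))
```

The point of making cancellation the default branch, rather than an option argued for from scratch, is to counter the path dependency discussed earlier: the burden of proof shifts onto continuation once a gate is breached.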

Conclusion

Does the UK need GCAP? Conditionally, yes. If maintaining a fully independent capability to defeat the most advanced air defences globally post-2035 is a non-negotiable strategic requirement, and if the industrial and geopolitical benefits of leading a trilateral programme outweigh the risks, then GCAP becomes a strategic necessity. However, both conditions rest on assumptions about future threats, technological feasibility, cost control, and partner reliability that are far from certain.

It is not a programme to be pursued out of blind faith or industrial inertia. Proceeding demands rigorous scrutiny, transparent accounting, realistic assessment of alternatives, and clearly defined performance metrics with consequences. Without such discipline, the UK risks pouring vast resources into a programme that, while technologically dazzling, may arrive too late, cost too much, or address yesterday’s perceived threats, ultimately failing to deliver the security it promises. The strategic wager has been placed. Ensuring it doesn’t break the bank requires vigilance, realism, and the political courage to fold if the odds turn decisively against it.

Bibliography

BAE Systems. “Assessment of the expected economic impact of the Future Combat Air System programme (2025-2070)” Accessed via BAE Systems website, October 28, 2024. 

BAE Systems. “Global Combat Air Programme.” BAE Systems Media. Accessed April 22, 2025.

Bronk, Justin. “The Global Combat Air Programme is Writing Cheques that Defence Can’t Cash.” RUSI Commentary, April 27, 2023.

Bronk, Justin. “Integrating Typhoon and F-35: The Key to Future British Air Power.” RUSI Defence Systems, February 2016.

Bronk, Justin. “Large, Crewed Sixth-Generation Aircraft Have Unique Value in the Indo-Pacific.” RUSI Commentary, March 5, 2025.

Bronk, Justin. “Unlocking Sixth-Gen Air Power: Inside the Military Capability for GCAP.” RUSI Commentary. Accessed April 22, 2025.

Cranny-Evans, Sam, and Justin Bronk. “How Export Controls Endanger the West’s Military Technology Advantage.” RUSI Commentary, August 2, 2024.

House of Commons Library. “The forthcoming strategic defence review 2025: FAQ.” Research Briefing CBP-10153, March 26, 2025.

House of Commons Library. “What is the Global Combat Air Programme (GCAP)?” Research Briefing CBP-10143. Accessed April 22, 2025.

IAI (Istituto Affari Internazionali). “New Partnership among Italy, Japan and the UK on the Global Combat Air Programme (GCAP).” IAI Papers 25|03, March 2025.

Japan, Ministry of Defense. “Global Combat Air Programme.” MoD Website. Accessed April 22, 2025.

The Aviationist. “The GCAP Program: A Step Toward Europe’s Military Autonomy and Interoperability.” March 17, 2025.

The Aviationist. “Delivering GCAP by 2035 Is Not Easy as it Needs to Break the Mold and Avoid Mistakes, Says UK Report.” January 15, 2025.

UK Defence Journal. “Report highlights challenges for new British stealth jet.” January 14, 2025.

UK Government. “Defence’s response to a more contested and volatile world.” Defence Command Paper 2023. Accessed April 22, 2025.

UK Government. “Integrated Review Refresh 2023: Responding to a more contested and volatile world.” Accessed April 22, 2025.

UK Parliament, House of Commons Defence Committee. “Global Combat Air Programme.” HC 598, January 14, 2025.

Watkins, Peter. “The Damage from Doubt: Labour’s Clumsy Handling of the GCAP Programme.” RUSI Commentary, September 12, 2024.

Zona Militar. “Italy accuses the United Kingdom of not sharing key technologies for the development of the new sixth-generation GCAP fighter.” April 21, 2025.

Insurgency vs. Terrorism: What’s the Difference?

I’ve created a video on the difference between the definitions of insurgency and terrorism. While both involve violence and political motivations, I explore why understanding their key differences is essential. The video covers historical examples and the blurred lines between the two concepts, which should help shed light on the political implications behind the labels we use.

Please do feel free to reach out and discuss anything in the video, or leave a comment if you would prefer.

How Grey Zone Warfare Exploits the West’s Risk Aversion

Western democracies are caught in a strategic bind. Adversaries, skilled at operating in the murky “grey zone” between peace and open warfare, are exploiting a fundamental Western characteristic: risk aversion. Grey zone warfare blends cyberattacks, disinformation, economic coercion, and proxy warfare to achieve strategic goals without triggering a full-scale military response. The risk is not merely theoretical. One might argue that the resulting ambiguity produces a kind of strategic paralysis, one that leaves Western states unable or unwilling to respond decisively to threats that refuse the comfort of clear categorisation.

A 21st-Century Threat

Grey zone warfare encompasses more than just cyberattacks and disinformation. Think of cyberattacks that cripple infrastructure but stop short of causing mass casualties, disinformation campaigns that sow discord and erode trust in institutions, and the use of proxy forces to destabilise a region. Crucially, it also includes economic coercion. China’s Belt and Road Initiative, with its potential for creating debt traps and strategic dependencies, is a prime example. Russia’s use of energy supplies as a political weapon, particularly against European nations, is another. The key is plausible deniability: keeping the target unable to definitively attribute actions, which in turn makes it harder for states to justify a strong response. The underlying ambition is to achieve strategic objectives, be it weakening an adversary, gaining leverage, or shaping policy outcomes, all while avoiding the threshold of open military conflict. We see this in China’s response to Lithuania’s engagement with Taiwan, where trade sanctions were used as a punitive measure. Similarly, the West’s reliance on Chinese rare earth minerals creates a vulnerability that can be exploited for political leverage.

Grey Zones as a Strategic Vulnerability

The West, particularly Europe and North America, has a deeply ingrained preference for diplomacy and de-escalation. This isn’t necessarily a bad thing as it stems from a genuine desire to avoid the horrors of large-scale war and maintain a stable global order. But this risk aversion, while understandable, has become a strategic vulnerability. Adversaries see this hesitation and tailor their actions accordingly. They operate just below the threshold of what would trigger a decisive military response, creating a constant dilemma for Western leaders: how to respond effectively without escalating the situation into a wider conflict?

Ukraine is a tragic textbook example of grey zone warfare in action. Russia’s strategy goes far beyond conventional military force. It includes crippling cyberattacks on Ukrainian infrastructure, a relentless barrage of disinformation aimed at undermining the Ukrainian government and sowing discord, and the backing of separatist movements to create internal instability. These actions are calculated to achieve Russia’s goals while staying below the threshold that would provoke a direct military intervention from NATO. The Western response, consisting primarily of sanctions and diplomatic pressure, reveals the core problem. While intended to punish Russia and deter further aggression, this relatively restrained approach has, perhaps, enabled Russia to continue its grey zone operations, demonstrating the difficulty of countering these tactics without risking a wider war. The continued, grinding conflict, and the incremental nature of Western support, highlight the limitations of a purely reactive, risk-averse strategy.

The Erosion of American Global Leadership and Europe’s Quest for Strategic Autonomy

One might observe that the erosion of American global leadership (accelerated, though not solely caused, by the Trump administration) has unsettled the transatlantic alliance in ways that are still playing out. Actions such as imposing tariffs on allies, questioning NATO’s relevance, and the perceived (and sometimes explicit) wavering of commitment to Article 5’s collective defence clause have created a climate of uncertainty. European nations are now grappling with a fundamental question: can they rely on the US security umbrella? This doubt isn’t just theoretical; it’s driving concrete policy changes.

This uncertainty has fuelled a push for European “strategic autonomy” and the ability to act independently in defence and foreign policy. Figures like French President Macron have long championed this idea, and it’s gaining traction across the continent. Even in the UK, traditionally a staunch US ally, Labour leader Keir Starmer has emphasised the need for increased defence spending and closer European security cooperation. Germany’s Zeitenwende, its historic shift towards rearmament, is a direct response to this new reality. These are not just rhetorical flourishes; they represent a fundamental rethinking of European security, driven by a perceived need to fill the void left by a less predictable and less engaged United States. The debate over a European army, or a more coordinated European defence force, is no longer fringe; it’s becoming mainstream.

Strategic Paralysis Under the Clausewitzian Lens

This brings us to the heart of the matter: strategic paralysis. The West, caught between a desire to avoid escalation and the need to respond effectively, often finds itself frozen. This is the sort of effect to which grey zone tactics aspire, though whether paralysis is a design or an emergent consequence remains open to debate. By fostering ambiguity, where traditional responses appear either disproportionate or politically fraught, adversaries create the very conditions in which Western decision-making risks becoming paralysed. The fear of “provoking” a larger conflict becomes a weapon in itself. As Clausewitz argued, war is an extension of politics. Grey zone conflict is simply an extension of war by subtler means, one designed to neutralise the West’s ability to make political decisions with clarity.

Looking at the situation, it could be suggested that Western states would do well to move beyond rhetorical condemnation or reactive sanctions. Addressing the breadth of grey zone threats requires not only the technical apparatus to respond, but also a reconsideration of what risks must be borne, and what forms of resilience truly matter. Societal awareness, for instance, is not a panacea, but a necessary condition for resisting disinformation and political interference.

If Western governments are to avoid strategic paralysis, their response cannot rely solely on traditional deterrence or diplomatic ritual. Perhaps the focus should shift toward nurturing resilience – not just through technological investment or alliance-building, but by cultivating an informed citizenry, capable of recognising manipulation in its many guises. The challenge is not merely technical, nor simply a matter of resolve either.

Concluding Reflections

Grey zone tactics have flourished amid Western risk aversion and a prevailing uncertainty over deterrence. It could be suggested that the greater risk, at times, lies in mistaking inertia for prudence. Whether Western policymakers can recalibrate their tolerance for ambiguity, and adapt to the subtler forms of coercion now in play, remains an open question – one on which the resilience of the international order may quietly depend. I would argue that it is not merely the West’s material strength, but the demonstration of resolve (and a measure of unpredictability) that will matter most. In the end, the future of the liberal international order may depend less on declarations of intent than on the willingness to accept calculated risk: whether Western states can move beyond a posture of predictable restraint, or whether caution will continue to invite opportunism, is the most pressing question of this new era of conflict.

Bibliography

American Military University. “Gray Zone Attacks by Russia Being Used to Undermine Ukraine.” AMU Edge, May 12, 2023. https://amuedge.com/gray-zone-attacks-by-russia-being-used-to-undermine-ukraine/.

Chivvis, Christopher S. Understanding Russian “Hybrid Warfare” and What Can Be Done About It. Santa Monica, CA: RAND Corporation, 2017. https://www.rand.org/pubs/testimonies/CT468.html.

Gray, Colin S. Another Bloody Century: Future Warfare. London: Phoenix, 2005.

Military Strategy Magazine. “Deterring War Without Threatening War: Rehabilitating the West’s Risk-Averse Approach to Deterrence.” Military Strategy Magazine, 1 April 2023. https://www.militarystrategymagazine.com/article/deterring-war-without-threatening-war-rehabilitating-the-wests-risk-averse-approach-to-deterrence/.

Onsolve. “Gray Zone Warfare: What Business Leaders Need to Know.” Onsolve Blog, March 2024. https://www.onsolve.com/blog/sra-gray-zone-warfare-business-leaders/.

Rid, Thomas. Cyber War Will Not Take Place. London: C. Hurst & Co., 2013.

The Wall Street Journal. “Trump Is Overturning the World Order That America Built.” WSJ, January 25, 2024. https://www.wsj.com/world/trump-is-overturning-the-world-order-that-america-built-10981637.

The New Yorker. “What’s Next for Ukraine?” The New Yorker, February 2024. https://www.newyorker.com/news/the-lede/whats-next-for-ukraine.

Why Technology Alone Doesn’t Win Wars

We often assume that the latest military technology will define the future of warfare. AI, cyber weapons, and autonomous drones are hailed as game-changers, just as tanks, aircraft, and nuclear weapons were in past eras. But history tells a different story, one where new technology is only as effective as the strategy, doctrine, and human adaptation behind it.

In this video, we explore David Edgerton’s critique of technological determinism, the idea that wars are shaped by cutting-edge innovation alone. From ancient weapons to modern cyber warfare, we show why old technologies persist, how armies adapt, and why war remains a contest of resilience, not just hardware.

The Real Lesson of Military Technology

The biggest mistake in war isn’t failing to develop new technology; it’s assuming that technology alone will guarantee victory. History shows that the best weapons don’t always win battles; the forces that adapt, integrate, and sustain themselves over time do.

What do you think? Are we overhyping AI and cyber warfare today, just as people once overhyped battleships or air power?

Europe’s Leadership Vacuum in the Shadow of Russia and America

The concept of ‘strategic culture’ as critiqued in Hew Strachan’s “The Direction of War: Contemporary Strategy in Historical Perspective” emphasises continuity and a nation’s resistance to change, shaped historically and geographically. Strategic culture includes historical memory, institutional norms, core national values, and collective threat perceptions, all contributing to a nation’s strategic posture. This comprehensive framework is valuable when examining Europe’s contemporary security challenges, specifically the strategic vacuum exposed by the war in Ukraine and America’s ongoing withdrawal from global leadership.

Europe’s Strategic Culture

European strategic culture, forged during the Cold War, assumed stability through American military and diplomatic leadership. Strachan argues convincingly that such cultural assumptions hinder strategic flexibility, creating vulnerabilities when geopolitical realities shift dramatically, as they have since Russia’s invasion of Ukraine in 2022.

NATO-centric thinking, predicated on the guarantee of American power projection, has revealed problematic inertia. European states, notably the UK and EU members, have found themselves scrambling to define a coherent, autonomous response.

America’s Strategic Shift from Protector to Competitor

America’s strategic withdrawal from Europe – evidenced by Obama’s pivot to Asia, accelerated by Trump V1.0’s transactional approach and Biden’s reticence, and culminating in Trump 2.0’s recent dramatic geopolitical hand grenades – reflects not merely a change in policy but a radical break from previous expectations. It is a revolutionary, not evolutionary, shift in global strategy, shattering Europe’s assumption of guaranteed U.S. engagement.

Strategically, this creates immediate tensions:

  • The U.S. increasingly frames its engagement with Europe as transactional and conditional upon shared responsibilities, as demonstrated by U.S. ambivalence toward NATO under Trump and Biden’s conditional engagement in Ukraine.
  • Simultaneously, Russia’s aggression has starkly shown that the belief in a diminished threat from inter-state warfare, fashionable among policymakers since the Cold War’s end, is dangerously misplaced. Strachan’s scepticism about overly optimistic predictions of war’s obsolescence resonates strongly here, given recent events.

This combination reveals Europe’s strategic culture as critically unprepared for the harsh geopolitical realities of the 21st century.

Europe’s Strategic Awakening

Europe has not been entirely inactive. The EU’s Strategic Compass, adopted in 2022, and the UK’s Integrated Review Refresh in 2023 demonstrate genuine acknowledgement of new realities. These documents move beyond purely reactive policies and represent Europe’s incremental shift towards strategic autonomy:

  • Increased defence expenditure: Germany’s Zeitenwende is a prime example.
  • Increased EU defence coordination, exemplified by the European Peace Facility funding Ukraine’s defence.
  • Renewed commitment to territorial defence and enhanced military deployments in Eastern Europe.

Yet, despite these efforts, the doctrinal and strategic mindset change has been incomplete. European policies continue to implicitly rely on the assumption of sustained U.S. involvement, despite public and political statements affirming Europe’s need for self-sufficiency.

Russia and America as Mirrors

The actions of Russia and the retreat of America each independently expose the inadequacies of Europe’s current strategic posture:

Russia’s Actions: Highlighted Europe’s continuing strategic vulnerability, emphasising weaknesses in rapid military deployment, critical capability gaps (such as long-range precision munitions and air defence), and dependence on U.S. logistical, intelligence, and strategic capabilities.

America’s Pivot Away: Underscores that strategic autonomy isn’t merely desirable but imperative. The Biden administration’s reluctance to escalate beyond certain lines in Ukraine and Washington’s growing Indo-Pacific focus expose a stark misalignment between European expectations and American strategy. The most recent signals from Trump send an unequivocal message to Europe: unless there is something in it for America, you are on your own.

The Limits of Integration and NATO

While deeper European integration and renewed commitment to NATO might appear sufficient, these solutions alone are inadequate. Integration without clear autonomous capabilities risks perpetual dependency, and NATO’s structure, inherently reliant on American leadership, cannot compensate for America’s strategic reorientation. As Strachan underscores, relying purely on continuity without adaptability is strategically naive.

From Reactive Culture to Proactive Realism

Europe’s security doctrine requires nuanced recalibration rather than wholesale abandonment. The gap is not merely military, it is doctrinal, conceptual, and philosophical. A robust European strategic doctrine should:

  1. Recognise NATO’s Limitations: Explicitly acknowledge NATO’s limitations without undermining its centrality to European defence.
  2. Embed Strategic Autonomy: Clearly outline Europe’s independent capabilities and strategic objectives, moving beyond rhetoric to practical operational frameworks. Europe must realistically assess scenarios in which it may need to act without guaranteed American backing.
  3. Rethink Strategic Culture: Move beyond traditional assumptions of continuity—what previously seemed unthinkable, such as large-scale inter-state conflict, must become integral to planning and preparedness again.

Engaging Broader Perspectives

Drawing briefly from constructivist insights, strategic culture is not immutable but socially constructed, implying that European nations have the agency to reshape it consciously. Additionally, realist thinkers like John Mearsheimer caution against complacency in alliance politics, reinforcing the need for independent European capabilities.

Rethinking Doctrine for Strategic Resilience

The UK’s Integrated Review and the EU’s Strategic Compass represent valuable first steps toward a more strategic and independent Europe. However, they still fall short of addressing the fundamental gap that Russia’s aggression and America’s strategic recalibration have exposed.

Addressing Europe’s leadership vacuum demands overcoming historical and cultural inertia. It requires strategic humility: recognising that the stability provided by Cold War-era assumptions no longer applies, that threats are tangible, and that peace through strength must be anchored not in external assurances, but in Europe’s credible, independently sustainable power.

Europe must confront this reality head-on, accepting change not merely rhetorically but operationally, doctrinally, and culturally. Only then will Europe secure genuine strategic autonomy, prepared not just for today’s threats but also for tomorrow’s inevitable uncertainties.

Bibliography

  • Strachan, Hew. The Direction of War: Contemporary Strategy in Historical Perspective. Cambridge University Press, 2013.
  • European Union. “Strategic Compass for Security and Defence.” 2022.
  • United Kingdom Government. “Integrated Review Refresh.” 2023.
  • Mearsheimer, John J. The Tragedy of Great Power Politics. W. W. Norton & Company, 2001.
  • Smith, Rupert. The Utility of Force: The Art of War in the Modern World. Penguin, 2005.

[Video] UK and EU AI Influence

Artificial intelligence isn’t just reshaping industries—it’s reshaping reality. While the UK and EU focus on regulating AI and combating misinformation, adversarial states like Russia and China are weaponising it for influence warfare. The AI-driven disinformation battle isn’t coming; it’s already here.

In my latest article, “Why the UK and EU Are Losing the AI Influence War”, I explore how Europe’s slow response, defensive posture, and reliance on outdated regulatory approaches are leaving it vulnerable to AI-enhanced propaganda campaigns.

To bring these ideas to life, I’ve created a video that visualises the scale of the challenge and why urgent action is needed. Watch it below:

The AI influence war is no longer a hypothetical—it’s unfolding in real-time. Europe’s current strategies are reactive and insufficient, while adversaries leverage AI to manipulate narratives at unprecedented speed. Without a cognitive security unit, AI-powered countermeasures, and a national security-driven approach, the UK and EU risk losing control of their own information space.

The question isn’t whether AI will reshape public perception, it’s who will be in control of that perception. Will Europe rise to the challenge, or will it remain a passive battleground for AI-driven narratives?

What do you think? Should the UK and EU take a more aggressive stance in countering AI-enhanced disinformation? Feel free to discuss in the comments.

Why the UK and EU Are Losing the AI Influence War

Abstract

Western democracies face a new front in conflict: the cognitive battlespace, where artificial intelligence (AI) is leveraged to shape public opinion and influence behaviour. This article argues that the UK and EU are currently losing this AI-driven influence war. Authoritarian adversaries like Russia and China are deploying AI tools in sophisticated disinformation and propaganda campaigns, eroding trust in democratic institutions and fracturing social cohesion. In contrast, the UK and EU response, focused on regulation, ethical constraints, and defensive measures, has been comparatively slow and fragmented. Without a more proactive and unified strategy to employ AI in information operations and bolster societal resilience against cognitive warfare, Western nations risk strategic disadvantage. This article outlines the nature of the cognitive battlespace, examines adversarial use of AI in influence operations, evaluates UK/EU efforts and shortcomings, and suggests why urgent action is needed to regain the initiative.

Introduction

Modern conflict is no longer confined to conventional battlefields; it has expanded into the cognitive domain. The term “cognitive battlespace” refers to the arena of information and ideas, where state and non-state actors vie to influence what people think and how they behave. Today, advances in AI have supercharged this domain, enabling more sophisticated influence operations that target the hearts and minds of populations at scale. Adversaries can weaponise social media algorithms, deepfakes, and data analytics to wage psychological warfare remotely and relentlessly.

Western governments, particularly the United Kingdom and European Union member states, find themselves on the defensive. They face a deluge of AI-enhanced disinformation from authoritarian rivals but are constrained by ethical, legal, and practical challenges in responding. Early evidence suggests a troubling imbalance: Russia and China are aggressively exploiting AI for propaganda and disinformation, while the UK/EU struggle to adapt their policies and capabilities. As a result, analysts warn that Western democracies are “losing the battle of the narrative” in the context of AI (sciencebusiness.net). The stakes are high: if the UK and EU cannot secure the cognitive high ground, they risk erosion of public trust, social discord, and strategic loss of influence on the world stage.

This article explores why the UK and EU are lagging in the AI influence war. It begins by defining the cognitive battlespace and the impact of AI on information warfare. It then examines how adversaries are leveraging AI in influence operations. Next, it assesses the current UK and EU approach to cognitive warfare and highlights key shortcomings. Finally, it discusses why Western efforts are falling behind and what the implications are for future security.

The Cognitive Battlespace in the Age of AI

In cognitive warfare, the human mind becomes the battlefield. As one expert succinctly put it, the goal is to “change not only what people think, but how they think and act” (esdc.europa.eu). This form of conflict aims to shape perceptions, beliefs, and behaviours in a way that favours the aggressor’s objectives. If waged effectively over time, cognitive warfare can even fragment an entire society, gradually sapping its will to resist an adversary.

Artificial intelligence has become a force multiplier in this cognitive domain. AI algorithms can curate individualised propaganda feeds, amplify false narratives through bot networks, and create realistic fake images or videos (deepfakes) that blur the line between truth and deception. According to NATO’s Allied Command Transformation, cognitive warfare encompasses activities to affect attitudes and behaviours by influencing human cognition, effectively “modifying perceptions of reality” as a new norm of conflict (act.nato.int). In essence, AI provides powerful tools to conduct whole-of-society manipulation, turning social media platforms and information systems into weapons.

A vivid example of the cognitive battlespace in action occurred in May 2023, when an AI-generated image of a false Pentagon explosion went viral. The fake image, disseminated by bots, briefly fooled enough people that it caused a sharp but temporary dip in the U.S. stock market. Though quickly debunked, this incident demonstrated the “catastrophic potential” of AI-driven disinformation to trigger real-world consequences at machine speed (mwi.westpoint.edu). Generative AI can manufacture convincing yet false content on a massive scale, making it increasingly difficult for populations to discern fact from fabrication.

In the cognitive battlespace, such AI-enabled tactics give malign actors a potent advantage. They can rapidly deploy influence campaigns with minimal cost or risk, while defenders struggle to identify and counter each new false narrative. As the information environment becomes saturated with AI-amplified propaganda, the traditional defenders of truth, journalists, fact-checkers, and institutions, find themselves overwhelmed. This asymmetry is at the heart of why liberal democracies are in danger of losing the cognitive war if they do not adapt quickly.

Adversaries’ AI-Driven Influence Operations

Russia and China have emerged as leading adversaries in the AI-enabled influence war, honing techniques to exploit Western vulnerabilities in the cognitive domain. Russia has a long history of information warfare against the West and has eagerly integrated AI into these efforts. Through troll farms and automated bot networks, Russia pushes AI-generated propaganda designed to destabilise societies. Moscow views cognitive warfare as a strategic tool to “destroy [the West] from within” without firing a shot. Rather than direct military confrontation with NATO (which Russia knows it would likely lose), the Kremlin invests in “cheap and highly effective” cognitive warfare to undermine Western democracies from inside (kew.org.pl).

Russian military thinkers refer to this concept as “reflexive control,” essentially their doctrine of cognitive warfare. The idea is to manipulate an adversary’s perception and decision-making so thoroughly that the adversary “defeats themselves”. In practice, this means saturating the information space with tailored disinformation, conspiracy theories, and emotionally charged content to break the enemy’s will to resist. As one analysis describes, the battleground is the mind of the Western citizen, and the weapon is the manipulation of their understanding and cognition. By exploiting human cognitive biases, our tendencies toward emotional reaction, confirmation bias, and confusion, Russia seeks to leave citizens “unable to properly assess reality”, thus incapable of making rational decisions (for example, in elections). The goal is a weakened democratic society, rife with internal divisions and distrust, that can no longer present a united front against Russian aggression.

Concrete examples of Russia’s AI-fuelled influence operations abound. Beyond the fake Pentagon incident, Russian operatives have used generative AI to create deepfake videos of European politicians, forge fake news stories, and impersonate media outlets. Ahead of Western elections, Russian disinformation campaigns augmented with AI have aimed to sow discord and polarise public opinion. UK intelligence reports and independent researchers have noted that Russia’s automated bot accounts are evolving to produce more “human-like and persuasive” messages with the help of AI language models. These tactics amplify the reach and realism of propaganda, making it harder to detect and counter. Even if such interference does not always change election outcomes, it erodes public trust in information and institutions, a long-term win for the Kremlin.

China, while a newer player in European information spaces, is also investing heavily in AI for influence operations. Chinese military strategy incorporates the concept of “cognitive domain operations”, which merge AI with psychological and cyber warfare. Beijing’s aim is to shape global narratives and public opinion in its favour, deterring opposition to China’s interests. For instance, China has deployed swarms of AI-driven social media bots to spread disinformation about topics like the origins of COVID-19 and the status of Taiwan. Chinese propaganda operations use AI to generate deepfake news anchors and social media personas that promote pro-China narratives abroad. According to NATO analysts, China describes cognitive warfare as using public opinion and psychological manipulation to achieve victory, and invests in technologies (like emotion-monitoring systems for soldiers) that reveal the importance it places on the information domain. While China’s influence efforts in Europe are less overt than Russia’s, they represent a growing challenge as China seeks to project soft power and shape perceptions among European audiences, often to dissuade criticism of Beijing or divide Western unity.

The aggressive use of AI by authoritarian adversaries has put Western nations on the back foot in the information environment. Adversaries operate without the legal and ethical constraints that bind democracies. They capitalise on speed, volume, and ambiguity, launching influence campaigns faster than defenders can react. Authoritarian regimes also coordinate these efforts as part of broader hybrid warfare strategies, aligning cyber-attacks, diplomatic pressure, and economic coercion with information operations to maximise impact. In summary, Russia and China have seized the initiative in the cognitive battlespace, leaving the UK, EU, and their allies scrambling to catch up.

UK and EU Responses: Strategies and Shortcomings

Confronted with these threats, the United Kingdom and European Union have begun to recognise the urgency of the cognitive warfare challenge. In recent years, officials and strategists have taken steps to improve defences against disinformation and malign influence. However, the Western approach has so far been largely reactive and constrained, marked by cautious policy frameworks and fragmented efforts that lag the adversary’s pace of innovation.

United Kingdom: The UK government acknowledges that AI can significantly amplify information warfare. The Ministry of Defence’s Defence Artificial Intelligence Strategy (2022) warns that “AI could also be used to intensify information operations, disinformation campaigns and fake news,” for example by deploying deepfakes and bogus social media accounts. British military doctrine, including the Integrated Operating Concept (2020), emphasises that information operations are increasingly important to counter false narratives in modern conflicts (gov.uk). London’s approach has included establishing units dedicated to “strategic communications” and cyber influence and working with partners like NATO to improve information security.

The UK has also invested in research on AI and influence. For instance, the Alan Turing Institute’s research centre (CETaS) published analyses on AI-enabled influence operations in the 2024 UK elections, identifying emerging threats such as deepfake propaganda and AI-generated political smear campaigns. These studies, while finding that AI’s impact on recent elections was limited, highlighted serious concerns like AI-driven hate incitement and voter confusion (cetas.turing.ac.uk). The implication is clear: the UK cannot be complacent. Even if traditional disinformation methods still dominate, the rapid evolution of AI means influence threats could scale up dramatically in the near future. British policymakers have started to discuss new regulations (for example, requiring transparency in AI political ads) and bolstering media literacy programs to inoculate the public against fake content.

Despite this awareness, critics argue that the UK’s response remains disjointed and under-resourced. There is no publicly articulated doctrine for cognitive warfare equivalent to adversaries’ strategies. Efforts are split among various agencies (from GCHQ handling cyber, to the Army’s 77th Brigade for information ops, to the Foreign Office for counter-disinformation), making coordination challenging. Moreover, while defensive measures (like fact-checking services and takedown of fake accounts) have improved, the UK appears reluctant to consider more assertive offensive information operations that could pre-empt adversary narratives. Legal and ethical norms, as well as fear of escalation, likely restrain such tactics. The result is that Britain often plays catch-up, reacting to disinformation waves after they have already influenced segments of the population.

European Union: The EU, as a bloc of democracies, faces additional hurdles in confronting cognitive warfare. Brussels has treated disinformation chiefly as a policy and regulatory issue tied to election security and digital platform accountability. Over the past few years, the EU implemented a Code of Practice on Disinformation (a voluntary agreement with tech companies) and stood up teams like the East StratCom Task Force (known for its EUvsDisinfo project debunking pro-Kremlin myths). Following high-profile meddling in elections and referendums, EU institutions have grown more vocal: they label Russia explicitly as the chief source of disinformation targeting Europe. The European Commission also incorporated anti-disinformation clauses into the Digital Services Act (DSA), requiring large online platforms to assess and mitigate risks from fake content.

When it comes to AI, the EU’s landmark AI Act – primarily a regulatory framework to govern AI uses – indirectly addresses some information manipulation concerns (for example, by requiring transparency for deepfakes). However, EU efforts are fundamentally defensive and norm-driven. They seek to police platforms and inform citizens, rather than actively engage in influence operations. EU leaders are wary of blurring the line between counter-propaganda and propaganda of their own, given Europe’s commitment to free expression. This creates a dilemma: open societies find it difficult to wage information war with the ruthlessness of authoritarian regimes.

European security experts are starting to grapple with this challenge. A recent EU security and defence college course underscored that cognitive warfare is an “emerging challenge” for the European Union (esdc.europa.eu). Participants discussed the need for technological tools to detect, deter, and mitigate cognitive threats. Yet, outside of specialised circles, there is no EU-wide military command focused on cognitive warfare (unlike traditional domains such as land, sea, cyber, etc.). NATO, which includes most EU countries, has taken the lead in conceptualising cognitive warfare, but NATO’s role in offensive information activities is limited by its mandate.

A telling critique comes from a Royal United Services Institute (RUSI) commentary on disinformation and AI threats. It notes that NATO’s 2024 strategy update acknowledged the dangers of AI-enabled disinformation, using unusually strong language about the urgency of the challenge. However, the same strategy “makes no reference to how AI could be used” positively for strategic communications or to help counter disinformation (rusi.org). In other words, Western nations are emphasising protection and defence – strengthening governance standards, public resilience, and truth-checking mechanisms – but they are not yet leveraging AI offensively to seize the initiative in the info sphere. This cautious approach may be ceding ground to adversaries who have no such reservations.

Why the West Is Losing the AI Influence War

Several interrelated factors explain why the UK, EU, and their allies appear to be losing ground in the cognitive domain against AI-equipped adversaries:

Reactive Posture vs. Proactive Strategy: Western responses have been largely reactive. Democracies often respond to disinformation campaigns after damage is done, issuing fact-checks or diplomatic condemnations. There is a lack of a proactive, comprehensive strategy to dominate the information environment. Adversaries, by contrast, set the narrative by deploying influence operations first and fast.

Ethical and Legal Constraints: The UK and EU operate under strict norms – adherence to truth, rule of law, and respect for civil liberties – which limit tactics in information warfare. Propaganda or deception by government is domestically unpopular and legally fraught. This makes it hard to match the scale and aggressiveness of Russian or Chinese influence operations without undermining democratic values. Authoritarians face no such constraints.

Fragmented Coordination: In Europe, tackling cognitive threats cuts across multiple jurisdictions and agencies (domestic, EU, NATO), leading to fragmentation. A unified command-and-control for information operations is lacking. Meanwhile, adversaries often orchestrate their messaging from a centralised playbook, giving them agility and consistency.

Regulatory Focus Over Capabilities: The EU’s inclination has been to regulate (AI, social media, data) to create guardrails – a necessary but slow process. However, regulation alone does not equal capability. Rules might curb some harmful content but do not stop a determined adversary. The West has invested less in developing its own AI tools for strategic communication, psyops, or rapid counter-messaging. This capability gap means ceding the technological edge to opponents.

Underestimation of Cognitive Warfare: Historically, Western security doctrine prioritised physical and cyber threats, sometimes underestimating the impact of information warfare. The concept of a sustained “cognitive war” waged in peacetime is relatively new to Western planners. Initial responses were tepid – for example, before 2016, few anticipated that online influence could significantly affect major votes. This lag in appreciation allowed adversaries to build momentum.

These factors contribute to a situation where, despite growing awareness, the UK and EU have struggled to turn rhetoric into effective countermeasures on the cognitive front. As a result, authoritarian influence campaigns continue to find fertile ground in Western societies. Each viral conspiracy theory that goes unchecked, each wedge driven between communities via disinformation, and each doubt cast on democratic institutions chips away at the West’s strategic advantage. NATO officials warn that information warfare threats “must neither be overlooked nor underestimated” in the face of the AI revolution. Yet current efforts remain a step behind the onslaught of AI-generated falsehoods.

Conclusion and Implications

If the UK, EU, and like-minded democracies do not rapidly adapt to the realities of AI-driven cognitive warfare, they risk strategic defeat in an important realm of 21st-century conflict. Losing the AI influence war doesn’t happen with a formal surrender; instead, it manifests as a gradual erosion of democratic resilience. Societies may grow deeply divided, citizens may lose trust in media and governments, and adversarial narratives may become entrenched. In the long run, this could weaken the political will and cohesion needed to respond to more conventional security threats. As one analysis grimly observed, the cost of inaction is high – allowing adversaries to exploit AI for malign influence can lead to a “strategic imbalance favouring adversaries”, with a flood of false narratives eroding public trust and even devastating democratic institutions if left unchecked.

Reversing this trajectory will require Western nations to elevate the priority of the cognitive battlespace in national security planning. Some broad imperatives emerge:

Develop Offensive and Defensive AI Capabilities: The UK and EU should invest in AI tools not just to detect and debunk disinformation, but also to disseminate counter-narratives that truthfully push back against authoritarian propaganda. Ethical guidelines for such operations must be established, but fear of using AI at all in information ops leaves the field open to adversaries.

Whole-of-Society Resilience: Building public resilience is crucial. Education in media literacy and critical thinking, transparency about threats, and empowering independent journalism are all part of inoculating society. A populace that can recognise manipulation is the best defence against cognitive warfare. The goal is to ensure citizens can engage with digital information sceptically, blunting the impact of fake or AI-manipulated content.

International Coordination: The transatlantic alliance and democratic partners need better coordination in the information domain. NATO’s work on cognitive warfare should be complemented by EU and UK joint initiatives to share intelligence on disinformation campaigns and align responses. A unified front can deny adversaries the ability to play divide-and-conquer with different countries.

Adaptive Governance: Western policymakers must make their regulatory frameworks more agile in the face of technological change. This might include faster mechanisms to hold platforms accountable, updated election laws regarding AI-generated content, and perhaps narrowly tailored laws against the most dangerous forms of disinformation (such as deceptive media that incites violence). The challenge is doing so without undermining free speech – a balance that requires constant calibration as AI technology evolves.

In summary, the UK and EU are at a crossroads. They can continue on the current path – risking that AI-enabled influence attacks will outpace their responses – or they can strategise anew and invest in winning the cognitive fight. The latter will demand political will and creativity: treating information space as a domain to be secured, much like land, sea, air, cyber and space. It also means confronting uncomfortable questions about using emerging technologies in ways that align with democratic values yet neutralise malign propaganda.

The cognitive battlespace is now a permanent feature of international security. Western democracies must not cede this battlefield. Maintaining an open society does not mean being defenceless. With prudent adoption of AI for good, and a staunch defence of truth, the UK, EU, and their allies can start to turn the tide in the AI influence war. Failing to do so will only embolden those who seek to “attack the democratic pillars of the West” through information manipulation. In this contest for minds and hearts, as much as in any other domain of conflict, strength and resolve will determine who prevails.

Bibliography

1. NATO Allied Command Transformation. “Cognitive Warfare.” NATO ACT, Norfolk VA.

2. Bryc, Agnieszka. “Destroy from within: Russia’s cognitive warfare on EU democracy.” Kolegium Europy Wschodniej, 27 Nov 2024.

3. European Security & Defence College (ESDC). “Cognitive warfare in the new international competition: an emerging challenge for the EU,” 28 May 2024.

4. Williams, Cameron (Modern War Institute). “Persuade, Change, and Influence with AI: Leveraging Artificial Intelligence in the Information Environment.” Modern War Institute at West Point, 14 Nov 2023.

5. UK Ministry of Defence. Defence Artificial Intelligence Strategy, June 2022.

6. Fitz-Gerald, Ann M., and Halyna Padalko (RUSI). “The Need for a Strategic Approach to Disinformation and AI-Driven Threats.” RUSI Commentary, 25 July 2024.

7. Science Business News. “EU is ‘losing the narrative battle’ over AI Act, says UN adviser,” 05 Dec 2024.

Past meets Future

Rethinking Warfare: Clausewitz in the Age of Cyber and Hybrid Conflict

Carl von Clausewitz’s claim that war is “a continuation of politics by other means” has survived railways, radio and nuclear weapons.  Today the “other means” range from data-wiping malware that bricks ventilators to viral deep-fakes that never fired a shot.  The central puzzle is whether these novelties merely change the character of war (the tools, tempo and terrain) or whether they erode its immutable nature of violence, chance and political purpose (Echevarria 2002; Strachan 2013). 

International lawyers behind the Tallinn Manual 2.0 accept that a non-international armed conflict may now consist solely of cyber operations if the effects rival kinetic force (Schmitt 2017). Thomas Rid counters that almost all cyber activity is better classed as espionage, sabotage or subversion: potent, but not war in the Clausewitzian sense (Rid 2017). The Russian “AcidRain” attack of February 2022 sits precisely on that fault-line: a single wiper knocked thousands of Ukrainian satellite modems offline and severed remote monitoring of roughly 5,800 German wind turbines, yet no bombs fell (SentinelLabs 2022; Greenberg 2023). If violence is judged by its effect on human life rather than by the immediate mechanics of injury, Clausewitz still works; if it is judged by physical harm alone, he wobbles.

The 2022 US National Defense Strategy elevates “integrated deterrence”, urging day-to-day campaigning below the armed-attack threshold (US DoD 2022).  US Cyber Command’s doctrine of persistent engagement pushes the same logic into practice, contesting adversaries continually rather than waiting for crises (USCYBERCOM 2022).  Fischerkeller and Harknett argue that such calibrated friction stabilises the domain; Lynch casts it as a new “power-sinew contest” in which outright war is the exception, not the rule (Fischerkeller & Harknett 2019; Lynch 2024).  The danger is conceptual inflation: call every malicious packet “war” and escalation thresholds blur, yet forcing every new tactic into Clausewitz’s vocabulary risks missing genuine novelty. 

Frank Hoffman’s once-handy term “hybrid warfare” now covers almost any sub-threshold activity.  NATO’s recent work on cognitive warfare goes further, treating perception itself as decisive terrain and calling for a fresh taxonomy of “acts of cognitive war” (NATO Innovation Hub 2023).  Clausewitz, writing in an age of limited literacy, rarely considered the deliberate collapse of an adversary’s shared reality as a line of operation.  The gap is undeniable – but it need not be fatal if his categories can stretch. 

| Clausewitzian element | Digital-age inflection | Illustrative case |
|---|---|---|
| Violence | Physical harm or systemic disruption that produces downstream human suffering | AcidRain modem wipe, 2022 |
| Chance | Amplified by tightly coupled networks where small code changes trigger cascading failures | Log4j exploit cascade, 2021 |
| Political purpose | Territorial control plus cognitive or behavioural manipulation | 2016 US election interference |

The table shows how old categories bend.  Violence migrates into infrastructure; chance spikes in opaque systems; political purpose colonises the infosphere.  None of these shifts removes politics from the centre – precisely why the trinity still maps the ground.

There are three key areas where Clausewitz’s wisdom holds strongly:

  1. Politics first.  Colin Gray insists that strategy is the orchestration of means to political ends; replacing artillery with algorithms does not move that lodestar (Gray 1999).
  2. Escalation logic.  Even in cyberspace, deterrence depends on adversaries reading tacit red lines.  Clausewitz’s emphasis on uncertainty and friction remains apt.
  3. Human cost.  Cyber operations hurt indirectly – frozen hospital wards, confused electorates – but the harm is felt by bodies in time and space, not by circuits.

There are, however, a number of places where the strain shows, namely where:

  • Systemic cyber harm approaches “force” while sidestepping bodily violence.
  • Persistent, below-threshold campaigning blurs the war–peace boundary Clausewitz assumed.
  • The trinity was never meant to classify acts aimed at belief rather than battalions.

For now, Rid’s scepticism still holds: most cyber operations do not meet Clausewitz’s threshold of war. Yet as societies entangle their critical functions ever more tightly with code, the line between systemic disruption and physical violence narrows. Clausewitz’s trinity – violence, chance and political purpose – still offers the clearest compass, because politics, not technology, remains the centre of gravity of strategy. The compass, however, is being asked to steer across novel terrain. Should a future campaign achieve political aims through cyber-enabled systemic coercion alone, the Prussian might finally need more than a tune-up. Until then, his core logic endures; it needs adaptation, but it has not been eclipsed.

Bibliography

Clausewitz, C. v. (1832) On War.  Berlin: Ferdinand Dümmler.

Echevarria, A. J. (2002) ‘Clausewitz’s Center of Gravity: Changing Our Warfighting Doctrine – Again!’.  Carlisle, PA: US Army Strategic Studies Institute. 

Fischerkeller, M. P. and Harknett, R. J. (2019) ‘Persistent Engagement, Agreed Competition, and Cyberspace Interaction Dynamics’. The Cyber Defense Review.

Gray, C. S. (1999) Modern Strategy.  Oxford: Oxford University Press. 

Greenberg, A. (2023) ‘Ukraine Suffered More Wiper Malware in 2022 Than Anywhere, Ever’. WIRED, 22 February. 

Lynch, T. F. III (2024) ‘Forward Persistence in Great Power Cyber Competition’.  Washington, DC: National Defense University. 

NATO Innovation Hub (2023) The Cognitive Warfare Concept.  Norfolk, VA: NATO ACT. 

Rid, T. (2017) Cyber War Will Not Take Place.  Oxford: Oxford University Press. 

Schmitt, M. N. (ed.) (2017) Tallinn Manual 2.0 on the International Law Applicable to Cyber Operations.  Cambridge: Cambridge University Press. 

SentinelLabs (2022) ‘AcidRain: A Modem Wiper Rains Down on Europe’.  SentinelOne Labs Blog, 31 March. 

US Cyber Command (2022) ‘CYBER 101 – Defend Forward and Persistent Engagement’.  Press release, 25 October. 

US Department of Defense (2022) National Defense Strategy of the United States of America.  Washington, DC. 

