Matthew Toy

Thoughts, reflections and experiences


The Unfolding Strategic Environment: Reconciling Enduring Principles with Revolutionary Change

The contemporary strategic environment presents a paradox. On one hand, the fundamental nature of war as a political instrument, driven by human factors and subject to friction and uncertainty, appears timeless. Carl von Clausewitz’s assertion that war serves political objectives remains a crucial anchor, forcing strategists to connect means with ends, even amidst technological fascination. Similarly, Sun Tzu’s principles regarding deception, intelligence, and achieving advantage with minimal direct confrontation resonate strongly in an era increasingly defined by non-traditional operations and persistent competition below the threshold of open warfare.

Yet, the character of conflict is undergoing a profound transformation. Technological disruption, particularly in the digital domain, is eroding traditional military advantages, intensifying “grey zone” activities, empowering non-state actors, and blurring the very definitions of war and peace. This necessitates a critical re-examination of established strategic paradigms and a forward-looking approach to national security. The challenge for policymakers and strategists lies in reconciling the enduring nature of war with its rapidly evolving character.

From Deterrence by Punishment to Deterrence by Resilience?

The Cold War’s strategic stability, largely built upon the concept of Mutually Assured Destruction (MAD), faces fundamental challenges in the digital age. While nuclear deterrence created a precarious balance, its logic struggles to adapt to threats operating outside its established framework. Cyberspace and information warfare lack the clear attribution mechanisms and proportional response options that underpin traditional deterrence by punishment. As Thomas Rid notes, establishing credibility and effective retaliation in these domains is problematic. Jeffrey Knopf’s work on “Fourth Wave” deterrence highlights how emerging threats disrupt existing models.

Furthermore, the strategic landscape is no longer solely dominated by states. Powerful technology firms, transnational terrorist organisations, and ideologically driven groups operate with increasing autonomy and influence, complicating deterrence calculations built on state-centric assumptions. The conflict in Ukraine provides stark examples, where companies like SpaceX have deployed capabilities, such as Starlink, that significantly impact battlefield communications and information warfare dynamics, challenging the state’s traditional monopoly on such strategic assets. This diffusion of power necessitates a broader conception of deterrence, moving beyond punishment towards denial, resilience, deception, and proactive information operations. Security may increasingly depend on the ability to withstand, adapt, and operate effectively within a contested information environment, rather than solely on the threat of overwhelming retaliation.

The Digital Revolution and the Transformation of Conflict Logic

The digital revolution represents more than just the introduction of new tools; it signifies a potential “change of consciousness” in warfare, as Christopher Coker suggests. Conflict becomes less geographically bounded and more psychological, abstract, and continuous, eroding distinctions between wartime and peacetime. Cyber operations, AI-enabled decision-making, and sophisticated disinformation campaigns are not merely adjuncts to traditional military power; they are becoming central components of strategic competition. China’s “Three Warfares” doctrine—integrating psychological operations, public opinion manipulation, and legal manoeuvring—exemplifies how state actors are weaponising the information domain to achieve strategic aims.

This shift challenges classical strategic concepts. How is escalation controlled when cyberattacks lack clear attribution? How is victory defined when conflict plays out continuously in the non-physical domain? The Ukraine conflict serves as a real-world laboratory, demonstrating the strategic significance of cyber defences, AI-driven targeting, and narrative warfare alongside conventional operations. It highlights how eroding conventional advantages forces a rethink of the very currency of power. Non-state actors, like ISIS, have also adeptly exploited the digital realm for recruitment, propaganda, and operational coordination, demonstrating the asymmetric advantages offered by this environment.

Systemic Fragility, Strategic Agility, and Redefined Victory

The deep integration of technology across society creates unprecedented efficiencies but also introduces systemic fragility. Interconnectedness means that disruptions—whether from cyberattacks, pandemics, financial crises, or supply chain breakdowns—can cascade rapidly with significant security implications. Consequently, building national resilience—encompassing robust cybersecurity, hardened infrastructure, diversified supply chains, and societal preparedness—becomes a core strategic imperative.

Alongside resilience, strategic agility is paramount. The accelerating pace of technological and geopolitical change means that strategies and institutions must be capable of rapid adaptation. The failure of European powers to adapt their doctrines to the realities of industrialised warfare before World War I, as chronicled by Barbara Tuchman, serves as a potent warning against strategic rigidity. Fostering agility requires institutional cultures that embrace learning and experimentation, empower decentralised action, and anticipate change.

This evolving landscape also forces a re-evaluation of “victory”. As warfare expands beyond purely military considerations to encompass cyber, economic, and informational domains, success becomes more ambiguous. Robert Mandel’s distinction between “war-winning” (tactical success) and “peace-winning” (achieving sustainable political outcomes) is increasingly pertinent. Future conflicts, likely to be protracted and involve multiple actors with divergent goals, may necessitate strategies focused on achieving iterative, adaptable political objectives rather than decisive military triumphs.

Adapting Strategy for an Unfolding Future

While some argue that classical, state-centric models of war are obsolete, discarding the foundational insights of strategists like Clausewitz and Sun Tzu would be premature. As Lawrence Freedman emphasises, war remains shaped by human agency and political motives, regardless of technology. The core task is not replacement but adaptation: applying enduring principles to navigate the complexities of the contemporary environment.

Successfully navigating the future strategic environment requires a conceptual shift. Technological foresight, AI-driven analysis, and robust cyber capabilities are necessary but insufficient. The decisive factor may be institutional and cultural: the capacity for continuous learning, adaptation, and innovation. Strategy must become truly multidimensional, integrating all instruments of national power—diplomatic, informational, military, and economic—within a coherent framework that acknowledges both the timeless nature and the transforming character of conflict. The future belongs to those who can master this complex, dynamic interplay.


Bibliography

  • Awan, Imran. “Cyber-Extremism: Isis and the Power of Social Media.” Society 54, no. 2 (April 2017): 138–49. https://www.proquest.com/scholarly-journals/cyber-extremism-isis-power-social-media/docview/1881683052/se-2.
  • Coker, Christopher. Future War. Polity Press, 2015.
  • Freedman, Lawrence. The Evolution of Nuclear Strategy. New York: Palgrave Macmillan, 2003.
  • Freedman, Lawrence. The Future of War: A History. New York: PublicAffairs, 2017.
  • Gray, Colin S. The Strategy Bridge: Theory for Practice. Oxford: Oxford University Press, 2010. Online edn, Oxford Academic, September 1, 2010.
  • Greggs, David. “Violent Limitation: Cyber Effects Reveal Gaps in Clausewitzian Theory.” The Cyber Defense Review 9, no. 1 (2024): 73–86.
  • Jervis, Robert. The Meaning of the Nuclear Revolution: Statecraft and the Prospect of Armageddon. Ithaca: Cornell University Press, 1989.
  • Kaldor, Mary. New and Old Wars: Organized Violence in a Global Era. 3rd ed. Cambridge: Polity Press, 2012.
  • Kania, Elsa B. “The PLA’s Latest Strategic Thinking on the Three Warfares.” The Jamestown Foundation, August 22, 2016. https://jamestown.org/program/the-plas-latest-strategic-thinking-on-the-three-warfares/.
  • Knopf, Jeffrey W. “The Fourth Wave in Deterrence Research.” Contemporary Security Policy 31, no. 1 (2010): 1–33.
  • Layton, Peter. “Fighting Artificial Intelligence Battles: Operational Concepts for Future AI-Enabled Wars.” Network 4, no. 20 (2021): 1–100.
  • Mandel, Robert. The Meaning of Military Victory. Boulder, CO: Lynne Rienner, 2006.
  • Morozov, Evgeny. To Save Everything, Click Here: The Folly of Technological Solutionism. New York: PublicAffairs, 2013.
  • Rid, Thomas. Cyber War Will Not Take Place. London: Hurst, 2013.
  • Skove, Sam. “How Elon Musk’s Starlink Is Still Helping Ukraine’s Defenders.” Defense One, March 1, 2023. https://www.defenseone.com.
  • Sun Tzu. The Art of War. Newburyport: Dover Publications, Incorporated, 2002.
  • Tiwari, Sachin. “Cyber Operations in the Grey Zone.” The Digital Humanities Journal, November 14, 2023. https://tdhj.org/blog/post/cyber-operations-grey-zone/.
  • Tuchman, Barbara W. The Guns of August. New York: Macmillan, 1962.
  • Van Creveld, Martin. Transformation of War. New York: Free Press, 1991.
  • Von Clausewitz, Carl. On War. Edited and translated by Michael Howard and Peter Paret. Princeton: Princeton University Press, 1984.

How Grey Zone Warfare Exploits the West’s Risk Aversion

Western democracies are caught in a strategic bind. Adversaries, skilled at operating in the murky “grey zone” between peace and open warfare, are exploiting a fundamental Western characteristic: risk aversion. Grey zone warfare blends cyberattacks, disinformation, economic coercion, and proxy warfare to achieve strategic goals without triggering a full-scale military response. This isn’t just a theoretical problem; it’s causing a kind of strategic paralysis, hindering our ability to respond to threats that don’t fit neatly into traditional military boxes.

A 21st-Century Threat

Grey zone warfare spans a broad toolkit. Think of cyberattacks that cripple infrastructure but stop short of causing mass casualties, disinformation campaigns that sow discord and erode trust in institutions, and proxy forces used to destabilise a region. Crucially, it also includes economic coercion. China’s Belt and Road Initiative, with its potential for creating debt traps and strategic dependencies, is a prime example. Russia’s use of energy supplies as a political weapon, particularly against European nations, is another. The key is plausible deniability – making it hard for the target to definitively point the finger and justify a strong response. The goal? To achieve strategic aims (weakening an adversary, gaining territory, influencing policy) without triggering a full-blown military conflict. We see this in China’s response to Lithuania’s engagement with Taiwan, where trade sanctions were used as a punitive measure. Similarly, the West’s reliance on Chinese rare earth minerals creates a vulnerability that can be exploited for political leverage.

A Strategic Vulnerability

The West, particularly Europe and North America, has a deeply ingrained preference for diplomacy and de-escalation. This isn’t necessarily a bad thing as it stems from a genuine desire to avoid the horrors of large-scale war and maintain a stable global order. But this risk aversion, while understandable, has become a strategic vulnerability. Adversaries see this hesitation and tailor their actions accordingly. They operate just below the threshold of what would trigger a decisive military response, creating a constant dilemma for Western leaders: how to respond effectively without escalating the situation into a wider conflict?

Ukraine and Grey Zone Warfare

Ukraine is a tragic textbook example of grey zone warfare in action. Russia’s strategy goes far beyond conventional military force. It includes crippling cyberattacks on Ukrainian infrastructure, a relentless barrage of disinformation aimed at undermining the Ukrainian government and sowing discord, and the backing of separatist movements to create internal instability. These actions are calculated to achieve Russia’s goals while staying below the threshold that would provoke a direct military intervention from NATO. The Western response, consisting primarily of sanctions and diplomatic pressure, reveals the core problem. While intended to punish Russia and deter further aggression, this relatively restrained approach has, arguably, enabled Russia to continue its grey zone operations, demonstrating the difficulty of countering these tactics without risking a wider war. The continued, grinding conflict, and the incremental nature of Western support, highlight the limitations of a purely reactive, risk-averse strategy.

The Erosion of American Global Leadership

The erosion of American global leadership, accelerated by but not solely attributable to the Trump administration, has profoundly shaken the transatlantic alliance. Actions like imposing tariffs on allies, questioning NATO’s relevance, and the perceived (and sometimes explicit) wavering of commitment to Article 5’s collective defence clause have created a climate of uncertainty. European nations are now grappling with a fundamental question: can they rely on the US security umbrella? This doubt isn’t just theoretical; it’s driving concrete policy changes.

Europe’s Quest for Strategic Autonomy

This uncertainty has fuelled a push for European “strategic autonomy” – the ability to act independently in defence and foreign policy. Figures like French President Macron have long championed this idea, and it’s gaining traction across the continent. Even in the UK, traditionally a staunch US ally, Labour leader Keir Starmer has emphasised the need for increased defence spending and closer European security cooperation. Germany’s Zeitenwende, its historic shift towards rearmament, is a direct response to this new reality. These are not just rhetorical flourishes; they represent a fundamental rethinking of European security, driven by a perceived need to fill the void left by a less predictable and less engaged United States. The debate over a European army, or a more coordinated European defence force, is no longer fringe; it’s becoming mainstream.

The Heart of the Matter: Strategic Paralysis and the Clausewitzian Lens

This brings us to the heart of the matter: strategic paralysis. The West, caught between a desire to avoid escalation and the need to respond effectively, often finds itself frozen. This is exactly the outcome grey zone warfare is designed to achieve. By creating ambiguous situations where traditional military responses seem disproportionate or politically risky, adversaries effectively paralyse Western decision-making. The fear of “provoking” a larger conflict becomes a weapon in itself. As Clausewitz argued, war is an extension of politics. Grey zone conflict is simply an extension of war by subtler means, one designed to neutralise the West’s ability to make political decisions with clarity.

Breaking Free: A Strategy for the Grey Zone

Breaking free from this strategic paralysis requires a fundamental shift in thinking. The West needs a strategy that’s as agile and adaptable as the grey zone tactics it faces. This means:

  • Develop Comprehensive Policies: Craft policies that address the full spectrum of threats, from conventional warfare to subtle disinformation campaigns and economic coercion, ensuring a flexible and rapid response capability.
  • Enhance Cyber and Information Warfare Capabilities: Invest heavily in both defensive and offensive cyber capabilities and develop robust strategies to counter disinformation and protect critical infrastructure.
  • Strengthen Alliances: Revitalise existing alliances like NATO, and forge new partnerships based on shared values and a common understanding of the grey zone threat. This is about more than just military cooperation; it’s about diplomatic and economic solidarity.
  • Promote Resilience: Build societal resilience through public awareness campaigns, media literacy education, and measures to counter foreign interference in elections and democratic processes. A well-informed and engaged citizenry is the best defence against disinformation.
  • Re-evaluate Risk Thresholds: This is the most challenging, but most crucial step. The West must carefully recalibrate its risk tolerance. This doesn’t mean reckless escalation, but it does mean accepting that a degree of risk is unavoidable in confronting grey zone aggression. A posture of constant de-escalation, in the face of persistent provocation, is ultimately self-defeating.

Conclusion: Deterrence Requires the Will to Act

Grey zone warfare thrives on Western risk aversion and, crucially, weak deterrence. Overcoming this strategic paralysis requires a profound shift: acknowledging that inaction is also a choice, and often a dangerous one. The West must develop a more agile, resilient, and – crucially – a less predictable strategy. Western policymakers must recognise that deterrence is not just military strength; it is the will to act. A state that is predictable in its restraint is one that invites coercion. The future of the international order may well depend on the West’s ability to adapt to this new era of conflict.

Bibliography

American Military University. “Gray Zone Attacks by Russia Being Used to Undermine Ukraine.” AMU Edge, May 12, 2023. https://amuedge.com/gray-zone-attacks-by-russia-being-used-to-undermine-ukraine/.

Chivvis, Christopher S. Understanding Russian “Hybrid Warfare” and What Can Be Done About It. Santa Monica, CA: RAND Corporation, 2017. https://www.rand.org/pubs/testimonies/CT468.html.

Gray, Colin S. Another Bloody Century: Future Warfare. London: Phoenix, 2005.

Military Strategy Magazine. “Deterring War Without Threatening War: Rehabilitating the West’s Risk-Averse Approach to Deterrence.” Military Strategy Magazine, April 2023. https://www.militarystrategymagazine.com/article/deterring-war-without-threatening-war-rehabilitating-the-wests-risk-averse-approach-to-deterrence/.

Onsolve. “Gray Zone Warfare: What Business Leaders Need to Know.” Onsolve Blog, March 2024. https://www.onsolve.com/blog/sra-gray-zone-warfare-business-leaders/.

Rid, Thomas. Cyber War Will Not Take Place. London: C. Hurst & Co., 2013.

The Wall Street Journal. “Trump Is Overturning the World Order That America Built.” WSJ, January 25, 2024. https://www.wsj.com/world/trump-is-overturning-the-world-order-that-america-built-10981637.

The New Yorker. “What’s Next for Ukraine?” The New Yorker, February 2024. https://www.newyorker.com/news/the-lede/whats-next-for-ukraine.

Why Technology Alone Doesn’t Win Wars

We often assume that the latest military technology will define the future of warfare. AI, cyber weapons, and autonomous drones are hailed as game-changers, just as tanks, aircraft, and nuclear weapons were in past eras. But history tells a different story, one where new technology is only as effective as the strategy, doctrine, and human adaptation behind it.

In this video, we explore David Edgerton’s critique of technological determinism, the idea that wars are shaped by cutting-edge innovation alone. From ancient weapons to modern cyber warfare, we show why old technologies persist, how armies adapt, and why war remains a contest of resilience, not just hardware.

The Real Lesson of Military Technology

The biggest mistake in war isn’t failing to develop new technology; it’s assuming that technology alone will guarantee victory. History proves that the best weapons don’t always win battles; those who adapt, integrate, and sustain their forces over time do.

What do you think? Are we overhyping AI and cyber warfare today, just as people once overhyped battleships or air power?

Europe’s Leadership Vacuum in the Shadow of Russia and America

The concept of ‘strategic culture’, as critiqued in Hew Strachan’s “The Direction of War: Contemporary Strategy in Historical Perspective”, emphasises continuity and a nation’s resistance to change, shaped historically and geographically. Strategic culture includes historical memory, institutional norms, core national values, and collective threat perceptions, all contributing to a nation’s strategic posture. This comprehensive framework is valuable when examining Europe’s contemporary security challenges, specifically the strategic vacuum exposed by the war in Ukraine and America’s ongoing withdrawal from global leadership.

Europe’s Strategic Culture

European strategic culture, forged during the Cold War, assumed stability through American military and diplomatic leadership. Strachan argues convincingly that such cultural assumptions hinder strategic flexibility, creating vulnerabilities when geopolitical realities shift dramatically, as they have since Russia’s invasion of Ukraine in 2022.

NATO-centric thinking, predicated on the guarantee of American power projection, has revealed problematic inertia. European states, notably the UK and the EU members, have found themselves scrambling to define a coherent, autonomous response.

America’s Strategic Shift from Protector to Competitor

America’s strategic withdrawal from Europe began with Obama’s pivot to Asia, accelerated through Trump V1.0’s transactional approach and Biden’s reticence, and has culminated in the dramatic geopolitical hand grenades of Trump 2.0. This reflects not merely a change in policy but a radical break from previous expectations: a revolutionary, not evolutionary, shift in global strategy that shatters Europe’s assumption of guaranteed U.S. engagement.

Strategically, this creates immediate tensions:

  • The U.S. increasingly frames its engagement with Europe as transactional and conditional upon shared responsibilities, as demonstrated by U.S. ambivalence toward NATO under Trump and Biden’s conditional engagement in Ukraine.
  • Simultaneously, Russia’s aggression has starkly shown that the belief in a diminished threat from inter-state warfare, fashionable among policymakers since the Cold War’s end, is dangerously misplaced. Strachan’s scepticism about overly optimistic predictions of war’s obsolescence resonates strongly here, given recent events.

This combination reveals Europe’s strategic culture as critically unprepared for the harsh geopolitical realities of the 21st century.

Europe’s Strategic Awakening

Europe has not been entirely inactive. The EU’s Strategic Compass, adopted in 2022, and the UK’s Integrated Review Refresh in 2023 demonstrate genuine acknowledgment of new realities. These documents move beyond purely reactive policies and represent Europe’s incremental shift towards strategic autonomy:

  • Increased defence expenditure: Germany’s Zeitenwende is a prime example.
  • Increased EU defence coordination, exemplified by the European Peace Facility funding Ukraine’s defence.
  • Renewed commitment to territorial defence and enhanced military deployments in Eastern Europe.

Yet, despite these efforts, the doctrinal and strategic mindset change has been incomplete. European policies continue to implicitly rely on the assumption of sustained U.S. involvement, despite public and political statements affirming Europe’s need for self-sufficiency.

Russia and America as Mirrors

The actions of Russia and the retreat of America each independently expose the inadequacies of Europe’s current strategic posture:

Russia’s Actions: These highlight Europe’s continuing strategic vulnerability, emphasising weaknesses in rapid military deployment, critical capability gaps (such as long-range precision munitions and air defence), and dependence on U.S. logistical, intelligence, and strategic capabilities.

America’s Pivot Away: This underscores that strategic autonomy isn’t merely desirable but imperative. The Biden administration’s reluctance to escalate beyond certain lines in Ukraine and Washington’s growing Indo-Pacific focus expose a stark misalignment between European expectations and American strategy. The most recent signals from Trump send an unequivocal message to Europe: unless there is something in it for America, you are on your own.

The Limits of Integration and NATO

While deeper European integration and renewed commitment to NATO might appear sufficient, these solutions alone are inadequate. Integration without clear autonomous capabilities risks perpetual dependency, and NATO’s structure, inherently reliant on American leadership, cannot compensate for America’s strategic reorientation. As Strachan underscores, relying purely on continuity without adaptability is strategically naive.

From Reactive Culture to Proactive Realism

Europe’s security doctrine requires nuanced recalibration rather than wholesale abandonment. The gap is not merely military; it is doctrinal, conceptual, and philosophical. A robust European strategic doctrine should:

  1. Recognise NATO’s Limitations: Explicitly acknowledge NATO’s limitations without undermining its centrality to European defence.
  2. Embed Strategic Autonomy: Clearly outline Europe’s independent capabilities and strategic objectives, moving beyond rhetoric to practical operational frameworks. Europe must realistically assess scenarios in which it may need to act without guaranteed American backing.
  3. Rethink Strategic Culture: Move beyond traditional assumptions of continuity—what previously seemed unthinkable, such as large-scale inter-state conflict, must become integral to planning and preparedness again.

Engaging Broader Perspectives

Drawing briefly from constructivist insights, strategic culture is not immutable but socially constructed, implying that European nations have the agency to reshape it consciously. Additionally, realist thinkers like John Mearsheimer caution against complacency in alliance politics, reinforcing the need for independent European capabilities.

Rethinking Doctrine for Strategic Resilience

The UK’s Integrated Review and the EU’s Strategic Compass represent valuable first steps toward a more strategic and independent Europe. However, they still fall short of addressing the fundamental gap that Russia’s aggression and America’s strategic recalibration have exposed.

Addressing Europe’s leadership vacuum demands overcoming historical and cultural inertia. It requires strategic humility: recognising that the stability provided by Cold War-era assumptions no longer applies, that threats are tangible, and that peace through strength must be anchored not in external assurances, but in Europe’s credible, independently sustainable power.

Europe must confront this reality head-on, accepting change not merely rhetorically but operationally, doctrinally, and culturally. Only then will Europe secure genuine strategic autonomy, prepared not just for today’s threats but also for tomorrow’s inevitable uncertainties.

Bibliography

  • Strachan, Hew. The Direction of War: Contemporary Strategy in Historical Perspective. Cambridge: Cambridge University Press, 2013.
  • European Union. “Strategic Compass for Security and Defence.” 2022.
  • United Kingdom Government. “Integrated Review Refresh 2023.” 2023.
  • Mearsheimer, John J. The Tragedy of Great Power Politics. New York: W. W. Norton & Company, 2001.
  • Smith, Rupert. The Utility of Force: The Art of War in the Modern World. London: Penguin, 2005.

[Video] UK and EU AI Influence

Artificial intelligence isn’t just reshaping industries—it’s reshaping reality. While the UK and EU focus on regulating AI and combating misinformation, adversarial states like Russia and China are weaponising it for influence warfare. The AI-driven disinformation battle isn’t coming; it’s already here.

In my latest article, “Why the UK and EU Are Losing the AI Influence War”, I explore how Europe’s slow response, defensive posture, and reliance on outdated regulatory approaches are leaving it vulnerable to AI-enhanced propaganda campaigns.

To bring these ideas to life, I’ve created a video that visualises the scale of the challenge and why urgent action is needed. Watch it below:

The AI influence war is no longer a hypothetical—it’s unfolding in real-time. Europe’s current strategies are reactive and insufficient, while adversaries leverage AI to manipulate narratives at unprecedented speed. Without a cognitive security unit, AI-powered countermeasures, and a national security-driven approach, the UK and EU risk losing control of their own information space.

The question isn’t whether AI will reshape public perception; it’s who will be in control of that perception. Will Europe rise to the challenge, or will it remain a passive battleground for AI-driven narratives?

What do you think? Should the UK and EU take a more aggressive stance in countering AI-enhanced disinformation? Feel free to discuss in the comments.

Why the UK and EU Are Losing the AI Influence War

Abstract

Western democracies face a new front in conflict: the cognitive battlespace, where artificial intelligence (AI) is leveraged to shape public opinion and influence behaviour. This article argues that the UK and EU are currently losing this AI-driven influence war. Authoritarian adversaries like Russia and China are deploying AI tools in sophisticated disinformation and propaganda campaigns, eroding trust in democratic institutions and fracturing social cohesion. In contrast, the UK and EU response, focused on regulation, ethical constraints, and defensive measures, has been comparatively slow and fragmented. Without a more proactive and unified strategy to employ AI in information operations and bolster societal resilience against cognitive warfare, Western nations risk strategic disadvantage. This article outlines the nature of the cognitive battlespace, examines adversarial use of AI in influence operations, evaluates UK/EU efforts and shortcomings, and suggests why urgent action is needed to regain the initiative.

Introduction

Modern conflict is no longer confined to conventional battlefields; it has expanded into the cognitive domain. The term “cognitive battlespace” refers to the arena of information and ideas, where state and non-state actors vie to influence what people think and how they behave. Today, advances in AI have supercharged this domain, enabling more sophisticated influence operations that target the hearts and minds of populations at scale. Adversaries can weaponise social media algorithms, deepfakes, and data analytics to wage psychological warfare remotely and relentlessly.

Western governments, particularly the United Kingdom and European Union member states, find themselves on the defensive. They face a deluge of AI-enhanced disinformation from authoritarian rivals but are constrained by ethical, legal, and practical challenges in responding. Early evidence suggests a troubling imbalance: Russia and China are aggressively exploiting AI for propaganda and disinformation, while the UK/EU struggle to adapt their policies and capabilities. As a result, analysts warn that Western democracies are “losing the battle of the narrative” in the context of AI (sciencebusiness.net). The stakes are high: if the UK and EU cannot secure the cognitive high ground, they risk erosion of public trust, social discord, and strategic loss of influence on the world stage.

This article explores why the UK and EU are lagging in the AI influence war. It begins by defining the cognitive battlespace and the impact of AI on information warfare. It then examines how adversaries are leveraging AI in influence operations. Next, it assesses the current UK and EU approach to cognitive warfare and highlights key shortcomings. Finally, it discusses why Western efforts are falling behind and what the implications are for future security.

The Cognitive Battlespace in the Age of AI

In cognitive warfare, the human mind becomes the battlefield. As one expert succinctly put it, the goal is to “change not only what people think, but how they think and act” (esdc.europa.eu). This form of conflict aims to shape perceptions, beliefs, and behaviours in a way that favours the aggressor’s objectives. If waged effectively over time, cognitive warfare can even fragment an entire society, gradually sapping its will to resist an adversary.

Artificial intelligence has become a force multiplier in this cognitive domain. AI algorithms can curate individualised propaganda feeds, amplify false narratives through bot networks, and create realistic fake images or videos (deepfakes) that blur the line between truth and deception. According to NATO’s Allied Command Transformation, cognitive warfare encompasses activities to affect attitudes and behaviours by influencing human cognition, effectively “modifying perceptions of reality” as a new norm of conflict (act.nato.int). In essence, AI provides powerful tools to conduct whole-of-society manipulation, turning social media platforms and information systems into weapons.

A vivid example of the cognitive battlespace in action occurred in May 2023, when an AI-generated image of a false Pentagon explosion went viral. The fake image, disseminated by bots, briefly fooled enough people that it caused a sharp but temporary dip in the U.S. stock market. Though quickly debunked, this incident demonstrated the “catastrophic potential” of AI-driven disinformation to trigger real-world consequences at machine speed (mwi.westpoint.edu). Generative AI can manufacture convincing yet false content on a massive scale, making it increasingly difficult for populations to discern fact from fabrication.

In the cognitive battlespace, such AI-enabled tactics give malign actors a potent advantage. They can rapidly deploy influence campaigns with minimal cost or risk, while defenders struggle to identify and counter each new false narrative. As the information environment becomes saturated with AI-amplified propaganda, the traditional defenders of truth (journalists, fact-checkers, and institutions) find themselves overwhelmed. This asymmetry is at the heart of why liberal democracies are in danger of losing the cognitive war if they do not adapt quickly.

Adversaries’ AI-Driven Influence Operations

Russia and China have emerged as leading adversaries in the AI-enabled influence war, honing techniques to exploit Western vulnerabilities in the cognitive domain. Russia has a long history of information warfare against the West and has eagerly integrated AI into these efforts. Through troll farms and automated bot networks, Russia pushes AI-generated propaganda designed to destabilise societies. Moscow views cognitive warfare as a strategic tool to “destroy [the West] from within” without firing a shot. Rather than direct military confrontation with NATO (which Russia knows it would likely lose), the Kremlin invests in “cheap and highly effective” cognitive warfare to undermine Western democracies from inside (kew.org.pl).

Russian military thinkers refer to this concept as “reflexive control,” essentially their doctrine of cognitive warfare. The idea is to manipulate an adversary’s perception and decision-making so thoroughly that the adversary “defeats themselves”. In practice, this means saturating the information space with tailored disinformation, conspiracy theories, and emotionally charged content to break the enemy’s will to resist. As one analysis describes, the battleground is the mind of the Western citizen, and the weapon is the manipulation of their understanding and cognition. By exploiting human cognitive biases (our tendencies toward emotional reaction, confirmation bias, and confusion), Russia seeks to leave citizens “unable to properly assess reality”, and thus incapable of making rational decisions (for example, in elections). The goal is a weakened democratic society, rife with internal divisions and distrust, that can no longer present a united front against Russian aggression.

Concrete examples of Russia’s AI-fuelled influence operations abound. Beyond the fake Pentagon incident, Russian operatives have used generative AI to create deepfake videos of European politicians, forge fake news stories, and impersonate media outlets. Ahead of Western elections, Russian disinformation campaigns augmented with AI have aimed to sow discord and polarise public opinion. U.K. intelligence reports and independent researchers have noted that Russia’s automated bot accounts are evolving to produce more “human-like and persuasive” messages with the help of AI language models. These tactics amplify the reach and realism of propaganda, making it harder to detect and counter. Even if such interference does not always change election outcomes, it erodes public trust in information and institutions, a long-term win for the Kremlin.

China, while a newer player in European information spaces, is also investing heavily in AI for influence operations. Chinese military strategy incorporates the concept of “cognitive domain operations”, which merge AI with psychological and cyber warfare. Beijing’s aim is to shape global narratives and public opinion in its favour, deterring opposition to China’s interests. For instance, China has deployed swarms of AI-driven social media bots to spread disinformation about topics like the origins of COVID-19 and the status of Taiwan. Chinese propaganda operations use AI to generate deepfake news anchors and social media personas that promote pro-China narratives abroad. According to NATO analysts, China describes cognitive warfare as using public opinion and psychological manipulation to achieve victory, and invests in technologies (like emotion-monitoring systems for soldiers) that reveal the importance it places on the information domain. While China’s influence efforts in Europe are less overt than Russia’s, they represent a growing challenge as China seeks to project soft power and shape perceptions among European audiences, often to dissuade criticism of Beijing or divide Western unity.

The aggressive use of AI by authoritarian adversaries has put Western nations on the back foot in the information environment. Adversaries operate without the legal and ethical constraints that bind democracies. They capitalise on speed, volume, and ambiguity, launching influence campaigns faster than defenders can react. Authoritarian regimes also coordinate these efforts as part of broader hybrid warfare strategies, aligning cyber-attacks, diplomatic pressure, and economic coercion with information operations to maximise impact. In summary, Russia and China have seized the initiative in the cognitive battlespace, leaving the UK, EU, and their allies scrambling to catch up.

UK and EU Responses: Strategies and Shortcomings

Confronted with these threats, the United Kingdom and European Union have begun to recognise the urgency of the cognitive warfare challenge. In recent years, officials and strategists have taken steps to improve defences against disinformation and malign influence. However, the Western approach has so far been largely reactive and constrained, marked by cautious policy frameworks and fragmented efforts that lag the adversary’s pace of innovation.

United Kingdom: The UK government acknowledges that AI can significantly amplify information warfare. The Ministry of Defence’s Defence Artificial Intelligence Strategy (2022) warns that “AI could also be used to intensify information operations, disinformation campaigns and fake news,” for example by deploying deepfakes and bogus social media accounts. British military doctrine, including the Integrated Operating Concept (2020), emphasises that information operations are increasingly important to counter false narratives in modern conflicts (gov.uk). London’s approach has included establishing units dedicated to “strategic communications” and cyber influence and working with partners like NATO to improve information security.

The UK has also invested in research on AI and influence. For instance, the Alan Turing Institute’s research centre (CETaS) published analyses on AI-enabled influence operations in the 2024 UK elections, identifying emerging threats such as deepfake propaganda and AI-generated political smear campaigns. These studies, while finding that AI’s impact on recent elections was limited, highlighted serious concerns like AI-driven hate incitement and voter confusion (cetas.turing.ac.uk). The implication is clear: the UK cannot be complacent. Even if traditional disinformation methods still dominate, the rapid evolution of AI means influence threats could scale up dramatically in the near future. British policymakers have started to discuss new regulations (for example, requiring transparency in AI political ads) and bolstering media literacy programmes to inoculate the public against fake content.

Despite this awareness, critics argue that the UK’s response remains disjointed and under-resourced. There is no publicly articulated doctrine for cognitive warfare equivalent to adversaries’ strategies. Efforts are split among various agencies (from GCHQ handling cyber, to the Army’s 77th Brigade for information ops, to the Foreign Office for counter-disinformation), making coordination challenging. Moreover, while defensive measures (like fact-checking services and takedown of fake accounts) have improved, the UK appears reluctant to consider more assertive offensive information operations that could pre-empt adversary narratives. Legal and ethical norms, as well as fear of escalation, likely restrain such tactics. The result is that Britain often plays catch-up, reacting to disinformation waves after they have already influenced segments of the population.

European Union: The EU, as a bloc of democracies, faces additional hurdles in confronting cognitive warfare. Brussels has treated disinformation chiefly as a policy and regulatory issue tied to election security and digital platform accountability. Over the past few years, the EU implemented a Code of Practice on Disinformation (a voluntary agreement with tech companies) and stood up teams like the East StratCom Task Force (known for its EUvsDisinfo project debunking pro-Kremlin myths). Following high-profile meddling in elections and referendums, EU institutions have grown more vocal: they label Russia explicitly as the chief source of disinformation targeting Europe. The European Commission also incorporated anti-disinformation clauses into the Digital Services Act (DSA), requiring large online platforms to assess and mitigate risks from fake content.

When it comes to AI, the EU’s landmark AI Act – primarily a regulatory framework to govern AI uses – indirectly addresses some information manipulation concerns (for example, by requiring transparency for deepfakes). However, EU efforts are fundamentally defensive and norm-driven. They seek to police platforms and inform citizens, rather than actively engage in influence operations. EU leaders are wary of blurring the line between counter-propaganda and propaganda of their own, given Europe’s commitment to free expression. This creates a dilemma: open societies find it difficult to wage information war with the ruthlessness of authoritarian regimes.

European security experts are starting to grapple with this challenge. A recent EU security and defence college course underscored that cognitive warfare is an “emerging challenge” for the European Union (esdc.europa.eu). Participants discussed the need for technological tools to detect, deter, and mitigate cognitive threats. Yet, outside of specialised circles, there is no EU-wide military command focused on cognitive warfare (unlike traditional domains such as land, sea, cyber, etc.). NATO, which includes most EU countries, has taken the lead in conceptualising cognitive warfare, but NATO’s role in offensive information activities is limited by its mandate.

A telling critique comes from a Royal United Services Institute (RUSI) commentary on disinformation and AI threats. It notes that NATO’s 2024 strategy update acknowledged the dangers of AI-enabled disinformation, using unusually strong language about the urgency of the challenge. However, the same strategy “makes no reference to how AI could be used” positively for strategic communications or to help counter disinformation (rusi.org). In other words, Western nations are emphasising protection and defence, strengthening governance standards, public resilience, and truth-checking mechanisms, but they are not yet leveraging AI offensively to seize the initiative in the info sphere. This cautious approach may be ceding ground to adversaries who have no such reservations.

Why the West Is Losing the AI Influence War

Several interrelated factors explain why the UK, EU, and their allies appear to be losing ground in the cognitive domain against AI-equipped adversaries:

Reactive Posture vs. Proactive Strategy: Western responses have been largely reactive. Democracies often respond to disinformation campaigns after damage is done, issuing fact-checks or diplomatic condemnations. There is a lack of a proactive, comprehensive strategy to dominate the information environment. Adversaries, by contrast, set the narrative by deploying influence operations first and fast.

Ethical and Legal Constraints: The UK and EU operate under strict norms – adherence to truth, rule of law, and respect for civil liberties – which limit tactics in information warfare. Propaganda or deception by government is domestically unpopular and legally fraught. This makes it hard to match the scale and aggressiveness of Russian or Chinese influence operations without undermining democratic values. Authoritarians face no such constraints.

Fragmented Coordination: In Europe, tackling cognitive threats cuts across multiple jurisdictions and agencies (domestic, EU, NATO), leading to fragmentation. A unified command-and-control for information operations is lacking. Meanwhile, adversaries often orchestrate their messaging from a centralised playbook, giving them agility and consistency.

Regulatory Focus Over Capabilities: The EU’s inclination has been to regulate (AI, social media, data) to create guardrails – a necessary but slow process. However, regulation alone does not equal capability. Rules might curb some harmful content but do not stop a determined adversary. The West has invested less in developing its own AI tools for strategic communication, psyops, or rapid counter-messaging. This capability gap means ceding the technological edge to opponents.

Underestimation of Cognitive Warfare: Historically, Western security doctrine prioritised physical and cyber threats, sometimes underestimating the impact of information warfare. The concept of a sustained “cognitive war” waged in peacetime is relatively new to Western planners. Initial responses were tepid – for example, before 2016, few anticipated that online influence could significantly affect major votes. This lag in appreciation allowed adversaries to build momentum.

These factors contribute to a situation where, despite growing awareness, the UK and EU have struggled to turn rhetoric into effective countermeasures on the cognitive front. As a result, authoritarian influence campaigns continue to find fertile ground in Western societies. Each viral conspiracy theory that goes unchecked, each wedge driven between communities via disinformation, and each doubt cast on democratic institutions chips away at the West’s strategic advantage. NATO officials warn that information warfare threats “must neither be overlooked nor underestimated” in the face of the AI revolution. Yet current efforts remain a step behind the onslaught of AI-generated falsehoods.

Conclusion and Implications

If the UK, EU, and like-minded democracies do not rapidly adapt to the realities of AI-driven cognitive warfare, they risk strategic defeat in an important realm of 21st-century conflict. Losing the AI influence war doesn’t happen with a formal surrender; instead, it manifests as a gradual erosion of democratic resilience. Societies may grow deeply divided, citizens may lose trust in media and governments, and adversarial narratives may become entrenched. In the long run, this could weaken the political will and cohesion needed to respond to more conventional security threats. As one analysis grimly observed, the cost of inaction is high – allowing adversaries to exploit AI for malign influence can lead to a “strategic imbalance favouring adversaries”, with a flood of false narratives eroding public trust and even devastating democratic institutions if left unchecked.

Reversing this trajectory will require Western nations to elevate the priority of the cognitive battlespace in national security planning. Some broad imperatives emerge:

Develop Offensive and Defensive AI Capabilities: The UK and EU should invest in AI tools not just to detect and debunk disinformation, but also to disseminate counter-narratives that truthfully push back against authoritarian propaganda. Ethical guidelines for such operations must be established, but fear of using AI at all in information ops leaves the field open to adversaries.

Whole-of-Society Resilience: Building public resilience is crucial. Education in media literacy and critical thinking, transparency about threats, and empowering independent journalism are all part of inoculating society. A populace that can recognise manipulation is the best defence against cognitive warfare. The goal is to ensure citizens can engage with digital information sceptically, blunting the impact of fake or AI-manipulated content.

International Coordination: The transatlantic alliance and democratic partners need better coordination in the information domain. NATO’s work on cognitive warfare should be complemented by EU and UK joint initiatives to share intelligence on disinformation campaigns and align responses. A unified front can deny adversaries the ability to play divide-and-conquer with different countries.

Adaptive Governance: Western policymakers must make their regulatory frameworks more agile in the face of technological change. This might include faster mechanisms to hold platforms accountable, updated election laws regarding AI-generated content, and perhaps narrowly tailored laws against the most dangerous forms of disinformation (such as deceptive media that incites violence). The challenge is doing so without undermining free speech – a balance that requires constant calibration as AI technology evolves.

In summary, the UK and EU are at a crossroads. They can continue on the current path – risking that AI-enabled influence attacks will outpace their responses – or they can strategise anew and invest in winning the cognitive fight. The latter will demand political will and creativity: treating information space as a domain to be secured, much like land, sea, air, cyber and space. It also means confronting uncomfortable questions about using emerging technologies in ways that align with democratic values yet neutralise malign propaganda.

The cognitive battlespace is now a permanent feature of international security. Western democracies must not cede this battlefield. Maintaining an open society does not mean being defenceless. With prudent adoption of AI for good, and a staunch defence of truth, the UK, EU, and their allies can start to turn the tide in the AI influence war. Failing to do so will only embolden those who seek to “attack the democratic pillars of the West” through information manipulation. In this contest for minds and hearts, as much as in any other domain of conflict, strength and resolve will determine who prevails.

Bibliography

1. NATO Allied Command Transformation. “Cognitive Warfare.” NATO ACT, Norfolk VA.

2. Bryc, Agnieszka. “Destroy from within: Russia’s cognitive warfare on EU democracy.” Kolegium Europy Wschodniej, 27 Nov 2024.

3. European Security & Defence College (ESDC). “Cognitive warfare in the new international competition: an emerging challenge for the EU,” 28 May 2024.

4. Williams, Cameron (Modern War Institute). “Persuade, Change, and Influence with AI: Leveraging Artificial Intelligence in the Information Environment.” Modern War Institute at West Point, 14 Nov 2023.

5. UK Ministry of Defence. Defence Artificial Intelligence Strategy, June 2022.

6. Fitz-Gerald, Ann M., and Halyna Padalko (RUSI). “The Need for a Strategic Approach to Disinformation and AI-Driven Threats.” RUSI Commentary, 25 July 2024.

7. Science Business News. “EU is ‘losing the narrative battle’ over AI Act, says UN adviser,” 05 Dec 2024.

The Grey Mirage: Navigating Strategic Uncertainty and the Elusive Victory in Grey Zone Conflicts

Imagine a world where war is waged not with bombs and bullets, but with lines of code and viral misinformation. This is the reality of grey zone conflicts, a persistent feature of modern geopolitics characterised by cyber operations, economic coercion, and disinformation. While many initially hailed these tactics as a revolutionary new form of strategic competition, a critical examination reveals that they not only fundamentally fail to achieve strategic victory in a traditional Clausewitzian sense but also introduce profound strategic uncertainty and volatility into the international system. Extending Thomas Rid’s compelling argument that “cyber war will not take place” due to the inherent lack of decisive physical destruction, this critique applies even more broadly to the entire spectrum of grey zone conflicts.

To understand the inherent limitations of these operations, we must return to the foundational strategic thought of Carl von Clausewitz. His framework remains a lodestar: tactical successes must always serve political objectives, and the very essence of war is to impose one’s will upon the enemy. As Michael Handel succinctly summarises, Clausewitzian war aims at the destruction of enemy forces, control of vital resources, and the sway of public opinion. Grey zone tactics, however, are structurally incapable of achieving these aims in the decisive manner Clausewitz envisioned. They may sow disruption and discord, but they rarely deliver battlefield outcomes, nor can they compel political compliance in the way traditional military campaigns do. Consider, for instance, the persistent cyberattacks between nations; while disruptive and costly, they have yet to force a nation to fundamentally alter its core strategic direction.

The very nature of grey zone strategies – their calculated avoidance of outright force and immediately recognisable acts of aggression – means they cannot truly compel an adversary to accept a fundamentally new strategic order. Cyber operations, as Rid convincingly argues, rarely inflict the kind of lasting, tangible damage comparable to conventional military strikes. Disinformation campaigns, while capable of eroding trust in institutions and even mobilising populations, as seen in the Arab Spring uprisings, cannot on their own force political capitulation. Economic sanctions, though often painful and strategically useful in shaping behaviour, are notoriously slow and far from guaranteed to change a determined state’s core strategic calculations.

This inherent strategic limitation is further underscored by Colin Gray’s assertion that strategy is fundamentally about the application of force to achieve political objectives. For Gray, war is fundamentally about contesting and achieving control, and without the capacity to impose a decisive order, grey zone tactics fall drastically short of true strategic efficacy. He cautions that the absence of decisive engagement in contemporary conflicts leads not to resolution, but to a debilitating strategic paralysis. This resonates deeply with Clausewitz’s core tenet that successful war must culminate in the decisive defeat of the enemy. Grey zone conflicts, by their very nature, do not and cannot fulfil this criterion. At best, they generate protracted stalemates; at worst, they risk unintended escalation into open, conventional warfare.

Countering the Cumulative Argument and Embracing Ambiguity: Incrementalism vs. Decisiveness

It is important to acknowledge a key counterargument: that grey zone tactics, while rarely decisive alone, gain strategic effect cumulatively over time. Proponents argue that persistent cyber intrusions, disinformation, and economic pressure can erode an adversary’s strength and will. This view sees grey zone warfare as long-term shaping, not a knockout blow, exemplified by China’s “Three Warfares” doctrine.

Furthermore, the ambiguity of grey zone conflicts can be strategically useful, much as nuclear deterrence redefined what strategic success means. Bernard Brodie argued that the cost of nuclear war shifted strategy towards prevention, redefining “victory” as avoiding war. Similarly, grey zone tactics might deter and manage competition below the threshold of open conflict. Incremental disruption, such as the cyberattacks on Iran’s nuclear program, can also shift power balances.

Hurting Without Winning and the Zero-Sum Nature of Grey Zone Competition

Thomas Schelling noted, “Victory is no longer a prerequisite for hurting the enemy.” This is key to grey zone tactics, which can aim to inflict pain and signal resolve without overt war. Even non-military gains – diplomatic wins, sanctions, legal advantages achieved through disinformation and cyber influence – become strategic victories in this zero-sum competition. This is particularly relevant as tech-savvy strategists recognise the advantages of ambiguity in these operations.

However, pursuing overwhelming military victory can backfire, escalating conflict. Grey zone tactics offer a way to avoid this, operating below the threshold of conventional war. Yet, this ambiguity breeds volatility, with miscalculation and escalation always looming.

Strategic Victory as Peace-Winning and the Challenge of Subjectivity

Rethinking “strategic victory” beyond military terms is crucial. Robert Mandel distinguishes “war-winning” from “peace-winning,” arguing true strategic victory is “peace-winning” – a multi-dimensional achievement across information, politics, economics, and diplomacy. Grey zone tactics align with this broader view, especially as public mobilisation and decentralised networks shape geopolitics.

Yet, “victory” in the grey zone remains subjective and hard to measure. Ethan Kapstein highlights the difficulty of defining metrics, gaining consensus, and obtaining reliable data in grey zone operations. Progress in one area may undermine another, increasing strategic uncertainty. Whether grey zone tactics are a “strategic win” depends on perspective and chosen metrics.

Taiwan: Strategic Uncertainty in Action

Taiwan exemplifies the inherent volatility of grey zone warfare: while hybrid strategies can pressure an opponent, they provide no clear pathway to a controlled, predictable outcome. The lack of definitive thresholds makes grey zone tactics as much a risk as an opportunity for the aggressor. Imagine China using grey zone tactics against Taiwan: cyberattacks, disinformation, and economic pressure. While this might weaken Taiwan, it’s unlikely to force capitulation without risking wider conflict. Taiwan’s reaction, U.S. responses, and the ever-present risk of miscalculation create a strategic dilemma.

While Russia has shown resilience to external grey zone pressures by controlling information, societal resilience only mitigates, not eliminates, strategic uncertainty. Even the most robust resilience strategies cannot eliminate the risk of miscalculation or escalation, underscoring the inherent volatility of grey zone conflicts. Because grey zone conflicts operate ambiguously, even careful campaigns can unexpectedly escalate, making control and predictability elusive.

Policy Implications: Actively Shaping the Grey Zone for Advantage

The inherent strategic uncertainty of grey zone conflicts demands proactive policies:

  1. Sharpen Intelligence and Active Disruption: Enhance intelligence to understand adversary intentions and develop capabilities to actively disrupt their grey zone operations.
  2. Develop Flexible and Escalatory Response Options: Create a wider range of responses, including calibrated counter-grey zone tactics and clear signalling for de-escalation and conflict management. As artificial intelligence and automation continue to reshape information warfare, states must anticipate how AI-driven disinformation, deepfake technology, and autonomous cyber operations will further complicate grey zone conflicts. Developing countermeasures that integrate AI-based detection and rapid-response systems will be critical for maintaining strategic advantage (a minimal illustration of such a detection layer follows this list).
  3. Promote Transparency to Force Predictability: Actively expose adversary actions to force them into a more predictable strategic posture, enhancing transparency and accountability in the grey zone.
  4. Focus on Proactive Crisis Management: Develop proactive crisis management to prevent crises, including clear communication, de-escalation protocols, and persistent low-intensity engagement for stability.
  5. Re-evaluate “Victory” and Embrace Persistent Engagement: Shift from traditional victory metrics to measures of resilience, deterrence, and long-term shaping, embracing persistent engagement as the norm in grey zone competition.
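To make the second recommendation more concrete, below is a minimal sketch (in Python) of what an AI-assisted detection layer for coordinated disinformation might look like. It is an illustration only: the Post structure, the thresholds, and the simple clustering heuristic are my own assumptions rather than a description of any fielded system, and a real detector would rely on trained models rather than keyword and timing heuristics.

from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Post:
    account: str      # posting account (hypothetical field)
    text: str         # message content
    timestamp: float  # seconds since epoch

def normalise(text: str) -> str:
    """Crude normalisation so near-identical messages group together."""
    return " ".join(text.lower().split())

def flag_coordinated_clusters(posts, min_accounts=20, window_seconds=3600):
    """Flag groups of near-identical messages pushed by many accounts
    within a short window, a common signature of coordinated amplification.
    Returns clusters for human review rather than automated takedown."""
    clusters = defaultdict(list)
    for post in posts:
        clusters[normalise(post.text)].append(post)

    flagged = []
    for text, group in clusters.items():
        accounts = {p.account for p in group}
        span = max(p.timestamp for p in group) - min(p.timestamp for p in group)
        if len(accounts) >= min_accounts and span <= window_seconds:
            flagged.append({"text": text, "accounts": len(accounts), "span_s": span})
    return flagged

Routing flags to an analyst rather than acting automatically reflects the same escalation-control logic as recommendations 2 and 4: detection can be fast, but the decision to respond stays with a human.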

Conclusion: Embracing Uncertainty, Seeking Control Through Persistent Engagement

Russia’s pre-2022 hybrid warfare campaign in Ukraine – combining cyber operations, disinformation, and economic pressure – demonstrated the limitations of grey zone tactics. Rather than coercing Ukraine into submission, these operations reinforced Ukrainian national resistance and galvanised Western military support, ultimately leading to Russia’s full-scale invasion. This case underscores the strategic volatility of grey zone competition: while these tactics can create disruption, they provide no guarantee of controlled, predictable outcomes.

This highlights how grey zone tactics, while seemingly flexible, are unlikely to deliver traditional strategic victory and introduce significant strategic uncertainty. While ambiguity and “peace-winning” are modern adaptations, they don’t guarantee predictable outcomes or escalation control. The grey zone is a volatile battlespace defined by miscalculation and instability. Navigating the grey zone requires embracing uncertainty, prioritising crisis management, and actively shaping the battlespace. In this new era of perpetual contestation, mastering the grey zone is not about winning – it is about ensuring that one’s adversaries never can.


References

  1. Brodie, Bernard. “The Absolute Weapon: Atomic Power and World Order.” The Yale Review 35, no. 3 (Spring 1946): 456-472.
  2. Gray, Colin S. The Strategy Bridge: Theory for Practice. Oxford: Oxford University Press, 2010.
  3. Handel, Michael I. Masters of War: Classical Strategic Thought. London: Frank Cass, 2001.
  4. Kania, Elsa B. “The PLA’s Latest Strategic Thinking on the Three Warfares.” The Jamestown Foundation, August 22, 2016. https://jamestown.org/program/the-plas-latest-strategic-thinking-on-the-three-warfares/.
  5. Kapstein, Ethan B. “Measuring Success in Complex Operations.” The Journal of Strategic Studies 34, no. 2 (April 2011): 267-285.
  6. Mandel, Robert. “Thinking about Victory in Strategy.” The Journal of Strategic Studies 34, no. 2 (April 2011): 199-200.
  7. Monaghan, Sean. “Twitter Revolutions? Social Media and the Arab Spring.” Whitehall Papers 69, no. 1 (2011): 21-22.
  8. Rid, Thomas. Cyber War Will Not Take Place. London: Hurst, 2013.
  9. Sanger, David E., and William J. Broad. “Obama Order Sped Up Wave of Cyberattacks Against Iran.” The New York Times, June 1, 2012. https://www.nytimes.com/2012/06/01/world/middleeast/obama-ordered-wave-of-cyberattacks-against-iran.html.
  10. Schelling, Thomas C. Arms and Influence. New Haven: Yale University Press, 1966.
  11. Simons, Greg. “Russia and Information Confrontation: Perceptions, Strategies and Responses.” Journal of Strategic Studies 42, no. 1 (2019): 139-140.

Rethinking Warfare: Clausewitz in the Age of Cyber and Hybrid Conflict


Given the shifting sands of contemporary conflict, do we need to reassess the meaning of warfare? Clausewitz famously called war ‘a continuation of politics by other means’ (1832). But does that idea still hold up today? These days, conflicts play out on social media, in cyberspace, and even in elections—often without a single shot fired. Today’s battlespace incorporates cyber operations, climate change, mass urbanisation, space weaponisation, and continuous strategic competition. This blurs the lines between war and peace. While classical theorists maintain that war’s fundamental nature has not changed, modern conflicts increasingly challenge traditional frameworks.

Historically, warfare was characterised by physical destruction, decisive battles, and territorial conquest. Modern conflicts, however, do not always adhere to this pattern. For instance, cyber warfare has shown that states and non-state actors can achieve strategic effects without kinetic violence. Thomas Rid (2017) contends that cyber operations can coerce, disrupt, and deceive, thereby challenging Clausewitz’s notion that war is inherently violent. The 2007 cyberattacks on Estonia and the Stuxnet worm, which sabotaged centrifuges at Iran’s Natanz enrichment facility, are stark reminders of strategic aggression that did not involve traditional warfare.

Clausewitz and Sun Tzu never saw Twitter battles or deepfake propaganda coming. But here we are. Rather than fighting discrete wars, we’re in a period of ongoing strategic competition. The 2018 U.S. National Defense Strategy even describes it as ‘long-term strategic competition’ (Department of Defense, 2018). This shift undermines the traditional Westphalian model, where war and peace were regarded as distinct states. Hybrid warfare thrives in ambiguity. Hoffman (2017) describes it as a mix of misinformation, economic coercion, cyberattacks, and proxy forces. The goal? Stay below the conventional threshold of war. The Russian annexation of Crimea in 2014, involving cyber operations, disinformation, and unmarked troops, serves as an exemplary case.

Despite these transformations, Clausewitz’s core concepts remain highly relevant. His trinity of “violence, chance, and political purpose” still offers a valuable framework for understanding modern conflicts. Colin Gray (1999) underscores that strategy is fundamentally about applying means to achieve political ends, irrespective of technological advancements. The risk, however, lies in excessively broadening the definition of war. If every act of geopolitical rivalry, such as economic sanctions, election interference, or cyber espionage, is termed “war,” it risks conceptual dilution. Gartzke (2013) cautions that this approach could lead to unnecessary escalation, with states treating cyber incidents as casus belli when they might be closer to espionage or subversion.

So where do we go from here? Rather than discarding classical strategic theory, we should reinterpret its principles to align with current realities. Clausewitz’s trinity can evolve: “violence” can encompass non-kinetic coercion; “chance” is amplified by the unpredictability of interconnected digital systems; and “political purpose” now includes influence operations and behavioural shaping alongside territorial ambitions. Warfare may not appear as it did in Clausewitz’s era, but its essence, driven by politics and strategy, remains unchanged.

The Future of War: AI and Strategy

Clausewitz taught us that war is shaped by chance, friction, and human judgment, and Colin Gray emphasised the enduring nature of strategy despite technological change. Yet artificial intelligence (AI) is accelerating decision-making beyond human speeds, raising a critical question: are we entering an era where machines – not strategists – dictate the course of conflict?

The Transformative Power of AI in Strategy

AI-driven systems now process intelligence, optimise battlefield decisions, and launch cyber operations at speeds unimaginable just two decades ago. OSINT, GEOINT, and SIGINT can be ingested, analysed, and summarised into actionable insights in real time. AI-enhanced wargaming and strategic forecasting are helping policymakers anticipate threats with greater accuracy. But does this lead to better strategy, or does it introduce new vulnerabilities?
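As a rough illustration of what ingesting and summarising multi-source intelligence in real time could involve, here is a minimal Python sketch. The feed names, the keyword weighting, and the digest format are all assumptions made for the purpose of the example; in practice each step would be handled by far more capable models.

from dataclasses import dataclass

@dataclass
class Report:
    source: str   # e.g. "OSINT", "GEOINT", "SIGINT" (hypothetical feeds)
    text: str

# Crude keyword weighting stands in for an ML relevance model.
PRIORITY_TERMS = {"missile": 3, "mobilisation": 3, "jamming": 2, "convoy": 1}

def score(report: Report) -> int:
    """Assign a naive relevance score based on keyword hits."""
    words = report.text.lower().split()
    return sum(weight for term, weight in PRIORITY_TERMS.items() if term in words)

def fuse_and_summarise(reports: list[Report], top_n: int = 3) -> str:
    """Rank incoming reports across sources and return a short digest.
    A production system would replace both steps with trained models."""
    ranked = sorted(reports, key=score, reverse=True)[:top_n]
    return "\n".join(f"[{r.source}] {r.text}" for r in ranked)

if __name__ == "__main__":
    feeds = [
        Report("OSINT", "Social media shows convoy movement near the border"),
        Report("SIGINT", "Increased jamming detected on coastal frequencies"),
        Report("GEOINT", "Imagery consistent with missile transporter dispersal"),
    ]
    print(fuse_and_summarise(feeds))

Even this toy version makes the underlying question visible: the ranking logic, not the analyst, decides what appears at the top of the digest.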

The Erosion of Traditional Strategic Advantages

Historically, military and strategic advantages were state monopolies due to the vast resources required to develop cutting-edge capabilities, but AI is breaking down these barriers. The latest open-source AI models, commercial AI applications, and dual-use technologies mean that non-state actors, corporations, and even criminal groups now have access to tools once reserved for governments.

Consider Russia’s use of AI-driven disinformation campaigns during the 2016 U.S. elections and the conflict in Ukraine, where AI-powered bots and deepfake technology have enabled influence operations that are difficult to counter. Similarly, China’s AI-enabled surveillance state represents a new model of strategic power – one that fuses military and civilian AI applications for geopolitical advantage.

Blurring the Lines Between War and Peace

AI does not just change warfare; it changes the very definition of conflict. The use of AI-driven cyber and information operations enables continuous engagement below the threshold of open war. Instead of clear distinctions between peace and conflict, we are witnessing an era of persistent, AI-enhanced competition.

Using China as an example again, its civil-military fusion strategy integrates AI research and applications across both sectors, allowing for rapid technological advancement with strategic implications. Will the UK and its allies struggle to counter this approach within their existing regulatory and legal frameworks?

The Impact on Deterrence and Escalation

Deterrence has traditionally relied on rational actors making calculated decisions. But what happens when autonomous systems can pre-emptively engage threats or retaliate without clear human oversight? The risk of unintended escalation grows if AI-driven platforms misinterpret data or are manipulated by adversarial AI systems.

The Pentagon’s Project Maven, which employs AI to analyse drone surveillance footage, highlights the advantages AI brings to intelligence processing. But it also raises ethical concerns – how much decision-making should be delegated to machines? And if state actors develop autonomous weapons with AI-controlled engagement protocols, does this make deterrence more fragile?

Limitations of AI in Strategy

Despite AI’s capabilities, it still struggles with unpredictability—something central to strategy. AI models are excellent at processing historical patterns but often fail in novel or asymmetric situations. This reinforces the importance of human judgment in strategic decision-making. AI-driven strategy also raises concerns about bias, such as how commercial AI models (e.g., ChatGPT, DeepSeek) reflect the interests of their creators, whether corporate or state-sponsored. If strategic decision-making increasingly relies on black-box models with unknown biases, how do we ensure accountability and transparency?
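To illustrate why a model trained on historical patterns can fail badly in a novel situation, the toy example below (Python with scikit-learn, purely illustrative) fits a classifier on one regime of data and then evaluates it after the underlying relationship has shifted, as it might when an adversary deliberately adapts.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# "Historical" regime: two well-separated clusters the model learns easily.
X_hist = np.vstack([rng.normal(-2, 1, (500, 2)), rng.normal(2, 1, (500, 2))])
y_hist = np.array([0] * 500 + [1] * 500)

model = LogisticRegression().fit(X_hist, y_hist)
print("Accuracy on historical data:", model.score(X_hist, y_hist))

# "Novel" regime: the adversary has adapted, so the pattern that used to
# indicate class 0 now indicates class 1. The fitted model is confidently
# wrong and accuracy collapses well below chance.
X_new = np.vstack([rng.normal(2, 1, (500, 2)), rng.normal(-2, 1, (500, 2))])
y_new = np.array([0] * 500 + [1] * 500)
print("Accuracy on shifted data:", model.score(X_new, y_new))

The numbers themselves do not matter; the point is that confidence learned from the past says nothing about validity once the situation has genuinely changed, which is exactly where human judgment has to re-enter.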

Strategic Recommendations: The Path Forward

Rather than replacing human decision-makers, I believe that AI should be seen as a force multiplier. Governments and militaries must develop frameworks for human-AI hybrid decision-making, ensuring that AI informs but does not dictate strategy.
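What a human-AI hybrid decision framework might look like in code is necessarily speculative, but the sketch below captures the basic pattern: the model recommends, and anything low-confidence, irreversible, or escalatory is routed to a person instead of being executed automatically. The categories and thresholds are illustrative assumptions, not doctrine.

from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str
    confidence: float   # model's self-reported confidence, 0..1
    reversible: bool    # can the action be undone if the model is wrong?
    escalatory: bool    # does it cross a rung on the escalation ladder?

def decide(rec: Recommendation, confidence_floor: float = 0.9) -> str:
    """Gate an AI recommendation: only high-confidence, reversible,
    non-escalatory actions may proceed without a human in the loop."""
    if rec.escalatory or not rec.reversible:
        return f"ESCALATE TO HUMAN: {rec.action}"
    if rec.confidence < confidence_floor:
        return f"REQUEST HUMAN REVIEW: {rec.action}"
    return f"EXECUTE (logged for audit): {rec.action}"

print(decide(Recommendation("reroute surveillance asset", 0.95, True, False)))
print(decide(Recommendation("pre-emptive cyber strike", 0.97, False, True)))

The same gate doubles as the kind of fail-safe discussed below: an adversary who manipulates the model's inputs can at most trigger a human review, not an automatic response.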

Additionally, fail-safe mechanisms must be built into autonomous systems to prevent unintended escalation. Given the pace of adversarial AI development, such defences will be critical as states and non-state actors seek to manipulate AI-driven decision-making processes.

Finally, military and civilian leaders must rethink strategic education for the AI era. Understanding AI’s capabilities, limitations, and strategic implications should be a core component of professional military education and policymaker training.

Are we Seeing the Automation of Strategy?

Clausewitz’s fog of war was once an unavoidable condition. If AI offers real-time clarity, does it eliminate uncertainty – or create new vulnerabilities that adversaries can exploit? As AI increasingly influences military and strategic decision-making, are we witnessing the automation of strategy itself? If so, what does that mean for accountability, escalation, and deterrence?

The future strategic environment will be defined by those who can integrate AI effectively—without surrendering human judgment to machines. The challenge ahead is ensuring that AI serves strategy, rather than strategy being dictated by AI.

My Experiences in Japan / 日本での私の経験

So here I am, about a month after having returned to London, reflecting back on my experience in Japan. The most common question I’ve had from friends is “was it worth it?” or “did you learn what you wanted to learn?” and the answer to both is a resounding yes. However it isn’t as simple as that, as there are many layers to the question which need to be unpacked a little.

The first layer is understanding what it is that I went off to Japan to learn in the first place. Of course there is the obvious “Japanese Language” side of things, but there is much more to it than that. The real learning I was hoping to take away was about myself. And if you want to learn about yourself, one of the best ways is to teach others. Thanks to Phil’s company, I was able to do exactly that with a bunch of middle school kids over 6 days of English language camps!

Class photo!

The camp is worth a whole post by itself, but the takeaway is how teaching kids made me feel about myself and reflect on what I enjoy doing professionally. It allowed me to understand what I value (honesty, enthusiasm, progress) in a much more immersive way than a work environment would.  I’ll definitely be taking this into my next role and it has helped me mature a bit more as an individual.

Alongside teaching, there were many other examples of “bonus” learning opportunities I was able to take away from the experience.  One key aspect was taking myself out of a familiar environment and getting the mental/physical space to learn and reflect.  That alone was worth the proverbial price of admission.

The next layer is about “what” it is I did.  While it is obvious to some (though not always to me), it isn’t the destination that matters so much as who you spend the journey with and the attitude you take with you.  I enjoy spending time by myself (quite a lot) but all my most enjoyable experiences are with friends and family.  It isn’t just quality though, it is quantity too.  People can help you overcome natural inertia (read: laziness) to get out there and do more. In fact, when Julie came to visit for a week, we crammed in more stuff than I did in the previous months!

Fun and hi-jinks!

Also, being in the right mindset (a positive one), I was able to value those times much more and care a little bit less about the latest distracting “must-own-thing”.  But it is easily forgotten and I have to remind myself often to focus on new experiences with loved ones and less on new, shiny, technology…

There is a lot more I was able to get from my travels, but that’s about my limit for self-reflection today.  I think I need to do a round up of all the beers I forgot to mention in a new post…
