Thoughts, reflections and experiences



Rethinking Warfare: Clausewitz in the Age of Cyber and Hybrid Conflict


Given the shifting sands of contemporary conflict, do we need to reassess the meaning of warfare? Clausewitz famously called war ‘a continuation of politics by other means’ (1832). But does that idea still hold today? Conflicts now play out on social media, in cyberspace, and even in elections, often without a single shot fired. Today’s battlespace incorporates cyber operations, climate change, mass urbanisation, the weaponisation of space, and continuous strategic competition, blurring the lines between war and peace. While classical theorists maintain that war’s fundamental nature has not changed, modern conflicts increasingly challenge traditional frameworks.

Historically, warfare was characterised by physical destruction, decisive battles, and territorial conquest. Modern conflicts, however, do not always adhere to this pattern. For instance, cyber warfare has shown that states and non-state actors can achieve strategic effects without kinetic violence. Thomas Rid (2017) contends that cyber operations can coerce, disrupt, and deceive, thereby challenging Clausewitz’s notion that war is inherently violent. The 2007 cyberattacks on Estonia and the Stuxnet virus, which sabotaged centrifuges at Iran’s Natanz enrichment facility, are stark reminders of strategic aggression conducted outside traditional warfare.

Clausewitz and Sun Tzu never saw Twitter battles or deepfake propaganda coming. But here we are. Rather than fighting discrete wars, we are in a period of ongoing strategic competition. The 2018 U.S. National Defense Strategy even describes it as ‘long-term strategic competition’ (Department of Defense, 2018). This shift undermines the traditional Westphalian model, in which war and peace were regarded as distinct states. Hybrid warfare thrives in this ambiguity. Hoffman (2017) describes it as a mix of misinformation, economic coercion, cyberattacks, and proxy forces. The goal? Stay below the conventional threshold of war. The Russian annexation of Crimea in 2014, involving cyber operations, disinformation, and unmarked troops, is a textbook case.

Despite these transformations, Clausewitz’s core concepts remain highly relevant. His trinity of “violence, chance, and political purpose” continues to offer a valuable framework for understanding modern conflicts. Colin Gray (1999) underscores that strategy is fundamentally about applying means to achieve political ends, irrespective of technological advancements. The risk, however, lies in broadening the definition of war too far. If every act of geopolitical rivalry, such as economic sanctions, election interference, or cyber espionage, is termed “war,” the concept is diluted. Gartzke (2013) cautions that this approach could lead to unnecessary escalation, with states treating cyber incidents as casus belli when they might be closer to espionage or subversion.

So where do we go from here? Rather than discarding classical strategic theory, we should reinterpret its principles to align with current realities. Clausewitz’s trinity can evolve: “violence” can encompass non-kinetic coercion; “chance” is amplified by the unpredictability of interconnected digital systems; and “political purpose” now includes influence operations and behavioural shaping alongside territorial ambitions. Warfare may not appear as it did in Clausewitz’s era, but its essence, driven by politics and strategy, remains unchanged.

The Future of War: AI and Strategy

When looking at strategy, Clausewitz taught us that war is shaped by chance, friction, and human judgment, while Colin Gray emphasised the enduring nature of strategy despite technological change. Yet artificial intelligence (AI) is accelerating decision-making beyond human speeds, raising a critical question: are we entering an era where machines, not strategists, dictate the course of conflict?

The Transformative Power of AI in Strategy

AI-driven systems now process intelligence, optimise battlefield decisions, and launch cyber operations at speeds unimaginable just two decades ago. Open-source, geospatial, and signals intelligence (OSINT, GEOINT, SIGINT) can be ingested, analysed, and summarised into actionable insights in real time. AI-enhanced wargaming and strategic forecasting are helping policymakers anticipate threats with greater accuracy. But does this lead to better strategy, or does it introduce new vulnerabilities?
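To make the fusion idea concrete, here is a minimal, purely illustrative sketch of ranking multi-source intelligence reports into a prioritised summary. The `Report` fields, the reliability-times-urgency scoring rule, and the source labels are all hypothetical simplifications, not a description of any real system:

```python
from dataclasses import dataclass

@dataclass
class Report:
    source: str         # e.g. "OSINT", "GEOINT", "SIGINT"
    reliability: float  # analyst-assigned confidence, 0..1 (hypothetical scale)
    urgency: float      # time-sensitivity, 0..1 (hypothetical scale)
    summary: str

def fuse(reports, top_n=3):
    """Rank reports by a toy reliability x urgency score and
    return the top_n as (source, summary) pairs."""
    ranked = sorted(reports, key=lambda r: r.reliability * r.urgency, reverse=True)
    return [(r.source, r.summary) for r in ranked[:top_n]]
```

Even this toy version shows the core design question: the scoring function encodes judgments (how to weigh reliability against urgency) that in a real pipeline would themselves need scrutiny.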

The Erosion of Traditional Strategic Advantages

Historically, military and strategic advantages were state monopolies due to the vast resources required to develop cutting-edge capabilities, but AI is breaking down these barriers. The latest open-source AI models, commercial AI applications, and dual-use technologies mean that non-state actors, corporations, and even criminal groups now have access to tools once reserved for governments.

Consider Russia’s use of AI-driven disinformation campaigns during the 2016 U.S. elections and in the Ukraine conflict, where AI-powered bots and deepfake technology have enabled influence operations that are difficult to counter. Similarly, China’s AI-enabled surveillance state represents a new model of strategic power – one that fuses military and civilian AI applications for geopolitical advantage.

Blurring the Lines Between War and Peace

AI does not just change warfare; it changes the very definition of conflict. The use of AI-driven cyber and information operations enables continuous engagement below the threshold of open war. Instead of clear distinctions between peace and conflict, we are witnessing an era of persistent, AI-enhanced competition.

China again illustrates the point: its civil-military fusion strategy integrates AI research and applications across both sectors, allowing rapid technological advancement with strategic implications. Will the UK and its allies struggle to counter this approach within their existing regulatory and legal frameworks?

The Impact on Deterrence and Escalation

Deterrence has traditionally relied on rational actors making calculated decisions. But what happens when autonomous systems can pre-emptively engage threats or retaliate without clear human oversight? The risk of unintended escalation grows if AI-driven platforms misinterpret data or are manipulated by adversarial AI systems.

The Pentagon’s Project Maven, which employs AI to analyse drone surveillance footage, highlights the advantages AI brings to intelligence processing. But it also raises ethical concerns – how much decision-making should be delegated to machines? And if state actors develop autonomous weapons with AI-controlled engagement protocols, does this make deterrence more fragile?

Limitations of AI in Strategy

Despite AI’s capabilities, it still struggles with unpredictability—something central to strategy. AI models are excellent at processing historical patterns but often fail in novel or asymmetric situations. This reinforces the importance of human judgment in strategic decision-making. AI-driven strategy also raises concerns about bias, such as how commercial AI models (e.g., ChatGPT, DeepSeek) reflect the interests of their creators, whether corporate or state-sponsored. If strategic decision-making increasingly relies on black-box models with unknown biases, how do we ensure accountability and transparency?

Strategic Recommendations: The Path Forward

Rather than replacing human decision-makers, I believe that AI should be seen as a force multiplier. Governments and militaries must develop frameworks for human-AI hybrid decision-making, ensuring that AI informs but does not dictate strategy.
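The “informs but does not dictate” principle can be sketched as a simple human-in-the-loop gate. Everything here is a hypothetical illustration, not a real doctrine or system: the risk score, the threshold value, and the `human_approve` callback are all assumed for the example:

```python
def decide(action, risk_score, human_approve, threshold=0.3):
    """Toy human-in-the-loop gate: the system may act autonomously
    only below a risk threshold; above it, a human must explicitly
    approve or the action is withheld. All values are illustrative."""
    if risk_score < threshold:
        return f"auto-execute: {action}"
    if human_approve(action, risk_score):
        return f"human-approved: {action}"
    return f"withheld: {action}"
```

The design point is that the default for high-risk actions is inaction: the machine can recommend, but escalatory steps require an affirmative human decision rather than the absence of a veto.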

Additionally, fail-safe mechanisms must be built into autonomous systems to prevent unintended escalation, and robust defences against adversarial AI will be critical as states and non-state actors seek to manipulate AI-driven decision-making processes.

Finally, military and civilian leaders must rethink strategic education for the AI era. Understanding AI’s capabilities, limitations, and strategic implications should be a core component of professional military education and policymaker training.

Are We Seeing the Automation of Strategy?

Clausewitz’s fog of war was once an unavoidable condition. If AI offers real-time clarity, does it eliminate uncertainty – or create new vulnerabilities that adversaries can exploit? As AI increasingly influences military and strategic decision-making, are we witnessing the automation of strategy itself? If so, what does that mean for accountability, escalation, and deterrence?

The future strategic environment will be defined by those who can integrate AI effectively—without surrendering human judgment to machines. The challenge ahead is ensuring that AI serves strategy, rather than strategy being dictated by AI.
