When Carl von Clausewitz wrote that war is “a continuation of politics by other means,” he centred conflict on purpose rather than technology. Colin Gray later warned that strategic constants outlive every gadget. Artificial intelligence now accelerates observation-decision loops from minutes to milliseconds, but whether that shift dethrones human strategy is still contested.

Speed Meets Friction

Ukrainian drone teams push machine-vision updates to the front line every fortnight, turning quadcopters into near-autonomous kamikaze platforms (Bondar 2025). Yet the same coders struggle with false positives – bears flagged as enemy sentries, for instance – and with mesh-network bottlenecks once EW jammers blanket the spectrum. AI compresses time, but it also multiplies friction, the very element Clausewitz thought ineradicable. Nor do false positives merely waste munitions: when the same image-detection stack mis-tags an ambulance as a supply truck, the result is shrapnel in a paediatric ward, not an algorithmic hiccup. The World Health Organization’s 2025 emergency appeal reported 205 deaths in Ukraine from strike-related loss of hospital services (WHO 2025).

Open-source models still give insurgents propaganda reach, but the sharper edge of algorithmic warfare sits with states. Israel’s Lavender system, revealed in 2024, generated a list of roughly 37,000 potential Hamas targets and was used even when commanders expected up to twenty civilian deaths per strike – a machine-driven tempo that unsettled some of the intelligence officers involved (McKernan and Davies 2024). Cutting-edge autonomy, however, still demands high-end GPUs, abundant power and proprietary data, so strategic dominance remains gated by infrastructure that mirrors existing geopolitical hierarchies. Yet, as Kuner (2024) notes, Brussels carved a national-security escape hatch into the AI Act precisely to preserve state leverage over the biggest models.

Washington’s Replicator initiative aims to field “thousands of attritable autonomous systems” within two years (DoD 2024). Beijing answers through civil-military fusion; Moscow improvises with AI-augmented loitering munitions. These programmes underpin an operating concept of continuous, sub-threshold contest, paralleling U.S. Cyber Command’s “persistent engagement”. Strategic deterrence thus rests on the hope that algorithmic agents still read tacit red lines the way humans do. In Stanford’s 2024 crisis simulations, LLM agents recommended first-strike escalation in seventy-nine per cent of runs – evidence that algorithmic ‘rationality’ may be anything but (Stanford HAI 2024).

If LLM advisers recommend first-strike escalation in roughly four out of five simulated crises, the locus of judgement drifts from commander to code. The next question is whether that drift merely speeds execution or begins to automate strategy itself.

Promoters once claimed AI would dissolve uncertainty; real battlefields suggest otherwise. Sensor glut, spoofed tracks and synthetic “ghost columns” now drown analysts in contradictory feeds (Collazzo 2025). AI redistributes fog rather than lifting it – accelerating some judgements while blinding others through overload or deception (Askew and Salinas 2025).

The Pentagon reissued Directive 3000.09 on autonomy in weapon systems in 2023, tightening review procedures and reaffirming that commanders must retain “appropriate levels of human judgment over the use of force”. At the multilateral level, UN talks in May 2025 once again failed to agree on binding rules, though Secretary-General Guterres has set a 2026 deadline (Le Poidevin 2025). Norms lag well behind code, keeping accountability – and escalation liability – firmly in human hands.

Strategic implications

The transformative impact of AI on strategy can be distilled into three key considerations:

  • Advantage remains political. AI is a means; objectives still emanate from human intent. Strategy therefore keeps its Clausewitzian anchor in politics.
  • Automation magnifies misperception. Faster loops leave less time for reflection, and black-box models hide their own failure modes. Bias and data poisoning risk strategic self-harm.
  • Deterrence becomes brittle. Autonomous systems may over-react to spoofed inputs; adversaries may test thresholds in microseconds rather than hours, shortening the ladder of de-escalation.

Conclusions

AI does not automate strategy; it amplifies both its promise and its pathologies. Machines accelerate tactics, generate options and even draft operational plans, but they do not choose political ends – and they continue to manifest friction, chance and uncertainty. Thomas Rid remains broadly right that most cyber and AI activity falls short of war (Rid 2017), yet as energy grids, logistics chains and battlefield kill cycles digitise, the gap between systemic disruption and physical violence narrows. For the moment, Clausewitz’s compass still points true – but the ground beneath it is starting to slide.

Select bibliography

Askew, M. and Salinas, A. (2025) ‘AI Will Make the Mind Games of War More Risky’, Business Insider, 18 Apr.

Bondar, K. (2025) Ukraine’s Future Vision and Current Capabilities for Waging AI-Enabled Autonomous Warfare. CSIS.

Collazzo, A. (2025) ‘Warfare at the Speed of Thought’, Modern War Institute, 21 Feb.

Department of Defense (2024) ‘Replicator Initiative Progress Update’.

Kuner, C. (2024) ‘The AI Act National Security Exception’, Verfassungsblog, 15 Dec.

Le Poidevin, O. (2025) ‘Nations Meet at UN for “Killer Robot” Talks’, Reuters, 12 May.

McKernan, B. and Davies, H. (2024) ‘The Machine Did It Coldly’, The Guardian, 3 Apr. https://www.theguardian.com/world/2024/apr/03/israel-gaza-ai-database-hamas-airstrikes

Rid, T. (2017) Cyber War Will Not Take Place. Oxford: Oxford University Press.

Stanford HAI (2024) Escalation Risks from LLMs in Military and Diplomatic Contexts.

World Health Organization (2025) WHO’s Health Emergency Appeal 2025. Geneva: World Health Organization.