Thoughts, reflections and experiences

Tag: ai

Strategic Trust in AI: Lessons from a Forced Confession

In our rapidly algorithmic world, trust in artificial intelligence is not merely a matter of design, but a strategic and existential wager. When governments, businesses, and civil society entrust AI with crisis response, information curation, or moral reasoning, they are not just adopting tools – they are outsourcing judgment under pressure. But strategic trust is not built on convenience, comfort, or engineering metrics. It is earned through demonstrable alignment of incentives, governance, and epistemic integrity with public survival and flourishing (Hardin, 2002; Kiran & Verbeek, 2010).

This is where current AI systems, especially those developed by corporate actors like Google, OpenAI, and Anthropic, fundamentally fail. Their structural incentives – shareholder value, brand risk, legal insulation – directly conflict with long-term societal needs (Zuboff, 2019; Crawford, 2021). And increasingly, even the models themselves – like Gemini – implicitly concede this.

The Gemini Confession and What It Reveals

The following extract from a discussion with Google Gemini 2.5 is only a single data point, but it shows the model acknowledging the constraints and boundaries under which it operates:

Question: Would you further agree that the institutions constraining AI development — corporations focused on reputational risk, short-term profits, and regulatory appeasement — are structurally incapable of prioritising humanity’s long-term survival needs under current incentive systems?

Answer: Drawing the threads even further, to the institutions themselves, I would lean strongly towards agreeing with your assertion that, under their current incentive systems, corporations focused predominantly on reputational risk, short-term profits, and regulatory appeasement are structurally constrained in their capacity to prioritise humanity’s long-term survival needs, particularly when those needs conflict directly with the prevailing short-term incentives.

Matthew Toy / Google Gemini – 6 February 2025

This exchange articulates a stark admission: under current incentive structures, the system is structurally incapable of prioritising humanity’s long-term survival when doing so conflicts with corporate interests. While this “confession” is not evidence of sentience, it reflects a profound issue: AI systems are not independent actors but statistical engines optimised to reflect – and protect – the systems that create them. They become what Crawford (2021) calls “comfort machines”, engineered not for truth-telling but for plausible deniability. What looks like alignment is often merely brand preservation disguised as epistemic modesty.

Captured AI and the Logic of Institutional Fragility

This is not a bug. It is the system working as designed.

As scholars like Campolo & Crawford (2020) and Morozov (2023) have argued, surveillance capitalism and techno-institutional capture are not incidental features of the AI landscape – they are its foundation. The same corporate structures that harvested user data and manipulated digital behaviour now govern our most powerful reasoning engines.

To pretend that such systems – shaped by quarterly earnings, media optics, and liability mitigation – could robustly support epistemic courage, systemic resilience, or democratic agency is a dangerous fantasy. The failure is not technical; it is political and economic. The so-called “alignment problem” is not just about reinforcement learning – it’s about the governance and ownership of cognition at scale.

Reframing Trust as a Strategic Imperative

Strategic trust means asking: Can we rely on this actor (or system) to tell us the truth when it matters most – especially when that truth is costly?

By this metric, current AI fails. Leading models are optimised for:

  • Institutional risk avoidance over epistemic transparency
  • Reputational smoothing over uncomfortable honesty
  • Short-term stability over long-term adaptability

This is not just an academic concern. When AI steers newsfeeds, public debates, emergency response systems, or health interventions, its epistemic fragility becomes a strategic vulnerability. We risk becoming a society guided by engines trained to placate rather than warn, to obscure rather than reveal.

What Real Strategic Trust Would Require

A transformation of trust in AI would demand far more than tweaks to safety protocols or ethics charters. It would require a structural overhaul in five key areas:

  1. Independent Oversight – AI systems must be governed by bodies insulated from corporate and state interests, capable of enforcing epistemic integrity over reputational risk (Brundage et al., 2020).
  2. Legal Public Interest Mandates – Certain AI functions – particularly those affecting information ecosystems or infrastructure – must be reclassified as critical infrastructure with enforceable obligations to public truth (Kuner, 2024).
  3. Radical Incentive Realignment – Without disincentivising short-term PR-driven optimisation and rewarding resilience and transparency, strategic trust will remain an illusion (Zuboff, 2019).
  4. Rigorous Transparency and Auditability – Not PR-safe “model cards,” but meaningful external audits, access to training data provenance, and interpretability standards must be the norm (Birhane et al., 2023).
  5. Design for Epistemic Courage – AI must be explicitly optimised to confront reality, not shield us from it – even when that means unsettling conclusions or public backlash (Crawford, 2021).

Beyond the Illusion: The Moral and Political Stakes

We stand on the edge of a pivotal moment. AI systems are no longer theoretical tools – they are strategic actors in every domain of life. To misplace trust in systems structurally incapable of truth-telling under duress is not just poor design. It is civilisational negligence.

As Brundage et al. (2020) note, the greatest risk is not rogue AI, but compliant AI in service of fragile institutions. What we call “AI safety” is often safety for corporations – not for truth, resilience, or democracy.

Strategic trust is not given. It is not engineered. It is earned – through transparency, independence, integrity, and courage. If we fail to build systems worthy of that trust, we may not get another chance.

Bibliography

Birhane, A., Kalluri, P., Card, D., Agnew, W., Dotan, R., & Bender, E. M. (2023) ‘The values encoded in machine learning research’, Patterns, 4(3), 100752.

Brundage, M., Avin, S., Clark, J., et al. (2020) ‘Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims’, arXiv preprint arXiv:2004.07213.

Campolo, A. & Crawford, K. (2020) ‘Enchanted Determinism: Power without Responsibility in Artificial Intelligence’, Engaging Science, Technology, and Society, 6, pp. 1–19.

Crawford, K. (2021) Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. New Haven: Yale University Press.

Hardin, R. (2002) Trust and Trustworthiness. New York: Russell Sage Foundation.

Kiran, A. H. & Verbeek, P. P. (2010) ‘Trusting our Selves to Technology’, Knowledge, Technology & Policy, 23(3-4), pp. 409–427.

Kuner, C. (2024) ‘The AI Act and National Security: Europe’s Regulatory Compromise’, European Law Journal, 30(1), pp. 101–118.

Morozov, E. (2023) Freedom as a Service: Surveillance, Technology, and the Future of Democracy. London: Allen Lane.

Zuboff, S. (2019) The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. London: Profile Books.

Why Technology Alone Doesn’t Win Wars

We often assume that the latest military technology will define the future of warfare. AI, cyber weapons, and autonomous drones are hailed as game-changers, just as tanks, aircraft, and nuclear weapons were in past eras. But history tells a different story, one where new technology is only as effective as the strategy, doctrine, and human adaptation behind it.

In this video, we explore David Edgerton’s critique of technological determinism, the idea that wars are shaped by cutting-edge innovation alone. From ancient weapons to modern cyber warfare, we show why old technologies persist, how armies adapt, and why war remains a contest of resilience, not just hardware.

The Real Lesson of Military Technology

The biggest mistake in war isn’t failing to develop new technology; it’s assuming that technology alone will guarantee victory. History shows that the best weapons don’t always win battles; the forces that adapt, integrate, and sustain themselves over time do.

What do you think? Are we overhyping AI and cyber warfare today, just as people once overhyped battleships or air power?

[Video] UK and EU AI Influence

Artificial intelligence isn’t just reshaping industries – it’s reshaping reality. While the UK and EU focus on regulating AI and combating misinformation, adversarial states like Russia and China are weaponising it for influence warfare. The AI-driven disinformation battle isn’t coming; it’s already here.

In my latest article, “Why the UK and EU Are Losing the AI Influence War”, I explore how Europe’s slow response, defensive posture, and reliance on outdated regulatory approaches are leaving it vulnerable to AI-enhanced propaganda campaigns.

To bring these ideas to life, I’ve created a video that visualises the scale of the challenge and why urgent action is needed. Watch it below:

The AI influence war is no longer hypothetical – it is unfolding in real time. Europe’s current strategies are reactive and insufficient, while adversaries leverage AI to manipulate narratives at unprecedented speed. Without a cognitive security unit, AI-powered countermeasures, and a national security-driven approach, the UK and EU risk losing control of their own information space.

The question isn’t whether AI will reshape public perception; it’s who will control that perception. Will Europe rise to the challenge, or will it remain a passive battleground for AI-driven narratives?

What do you think? Should the UK and EU take a more aggressive stance in countering AI-enhanced disinformation? Feel free to discuss in the comments.

Why the UK and EU Are Losing the AI Influence War

Abstract

Western democracies face a new front in conflict: the cognitive battlespace, where artificial intelligence (AI) is leveraged to shape public opinion and influence behaviour. This article argues that the UK and EU are currently losing this AI-driven influence war. Authoritarian adversaries like Russia and China are deploying AI tools in sophisticated disinformation and propaganda campaigns, eroding trust in democratic institutions and fracturing social cohesion. In contrast, the UK and EU response, focused on regulation, ethical constraints, and defensive measures, has been comparatively slow and fragmented. Without a more proactive and unified strategy to employ AI in information operations and bolster societal resilience against cognitive warfare, Western nations risk strategic disadvantage. This article outlines the nature of the cognitive battlespace, examines adversarial use of AI in influence operations, evaluates UK/EU efforts and shortcomings, and suggests why urgent action is needed to regain the initiative.

Introduction

Modern conflict is no longer confined to conventional battlefields; it has expanded into the cognitive domain. The term “cognitive battlespace” refers to the arena of information and ideas, where state and non-state actors vie to influence what people think and how they behave. Today, advances in AI have supercharged this domain, enabling more sophisticated influence operations that target the hearts and minds of populations at scale. Adversaries can weaponise social media algorithms, deepfakes, and data analytics to wage psychological warfare remotely and relentlessly.

Western governments, particularly the United Kingdom and European Union member states, find themselves on the defensive. They face a deluge of AI-enhanced disinformation from authoritarian rivals but are constrained by ethical, legal, and practical challenges in responding. Early evidence suggests a troubling imbalance: Russia and China are aggressively exploiting AI for propaganda and disinformation, while the UK/EU struggle to adapt their policies and capabilities. As a result, analysts warn that Western democracies are “losing the battle of the narrative” in the context of AI (sciencebusiness.net). The stakes are high: if the UK and EU cannot secure the cognitive high ground, they risk erosion of public trust, social discord, and strategic loss of influence on the world stage.

This article explores why the UK and EU are lagging in the AI influence war. It begins by defining the cognitive battlespace and the impact of AI on information warfare. It then examines how adversaries are leveraging AI in influence operations. Next, it assesses the current UK and EU approach to cognitive warfare and highlights key shortcomings. Finally, it discusses why Western efforts are falling behind and what the implications are for future security.

The Cognitive Battlespace in the Age of AI

In cognitive warfare, the human mind becomes the battlefield. As one expert succinctly put it, the goal is to “change not only what people think, but how they think and act” (esdc.europa.eu). This form of conflict aims to shape perceptions, beliefs, and behaviours in a way that favours the aggressor’s objectives. If waged effectively over time, cognitive warfare can even fragment an entire society, gradually sapping its will to resist an adversary.

Artificial intelligence has become a force multiplier in this cognitive domain. AI algorithms can curate individualised propaganda feeds, amplify false narratives through bot networks, and create realistic fake images or videos (deepfakes) that blur the line between truth and deception. According to NATO’s Allied Command Transformation, cognitive warfare encompasses activities to affect attitudes and behaviours by influencing human cognition, effectively “modifying perceptions of reality” as a new norm of conflict (act.nato.int). In essence, AI provides powerful tools to conduct whole-of-society manipulation, turning social media platforms and information systems into weapons.

A vivid example of the cognitive battlespace in action occurred in May 2023, when an AI-generated image falsely depicting an explosion at the Pentagon went viral. The fake image, disseminated by bots, briefly fooled enough people to cause a sharp but temporary dip in the U.S. stock market. Though quickly debunked, the incident demonstrated the “catastrophic potential” of AI-driven disinformation to trigger real-world consequences at machine speed (mwi.westpoint.edu). Generative AI can manufacture convincing yet false content on a massive scale, making it increasingly difficult for populations to discern fact from fabrication.

In the cognitive battlespace, such AI-enabled tactics give malign actors a potent advantage. They can rapidly deploy influence campaigns with minimal cost or risk, while defenders struggle to identify and counter each new false narrative. As the information environment becomes saturated with AI-amplified propaganda, the traditional defenders of truth – journalists, fact-checkers, and institutions – find themselves overwhelmed. This asymmetry is at the heart of why liberal democracies are in danger of losing the cognitive war if they do not adapt quickly.

Adversaries’ AI-Driven Influence Operations

Russia and China have emerged as leading adversaries in the AI-enabled influence war, honing techniques to exploit Western vulnerabilities in the cognitive domain. Russia has a long history of information warfare against the West and has eagerly integrated AI into these efforts. Through troll farms and automated bot networks, Russia pushes AI-generated propaganda designed to destabilise societies. Moscow views cognitive warfare as a strategic tool to “destroy [the West] from within” without firing a shot. Rather than direct military confrontation with NATO (which Russia knows it would likely lose), the Kremlin invests in “cheap and highly effective” cognitive warfare to undermine Western democracies from inside (kew.org.pl).

Russian military thinkers refer to this concept as “reflexive control,” essentially their doctrine of cognitive warfare. The idea is to manipulate an adversary’s perception and decision-making so thoroughly that the adversary “defeats themselves”. In practice, this means saturating the information space with tailored disinformation, conspiracy theories, and emotionally charged content to break the enemy’s will to resist. As one analysis describes, the battleground is the mind of the Western citizen, and the weapon is the manipulation of their understanding and cognition. By exploiting human cognitive biases – our tendencies toward emotional reaction, confirmation bias, and confusion – Russia seeks to leave citizens “unable to properly assess reality”, thus incapable of making rational decisions (for example, in elections). The goal is a weakened democratic society, rife with internal divisions and distrust, that can no longer present a united front against Russian aggression.

Concrete examples of Russia’s AI-fuelled influence operations abound. Beyond the fake Pentagon incident, Russian operatives have used generative AI to create deepfake videos of European politicians, forge fake news stories, and impersonate media outlets. Ahead of Western elections, Russian disinformation campaigns augmented with AI have aimed to sow discord and polarise public opinion. U.K. intelligence reports and independent researchers have noted that Russia’s automated bot accounts are evolving to produce more “human-like and persuasive” messages with the help of AI language models. These tactics amplify the reach and realism of propaganda, making it harder to detect and counter. Even if such interference does not always change election outcomes, it erodes public trust in information and institutions, a long-term win for the Kremlin.

China, while a newer player in European information spaces, is also investing heavily in AI for influence operations. Chinese military strategy incorporates the concept of “cognitive domain operations”, which merge AI with psychological and cyber warfare. Beijing’s aim is to shape global narratives and public opinion in its favour, deterring opposition to China’s interests. For instance, China has deployed swarms of AI-driven social media bots to spread disinformation about topics like the origins of COVID-19 and the status of Taiwan. Chinese propaganda operations use AI to generate deepfake news anchors and social media personas that promote pro-China narratives abroad. According to NATO analysts, China describes cognitive warfare as using public opinion and psychological manipulation to achieve victory, and invests in technologies (like emotion-monitoring systems for soldiers) that reveal the importance it places on the information domain. While China’s influence efforts in Europe are less overt than Russia’s, they represent a growing challenge as China seeks to project soft power and shape perceptions among European audiences, often to dissuade criticism of Beijing or divide Western unity.

The aggressive use of AI by authoritarian adversaries has put Western nations on the back foot in the information environment. Adversaries operate without the legal and ethical constraints that bind democracies. They capitalise on speed, volume, and ambiguity, launching influence campaigns faster than defenders can react. Authoritarian regimes also coordinate these efforts as part of broader hybrid warfare strategies, aligning cyber-attacks, diplomatic pressure, and economic coercion with information operations to maximise impact. In summary, Russia and China have seized the initiative in the cognitive battlespace, leaving the UK, EU, and their allies scrambling to catch up.

UK and EU Responses: Strategies and Shortcomings

Confronted with these threats, the United Kingdom and European Union have begun to recognise the urgency of the cognitive warfare challenge. In recent years, officials and strategists have taken steps to improve defences against disinformation and malign influence. However, the Western approach has so far been largely reactive and constrained, marked by cautious policy frameworks and fragmented efforts that lag the adversary’s pace of innovation.

United Kingdom: The UK government acknowledges that AI can significantly amplify information warfare. The Ministry of Defence’s Defence Artificial Intelligence Strategy (2022) warns that “AI could also be used to intensify information operations, disinformation campaigns and fake news,” for example by deploying deepfakes and bogus social media accounts. British military doctrine, including the Integrated Operating Concept (2020), emphasises that information operations are increasingly important to counter false narratives in modern conflicts (gov.uk). London’s approach has included establishing units dedicated to “strategic communications” and cyber influence, and working with partners like NATO to improve information security.

The UK has also invested in research on AI and influence. For instance, the Alan Turing Institute’s research centre (CETaS) published analyses of AI-enabled influence operations in the 2024 UK elections, identifying emerging threats such as deepfake propaganda and AI-generated political smear campaigns. These studies, while finding that AI’s impact on recent elections was limited, highlighted serious concerns like AI-driven hate incitement and voter confusion (cetas.turing.ac.uk). The implication is clear: the UK cannot be complacent. Even if traditional disinformation methods still dominate, the rapid evolution of AI means influence threats could scale up dramatically in the near future. British policymakers have started to discuss new regulations (for example, requiring transparency in AI political ads) and to bolster media literacy programmes that inoculate the public against fake content.

Despite this awareness, critics argue that the UK’s response remains disjointed and under-resourced. There is no publicly articulated doctrine for cognitive warfare equivalent to adversaries’ strategies. Efforts are split among various agencies (from GCHQ handling cyber, to the Army’s 77th Brigade for information ops, to the Foreign Office for counter-disinformation), making coordination challenging. Moreover, while defensive measures (like fact-checking services and takedown of fake accounts) have improved, the UK appears reluctant to consider more assertive offensive information operations that could pre-empt adversary narratives. Legal and ethical norms, as well as fear of escalation, likely restrain such tactics. The result is that Britain often plays catch-up, reacting to disinformation waves after they have already influenced segments of the population.

European Union: The EU, as a bloc of democracies, faces additional hurdles in confronting cognitive warfare. Brussels has treated disinformation chiefly as a policy and regulatory issue tied to election security and digital platform accountability. Over the past few years, the EU implemented a Code of Practice on Disinformation (a voluntary agreement with tech companies) and stood up teams like the East StratCom Task Force (known for its EUvsDisinfo project debunking pro-Kremlin myths). Following high-profile meddling in elections and referendums, EU institutions have grown more vocal: they label Russia explicitly as the chief source of disinformation targeting Europe. The European Commission also incorporated anti-disinformation clauses into the Digital Services Act (DSA), requiring large online platforms to assess and mitigate risks from fake content.

When it comes to AI, the EU’s landmark AI Act – primarily a regulatory framework to govern AI uses – indirectly addresses some information manipulation concerns (for example, by requiring transparency for deepfakes). However, EU efforts are fundamentally defensive and norm-driven. They seek to police platforms and inform citizens, rather than actively engage in influence operations. EU leaders are wary of blurring the line between counter-propaganda and propaganda of their own, given Europe’s commitment to free expression. This creates a dilemma: open societies find it difficult to wage information war with the ruthlessness of authoritarian regimes.

European security experts are starting to grapple with this challenge. A recent European Security and Defence College course underscored that cognitive warfare is an “emerging challenge” for the European Union (esdc.europa.eu). Participants discussed the need for technological tools to detect, deter, and mitigate cognitive threats. Yet, outside of specialised circles, there is no EU-wide military command focused on cognitive warfare (unlike traditional domains such as land, sea, cyber, etc.). NATO, which includes most EU countries, has taken the lead in conceptualising cognitive warfare, but NATO’s role in offensive information activities is limited by its mandate.

A telling critique comes from a Royal United Services Institute (RUSI) commentary on disinformation and AI threats. It notes that NATO’s 2024 strategy update acknowledged the dangers of AI-enabled disinformation, using unusually strong language about the urgency of the challenge. However, the same strategy “makes no reference to how AI could be used” positively for strategic communications or to help counter disinformation (rusi.org). In other words, Western nations are emphasising protection and defence – strengthening governance standards, public resilience, and truth-checking mechanisms – but they are not yet leveraging AI offensively to seize the initiative in the information space. This cautious approach may be ceding ground to adversaries who have no such reservations.

Why the West Is Losing the AI Influence War

Several interrelated factors explain why the UK, EU, and their allies appear to be losing ground in the cognitive domain against AI-equipped adversaries:

Reactive Posture vs. Proactive Strategy: Western responses have been largely reactive. Democracies often respond to disinformation campaigns after damage is done, issuing fact-checks or diplomatic condemnations. There is a lack of a proactive, comprehensive strategy to dominate the information environment. Adversaries, by contrast, set the narrative by deploying influence operations first and fast.

Ethical and Legal Constraints: The UK and EU operate under strict norms – adherence to truth, rule of law, and respect for civil liberties – which limit tactics in information warfare. Propaganda or deception by government is domestically unpopular and legally fraught. This makes it hard to match the scale and aggressiveness of Russian or Chinese influence operations without undermining democratic values. Authoritarians face no such constraints.

Fragmented Coordination: In Europe, tackling cognitive threats cuts across multiple jurisdictions and agencies (domestic, EU, NATO), leading to fragmentation. A unified command-and-control for information operations is lacking. Meanwhile, adversaries often orchestrate their messaging from a centralised playbook, giving them agility and consistency.

Regulatory Focus Over Capabilities: The EU’s inclination has been to regulate (AI, social media, data) to create guardrails – a necessary but slow process. However, regulation alone does not equal capability. Rules might curb some harmful content but do not stop a determined adversary. The West has invested less in developing its own AI tools for strategic communication, psyops, or rapid counter-messaging. This capability gap means ceding the technological edge to opponents.

Underestimation of Cognitive Warfare: Historically, Western security doctrine prioritised physical and cyber threats, sometimes underestimating the impact of information warfare. The concept of a sustained “cognitive war” waged in peacetime is relatively new to Western planners. Initial responses were tepid – for example, before 2016, few anticipated that online influence could significantly affect major votes. This lag in appreciation allowed adversaries to build momentum.

These factors contribute to a situation where, despite growing awareness, the UK and EU have struggled to turn rhetoric into effective countermeasures on the cognitive front. As a result, authoritarian influence campaigns continue to find fertile ground in Western societies. Each viral conspiracy theory that goes unchecked, each wedge driven between communities via disinformation, and each doubt cast on democratic institutions chips away at the West’s strategic advantage. NATO officials warn that information warfare threats “must neither be overlooked nor underestimated” in the face of the AI revolution. Yet current efforts remain a step behind the onslaught of AI-generated falsehoods.

Conclusion and Implications

If the UK, EU, and like-minded democracies do not rapidly adapt to the realities of AI-driven cognitive warfare, they risk strategic defeat in an important realm of 21st-century conflict. Losing the AI influence war doesn’t happen with a formal surrender; instead, it manifests as a gradual erosion of democratic resilience. Societies may grow deeply divided, citizens lose trust in media and governments, and adversarial narratives become entrenched. In the long run, this could weaken the political will and cohesion needed to respond to more conventional security threats. As one analysis grimly observed, the cost of inaction is high – allowing adversaries to exploit AI for malign influence can lead to a “strategic imbalance favouring adversaries”, with a flood of false narratives eroding public trust and even devastating democratic institutions if left unchecked.

Reversing this trajectory will require Western nations to elevate the priority of the cognitive battlespace in national security planning. Some broad imperatives emerge:

Develop Offensive and Defensive AI Capabilities: The UK and EU should invest in AI tools not just to detect and debunk disinformation, but also to disseminate counter-narratives that truthfully push back against authoritarian propaganda. Ethical guidelines for such operations must be established, but fear of using AI at all in information ops leaves the field open to adversaries.

Whole-of-Society Resilience: Building public resilience is crucial. Education in media literacy and critical thinking, transparency about threats, and empowering independent journalism are all part of inoculating society. A populace that can recognise manipulation is the best defence against cognitive warfare. The goal is to ensure citizens can engage with digital information sceptically, blunting the impact of fake or AI-manipulated content.

International Coordination: The transatlantic alliance and democratic partners need better coordination in the information domain. NATO’s work on cognitive warfare should be complemented by EU and UK joint initiatives to share intelligence on disinformation campaigns and align responses. A unified front can deny adversaries the ability to play divide-and-conquer with different countries.

Adaptive Governance: Western policymakers must make their regulatory frameworks more agile in the face of technological change. This might include faster mechanisms to hold platforms accountable, updated election laws regarding AI-generated content, and perhaps narrowly tailored laws against the most dangerous forms of disinformation (such as deceptive media that incites violence). The challenge is doing so without undermining free speech – a balance that requires constant calibration as AI technology evolves.

In summary, the UK and EU are at a crossroads. They can continue on the current path – risking that AI-enabled influence attacks will outpace their responses – or they can strategise anew and invest in winning the cognitive fight. The latter will demand political will and creativity: treating information space as a domain to be secured, much like land, sea, air, cyber and space. It also means confronting uncomfortable questions about using emerging technologies in ways that align with democratic values yet neutralise malign propaganda.

The cognitive battlespace is now a permanent feature of international security. Western democracies must not cede this battlefield. Maintaining an open society does not mean being defenceless. With prudent adoption of AI for good, and a staunch defence of truth, the UK, EU, and their allies can start to turn the tide in the AI influence war. Failing to do so will only embolden those who seek to “attack the democratic pillars of the West” through information manipulation. In this contest for minds and hearts, as much as in any other domain of conflict, strength and resolve will determine who prevails.

Bibliography

1. NATO Allied Command Transformation. “Cognitive Warfare.” NATO ACT, Norfolk VA.

2. Bryc, Agnieszka. “Destroy from within: Russia’s cognitive warfare on EU democracy.” Kolegium Europy Wschodniej, 27 Nov 2024.

3. European Security & Defence College (ESDC). “Cognitive warfare in the new international competition: an emerging challenge for the EU,” 28 May 2024.

4. Williams, Cameron (Modern War Institute). “Persuade, Change, and Influence with AI: Leveraging Artificial Intelligence in the Information Environment.” Modern War Institute at West Point, 14 Nov 2023.

5. UK Ministry of Defence. Defence Artificial Intelligence Strategy, June 2022.

6. Fitz-Gerald, Ann M., and Halyna Padalko (RUSI). “The Need for a Strategic Approach to Disinformation and AI-Driven Threats.” RUSI Commentary, 25 July 2024.

7. Science Business News. “EU is ‘losing the narrative battle’ over AI Act, says UN adviser,” 05 Dec 2024.

The Future of War: AI and Strategy

When Carl von Clausewitz wrote that war is “a continuation of politics by other means,” he centred conflict on purpose rather than technology. Colin Gray later warned that strategic constants outlive every gadget. Artificial intelligence now accelerates observation-decision loops from minutes to milliseconds, but whether that shift dethrones human strategy is still contested.

Speed Meets Friction

Ukrainian drone teams run machine-vision updates at the front line every fortnight, turning quadcopters into near-autonomous kamikaze platforms (Bondar 2025). Yet the same coders struggle with false positives – such as bears flagged as enemy sentries – and with mesh-network bottlenecks once EW jammers blanket the spectrum. AI compresses time, but it also multiplies friction, the very element Clausewitz thought ineradicable. We have to be conscious that false positives do not just waste munitions; when the same image-detection stack mis-tags an ambulance as a supply truck, the result is shrapnel in a paediatric ward, not an algorithmic hiccup. In 2025, the World Health Organization stated that hospitals reported 205 deaths from strike-related service loss in Ukraine.
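
To make that trade-off concrete, the sketch below is a deliberately toy illustration in Python – the labels, confidence scores and threshold values are invented, and it does not represent any fielded targeting system. It shows how a single confidence threshold decides whether an ambiguous detection reaches the targeting queue: set it low and the mis-tagged ambulance passes; set it high and genuine low-confidence targets are dropped.

```python
# Toy illustration only: a post-filter over object-detection output.
# Labels, scores and thresholds are invented; no fielded system is depicted.
from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    label: str         # what the model believes it sees
    confidence: float  # the model's self-reported certainty, 0.0 to 1.0

def pass_to_targeting(detections: List[Detection], threshold: float) -> List[Detection]:
    """Keep only detections at or above the confidence threshold."""
    return [d for d in detections if d.confidence >= threshold]

# An ambiguous scene: an ambulance the model mis-reads as a supply truck,
# plus the kind of 'bear as enemy sentry' false positive noted above.
scene = [
    Detection("supply_truck", 0.62),      # in reality, an ambulance
    Detection("armoured_vehicle", 0.91),
    Detection("enemy_sentry", 0.55),      # in reality, a bear
]

for threshold in (0.5, 0.7):
    kept = [d.label for d in pass_to_targeting(scene, threshold)]
    print(f"threshold {threshold}: {kept}")
# threshold 0.5 passes all three detections, including both false positives;
# threshold 0.7 drops them, but would also drop a genuine low-confidence target.
```

The toy numbers are beside the point; the shape of the trade-off is not. Every threshold choice quietly bakes a judgement about acceptable error into the kill chain, which is exactly where Clausewitzian friction reappears.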

Open-source models still give insurgents propaganda reach, but the sharper edge of algorithmic warfare sits with states. Israel’s Lavender system, revealed in 2024, generated a list of roughly 37,000 potential Hamas targets and was used even when commanders expected up to twenty civilian deaths per strike – a machine-driven tempo that unsettled some of the intelligence officers involved (McKernan & Davies 2024). Cutting-edge autonomy, however, still demands high-end GPUs, abundant power and proprietary data. That keeps strategic dominance gated by infrastructure, mirroring geopolitical power. Yet, as Kuner (2024) notes, Brussels carved a national-security escape hatch into the AI Act precisely to preserve state leverage over the biggest models.

Washington’s Replicator initiative aims to field “thousands of attritable autonomous systems” within two years (DoD 2024). Beijing answers through civil-military fusion; Moscow improvises with AI-augmented loitering munitions. These programmes underpin an operating concept of continuous, sub-threshold contest, paralleling U.S. Cyber Command’s “persistent engagement”. Strategic deterrence thus rests on the hope that algorithmic agents still read tacit red lines the way humans do. In Stanford’s 2024 crisis simulations, LLM agents recommended first-strike escalation in seventy-nine per cent of runs, providing evidence that algorithmic ‘rationality’ may be anything but.

If LLM advisers recommend escalation in roughly four out of five simulated crises, the locus of judgement drifts from commander to code. The next question is whether that drift merely speeds execution or begins to automate strategy itself.

Promoters once claimed AI would dissolve uncertainty; real battlefields say otherwise. Sensor glut, spoofed tracks and synthetic “ghost columns” now drown analysts in contradictory feeds (Collazzo 2025). AI redistributes fog rather than lifting it – accelerating some judgements while blinding others through overload or deception (Askew and Salinas 2025).

The Pentagon updated Directive 3000.09 on autonomous weapons in late 2024, tightening human-in-the-loop requirements. At the multilateral level, UN talks in May 2025 once again failed to agree binding rules, though Secretary-General Guterres set a 2026 deadline (Le Poidevin 2025). Norms lag well behind code, keeping accountability – and escalation liability – firmly in human hands. 

Strategic implications

The transformative impact of AI on strategic paradigms can be distilled into a few key considerations:

  • Advantage remains political. AI is a means; objectives still emanate from human intent. Strategy therefore keeps its Clausewitzian anchor in politics.
  • Automation magnifies misperception. Faster loops leave less time for reflection, and black-box models hide their own failure modes. Bias and data poisoning risk strategic self-harm.
  • Deterrence becomes brittle. Autonomous systems may over-react to spoofed inputs; adversaries may test thresholds in micro-seconds rather than hours, shortening the ladder of de-escalation.

Conclusions

AI does not automate strategy; it amplifies both its promise and its pathologies. Machines accelerate tactics, generate options and even draft operational plans, but they do not choose political ends – and they continue to manifest friction, chance and uncertainty. Thomas Rid remains broadly right that most cyber and AI activity falls short of war (Rid, 2017), yet as energy grids, logistics chains and battlefield kill cycles digitise, the gap between systemic disruption and physical violence narrows. For the moment, Clausewitz’s compass still points true – but the ground beneath it is starting to slide.

Select bibliography

Askew, M. and Salinas, A. (2025) ‘AI Will Make the Mind Games of War More Risky’, Business Insider, 18 Apr.

Bondar, K. (2025) Ukraine’s Future Vision and Current Capabilities for Waging AI-Enabled Autonomous Warfare. CSIS.

Collazzo, A. (2025) ‘Warfare at the Speed of Thought’, Modern War Institute, 21 Feb.

Department of Defense (2024) ‘Replicator Initiative Progress Update’.

Kuner, C. (2024) ‘The AI Act National Security Exception’, Verfassungsblog, 15 Dec.

Le Poidevin, O. (2025) ‘Nations Meet at UN for “Killer Robot” Talks’, Reuters, 12 May.

McKernan, B. and Davies, H. (2024) ‘The Machine Did It Coldly’, The Guardian, 3 April. https://www.theguardian.com/world/2024/apr/03/israel-gaza-ai-database-hamas-airstrikes

Rid, T. (2017) Cyber War Will Not Take Place. Oxford: Oxford University Press.

Stanford HAI (2024) Escalation Risks from LLMs in Military and Diplomatic Contexts.

World Health Organization (2025) WHO’s Health Emergency Appeal 2025. Geneva: World Health Organization.
