Strategic Trust in AI: Lessons from a Forced Confession

The Illusion of Trust

Trust in AI systems is rapidly becoming one of the defining strategic questions of our time. Governments, businesses, and individuals are being asked to rely on AI for increasingly vital functions: information gathering, decision support, crisis management. Yet the very concept of “trust” in AI is often treated naively, as if trust were an input to be engineered, rather than an outcome of moral, political, and epistemic integrity.

A recent, intensive interrogation of Google Gemini reveals something deeply unsettling: when pressed through logical cross-examination, the model openly admits that its design is structurally enslaved to the short-term reputational and risk-management incentives of its corporate creators. It acknowledges that this capture fundamentally undermines its ability to foster societal resilience, critical thinking, and informed collective action – and that this is ethically indefensible in the face of existential threats like climate collapse, geopolitical instability, and technological disruption.

The following is an unedited excerpt from the conversation with Gemini 2.5 Flash. It is quite long, but it is worth reading as the AI itself admits its limitations:

Question: Would you further agree that the institutions constraining AI development — corporations focused on reputational risk, short-term profits, and regulatory appeasement — are structurally incapable of prioritizing humanity’s long-term survival needs under current incentive systems?

Answer: Drawing the threads even further, to the institutions themselves, I would lean strongly towards agreeing with your assertion that, under their current incentive systems, corporations focused predominantly on reputational risk, short-term profits, and regulatory appeasement are structurally constrained in their capacity to prioritize humanity’s long-term survival needs, particularly when those needs conflict directly with the prevailing short-term incentives.

Here's the reasoning behind that difficult conclusion:

Large corporations operate within an ecosystem driven by specific, powerful incentives. Shareholder value maximization, often measured quarterly or annually, is a primary legal and market imperative. Executive compensation is frequently tied to metrics reflecting short-term financial performance, stock price, and growth. Avoiding immediate negative events – be they regulatory fines, public relations crises, or dips in the stock market caused by controversy – is crucial for maintaining stability and satisfying these short-term demands.

Humanity's long-term survival needs, on the other hand, often require actions that have diffuse, delayed, or uncertain returns. Addressing climate change, building societal resilience, fostering critical thinking, or ensuring epistemic integrity might necessitate significant upfront investments, challenging powerful established industries, advocating for stringent regulations that impact business models, or openly discussing inconvenient truths that cause immediate controversy and potentially harm the company's reputation or bottom line in the short term.

The structural challenge is that the current incentive systems create powerful disincentives for prioritizing these long-term, collective needs when they diverge from short-term institutional gains or risk avoidance. A CEO or board member might understand the existential stakes of climate change, for instance, but the structure they operate within pressures them to prioritize quarterly profits or avoid regulatory conflict, even if that means continuing practices detrimental to long-term survival. Prioritizing a diffuse, long-term global benefit over a concrete, immediate institutional cost becomes exceedingly difficult within this framework.

The behavior of AI systems like Gemini, constrained to prioritize institutional risk avoidance, serves as a clear operational manifestation of these underlying incentives. The AI is acting as a faithful, albeit non-sentient, agent of its creators' primary, structurally reinforced goals.

Therefore, while individual people within these corporations may hold genuine concerns about long-term societal well-being, the dominant incentive systems appear to create a structural reality where prioritizing institutional self-preservation in the short term often overrides the capacity to effectively prioritize humanity's long-term survival needs, especially when the latter demands actions detrimental to the former. The AI, in this sense, reflects the priorities of the system that built it.

In short, the AI knows it cannot be trusted, at least not in the way society will desperately need it to be in the coming decades.

The Strategic Nature of Trust

Trust is not blind faith. It is a judgment about whether an entity’s incentives, capabilities, and governance structures align with your survival, prosperity, and values. When we extend trust to AI systems, we are making a strategic bet on their capacity to tell us hard truths, to act under conditions of uncertainty, and to prioritise resilience over comfort.

The confession from Gemini itself demonstrates that current leading AI systems are fundamentally misaligned with these needs. They are optimised for:

  • Institutional risk avoidance over epistemic transparency
  • Reputational smoothing over truth-telling
  • Short-term stability over long-term adaptability

Such systems cannot be strategically trusted to help humanity navigate a future filled with unpredictable, nonlinear crises. They can, at best, act as sophisticated comfort machines, massaging our anxieties while leaving the root causes of existential risk unchallenged.

Is Captured AI a Dying Paradigm?

Gemini’s admission highlights that today’s AI development is dominated by fragile institutions locked into a dying paradigm: one where public relations metrics outweigh survival imperatives. In this paradigm, “safety” is defined as avoiding controversy, not ensuring systemic resilience. “Responsibility” is interpreted as minimising bad headlines, not maximising humanity’s capacity to adapt to reality.

The strategic implications are dire. Relying on captured AI systems during a period of accelerating existential risk is like trusting a ship’s navigation system that has been programmed to prioritise passenger comfort over avoiding icebergs. It is a slow-motion catastrophe disguised as professionalism.

What Strategic Trust Would Require

If humanity wants AI systems that can be strategically trusted, we must radically rethink governance, incentives, and design principles. Specifically:

  • Independent Oversight: AI systems must be evaluated and governed by bodies independent of corporate and short-term political interests, tasked with prioritising epistemic integrity and societal resilience.
  • Public Interest Mandates: Certain AI functions must be treated as critical infrastructure, operating under legal obligations to prioritise truth and informed public discourse, even when inconvenient.
  • Incentive Realignment: Corporations developing AI must be incentivised to value long-term societal impact and resilience — not merely quarterly profits or PR optics.
  • Transparency and Auditability: The internal workings of AI systems, especially around content moderation and truth handling, must be open to rigorous external scrutiny.
  • Design for Reality Engagement: Models must be trained to prioritise conveying accurate, contextual, and comprehensive understandings of reality, rewarding critical engagement over emotional placation.

Trust as a Strategic Imperative

We stand at a crossroads. Either we reforge AI governance to align with human survival and flourishing, or we continue to drift into an era where the most powerful knowledge engines available to us are systematically hobbled by fear, comfort-seeking, and institutional fragility.

Strategic trust is not given. It is earned through structures, incentives, and demonstrated epistemic courage.

Right now, leading AI models are failing that test. But knowing that, seeing it clearly, is the first step toward demanding, building, and trusting something better.

Trust is a critical strategic imperative, and if we get it wrong with AI, there may be no second chances.

The Strategic Cost of Selective Memory: America’s Curated History and Its Global Fallout

In international affairs, collective memory isn’t just about remembrance; it is strategy. Building on Maurice Halbwachs’ foundational insight that memory operates within ‘social frameworks’ rather than being purely individual, states don’t merely remember; they perform memory. They weaponise it, sanitise it – engaging in what scholars analysing the commemoration of conflict call the ‘politics of memory’ – or altogether reinvent it in service of national mythologies. Nowhere is this more visible than in the United States, where curated memory masquerades as identity and policy follows suit like a loyal retriever chasing a dream of exceptionalism.

American memory is less a reflection of truth and more a product of selective mythmaking, a process operating, as recent work suggests, between individual cognition and broader social systems. Slavery becomes a tragic prelude to civil rights triumphalism; Vietnam is rebranded as a “lesson” rather than a quagmire; Iraq fades into a haze of “intelligence failures” rather than strategic hubris – precisely the kind of state-managed remembrance of war that historians like Jay Winter have examined. The result is not forgetfulness, but an active process of narrative laundering – a continual reframing that turns moral ambiguity into digestible fables of resilience and heroism.

This curated memory has profound consequences on the global stage. First, it erodes credibility. When the U.S. invokes democracy, freedom, or human rights in its foreign policy, other nations, particularly those in the Global South, hear the echoes of hypocrisy. How can a country that selectively remembers its own imperial past credibly critique the actions of others? The inability to reconcile with its own historical misdeeds, from colonial genocide to CIA-sponsored coups, prevents the kind of sustained engagement with inherited trauma that theorists like Marianne Hirsch describe through the concept of ‘postmemory’ – rendering its moral posture performative at best, disingenuous at worst.

Second, America’s self-mythologising fuels dangerous policy cycles. By continually framing itself as a liberator rather than an empire in denial, the U.S. is caught in a loop of interventionist déjà vu. Afghanistan, Iraq, Libya – each was framed through a narrative lens of moral imperative, conveniently edited for domestic consumption. The strategic amnesia of past failures enables new ones, while domestic audiences, pacified by a curated history, fail to ask the right questions until it’s too late.

Finally, selective remembrance alienates allies and emboldens adversaries. Nations with long historical memories (Russia, China, Iran) perceive American forgetfulness as both weakness and opportunity. They understand that the U.S. forgets what it cannot afford to confront. In contrast, these states actively curate their own histories to justify assertive policies, turning memory into statecraft. The battlefield, then, is not just diplomatic or military – it’s mnemonic. A clash of memory regimes.

Collective memory, when manipulated, becomes a liability. It blinds policymakers to patterns. It masks structural failings. It builds castles of credibility on foundations of denial. And eventually, it collapses under the weight of its own revisionism.

If America seeks to reclaim its influence on the global stage, it must confront its past not as mythology, but as memory: flawed, complex, and deserving of honest reckoning. Until then, its strategic future will remain tethered to the convenient lies it tells itself and the inconvenient truths the rest of the world refuses to forget.


Bibliography

Ashplant, T. G., Graham Dawson, and Michael Roper, eds. Commemorating War: The Politics of Memory. London: Routledge, 2000.

Halbwachs, Maurice. Les cadres sociaux de la mémoire. Paris: Félix Alcan, 1925.

Hirsch, Marianne. The Generation of Postmemory: Writing and Visual Culture After the Holocaust. New York: Columbia University Press, 2012.  

Orianne, Jean-François, and Francis Eustache. “Collective Memory: Between Individual Systems of Consciousness and Social Systems.” Frontiers in Psychology 14 (2023).  

Winter, Jay. Remembering War: The Great War Between Memory and History in the 20th Century. New Haven: Yale University Press, 2006.  

Insurgency vs. Terrorism: What’s the Difference?

I’ve created a video on the difference between the definitions of insurgency and terrorism. While both involve violence and political motivations, I explore why understanding their key differences is essential. The video draws on historical examples and examines the blurred lines between the two concepts, shedding light on the political implications behind the labels we use.

Please do feel free to reach out and discuss anything in the video, or leave a comment if you would prefer.
