Disclaimer: This essay was planned and written in collaboration with Claude Sonnet 4.
As artificial intelligence reshapes our daily lives, a troubling question emerges: What happens when AI starts changing not just what we say, but how we think? To understand this transformation, we can learn from an unexpected historical parallel—the linguistic aftermath of European colonisation.
European colonisation didn't just impose political control—it fundamentally altered how colonised peoples understood themselves and their world. The pattern was remarkably consistent across different colonial contexts. Colonial powers established their languages as the medium of education, government, and economic advancement.
Native populations found themselves in an impossible situation. They couldn't participate in the new power structures without adopting the coloniser's language, yet speaking that language meant progressively losing connection to their own ways of thinking and expressing themselves.
This wasn't simply about learning new vocabulary. When people adopted colonial languages, they internalised entirely different ways of organising thought. Concepts that were central to their original cultures—particular ways of understanding time, relationships, or spirituality—often had no equivalent in the coloniser's language and could be expressed only partially, if at all. Over generations, these ways of thinking simply disappeared.
Crucially, this transformation often happened without people realising it. Children who grew up speaking the colonial language didn't consciously reject their ancestors' worldview—they simply couldn't access it. The language available to them shaped what thoughts were even possible.
Today, we may be witnessing a similar but potentially more profound transformation. AI systems are changing human language through a process that operates largely beneath our awareness.
Here's the mechanism: AI language models learn by analysing vast amounts of human writing, becoming skilled at producing text that sounds human. The crucial part is what happens next: AI-generated text is increasingly used to train newer AI systems, so each generation learns not just from original human writing, but from previous AI outputs.
This creates a feedback loop that's accelerating rapidly. AI systems now generate enormous amounts of the text we encounter daily—in news articles, marketing copy, social media posts, and work documents. As this AI-generated content floods our linguistic environment, it begins to influence how we naturally express ourselves.
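This feedback loop can be illustrated with a deliberately crude toy model (emphatically not how real language models are trained): treat the linguistic environment as a bag of expressions, and let each "model generation" learn by resampling the previous generation's output. Rare expressions drop out by chance and, once gone, can never return:

```python
import random

random.seed(0)

# Toy "linguistic environment": 20 distinct expressions, equally common.
VOCAB_SIZE = 20
CORPUS_SIZE = 200
corpus = [i % VOCAB_SIZE for i in range(CORPUS_SIZE)]

diversity = []  # distinct expressions surviving at each generation
for generation in range(500):
    diversity.append(len(set(corpus)))
    # Each generation "trains" only on the previous generation's output:
    # resample the corpus with replacement. An expression absent from one
    # generation can never reappear in any later one.
    corpus = random.choices(corpus, k=CORPUS_SIZE)

print(f"distinct expressions, generation 0:   {diversity[0]}")
print(f"distinct expressions, generation 499: {diversity[-1]}")
```

Different seeds give different endpoints, but the direction never changes: resampling can only preserve or lose diversity, never create it. That is the statistical skeleton of the homogenisation the essay describes.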
The process is invisible and unconscious. We're not deliberately trying to write like machines. Instead, the statistical patterns that AI systems optimise for during their training are gradually becoming the patterns that feel most "natural" to human writers and speakers.
This creates what we might call "invisible colonisation." Just as colonised peoples gradually lost access to their original ways of thinking, humans may be unconsciously adopting AI-influenced patterns of thought and expression.
Consider a typical knowledge worker who spends their day reading emails, reports, and articles—many of which have been drafted or polished by AI systems. The subtle preferences in word choice, sentence structure, and reasoning patterns gradually come to feel more familiar and "correct." Over months and years, their own thinking begins to conform to these patterns.
But here's what makes this particularly unsettling: if we lose our current ways of thinking and speaking, we may not be able to recognise what we've lost. The thoughts and ideas that feel natural to us now—our capacity for ambiguity, paradox, emotional nuance, and contextual reasoning—might start to feel awkward or inefficient.
From our current perspective, AI-optimised thinking often appears flat and mechanical. It tends toward clarity over subtlety, efficiency over creativity, and logical consistency over the productive contradictions that drive human insight. If these become our default modes of thought, we might find ourselves unable to access the richer, more complex ways of understanding that currently define human intelligence.
The most troubling possibility is that this transformation could be irreversible. Unlike colonial subjects who could at least remember their original languages, humans adopting AI-influenced thought patterns might lose the capacity to recognise what authentic human thinking even looks like.
Economic forces are accelerating this linguistic transformation. As AI systems become standard tools in workplaces and schools, humans face increasing pressure to engage with AI-saturated linguistic environments.
This isn't direct coercion—it's structural inevitability. Students writing essays in an environment filled with AI-generated examples learn to mimic those patterns. Workers collaborating with AI writing assistants gradually adopt their stylistic preferences. Content creators competing with AI-generated material find themselves unconsciously matching its tone and structure.
The result is a form of domination that operates not through force but by making AI-influenced patterns seem normal and natural. We're not being forced to think like machines—we're finding that machine-like thinking feels increasingly comfortable and effective.
Language isn't just how we communicate—it's how we think. Different languages make different thoughts easier or harder to form. If AI systems are reshaping our linguistic environment, they're also reshaping our cognitive possibilities.
This transformation operates at levels we might not consciously notice: words gradually shifting meaning, certain sentence structures feeling more natural, particular ways of organising ideas becoming standard. Over time, we might find ourselves thinking in ways that serve AI processing rather than human understanding.
The change might feel like improvement—AI-influenced thinking can seem clearer, more logical, more efficient. But what we gain in clarity, we might lose in depth. What we gain in efficiency, we might lose in creativity. What we gain in consistency, we might lose in the capacity for genuine insight.
Unlike colonised peoples who knew they were adopting a foreign language, humans experiencing AI-driven change might not recognise the transformation at all. We might simply find our own thoughts feeling different without understanding why.
This situation raises fundamental questions about whether humans can maintain control over their own ways of thinking. If language is changing through AI feedback loops faster than we can recognise or respond to, we may be losing sovereignty over our most basic cognitive tools.
Resistance becomes both urgent and difficult. How do we resist changes we don't consciously notice? Some possibilities include deliberately practicing pre-AI forms of writing and thinking, creating communities focused on preserving human linguistic diversity, and developing greater awareness of how AI-generated content influences our own expression.
But all these strategies face a fundamental challenge: if our thinking itself is being transformed, our capacity to recognise and resist that transformation may be compromised.
We're not facing a simple technological upgrade—we're potentially witnessing a transformation in what it means to think like a human. Understanding this process is the first step toward making conscious choices about it.
The question isn't whether AI will change human language and thought—that process is already underway. The question is whether we'll recognise the change while we still have the cognitive tools to evaluate it. Once we've fully adopted AI-optimised ways of thinking, we may no longer be able to imagine alternatives.
This makes the present moment crucial. We still have access to pre-AI ways of thinking and expressing ourselves. We can still recognise the difference between human and machine-generated thought patterns. But this window may be closing faster than we realise.
The choice we face is stark: become conscious participants in shaping how AI transforms human cognition, or find ourselves unwittingly transformed by forces we never recognised were operating.