When we begin to sound like AI


To me, language has always felt creative, unpredictable, and full of possibility, and I have often thought of myself as a writer. I read fiction voraciously and wrote stories and poetry from a young age. In Year 3 I attempted to write a novel, with my protagonist, Lesley, emulating the characters I so loved reading about – Carolyn Keene’s Nancy Drew and Enid Blyton’s Famous Five and Secret Seven.

I began this edu flaneuse blog site in 2014, and enjoy writing my way into thinking through those professional things on my mind, gaining more clarity as a result. Writing my PhD thesis and, later, my solo-authored book, Transformational Professional Learning, was a labour of constant learning and problem solving. How could I synthesise the literature? How might I use narrative and creative methods to communicate data? How can I translate research for an audience of educators? Writing and editing collaboratively have helped to hone my writing and to show me how to connect my thoughts to those of others in a kind of symphony of cohesion and difference.

It was through engaging with academic writing during my PhD that I discovered the em dash (—), which quickly became my favourite punctuation mark after the fabulously named interrobang (?!). The long line of the em dash so elegantly allows the inclusion of parenthetical information in sentences, in a cleaner way than through the use of commas, colons or parentheses. And yet now I find myself self-editing the em dash out of my writing, as it has become an ‘AI tell’ thanks to ChatGPT’s enthusiastic use of it.

Reading websites and scrolling through social media captions, we notice the rhetorical moves and linguistic frames of LLM chatbots. Drawn originally from human writing—journalism, academic papers, blog posts, books—they congeal together to form a homogeneity of language. We are all beginning to sound the same and to mimic this kind of writing. Moon et al. (2025) found that, despite LLMs’ potential to enhance individual creativity, the widespread use of LLMs can homogenise language and diminish the collective diversity of creative ideas. Kobak et al. (2025) found that hundreds of words have abruptly increased their frequency in academic writing since ChatGPT became available, with LLMs having an unprecedented effect on scientific writing.

Our feeds are saturated by stylistic repetition, standardisation of voice, and smoothing of identity. Posts are accurate and well-edited, but lack individuality and variety.

We see contrastive frames like “it’s not a, it’s b,” “not c, but d,” and “the real risk isn’t e, it’s f.”

We see announcements of significance like “this distinction matters,” “that matters,” “this is what people miss,” “an uncomfortable truth,” “what is often overlooked is,” and “a simple but powerful idea.”

We see coherence markers such as “at its core,” “the challenge then, is,” “the task is not merely g, but h,” and “over time, a pattern begins to emerge”.

We see softening of arguments such as through “a quiet shift,” “a subtle tension,” “there is a quieter story,” and “a small but significant move.”

These phrases are all drawn from human writing and are not in themselves problematic. We all have patterns in the way we write. What is problematic, for me, is when all our patterns become the same and what is left is a frictionless and profound-sounding rhetoric that does not engage with specificities, difficulties and surprising nuances, where the reassurance of credibility is a façade for shallowness of thought. AI was trained to sound like us, but now we are beginning to sound like AI.

As we accept AI-shaped writing and begin to internalise chatbot cadence, it is worth considering the relationship between language and thought. The overuse of formulaic rhetorical patterns echoes the warning of George Orwell’s concept of Newspeak from the novel 1984 – that a narrowing of language can also narrow what it is possible to think.

I am reminded of the ancient circular symbol of the Ouroboros serpent devouring its own tail – as though language is devouring itself in a never-ending Ouroborosic loop of sameness. Human language has trained the machine. Machine language is saturating human feeds. Humans are adapting our language to the machine’s version of human language. Writing is no longer a creative outlet in which our brains grapple with words and structures, in which we explore the messiness of thought, and in which we discover our voice and what it has the power to say. Rather, language becomes a shortcut to a clean and quick product.

However, the Ouroboros is not only an image of self-destruction. It is also an image of renewal through its endless cycle of ending and beginning, consuming and becoming. We can notice and interrupt the language loops we find ourselves in. We can celebrate idiosyncrasies of thought and voice through our writing with words that are specific to our contexts, our histories and our identities. We can embrace sounding fully like ourselves, even when that is messy, unformed and imperfect. With or without em dashes.

References

Kobak, D., González-Márquez, R., Horvát, E. Á., & Lause, J. (2025). Delving into LLM-assisted writing in biomedical publications through excess vocabulary. Science Advances, 11(27), eadt3813.

Moon, K., Green, A. E., & Kushlev, K. (2025). Homogenizing effect of large language models (LLMs) on creative diversity: An empirical comparison of human and ChatGPT writing. Computers in Human Behavior: Artificial Humans, 100207.

The effects of AI on human cognition and connection


ChatGPT is one of the world’s 10 most-visited websites, and people are increasingly turning to AI to think, write, summarise, plan, counsel and even connect in a social sense. This month the OECD released its report Introducing the OECD AI Capability Indicators, mapping current AI capabilities against the human capabilities of: language; social interaction; problem solving; creativity; metacognition and critical thinking; knowledge, learning and memory; vision; manipulation; and robotic intelligence. The report notes that AI currently lacks advanced reasoning and ethical reasoning capabilities. It adds that AI has weak social perception and struggles to infer social interactions, adjust for the emotional weight of a situation, or wrestle with ambiguity.

Reflecting on the professional moments I experienced this week, those in which I felt most fulfilled were human moments of connection, often filled with emotion and ambiguity. Sitting with parents in conversation about what it means to support young people to flourish in adolescence, at our ‘Thriving in the Middle School’ parent event. Touring an old scholar through the school and hearing her stories of her 1970s education and what continues to resonate for her 50 years later. Announcing the school’s new student leaders and feeling the palpable nervousness and excitement in the auditorium, and the subsequent pride and joy of those elected to leadership positions. Collaboratively solving the newspaper crossword in the staff room with colleagues. Watching students shine in the drama production. These are human experiences that technology cannot replicate.

The increasing use of AI Large Language Models (LLMs) is influencing our capacity for lateral thought, problem solving, creativity and human connection.

During my PhD research I could access publications online, but I needed to read them, synthesise them and analyse them myself. I could get help transcribing interviews, but I needed to sit with my participants, immerse myself in the data, draw out themes over time, and write my way into knowledge and understanding.

As I write this blog post, I am integrating knowledge and exploring ideas. I am thinking and writing my perspective into being in an organic way that engages me in cognition, reflection and construction of argument. I am utilising and connecting my cognitive architecture. If I had used AI to write this post, I would benefit from the outcome, but not the process. There may be less friction between reader and written piece, as LLMs apply consistency of tone, genre and word choice based on programmed patterns. The piece may well have been more logically structured, with sub-headings, bullet points and a predictable cadence of language. It may use a number of em dashes, a favourite punctuation mark of ChatGPT writing. (On a side note, I am disappointed that the em dash has become a ‘tell’ of AI writing, as it is one of my favourite punctuation marks after the interrobang, and ChatGPT’s use of it emerges from the credible human writing, including academic sources, on which the LLM was trained.) My piece may also have been affected by AI’s cultural and linguistic biases (largely US-centric and masculine), and by ‘hallucinations’, in which it makes up information and references.

How does our relationship with reading, writing and thinking change when we can paste swathes of content into an LLM and ask it to provide a neat summary? Or to ‘write a X in the style of Y person’, or to ‘generate an academic report on X topic using Y resources’?

If we get someone else, or AI, to do our reading or writing, we do less thinking. Recent research by a team at MIT explores the ‘cognitive cost’ or ‘cognitive debt’ of using AI to outsource our thinking. While ChatGPT outperforms students on many writing tasks, including essay writing, this study found that students who used ChatGPT produced essays similar to one another. Human assessors described the AI-assisted essays as lengthy, academic-sounding and accurate, but “soulless”. The standard ideas, formulaic approaches and recurring statements reflected an AI homogeneity of argument and an ‘echo chamber’ of ideas that lacked individuality and uniqueness. The research found that AI assistance reduced cognitive load and cognitive friction. This made the task easier, potentially freeing up cognitive resources and allowing the brain to reallocate effort toward executive functions. However, this convenience came at a cognitive cost, as users defaulted to the easy option of finishing the task with minimal effort, rather than critically evaluating the AI-generated output or adding value with their own content. Those who engaged the most brain connectivity and activation, around memory and creative thinking, were in the group who used their ‘brain only’ to write the essay.

We need to consider what we are willing to outsource to technology, and for what purpose. Is our desired result an outcome or a process? Producing or thinking? Output or connection? ‘Done’ or continuously improving? How might AI free us to do more that is human without narrowing our capacity for thought and connection?

As we continue to explore how AI and technologies might replicate human capabilities, we need to lean into our humanity and into what relational human connection and critical thought can continue to offer us. Our shared humanity and our capacity for cognition, emotion, connection, and ethical engagement remain paramount.