
ChatGPT is one of the world’s 10 most-visited websites and people are increasingly turning to AI to think, write, summarise, plan, counsel and even connect in a social sense. This month the OECD released its Introducing the OECD AI Capability Indicators Report, mapping current AI capabilities against the human capabilities of: language; social interaction; problem solving; creativity; metacognition and critical thinking; knowledge, learning and memory; vision; manipulation; and robotic intelligence. The report notes that AI currently lacks advanced reasoning and ethical reasoning capabilities. It adds that AI has weak social perception and struggles to infer social interactions, adjust for the emotional weight of a situation, or wrestle with ambiguity.
Reflecting on the professional moments I experienced this week, those in which I felt most fulfilled were human moments of connection, often filled with emotion and ambiguity. Sitting with parents in conversation about what it means to support young people to flourish in adolescence, at our ‘Thriving in the Middle School’ parent event. Touring an old scholar through the school and hearing her stories of her 1970s education and what continues to resonate for her 50 years later. Announcing the school’s new student leaders and feeling the palpable nervousness and excitement in the auditorium, and the subsequent pride and joy of those elected to leadership positions. Collaboratively solving the newspaper crossword in the staff room with colleagues. Watching students shine in the drama production. These are human experiences that technology cannot replicate.
The increasing use of AI Large Language Models (LLMs) is influencing our capacity for lateral thought, problem solving, creativity and human connection.
During my PhD research I could access publications online, but I needed to read them, synthesise them and analyse them myself. I could get help transcribing interviews, but I needed to sit with my participants, immerse myself in the data, draw out themes over time, and write my way into knowledge and understanding.
As I write this blog post, I am integrating knowledge and exploring ideas. I am thinking and writing my perspective into being in an organic way that engages me in cognition, reflection and construction of argument. I am utilising and connecting my cognitive architecture. If I had produced this post using AI to write it, I would benefit from the outcome, but not the process. There may be less friction between reader and written piece, as LLMs apply consistency of tone, genre and word choice based on programmed patterns. The piece may well have been more logically structured, with sub-headings, bullet points and a predictable cadence of language. It may use a number of em dashes, a favourite punctuation mark of ChatGPT writing. (On a side note, I am disappointed that the em dash has become a ‘tell’ of AI writing as it is one of my favourite punctuation marks after the interrobang, and ChatGPT’s use of it emerges from the credible human authorship, including academic sources, on which the LLM is trained). My piece may have been affected by AI’s cultural and linguistic biases (largely US-centric and masculine), and ‘hallucinations’, in which it makes up information and references.
How does our relationship with reading, writing and thinking change when we can paste swathes of content into an LLM and ask it to provide a neat summary? Or to ‘write a X in the style of Y person’ or to ‘generate an academic report on X topic using Y resources’?
If we get someone else, or AI, to do our reading or writing, we do less thinking. This recent research by a team at MIT explores the ‘cognitive cost’ or ‘cognitive debt’ of using AI to outsource our thinking. While ChatGPT outperforms students on many writing tasks including essay writing, this study found that students who used ChatGPT produced essays similar to one another. Human assessors described the AI-assisted essays as lengthy, academic-sounding and accurate, but “soulless”. The standard ideas, formulaic approaches and recurring statements reflected an AI homogeneity of argument and ‘echo chamber’ of ideas that lacked individuality and uniqueness. The research found that AI assistance reduced cognitive load and reduced cognitive friction. This made the task easier, potentially freeing up cognitive resources to allow the brain to reallocate effort toward executive functions. However, this convenience came at a cognitive cost as users defaulted to the easy option of the task being finished with minimal effort, rather than critically evaluating the AI-generated output or adding value with their own content. Those who engaged the most brain connectivity and activation, around memory and creative thinking, were in the group who used their ‘brain only’ to write the essay.
We need to consider what we are willing to outsource to technology, and for what purpose. Is our desired result an outcome or a process? Producing or thinking? Output or connection? ‘Done’ or continuously improving? How might AI free us to do more that is human without narrowing our capacity for thought and connection?
As we continue to explore how AI and technologies might replicate human capabilities, we need to lean into our humanity and into what relational human connection and critical thought can continue to offer us. Our shared humanity and our capacity for cognition, emotion, connection, and ethical engagement remain paramount.