Since at least the mid-15th century, when the printing press was invented, every new technology for creating and disseminating the written word has been met with a wave of anxiety. Pens, typewriters, computers, the internet and search engines and, most recently, large language models (LLMs) like ChatGPT: each was seen by its critics as somehow diluting the purity of the relationship between thought and word. This anxiety has peaked with the sudden and widespread rise of LLMs and their ubiquity in knowledge production, especially among students and researchers. Unlike earlier technologies, which either assisted the physical act of writing or, as with search engines, made research and referencing easier, AI models can “think” for the user. According to a study conducted at the Massachusetts Institute of Technology (MIT), published earlier this month, using ChatGPT for writing incurs a considerable “cognitive debt”.
Nataliya Kosmyna and other researchers divided test subjects, who had to write essays, into three groups: those who used only their brains to write, those assisted by search engines, and those who used ChatGPT. Participants also switched groups during the study to make the results more robust. The neural activity of all three groups was monitored over four months. The group using LLMs showed considerably lower cognitive engagement with their writing, felt less ownership over their work and remembered less of it than their counterparts. For many, these findings confirm the broader panic around AI. The fear that AI will replace intellectual labour, as automation did in manufacturing, is exacerbated by evangelists like OpenAI CEO Sam Altman. In a recent essay, Altman wrote, “ChatGPT is already more powerful than any human that has ever lived.”
Both the MIT study and Altman might be overestimating the consequences of AI. While relying entirely on an LLM might have adverse cognitive effects, there are ways to use it well. The act of writing, good writing at least, is not about regurgitating facts but about learning how to collate, analyse and express ideas. That training is important for intellectual development, but it can still incorporate new tools. The stage at which AI is integrated into learning also matters: school students, for example, are still taught long division even though they will likely use a calculator in adulthood. LLMs can be useful for “language” tasks, such as correcting grammar, summarising texts and helping with tone and tenor, without replacing or diminishing the author. The issue is not whether to use AI but how to use it. The rapid growth of AI means that research into its effects is still playing catch-up. The lessons of the social media boom, and the problems that appeared in its wake, underline the importance of narrowing that gap.