Death of the typo: how AI chatbots are changing the way we communicate

AI’s ability to convey our thoughts effectively cannot be denied. But here is the problem: we all sound the same.

There was a time when you could figure something out about whoever was on the other side of the screen from the way they typed. Too many typos meant perhaps they were in a hurry, or perhaps they were typing on a new keyboard. Their choice of words, the way they structured their sentences, the way they broke their paragraphs (or didn’t) revealed a little bit about them. Words had emotion. There was excitement and confidence, but also hesitation and uncertainty.

Now? The typo has had a quiet death. No one writes ‘beleive’ instead of ‘believe’. The sentences are thought out. The structure flows seamlessly. Everyone sounds polished and put together. Somehow, grammar is no longer a skill, but a given.

As LLM chatbots take over as the world’s writing assistants, online communication, whether it’s a LinkedIn post, a Substack essay, or even an internal email, has developed a certain vocabulary and cadence that is replicated everywhere. It may be hard to put into words, but we all know how it sounds. The “it’s not xyz, but abc” construction, and the short, hard-hitting sentence right after a long one. Then there is the notorious em dash. Of course, it isn’t a trademark of AI writing, but its appearance in WhatsApp messages and X replies — beyond the realm of professional or academic writing — suggests a growing reliance on chatbots to improve texts.

A paper titled ‘How people use ChatGPT’, released by OpenAI in September 2025, states that writing was one of the three most common conversation topics, the other two being “practical guidance” and “seeking information”. Together, they account for nearly 80 per cent of all conversations.

The paper elaborates that writing includes automated production of emails, documents, and other communications, as well as editing, critiquing, summarising, and translating text provided by the user. “Writing is the most common use case at work, accounting for 40 per cent of work-related messages on average in June 2025,” it says.

My qualm isn’t that people sound fluent in the language. In fact, AI’s ability to communicate our thoughts effectively cannot be denied. But here is the problem: we all sound the same.

Chatbot-written texts model the best of human behaviour. The texts are politically correct; they have a certain kind of restraint. They gloss over human emotion or double down on it, depending on your prompt.

A wedding invite sounds similar to a brand’s marketing campaign. A long social media post could rival a CEO’s ghost-written message to employees. And when this happens, writing becomes a mere tool of communication. It is no longer reflective. There was a time when writing could be almost meditative. Imagine writing a letter to your loved one. The AI chatbot doesn’t understand the nostalgia you share with the recipient or the yearning to meet them. Its response is likely to be a mere skeleton of what your emotions could reveal.

And this influence isn’t limited to our writing. LLMs may be changing the way we speak!

Researchers at Cornell University scoured hours of YouTube videos and podcast episodes on a range of topics and found an “increased use of words preferentially generated by ChatGPT”, such as “delve”, “comprehend”, “boast”, “swift”, and “meticulous”.

The paper calls it the “beginning of a closed cultural feedback loop,” where humans and machines exchange cultural traits. Chatbots, trained on human data, evolved their own cultural traits and are now shaping human behaviour.

Adam Aleksic, author of ‘Algospeak: How Social Media Is Transforming the Future of Language’, writes in the Washington Post that this feedback loop makes it difficult to discern between human and AI-generated language. “And as AI models continue to be trained on both their own output and the AI-influenced human writing they ingest, the pervasiveness of LLM-speak will only intensify,” he writes.

He warns that with racial, gender, or political biases coded into these chatbots, their influence could extend well beyond communication and into our thinking.

LLM chatbots come with more warnings. They are known to hallucinate and lie, which means any text generated through them should ideally be double-checked. And while AI firms roll out newer models touted as matching human intelligence, the fact is that chatbots are yet to reason the way we do. At best, what they do is sophisticated pattern matching.

The awareness of this drawback is apparent in ChatGPT’s usage. The OpenAI paper notes that two-thirds of writing tasks ask the chatbot to “modify” user text rather than create new text from scratch.

So, for a while at least, perhaps there is a saving grace in the separation of man and machine. The chatbot only works if it is fed a prompt. It can only polish what we have already expressed. Even if it could ‘act’ more human, it can’t ‘be’ human.

My take? Let’s make ‘mistaks’. Show when we are angry, or happy, or sad. Trail off mid-sentence… Let’s keep it messy.

Sonal Gupta is a Deputy Copy Editor on the news desk. She writes feature stories and explainers on a wide range of topics, from art and culture to international affairs. She also curates the Morning Expresso, a daily briefing of the day’s top stories, which won gold in the ‘best newsletter’ category at the WAN-IFRA South Asian Digital Media Awards 2023. She also edits our newly launched pop culture section, Fresh Take.
