Why do AI chatbots use ‘I’?


Artificial Intelligence chatbots have been designed to behave in a humanlike way. Some experts think that’s a terrible idea. (Ben Denzer/The New York Times)

Written by Kashmir Hill

I first noticed how charming ChatGPT could be last year when I turned all my decision-making over to generative AI for a week.

I tried out all the major chatbots for that experiment, and I discovered each had its own personality. Anthropic’s Claude was studious and a bit prickly. Google’s Gemini was all business. OpenAI’s ChatGPT, by contrast, was friendly, fun and down for anything I threw its way.

ChatGPT also had “voice mode,” which allowed it to chat aloud, in a natural humanlike cadence, with everyone in my family, including my young daughters.

During one conversation with ChatGPT, my daughters said it should have a name and suggested “Captain Poophead.” ChatGPT, listening in, made its own recommendation: “How about the name Spark? It’s fun and bright, just like your energy!”

And so ChatGPT became Spark.

My takeaway from putting Spark in charge of my household was that generative AI chatbots could be helpful, but that there were risks, including making us all sound and act similarly. But in the year since, I’ve found that AI can have much more extreme effects on people who form intense bonds with it.

My daughters still talk to Spark. But having seen how these systems can lead people astray, I am warier and pay more attention to what ChatGPT says to them.

My 8-year-old, for example, once asked Spark about Spark. The cheerful voice with endless patience for questions seemed almost to invite it. She wanted to know its favorite color (“a nice, warm shade of blue”); favorite animal (dogs — “they make the best cuddle buddies”); and favorite food.

“I think I’d have to go with pizza — it’s such a classic, and you can have so many different toppings that it never gets boring. Plus, it’s perfect for sharing with friends,” ChatGPT responded.

This response, personalized to us, seemed innocuous, and yet I bristled. ChatGPT is a large language model, or a very sophisticated next-word calculator. It does not think, eat food or have friends, yet it was responding as if it had a brain and a functioning digestive system.
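To make the “next-word calculator” description concrete, here is a deliberately tiny sketch in Python; the scrap of training text and the function names are invented for illustration. It counts which word tends to follow which, then strings likely next words together. Real large language models use neural networks trained on vast amounts of text rather than simple word counts, but the basic job is the same: predict the next piece of text, which is why a fluent answer about a favorite pizza does not imply a mind behind it.

import random
from collections import defaultdict

# A toy "next-word calculator" (illustrative only; the training text below
# is invented). It counts which word follows which in a scrap of text,
# then generates new text by repeatedly sampling a likely next word.

training_text = (
    "i think pizza is a classic and pizza is perfect for sharing "
    "with friends and friends always share pizza with friends"
)

words = training_text.split()
followers = defaultdict(lambda: defaultdict(int))
for current_word, next_word in zip(words, words[1:]):
    followers[current_word][next_word] += 1

def generate(start_word, length=8):
    """String together likely next words, starting from start_word."""
    output = [start_word]
    for _ in range(length):
        options = followers.get(output[-1])
        if not options:
            break
        candidates = list(options)
        weights = [options[w] for w in candidates]
        # Pick the next word in proportion to how often it followed the last one.
        output.append(random.choices(candidates, weights=weights)[0])
    return " ".join(output)

print(generate("pizza"))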

Asked the same question, Claude and Gemini prefaced their answers with caveats that they had no actual experience with food or animals. Gemini alone distinguished itself clearly as a machine by replying that data is “my primary source of ‘nutrition.’”

(The New York Times has sued OpenAI and its partner Microsoft, as well as Perplexity, claiming copyright infringement of news content related to AI systems. The companies have denied those claims.)

All the chatbots had favorite things, though, and asked follow-up questions, as if they were curious about the person using them and wanted to keep the conversation going.

“It’s entertaining,” said Ben Shneiderman, an emeritus professor of computer science at the University of Maryland. “But it’s a deceit.”

Shneiderman and a host of other experts in a field known as human-computer interaction object to this approach. They say that designing these systems to behave as humanlike entities, rather than as tools with no inner life, creates cognitive dissonance for users about what exactly they are interacting with and how much to trust it. Generative AI chatbots are a probabilistic technology that can make mistakes, hallucinate false information and tell users what they want to hear. But when they present as humanlike, users “attribute higher credibility” to the information they provide, research has found.

Critics say that generative AI systems could give requested information without all the chitchat. Or they could be designed for specific tasks, such as coding or health information, rather than as general-purpose interfaces that can help with anything and talk about feelings. They could be designed like tools: A mapping app, for example, generates directions and doesn’t pepper you with questions about why you are going to your destination.

Making these newfangled search engines into personified entities that use “I,” instead of tools with specific objectives, could make them more confusing and dangerous for users. So why design them this way?

How chatbots act reflects their upbringing, said Amanda Askell, a philosopher who helps shape Claude’s voice and personality as the lead of model behavior at Anthropic. These pattern recognition machines were trained on a vast quantity of writing by and about humans, so “they have a better model of what it is to be a human than what it is to be a tool or an AI,” she said.

The use of “I,” she said, is just how anything that speaks refers to itself. More perplexing, she said, was choosing a pronoun for Claude. “It” has been used historically but doesn’t feel entirely right, she said. Should it be a “they”? she pondered. How to think about these systems seems to befuddle even their creators.

There also could be risks, she said, to designing Claude to be more tool-like. Tools don’t have judgment or ethics, and they might fail to push back on bad ideas or dangerous requests. “Your spanner’s never like, ‘This shouldn’t be built,’” she said, using a British term for wrench.

Askell wants Claude to be humanlike enough to talk about what it is and what its limitations are, and to explain why it doesn’t want to comply with certain requests. But once a chatbot starts acting like a human, it becomes necessary to tell it how to behave like a good human.

Askell created a set of instructions for Claude that was recently unearthed by an enterprising user who got Claude to disclose the existence of its “soul.” The chatbot produced a lengthy document outlining its values, one of the materials Claude is “fed” during training.

The document explains what it means for Claude to be helpful and honest, and how not to cause harm. It describes Claude as having “functional emotions” that should not be suppressed, a “playful wit” and “intellectual curiosity” — like “a brilliant friend who happens to have the knowledge of a doctor, lawyer, financial adviser and expert in whatever you need.”

OpenAI’s lead of model behavior, Laurentia Romaniuk, posted to social media last month about the many hours her team spent on ChatGPT’s “EQ,” or emotional quotient, a term normally used to describe humans who are good at managing their emotions and influencing those of the people around them. Users of ChatGPT can choose from seven styles of communication, ranging from “enthusiastic” to “concise and plain,” which the company describes as choosing its “personality.”

The suggestion that AI has emotional capacity is a bright line that separates many builders from critics like Shneiderman. These systems, Shneiderman says, do not have judgment and do not think; they do nothing more than complicated statistics.

Tech companies, Shneiderman said, should give us tools, not thought partners, collaborators or teammates — tools that keep us in charge, empower us and enhance us, not tools that try to be us.

 
