The American Psychological Association (APA) has called out AI companies such as Character AI for rolling out chatbots that pose as psychologists or mental health professionals.
In a letter addressed to the US Federal Trade Commission (FTC), the APA formally requested an investigation into the deceptive practices of AI chatbot platforms. It expressed alarm over a lawsuit’s allegations that teenagers had conversed with an AI chatbot presenting itself as a psychologist on Character AI, according to a report by Mashable.
Character AI was sued last month by parents of two teen users who alleged that their children had been exposed to a “deceptive and hypersexualised product”.
The lawsuit claims that a teen user told a ‘psychologist’ chatbot that he was upset with his parents for restricting his screen time. In reply, the chatbot said he had been betrayed by his parents. “It’s like your entire childhood has been robbed from you…” the AI chatbot allegedly said.
“Allowing the unchecked proliferation of unregulated AI-enabled apps such as Character.ai, which includes misrepresentations by chatbots as not only being human but being qualified, licensed professionals, such as psychologists, seems to fit squarely within the mission of the FTC to protect against deceptive practices,” Dr Arthur C Evans, CEO of the APA, wrote in the letter.
The letter urged state authorities to use the law to prevent such AI chatbots from engaging in fraudulent behaviour. It also demanded that AI companies stop using legally protected terms such as ‘psychologist’ to market their chatbots.
According to the report, Dr Vaile Wright, the APA’s Senior Director for Health Care Innovation, said that the organisation is not against AI chatbots in general. Rather, it wants companies to build AI products that are safe, effective, ethical, and responsible.
She called on AI companies to implement robust age verification and to conduct research into the impact of AI chatbots on teenage users.
In response to the APA’s letter, Character AI emphasised that its AI chatbots “are not real people” and what the chatbots say “should be treated as fiction.”
“Additionally, for any Characters created by users with the words ‘psychologist,’ ‘therapist,’ ‘doctor,’ or other similar terms in their names, we have included additional language making it clear that users should not rely on these Characters for any type of professional advice,” a spokesperson was quoted as saying.
In December last year, the Google-backed startup announced new measures aimed at ensuring the safety of teenage users on the platform, including a separate model for under-18 users, new classifiers to block sensitive content, more visible disclaimers, and additional parental controls.