A chatbot able to call out fake news and misinformation persuaded participants in a study to have second thoughts about their beliefs, suggesting that artificial intelligence (AI) can be used as a tool to combat conspiracy theories and disinformation.
The chatbot presented participants with comprehensive answers and detailed arguments, after which they found themselves thinking differently, a change that lasted for at least two months. (‘Durably reducing conspiracy beliefs through dialogues with AI’: Science, September 13, Thomas H Costello and others)
Conspiracy theories are thought to feed off the yearning of individuals for safety and stability in a world full of uncertainties. “What we found in this paper goes against that traditional explanation,” study co-author Thomas H Costello, a psychology researcher at American University in Washington DC, told Nature news, which published a report on the research.
“One of the potentially cool applications of this research is you could use AI to debunk conspiracy theories in real life,” Costello said.
The study shows that many people who strongly believe in “seemingly fact-resistant conspiratorial beliefs” can change their minds when presented with compelling evidence, the researchers wrote.
“From a theoretical perspective, this paints a surprisingly optimistic picture of human reasoning: Conspiratorial rabbit holes may indeed have an exit. Psychological needs and motivations do not inherently blind conspiracists to evidence — it simply takes the right evidence to reach them,” they said.
Studies have shown that almost 1 in every 2 Americans believes in conspiracy theories; the claim that NASA “faked” the 1969 Moon landing, for instance, has endured for decades. During the Covid-19 pandemic, some conspiracy theorists said vaccines were being used to inject chips into the body to enable mass surveillance; in Germany, such ideas triggered violent protests.
With social media dramatically amplifying the voices of conspiracy theorists, some of these bizarre ideas came to have real and serious consequences — vaccine uptake was impacted; in 2016, a North Carolina man who believed a conspiracy theory that top Democratic Party officials were running a paedophile ring opened fire in a pizza shop in Washington DC; and the January 6, 2021 attack on the US Capitol was fuelled by fake news that the presidential election of 2020 had been “stolen”.
The researchers said they sought to “leverage advancements in large language models (LLMs)”, a form of AI that has access to vast amounts of information and can generate bespoke arguments, “to try to directly refute” the particular evidence each study participant cited as supporting their conspiratorial beliefs.
“Across two experiments, 2,190 Americans articulated — in their own words — a conspiracy theory in which they believe, along with the evidence they think supports this theory. They then engaged in a three-round conversation with the LLM GPT-4 Turbo [chatbot], which we prompted to respond to this specific evidence while trying to reduce participants’ belief in the conspiracy theory,” the study says.
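For readers curious about the mechanics, the sketch below shows, purely as an illustration, how a three-round dialogue of this kind could be wired up with the OpenAI Python client: the participant's stated theory and evidence go into the first message, a system prompt asks the model to rebut that specific evidence, and the exchange runs for three rounds. The prompt wording, function names and model identifier here are assumptions for illustration, not details taken from the paper.

```python
# Illustrative sketch only, not the researchers' actual code or prompts.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "A participant believes the conspiracy theory below and has described the "
    "evidence they find most convincing. Respond courteously, address that "
    "specific evidence with factual counter-arguments, and try to reduce the "
    "participant's confidence in the theory."
)

def debunking_dialogue(theory: str, evidence: str, follow_ups: list[str]) -> list[dict]:
    """Run a three-round conversation about one participant's conspiracy theory."""
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f"Theory: {theory}\nMy evidence: {evidence}"},
    ]
    for round_number in range(3):  # three rounds, as in the study design
        reply = client.chat.completions.create(
            model="gpt-4-turbo",  # assumed identifier for GPT-4 Turbo
            messages=messages,
        ).choices[0].message.content
        messages.append({"role": "assistant", "content": reply})
        # Feed in the participant's next turn, if there is one
        if round_number < len(follow_ups):
            messages.append({"role": "user", "content": follow_ups[round_number]})
    return messages
```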
The results were encouraging: across a wide range of conspiracy theories, “the treatment reduced participants’ belief in their chosen conspiracy theory by 20% on average”, and the “effect persisted undiminished for at least 2 months”. Also, the study noted, “AI did not reduce belief in true conspiracies”.