In a first, OpenAI – the company behind ChatGPT – said it disrupted a covert influence campaign originating in Israel that used its models to generate pro-Congress and anti-Bharatiya Janata Party (BJP) content and spread it online in a bid to disrupt the ongoing election process.
OpenAI said that in May, this network used its AI models to generate “large quantities of short comments that were then posted across Telegram, X, Instagram and other sites”.
As per a report released by the company on Thursday, the operation was run by a commercial company in Israel called ‘STOIC,’ which was generating content about the Gaza conflict, and to a lesser extent about the Histadrut trade unions organisation in Israel and the Indian elections. The company said it had nicknamed the operation “Zero Zeno”.
OpenAI’s report, titled ‘AI and Covert Influence Operations: Latest Trends’, is the first of its kind from the company and offers a glimpse into how actors in cyberspace are using artificial intelligence (AI) in their efforts to manipulate the public. The company also disrupted similar operations originating from China, Russia and Iran.
The report immediately drew a reaction from the BJP, with Minister of State for Electronics and IT Rajeev Chandrasekhar calling it a “dangerous threat to our democracy”. “It is absolutely clear and obvious that BJP was and is the target of influence operations, misinformation and foreign interference, being done by and/or on behalf of some Indian political parties,” he said on X.
“This is a very dangerous threat to our democracy. It is clear vested interests in India and outside are clearly driving this and needs to be deeply scrutinised/investigated and exposed,” he added.
The company said this operation targeted audiences in Canada, the United States and Israel with content in English and Hebrew. In early May, it began targeting audiences in India with English-language content as well, OpenAI said. The company said that it disrupted some of its activity focused on India’s elections in less than 24 hours after it began.
“…the network began generating comments that focused on India, criticised the ruling BJP party and praised the opposition Congress party,” OpenAI said.
OpenAI’s investigations, as per the company, showed that, while the actors behind these operations sought to generate content or increase productivity using its models, these campaigns did not appear to have “meaningfully increased their audience engagement or reach as a result of their use of our services”. Many accounts have already been disabled by Meta and X, so current engagement figures may not present the complete picture, it added.
“Using the Breakout Scale to assess the impact of IO, which rates them on a scale of 1 (lowest) to 6 (highest), we would assess this as a Category 2 operation, marked by posting activity on multiple platforms and websites, but with no evidence of it being significantly amplified by people outside the network,” OpenAI said.
The report surveys campaigns by threat actors that have used its products to further covert influence operations. The company defines such operations as “deceptive attempts to manipulate public opinion or influence political outcomes without revealing the true identity or intentions of the actors behind them”.
“While we observed these threat actors using our models for a range of influence operations, they all attempted to deceive people about who they were or what they were trying to achieve,” OpenAI said.