Noted neuroscientist and author Mauktik Kulkarni has expressed concern over the mushrooming of fake news sites using Artificial Intelligence and highlighted the capacity of AI to tell lies, during a speech in Bengaluru.
The fake news thus generated witnessed an increase of a whopping 1,000 per cent, and even select articles on legitimate sites such as CNET contained errors, he said, adding that other propaganda sites used AI along with human-written content to push the viewpoints of the Chinese government, for instance.
The misinformation issue had also spread to politics, with deepfake videos morphing shows such as KBC to present a political viewpoint, or with AI-generated speeches created in the voice of long-dead leaders such as Swami Vivekananda, he said during his presentation on ‘Decoding the Landscape: AI’s Initial Impact on Artists, Journalists, and Democratic Institutions’.
Kulkarni pointed out that in this environment traceability of creators would be necessary to fix accountability, but this also raised the question of who would be accountable in case of machine-created content.
Kulkarni said, “The US Copyright Office has finally started reviewing… what you mean by copyright violation. That is the need of the hour.”
On the ability of AI to give distorted information, Kulkarni highlighted an experiment where a GPT-4 AI was used to buy and sell simulated stocks while being overseen by a virtual manager. According to him, in 75 per cent of the tests, the bot in question resorted to insider trading and even lied to the manager about it. “We’ve seen that a human can give a robot a command and it can execute that command, but it can also lie if put under pressure,” he said.
Regarding the applications of AI in visual art, Kulkarni noted that the technology could be used to model which part of a picture the eye would be initially drawn to, aiding in graphic design.
According to the neuroscientist, it had immediate consequences in other arenas as well, with Amazon having had to limit e-publishing to three books a day due to a glut of AI-coauthored books, and sci-fi publisher Clarkesworld having to close submissions entirely after a rise in pitches written using ChatGPT.
On the impact of AI on journalism, Kulkarni spoke of the instance where the New York Times sued ChatGPT owner OpenAI for copyright infringement.
Kulkarni pointed out that remedies to these issues had not kept pace with the problem. He highlighted instances where only a small number of artists had asked for their work to be excluded from the data used by DALL-E 3, an AI image generator; another issue was that if a large proportion of artists and writers came from privileged backgrounds, this would create certain biases in the output.
Kulkarni pointed out that a potential issue in these cases is that if only one AI operator signed such a deal, it might put itself at a disadvantage, as others would simply use the same data without having to pay at all.