With the US gearing up for its presidential election later this year, experts have raised concerns about the use of AI-generated content to spread misinformation and manipulate voters in the country. Recently, former US President Donald Trump posted fake, AI-generated photos of popular musician Taylor Swift and her fans supporting his presidential campaign on social media platform Truth Social.
Amid the rise of such AI-generated deepfakes, state lawmakers in California, US, have introduced over 65 new bills that touch on AI regulation. While most of them are unlikely to be passed, one piece of legislation that is gaining steam and being backed by big AI companies is a new bill called AB 3211.
Titled the ‘California Digital Provenance Standards Bill’, AB 3211 would require tech companies to embed watermarks in the metadata of AI-generated images and videos. Metadata provides essential information such as the origin, context, and history of a piece of text or an image.
“The Legislature should require online platforms to label synthetic content produced by GenAI. Through these actions, the Legislature can help to ensure that Californians remain safe and informed,” the bill reads.
The provenance data added to AI-generated content must include information about the synthetic nature of the content, the name of the generative AI provider, the time and date of when the provenance data was added, and which parts of the content are AI-generated, as per the bill.
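As a rough illustration, the fields the bill enumerates could be collected into a small JSON record that a provider attaches to a file's metadata. The schema, field names, and provider name below are hypothetical; the bill does not prescribe any particular format:

```python
import json
from datetime import datetime, timezone

def make_provenance_record(provider: str, synthetic_parts: list[str]) -> dict:
    """Build a provenance record covering the fields the bill lists.

    Illustrative schema only: AB 3211 names the required information,
    not the field names or serialization.
    """
    return {
        "synthetic": True,                                      # the synthetic nature of the content
        "provider": provider,                                   # name of the generative AI provider
        "recorded_at": datetime.now(timezone.utc).isoformat(),  # time and date the data was added
        "synthetic_parts": synthetic_parts,                     # which parts are AI-generated
    }

record = make_provenance_record("ExampleAI", ["entire_image"])
payload = json.dumps(record)  # ready to embed in, say, an image's metadata
```

In practice a record like this would be serialized into the image or video container itself rather than kept alongside it, which is what makes stripping it (addressed later in the bill) a concern.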
Additionally, the bill requires tech companies to build tools that let users assess whether an image or video has been generated using AI, along with displaying the provenance data attached to that content. These tools are also required to undergo testing to ensure that they cannot be misused to attach fake provenance data to AI-generated content.
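A minimal sketch of the kind of user-facing check the bill describes, assuming provenance travels as a JSON string under a metadata key (a hypothetical layout, not the bill's wording). Note that missing data cannot prove content is human-made, since metadata can be stripped:

```python
import json

def describe_provenance(metadata: dict) -> str:
    """Return an easy-to-understand label from embedded provenance data.

    Hypothetical format: the record is a JSON string stored under a
    'provenance' key. Absent data is reported as absent rather than
    treated as proof the content is human-made.
    """
    raw = metadata.get("provenance")
    if raw is None:
        return "No provenance data found (it may have been removed)."
    record = json.loads(raw)
    if record.get("synthetic"):
        return f"AI-generated content from {record.get('provider', 'unknown provider')}."
    return "No synthetic-content flag present."

label = describe_provenance(
    {"provenance": json.dumps({"synthetic": True, "provider": "ExampleAI"})}
)
```

The bill's requirement that such tools resist being misused to attach fake provenance data is the harder part; a real implementation would need cryptographic signing of the record, not just parsing it.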
One of the key challenges in detecting and identifying AI-generated content is that photos and videos can be stripped of metadata. Addressing this limitation, the bill proposes to ban software applications or tools primarily designed to remove provenance data from synthetic content.
The bill also requires large online platforms such as Instagram or X to label AI-generated content in an “easy-to-understand” format for users. Furthermore, sound recordings and music videos shared on these platforms must be labelled with the name of the artist, the track, and the copyright holder or other licensor information.
ChatGPT developer OpenAI has reportedly backed the draft legislation. “New technology and standards can help people understand the origin of content they find online, and avoid confusion between human-generated and photorealistic AI-generated content,” Jason Kwon, the chief strategy officer of OpenAI, was quoted as saying by Reuters.
Besides OpenAI, Adobe and Microsoft have also thrown their weight behind the bill, according to a report by TechCrunch. Interestingly, AB 3211 was initially opposed by an industry association that counts Adobe and Microsoft among its members. Amendments to the bill propose lower fines for violations: a penalty of $100,000 if the violation is intentional and $25,000 if it is unintentional.
All three of the above tech companies are part of the Coalition for Content Provenance and Authenticity (C2PA) initiative, which aims to establish an industry-wide standard for marking AI-generated content.
Meanwhile, Elon Musk, who owns the social media platform X and AI company xAI, has voiced his support for another California AI bill called SB 1047 that would require tech companies and AI developers to conduct safety testing on some of their own models. As per the latest draft of this bill, developers behind large frontier AI models can be held liable for “critical harms”.