
The Associated Press (AP) on Wednesday published new standards for the use of generative AI in its newsroom. The organisation, which is behind ‘AP Style’ – the go-to writing and editing reference for English-language publications – listed some fairly basic rules surrounding the emerging technology, concluding that it does “not see AI as a replacement of journalists in any way.”
The AP has allowed journalists to experiment with AI tools like ChatGPT, provided that they exercise caution and do not use them to create publishable content. The organisation also stresses that any output from a generative AI tool should be treated as “unvetted source material,” meaning journalists must apply their editorial judgement and AP’s sourcing standards before using such information. In other words, journalists are required to rigorously fact-check all AI-generated material before using it.
AP recognises these risks and has asked its journalists to “exercise the same caution and skepticism they would normally,” including identifying sources, doing reverse image searches, and checking for similar reports from trusted media.
AP also forbids the use of generative AI to add or subtract any elements from photos, videos, or audio. Certain generative tools, like Adobe’s Firefly-powered Photoshop, can expand images beyond their original bounds and alter their aspect ratios non-destructively, and this rule is likely aimed at those. However, the guidelines do carve out an exception for stories in which AI-generated illustrations or art are the subject – but even then, the image should be labelled as such.
Finally, following the example of many companies that have banned employees from entering sensitive information into AI chatbots, AP has also advised staff not to put confidential or sensitive information into AI tools.
It is noteworthy that while AP has set these standards for its staff, the publication has signed a licensing agreement with OpenAI, allowing the ChatGPT maker to use its news stories to train generative AI models.