Google says the new policy will be effective from January 2024. (Image Source: Google)

Google has announced that apps like ChatGPT, Bing and others that use generative AI must include an in-app system for users to report offensive content.
The new policy will be enforced from January 2024 and requires app developers to use these user reports to stop their apps from generating ‘restricted content’ and from engaging in ‘deceptive behaviour’. With people adopting generative AI apps at an exponential rate, it looks like Google wants to curb explicit content generated by these apps.
Last year, the popular AI-powered photo editor app Lensa was used to generate explicit content. More recently, Microsoft’s Bing Image Creator, powered by DALL-E 3, came under fire after it generated an image of Mickey Mouse flying a plane towards two towers with a gun in hand.
Google is also making some changes to the permissions required by generative AI apps like ChatGPT. The tech giant says “apps will only be able to access photos and videos for purposes directly related to app functionality”. AI apps like ChatGPT, which do not need broad storage access to work properly but often request photo or video access anyway, will soon have to rely on Google’s system photo picker instead.
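For developers, that shift generally means replacing broad media permissions with the Android system photo picker. The snippet below is a minimal sketch, assuming the AndroidX Activity PickVisualMedia contract, of how an app might let a user attach a single image to an AI prompt without requesting any storage permission; names outside that API (such as sendImageToModel) are illustrative, not part of any real app.

```kotlin
import android.net.Uri
import android.os.Bundle
import android.widget.Toast
import androidx.activity.result.PickVisualMediaRequest
import androidx.activity.result.contract.ActivityResultContracts
import androidx.appcompat.app.AppCompatActivity

class ImagePromptActivity : AppCompatActivity() {

    // System photo picker: the user chooses one image and the app receives
    // a temporary URI grant -- no READ_MEDIA_IMAGES / storage permission needed.
    private val pickImage =
        registerForActivityResult(ActivityResultContracts.PickVisualMedia()) { uri: Uri? ->
            if (uri != null) {
                sendImageToModel(uri) // hypothetical app-specific handler
            } else {
                Toast.makeText(this, "No image selected", Toast.LENGTH_SHORT).show()
            }
        }

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // Launch the picker restricted to images only.
        pickImage.launch(
            PickVisualMediaRequest(ActivityResultContracts.PickVisualMedia.ImageOnly)
        )
    }

    private fun sendImageToModel(uri: Uri) {
        // Placeholder: pass the content URI to the app's generative AI pipeline.
    }
}
```

Because the picker returns a content URI with a temporary read grant, this covers the common generative AI use case of attaching a photo to a prompt without the app ever holding a standing media permission.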
The tech giant is also changing how apps display full-screen notifications. Currently, many apps abuse the feature to push users into buying a subscription or making an in-app purchase. Once the new policy kicks in, Google says the functionality will be limited, and apps that want to display full-screen notifications will need a special app access permission.
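On Android 14 and later, this special access corresponds to the USE_FULL_SCREEN_INTENT permission being user-controllable. The sketch below shows, under the assumption that the API 34 calls NotificationManager.canUseFullScreenIntent() and Settings.ACTION_MANAGE_APP_USE_FULL_SCREEN_INTENT behave as described in the platform docs, how an app might check for the access and send the user to the relevant settings screen; it is an illustration rather than a definitive implementation.

```kotlin
import android.app.NotificationManager
import android.content.Context
import android.content.Intent
import android.net.Uri
import android.os.Build
import android.provider.Settings

// Checks whether the app may post full-screen intent notifications and,
// if not, opens the system settings page where the user can grant the
// special app access. Assumes Android 14 (API 34) APIs.
fun ensureFullScreenIntentAccess(context: Context) {
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.UPSIDE_DOWN_CAKE) {
        val nm = context.getSystemService(NotificationManager::class.java)
        if (!nm.canUseFullScreenIntent()) {
            val intent = Intent(Settings.ACTION_MANAGE_APP_USE_FULL_SCREEN_INTENT).apply {
                // Point the settings screen at this app's entry.
                data = Uri.parse("package:${context.packageName}")
                addFlags(Intent.FLAG_ACTIVITY_NEW_TASK)
            }
            context.startActivity(intent)
        }
    }
    // On older Android versions the USE_FULL_SCREEN_INTENT manifest
    // permission is granted at install time, so there is nothing to check.
}
```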