OpenAI cemented its position as one of the world's most valuable private companies last week.

OpenAI has reportedly developed a tool that can detect 99.9 per cent of text generated by ChatGPT, but the company is not yet ready to release it, partly out of concern that doing so could drive away users.
According to a report by The Wall Street Journal, OpenAI built the tool almost a year ago but has held off on releasing it amid internal disagreement.
OpenAI watermarks ChatGPT-generated text by subtly adjusting how the model chooses words and phrases as it writes, creating a statistical pattern that human readers cannot spot but the company's self-developed tool can.
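OpenAI has not published how its watermark works, but watermarking research commonly uses a "green list" scheme: the preceding word pseudo-randomly partitions the vocabulary, the generator prefers words from the "green" half, and a detector counts how often transitions land in that half. The sketch below is a hypothetical toy illustration of that idea only, not OpenAI's actual method; all function names are invented for this example.

```python
import hashlib

# Toy "green list" watermark detector (illustrative only; OpenAI's
# real scheme is unpublished and certainly more sophisticated).

def is_green(prev_word: str, word: str) -> bool:
    """Deterministically assign `word` to the green or red half of the
    vocabulary, keyed on the preceding word."""
    digest = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
    return digest[0] % 2 == 0  # roughly half of words are green

def green_fraction(text: str) -> float:
    """Fraction of word transitions that land on the green list.
    Ordinary text hovers near 0.5; a watermarked generator that
    prefers green words would skew noticeably higher."""
    words = text.split()
    if len(words) < 2:
        return 0.0
    hits = sum(is_green(a, b) for a, b in zip(words, words[1:]))
    return hits / (len(words) - 1)

def looks_watermarked(text: str, threshold: float = 0.75) -> bool:
    """Flag text whose green fraction is improbably high."""
    return green_fraction(text) >= threshold
```

Because the partition is keyed on the previous word, local edits only disturb nearby transitions, which is consistent with the report's claim that localised tampering remains detectable while wholesale rewriting defeats the signal.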
According to the report, OpenAI’s detection tool can identify 99.9 per cent of ChatGPT-generated text. However, in a survey conducted by the company, 30 per cent of users said they would use ChatGPT less if such detection software existed.
OpenAI has been quite vocal about watermarking AI-generated content, including text, images, and videos. For images generated using DALL·E 3, the company adds a visible watermark and records the same in the image's metadata, making it easier to identify AI-generated content.
The company says it already has a text watermarking method. For now, however, it is focused on releasing “audiovisual content provenance solutions,” since it considers audio and video to carry a higher level of risk than text.
The company claims the tool can still identify localised tampering, such as paraphrasing parts of the text. However, it cannot reliably detect globally tampered text, for instance text run through a translation tool, reworded by another AI model, or manipulated by asking the model to insert special characters between words that are later deleted. OpenAI also says that releasing an AI text detection tool could stigmatise the use of AI as a helpful writing aid for non-native English speakers.