A group of researchers have created a new AI worm called ‘Morris II’ that can steal your confidential data, send spam emails and spread malware using various methods. Named after the first worm that rocked the internet back in 1988, the research paper suggests that the generative AI worm can spread itself between artificial intelligence systems.
Morris II can affect generative AI email assistants, extract data from AI-enabled email assistants and even take down security measures of popular AI-powered chatbots like ChatGPT and Gemini. Using self-replicating prompts, the AI worm can easily navigate through AI systems without getting detected.
Ben Nassi from Cornell Tech, Stav Cohen from the Israel Institute of Technology and Ron Button from Intuit say a text prompt infects the email assistant using the large language model using extra data, which is then sent to GPT-4 or Gemini Pro to create text content, which then breaks the safeguards of the generative AI service and steals data.
It also suggests an image prompt method can embed the harmful prompt in a photo so the email assistant automatically forwards messages to infect new email clients. Using Morris II, they were able to mine confidential information like social security numbers and credit card information.
The researchers quickly alerted about their findings to both OpenAI and Google. According to a recent report by Wired, Google refused to give a reply but a spokesperson on OpenAI’s behalf said that they are working on making their systems more secure and that developers should use methods to see they are not working with harmful input.