
Generative AI is a powerful tool that cuts both ways in the digital realm. When the technology exploded in popularity following the launch of ChatGPT, experts began to contemplate its implications for cybersecurity. While bad actors have thankfully yet to use it in major exploits, security researchers have been demonstrating clever ways generative AI can be employed to bolster cybersecurity.
A team of researchers from ETH Zürich, the Swiss Data Science Center, and SRI International in New York has developed PassGPT, a new model based on OpenAI’s GPT-2 architecture that can generate and guess passwords. The model is trained on millions of passwords exposed in real breaches, most notably the infamous RockYou leak. Reportedly, PassGPT can guess 20% more unseen passwords than state-of-the-art GAN-based models.
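To make the idea concrete, here is a minimal sketch of how a GPT-2-style causal language model can be sampled to produce candidate passwords. This is not the authors' released code; the base "gpt2" checkpoint and the one-password-per-sequence framing are illustrative assumptions standing in for a model fine-tuned on leaked passwords.

```python
# Minimal sketch: sampling candidate "passwords" from a GPT-2-style causal LM.
# Assumption: the model has been fine-tuned so each password is a standalone
# sequence; the stock "gpt2" checkpoint here is only a placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

# Seed generation with the beginning-of-sequence token and sample freely.
input_ids = torch.tensor([[tokenizer.bos_token_id]])

with torch.no_grad():
    outputs = model.generate(
        input_ids,
        do_sample=True,          # stochastic sampling, not greedy decoding
        max_new_tokens=12,       # passwords are short token sequences
        top_k=50,
        num_return_sequences=5,  # draw several candidates at once
        pad_token_id=tokenizer.eos_token_id,
    )

for seq in outputs:
    candidate = tokenizer.decode(seq, skip_special_tokens=True)
    print(candidate.strip())
```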
The creator of PassGPT, Javi Rando, said that the model can also compute the probability of any given password and analyse its strength and vulnerabilities. He added that the model can flag patterns that conventional strength checks rate as strong, but that are actually easy to guess with generative techniques. It can also handle passwords in different languages and guess entirely new passwords that do not appear in its training data.
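The probability idea Rando describes amounts to scoring a string by its log-likelihood under the language model: the higher the likelihood, the easier it is to reach by generative guessing. The sketch below illustrates this under the same assumption as before, with the stock "gpt2" checkpoint standing in for the actual PassGPT weights.

```python
# Sketch: score a candidate password by its log-likelihood under the LM.
# A higher (less negative) score means the model finds the string more
# guessable, even if it looks "strong" to conventional rule-based checks.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def password_log_prob(password: str) -> float:
    """Sum of per-token log-probabilities of the password under the LM."""
    ids = tokenizer(password, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # Each position predicts the NEXT token, so shift logits against targets.
    log_probs = F.log_softmax(logits[:, :-1], dim=-1)
    targets = ids[:, 1:]
    token_lp = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    return token_lp.sum().item()

# A pattern-following password can score far higher than a random one,
# which is exactly the weakness generative guessing exploits.
for pw in ["P@ssw0rd!", "xk7#qZ2&vLm9"]:
    print(pw, password_log_prob(pw))
```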
PassGPT is an example of how LLMs can be adapted to different domains and applications using specialised data sources. And it isn’t the first time those on the right side of the law have trained generative AI on illicit data. Previously, researchers trained an AI model called DarkBERT on dark web text to detect ransomware leak sites and monitor unlawful information exchange.