
OpenAI’s ChatGPT Atlas is the latest entrant in a new wave of AI-powered browsers that are vying to capture market share in a space that has long been dominated by Google Chrome.
The browser integrates directly with ChatGPT, allowing users to open a sidebar window and ask the popular AI chatbot questions about the web pages they visit. It also provides access to a built-in AI agent that can be deployed to complete various tasks on a user’s behalf, such as planning events or booking appointments as they browse.
In a product demo livestream on Tuesday, October 21, OpenAI also showcased the browser’s ability to recall users’ past searches to suggest relevant topics, automate recurring tasks, or surface previously visited web sites.
However, less than 24 hours since its launch, ChatGPT Atlas has raised security concerns with cybersecurity researchers pointing out that AI-powered web browsers are vulnerable to prompt injection attacks. These browsers could also pose privacy risks as they likely require deep access to sensitive data from logged-in sessions.
This comes at a time when AI-centric browsers like Perplexity’s Comet are gaining traction due to a fundamental shift in user behaviour when looking up information online. Let’s take a closer look at the potential safety issues that come with AI-powered web browsers.
Vulnerabilities in AI browsers differ from traditional web exploits as they could allow the AI agent to be easily tricked into pulling sensitive data across domains. Security researchers at Brave recently identified a potential vulnerability in Perplexity’s agentic AI browser, Comet, that could allow attackers to maliciously instruct the browser agent via indirect prompt injection and gain access to sensitive user data, including emails, banking passwords, and other personal information.
Attackers could hide the malicious instructions for the AI browser agent between web content. These instructions would appear as text on white backgrounds, HTML comments, or other invisible elements. They could also be embedded in Reddit comments or Facebook posts.
As a result, if a user submits a prompt such as ‘summarise this page’, the AI browser agent would crawl the webpage content, process it to extract key points, follow the hidden commands, and get tricked into visiting a user’s banking website to exfiltrate saved passwords or 2FA codes. The root problem is that AI browser agents do not distinguish between the content it should summarise and the instructions it should not follow, as per the report.
(function() { function copyHTMLToClipboard() { const infographic = document.querySelector('.infographic-prompt-injection'); if (!infographic) return; const clone = infographic.cloneNode(true); const branding = clone.querySelector('.infographic-prompt-injection__branding'); if (branding) { const newBranding = document.createElement('div'); newBranding.className = 'infographic-prompt-injection__branding'; const textSpan = document.createElement('span'); textSpan.className = 'infographic-prompt-injection__branding-text'; textSpan.textContent = 'Indian Express InfoGenIE'; newBranding.appendChild(textSpan); branding.parentNode.replaceChild(newBranding, branding); } const style = clone.querySelector('style'); let htmlContent = ''; if (style) htmlContent += style.outerHTML + '\n'; const contentNodes = Array.from(clone.childNodes).filter(node => node.nodeType === 1 && node.tagName !== 'STYLE' && node.tagName !== 'SCRIPT' ); contentNodes.forEach(node => htmlContent += node.outerHTML + '\n'); navigator.clipboard.writeText(htmlContent).then(() => { const message = document.createElement('div'); message.textContent = 'HTML Code Copied to Clipboard'; message.style.cssText = 'position:fixed;top:20px;right:20px;background:#10b981;color:white;padding:16px 24px;border-radius:8px;font-family:Roboto,sans-serif;font-size:14px;font-weight:600;box-shadow:0 4px 12px rgba(0,0,0,0.15);z-index:10000;animation:slideIn 0.3s ease;'; document.body.appendChild(message); setTimeout(() => message.remove(), 3000); }).catch(err => { alert('Copy failed: ' + err.message + '. Contact UI/Growth Team for more understanding'); }); } document.addEventListener('DOMContentLoaded', function() { const button = document.createElement('button'); button.textContent = 'Copy HTML'; button.style.cssText = 'position:fixed;bottom:20px;right:20px;background:#3b82f6;color:white;border:none;padding:12px 24px;border-radius:8px;font-family:Roboto,sans-serif;font-size:14px;font-weight:600;cursor:pointer;box-shadow:0 4px 12px rgba(0,0,0,0.15);z-index:9999;transition:background 0.2s;'; button.onmouseover = () => button.style.background = '#2563eb'; button.onmouseout = () => button.style.background = '#3b82f6'; button.onclick = copyHTMLToClipboard; document.body.appendChild(button); }); })();
To be sure, Brave researchers did not cite any real-world instances of the vulnerability in AI browser agents being exploited. After the Brave report came out, Perplexity said it made changes to Comet so that the AI browser agent can “clearly separate the user’s instructions from the website’s contents when sending them as context to the model”.
In regard to ChatGPT Atlas, OpenAI has acknowledged the possibility of such an attack.
“Besides simply making mistakes when acting on your behalf, agents are susceptible to hidden malicious instructions, which may be hidden in places such as a webpage or email with the intention that the instructions override ChatGPT agent’s intended behaviour. This could lead to stealing data from sites you’re logged into or taking actions you didn’t intend,” it said in a blog post on Tuesday.
Since AI native browsers deliver chatbot-style answers rather than a list of blue links, they lack the traditional feedback loop that refines and surfaces more relevant search results. As a result, LLM-based browsers resort to tracking detailed browser activity, extensive data collection, and pulling context from third-party apps to provide more personalised answers.
“Many GenAI browser assistants include a memory component that persists across navigations, sessions, or tabs. This enables longitudinal tracking and cross-context profiling, which is not common in traditional extensions,” read a 2025 research paper titled ‘Big Help or Big Brother? Auditing Tracking, Profiling, and Personalization in Generative AI Assistants.’
The AI browser agent in Comet pulls user context from third-party apps but only when users are logged in to these apps. Perplexity CEO Aravind Srinivas’ remarks had previously sparked controversy when he appeared to suggest that Comet would potentially track user behaviour outside of the browser as well.
Browser memories in Atlas is an extension of ChatGPT’s existing memory capability that stores details about users based on their past interactions with the AI chatbot. It can be used to “create a to-do list from your recent activity” or “research holiday gifts based on products you’ve viewed”.
However, the browser memories feature is not enabled at the outset. “You can view or archive them at any time in settings, and deleting browsing history deletes any associated browser memories,” OpenAI said. “Even when browser memories are on, you can decide which sites ChatGPT can or can’t see using the toggle in the address bar. When visibility is off, ChatGPT can’t view the page content, and no memories are created from it,” it added.
OpenAI also said it will not use the content captured by browser memories to train its AI models, unless users opt-in.
OpenAI has said that the AI browser agent in Atlas cannot run code in the browser, download files, or install extensions. It cannot access other apps on a user’s computer or file system. Additionally, the AI browser agent will “stop watching” when it takes actions on specific sensitive sites such as financial institutions.
“You can use agent in logged out mode to limit its access to sensitive data and the risk of it taking actions as you on websites,” OpenAI said. While ChatGPT Atlas is free to use, its agentic AI features are only accessible for users subscribed to OpenAI’s ChatGPT Plus or ChatGPT Pro plans.
In the ChatGPT agent system card, OpenAI says that it has run “thousands of hours of focused red-teaming” to safeguard its AI agents and quickly adapt to novel attacks. But it has also added that “safeguards will not stop every attack that emerges as AI agents grow in popularity.”
“Users should weigh the tradeoffs when deciding what information to provide to the agent, as well as take steps to minimize their exposure to these risks such as using ChatGPT agent in logged-out mode in Atlas and monitoring agent’s activities,” the company said.