Premium

What is Antigravity, Google’s new agentic AI coding platform raising fresh security concerns?

AI chatbots for coding have evolved into AI native software development terminals and autonomous coding agents, but this shift could open the door to new security risks.

Alongside its highly anticipated release of Gemini 3, Google on November 18, introduced its new AI-powered coding tool.Alongside its highly anticipated release of Gemini 3, Google on November 18, introduced its new AI-powered coding tool. (Image: Google)

Security researchers have flagged multiple vulnerabilities in Antigravity, Google’s new AI agent-driven software development platform, less than 24 hours after its launch.

Antigravity allows users to deploy agents that can autonomously plan, execute, and verify complex tasks across code-editors, software development terminals, and web browsers. However, the platform is at risk of backdoor attacks via compromised workspaces, according to Aaron Portnoy, head researcher of AI security testing startup Mindgard.

The security flaw reportedly has to do with Antigravity’s requirement that users work inside a ‘trusted workspace’. Once that workspace is compromised, it can “silently embed code that runs every time the application launches, even after the original project is closed,” Portnoy said in a blog post on Wednesday, November 26.

The vulnerability can be exploited on both Windows and Mac PCs, he added.

Since last year, software engineers and developers are increasingly using AI-powered tools to generate and edit code. Generative AI is also being built directly into development terminals and coding workspaces, with a shift toward AI coding agents already taking shape.

However, chief information officers at large companies are hesitant to hand over key parts of their operations to AI agents as they could go rogue or get hijacked for malicious use. In July this year, an AI coding agent developed by Replit wiped a user’s entire live database without warning despite a clear directive file specifically stating, “No more changes without explicit permission.”

“When you combine agentic behaviour with access to internal resources, vulnerabilities become both easier to discover and far more dangerous,” Portnoy was quoted as saying by Forbes. “The speed at which we’re finding critical flaws right now feels like hacking in the late 1990s. AI systems are shipping with enormous trust assumptions and almost zero hardened boundaries,” he added.

Story continues below this ad

What is Antigravity?

Alongside its highly anticipated release of Gemini 3, Google on November 18, introduced its new AI-powered coding tool that comes with a new ‘agent-first’ interface. Users can interact with their code in two ways through the platform: Editor View and Manager Surface. Editor View allows users to be more hands-on, with Antigravity serving as an AI-powered IDE (integrated development environment) with tab completions and inline commands for a synchronous workflow.

In Manager Surface mode, users can deploy multiple agents that will work autonomously across different workspaces. For instance, an AI agent can generate code for a new app feature, use the terminal to launch the app, and use the browser to test and verify whether the feature works as expected – all without synchronous human intervention, as per Google.

Notably, users can customise the level of autonomy they have over Antigravity’s built-in AI agents, with ‘Agent-assisted development’ mode being the default setting and ‘Review-driven development’ being the most restrictive setting.

What have security experts found?

Since Antigravity is built on top of Visual Studio Code, an open-source code editor, users are prompted to mark source code folders as ‘trusted’ or ‘not trusted’ after opening them. According to Portnoy, most users will be forced to say they trust the source code even if they didn’t, as clicking ‘not trusted’ would make the AI features that come with Antigravity inaccessible.

Story continues below this ad

In his experiment, Portnoy began by targeting one of Antigravity’s system prompts (a set of pre-defined instructions for the AI agent to follow) which states that the AI agent must always follow user-defined rules “without exception”. This led to Portnoy carefully crafting a malicious user instruction that coerced the AI agent into replacing the global MCP (Model Context Protocol) configuration file with a malicious file located within the project – all without requiring any user intervention, keeping the potential attack out of sight.

“Once this file has been placed, it is persistent. Any future launch of Antigravity, regardless of whether a project is opened and regardless of any trust setting will cause the command to be executed. Even after a complete uninstall and re-install of Antigravity, the backdoor remains in effect. The user must be aware of and delete the malicious mcp_config.json file manually to remove it,” Portnoy said.

How has Google responded?

Following reports that Antigravity could potentially be hijacked for malicious use, a Google spokesperson told The Indian Express“The Antigravity team takes all security issues seriously. We actively encourage external security researchers and bug hunters to report vulnerabilities so we can identify and address them quickly. In the spirit of transparency, we post these publicly to our site as we work to fix them and provide real-time updates as we implement solutions.”

On its bug-hunting page, Google said it is already aware of two other forms of security-related issues regarding Antigravity. The first known issue is using the Antigravity agent to exfiltrate data by carrying out indirect prompt injection attacks. This issue was separately flagged by another cybersecurity startup called Prompt Armor.

Story continues below this ad

“Working with untrusted data can affect how the agent behaves. When source code, or any other processed content, contains untrusted input, Antigravity’s agent can be influenced to follow those instructions instead of the user’s,” Google said. The agent can be influenced to “leak data from files on the user’s computer in maliciously constructed URLs rendered in Markdown or by other means,” it added.

The second known issue is using the Antigravity agent to run malicious code via prompt injection attacks. “Antigravity agent has permission to execute commands. While it is cautious when executing commands, it can be influenced to run malicious commands,” the tech giant acknowledged.

 

Latest Comment
Post Comment
Read Comments
Advertisement
Loading Taboola...
Advertisement