Anthropic's Cowork tool is built into its Claude desktop app. (Screenshot: Anthropic)
Anthropic has launched a new agentic AI tool called Cowork that is capable of autonomously taking actions on a user’s desktop, such as creating spreadsheets, editing and organising files, and generating reports from scattered notes.
Cowork is designed as a less advanced version of Claude Code, Anthropic’s AI coding tool that has gained significant traction among developers since its launch in February 2025. Cowork is a simpler way for anyone—not just developers—to work with Claude in much the same way, the AI startup, which is reportedly eyeing a $350 billion valuation, said in a blog post on Monday, January 12.
Anthropic said that Claude in Cowork differs from the AI chatbot in that it has “much more agency” than in a regular chat with a user. “Once you’ve set it a task, Claude will make a plan and steadily complete it, while looping you in on what it’s up to,” it said.
Giving Cowork access to web browsers such as Google Chrome will also enable the AI agent to complete tasks that involve navigating the web. The newly unveiled agentic AI tool is currently being rolled out in research preview with access limited to Claude Max subscribers. It is accessible through the Claude macOS desktop app, with a ‘Cowork’ option in the sidebar.
Anthropic’s new agentic AI tool is the startup’s latest effort to take on rivals such as OpenAI and Google by focusing on serving the general consumer market. So far, the AI startup has prioritised capturing enterprise market share with advanced tools such as Claude Code, which has become one of Anthropic’s most successful products to date. It has also rolled out more tailored versions of Claude to serve specialised sectors, such as Claude for Healthcare, Claude for Financial Services, and Claude for Life Sciences.
While Cowork is built on the same foundations as Claude Code, it has been designed to be easy to set up and use even for less tech-savvy users. “You don’t need to keep manually providing context or converting Claude’s outputs into the right format. Nor do you have to wait for Claude to finish before offering further ideas or feedback: you can queue up tasks and let Claude work through them in parallel,” Anthropic said.
Using existing connectors, Claude in Cowork can link to external data and create documents, presentations, and other files.
In terms of safety and security, Anthropic said that Claude in Cowork can only access folders and connectors chosen by the user. “Claude can’t read or edit anything you don’t give it explicit access to. Claude will also ask before taking any significant actions, so you can steer or course-correct it as you need,” Anthropic said.
However, like other agentic AI tools, Cowork remains susceptible to prompt injection attacks. Acknowledging that Cowork can “take potentially destructive actions (such as deleting local files) if it’s instructed to”, Anthropic suggested that users should give Claude “very clear guidance” in order to prevent the agentic AI tool from misinterpreting their instructions.
“We’ve built sophisticated defenses against prompt injections, but agent safety—that is, the task of securing Claude’s real-world actions—is still an active area of development in the industry,” Anthropic added.
Several agentic AI tools, including AI browser agents such as ChatGPT Atlas and Perplexity’s Comet as well as IDE platforms such as Google’s Antigravity, have been flagged by security researchers as posing security risks and opening the door to new types of threats such as direct or indirect prompt injection attacks.