
Anthropic’s Claude models to be used by US intel agencies in major AI defence deal

Other AI companies are also lining up to ink deals with US defence agencies and contractors.

Anthropic’s deal with US defence agencies comes as it looks to kick off a new funding round. (Image: Reuters)

Anthropic’s Claude series of AI models will soon be used by US intelligence and defence agencies as part of a deal that also involves data analytics firm Palantir and Amazon Web Services (AWS).

“We’re proud to be at the forefront of bringing responsible AI solutions to U.S. classified environments, enhancing analytical capabilities and operational efficiencies in vital government operations,” Kate Earle Jensen, Anthropic’s head of sales, was quoted as saying by TechCrunch.

US defence and intelligence agencies will reportedly be able to access Claude through Palantir’s platform, which is hosted on AWS.

“Access to Claude within Palantir on AWS will equip US defence and intelligence organisations with powerful AI tools that can rapidly process and analyse vast amounts of complex data,” Jensen said.

Specifically, Claude is likely to be made available via Palantir’s defence-accredited environment called Palantir Impact Level 6 (IL6).

The IL6 designation is reportedly issued by the US Department of Defense to indicate that the information in such systems is a step below “top secret” and must have “maximum protection” against unauthorised access.

“This [Claude] will dramatically improve intelligence analysis and enable officials in their decision-making processes, streamline resource intensive tasks and boost operational efficiency across departments,” Jensen added.


Anthropic’s terms of service allow its AI models to be used for various defence purposes such as “legally authorised foreign intelligence analysis,” “identifying covert influence or sabotage campaigns,” and “providing warning in advance of potential military activities.”

“[We will] tailor use restrictions to the mission and legal authorities of a government entity” based on factors such as “the extent of the agency’s willingness to engage in ongoing dialogue,” the Amazon-backed AI startup’s policy reads.

However, this flexibility does not extend to AI systems considered to “substantially increase the risk of catastrophic misuse” or that show “low-level autonomous capabilities,” nor to uses involving disinformation campaigns, the design or deployment of weapons, censorship, domestic surveillance, and malicious cyber operations.

Anthropic’s deal with US defence agencies comes as it looks to kick off a new funding round at a valuation of up to $40 billion, according to reports. So far, the company has raised over $7.6 billion.


Other AI companies are also lining up to ink deals with US defence agencies and contractors.

Last week, Meta said it now permits US government agencies and contractors to use Llama, its open-source family of AI models, for military and national security purposes, according to a report by The New York Times. The tech giant’s policy had earlier reportedly prohibited the use of its AI software for “military, warfare, nuclear” purposes, among other uses.

Tags: Anthropic, Artificial Intelligence