This is an archive article published on July 22, 2023

Pressured by Biden, AI companies agree to guardrails on new tools


US President Joe Biden

Written by Michael D. Shear, Cecilia Kang and David E. Sanger

Seven leading AI companies in the United States have agreed to voluntary safeguards on the technology’s development, the White House announced Friday, pledging to manage the risks of the new tools even as they compete over the potential of artificial intelligence.

The seven companies — Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI — formally announced their commitment to new standards in the areas of safety, security and trust at a meeting with President Joe Biden at the White House on Friday afternoon.

“We must be cleareyed and vigilant about the threats emerging technologies can pose — don’t have to but can pose — to our democracy and our values,” Biden said in brief remarks from the Roosevelt Room at the White House.

“This is a serious responsibility. We have to get it right,” he said, flanked by the executives from the companies. “And there’s enormous, enormous potential upside as well.”

The announcement comes as the companies are racing to outdo each other with versions of AI that offer powerful new ways to create text, photos, music and video without human input. But the technological leaps have prompted fears about the spread of disinformation and dire warnings of a “risk of extinction” as artificial intelligence becomes more sophisticated and humanlike.

The voluntary safeguards are only an early, tentative step as Washington and governments across the world seek to put in place legal and regulatory frameworks for the development of artificial intelligence. The agreements include testing products for security risks and using watermarks to make sure consumers can spot AI-generated material.


But lawmakers have struggled to regulate social media and other technologies in ways that keep pace with rapid innovation.

“In the weeks ahead, I’m going to continue to take executive action to help America lead the way toward responsible innovation,” Biden said. “And we’re going to work with both parties to develop appropriate legislation and regulation.”

The White House offered no details of a forthcoming presidential executive order that aims to deal with another problem: How to control the ability of China and other competitors to get ahold of the new AI programs, or the components used to develop them.

The order is expected to involve new restrictions on advanced semiconductors and on the export of large language models. Those are hard to secure — much of the software can fit, compressed, on a thumb drive.


An executive order could provoke more opposition from the industry than Friday’s voluntary commitments, which experts said were already reflected in the practices of the companies involved. The promises won’t restrain the plans of the AI companies nor hinder the development of their technologies. And as voluntary commitments, they won’t be enforced by government regulators.

“We are pleased to make these voluntary commitments alongside others in the sector,” Nick Clegg, the president of global affairs at Meta, the parent company of Facebook, said in a statement. “They are an important first step in ensuring responsible guardrails are established for AI and they create a model for other governments to follow.”

As part of the safeguards, the companies agreed to security testing, in part by independent experts; research on bias and privacy concerns; information sharing about risks with governments and other organizations; development of tools to fight societal challenges such as climate change; and transparency measures to identify AI-generated material.

In a statement announcing the agreements, the Biden administration said the companies must ensure that “innovation doesn’t come at the expense of Americans’ rights and safety.”


“Companies that are developing these emerging technologies have a responsibility to ensure their products are safe,” the administration said in a statement.

For the companies, the standards described Friday serve two purposes: as an effort to forestall, or shape, legislative and regulatory moves with self-policing, and a signal that they are dealing with this new technology thoughtfully and proactively.

But the rules that they agreed on are largely the lowest common denominator, and can be interpreted by every company differently. For example, the firms committed to strict cybersecurity measures around the data used to make the “language models” on which generative AI programs are developed. But there is no specificity about what that means — and the companies would have an interest in protecting their intellectual property anyway.

And even the most careful companies are vulnerable. Microsoft, one of the firms attending the White House event with Biden, scrambled last week to counter a Chinese government-organized hack on the private emails of U.S. officials who were dealing with China. It now appears that China stole, or somehow obtained, a “private key” held by Microsoft that is used to authenticate emails — one of the company’s most closely guarded pieces of code.


Given such risks, the agreement is unlikely to slow the efforts to pass legislation and impose regulation on the emerging technology.

European regulators are poised to adopt AI laws this year, which has prompted many of the companies to encourage U.S. regulations. Several lawmakers have introduced bills that include licensing for AI companies to release their technologies, the creation of a federal agency to oversee the industry, and data privacy requirements. But members of Congress are far from agreement on rules.

Lawmakers have been grappling with how to address the ascent of AI technology, with some focused on risks to consumers while others are acutely concerned about falling behind adversaries, particularly China, in the race for dominance in the field.

 
