The rapidly advancing field of artificial intelligence will require a new body of law and new regulations to govern the growing pool of businesses involved, according to Microsoft Corp, a 25-year participant in AI research.
Companies making and selling AI software will need to be held responsible for potential harm caused by ‘unreasonable practices’ – for example, if a self-driving car program is set up in an unsafe manner that causes injury or death, Microsoft said. And as AI and automation swell the ranks of workers in gig-economy and on-demand jobs, Microsoft said technology companies need to take responsibility and advocate for worker protections and benefits, rather than passing the buck by claiming to be ‘just the technology platform’ enabling all this change.
Microsoft broaches these ideas in a 149-page book entitled “The Future Computed,” which will also be the subject of a panel at the World Economic Forum in Davos, Switzerland, next week. As Redmond, Washington-based Microsoft seeks to be a leader in AI and in automating work tasks, it’s also trying to get out in front of the challenges expected to arise from these promising new technologies – job losses, and everyday citizens hurt or disadvantaged by malfunctioning or biased algorithms.
“We are trying to be clear-eyed in talking about the challenges,” said Microsoft President and Chief Legal Officer Brad Smith, who will sit on the Davos panel and co-wrote the introduction to the book. Microsoft is working on some of these areas through groups such as the Partnership on AI, which includes rivals like Amazon.com Inc, Alphabet Inc’s Google, Apple Inc and Facebook Inc. Still, the call for more regulation in an emerging area like AI is unusual for technology companies, said Ryan Calo, a professor at the University of Washington School of Law, who has read the book.
“There are a bunch of players in this space, and if you are Microsoft you want to be seen as trusted,” said Calo, who did not assist the company on the book but has consulted for it on other issues and whose lab is partially funded with donations from Microsoft. Whatever the company’s motivations, the area is an important one, Calo said. “Any sufficiently transformative technology is going to require new laws,” he said.
Both Microsoft and Calo say the development of new legislation isn’t imminent because the specific needs are still emerging. Over the next two years, Microsoft plans to codify the company’s ethics and design rules to govern its AI work, using staff from Smith’s legal group and the AI group run by Executive Vice President Harry Shum. The development of laws will come a few years after that, Smith said.
In the nearer term, Microsoft is advocating for changes to labor laws to properly classify workers and allocate benefits like health care and retirement planning to people in jobs such as Uber driving or Postmates delivery. Smith said he expects there will be a need for a new category of worker to cover these people, who are neither full-time employees nor independent contractors.
“The technology industry needs to engage to change the perception that it reaps the benefits of technology progress at the expense of workers who are displaced or left without protections, benefits or long-term career paths,” the company writes in the book. “Companies that do not acknowledge the importance of worker protections and benefits risk damage to their brands and face the possibility that lawmakers and the courts will step in to impose regulations.”