Microsoft calls for AI rules to minimize risks

Microsoft on Thursday endorsed a set of artificial intelligence regulations, as the company navigates concerns from governments around the world about the risks of the fast-growing technology.

Microsoft, which has promised to build artificial intelligence into many of its products, proposed regulations including a requirement that systems used in critical infrastructure can be fully shut down or slowed down, similar to an emergency braking system on a train. The company also called for laws to clarify when additional legal obligations apply to an artificial intelligence system, and for labels making clear when an image or video was produced by a computer.

“Companies need to step up,” Microsoft President Brad Smith said in an interview about the push for regulations. “Government needs to move faster.” He unveiled the proposals Thursday morning at an event in downtown Washington before an audience that included lawmakers.

The call for regulation comes amid a boom in AI, with the release of the ChatGPT chatbot in November sparking a wave of interest. Companies including Microsoft and Google’s parent, Alphabet, have since raced to incorporate the technology into their products. That has fueled concern that companies are sacrificing safety to get to the next big thing before their competitors.

Lawmakers have publicly expressed concern that such AI products, which can generate text and images on their own, will create a flood of disinformation, be used by criminals and put people out of work. Regulators in Washington have vowed to be on the lookout for fraudsters using AI and for cases where the systems perpetuate discrimination or make decisions that violate the law.

In response to that scrutiny, AI developers have increasingly argued that some of the burden of overseeing the technology should shift to governments. Sam Altman, chief executive of OpenAI, which makes ChatGPT and counts Microsoft as an investor, told a Senate subcommittee this month that the government should regulate the technology.

The maneuvering echoes calls for new privacy or social media laws by internet companies such as Google and Meta, Facebook’s parent. Lawmakers in the United States have been slow to act on such calls, with few new federal rules on privacy or social media in recent years.

In the interview, Mr. Smith said Microsoft was not trying to abdicate responsibility for managing the new technology, because it was offering specific ideas and pledging to carry out some of them regardless of whether the government took action.

“There is no abdication of responsibility,” he said.

He endorsed the idea, supported by Mr. Altman during his congressional testimony, that a government agency should require companies to obtain licenses to deploy “highly capable” artificial intelligence models.

“That means you notify the government when you start testing,” Mr. Smith said. “You have to share the results with the government. Even when it’s licensed for deployment, you have a duty to continue to monitor it and report to the government if unexpected problems arise.”

Microsoft, which earned more than $22 billion from its cloud computing business in the first quarter, also said those high-risk systems should be allowed to operate only in licensed AI data centers. Mr. Smith acknowledged that the company would not be in a “bad position” to offer such services, but said many American competitors could also provide them.

Microsoft added that governments should designate certain AI systems used in critical infrastructure as “high risk” and require them to have a “safety brake.” It compared the feature to “the braking systems engineers have long built into other technologies such as elevators, school buses and high-speed trains.”

In some sensitive cases, Microsoft said, companies that provide AI systems should be required to know certain information about their customers. To protect consumers from deception, content created by artificial intelligence should be required to carry a special label, the company said.

Mr. Smith said companies should bear legal “responsibility” for harms associated with AI. In some cases, he said, the liable party could be the developer of an application, such as Microsoft’s Bing search engine, that uses someone else’s underlying AI technology. Cloud companies could be responsible for complying with security regulations and other rules, he added.

“We don’t necessarily have the best information or the best answer, or we may not be the most credible speaker,” Mr. Smith said. “But, you know, right now, especially in Washington, people are looking for ideas.”
