
Microsoft President Brad Smith said on Thursday that his biggest concern about artificial intelligence is the growing threat of deepfakes and synthetic media designed to deceive, Reuters reports.
Smith made his remarks while unveiling his “blueprint for the public governance of AI” in a speech at Planet Word, the language arts museum in Washington, DC. His concerns come at a time when talk of AI regulation has become increasingly common, driven largely by the popularity of OpenAI’s ChatGPT and a political tour by OpenAI CEO Sam Altman.
Smith expressed an urgent desire for ways to distinguish genuine photos and videos from those created by artificial intelligence, especially when the latter might be used for illicit purposes, above all to spread disinformation that destabilizes society.
“We’re going to have to address the issues around deepfakes. We’re going to have to address in particular what we worry about most: foreign cyber influence operations, the kinds of activities that are already taking place by the Russian government, the Chinese, the Iranians,” Smith said, according to Reuters. “We need to take steps to protect against the alteration of legitimate content with an intent to deceive or defraud people through the use of AI.”
Smith also pushed for licensing of critical forms of artificial intelligence, arguing that those licenses should carry obligations to protect against threats to physical security, cybersecurity, and national security. “We’re going to need a new generation of export controls, at least an evolution of the export controls we have, to make sure these models aren’t stolen or used in ways that violate the country’s export control requirements,” he said.
Last week, Altman appeared before the US Senate and voiced his own concerns about artificial intelligence, saying the nascent industry needs regulation. Altman, whose company OpenAI is backed by Microsoft, argued for global cooperation on AI and incentives for safety compliance.
In his Thursday speech, Smith echoed those sentiments and argued that people must be held accountable for the problems caused by artificial intelligence. He called for safety brakes on AI systems that control critical infrastructure, such as the electric grid and water supply, to ensure that humans remain in control.
To maintain transparency around AI technologies, Smith urged developers to adopt a “Know Your Customer”-style system to closely monitor how AI technologies are used and to inform the public about AI-generated content, making fabricated material easier to identify. Along these lines, companies such as Adobe, Google, and Microsoft are all working on ways to watermark or otherwise label AI-generated content.
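To make the labeling idea concrete, here is a minimal, hypothetical sketch that tags an image as AI-generated using Pillow’s PNG text chunks. It is not any vendor’s actual scheme: real efforts such as C2PA content credentials embed cryptographically signed manifests, and research watermarks are designed to survive editing, whereas plain text metadata like this can be stripped trivially. The function names and the "ai_generated" key are assumptions for illustration only.

```python
# Illustrative sketch only: label an AI-generated image with plain
# PNG text metadata. Real provenance systems (e.g., C2PA) use signed
# manifests and robust watermarks instead of strippable text chunks.
from PIL import Image
from PIL.PngImagePlugin import PngInfo


def label_ai_generated(src_path: str, dst_path: str, generator: str) -> None:
    """Copy an image, attaching a plain-text 'AI-generated' disclosure."""
    image = Image.open(src_path)
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")  # hypothetical disclosure flag
    metadata.add_text("generator", generator)  # e.g., the model that made it
    image.save(dst_path, pnginfo=metadata)     # Pillow writes the text chunks


def read_label(path: str) -> dict:
    """Return a PNG's text metadata so the label can be inspected."""
    return dict(Image.open(path).text)


# Usage (hypothetical file names):
# label_ai_generated("render.png", "render_labeled.png", "example-model")
# print(read_label("render_labeled.png"))
```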
Deepfakes have been a subject of Microsoft’s research for years. In September, Microsoft Chief Scientific Officer Eric Horvitz wrote a research paper on the dangers of both interactive deepfakes and the creation of synthetic histories, subjects also covered by this author in a 2020 FastCompany article that also mentioned Microsoft’s earlier efforts to detect deepfakes.
Meanwhile, Microsoft is simultaneously working to build text- and image-based generative AI technology into its products, including Office and Windows. Its rough launch in February of the Bing chatbot (based on a version of GPT-4) provoked deep emotional reactions from its users. The launch also stirred latent fears that world-dominating superintelligence may be just around the corner, a reaction that some critics argue is part of a deliberate marketing campaign by AI vendors.
So the question remains: What does it mean when companies like Microsoft sell the very products they warn us about?