At one point during OpenAI CEO Sam Altman’s recent congressional hearing, Altman said: “My worst fear is that we are doing significant damage to the world.”
Lawmakers and two other experts at the hearing, IBM executive Christina Montgomery and Gary Marcus, a leading AI expert, academic and entrepreneur, agreed.
At the hearing, they cited a number of common AI security issues that keep them up at night, including:
- Election disinformation, or the ability of generative artificial intelligence to create false text, images, video and audio at scale, and to emotionally manipulate people who consume the content to influence the outcome of elections, including the 2024 US presidential election.
- Job disruption, or the likelihood that AI will cause significant, rapid unemployment.
- Copyright and licensing, or the concern that AI models are being trained on material legally owned by other parties and used without their consent.
- Harmful or dangerous content in general, or the possibility that generative AI systems produce outputs that harm human users. This can happen in a number of ways, such as hallucinations, where generative AI fabricates information and misleads users, or a lack of safeguards, where generative AI gives users information they can use to harm themselves or others.
- Broader concerns about the pace and scale of AI innovation and our ability to control it. Experts and lawmakers fear that, without adequate guardrails, AI development could move so fast that we release potentially harmful technology that cannot be adequately controlled and/or, in some more extreme views, actually create machines far smarter than us and beyond our control (often referred to broadly as “AGI”).
Which of these risks should we and policymakers take seriously?
In episode 48 of the Marketing AI Show, I spoke with Paul Roetzer, founder and CEO of Marketing AI Institute, to find out.
- Congress’ focus on near-term issues is welcome. A lot of attention is paid to doomsday headlines about possible superhuman AGI, and it’s important that AI leaders think about existential threats. But there are many near-term problems that AI can cause that we need to focus on, says Roetzer.
- Job loss and election interference are the most immediate threats. Both job loss due to AI (especially among knowledge workers) and AI-powered election interference are likely to be the biggest issues in the next 12 months, Roetzer says. He emphasizes that these problems are here today: “There is no advance in technology needed to make all these things happen.”
- It is unrealistic to think that companies will police themselves. Companies like OpenAI are taking some strong steps to ensure AI safety, such as spending months on alignment work and red teaming (trying to find flaws in systems) before releasing a product. But there’s a problem, says Roetzer. “They are not incentivized to prevent this technology from entering the world.” The few big companies now leading AI innovation are financially rewarded for releasing the technology quickly, even if it causes problems. “Ethical concerns seem to be becoming secondary within some of these tech companies,” he says.
- And politicians have conflicting motivations when it comes to regulation. On the one hand, lawmakers are taking a serious interest in AI safety, which is great. On the other hand, they also have a clear interest in using the technology to win elections and increase US economic competitiveness. The result is conflicting motivations when it comes to prudent regulation of the technology.
Bottom line: There are very real near-term dangers we will face from artificial intelligence, but there are no easy answers to combat these dangers or concrete regulations to prevent them.
Don’t be left behind…
You can get ahead of AI-driven disruption, and fast, with our Piloting AI for Marketers course series: 17 on-demand courses designed as a step-by-step learning journey for marketers and business leaders to increase productivity and efficiency through AI.
The course series contains 7+ hours of training, dozens of AI use cases and vendors, a collection of templates, course quizzes, a final exam, and a professional certificate upon completion.
After completing Piloting AI for Marketers, you will:
- Understand how to advance your career and transform your business with AI.
- Experience 100+ AI use cases in marketing and learn how to identify and prioritize your own use cases.
- Discover 70+ AI vendors in various marketing categories that you can start piloting today.