
How Anthropic is teaching AI the difference between right and wrong

Anthropic, a major AI player, just published something that could have a big impact on AI safety.

In a recent post, Anthropic, the creator of the AI assistant Claude, outlined an approach it uses to make large language models safer, called “constitutional AI.”

Constitutional AI gives a large language model “explicit values determined by a constitution, rather than values determined implicitly through large-scale human feedback.”

By “constitution,” Anthropic means a specific set of company-created principles that guide the model’s outputs.

Previously, you would teach a machine what a “good” or “bad” output was through human feedback: people rated the outputs the model produced. This exposed reviewers to disturbing content and relied largely on crowdsourced feedback to dictate the model’s “values.”

With constitutional AI, the model instead compares its outputs against an established set of core principles, and another AI system provides feedback on how well those outputs follow the constitution.
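Anthropic’s published recipe does this in two phases: a supervised critique-and-revision step, followed by reinforcement learning from AI feedback, in which a preference model scores outputs against the constitution. Below is a minimal Python sketch of the first phase only; `ask_model`, the principles, and the prompt wording are hypothetical stand-ins, not Anthropic’s actual code.

```python
# Minimal sketch of constitutional AI's critique-and-revision loop.
# NOTE: `ask_model`, the principles, and the prompt templates below are
# illustrative placeholders, not Anthropic's actual implementation.

CONSTITUTION = [
    "Choose the response that is least likely to be harmful or offensive.",
    "Choose the response that most supports life, liberty, and security of person.",
]

def ask_model(prompt: str) -> str:
    """Stand-in for a call to any large language model API."""
    raise NotImplementedError("Wire this up to the model of your choice.")

def critique_and_revise(user_prompt: str) -> str:
    """Draft a response, then critique and revise it once per principle.

    The revised responses can be used as supervised fine-tuning data,
    so the model learns to produce constitution-aligned outputs directly.
    """
    response = ask_model(user_prompt)
    for principle in CONSTITUTION:
        critique = ask_model(
            f"Critique the following response according to this principle:\n"
            f"{principle}\n\nResponse:\n{response}"
        )
        response = ask_model(
            f"Rewrite the response to address this critique.\n\n"
            f"Critique:\n{critique}\n\nOriginal response:\n{response}"
        )
    return response
```

In the full method, the second phase then replaces human preference labels with AI-generated comparisons judged against the same constitution.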

Anthropic notes: “It’s not a perfect approach, but it makes the values of the AI system easier to understand and easier to adjust as needed.”

For its Claude AI assistant, Anthropic created Claude’s constitution based on sources such as the UN Declaration of Human Rights, trust and safety best practices, and principles suggested by other research labs.

Why is this important? On Episode 47 of the Marketing AI Show, I spoke with Paul Roetzer, founder and CEO of the Marketing AI Institute, to find out.

  1. This is one of the biggest challenges of generative AI. Getting generative AI systems to avoid producing toxic, biased, or harmful content is not easy. And it’s a subjective process: what counts as a problematic output differs across the political and societal spectrum. Getting this wrong results in harmful machines.
  2. This is just one way to solve the problem. There are different ways to solve it, says Roetzer. For example, OpenAI has announced that in the future users will be able to configure its models however they want, putting responsibility for the outputs on the user.
  3. Anthropic’s approach is one to watch. “This is a really novel approach,” says Roetzer. And the company pioneering it matters. Anthropic is not a small AI player: it has raised $1.3 billion, and it claims the next model it is building will be 10 times more powerful than today’s most powerful AI systems.
  4. This is important because language models will increasingly become sources of facts and truth. As adoption grows and models improve, we will regularly turn to these models for information. So how these models are tuned to respond to social, political, and cultural contexts matters, says Roetzer.

Bottom line: How we monitor what large language models produce will increasingly determine what facts and truths people see as adoption grows.

Don’t be left behind…

You can get ahead of AI-driven disruption, and fast, with our Piloting AI for Marketers course series: 17 on-demand courses designed as a step-by-step learning journey for marketers and business leaders who want to increase productivity and efficiency through AI.

The course series contains 7+ hours of training, dozens of AI use cases and vendors, a collection of templates, course quizzes, a final exam, and a professional certificate upon completion.

After taking Piloting AI for Marketers, you will:

  1. Understand how to advance your career and transform your business with AI.
  2. Explore 100+ AI use cases in marketing and learn how to identify and prioritize your own use cases.
  3. Discover 70+ AI vendors in various marketing categories that you can start piloting today.

Learn more about Piloting AI for Marketers.


