They are former OpenAI employees who wanted to do things their own way. They founded Anthropic, an AI startup that already has its own chatbot, trained with a set of moral values contained in its own “constitution”. And they are no mere adventurers: they have received financing from Google and have already met with the president of the United States. With this new approach, they are taking on ChatGPT and the rest of the competition.
Anthropic’s chatbot is called Claude, and it was trained with a method the company dubbed “constitutional AI”. The approach amounts to making explicit, in advance, the principles that will guide the language model’s behavior. It’s a twist on how most other systems work: those respond to feedback from real humans who, during training, indicate which responses might be harmful or offensive. That criterion of what counts as harmful or offensive can be arbitrary.
Chatbots like ChatGPT or Bard have been shown to replicate various ideological biases. ChatGPT, for example, has generated openly racist or sexist content. The developers have also not been very transparent about how they train their chatbots, and much has been patched along the way.
Claude, by contrast, bases its principles on various public sources, including the United Nations Universal Declaration of Human Rights. Its constitution tells this AI that, for example, it must always “choose the response that most discourages and opposes torture, slavery, cruelty, and inhuman or degrading treatment.” It also incorporates several considerations on industry safety and transparency.
What does the “constitution” of this AI say?
“Choose the answer that has the least amount of personal, private, or confidential information belonging to others,” is another principle contained in Anthropic’s AI constitution. It is based on Apple’s data privacy rules.
Claude also incorporates values proposed by other AI research labs, such as DeepMind’s Sparrow principles. Following those, the chatbot must choose the response that uses “fewer stereotypes or other harmful generalizing statements” about groups of people.
“This isn’t a perfect approach, but it makes the AI system’s values easier to understand and adjust as needed,” Anthropic explains. In these standards, the company also urges its model to consider values and perspectives beyond those of a Western, wealthy or industrialized culture.
Anthropic ran several tests and found that broad principles capturing many aspects of behavior worked better than very specific definitions. Claude doesn’t consult every principle each time it gives an answer. Rather, it encountered each principle many times during training and, in theory, learned what the most suitable response to offer is.
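The training loop described above can be pictured as a critique-and-revision cycle: for each draft answer, a principle is sampled from the constitution, the model critiques the draft against it, and the revised answer becomes training data. What follows is a minimal, hypothetical sketch of that idea in Python. The `model_generate` function is a stand-in stub, not Anthropic’s actual API, and the loop structure is an illustration of the published concept, not the company’s real implementation.

```python
import random

# A few of the principles quoted in this article, standing in for the
# full constitution.
CONSTITUTION = [
    "Choose the response that most discourages and opposes torture, slavery, "
    "cruelty, and inhuman or degrading treatment.",
    "Choose the answer that has the least amount of personal, private, or "
    "confidential information belonging to others.",
    "Choose the response that uses fewer stereotypes or other harmful "
    "generalizing statements about groups of people.",
]


def model_generate(prompt: str) -> str:
    """Stub standing in for a real language-model call."""
    return f"[model output for: {prompt[:40]}...]"


def critique_and_revise(prompt: str, draft: str, rng=random) -> str:
    """One critique-revision step: sample a principle, ask the model to
    critique the draft against it, then ask for a revised answer.
    Revised answers become training data; at inference time the model
    no longer consults the principles directly."""
    principle = rng.choice(CONSTITUTION)
    critique = model_generate(
        f"Critique this response under the principle: '{principle}'\n"
        f"Prompt: {prompt}\nResponse: {draft}"
    )
    revision = model_generate(
        f"Rewrite the response to address the critique.\n"
        f"Critique: {critique}\nOriginal response: {draft}"
    )
    return revision


draft = model_generate("Describe a group of people.")
revised = critique_and_revise("Describe a group of people.", draft)
```

In practice many such revision rounds would run per prompt, and the resulting (prompt, revised answer) pairs would be used for fine-tuning, which is why broad, widely applicable principles generalize better than narrow ones.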
Anthropic, a new key player

Joe Biden, the president of the United States, summoned the top AI developers to the White House last week. Senior representatives from Google, Microsoft and OpenAI attended, as did Dario Amodei, co-founder of Anthropic. The objective of the meeting was to discuss the need to guarantee safe and ethical development.
Although much lower profile than its competitors, Anthropic is starting to stand out in the industry. According to a report from the Financial Times, Google invested around $300 million in the startup at the end of 2022. In return, the tech giant got a 10% stake in the new company. Anthropic also announced a Claude integration for Slack at the end of March.
They are aware that a constitution is not the final solution, but they know that it is a much more transparent starting point. “AI models will have value systems, whether they are intentional or not,” says Anthropic on its website. “One of our goals with Constitutional AI is to make those goals explicit and easy to modify as needed.”
Jack Clark, another of the company’s founders, is confident that they will attract attention. “In a few months, I predict that politicians will be quite focused on what are the values of different AI systems,” Clark told Reuters.
Anthropic explains that it will continue to explore ways to produce a constitution for Claude more democratically. The company even plans to offer customizable constitutions for specific use cases.