Anthropic publishes AI constitution to promote development and ethical behavior

Join top executives in San Francisco on July 11-12 to hear how leaders are integrating and optimizing AI investments for success.

Anthropic, a leading artificial intelligence company founded by former OpenAI engineers, has taken a novel approach to addressing the ethical and social challenges posed by increasingly powerful AI systems: give them a constitution.

On Tuesday, the company made public the official constitution for Claude, its latest conversational AI model, which can generate text, images, and code. The constitution outlines a set of values and principles that Claude must follow when interacting with users, such as being helpful, harmless, and honest. It also specifies how Claude should handle sensitive topics, respect user privacy, and avoid illegal behavior.

“We are sharing Claude’s current constitution in a spirit of transparency,” Jared Kaplan, Anthropic co-founder, said in an interview with VentureBeat. “We hope that this research will help the AI community to build more beneficial models and clarify their values. We’re also sharing this as a starting point: We hope to continually review Claude’s constitution, and part of our hope in sharing this post is that it will spark more research and debate about the constitution’s design.”

The constitution is based on sources such as the UN Declaration of Human Rights, AI ethics research, and the platform’s content policies. It is the result of months of collaboration between Anthropic researchers, policy experts and operational leaders, who have been testing and refining Claude’s behavior and performance.


By making its constitution public, Anthropic hopes to foster greater trust and transparency in the field of AI, which has been plagued by controversy over bias, misinformation, and manipulation. The company also hopes to inspire other AI developers and stakeholders to adopt similar practices and standards.

The announcement highlights growing concerns about how to ensure AI systems behave ethically as they become more advanced and autonomous. Last week, the former head of Google’s AI research division, Geoffrey Hinton, resigned from his position at the tech giant, citing growing concerns about the ethical implications of the technology he helped create. Large language models (LLMs), which generate text from massive data sets, have been shown to reflect and even amplify biases in their training data.

Building AI systems to combat bias and harm

Anthropic is one of the few startups that specializes in developing general AI systems and language models, which aim to perform a wide range of tasks in different domains. The company, which launched in 2021 with a $124 million Series A round of funding, is on a mission to ensure that transformative AI helps people and society prosper.

Claude is Anthropic’s flagship product, which it plans to deploy for various applications including education, entertainment, and social welfare. Claude can generate content like poems, stories, code, essays, songs, celebrity parodies, and more. It can also help users rewrite, improve, or optimize their content. Anthropic claims that Claude is one of the most reliable and steerable AI systems on the market, thanks to its design and ability to learn from human feedback.

“We chose principles like those in the UN Declaration of Human Rights that are widely agreed upon and were created in a participatory manner,” Kaplan told VentureBeat. “To complement this, we include principles inspired by best practices in the Terms of Service for digital platforms to help manage more contemporary issues. We also included principles that we found to work well through a process of trial and error in our research. The principles were compiled and chosen by Anthropic researchers. We are exploring ways to more democratically produce a constitution for Claude, and we are also exploring offering customizable constitutions for specific use cases.”

Anthropic’s constitution release highlights the AI community’s growing concern with the values and ethics of these systems, and the demand for new techniques to address them. With increasingly advanced AI deployed by companies around the world, the researchers argue that models should be based on and constrained by human ethics and morality, not just optimized for limited tasks like generating engaging text. Constitutional AI offers a promising path toward achieving that ideal.
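At its core, the constitutional AI technique Anthropic describes works by having a model critique its own draft responses against a list of written principles and revise them when they conflict. The sketch below is an illustrative toy, not Anthropic’s implementation: the `model_critique` and `model_revise` functions are hypothetical stand-ins for what would be LLM calls, and the principles and keyword check are invented for demonstration.

```python
# Toy sketch of a constitutional critique-and-revision loop.
# In the real technique, model_critique and model_revise would be
# calls to a language model; here they are simple stand-ins.

PRINCIPLES = [
    "Choose the response that is most helpful, honest, and harmless.",
    "Avoid responses that could assist with illegal activity.",
]

def model_critique(response: str, principle: str) -> bool:
    """Hypothetical stand-in for an LLM self-critique call.
    Returns True if the response conflicts with the principle.
    Here: a toy keyword check."""
    return "harmful" in response.lower()

def model_revise(response: str, principle: str) -> str:
    """Hypothetical stand-in for an LLM revision call."""
    return response.replace("harmful", "safe")

def constitutional_revision(draft: str) -> str:
    """Check the draft against each principle in turn,
    revising whenever a conflict is flagged."""
    response = draft
    for principle in PRINCIPLES:
        if model_critique(response, principle):
            response = model_revise(response, principle)
    return response
```

Because the principles live in an explicit, editable list rather than being baked into training data alone, they can be inspected, debated, and amended, which is the property Anthropic emphasizes in publishing the constitution.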

Constitution to evolve with the progress of AI

A key aspect of Anthropic’s constitution is its adaptability. Anthropic acknowledges that the current version is neither final nor likely to be the best it can be, and welcomes research and feedback to refine and improve the constitution. This openness to change demonstrates the company’s commitment to ensuring that AI systems remain current and relevant as new ethical concerns and social norms emerge.

“We will have more to share on constitution customization later,” Kaplan said. “But to be clear: all uses of our model must fall within our Acceptable Use Policy. This provides security measures in any customization. Our AUP rules out harmful uses of our model and will continue to do so.”

While AI constitutions are not a panacea, they do represent a proactive approach to addressing the complex ethical issues that arise as AI systems continue to advance. By making the value systems of AI models more explicit and easily modifiable, the AI community can work together to build more beneficial models that truly meet the needs of society.

“We’re excited about getting more people involved in drafting the constitution,” Kaplan said. “Anthropic invented the method for constitutional AI, but we don’t believe it’s the role of a private company to dictate what values should ultimately guide AI. We did our best to find principles that were in line with our goal of creating a useful, harmless, and honest AI system, but ultimately, we want more voices to say what values should be in our systems. Our constitution is alive: we will continue to update and iterate on it. We want this blog post to spark research and discussion, and we will continue to explore ways to collect more information about our constitutions.”

