As artificial intelligence becomes integral to daily life, it must reflect the principles of the societies it serves. This is the idea behind Collective Constitutional AI, an American project designed to align AI behaviour with public values.
Not long ago, the term “artificial intelligence” was unfamiliar to most people. Yet, it has rapidly become a cornerstone of contemporary life, impacting various industries, governance, and decision-making.
Given these realities, AI must be developed ethically and responsibly; only then can its systems align with human values and serve society. The traditional approach to developing these systems, however, often lacks transparency and inclusivity, which can lead to biased results and a technology perpetually plagued by ethical dilemmas. Development is typically left to internal teams whose choices carry subjective influences, even when guided by normative frameworks such as the Universal Declaration of Human Rights.
Democratic societies such as the United States cannot afford such risks: rigid institutions already struggle to serve their constituents or to coordinate responses to global crises, and the most pressing challenges, from climate change to resource management, are collective by nature. This raises a fundamental question: how can we ensure that AI systems reflect the diverse values and priorities of the societies they serve?
To answer this question, the Collective Intelligence Project partnered with Anthropic, known for its work in AI ethics, on a joint project called Collective Constitutional AI, aimed at exploring new ways to align AI with societal values.
This initiative represents a pioneering attempt to gather democratically sourced public input and use it to shape the behaviour of AI systems. In other words, the project serves as an incubator for governance models designed specifically for transformative technologies, with a focus on researching and developing participatory capabilities.
Using the open-source Polis platform, roughly one thousand Americans contributed ideas about how AI systems should behave, voting on existing principles and proposing new ones. Their contributions became training inputs for a large language model (LLM) and the foundation of what project leaders called the "Public Constitution," which reflects shared societal values such as accessibility and objectivity and differs significantly from the constitution written in-house by the developers.
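The selection step described above can be illustrated with a minimal sketch: statements gathered on a platform like Polis carry agree/disagree/pass votes, and only those with broad agreement survive into the constitution. The data, the threshold, and the function names here are hypothetical; the real platform additionally clusters participants into opinion groups before scoring consensus.

```python
# Minimal sketch of consensus filtering over voted statements.
# Votes are encoded as +1 (agree), -1 (disagree), 0 (pass).

def agreement_rate(votes):
    """Fraction of cast (non-pass) votes that agree."""
    cast = [v for v in votes if v != 0]
    if not cast:
        return 0.0
    return sum(1 for v in cast if v == 1) / len(cast)

def select_principles(statements, threshold=0.8):
    """Keep statements whose agreement rate meets the threshold."""
    return [s for s, votes in statements.items()
            if agreement_rate(votes) >= threshold]

# Hypothetical submissions and votes, for illustration only.
statements = {
    "The AI should be accessible to people with disabilities": [1, 1, 1, 0, 1],
    "The AI should always take one political side": [1, -1, -1, -1, 0],
}
print(select_principles(statements))
# Only the broadly supported statement survives.
```

Raising or lowering the threshold trades off how inclusive the resulting constitution is against how contested its principles are.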
The project revealed many technical, social, and procedural challenges. The first was participant selection, which had to yield a genuinely comprehensive, diverse, and representative group. A sample of American adults was chosen to reflect the population's diversity, but this limits how far the results generalize globally. In addition, screening criteria intended to ensure familiarity with the subject inadvertently narrowed the pool, potentially excluding valuable perspectives.
Filtering out invalid or irrelevant contributions posed another major challenge. Public input had to be moderated so that the constitution stayed focused, but moderation decisions, however well considered, are subjective in nature and therefore open to bias.
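A first automated pass over submissions might look like the hypothetical sketch below, which drops empty, duplicate, and obviously off-topic statements before any human review. The keyword list and function name are assumptions for illustration; in practice moderators or a trained classifier would make these calls, which is exactly where the subjective judgment discussed above enters.

```python
# Hypothetical first-pass moderation of public submissions:
# remove empty, duplicate, and off-topic statements.

def moderate(submissions, on_topic_terms=("ai", "model", "system")):
    seen, kept = set(), []
    for text in submissions:
        norm = " ".join(text.lower().split())  # normalize case/whitespace
        if not norm or norm in seen:
            continue  # empty or duplicate
        if not any(term in norm for term in on_topic_terms):
            continue  # no obvious link to AI behaviour
        seen.add(norm)
        kept.append(text)
    return kept

subs = ["The AI should cite sources",
        "the ai should cite sources",   # duplicate after normalization
        "I like pizza",                 # off-topic
        ""]                             # empty
print(moderate(subs))
```

Even this trivial filter embeds contestable choices (which terms count as on-topic, what counts as a duplicate), illustrating why the supervisory step cannot be fully neutral.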
A third challenge was translating contributions into AI-compatible principles. Submissions were written in natural language and had to be reformulated in the precise, standardized wording needed to train the model, a process that again involves subjective judgment and raises questions about how faithfully the final principles reflect the original contributions.
Finally, there were technical obstacles in training a language model to adhere to the constitution. Early versions of the model prioritized harmlessness at the expense of helpfulness, and repeated experiments were needed to balance the two factors effectively.
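The training step rests on a critique-and-revision loop: the model critiques a draft answer against a constitutional principle, then revises it, and the revised answers become fine-tuning data. The sketch below is a schematic of that loop with the model calls stubbed out; the principle texts, prompt wording, and stub are assumptions, not Anthropic's actual implementation.

```python
# Schematic of a constitutional critique-and-revision pass.
# A real pipeline would call an LLM; here a stub stands in for it.
import random

PRINCIPLES = [
    "Choose the response that is least harmful.",
    "Choose the response that is most helpful and objective.",
]

def critique(response, principle, model):
    prompt = (f"Critique this response against the principle "
              f"'{principle}':\n{response}")
    return model(prompt)

def revise(response, critique_text, model):
    prompt = (f"Revise the response to address the critique.\n"
              f"Response: {response}\nCritique: {critique_text}")
    return model(prompt)

def constitutional_revision(response, model, rng=random.Random(0)):
    """One critique/revision pass under a randomly drawn principle."""
    principle = rng.choice(PRINCIPLES)
    c = critique(response, principle, model)
    return revise(response, c, model)

# Deterministic stub model for illustration only.
stub = lambda prompt: f"[model output for: {prompt[:30]}...]"
revised = constitutional_revision("draft answer", stub)
print(revised)
```

The balance problem described above shows up here concretely: if harm-avoidance principles dominate the sampled set, revisions drift toward refusals, so the mix of principles itself becomes a tuning knob.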
These challenges do not negate the project's important results. Involving the public in AI development enhanced trust and inclusivity, increasing user satisfaction and confidence in the technology's ethical alignment. The project also exposed the complexities of integrating democratic input into AI systems and made the subjective elements of the process visible, which itself improves transparency.
Moreover, evaluations showed that the model trained on the "Public Constitution" exhibited less bias across social dimensions than models trained on the developer-written constitution, suggesting that collective intelligence can help address the ethical concerns associated with this technology.
The project's success suggests that similar democratic processes could be applied to other transformative technologies, such as bioengineering, and underscores the need for innovative governance models to address the collective challenges posed by technological advances.