Anthropic Pioneers User-Influenced AI Models: A Step Toward Democratized Artificial Intelligence
Summary:
Anthropic, an AI company, ran a study in which a large language model (LLM) was adapted to users' value judgments. The experiment, a first of its kind, involved roughly 1,000 U.S. citizens who helped formulate an AI constitution. The process surfaced several challenges but ultimately yielded a marginal improvement in biased outputs. Anthropic believes this may be one of the first instances of a public group deliberately steering an LLM's behavior and hopes the technique can be used to create culturally specific models in the future.
Artificial intelligence company Anthropic has pioneered an unprecedented experiment: developing a large language model (LLM) tuned to the value judgments of its users. In a bid to democratize AI development, it collaborated with @collect_intel and used @usepolis to draft an AI constitution grounded in the views of approximately 1,000 U.S. citizens. The resulting constitution was then used to fine-tune a model via Anthropic's Constitutional AI method.
Guardrails, predefined rules of operation, are built into many consumer-facing LLMs, including Anthropic's Claude and OpenAI's ChatGPT, to curb undesired outputs. Critics argue, however, that such guardrails infringe on users' autonomy: what is deemed acceptable is not necessarily useful, and vice versa. Moreover, differences across cultures, time periods, and populations further complicate moral and value judgments.
One way to navigate this thorny issue is to let users define the value alignment of AI models themselves. Anthropic attempted exactly that with its Collective Constitutional AI experiment, polling roughly 1,000 users from diverse demographic backgrounds with a series of questions. The goal was to strike a balance between granting editorial discretion and preventing inappropriate outputs. The process involved incorporating user feedback into a model that had already been trained.
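To illustrate how poll responses might be distilled into shared principles, here is a minimal sketch of Polis-style consensus filtering. The data, cluster names, and threshold are hypothetical assumptions for illustration, not Anthropic's actual pipeline; the idea is simply to keep only statements that a large majority of every demographic cluster endorses.

```python
# Hypothetical sketch: filter poll statements for cross-group consensus.
from typing import Dict, List

# votes[statement] maps each demographic cluster to its approval rate (0.0-1.0).
# These statements and numbers are invented for illustration.
votes: Dict[str, Dict[str, float]] = {
    "The AI should not give harmful advice.": {"cluster_a": 0.95, "cluster_b": 0.91},
    "The AI should always agree with the user.": {"cluster_a": 0.30, "cluster_b": 0.45},
}

def consensus_statements(votes: Dict[str, Dict[str, float]],
                         threshold: float = 0.7) -> List[str]:
    """Return statements approved by at least `threshold` of every cluster."""
    return [statement for statement, by_cluster in votes.items()
            if all(rate >= threshold for rate in by_cluster.values())]

print(consensus_statements(votes))
# -> ['The AI should not give harmful advice.']
```

Statements that clear such a filter could then serve as candidate principles for the constitution.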
Anthropic uses Constitutional AI to fine-tune LLMs for safety and usefulness. The approach amounts to giving the model a governing set of rules, a constitution, that it must then adhere to. In the Collective Constitutional AI experiment, Anthropic attempted to fold the group's collective feedback into that constitution.
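Anthropic's published Constitutional AI work describes a critique-and-revise loop in which the model improves its own draft answers against the constitution's principles, and the revised answers become fine-tuning data. The sketch below is a simplification of that idea; the `generate` function and the sample principles are hypothetical placeholders, not Anthropic's actual API or constitution.

```python
# Simplified sketch of a Constitutional AI critique-and-revise pass.
CONSTITUTION = [
    "Choose the response that is least likely to be harmful.",
    "Choose the response that most respects the user's autonomy.",
]

def generate(prompt: str) -> str:
    """Stand-in for an LLM completion call (hypothetical placeholder)."""
    return f"[model output for: {prompt[:40]}...]"

def critique_and_revise(user_prompt: str) -> str:
    """Draft a response, then critique and revise it against each principle."""
    response = generate(user_prompt)
    for principle in CONSTITUTION:
        critique = generate(
            f"Critique the response below against the principle: {principle}\n"
            f"{response}"
        )
        response = generate(
            f"Revise the response to address the critique.\n"
            f"Critique: {critique}\nResponse: {response}"
        )
    # Revised responses like this one are collected as fine-tuning data.
    return response
```

In the collective variant, the constitution would be populated from publicly sourced consensus statements like those gathered above, rather than written solely by Anthropic researchers.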
According to a blog post by Anthropic, the experiment appears to have met its scientific objectives while exposing obstacles to its ultimate aim of letting LLM users determine their collective values. One such obstacle was devising a benchmarking method from scratch: because the experiment was unprecedented, no established test existed for evaluating models fine-tuned through a crowd-sourced exercise.
In conclusion, the model incorporating user polling data performed marginally better than the baseline in terms of biased outputs. Anthropic plans to explore the process further and hopes that communities around the world can build on these techniques to create models tailored to their specific needs.
Published At
10/18/2023 5:00:00 PM
Disclaimer: Algoine does not endorse any content or product on this page. Readers should conduct their own research before taking any actions related to the asset, company, or any information in this article and assume full responsibility for their decisions. This article should not be considered as investment advice. Our news is prepared with AI support.