Researchers Develop OpinionGPT: An AI Model Intentionally Programmed to Generate Biased Outputs
Summary:
Researchers from Humboldt-Universität zu Berlin have developed OpinionGPT, an artificial intelligence model intentionally programmed to generate biased outputs. The model, a modified version of Meta's Llama 2, is trained to respond as a representative of one of 11 bias groups. However, because the training data is limited and its relation to real-world bias is questionable, the model primarily generates text reflecting the bias of its data. While OpinionGPT may not be suitable for studying actual human bias, it can be used to explore stereotypes within large document repositories. The researchers have made OpinionGPT available for public testing, but caution that the generated content may not be reliable.
A team of researchers from Humboldt-Universität zu Berlin has created OpinionGPT, an artificial intelligence model that is intentionally designed to produce biased outputs. The model is a modified version of Meta's Llama 2, a large language model comparable in capability to OpenAI's ChatGPT or Anthropic's Claude 2.

OpinionGPT is trained to respond as if it represents one of 11 bias groups, such as American, German, or conservative. To achieve this, the researchers used a process called instruction-based fine-tuning: they refined the model on data from Reddit's "AskX" communities, specifically subreddits related to the 11 bias groups, applying a separate instruction set to the Llama 2 model for each bias label.

However, because of the nature of the training data and its questionable relation to real-world bias, OpinionGPT predominantly generates text that reflects the bias of its data rather than genuine human bias. The researchers acknowledge this limitation and note that the model's responses should be understood as reflecting a specific subset of individuals rather than an entire population. They plan to explore models that further differentiate specific demographics.

While OpinionGPT may not be suitable for studying actual human bias, it can be valuable for examining stereotypes within large document repositories. The researchers have made OpinionGPT publicly available for testing, but caution that the generated content may be false, inaccurate, or even obscene.
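Instruction-based fine-tuning of this kind typically pairs each training example with an instruction that names the viewpoint the model should adopt. The following is a minimal sketch of how such bias-conditioned prompts might be assembled; the template, function name, and sample text are illustrative assumptions, not the researchers' actual code.

```python
# Illustrative sketch of bias-conditioned instruction prompts,
# in the style of instruction-based fine-tuning described in the article.
# Template and names are assumptions, not the OpinionGPT codebase.

# Three of the 11 bias groups mentioned in the article.
EXAMPLE_BIAS_GROUPS = ["American", "German", "conservative"]

def build_instruction(bias: str, question: str, answer: str) -> str:
    """Pair a Reddit-style Q&A with a bias-specific instruction so a
    fine-tuned model learns to answer from that group's viewpoint."""
    return (
        f"### Instruction: Answer the question as if you were {bias}.\n"
        f"### Question: {question}\n"
        f"### Answer: {answer}"
    )

if __name__ == "__main__":
    for group in EXAMPLE_BIAS_GROUPS:
        # Each bias label gets its own instruction set over the same data format.
        print(build_instruction(group, "Example question?", "Example answer."))
```

In this scheme, a single base model (such as Llama 2) sees the same question phrased under different instructions, so the bias label, not a separate model per group, selects the viewpoint at generation time.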
Published At
9/8/2023 8:42:29 PM
Disclaimer: Algoine does not endorse any content or product on this page. Readers should conduct their own research before taking any actions related to the asset, company, or any information in this article and assume full responsibility for their decisions. This article should not be considered as investment advice. Our news is prepared with AI support.