Woodpecker: A New Tool to Combat AI 'Hallucinations' Developed by Tencent and USTC Scientists

Algoine News
Summary:
Scientists from Tencent’s YouTu Lab and the University of Science and Technology of China have developed a tool named "Woodpecker" to address artificial intelligence (AI) "hallucinations", in which AI models produce high-confidence outputs not grounded in their training data. Specifically designed for multi-modal large language models (MLLMs), Woodpecker employs a method that improves transparency and accuracy. It can be seamlessly integrated into other MLLMs, offering a solution that can adapt to various AI system architectures.
Scientists from Tencent’s YouTu Lab and the University of Science and Technology of China (USTC) have created a tool designed to address the issue of “hallucination” in artificial intelligence (AI) models. Hallucination in this context refers to an AI model generating confident outputs that are not grounded in the data it was trained on. The problem is prominent in large language model (LLM) research and affects models such as OpenAI’s ChatGPT and Anthropic’s Claude.

The Tencent and USTC team launched a tool named “Woodpecker,” which they assert can rectify hallucinations in multi-modal large language models (MLLMs). MLLMs are a subset of AI encompassing models such as GPT-4 (notably its vision-enabled variant, GPT-4V) and other systems that combine vision or other modalities with text-based language modelling. The team's preliminary research paper shows that Woodpecker employs three separate AI models, in addition to the MLLM being corrected: GPT-3.5-turbo, Grounding DINO, and BLIP-2-FlanT5. These serve as evaluators that detect hallucinations and guide the model under correction to regenerate its output in line with its data.

To rectify hallucinations, Woodpecker applies a five-stage process: key concept extraction, question formulation, visual knowledge validation, visual claim generation, and finally hallucination correction. The team asserts that this approach bolsters transparency and improves accuracy over the MiniGPT-4 and mPLUG-Owl baselines by 30.66% and 24.33%, respectively. After evaluating several standard MLLMs with this method, the team concluded that Woodpecker can be seamlessly incorporated into other MLLMs. Relatedly, a separate study has highlighted the curious tendency of both humans and AI to prefer flattering chatbot answers over truthful ones.
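The five stages above can be sketched as a simple pipeline. This is an illustrative toy, not Woodpecker's implementation: the real system delegates each stage to GPT-3.5-turbo, Grounding DINO, and BLIP-2-FlanT5, whereas the stand-ins here (a fixed concept vocabulary and a set of "image facts") exist only so the control flow is runnable end to end.

```python
# Hypothetical stand-in for LLM-based concept extraction (Stage 1 uses an
# LLM in the real system; here a fixed vocabulary plays that role).
CONCEPT_VOCAB = {"dog", "ball", "frisbee", "cat", "bench"}

def woodpecker_correct(answer: str, image_facts: set[str]) -> dict:
    """Toy version of Woodpecker's five-stage hallucination correction."""
    # Stage 1: key concept extraction from the MLLM's answer.
    words = {w.strip(".,!?").lower() for w in answer.split()}
    concepts = sorted(words & CONCEPT_VOCAB)

    # Stage 2: question formulation, one verification question per concept.
    questions = {c: f"Is there a {c} in the image?" for c in concepts}

    # Stage 3: visual knowledge validation (an object detector / VQA model
    # answers these in the real system; a fact set stands in here).
    validations = {c: c in image_facts for c in concepts}

    # Stage 4: visual claim generation from the validated answers.
    claims = [f"{c}: {'present' if ok else 'absent'}"
              for c, ok in validations.items()]

    # Stage 5: hallucination correction, rewriting the answer so it only
    # mentions concepts the image actually supports.
    supported = [c for c, ok in validations.items() if ok]
    corrected = ("The image shows: " + ", ".join(supported) + "."
                 if supported else "No visual claims were supported.")

    return {"concepts": concepts, "questions": questions,
            "validations": validations, "claims": claims,
            "corrected": corrected}
```

For example, if an MLLM answered "A dog catches a ball." but the image only contains a dog and a frisbee, the pipeline flags "ball" as unsupported and drops it from the corrected answer.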
For those interested in experiencing Woodpecker firsthand, an evaluation version is accessible on Gradio Live.

Published At

10/25/2023 5:42:46 PM

Disclaimer: Algoine does not endorse any content or product on this page. Readers should conduct their own research before taking any actions related to the asset, company, or any information in this article and assume full responsibility for their decisions. This article should not be considered as investment advice. Our news is prepared with AI support.
