Researchers Advance AI Usage within Operating Systems, Boost Model Accuracy by 27%

Algoine News
Summary:
A team of scientists from Microsoft Research and Peking University conducted a study to understand why large language models (LLMs) like GPT-4 struggle to operate within operating systems. Traditionally trained through reinforcement learning, these AI models falter in OS environments because of multimodal challenges and the risk of data loss. Using a purpose-built training environment called AndroidArena, the team identified four key skills that LLMs lack: understanding, reasoning, exploration, and reflection. In a surprise twist, the researchers discovered a "simple" method that boosted a model's accuracy by 27% by addressing the lack of "reflection." This research could pave the way for an advanced AI assistant.
Developing a strategy for ChatGPT to function independently within an operating system has proven tricky, but a collaborative effort from scientists at Microsoft Research and Peking University may have found the key. The researchers set out to pinpoint why large language models (LLMs) for artificial intelligence (AI), such as GPT-4, fail at tasks requiring operating system manipulation.

Cutting-edge systems like ChatGPT, powered by GPT-4, set the standard for generative tasks such as composing emails or penning a poem. However, enabling these models to operate as agents in a general environment brings its own set of challenges. Typically, AI models learn to navigate virtual environments through reinforcement learning, and AI developers have tapped modified versions of well-known video games such as Super Mario Bros. and Minecraft to teach autonomous exploration and goal-seeking.

Operating systems, however, pose a unique challenge. Executing functions within an OS as an agent is frequently a multimodal problem involving information exchange among various components, applications, and programs. And because reinforcement learning depends largely on trial and error, it can lead to data loss in an OS setting — for example, when a password is entered incorrectly too many times, or when the model is unsure which shortcuts apply in different apps.

Related: ChatGPT's propensity with nukes, SEGA's 80s AI, TAO's 90% growth: AI Eye

The research team worked with multiple LLMs, including models open-sourced by Meta such as Llama 2 70B and models from OpenAI such as GPT-3.5 and GPT-4. None of them showed exceptional performance: as stated in the team's research paper, the demands of these tasks exceed the capabilities of present-day AI for several reasons. To study the problem, the researchers built a novel training environment named AndroidArena that lets LLMs navigate a setting similar to the Android OS.
After establishing testing tasks and a benchmark, they found that LLMs primarily lacked four key skills: understanding, reasoning, exploration, and reflection. Though the focus of the study was to pinpoint the problem, the researchers unexpectedly identified a straightforward method for improving a model's accuracy by 27%. They tackled the lack of reflection by automatically feeding the model information about its prior attempts and the strategies it used in them, embedding that memory directly in the prompts used to trigger each action. This line of research could have profound implications for building an improved AI assistant.
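The reflection fix described above — automatically including a record of prior attempts in each new prompt — can be illustrated with a minimal sketch. The paper does not publish this exact interface; the class and method names below (`AttemptRecord`, `ReflectiveMemory`, `build_prompt`) are hypothetical, shown only to make the idea concrete.

```python
# Hypothetical sketch of reflection-by-prompt-memory: before each new attempt,
# the agent's prompt is augmented with its earlier attempts and their outcomes,
# so the model can avoid repeating failed strategies.
from dataclasses import dataclass, field
from typing import List


@dataclass
class AttemptRecord:
    actions: List[str]   # UI actions taken, e.g. "tap Settings"
    outcome: str         # what happened, e.g. "wrong screen opened"


@dataclass
class ReflectiveMemory:
    task: str
    history: List[AttemptRecord] = field(default_factory=list)

    def record(self, actions: List[str], outcome: str) -> None:
        """Store one completed attempt for future reflection."""
        self.history.append(AttemptRecord(actions, outcome))

    def build_prompt(self, observation: str) -> str:
        """Embed all prior attempts in the prompt sent to the model."""
        lines = [f"Task: {self.task}", f"Current screen: {observation}"]
        for i, attempt in enumerate(self.history, start=1):
            lines.append(
                f"Previous attempt {i}: {' -> '.join(attempt.actions)}"
                f" (result: {attempt.outcome})"
            )
        lines.append("Avoid repeating failed strategies. Next action:")
        return "\n".join(lines)
```

With no history, the prompt contains only the task and the current observation; after each failure, the recorded attempts accumulate, giving the model the "memory" the researchers found was missing.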

Published At

2/12/2024 11:37:47 PM

Disclaimer: Algoine does not endorse any content or product on this page. Readers should conduct their own research before taking any actions related to the asset, company, or any information in this article and assume full responsibility for their decisions. This article should not be considered as investment advice. Our news is prepared with AI support.
