Nvidia CEO Predicts Human-Level AI Within Five Years, Promises Resolution for AI Hallucinations
Summary:
Nvidia CEO Jensen Huang predicted the advent of human-level artificial intelligence (AI) within the next five years during a recent speech at a developers conference. He also addressed AI "hallucinations," a phenomenon in which AI models produce incorrect information not present in their training data, and said he believes the problem can be easily rectified. Overcoming this issue could revolutionize sectors like finance and cryptocurrencies, which currently use AI with caution due to accuracy concerns. Ultimately, solving this problem may open the door to fully automated trading.
During a recent speech at Nvidia's GTC developers conference in San Jose, California, CEO Jensen Huang said he believes human-level artificial intelligence (AI) is most likely just five years away. He also confidently suggested that one of the field's main obstacles, AI hallucinations, should be easy to overcome. Huang talked extensively about the concept of artificial general intelligence (AGI) during his keynote speech. According to TechCrunch's account, Huang underscored the critical role benchmarking plays in defining AGI and expected AI to score 8% better than most humans on such tests within five years.
The specifics and context of the benchmark tests Huang was alluding to, however, remain unclear. Generally, the term AGI refers to an AI's capacity to perform any task a human of average intelligence could accomplish, given sufficient resources. An intriguing subject Huang discussed was "hallucinations," a phenomenon in which AI models produce new information, typically erroneous, that is not present in their training data. This commonly occurs when large language models are deployed as generative AI systems. In Huang's view, rectifying the issue should not pose a major challenge; the primary solution involves adding a rule requiring the AI to cross-check every answer it generates against a source.
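The cross-checking rule Huang describes can be sketched as a simple generate-then-verify loop. The sketch below is purely illustrative: `model_answer` and `find_supporting_source` are hypothetical stand-ins for a language model call and a retrieval step, not any vendor's actual API.

```python
def model_answer(question):
    # Hypothetical stand-in for a language model call.
    canned = {"What year was Nvidia founded?": "1993"}
    return canned.get(question, "unknown")

def find_supporting_source(answer, corpus):
    # Stand-in verification step: accept the answer only if some
    # document in the reference corpus actually contains it.
    return any(answer in doc for doc in corpus)

def answer_with_check(question, corpus):
    # Generate an answer, then cross-check it before returning it.
    answer = model_answer(question)
    if find_supporting_source(answer, corpus):
        return answer
    return "No supported answer found."

corpus = ["Nvidia was founded in 1993 in California."]
print(answer_with_check("What year was Nvidia founded?", corpus))  # → 1993
```

The key design point is that the model's output is never trusted on its own: an answer that cannot be matched to a source is withheld rather than presented as fact, which is the behavior existing "referencing" features approximate.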
Several existing AI systems, such as Microsoft's Copilot, Google's Gemini, OpenAI's ChatGPT, and Anthropic's Claude 3, already include referencing features, but fully overcoming the hallucination problem could cause a seismic shift across sectors such as finance and cryptocurrencies. At present, the creators of these AI systems warn users about their accuracy limitations, particularly for tasks where precision is paramount. The use of generative AI systems in finance and cryptocurrency is therefore substantially limited.
Existing AI-enabled trading robots are normally programmed to adhere to a set of rigid rules that prevent unsupervised execution, making them closely resemble limit orders. If these systems could sidestep the hallucination problem, they could ideally execute trades and make financial recommendations and decisions independently. All in all, a solution to hallucinations in AI could pave the way for fully automated trading.
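The kind of rigid, limit-order-style rule such bots follow can be illustrated with a minimal guard function. This is a hypothetical sketch of the general idea, not the logic of any real trading system: the trade fires only when the price condition holds, regardless of what the model suggests.

```python
def should_execute(order_price, limit_price, side):
    # Rigid rule resembling a limit order: buy only at or below
    # the limit price, sell only at or above it. Any AI-suggested
    # trade that violates the rule is simply not executed.
    if side == "buy":
        return order_price <= limit_price
    return order_price >= limit_price

# A suggested buy above the limit is blocked; one below it passes.
print(should_execute(99.0, 100.0, "buy"))   # → True
print(should_execute(101.0, 100.0, "buy"))  # → False
```

The rule acts as a hard boundary around the model, which is why current bots behave more like pre-set orders than autonomous traders; removing the need for such guardrails is what solving hallucinations would enable.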
Published At
3/21/2024 8:49:42 PM
Disclaimer: Algoine does not endorse any content or product on this page. Readers should conduct their own research before taking any actions related to the asset, company, or any information in this article and assume full responsibility for their decisions. This article should not be considered as investment advice. Our news is prepared with AI support.