Ethereum Co-Founder Claims OpenAI's GPT-4 Model Passes Turing Test
Summary:
Ethereum co-founder Vitalik Buterin claims that OpenAI's GPT-4 model has passed the Turing Test, a benchmark to evaluate an AI's human-like conversation abilities. He based his statement on a UC San Diego study where participants mistakenly identified GPT-4 as human 56% of the time. The study raises debate about the interpretation of the Turing Test and the concept of Artificial General Intelligence (AGI).
According to Vitalik Buterin, co-founder of the Ethereum blockchain, OpenAI's GPT-4 model has passed the Turing Test. The test is named after the mathematician Alan Turing, who proposed it in 1950 as a way to evaluate how closely a machine can emulate human conversation. Turing argued that if a machine could produce text convincing enough to make a human believe they were conversing with another person, it displayed a form of "thought".
The Ethereum co-founder interprets recent preprint findings from UC San Diego as an indication that, for the first time, an operational model has cleared the Turing Test. His claim rests on a paper by researchers at the University of California San Diego titled "People cannot distinguish GPT-4 from a human in a Turing test". In the study, roughly 500 participants took part in a blind experiment, conversing with both humans and AI models and then judging whether each conversation partner was human.
Participants wrongly identified GPT-4 as human 56 percent of the time, meaning the model convinced them it was a person more often than not. It is worth stressing, however, that the Turing Test and Artificial General Intelligence (AGI) are distinct concepts, even though they are often conflated. Turing's proposal simply anticipated a point at which an AI could, through dialogue, lead people to believe they were talking to another human.
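The study's headline figure can be illustrated with a short, hypothetical calculation. The sketch below is not from the paper: the per-condition trial count (200) is an assumption made purely for illustration, since the article only reports the overall pass rate. It computes the observed pass rate and a normal-approximation z-score against the 50% a randomly guessing judge would achieve.

```python
from math import sqrt

def pass_rate_zscore(ai_judged_human: int, trials: int, chance: float = 0.5):
    """Return the observed pass rate and a z-score against the chance level.

    A judge guessing at random would label a witness "human" about 50% of
    the time, so a pass rate meaningfully above 0.5 suggests the AI is
    convincing judges beyond chance.
    """
    rate = ai_judged_human / trials
    se = sqrt(chance * (1 - chance) / trials)  # standard error under H0
    return rate, (rate - chance) / se

# Illustrative numbers only: 56% pass rate as reported, with an assumed
# 200 judgments of GPT-4 (the real per-condition count may differ).
rate, z = pass_rate_zscore(112, 200)
print(f"pass rate = {rate:.0%}, z = {z:.2f}")
```

Under these assumed numbers the z-score sits around 1.7, which shows why the sample size matters as much as the headline percentage when deciding whether a pass rate is meaningfully above chance.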
It is also worth noting that the Turing Test has no standard benchmark: it lacks a technical specification and is better understood as a conceptual construct. There is no scientific consensus on whether machines can "think" in the way living organisms do, or on how such an achievement could be measured. Put simply, an AI's capacity to "think", like AGI itself, is currently neither measurable nor formally defined by the scientific and technical communities.
The concept of AGI, often linked with the Turing Test, further complicates the issue. From a scientific perspective, a "general intelligence" would be capable of any intellectual feat. No human has demonstrated "general" ability across all intellectual pursuits, so a genuinely general artificial intelligence would, in theory, possess intellectual abilities exceeding those of any known human. GPT-4 does not meet this strict definition of a "general intelligence." Despite this, some AI enthusiasts label any AI system that can fool a sizeable number of humans as "AGI".
In today's world, it's commonplace to hear terms like "AGI," "human-like," and "passes the Turing test" used loosely to refer to any AI system that produces content similar to that produced by humans.
Published At
5/16/2024 8:35:00 PM
Disclaimer: Algoine does not endorse any content or product on this page. Readers should conduct their own research before taking any actions related to the asset, company, or any information in this article and assume full responsibility for their decisions. This article should not be considered as investment advice. Our news is prepared with AI support.