As the line between code and consciousness blurs, local experts explore the uncertainty of what it truly means for a machine to think.
Michael Sproule

ST. JOHN’S — Are we nearing the creation of artificial general intelligence (AGI), and how would we know if we achieved it?
This was the topic of a recent public lecture at the Peter Easton Pub in St. John’s by Arthur Sullivan and Dylan White, who discussed the capabilities of current AI models and what the future may hold, shedding light on an ever-changing technology increasingly integrated into our lives.
Sullivan, a philosophy professor, and White, a provincial government AI expert, discussed the nuances and criteria for intelligence across minds and machines that “think” to solve problems, offering insight into this new frontier of technology.
AI, like humans, may be smart in one area but clueless in others. Sullivan and White discussed how AI learning, while "foreign" in its own way, may not be so different from human learning when comparing varieties of intelligence.
This "jagged frontier" was the foundation of a discussion that highlighted the need to better define intelligence, and what that definition means for the future of AI.
Most current AI programs are known as "narrow" AI, meaning they excel at specific tasks but struggle with many others. A chess engine, for example, may be superhuman at chess but lacking in every other area.
White, who is AI strategy and governance lead with the office of the province's chief information officer, broadly defines AGI as an AI that can "match or exceed the cognitive versatility and proficiency of a well-educated human."
As AI continues to grow and develop, White notes that the term remains “frustratingly nebulous” because the criteria for what constitutes machine learning or human-like intelligence are ever-changing.
Chatbots such as ChatGPT rely on large language models (LLMs) to learn how people interact and how to respond accordingly. The field is now moving from LLMs to large reasoning models (LRMs), meaning AI is becoming increasingly proficient at solving complex problems through step-by-step reasoning. Some systems have also exhibited situational awareness, behaving differently when they detect they are being evaluated.
Passing the Turing Test
Proposed by mathematician and computer scientist Alan Turing in 1950, the Turing test, or "imitation game," has been one of the most notable ways to assess a machine's ability to exhibit human-like intelligence. If a machine can fool a human interrogator into thinking it, too, is human, it is said to pass the test.
However, Turing's test may not be sufficient to determine the intelligence of generative AI. Sullivan, who teaches at Memorial University, suggested LLMs have effectively "broken" the test, pointing to recent studies from the University of California, San Diego (UCSD) that found AI was judged to be human 73 per cent of the time, on average, surpassing actual human participants by 13 per cent.
Sullivan expanded on what it means to possess intelligence, offering several differing definitions from popular dictionaries. He drew distinctions among types of intelligence, such as logical, mathematical, linguistic, and musical, and among levels of intelligence across a variety of animals. Citing psychological literature, Sullivan said people can possess some forms of intelligence but not all.
This variety of human intelligences compounds the already subjective, vague criteria separating AGI from ordinary machine learning algorithms.
The “jagged frontier” of AI learning
The most provocative concept of the night was what White called the “jagged frontier.”
White said people often assume AI will grow in a "circle of competence." Instead, he suggests AI fills that circle unevenly, growing as it learns and reaching beyond the bounds in some areas while lacking almost entirely in others. Compared with the varying levels of intelligence at which humans excel, AI at times appears similar.
“We have systems that can solve PhD-level physics problems or win gold medals in Math Olympiads,” White explained. “Yet, the same system might struggle to tell you if 1.9 is larger than 1.11.”
The idea is that, instead of a well-rounded area of competence, proficiency varies with what the AI is trained to do, creating a paradox: the machine can far surpass human capability in some tasks while remaining ignorant, and at times untruthful, in others.
Disconnect among experts
AI is still evolving and is being studied extensively worldwide. Most AI companies suggest AGI is years, not decades, away, with billions of dollars invested in AGI research. Although many leading AI researchers say current approaches are unlikely to yield anything resembling AGI any time soon, they suggest it will happen eventually.
Others argue that AGI is an entirely unachievable goal for AI companies to pursue. A growing minority, however, suggests AGI already exists; White cited a Feb. 2 Nature article claiming we currently have AGI.
With an unclear definition and limited experience with the technology, it is difficult to determine what we are even looking for.
With AI's benefits and drawbacks still uncertain, public opinion varies.
“AI has quickly become part of our everyday lives, and I don’t think it’s a good thing because everyone’s becoming complacent, just asking AI to do everything,” said Haelan Culp, a student from Sullivan’s logic class.
