06/17/2025 | News release | Distributed by Public on 06/17/2025 00:49
What can babies do that even the most advanced Large Language Models (LLMs) currently cannot? Renowned AI expert Professor Richard Sutton illustrated this with a video of an infant excitedly crawling about his play area and examining his toys: humans are capable of generating new knowledge through experience. LLMs - artificial intelligence (AI) systems such as ChatGPT that are trained on human-generated text and images - still lack this capability.
To qualify as truly intelligent, AI would require a richer source of data: one that is not static but experiential, said Prof Sutton, a professor of computing science at the University of Alberta in Canada. "Experience just means the data you get when you interact with the world - this is the way both people and animals learn," he explained. But what is the limit of machine learning? Will humans eventually lose control of AI?
Prof Sutton answered these questions and more as part of the NUS120 Distinguished Speaker Series on 6 June 2025. The series is one of several events held to celebrate the University's 120th anniversary this year. His lecture, which came on the heels of two other AI-themed ones by Professors Yoshua Bengio and Yann LeCun, drew close to 600 in-person attendees, with over 4,000 joining the livestream on YouTube.
From simulation to real-world interaction
Prof Sutton, who won the 2024 Turing Award - named after the "father of computer science" Alan Turing and often dubbed the Nobel Prize of computing - pointed out that experiential learning in machines was mooted by Turing himself, during his first public presentation on AI in 1947.
Since then, much progress has been made on this front. This is seen in the rise of computer programs such as AlphaGo, which beat a grandmaster in the board game of Go in 2016, and AlphaProof, which achieved silver medal-standard at the 2024 International Mathematical Olympiad.
The earliest phase of machine learning revolved around simulated experience, but an increasing number of programs today learn through direct, first-hand interactions with the real world. This is part of reinforcement learning, a field Prof Sutton has pioneered since the 1980s.
In a nutshell, a reinforcement learning program is not told what actions to take; instead, it is given a certain set of rewards to pursue. It must then discover which actions yield the most reward by trying them out, thus enabling experiential learning.
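The idea can be sketched with a classic toy problem from the reinforcement learning literature, the multi-armed bandit. The example below is illustrative only (it was not part of the lecture): the agent is never told which action is best, and all names, arm values, and parameters here are hypothetical. It learns each action's value purely from the rewards it receives, exploring a random action occasionally and otherwise exploiting its current best estimate.

```python
import random

def run_bandit(true_means, steps=5000, epsilon=0.1, seed=0):
    """Epsilon-greedy learning on a simple multi-armed bandit.

    The agent is not told which arm is best; it estimates each arm's
    value from the rewards it experiences, exploring a random arm with
    probability epsilon and otherwise exploiting its best guess.
    """
    rng = random.Random(seed)
    n_arms = len(true_means)
    estimates = [0.0] * n_arms  # running estimate of each arm's value
    counts = [0] * n_arms       # how often each arm has been tried

    for _ in range(steps):
        if rng.random() < epsilon:                    # explore: try something new
            action = rng.randrange(n_arms)
        else:                                         # exploit: use current best guess
            action = max(range(n_arms), key=lambda a: estimates[a])
        reward = rng.gauss(true_means[action], 1.0)   # noisy reward from the world
        counts[action] += 1
        # incremental average: nudge the estimate toward the observed reward
        estimates[action] += (reward - estimates[action]) / counts[action]

    return estimates, counts

estimates, counts = run_bandit([0.2, 0.8, 0.5])
best = max(range(3), key=lambda a: estimates[a])
print(best)  # with these settings the agent settles on arm 1, the truly best arm
```

The key point of the sketch is that no line of code ever tells the agent the answer; the preference for arm 1 emerges entirely from interaction, which is the experiential learning Prof Sutton describes.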
Indeed, NUS President, Professor Tan Eng Chye, stated in his opening remarks that reinforcement learning "has evolved to become one of the most important approaches for creating intelligent systems", and affirmed NUS' belief in the vision of advancing technologies with real-world impact, guided by a strong sense of social responsibility.
AI development an "unalloyed good"
Prof Sutton predicted that LLMs would one day be seen as a "momentary fixation of the world" compared with superior AI systems in a future he dubbed the "era of experience".
For now, though, reinforcement learning has yet to realise its full power because most existing algorithms are still incapable of continual learning and meta-learning, what he called "learning to learn".
Likening AI development to a marathon, he suggested that the creation of such super-intelligent agents would take several decades, and still longer if they were to design and replicate things without prompting. But he believes the outcome would be an "unalloyed good" for the world. "AI is the inevitable next step in the development of the universe," he declared.
NUS Associate Vice President (AI) Bryan Low, who moderated the event's Q&A session, pointed out that instead of being open and sharing knowledge, there were those who called for AI to be developed in a "closed-door fashion" for safety-related reasons. In response, Prof Sutton acknowledged that there were "countervailing intuitions" on how best to manage AI.
But he expressed a hopeful view of AI, noting that "so many calls for safety are really calls for centralised control". In the long run, he cautioned that such control would stymie cooperation, which is "the source of all that is good in the world".
Calls for centralised control, he added, are rooted in fear and could drive a dangerous wedge between actors who are well capable of peacefully coexisting and working towards the common good.
In his view, AI politics mirrors human politics. "We have to worry that these centralised authorities will take too much power and abuse it and become authoritarian, or introduce a sort of sclerotic friction in our lives. We just have to resist the calls for centralised control," he urged.
Change our world, not AI
Instead, Prof Sutton advocated decentralised cooperation, where authority and decision-making are distributed across various levels. Such is the operating model of the Openmind Research Institute, which he co-founded in 2023 to conduct basic AI research. He added that the institute would likely set up its headquarters in Singapore, although he did not specify a timeline.
To combat rising hostility against AI, all of the institute's research will be made freely available. "No country can dominate and control this research output," said Prof Sutton. Openmind currently has a research lab in Canada.
To an audience member who asked why he was looking to set up Openmind's headquarters in Singapore, Prof Sutton noted that the free transfer of knowledge is commonplace in many other fields. When it comes to AI, however, national security is often cited as a reason for withholding information from other countries.
Unlike some countries where AI institutes have to sign non-disclosure forms to prevent certain other countries from accessing their work, Singapore does not have such restrictions. This makes it conducive for free and open research, he said.
The dangers of AI, Prof Sutton added, are not the fault of AI but people themselves. "Instead of changing the AI, we ought to change the world in which they live," he said. "People will respond to their environment - if they are brought up in an environment where it is not rational to cooperate, then they will not cooperate. I want a world in which AI sees that cooperation is the natural thing to do."
Watch the livestream of Prof Sutton's lecture here.