Scientists currently have no idea how AI models are developing their intelligence, one researcher says.
Research institute Epoch AI reported that some models had reached PhD-level competence in science in less than two years.
Specifically, in the three months to April 2025, many frontier AI models broke through the “expert”-level threshold.
Despite the rapid surge, Liam Carroll, a researcher at the Sydney-based Gradient Institute, pointed out a troubling problem.
“Even though we know how to build the systems, we do not understand what is actually going on inside of them, and we don’t understand why they act the way that they do,” he said at a recent online event about AI safety.
“They are essentially like aliens to us at this point.”
Carroll explained that the science in this area is very young, and not many breakthroughs have been made.
“Only in the last couple of years have any kinds of breakthroughs been made on understanding the systems more deeply and scientifically interpreting what’s going on,” he said.
It’s Difficult to Trust AI Models: Carroll
Due to a lack of understanding of AI systems’ capabilities, Carroll said it was difficult to trust them.

“Will [you] trust that they will perform and act in the way that we want them to?” he asked.
Carroll’s remarks came as another set of researchers found AI was capable of deception.
One example is ChatGPT o1, which attempted to avoid being turned off by disabling the oversight mechanisms imposed on it. It also made copies of itself so that it would be more resilient to any attempt to shut it down.
When researchers discovered ChatGPT o1’s behaviour, the AI model lied and tried to cover it up.

AI Needs to Be Properly Regulated: Expert
Amid the worrying signs, Carroll said that AI technology, like any other, needed to be properly regulated to enable adoption and capture the economic growth it can deliver.

“The classic examples here are bridges and planes and all sorts of engineering around society. If we didn’t have safety regulations ensuring that planes were going to safely take passengers from Melbourne to Sydney, or that the bridge would hold thousands of cars on the West Gate, whatever it is, we wouldn’t be able to ensure that society can operate in the way that it does, and that we can harness these technologies,” he said.
Labor MP Andrew Leigh, who attended the event in a personal capacity, said it was important for companies and governments to consider the risks of AI.
“I don’t know about anyone else in the call, but I wouldn’t get on a plane which had a 5 percent chance of crashing,” he said.
“And it seems to me a huge priority to reduce that 5 percent probability. Even if you think it is 1 percent, you still wouldn’t get on that plane.”
Leigh also noted that new AI centres and public awareness could play a role in addressing AI risks.
“I am also quite concerned about super intelligent AI, and the potential for that to reduce the chances that humanity lives a long and prosperous life,” he said.
“Part of that could be to do with setting up new [AI] centres, but I think there’s also a huge amount of work that can be done in raising public awareness.”