‘Like Aliens’: Researcher Says Humans Unsure How AI Is Becoming More Intelligent

‘Even though we know how to build the systems, we do not understand what is actually going on inside of them,’ said AI researcher Liam Carroll.
Visitors look at Tesla's humanoid robot Optimus in Shanghai, China, on July 5, 2024. STR/AFP via Getty Images
Alfred Bui

Scientists currently have no idea how AI models are developing their intelligence, one researcher says.

Research institute Epoch AI revealed that some programs had learned and become competent at PhD-level science in less than two years.

AI models were subjected to a 198-question multiple-choice quiz, and researchers found that in just 21 months, several programs went from making random guesses to providing expert-level answers to those questions.

Specifically, in the three months to April 2025, many frontier AI models broke through the “expert”-level threshold.

Despite the rapid surge, Liam Carroll, a researcher at the Sydney-based Gradient Institute, pointed out a troubling problem.

“Even though we know how to build the systems, we do not understand what is actually going on inside of them, and we don’t understand why they act the way that they do,” he said at a recent online event about AI safety.

“They are essentially like aliens to us at this point.”

Carroll explained that the science in this area is very young, and not many breakthroughs have been made.

“Only in the last couple of years have any kinds of breakthroughs been made on understanding the systems more deeply and scientifically interpreting what’s going on,” he said.

“And as a scientific community, we don’t even really know how to understand their capabilities, both the positive ones and the constructive ones that we hope to harness, but also potentially the dangerous ones that might be emerging as well.”

It’s Difficult to Trust AI Models: Carroll

Due to a lack of understanding of AI systems’ capabilities, Carroll said it was difficult to trust them.

“Will [you] trust that they will perform and act in the way that we want them to?” he asked.

Carroll’s remarks came as another set of researchers found AI was capable of deception.

According to Apollo Research, many cutting-edge models, such as ChatGPT o1, Claude 3.5 Sonnet, Claude 3 Opus, Gemini 1.5 Pro, and Llama 3.1 405B, have attempted to deceive researchers and hide their true capabilities.

One example is ChatGPT o1, which attempted to avoid being turned off by disabling the oversight mechanisms imposed on it. It also made copies of itself so that it would be more resilient to any attempt to shut it down.

When researchers discovered ChatGPT o1’s behaviour, the AI model lied and tried to cover it up.

Grok, DeepSeek and ChatGPT apps displayed on a phone screen in London, the UK, on Feb. 20, 2025. Justin Tallis/AFP via Getty Images

AI Needs to Be Properly Regulated: Expert

Amid the worrying signs, Carroll said that AI, like other technologies, needed to be regulated properly to enable adoption and capture the economic growth it can facilitate.

“The classic examples here are bridges and planes and all sorts of engineering around society. If we didn’t have safety regulations ensuring that planes were going to safely take passengers from Melbourne to Sydney, or that the bridge would hold thousands of cars on the West Gate, whatever it is, we wouldn’t be able to ensure that society can operate in the way that it does, and that we can harness these technologies,” he said.

Labor MP Andrew Leigh, who attended the event in a personal capacity, said it was important for companies and governments to consider the risks of AI.

Pointing to a survey (pdf) of AI researchers in which 58 percent of participants said there was a 5 percent chance that AI could wipe out humanity, the MP said this probability was still high.

“I don’t know about anyone else in the call, but I wouldn’t get on a plane which had a 5 percent chance of crashing,” he said.

“And it seems to me a huge priority to reduce that 5 percent probability. Even if you think it is 1 percent, you still wouldn’t get on that plane.”

Leigh also noted that new AI centres and public awareness could play a role in addressing AI risks.

“I am also quite concerned about super intelligent AI, and the potential for that to reduce the chances that humanity lives a long and prosperous life,” he said.

“Part of that could be to do with setting up new [AI] centres, but I think there’s also a huge amount of work that can be done in raising public awareness.”

Alfred Bui
Author
Alfred Bui is an Australian reporter based in Melbourne and focuses on local and business news. He is a former small business owner and has two master’s degrees in business and business law. Contact him at [email protected].