
The Emptiness Inside: Why Large Language Models Can’t Think—and Never Will

The hype that modern AI systems based on large language models are on the brink of "true intelligence" mistakes fluency for thought.
Artificial Intelligence signage is displayed during the Mobile World Congress, the world's largest mobile technology trade show, in Barcelona on March 3, 2025. Manaure Quintero/AFP
Commentary
Early attempts at artificial intelligence (AI) were ridiculed for giving answers that were confident, wrong, and often surreal—the intellectual equivalent of asking a drunken parrot to explain Kant. But modern AI systems based on large language models (LLMs) are so polished, articulate, and eerily competent at generating answers that many people assume they can know and, even better, can independently reason their way to knowing.
Gleb Lisikh
Gleb Lisikh is an IT management professional and father of three children. He grew up in various parts of the Soviet Union before coming to Canada.