Tech giants promise AI that can “understand” and “reason,” and even the dawn of AGI. Yet philosopher and futurist Aleksandra Przegalińska argues that today’s models remain powerful pattern-matchers, not thinking machines. She warns that the collapse of conceptual precision in AI discourse has inflated expectations, obscured limitations, and encouraged dangerous deployments. To grasp what these systems can genuinely do, and what they fundamentally cannot, we must return to philosophical clarity about the nature of intelligence.
When a chatbot passes a bar exam, tech companies announce the dawn of artificial general intelligence. When an algorithm recognizes cats in photographs, we’re told machines now “understand” visual information. When a language model generates coherent text, Silicon Valley proclaims we’ve achieved “reasoning” at scale. But have we really? Or have we simply witnessed one of the most successful marketing campaigns in technological history?
The contemporary discourse around artificial intelligence is suffering from a profound philosophical crisis. Big tech companies have stretched AI-related terminology beyond recognition, recasting incremental advances in machine learning as revolutionary breakthroughs in intelligence. This isn’t merely semantic pedantry: it represents a dangerous erosion of conceptual clarity that obscures AI’s limitations and inflates public expectations of what it can and should be used for.
___
Terminological inflation serves corporate interests remarkably well.
___
As someone who researches human-machine interaction from both philosophical and empirical perspectives, I’ve watched with growing concern as the term “artificial intelligence” has been weaponized by marketing departments and venture capitalists, detached from any rigorous account of what intelligence actually entails. If we’re serious about understanding AI’s potential and limitations, we need to begin where all clear thinking must: with precise definitions and honest epistemological frameworks.
The meaning of “AI” has expanded radically over the past decade. Once reserved for systems that might genuinely exhibit general intelligence (the ability to reason and adapt across domains), the term now applies to virtually any software involving statistical pattern recognition. Your email spam filter? AI. Netflix recommendations? AI. The autocorrect on your phone? Groundbreaking AI.
This terminological inflation serves corporate interests remarkably well. Labeling a product “AI-powered” attracts investment, justifies higher pricing, and generates media coverage. But it does so by exploiting a fundamental ambiguity about what these systems actually do. When companies describe their machine learning algorithms as “intelligent,” they’re making an implicit philosophical claim about the nature of cognition—one they’ve neither defended nor, in most cases, even acknowledged.
The philosopher John Searle distinguished between syntax and semantics in his now-famous Chinese Room thought experiment. A system can manipulate symbols according to rules (syntax) without understanding what those symbols mean (semantics). Today’s large language models are sophisticated Chinese Rooms: they process linguistic patterns without genuine comprehension. Yet tech companies routinely describe these systems as “understanding” language, “reasoning” about problems, or even developing “knowledge.”
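To make the contrast concrete, consider a deliberately tiny sketch (illustrative only, with an invented miniature corpus; production language models are vastly larger neural networks, though the worry about statistics without semantics is the same): a program that produces plausible-looking word sequences purely from co-occurrence statistics, with no representation of meaning anywhere in it.

```python
import random
from collections import defaultdict

# Toy bigram "language model": it records which word tends to follow which,
# then generates text by replaying those statistics. Nothing in the program
# represents what any word means.
corpus = (
    "the system manipulates symbols according to rules without understanding "
    "what those symbols mean the system generates text that looks coherent"
).split()

bigrams = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    bigrams[current_word].append(next_word)

def generate(seed: str, length: int = 10) -> str:
    """Continue a seed word by sampling statistically likely followers."""
    words = [seed]
    for _ in range(length):
        followers = bigrams.get(words[-1])
        if not followers:
            break
        words.append(random.choice(followers))
    return " ".join(words)

print(generate("the"))
```

The output can look fluent, yet nothing in the program could be said to know what any of the words refer to; it is syntax all the way down.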
This isn’t just hairsplitting. When we claim that systems “understand” or “know,” we’re making predictions about their reliability, generalization capabilities, and appropriate applications. A system that genuinely understands medical diagnosis should perform reliably across populations and contexts. A pattern-matching algorithm trained primarily on data from one demographic will systematically fail when deployed more broadly, as numerous studies of algorithmic bias have demonstrated.
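A minimal sketch of that failure mode, using synthetic data invented purely for illustration and the scikit-learn library: a classifier fit only on one subgroup, in a setting where the link between features and outcomes differs across subgroups, scores well on the population it was trained on and markedly worse on the one it never saw.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, weight):
    """Synthetic subgroup: same features, but a different feature-outcome link."""
    X = rng.normal(size=(n, 2))
    y = (X @ weight + rng.normal(scale=0.3, size=n) > 0).astype(int)
    return X, y

# Group A is the only data the model ever sees during training.
X_a, y_a = make_group(2000, np.array([1.0, 0.2]))
# In group B the outcome depends mostly on the second feature instead.
X_b, y_b = make_group(2000, np.array([0.2, 1.0]))

model = LogisticRegression().fit(X_a, y_a)
print(f"accuracy on group A: {model.score(X_a, y_a):.2f}")  # high
print(f"accuracy on group B: {model.score(X_b, y_b):.2f}")  # markedly lower
```

Nothing about the pattern-matching changes between the two evaluations; only the population does, and the performance gap appears because the system has learned a statistical regularity, not the underlying concept.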
Nonetheless, companies present incremental improvements within these limited systems as paradigm-shifting breakthroughs. Each new iteration of a large language model is announced with revolutionary rhetoric, despite often representing marginal improvements in statistical performance on narrow benchmarks.