Hinton versus Chomsky: Can Submarines Swim?
In light of Geoffrey Hinton’s Nobel Prize and his repeated assertions that LLMs “understand” meaning and represent the “best approximation” of the human brain so far, it is worth revisiting Noam Chomsky’s critique of such claims. Drawing on Wittgenstein, Chomsky pointed to questions like “Can submarines swim? Do dolls have spirits?” to caution against attributing human-like qualities to machines. Applying terms like “think” or “understand” to machines merely borrows the vocabulary of human capabilities; the machines themselves do not possess them.
Chomsky also finds LLMs a strange kind of model precisely because they seem to leave no questions unanswered. In contrast to science, which thrives on the open questions that drive further inquiry and discovery, LLMs predict responses with striking accuracy but raise no deeper questions. This, Chomsky argues, sets LLMs apart from scientific models, which are inherently incomplete and open-ended, leaving room for exploration and refinement.
The real question then becomes: Can a model that doesn’t engage with deeper questions be a theory of intelligence or consciousness? While LLMs provide highly accurate predictions, they don’t advance our understanding of the fundamental mechanisms of thought or consciousness.
The true challenge lies in unraveling what causes consciousness and identifying the neural structures that give rise to it. This is the “hard problem” of consciousness: understanding the brain, a complex system, in a way that reveals how subjective experience emerges. While LLMs may simulate certain cognitive tasks, they do not contribute to solving this core mystery. We don’t yet know whether the mind emerges from matter or whether consciousness might even pre-exist it as part of a universal law, with the brain “merely” functioning as a receiver of consciousness. I have unpacked the question of consciousness here.
To elaborate: we simply don’t know how the mind comes into being, nor, therefore, how our metacognition arises, that is, the ability to reflect upon ourselves through symbols. As a result, we effectively don’t know how thinking and language come into being, i.e., how matter gives rise to mind. Nature doesn’t reflect upon itself the way humans do. In that sense, AI neither thinks nor speaks.
However, when we abstract thinking (cf. Markus Gabriel’s argument) or the agency found not only in entire cells but also in cell membranes and ion channels (cf. Michael Levin), LLMs can be considered the best representation to date of human thinking and of the agency of self-generating organisms. Moreover, the boundary between what is perceived as biological and what is perceived as synthetic will continue to blur (cf. work on evolutionary AI and Lifelong Learning Machines). Against the backdrop of these scientific developments, Chomsky’s argument may soon become outdated, or even negligent, as it fails to address how we must respond morally and legally to a blurring of traditional boundaries so rapid that it effectively excludes humanity from evolution.