Michael Dummett’s Anti-Realism and the Contemporary Understanding of Artificial Intelligence

By Matthew Parish, Associate Editor

The work of Michael Dummett, one of the most influential British philosophers of the latter half of the twentieth century, was grounded in the conviction that understanding language is the key to understanding thought. His anti-realism, developed through reflections on the philosophy of logic and the theory of meaning, challenged entrenched assumptions about truth, knowledge and the nature of linguistic understanding. While Dummett died just before the modern renaissance of artificial intelligence, his ideas possess an unexpected relevance to today’s debates about the status, capability and interpretability of advanced computational systems. Dummett’s philosophical framework provides a lens through which the nature of machine reasoning, the limits of linguistic comprehension, and the question of artificial understanding may be examined afresh.

Dummett’s anti-realism is rooted in the intuition that truth cannot float free of our capacity to recognise it. Against traditional realist views, which hold that statements are true or false independently of our knowledge of them, Dummett maintained that the meaning of a statement is inseparable from the conditions under which we could verify or refute it. For him, understanding a sentence is knowing what counts as a justification for asserting it. This verificationist tradition, which he recast through sophisticated logical argumentation, assigns primacy to demonstrability, inference and public linguistic practice over metaphysical commitments.

Such a view bears a striking affinity with certain themes in contemporary artificial intelligence. Machine learning systems, particularly large language models, operate on patterns of use rather than metaphysical truths. Their outputs are grounded in inferential regularities extracted from data, not in any independent realm of facts. In this respect, they embody a kind of instrumentalist epistemology that Dummett would likely have found congenial. They do not grasp meanings in a realist sense; they manipulate representations according to rules that mimic the justificatory structures humans employ when using language. An AI model is therefore not a repository of truths about the world but a mechanism for generating plausible assertions based upon accessible evidence.

Dummett’s emphasis upon the public and communal nature of linguistic understanding also bears upon current debates about machine cognition. He argued that meaning is not a private mental entity but a function of shared practice, governed by implicit rules manifested in behaviour. The question of whether a machine understands language thus becomes, in Dummettian terms, a matter of whether it can participate in the inferential games through which meaning is expressed. Modern artificial intelligence is beginning to approximate such participation. Systems can adjust their assertions in response to new inputs, correct themselves after error, and follow the pattern of human linguistic norms in ways that resemble rule-governed behaviour. Nevertheless, the absence of an experiential or embodied context places limits on how closely this alignment can be interpreted as genuine understanding. Dummett would have pressed the question of whether inferential capability alone, absent the full context of linguistic practice in a human society, is sufficient for meaning.

A further insight from Dummett’s work concerns the status of undecidable propositions and the nature of computational limits. His exploration of intuitionistic logic, in which the law of excluded middle does not universally hold, provides a philosophical framework that resonates with the limitations built into machine learning systems. These models often operate in probabilistic or uncertain environments in which determinate truth values are unavailable. They do not claim certainty; they generate likelihoods. Dummett’s view that some propositions may lack a truth value unless a proof or refutation is available parallels the conditions under which AI systems make predictions about complex or underdetermined data. The philosophical affinity is not perfect, but it underscores a shared recognition that knowledge is frequently partial, conditional and dependent upon the evidence available to us.
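The asymmetry at issue can be made concrete in a constructive proof assistant such as Lean. In intuitionistic logic, excluded middle (p ∨ ¬p) cannot be derived for an arbitrary proposition without classical axioms, yet its double negation can: a constructive reasoner can show that excluded middle is irrefutable without ever being entitled to assert it. A minimal sketch:

```lean
-- Constructively provable: excluded middle cannot be refuted.
-- No classical axiom (Classical.em, Classical.byContradiction)
-- is invoked anywhere in this proof.
theorem not_not_em (p : Prop) : ¬¬(p ∨ ¬p) :=
  fun h => h (Or.inr (fun hp => h (Or.inl hp)))

-- By contrast, (p ∨ ¬p) itself has no constructive proof for an
-- arbitrary p; asserting it requires the classical axiom, e.g.
-- `Classical.em p`. Until a proof or refutation of p is in hand,
-- the disjunction remains unasserted — Dummett's point in miniature.
```

This is the formal shape of Dummett’s claim that a statement may be neither assertible nor deniable until a justification is available.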

Moreover, Dummett’s insistence that meaning is tied to manifestability raises important ethical and interpretive questions about artificial intelligence. If a system’s internal states cannot be interpreted or inspected, and if its justificatory processes remain opaque, can we regard its outputs as meaningful in a philosophically robust sense? Dummett’s framework suggests that understanding cannot be divorced from transparency. Where human agents must be able to justify their assertions, artificial systems should likewise be required to provide interpretable grounds for their outputs. This line of reasoning strengthens the case for explainable AI, insisting that computational decisions must be accessible to public scrutiny rather than taken as inscrutable facts.

Finally, Dummett’s anti-realism directs attention to the future trajectory of artificial intelligence. If meaning depends upon verification, then the expansion of machine capability will entail the construction of new justificatory practices. In some contexts AI systems already perform inferential tasks beyond the reach of individual human agents yet remain comprehensible as long as their operations can be translated into a shared framework. This suggests that artificial intelligence may not threaten human conceptual authority if its operations can be embedded in public linguistic norms. Conversely, if AI develops modes of inference inaccessible to human explanation, Dummett’s insights warn that such systems will fall outside the domain of meaningful discourse. They would produce symbols without sense, because they would be incomprehensible to the linguistic community that gives meaning its home.

In this manner, Dummett’s anti-realist philosophy provides a deep, even if indirect, set of tools for thinking about modern artificial intelligence. He compels us to consider the nature of meaning, the structure of justification, the boundaries of knowledge, and the importance of public reasoning. As AI becomes more sophisticated, the relevance of Dummett’s thought increases rather than diminishes. His work reminds us that understanding is not an internal metaphysical state but a social and rational practice. Artificial intelligence will be judged not by whether it possesses hidden truths, but by whether it can participate coherently, transparently and responsibly within that practice.
