Rob Hamilton

Does Wittgenstein give us the key to understanding whether AI could be conscious?

Updated: Jun 10

A recent paper by Murray Shanahan of Google DeepMind argues that the ideas put forward by Wittgenstein in his later work give us the key to understanding the possibility of consciousness in machines.  In this blog, we'll take a look at what is being claimed and see how it contrasts with the Anything Goes perspective. 


Wittgenstein’s central project in the later stages of his career was to ‘show the fly the way out of the fly bottle’ – that is, to dissolve many of the confusions and pickles we get ourselves into when we use language inappropriately.  One of his major goals was to disabuse us of the notion that the words in our language stand for things.  This was the dominant view before his time, and it still persists today.  Bertrand Russell wrote in 1903 that “Words have meaning in the simple sense that they are symbols that stand for something other than themselves”.  This was part of a broader attempt, developed in the early part of the 20th century and pioneered by figures such as Gottlob Frege, G.E. Moore and Russell himself, to introduce more clarity over concepts and more logical rigour into philosophical argument. 

At face value, this idea might appear to make sense.  For example, the word ‘Sun’ refers to the star around which our planet revolves.  When we talk about consciousness in machines, the word ‘consciousness’ refers to the state of affairs in which an entity has a point of view on the world around it.  Or as the philosopher Thomas Nagel put it in his famous 1974 paper ‘What Is It Like to Be a Bat?’, “an organism has conscious mental states if and only if there is something that it is like to be that organism”.  However, this view of language soon runs into problems.  Where are the things that words like ‘love’ or ‘justice’ refer to?  What about the number 5, or connecting words like ‘is’ or ‘or’?  Wittgenstein argued that the meaning of words is not the objects to which they refer; it derives instead from how they are used. 

This is perhaps best seen through the use of colour words.  You might at first sight think that the word ‘blue’ refers to a certain image that you see in your mind.  It is the way the sea and the sky appear to you.  But it occurs to many of us, growing up, that others might look at the sky and perceive it differently.  For example, some may see blue as we would see yellow – we could never know!  This was discussed explicitly by the 17th-century philosopher John Locke in his An Essay Concerning Human Understanding.  The key point is that it doesn’t matter how the sky and the sea appear to you – the meaning of the word ‘blue’ is not dictated by these private sensations; it is determined by its use.  It is used to describe the property that the sea and the sky have in common – that is how we learn to use the word appropriately in the first place.  In what is known as his ‘Private Language Argument’, Wittgenstein makes the point that there is no way of communicating our private sensations to others. 

This applies to other private sensations as well, and perhaps the one that has been most hotly debated is the concept of pain.  We can only know that someone is in pain by observing their behaviour, and Wittgenstein would say that it is this behaviour that dictates what it means to be in pain.  It has to be, because the behaviour is what is publicly available, and it is how we learn to use the word ‘pain’.  A private sensation of pain is not something we can get a handle on.  Philosophers such as Hilary Putnam have disputed this, claiming a distinction between the sensation of pain and its expression through pain behaviour.  In his Brains and Behavior (1965), he considers a race of ‘super-spartans’ who suffer great torment but are so well disciplined that they walk around normally with smiles on their faces.  Wittgenstein would counter that they can’t really be in great torment if the torment doesn’t affect them.  This is not what torment means.  We use that word in situations where the pain is too severe for normal behaviour to continue. 

We can now turn to consciousness – or the ‘problem of other minds’, as it is sometimes known.  The meaning of the word ‘consciousness’ cannot be a reference to some private sensation that an entity might have.  Rather, it is determined by how it is used.  We describe a being as conscious based on ordinary observables, such as whether they are awake, respond to stimuli and display an awareness of the world around them.  This is how the word ‘consciousness’ is used, and therefore this is what it means to be conscious.  Shanahan proposes in his paper that if an AI were to display all these traits of consciousness, then it would be reasonable to assert that it is conscious as a matter of fact, because this is what it means to be conscious. 


Not everyone finds this argument persuasive.  Some might think that even if it is accepted that the term ‘consciousness’ does not refer to any kind of private sensation, it is not obvious that this answers the real question of whether machines might have a point of view on the world – whether there is something that it is like to be a machine.  The world is the way it is, they say, independently of how we talk about it, so considering clever arguments about the meaning of words does not address the question of how the world is. 

Anything Goes metaphysics helps us see things more clearly.  Graduates of Rob Hamilton’s “Anything Goes” School of Metaphysics acknowledge that it is impossible to know the world ‘as it really is’, and that indeed such a concept may not even make sense.  All we have is the models of the world given to us by our brains and our science.  And because this is all we have, it is only the components of our models that can be considered ‘real’.  We don’t have anything else.  Because ‘all the world is models’, ‘the map is the territory’, as we like to say.  And so Wittgenstein’s arguments are relevant, because it is language that gives us the tools we need to construct our models of reality. 

So what are the implications of this for consciousness?  Well, because both other people and AIs are part of our model, it makes no sense to think of their inner sensations as mysterious.  We model our experience as populated by people who have thoughts, feelings and experiences of their own.  When we recognise them as seeing something that is blue, for example, that is our model of what is going on.  We can only understand this on our own terms.  And given that they are part of our model, it doesn’t make sense to imagine that there might be a ‘real inner way’ that they see it – a way that we don’t have access to – as it would if they were an independently existing entity.  So we may as well model them as seeing blue in the same way that we do.  There is nothing to be gained by modelling them as seeing it differently, or by characterising it as something mysterious.  They see something blue.  We know what that means. 

When it comes to AI, similar considerations apply.  We need to think about, or model, AIs in a way that allows us to get a handle on them and understand their behaviour.  If, in future, AIs behave as we would expect a conscious being to behave, we can conclude that they are conscious.  We are not treating them merely ‘as if’ they are conscious – which carries with it the implicit caveat that they might not really be conscious.  Rather, ‘the map is the territory’.  If we model them as being conscious, then they are conscious.  It is not something we can be mistaken about, when the only standard of correctness to which we can refer is consistency with our wider model.

These issues are explored in depth in my upcoming book Anything Goes: A Philosophical Approach to Solving the Hard Problem.  Why not visit my website to find out more or sign up to my mailing list to receive a notification when the book is released? 



