The problem of other minds is the question of how we can know that there are minds other than our own. We each experience our own minds directly, from ‘within’. I can apprehend my sensations and emotions in a way that is ‘felt’; these mental states have a phenomenology that I experience. Furthermore, I can know what I want or believe through introspection. But my knowledge of other people’s minds, if I have any such knowledge, is very different. I can have no phenomenological experience of other people’s mental states, nor can I know them through introspection. At first sight, at least, all we have to go on is other people’s behaviour. Can I infer from this that they have minds, and what mental states they might have? If I can’t know about other minds directly, as I know about my own, is there an indirect way of knowing about other minds; and if so, what is it?
There are four families of answers, broadly speaking, that philosophers have presented:
1) the argument from analogy: other people are made of the same stuff as me and behave very much as I do in similar circumstances; I have a mind; by analogy, it’s logical to think they do as well;
2) behaviour is not evidence that someone has a mind; the link is logical rather than inductive;
3) that other human beings have minds is the ‘best explanation’ for how they behave;
4) the premise of the argument is wrong – we do have direct experience of other people’s minds, e.g. in the experience of being watched; or again, we need a sense of others before we can gain a sense of ourselves.
The argument from analogy
Thomas Reid was perhaps the first philosopher to recognise the problem in its modern form, and John Stuart Mill later developed the argument from analogy in response (in fact, Mill was also dissatisfied with it, and developed a form of inference to the best explanation – see below). Although it is perhaps the ‘commonsense’ position, the argument from analogy was attacked in the 1950s by Gilbert Ryle and others. The standard objection is based on its use of induction. The conclusion that other people have minds is based on a single case – mine. This is like saying ‘that dog has three legs; therefore, all dogs have three legs’. You can’t generalize from one case, because it could be a special case. Perhaps I am the only person to have a mind. Ryle further argued that inductive conclusions should be checkable. We know from seeing other dogs that a three-legged dog is unusual – we can check the conclusion. But it is impossible to check whether other people have minds; we cannot experience their minds directly (indeed, this is logically impossible). However, this second point is not as strong as the first: the impossibility of an independent check doesn’t weaken the argument from analogy, since the absence of extra evidence doesn’t undermine the evidence one already has.
Ayer reformulated the argument to avoid induction from one case by moving from a single correlation between ‘behaviour’ and (a single) ‘mind’ to correlating many behaviours of mine with many mental states of mine. We can further develop the point that mental states are causes of behaviour – to establish this, we only need our own case. But having established the cause, we may legitimately infer the relevant causal link between the behaviour of others and mental states.
But the argument still doesn’t work. It relies on the view that like effects (behaviour) have like causes (mental states), which has been generally rejected. Even if my behaviour is caused by my mental states, that does not mean that the behaviour of other people could not be caused by something entirely different (say, brain states without mental states).
Ryle felt that his behaviourism – the claim that talk about the mind and mental states is talk about dispositions to behaviour – solved the problem. If behaviourism is true, the link between behaviour and minds isn’t evidential, it’s logical. From how someone behaves, we can infer what behavioural dispositions they have. But from this, we don’t then infer that they have a mind. To say they have certain dispositions just is to say they have certain mental states. Certain types of behaviour, then, which we can observe directly, aren’t simply evidence for certain mental states; the link is logical.
However, behaviourism has faced devastating objections of its own, in particular, that there is no set behaviour correlated with a mental state. Doing exactly the same thing could, in different instances, express completely different mental states – I might run towards something because I’m scared of it, and want to surprise it; or I might run towards it because I’m not scared of it. The stoic might be in pain, but not show it – thereby expressing the disposition not to show pain, a disposition to suppress the behavioural manifestation of another disposition. How can we tell which dispositions someone has and is expressing, without referring to other mental states? But if these states are just dispositions to behave in certain ways as well, we’ll face the same problem over and over, and never get any definite correlation between a mental state and behaviour.
The behaviourist’s response here is to say that we are taking ‘behaviour’ much too narrowly. You can tell – from facial expression, for instance – whether someone is running scared or not scared; or from what happens next; or from just how they go about it. Behaviour is expressive, not just ‘mere behaviour’. But this reply is not legitimate: expressive descriptions of behaviour use the very mental terms (angrily, in fear, etc.) that behaviourism says should be replaced. So either we can’t correlate a mental state to behaviour or we can’t describe the behaviour in a way that replaces all references to mental states. In either case, we can’t replace talk of mental states by talk of behaviour.
Inference to the ‘best explanation’
Rather than inferring from one’s own case to other minds (a form of enumerative induction), we may employ a standard form of theoretical scientific reasoning, inference to the best explanation. We seek to understand other people’s behaviour, and we may propose different hypotheses. The hypothesis that other people have minds, and that their mental states cause them to behave as they do, is – it is claimed – the best hypothesis we have, and it does not rely on the singular case of our own experience.
Functionalism can be understood in these terms: mental states are inner states of another organism that are characterized by their causal relations to stimuli, behaviour, and each other. This is a theory of the mind which we may use to then hypothesize that other people have mental states. This has the strength of behaviourism’s connection between mental states and behaviour without its weakness of attempting to reduce one to the other.
But functionalism faces a famous objection: while it seems to deal well with those aspects of the mind that relate to behaviour, what about consciousness? Couldn’t I be in exactly the same functional state as you, yet what we call ‘red’ appears to me in just the same way that what we call ‘green’ appears to you? These qualities of consciousness – qualia – seem to be independent of functional causal role. Or again, suppose that the population of China was organized to duplicate exactly the interactions between the neurons in your brain; they exactly duplicate the functional causal role of every state in your brain – but does a new mind, with consciousness just like yours, come into existence? If we think not, then couldn’t other people be like the population of China – fully functioning and ‘mental’ in that sense, but without anything ‘going on inside’? And without consciousness, does the other person have a mind at all? Furthermore, we must ask what the basis for inferring qualia in others is. As remarked in the introduction, the phenomenological aspects of the mind are something we only know (it seems) through our own experience. Without that experience, we could not postulate them as an explanation for anyone else’s behaviour. Any inference involving them falls into the argument from analogy, rather than inference to the best explanation.
Rejecting the problem
An alternative solution is to follow behaviourism’s lead in rejecting the idea that we need to infer, in any way, that other people have minds. Wittgenstein argued that we simply take up the attitude that other people have minds, and treat them accordingly. We react to people as minded, just as we react to them as alive, and this response on our part is deeper, more fundamental than any beliefs about them. Our ‘belief’ that other people have minds, then, is not the product of any process of thought (including inference); it is part of human nature, which guides how we think.
Philosophers have also developed a number of arguments to the effect that to have a mind oneself presupposes interaction with other minds. Donald Davidson argues that in order to have thoughts and beliefs, we need to be able to identify what those thoughts and beliefs are about (e.g. trees, dogs, etc.). But this identification of things in the external world depends on ‘triangulation’ with another point of view; we understand what our beliefs are about by seeing others’ responses to the same objects in the world. Without this cross-checking, no conception of the independent world would be possible, and so neither would the thoughts that we, in fact, have.
This line of thought can also be given a developmental perspective: a mind does not develop, in an infant, in the absence of interaction with other minds. The psychological evidence fairly unanimously indicates that a sense of self (of oneself as a self) develops as part of the same process as the sense of others as selves; first- and third-personal use of psychological concepts emerge together. If there can be no knowledge of oneself as a mind without presupposing that there are other minds, the problem does not arise. Sartre argues, in a more philosophical vein, that we cannot develop a sense of ourselves as persons without being seen as persons by other people.
Wittgenstein also argued that we can have direct awareness of other people’s mental states, most particularly, emotions. Of course, we cannot literally see anger itself, but we can literally see anger in someone’s facial expression, for example. Again, this is not a process of thought or inference; the ‘interpretation’ is part of our perception of human faces itself. And other philosophers have pointed to other apparently direct experience of other minds. Again, Sartre points to the experience of being looked at, held in the gaze of another mind, e.g. in shame or pride.