Biological naturalism, the theory of mind developed by John Searle, has at its heart a theory about consciousness. An investigation into consciousness should perhaps start with a distinction between ‘creature consciousness’ and ‘state consciousness’. Some types of organism, like human beings, are conscious; some, like plants, are not. Even if an organism is conscious ‘in general’, there are times when it is not conscious, e.g. when asleep. Conscious organisms have mental states. In fact, Searle argues, mental states just are conscious states. The mind is consciousness. A creature is conscious – or has a mind – if it has the property of consciousness, and a conscious mental state is simply a matter of the subject being conscious of something. Consciousness is a ‘field’; conscious states are the ‘flux’, modifications in the field.
Some philosophers have a different explanation of consciousness. A creature is conscious if it has conscious mental states, they argue. We shouldn’t start with ‘consciousness’ per se, as Searle does, but try to explain what makes a mental state conscious. And, some functionalists have argued, a mental state is conscious just in virtue of its relations to other mental states and behaviour. This is a functional analysis. Consciousness is completely reducible to the ways in which mental states interact. But functionalism famously faces the objection that what pain feels like, or what red looks like, can’t be reduced to some relationship between mental states. A computer that replicated the relationships between mental states wouldn’t thereby be conscious – feeling pain, for instance. Consciousness is not reducible to a function.
Searle agrees with this objection. He argues that consciousness is irreducibly ‘first-personal’; its reality, its phenomena, exist from the first-personal perspective – that is, it is ‘subjective’, only visible from ‘inside’. To be a mental state is to be someone’s mental state, and the thoughts and feelings – as thoughts and feelings – are available to the subject only. A functional analysis is ‘third-personal’ – it describes conscious states from the ‘outside’ (how they interact), not in terms of what they are like from the subject’s point of view.
Is consciousness something physical, a property of the brain that can be accounted for by neuroscience? Or is it a completely different, non-physical, sort of property? What is it for a creature to have consciousness? Searle argues that consciousness is a biological phenomenon, a property of the brain, but not a purely functional property. Instead, it is a ‘systemic’ property.
Systemic properties are very common in science, and some can seem quite unexpected just looking at the parts of the ‘system’. For example, water is liquid, even though none of its parts, its molecules, are liquid. Liquidity is a systemic property. But we can explain why water is liquid in terms of its parts and their causal interactions. Another example is transparency – molecules aren’t transparent; what makes glass transparent is the way the molecules are organized. In each of these cases, we can explain the ‘new’ systemic property in terms of micro-level interactions.
Similarly, Searle argues, consciousness is a systemic property of the brain. It is the brain as a whole that is conscious, even though its individual parts – neurones – aren’t. Consciousness is caused by micro-level brain processes, and if the brain and its causal powers and processes were reproduced, so would consciousness be. So, Searle says, there is nothing particularly mysterious about consciousness – it is part of the natural world, in particular, biology.
But there seems to be a very important difference between systemic properties like liquidity and transparency, and consciousness. We can give complete scientific explanations for why liquids are liquid, why glass is transparent. In other words, we can ‘reduce’ these properties to what explains them – the behaviour of molecules. But can we do that with consciousness? Searle himself seems to provide a very good reason why we can’t: consciousness is irreducibly first-personal, but the activities of neurones are third-personal. Neuroscientists can see neurones and measure their activity in a way in which they cannot see or experience someone’s thoughts. And so some philosophers argue that if the phenomena of consciousness are irreducibly first-personal, then properties of consciousness are not physical. Consciousness is not like other biological properties, because it cannot be explained in third-personal terms.
Searle rejects this argument. Consciousness is a biological property, even though it is first-personal. We could, if we wanted to, insist on redefining the facts of consciousness in physical terms, just as we have redefined liquidity in molecular terms, or we might redefine colours in terms of wavelengths of light. We could, but we don’t, because then we leave out what we are really interested in, viz. the first-personal conscious experiences themselves. However, this doesn’t show that consciousness is something non-physical. We have explained how consciousness can be a higher-order property of a working brain. The irreducibility of consciousness is purely pragmatic, a matter of what our interest in consciousness is. It doesn’t have any metaphysical implications, such as consciousness not being biological. The fact that we can explain conscious mental states in terms of a working brain shows that we are not talking about two different things when we talk about brain processes and consciousness.
The objection can be pressed, though. With liquidity, our explanation of why something is liquid also shows why it must be liquid (given the properties of the molecules and the laws of nature). But we don’t have any kind of explanation of why, given the properties of the brain and the laws of nature, we must end up with consciousness. Searle accepts this, but makes two points in reply. First, it is possible that as neuroscience develops, we will get such an explanation. A philosopher might find this unsatisfying, partly because it leaves the interesting questions about consciousness to neurobiology – how we are conscious, how consciousness is a feature of the brain – and partly because, as Thomas Nagel argues, it is difficult to imagine what such an explanation could look like. And this is because of the first-personal nature of consciousness. This first response seems to side-step the issue of how an explanation in third-personal terms can ever be an adequate explanation of something first-personal. Searle’s second response is that the fact that we can’t say that the brain must give rise to consciousness isn’t a problem. Science doesn’t always give explanations in terms of necessity. For example, Einstein showed that E = mc²; but do we know that E must equal mc²? Could it have been something else?
Searle denies that his view is a form of property dualism (the theory that mental properties are a radically different kind of property from physical properties). But his emphasis on the first-personal nature of consciousness has led many philosophers to argue that it is property dualism. The claim that consciousness is a biological property like any other is difficult to defend if consciousness, uniquely, turns out to be a biological property like no other.