Creature Consciousness and State Consciousness
An investigation into consciousness should perhaps start with a distinction between ‘creature consciousness’ and ‘state consciousness’. Some types of organism, like human beings, are conscious; some, like plants, are not. If an organism is conscious ‘in general’, there are still times when it is not conscious, e.g. when asleep. Conscious organisms have mental states. But very few philosophers think that all mental states are conscious; i.e. most believe that mental states can be unconscious. There are beliefs, desires, and emotions we have that we are not conscious of having. So, we may conclude, there is something – some condition or some property – that makes a mental state conscious or not. (Some philosophers argue, controversially, that there are even sensations that are not conscious. For example, we naturally say ‘my headache lasted all day’, even if I didn’t actually feel it all day. So it persists between my moments of being aware of it. The properties involved in a headache can occur even when the mental state is not conscious.)
John Searle is one of the philosophers who thinks there are no unconscious mental states. The mind is consciousness. A creature is conscious – or has a mind – if it has the property of consciousness, and a conscious mental state is simply a matter of the subject being conscious of something. Consciousness is a ‘field’, conscious states are the ‘flux’, modifications in the field. This analysis puts creature consciousness first, and uses it to explain state consciousness.
What it is for a creature to be conscious, and what it is for a mental state to be conscious, are two different things. Of course, the two will be related; but many philosophers disagree with Searle about how they are related. They want to privilege state consciousness: a mental state is conscious if it has a certain sort of property, and a creature is conscious if it has (or can have) conscious mental states. We shouldn't start with 'consciousness' per se, as Searle does, but try to explain what makes a mental state conscious. And some functionalists have argued that a mental state is conscious just as a matter of its relations to other mental states and behaviour. This is a functional analysis. Consciousness is completely reducible to the ways in which mental states interact. But functionalism famously faces the objection that what pain feels like, what red looks like, can't be reduced to some relationship between mental states. A computer that replicated the relationships between mental states wouldn't thereby be conscious – it wouldn't feel pain, for instance. Consciousness is not reducible to a function.
Searle agrees with this objection. He argues that consciousness is irreducibly 'first-personal'; its reality, its phenomena, exist from the first-personal perspective – that is, it is 'subjective', only visible from 'inside'. To be a mental state is to be someone's mental state, and the thoughts and feelings – as thoughts and feelings – are available to the subject alone. A functional analysis is 'third-personal' – it describes conscious states from the 'outside' (how they interact), not in terms of what they are like from the subject's point of view.
State Consciousness, Perception and Introspection
We talk of being ‘conscious of’ something, e.g. I am conscious of my computer; but we also talk about mental states being ‘conscious’ (no ‘of’). We’ve just seen that Searle wants to explain one in terms of the other: a mental state is conscious if the subject is conscious of something. Other philosophers say that a mental state is conscious if the subject is conscious of the state. One reason for preferring this claim is that we can sometimes have perceptual states (seeing something in the world) which we don’t seem to be conscious of, e.g. when driving while talking with a friend, we see the road but we aren’t conscious of seeing the road. Searle would argue that seeing the road is properly considered a conscious state – we are conscious of the road, even if we are not conscious of seeing (being conscious of) the road.
When we are conscious of our mental states, perhaps this is more like a form of self-consciousness. Introspection involves consciousness of one’s consciousness. Human beings, perhaps uniquely, can be not only conscious of things in the world, but can also be conscious of their consciousness. We not only see, think and feel things, we are aware that we do so. This ability is the ability to introspect; it involves self-consciousness. It is very difficult for us to imagine what it is like to be conscious without being self-conscious, but it is a distinction we need to bear in mind.
Self-consciousness can sometimes seem like an ‘inner perception’, but maybe this is misleading. The idea of consciousness has often been linked to perception. Not only are perceptual states – seeing, hearing, etc. – typically conscious, but consciousness of one’s mental states is also sometimes thought of as a kind of ‘inner perception’, called introspection. Being conscious of a mental state could be thought to be like perceiving it. However, there is an important disanalogy here: every kind of perception we have – sight, hearing, smell and so on – detects particular kinds of qualities – colours, sounds, smells… Consciousness itself has no such specific qualities, yet can encompass them all. In this way, consciousness is more like thought. There is also the problem of saying that mental states are ‘things’ that can be perceived.
Consciousness and Reduction
This argument, so far, has suggested that a mental state is conscious if the subject is conscious of it. But in terms of understanding what consciousness is, this doesn't seem to get us very far. For what is it for a subject to have consciousness at all?
David Rosenthal suggests that a mental state is conscious if you have an (unconscious) ‘higher-order thought’ about that mental state, roughly to the effect that ‘I am having state x’. This, however, is a type of functional analysis, because it says a state is conscious just in case it has some relation to another mental state, viz. it is being thought about. This theory therefore faces the same objection as the functional analysis above.
What pain feels like, what red looks like: these properties can't be reduced to some relationship between mental states. Rather, conscious properties are intrinsic. An intrinsic property is one that its possessor (in this case, the experience) has in and of itself, not in virtue of its relations to anything else. Think of the smell of coffee. It is the smell 'of coffee' because of its relation to the substance, coffee. That it is 'of coffee' is not an intrinsic property. But how that smell smells is an intrinsic property (defenders of qualia argue), because it would be that smell even if it wasn't caused by coffee (a rose by any other name would smell as sweet).
In this argument, the kind of conscious properties thought to be intrinsic are perceptual – smells, colours, pain etc. But of course, I can also be conscious of my beliefs, yet these don't have any perceptual qualities. Philosophers who object to the functional analysis of consciousness, then, often accept that account for mental states that don't have perceptual qualities; their objection is that the functional analysis is incomplete – it doesn't work for mental states that do have perceptual qualities. In these cases, the mental state must itself have some intrinsic conscious property.
So is consciousness something physical, a property of the brain that can be accounted for by neuroscience? Or is it a completely different, non-physical, sort of property? We can’t survey the different arguments here, and I end with one theory, John Searle’s biological naturalism. While Searle thinks consciousness is unique in being a first-person phenomenon, he also argues that consciousness is a biological phenomenon like any other. In science, we often find a ‘micro-level’ explanation for some feature at a ‘higher’ or ‘system’ level. For example, water is liquid, even though none of its parts are liquid. This isn’t a problem: we can explain why water is liquid in terms of its parts and their causal interactions. Similarly, consciousness is caused by brain processes, and if the brain and its causal powers and processes were reproduced, so would consciousness be.
But there seems to be a very important difference between systemic properties like liquidity and transparency, and consciousness. We can give complete scientific explanations for why liquids are liquid, why glass is transparent. In other words, we can ‘reduce’ these properties to what explains them – the behaviour of molecules. But can we do that with consciousness? Searle himself seems to provide a very good reason why we can’t: consciousness is irreducibly first-personal, but the activities of neurones are third-personal. Neuroscientists can see neurones and measure their activity in a way in which they cannot see or experience someone’s thoughts. And so some philosophers argue that if the phenomena of consciousness are irreducibly first-personal, then properties of consciousness are not physical. Consciousness is not like other biological properties, because it cannot be explained in third-personal terms.
Searle rejects this argument. Consciousness is a biological property, even though it is first-personal. We could, if we wanted to, insist on redefining the facts of consciousness in physical terms, just as we have redefined liquidity in molecular terms, or we might redefine colours in terms of wavelengths of light. We could, but we don’t, because then we leave out what we are really interested in, viz. the first-personal conscious experiences themselves. However, this doesn’t show that consciousness is something non-physical. We have explained how consciousness can be a higher-order property of a working brain. The irreducibility of consciousness is purely pragmatic, a matter of what our interest in consciousness is. It doesn’t have any metaphysical implications, such as consciousness not being biological. The fact that we can explain conscious mental states in terms of a working brain shows that we are not talking about two different things when we talk about brain processes and consciousness.
The objection can be pressed, though. With liquidity, our explanation of why something is liquid also shows why it must be liquid (given the properties of the molecules and the laws of nature). But we don’t have any kind of explanation of why, given the properties of the brain and the laws of nature, we must end up with consciousness. Searle accepts this, but makes two points in reply. First, it is possible that as neuroscience develops, we will get such an explanation. A philosopher might find this unsatisfying, partly because it leaves the interesting questions about consciousness to neurobiology – how we are conscious, how consciousness is a feature of the brain – and partly because, as Thomas Nagel argues, it is difficult to imagine what such an explanation could look like. And this is because of the first-personal nature of consciousness. This first response seems to side-step the issue of how an explanation in third-personal terms can ever be an adequate explanation of something first-personal. Searle’s second response is that the fact that we can’t say that the brain must give rise to consciousness isn’t a problem. Science doesn’t always give explanations in terms of necessity. For example, Einstein showed that E = mc²; but do we know that E must be mc²? Could it have been something else?
Searle denies that his view is a form of property dualism (the theory that mental properties are a radically different kind of property from physical properties). But his emphasis on the first-personal nature of consciousness has led many philosophers to argue that it is property dualism. To claim that consciousness is a biological property like any other is difficult to defend if consciousness, uniquely, turns out to be a biological property like no other.