
Why I think neuroscientists should be wary of using the term “consciousness”

I recently received an eloquent email from a person regarding my rather sceptical stance on consciousness. This person explained that they have a background in neuroscience, and that they can assure me that neuroscience has a perfectly sound definition and justification for using the term consciousness in the way it does. I’ve run into a number of neuroscientists in the online forums I spend time in, so I thought I might post my reply here to explain why I think neuroscientists should be very sceptical about using the term consciousness:

Before I get into the details of why I’m often very sceptical about usage of “consciousness”, I want to propose that a good understanding of scientific issues tends to depend on at least two main factors – a body of relevant empirical evidence, and a sound conceptual framework. Keeping the two intellectually separate sounds simple, but of course any successful scientist knows that it’s stunningly difficult. This is a problem, because when the two become blurred, it becomes difficult to differentiate criticism of the conceptual framework from an attack on the body of empirical evidence.

So a problem arises when an objection that says “I think you might have some philosophical baggage in that conceptual framework” starts to sound a lot like “you don’t have evidence for your claims”. I think this topic of consciousness is a lot like that. Neuroscientists have a large body of empirical evidence to support their claims and I completely understand them defending it with vigour. But I also think philosophy can be useful in identifying flaws in scientific conceptual frameworks. Challenging the fundamental way we think about our own field of expertise is one of the most painful parts of science, but it can also yield some really useful results.

Being sceptical about consciousness in neuroscience doesn’t challenge the view that an artificial neural network could, in theory, reproduce all the behaviours displayed by a human. It’s not about suggesting that consciousness is non-physical, or suggesting that neuroscientists/AI-researchers believe that it is. It’s about examining the term being used, to make sure it’s neutral and baggage-free. My suggestion is that “consciousness” isn’t baggage-free – depending on how it’s used, it’s either misleading, or carrying subtle (flawed) philosophical assumptions. Let me explain why we might think this.

Consciousness has multiple definitions, meanings or senses in which it’s used, some of which are clear and others notoriously less so. When somebody says “consciousness” they conceivably could just mean “an animal that has an active brain state associated with use of its sensory organs and motor control”. So, when you’re awake, we say you’re conscious. For clarity, let’s say this “awakeness” is just a description of someone’s brain state when they’re awake and not sleeping or knocked out. Used this way, the sense seems perfectly reasonable and legitimate. A human or animal being awake and able to record memories, for example of things they see or hear, is easy to understand as a simple physical process, regardless of philosophy. The problem arises when we confuse or mix this meaning of “consciousness” with other meanings. To avoid equivocation, we might then use “awakeness” instead, for the same reason we wouldn’t refer to a brain as an “apple” or a “table” – those words have other meanings and baggage we don’t want to evoke.

The big problem occurs when we use another very important meaning of “consciousness” – the one to do with “self-awareness”.

In order to demonstrate why I think there’s more baggage here than meets the eye, let’s propose that any legitimate, justified concept should be able to pass the following test: if we were to remove all of our knowledge of the concept, some combination of empirical evidence and reason should force us to adopt it again in order to understand the subject matter it relates to. Perhaps we would derive it under a different name, but the same underlying concept should appear. Additionally, we would use only the concepts we are forced to use (Ockham’s Razor), and we wouldn’t use concepts that by their nature can’t be disproven (falsification).

My assertion is that if somebody believes (as most neuroscientists and AI-researchers do) that the world is purely physical (physicalism/materialism), then consciousness *does not pass this test*.

Ever wonder what this picture would look like if we didn’t have a consciousness?

Suppose I had never heard of consciousness. One day a neuroscientist befriends me and shows me an experiment in which a magnetic field is applied to a subject’s brain, and the subject then reports how what they see, hear or think changes. The change is then correlated with the application of the magnetic field to establish plausible causality. Looking at this experiment, I can clearly see a brain (perhaps on an fMRI). I see evidence of a magnetic field. I see a person talking about what they see and hear. I can describe a relationship between each. To describe what is going on, this is all I need. Nothing requires “self-awareness” or “consciousness” for me to explain it. We could perform surgery, examine patients with Parkinson’s, we could even look at electrical signals moving up and down neural pathways, and we’d find the same. At no stage do we need to talk about self-awareness, or consciousness in the self-awareness sense. If we did, we’d probably be introducing those terms for other reasons, using other arguments. To put it differently, if you accept Ockham’s Razor, these tests aren’t evidence of anything like the lay or philosophical usage of the term “consciousness”.

We could try to get evidence by moving into more philosophical territory. We could say that the subject can think about themselves – isn’t that something like “self-awareness”? Suppose we consider a camera (let’s say this one has a simple neural network for a control system) taking a picture of itself in a mirror. If awareness (as physicalism asserts) is just a recorded weighting of pathways in a neural network, isn’t it self-aware? Intuitively no, but why? Perhaps the requirement is more specific, like the neural net being aware of itself (not its body). If I take a simplified snapshot of my computer’s hard drive and save it in some free space on the same hard drive, is my hard disk self-aware in the sense we like to talk about humans being self-aware? Again, intuitively no, but why? To solve this we have to start getting very philosophical about “awareness”, and I think that’s good reason to become very cautious about the word. If you look up “awareness” in the dictionary, you’ll see its definition includes “consciousness”, and so we start getting into some pretty weird circular (fallacious) logic. This should be a massive red flag.
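To make the camera/hard-drive intuition concrete, here is a deliberately trivial Python sketch (entirely my own toy example, not anything from neuroscience): a system that stores a serialised copy of part of its own state inside that same state. The self-reference is perfectly real, and perfectly ordinary – and describing it never forces us to reach for a concept like consciousness.

# Toy illustration (a hypothetical example): a system that records a
# snapshot of its own internal state inside that same state. Everything
# here is an ordinary physical/computational process, yet nobody would
# call the result "self-aware" in the philosophically loaded sense.

import json

class ToySystem:
    """A trivially simple 'neural net': just a dict of named weights."""

    def __init__(self):
        self.state = {"weights": {"w1": 0.5, "w2": -0.3}, "snapshots": []}

    def snapshot_self(self):
        # Serialise the current weights and store the copy back into
        # the very state that was serialised -- plain self-reference.
        copy = json.dumps(self.state["weights"])
        self.state["snapshots"].append(copy)

system = ToySystem()
system.snapshot_self()
print(system.state)
# The system now "contains a representation of itself", but nothing
# about describing this requires a concept like consciousness.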

Consciousness – an idea with a long philosophical history

To really start putting together strong arguments for consciousness, we have to use some actual philosophy-proper. We’re going to have to start talking about p-zombies, Mary’s room and qualia. Now philosophers have been arguing back and forth about these things (or something like them) for centuries, but what’s important to note here is that these concepts are all dualist. Dualism states that the mind is not reducible to the merely physical brain. This contradicts the common view in neuroscience/AI, that the mind is physical, that there is only a physical substance/world, and that what we call “mental” is just a regular part of the physical world.

You may wonder if I’m arguing for dualism. I’m not – I find the dualism/monism debate unresolvable, though I lean a little towards neutral monism. The main problem is that whichever way you lean, you can’t be a full dualist and a monist at once. When neuroscientists use the term consciousness, unless they’re dualists, they’re using a term that fundamentally disagrees with their core assumptions.

Now I think it would on the surface be quite reasonable to say – “no, no, no, the neuroscientist really is just using consciousness in a completely un-philosophical way. It’s just a technical term used to point to certain types of observable physical stuff going on in the human brain.” But for centuries “consciousness” has been a term that is absolutely central to the field of philosophy. Isn’t it worth asking why such a fundamentally philosophical term is being used for something that is “definitely not in any way philosophical”? Even if some people are using it in some non-philosophical way, the name means almost everyone else will read philosophical meaning into it, sometimes without even realising it. Uploading is an example of this – certain parts of a biological organism, parts that change every day and cease to exist when it sleeps, are deemed worthy of (abstract?) replication, while the organism itself is discarded.

I don’t think we should pursue the survival and moral elevation of a concept that ultimately might not even correspond to a real thing, much less a morally important thing, at the expense of what is real and what morally matters – people; regular everyday humans. Humans may certainly use advanced technology to give themselves new capabilities (e.g. maybe someday links to external memory capacity), but I think that’s very different and far more positive. If we destroy humans to protect a contentious, abstract and possibly imaginary concept, then I don’t think that’s very advanced – it’s more like a primitive tribe sacrificing themselves for the sake of a primitive god (Moloch?). That’s something I believe most neuroscientists would oppose.
