Why I think neuroscientists should be wary of using the term “consciousness”

I recently received an eloquent email from a person regarding my rather sceptical stance on consciousness. This person explained that they have a background in neuroscience, and that they can assure me that neuroscience has a perfectly sound definition and justification for using the term consciousness in the way it does. I’ve run into a number of neuroscientists in the online forums I spend time in, so I thought I might post my reply here to explain why I think neuroscientists should be very sceptical about using the term consciousness:

Before I get into the details of why I’m often very sceptical about usage of “consciousness”, I want to propose that a good understanding of a scientific issue tends to depend on at least two main factors – a body of relevant empirical evidence, and a sound conceptual framework. Keeping the two intellectually separate sounds simple, but of course any successful scientist knows that it’s stunningly difficult. This matters, because when the two become blurred, it becomes difficult to distinguish criticism of the conceptual framework from an attack on the body of empirical evidence.

So a problem arises when an objection that says “I think you might have some philosophical baggage in that conceptual framework” starts to sound a lot like “you don’t have evidence for your claims”. I think this topic of consciousness is a lot like that. Neuroscientists have a large body of empirical evidence to support their claims and I completely understand them defending it with vigour. But I also think philosophy can be useful in identifying flaws in scientific conceptual frameworks. Challenging the fundamental way we think about our own field of expertise is one of the most painful parts of science, but it can also yield some really useful results.

Being sceptical about consciousness in neuroscience doesn’t challenge the view that an artificial neural network could, in theory, reproduce all the behaviours displayed by a human. It’s not about suggesting that consciousness is non-physical, or that neuroscientists/AI-researchers believe that it is. It’s about examining the term being used, to make sure it’s neutral and baggage-free. My suggestion is that “consciousness” isn’t baggage-free – depending on how it’s used, it’s either misleading, or carrying subtle (flawed) philosophical assumptions. Let me explain why we might think this.

Consciousness has multiple definitions, meanings or senses in which it’s used, some of which are clear and others notoriously less so. When somebody says “consciousness” they could conceivably just mean “an animal that has an active brain state associated with use of its sensory organs and motor control”. So, when you’re awake, we say you’re conscious. For clarity, let’s say this “awakeness” is just a description of someone’s brain state when they’re awake and not sleeping or knocked out. In this sense, the term seems perfectly reasonable and legitimate. A human or animal being awake and able to record memories, for example of things they see or hear, is easy to understand as a simple physical process, regardless of philosophy. The problem arises when we confuse or mix this meaning of “consciousness” with other meanings. To avoid equivocation, we might then use “awakeness” instead, for the same reason we wouldn’t refer to a brain as an “apple” or a “table” – those words have other meanings and baggage we don’t want to evoke.

The big problem occurs when we use another very important meaning of “consciousness” – the one to do with “self-awareness”.

In order to demonstrate why I think there’s more baggage here than meets the eye, let’s propose that any legitimate, justified concept should be able to pass the following test: if we were to remove all of our knowledge of the concept, some combination of empirical evidence and reason should force us to adopt it again in order to understand the subject matter it relates to. Perhaps we would derive it under a different name, but the same underlying concept should appear. Additionally, we would use only the concepts we are forced to use (Ockham’s Razor), and we wouldn’t use concepts that by their nature can’t be disproven (falsification).

My assertion is that if somebody believes (as most neuroscientists and AI-researchers do) that the world is purely physical (physicalism/materialism), then consciousness *does not pass this test*.

Ever wonder what this picture would look like if we didn’t have a consciousness?

Suppose I had never heard of consciousness. One day a neuroscientist befriends me and shows me an experiment in which a magnetic field is applied to a subject’s brain, and the subject then reports how what they see, hear or think changes. The change is then correlated with the application of the magnetic field to establish plausible causality. Looking at this experiment, I can clearly see a brain (perhaps on an fMRI). I see evidence of a magnetic field. I see a person talking about what they see and hear. I can describe a relationship between each. To describe what is going on, this is all I need. Nothing requires “self-awareness” or “consciousness” for me to explain. We could perform surgery, examine patients with Parkinson’s, we could even look at electrical signals moving up and down neural pathways, and we’d find the same. At no stage do we need to talk about self-awareness, or about consciousness in the self-awareness sense. If we did, we’d probably be introducing terms for other reasons, using other arguments. To put it differently, if you accept Ockham’s Razor, these tests aren’t evidence of anything like the lay or philosophical usage of the term “consciousness”.

We could try to get evidence by moving into more philosophical territory. We could say that the subject can think about themselves, and isn’t that something like “self-awareness”? Suppose a camera (let’s say one with a simple neural network for a control system) takes a picture of itself in a mirror. If awareness (as physicalism asserts) is just a recorded weighting of pathways in a neural network, isn’t it self-aware? Intuitively no, but why? Perhaps it’s more specific, like the neural net being aware of itself (not its body). If I take a simplified snapshot of my computer hard drive and save it in some free space on the same hard drive, is my hard disk self-aware in the sense we like to talk about humans being self-aware? Again, intuitively no, but why? To solve this we have to start getting very philosophical about “awareness”, and I think that’s good reason to become very cautious about the word. If you look up “awareness” in the dictionary, you’ll see its definition includes “consciousness”, and so we start getting into some pretty weird circular (fallacious) logic. This should be a massive red flag.
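The hard-drive thought experiment is easy to make concrete. Here’s a toy sketch (all names and structure are hypothetical, purely for illustration) of a system that stores a simplified description of its own state inside its own storage – “self-reference”, mechanically achieved:

```python
class SelfRecorder:
    """A trivially 'self-referential' system: it can save a
    snapshot of its own contents into its own free space."""

    def __init__(self):
        # Stands in for a neural net's recorded pathway weightings.
        self.storage = {"weights": [0.1, 0.5, 0.4]}

    def snapshot_self(self):
        # Record a (simplified) description of our own contents
        # back into our own storage.
        self.storage["snapshot"] = repr(self.storage["weights"])


recorder = SelfRecorder()
recorder.snapshot_self()
print(recorder.storage["snapshot"])  # the system now "contains itself"
```

The point of the sketch is that self-reference of this mechanical kind is trivial to achieve, so if “self-awareness” is supposed to mean something more than this, that extra content has to be argued for philosophically rather than read off from the physical description.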

Consciousness – an idea with a long philosophical history

To really start putting together strong arguments for consciousness, we have to use some actual philosophy proper. We’re going to have to start talking about p-zombies and Mary’s Room and qualia. Now philosophers have been arguing back and forth about these things (or something like them) for centuries, but what’s important to note here is that they are all dualist. Dualism states that the mind is not reducible to the merely physical brain. This contradicts the common view in neuroscience/AI, that the mind is physical, that there is only a physical substance/world, and that what we call “mental” is just a regular part of the physical world.

You may wonder if I’m arguing for dualism. I’m not – I find the dualism/monism debate unresolvable, though I lean a little towards neutral monism. The main problem is that whichever way you lean, you can’t be a full dualist and a monist at once. When neuroscientists use the term consciousness, unless they’re dualists, they’re using a term that fundamentally disagrees with their core assumptions.

Now I think it would on the surface be quite reasonable to say – “no, no, no, the neuroscientist really is just using consciousness in a completely un-philosophical way. It’s just a technical term used to point to certain types of observable physical stuff going on in the human brain.” But for centuries “consciousness” has been a term that is absolutely central to the field of philosophy. Isn’t it worth asking why such a fundamentally philosophical term is being used for something that is “definitely not in any way philosophical”? Even if some people are using it in some non-philosophical way, the name means almost everyone else will read philosophical meaning into it, sometimes without even realising it. Uploading is an example of this – certain parts of a biological organism, parts that change every day and cease to exist when it sleeps, are deemed worthy of (abstract?) replication, while the organism itself is discarded.

I don’t think we should pursue the survival and moral elevation of a concept that ultimately might not even correspond to a real thing, much less a morally important thing, at the expense of what is real and what morally matters – people; regular everyday humans. Humans may certainly use advanced technology to give themselves new capabilities (e.g. maybe someday links to external memory capacity), but I think that’s very different and far more positive. If we destroy humans to protect a contentious, abstract and possibly imaginary concept, then I don’t think that’s very advanced – it’s more like a primitive tribe sacrificing themselves for the sake of a primitive god (Moloch?). That’s something I believe most neuroscientists would oppose.


12 comments

    1. Thanks for the comment. Perhaps the way I worded it made “forced” sound too binary. But certain evidence is less open to interpretation than other evidence, and I’d suggest that in complex areas of inquiry it’s vital to strive for the former. Consciousness is way up the other extreme end of the scale, so we should be wary, methinks.

    1. Thanks for the comment. Interesting post too. Your blog is really dark, a bit too dark for me but it looks like you’re a talented artist – I wish I could sketch better. I don’t post that often, but I hope you continue to enjoy my blog too!

      1. Thank you! I will, I like the way you think. It’s true, my blog is a bit dark :) Today’s post is the first one where no one dies, really. It might lighten up some day, though.
        About consciousness, what do you think about the reality/consciousness connection? Could it be the same? Could reality even exist without the existence of any kind of consciousness?

      2. Good question! In a dualist universe, I think it might be possible to reason that reality and consciousness are intrinsically linked, but then we don’t have any real access to/evidence of anybody else’s consciousness. If the universe is dualist, we might then assume reality might stop existing when we do, as we don’t have evidence of any other conscious beings. In a monist universe, I think it’s different. Essentially I’m suggesting that the word “conscious” doesn’t make much sense in a monist universe. In a way it’s better to say “reality”, as you suggest, and separately say “brain” if you’re studying humans, and not to see a specific link between the two. I think most of what we want to talk about with the word “consciousness” can be said in other, less loaded ways, like “human perception” or “human rights”. So you don’t lose what you mean when you normally use “conscious”, but you make more logical sense and you avoid assigning moral value to some concept that doesn’t exist at the expense of real people.

  1. But back to consciousness. You’re right, consciousness is a rather ambiguous word; I think perception is better for the use I am talking about – the part of us that perceives something – and it keeps being a mystery, even for monists and even for neurologists, as far as I know. We are probably not the only lifeforms with perception though; a lot of things indicate (I would even say prove, but that’s a bit controversial perhaps) that a lot of other animals have the same capacity on different levels.

    So what is it, and can it be explained by neurology? Buddha said the “Self” does not exist, and the famous quantum physicist Erwin Schrödinger wrote in his book “What is Life?”:

    “The only possible alternative is simply to keep to immediate experience that consciousness is a singular of which the plural is unknown; that there is only one thing and that what seems to be a plurality is merely a series of different aspects of this one thing, produced by a deception (the Indian MAJA); the same illusion is produced in a gallery of mirrors, and in the same way Gaurisankar and Mt Everest turned out to be the same peak seen from different valleys.”

    I can’t find the thread of argumentation online, but to me he seems to be right that the idea of multiple “consciousnesses” is likely to be just an illusion. So if there is only one, there might as well be none. If everything is green, green ceases to exist, right? Any thoughts?

    1. Interesting thoughts there! I haven’t read that book but it sounds like it’s got some very interesting ideas. Several eastern/Indian religions seem to have explored topics around perception/experience/consciousness etc quite a bit. I’m not especially familiar with them, but I can see there might be some cool insights to draw on there, just as Erwin Schrödinger seems to have done.

      I definitely agree our “experience” doesn’t appear to be categorically different from animals. Perhaps our unique skill is an attempt to articulate and discuss that perception.

      I don’t know if this is related to your comment, and I’m not speaking especially philosophically or rigorously here, but I wonder, have you ever taken a video or web cam and got it to film a TV screen on which its output was showing (so it’s filming what it sees and seeing what it films)? Basically anything around the edges gets amplified into this weird infinite tunnel effect, and the colours do strange things. I wonder if “consciousness” is a little similar. When a biological human tries to focus its perception on perception itself, maybe there’s a little something in common between our “feeling” of consciousness and that weird infinite feedback effect. Perhaps the things around the edge of our perception, for example our emotions, might influence how we perceive “consciousness”, like the edge of the TV picture plastering the walls of the video feedback “tunnel”. It would also explain why beliefs around the topic vary so wildly and are often so bizarre. Just some idle musing that came to mind (and some amusing visuals), not a philosophical position or anything.
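      (Continuing the idle musing: the feedback effect is easy to simulate in miniature. Below is a toy sketch – function name and numbers are made up for illustration – where a one-dimensional “frame” is repeatedly “re-filmed” by a step that blends each point with its neighbours and feeds the result back in.)

```python
def feedback_step(signal, gain=0.9):
    # Each value becomes a blend of itself and its neighbours,
    # like a camera re-capturing its own slightly blurred output.
    # The frame wraps around, so indices are taken modulo its length.
    n = len(signal)
    return [
        gain * (signal[i - 1] + signal[i] + signal[(i + 1) % n]) / 3
        for i in range(n)
    ]


frame = [0.0] * 16
frame[0] = 1.0  # a single feature "at the edge of the frame"
for _ in range(50):
    frame = feedback_step(frame)

# After many iterations the initial feature is smeared across the
# whole frame -- a tiny analogue of the infinite-tunnel effect.
print([round(x, 5) for x in frame])
```

      A single bright spot ends up influencing every point in the frame, which is the loose analogy above: whatever sits at the edge of the loop gets folded into everything the loop “sees”.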

      1. Philosophical or not, very interesting thoughts and a beautiful video. There might be something there.

        Some eastern doctrine, I think it was Zen Buddhism, says that in these matters neither words nor thoughts can ever explain these kinds of things; the only way to understand is through self-observation in the form of meditation. I don’t know, but if you haven’t meditated, it’s highly recommendable. I’m no expert myself, but a good friend of mine taught me the basics, and it seems to be a good help when it comes to understanding oneself and the universe. In a way the video and your thoughts around it remind me a bit of meditation.

        I like to see it as a “divine” triangle – observation, reason and passion – where meditation teaches you to observe, philosophy to reason and… well, life teaches you passion, and the three of them together teach you the secrets of life.

        Maybe you’re right that we might be the only ones capable of discussing perception, but watch out for the complex sound system of the dolphins! They probably talk a lot about fish, water and sex, but who knows ;)
