The troubled circle of consciousness (philosophy of mind)

I’ve been concerned for a while that consciousness, used by many as a starting point for fairly important moral reasoning, is a shaky concept when we try to utilise it outside its traditional dualist home. I recently voiced my concern to an online acquaintance, saying that outside a dualist framework, there doesn’t appear to be strong justification that “consciousness” is the correct schema to use in tackling important philosophical questions. Sure, I have a sense of what the word is intended to capture: that feeling, that awareness of myself. But I’m into philosophy – I’m not willing to just go on a feeling – I want to see evidence that this is the best concept for me to use. And if you strip away the complex obfuscations of us rhetorically nimble philosophers, the basic definitions of consciousness and awareness look suspiciously circular:

Circular logic - miles of fun with little effort!

Awareness – the state or condition of being aware; having knowledge; consciousness.

Conscious – aware of one’s own existence, sensations, thoughts, surroundings, etc.

This seems to be the most philosophically relevant sense of “conscious” that gets thrown around too – we’re not talking about the word in the sense of concentrating hard on something, and we’re not talking about the opposite of “unconscious”. Yet this definition is obviously circular. Without independent evidence to justify the terms, this looks like a classic if subtle case of begging the question – sneaking the answer into the question itself. Of course, I don’t think that’s the same thing as consciousness being false, but it is, I think, an indication that we might be using the wrong concept to approach the subject matter.

Perhaps even more worryingly, this problem isn’t limited to this one concept. I began to suspect that we could populate similar circular structures with whatever philosophy-of-mind framework we like (physicalist, dualist, dual-aspect etc.). We could, for example, talk of qualia as evidence of the dualist’s mind, neglecting to notice that the definition of qualia actually revolves around experience, and that experience requires an experiencer. The mind hidden at the start of the chain of reasoning becomes a discovery of supposed evidence later on. Or if we’re in a physicalist mood we could swap experience for neuroscience, and the mind for the brain. Or perhaps idealist language appeals to us. Hide your assumptions in the language, and suddenly anything is possible (like calling everything “physical” or “mental”, and expecting those words to retain their original meaning).

Concerned about this development, I was compelled to consult Broad’s well-known list of philosophies of mind for other options (hint: it’s near the end of the paper). However, even here I was dismayed to recognise that horrid circle subtly etched into Broad’s language. He chooses to describe unjustified entities as “delusionary” and uses this language to declare several forms of neutral monism invalid. Change “not using concept X” to “concept X is a delusion”, and you’ve planted seeds that imply a certain philosophy of mind. It’s then trivial for Broad to exclude several ideas – if concept X doesn’t include the possibility of “delusions” as you frame them, then it’s contradictory. Perhaps I’m being a bit harsh on Broad here, but I can’t help but see the same kind of language trick at work.

So what if all philosophies of mind beg the question in some subtle way? Are various forms of dualism and monism at first glance internally valid, but without real justification? What if the whole field is just a room of linguistic smoke and mirrors?

My friend, a dual-aspect neutral monist, wasn’t yet convinced by my circularity argument. He pointed out that even if the definitions appear circular, that doesn’t rule out there being empirical evidence that independently justifies the concepts:

“If Sarah has evidence that Sarah is aware of Sarah’s own existence, then Sarah has evidence that Sarah is conscious”

OK, so showing circularity is necessary but not sufficient for my argument. But I wonder: is this evidence as sound as it appears? Or more specifically, is this evidence also linguistically flexible? For the sake of argument, let’s try an interesting substitution here:

“If a camera has evidence that the camera is aware of the camera’s own existence, then the camera has evidence that the camera is conscious”

So say we take a camera and get it to take a photo of itself in a mirror (i.e. a photo of itself while taking a photo). Is it conscious? In a sense, it does have evidence that it is aware of its own existence – easily shown if we use the awareness definition above that includes the phrase “has knowledge”. In that case, the above statement is true – the camera really is conscious. Still, perhaps we’re being uncharitable – maybe “awareness” of one’s own existence is more than simply having information about one’s own existence. In that case, a simple snapshot in the mirror is not enough.

We might instead say, being conscious, or self-aware, is more like the modelling of thought within thought (a collection of thoughts thinking about itself). So we are aware of ourselves as thinking entities. For you and I, that means thinking in sufficient detail about our own feelings, intentions etc. I think anyone would be hard pressed to deny they do that from time to time.

RIP – all the conscious entities that died here

Yet how often does this occur? Certainly not when I am sleeping. And for most of the day, especially when I’m not doing philosophy or being reflective, I’m not thinking about my own thoughts. I’m thinking “look out for that car” or “that food looks tasty”. And if I’m not “aware” of my own thoughts, and if my self-aware thoughts are the fabric of my consciousness (whether that consciousness lies upon a physical or mental substance), then I certainly do not have an ongoing coherent object called a consciousness. At best, a single conscious entity flickers into existence briefly, a fleeting moment of self-reflection, and then is gone. It’s not even clear that we have justification to treat more than one conscious episode as connected – they could be entirely separate objects that cluster around a person in space and time.

Or in other words – if we’re honest with ourselves, self-awareness, and therefore being conscious, is a part-time occupation at best.

It seems to me that an object’s defining attributes aren’t a part-time affair. If someone were to say “I am a Homo sapiens”, then we’d expect them to be a Homo sapiens all the time. If they altered their DNA to be Homo sapiens only for brief periods a couple of times a day, we wouldn’t accept the label as accurate. Yet that is what appears to occur when we talk of being conscious. We are briefly self-aware, but then our thoughts move on. Perhaps that’s because modelling your own thoughts, your own mind, is pretty darn complex; and the more complex you are, the more complex the modelling required to create a meaningful representation. Full-time self-awareness is beyond us – even a meditating monk dedicating their life to achieving “consciousness” still has to eat and sleep. Therefore, it’s simply not a defining attribute. It’s more like something we do from time to time.

I suspect some people would attempt to amend consciousness rather than simply discard it. The chief argument of this kind I have seen says that consciousness is the unique ability to be self-reflective, even if we’re not actually exercising that ability all the time. Yet I can’t help but ask: is somebody a soccer player because they have the ability to play soccer, or because they actually play soccer? Again, if it’s just something we do from time to time, it’s not worthy of being a defining attribute of who we are. Hoping to mend this broken concept feels like clutching at straws.

Perhaps using consciousness as the defining human characteristic (especially outside dualism) just isn’t meant to be. Such a problematic starting point throws the whole chain of moral reasoning into doubt. Rather than searching for the core of who we are, it seems we’re taking one of our abilities (important though it is) and pretending that ability is literally what we are. Even on the most generous definition of “conscious” I can find, it seems inevitable that being conscious is either a completely empty circular tautology or, at best, a rare, fleeting human state. Such disparate flickers of self-reflection don’t seem like justification for the existence of a single coherent entity we can call “consciousness”. It certainly doesn’t equate to a single entity that encompasses “who we are”. You and I are not consciousnesses.

Yet if consciousness has played a role in so much moral reasoning, are we just to discard it? Certainly unpleasant consequences don’t negate logic, but if we are to loosen our grip on this bond, this moral link between you and me, then what can we reach for?

No doubt that topic is beyond a brief article like this. But a logical starting point, I think, is life. Straightforward, biological life. By seeing ourselves in that context, I think we open the possibility of imbuing all our greatest inventions, our most noble endeavours, our technology, our civilization, our reason, with a purpose. We connect our existence not just to the hedonistic whims of the moment, but to an intricate, awe-inspiring story that I hope is just beginning to unfold. A story whose central concept is true and sound, requiring no philosophical balancing act atop a circle of broken logic. Whether you’re a dualist or monist, whether Buddhist, Christian, atheist or anything else – we’re all living creatures, living together in a difficult universe. If we are to consider moral questions, perhaps this is the best place to begin.

7 comments

  1. As you note, “awareness” of one’s own existence is more than simply having information of one’s own existence. And similarly belief and desire are more than a few bits of information – they can only be attributed in the context of a bustling cognitive economy. Like that of a rat. A rat has beliefs and desires, like “look out for that cat!” and “that food looks tasty.” (Neat! I only had to change one letter.) And so it has simple consciousness, without self-consciousness – without awareness of the fact that it is a thinker and a desirer.

    Perhaps there are borderline cases – animals of which it isn’t quite fair to say simply that they are conscious, nor to say simply that they’re not. But there are also borderline cases called “dusk” and “dawn”, yet the difference between night and day is like the difference between night and day.

    Simple consciousness is morally important too. Self-consciousness is good for lots of instrumental reasons – it makes us moral agents; it underlies empathy; and more – but in its own right, I don’t think it’s more important than sensory and emotional consciousness.

    I somewhat disagree with your dismissal of the awake / asleep-or-comatose distinction. Of course it’s not the same concept of consciousness that’s at stake in phil mind, but it’s not unrelated. Understanding the relationship will highlight some of that empirical evidence you seek.

    1. Thanks for another thoughtful comment.

      So here I think you’re using “being conscious” to mean not so much self-awareness, which I think is the commonly argued sense, but awareness generally. So what is awareness? It might be possible to criticise that as even more directly circular, given the definition I use, but I think you offer a different alternative definition – a bustling cognitive economy.

      Now there are two questions I have here. First, what is meant by “cognitive” here? Is it thought in a dualist/physicalist/neutral monist sense? If we in turn investigate the meaning of “cognitive”, do we find it traces back to empirical or logical meaning, or do we find that definition also relies on a philosophically contentious starting point? I suspect this definition can be reduced to physicalist language (very roughly, 3rd-person language about thought) and eventually neuroscience. That’s not false, but I’d suggest that particular language is not totally neutral between the different philosophies of mind either. My suspicion at the moment is that it is in fact very difficult to find language that doesn’t beg the question in thinking about the mind.

      My second question is whether a bustling cognitive economy is sufficient for our purposes. I can imagine a supercomputer, one that is in no way an artificial intelligence (let’s say it processes weather data), that has many complex processes going on, but isn’t anything like things we call conscious. In this way I think “self-awareness” specifies something that gives more of the correct “feeling” of what we call “consciousness”, even though it is logically flawed in the way the article describes.

      All in all, my strong suspicion is that instead of asking “am I justified in using this concept ‘consciousness’ to describe the mind?”, we’re really just saying “I must have a consciousness, I just have to find it”. That’s not an easy process to challenge, because it’s not the evidence but the concept itself that is the problem.

      The asleep-awake issue for me is quite important because it completely rules out consciousness being a consistent entity across your lifespan. At best it means you have many brief unconnected consciousnesses. It’s not a single object, ergo it’s not “who you are”.

  2. “I have been concerned for a while that consciousness, used by many as a starting point for fairly important moral reasoning, is a shaky concept when we try to utilise it outside its traditional dualist home”

    Consciousness figures in dualism, idealism, monism and most forms of physicalism. It only doesn’t figure in eliminative materialism. Eliminative materialism is the exception. Taking consciousness seriously is not the exception.

    Consciousness isn’t the best concept to use for what? Moral relevance? But the arguments you go on to give aren’t specific to ethics.

    “..this definition is obviously circular. Without independent evidence of them…”

    Why jump from talk of definitions to talk of evidence? Circular definitions can be a problem, and circular arguments can be a problem, but they are not the SAME problem. Evidence for the one is not evidence for the other. (Admittedly you didn’t explicitly say it was.)

    What is the problem with a circular definition? In the worst cases, it means you don’t have enough information about the concept to empirically confirm it. To be told that a freeble is a doobix and a doobix is a freeble doesn’t allow you to build a freeble/doobix detector.

    But that doesn’t seem to be the case here, since you go on to discuss the empirical evidence for consciousness. For instance, you note that, by one criterion, cameras are possibly self-aware. The argument, presumably, is that the criterion is too broad, that it is unintuitive. But it can be refined. Self-consciousness in animals can be tested by observing how they behave with regard to their reflections. There are a number of medical tests for consciousness in the medical sense (as a transient state, as opposed to sleep, coma, etc.). And finally, there is the gold standard: subjective, introspective access (an issue you don’t seem to be facing squarely, in that you rewrote my comment about Ross’s awareness of Ross to be about a third party called Sarah).

    You seem to think this argument, too, is inconclusive, because you move on to another argument about the transience of states of consciousness.

    “It seems to me that an object’s defining characteristics aren’t a part time affair”

    It seems to me that objects are often defined in terms of their capacities and potential behaviour. We credit a TV set with being a TV set although it is sometimes switched off. We call a shovel a shovel even when it isn’t doing any shovelling. We even credit glass with being fragile if it never breaks. (One could argue that potentialities are possessed permanently, if one is willing to reify them, but I don’t think there is much mileage there…) Admittedly we don’t always define things by what they can do or sometimes do, but the lesson here is that there is no one simple formula.

    Additionally, the terminology is ambiguous. For doctors, “consciousness” refers to a state humans are typically in for several hours a day; for psychologists and mystics, full reflective self-consciousness is a state that might last only seconds; and for philosophers, it refers to the potentiality for either state. “Consciousness” can denote a number of states and processes, although there is a family resemblance between them.

    “Is a soccer player a soccer player because they have the ability to play soccer, or because they play soccer”

    We don’t credit someone as being a soccer player on the basis of an unexercised ability, unlike the case of fragility, but nor do we require them to play soccer 24 hours a day, so that is still a counterexample to your objection that you are not conscious unless you are conscious all the time.

    “We might instead say, being conscious, or self-aware, is more like the modelling of thought within thought (a collection of thoughts thinking about itself).”

    Higher-order thought is not the only ethically relevant definition of consciousness.
    Admittedly, some people take the view that it is, but others think that the ability to feel pain, i.e. to have qualia, is what is important. To argue that consciousness is never ethically relevant, you need to address all forms of the argument that consciousness is ethically relevant.

    To argue that consciousness does not exist at all, you also need to address all forms of the argument. You seem to think there is a problem of other minds, but no problem of Ross’s mind… so you object to consciousness, but not the mind. However, consciousness cannot easily be unravelled from mind. It is needed to explain a range of phenomena, including blindsight and synaesthesia.

    “Rather than searching for the core of who we are, it seems like we’re taking one of abilities (important though it is), and pretending that ability is literally what we are”

    Who says that consciousness defines what we are? Is that supposed to be the same argument as the argument that consciousness is the most ethically relevant thing? The connection isn’t at all obvious. You can believe that the ability to feel pain makes an entity a moral patient without bringing in essences.

    1. Thanks for your detailed comment. As usual, you make interesting arguments, but I do hope to persuade you that this article’s claims are in fact true and significant.

      “Consciousness figures in dualism, idealism, monism and most forms of physicalism. It only doesn’t figure in eliminative materialism. Eliminative materialism is the exception. Taking consciousness seriously is not the exception.”

      All this is true, but doesn’t bear on my argument. I wouldn’t go to the effort of trying to write a refutation of something no-one believed.

      “Consciousness isn’t the best concept to use for what? Moral relevance? But the arguments you go on to give aren’t specific to ethics.”

      Also true. But to take a moral position on a thing, shouldn’t we know/agree on what that thing is, or whether it exists? I claim that we do not, and that this leaves the starting point of those ethics in doubt.

      “What is the problem with a circular definition? In the worst cases, it means you don’t have enough information about the concept to empirically confirm it. To be told that a freeble is a doobix and a doobix is a freeble doesn’t allow you to build a freeble/doobix detector.”

      A circular definition is a problem because we cannot pin down the exact meaning of the term. Unless the circularity can be grounded in an independent empirical example (look at that widget over there) or a logical truth (a square is a shape with four equal sides and four right angles), an investigation of the term’s meaning is a fruitless, infinite loop. And if we can’t pin down the meaning, then aren’t claims regarding evidence of that thing meaningless?

      “But that doesn’t seem to be the case here, since you go on to discuss the empirical evidence for consciousness.”

      I’m not doing that, because I believe we can’t discuss consciousness if we don’t know what it is. I am simply attempting to show the deep problems with one core definition and the obvious amendments to that definition. The definition in question is that consciousness is to do with awareness of one’s own existence/self etc.

      “Self-consciousness in animals can be tested by observing how they behave with regard to their reflections. There are a number of medical tests for consciousness in the medical sense (as a transient state, as opposed to sleep, coma, etc.). And finally, there is the gold standard: subjective, introspective access,”

      If we call the mind a brain, we can simply say that these things are biological processing machines storing simplified models of themselves. Like the camera. That’s also the case with “subjective” access (although I’ll note that I think the word “subjective” is dualist anyway, but let’s put that aside), because in physicalism this is just Ross’s brain imperfectly modelling Ross. Phrased like that, why would a physicalist consider it philosophically significant in the way consciousness is significant for dualists? Even if they do, then for what reason is the camera not conscious? Things don’t remotely add up, and I think that warrants non-acceptance until the equation fits.

      “(an issue you don’t seem to be facing squarely, in that you rewrote my comment about Ross’s awareness of Ross to be about a third party called Sarah).”

      If I understand your objection, you’re saying I’ve changed this sentence from 1st to 3rd person. That’s true and reasonable to point out (although unless my readers are called Ross, it would be true even if I kept the name the same). However, change the name as we might, I think my argument still stands.

      “You seem to think this argument, too, is inconclusive, because you move on to another argument about the transience of states of consciousness.”

      Would you agree this sentence may be mild Bulverism? My enthusiasm won’t make my arguments true, and neither will a lack of it (which you feel you perceive) make them false. I acknowledge this was probably rhetorical and doesn’t weigh against your counter-argument.

      “We credit a TV set with being a TV set although it is sometimes switched off. We call a shovel a shovel even when it isn’t doing any shovelling. We even credit glass with being fragile if it never breaks. (One could argue that potentialities are possessed permanently, if one is willing to reify them, but I don’t think there is much mileage there…) Admittedly we don’t always define things by what they can do or sometimes do, but the lesson here is that there is no one simple formula.”

      So you’re saying that we’re conscious sometimes, and there are at least some things we define based on their part-time activities, such as a TV or a shovel. Certainly a shovel is not always shovelling, so I agree there. But imagine for a second that a TV blows its CRT. It will never televise again. Is it no longer a TV? Or is it a broken TV? What about a broken shovel? The potentiality you refer to is gone. I suggest that in these examples the name is not derived from the potentiality, but from the intended human purpose of these devices. That’s very different.

      Even if we were to accept potentialities, which to be clear I don’t, then in the case of a human, which potentialities are significant? Don’t we have thousands of abilities, and perhaps infinite potentialities? Why does this particular ability impart something of philosophical significance on humans? Why isn’t it just another of our many abilities? It’s even harder to see how we could approach this problem by choosing teleological conditions (as with the shovel or TV).

      Modelling ourselves in our neurons, as physicalists by definition hold the mind to be doing, is a useful but peripheral ability that does not justify philosophical significance any more than modelling weather or bananas does.

      I also note that accepting many definitions (no one simple formula) seems like a pretty good mental recipe for equivocation, which I’ve noticed is typical in many discussions of consciousness. This is supported by what you point out…

      “Additionally, the terminology is ambiguous. For doctors, “consciousness” refers to a state humans are typically in for several hours a day; for psychologists and mystics, full reflective self-consciousness is a state that might last only seconds; and for philosophers, it refers to the potentiality for either state. “Consciousness” can denote a number of states and processes, although there is a family resemblance between them.” … “Higher-order thought is not the only ethically relevant definition of consciousness.
      Admittedly, some people take the view that it is, but others think that the ability to feel pain, i.e. to have qualia, is what is important. To argue that consciousness is never ethically relevant, you need to address all forms of the argument that consciousness is ethically relevant.”

      If the proposed definitions of consciousness are deeply flawed, then all arguments based upon those definitions are moot. We (as philosophers) don’t have to address such arguments unless a sound definition is available. I hold that perhaps one is available within dualism (though it creates certain ethical challenges), but that physicalism cannot provide one. Here the refutation concerns probably the biggest “family” of definitions, those that centre around notions of self-awareness. (I think the other big one is “what it is like”, for which there’s no room here.)

      “To argue that consciousness does not exist at all, you also need to address all forms of the argument.”

      I am not arguing that consciousness doesn’t exist; I am arguing that the term is too ill-defined for us to use at all within physicalism. I also note that the absence of a refutation of ALL definitions does not invalidate a refutation of this important family of definitions. I don’t claim to refute all definitions in this article (though I incidentally think most other definitions are pretty suspect).

      “You seem to think there is a problem of other minds, but no problem of Ross’s mind…..so you object to consciousness, but not the mind.”

      I have previously claimed there is a problem of other minds in dualism. I have not claimed this problem exists in physicalism. We can easily identify a brain in both our own heads and others’ heads. I have not seen any strong argument that makes me think “(monism/dualism) is true”, so I don’t think the above sentence characterises my position.

      “It is needed to explain a range of phenomena, including blindsight and synaesthesia.”

      Equating medical consciousness (wakefulness, automatic vs deliberate processing) with philosophical consciousness (as discussed here) is equivocation. This fallacious reasoning is almost ubiquitous and is a big reason I felt motivated to write on this topic.

      “Who says that consciousness defines what we are? Is that supposed to be the same argument as the argument that consciousness is the most ethically relevant thing? The connection isn’t at all obvious. You can believe that the ability to feel pain makes an entity a moral patient without bringing in essences.”

      Many people, though I am mostly interested in philosophers and futurists. In their case, many people who claim to be physicalists talk about “uploading” themselves into a computer, and they often deploy reasoning of this kind. I’m not objecting to the science of their claims (which I think at this stage is beyond our knowledge); I’m saying their claim is incoherent. This has real moral consequences – if we are not a consciousness, then uploading may actually be suicide, or the extinction of humanity.

      Moral claims are indeed different from factual claims of moral interest. However, in both cases the claims rely on clear, coherent definitions of consciousness. I believe I have demonstrated why, if physicalism is true, one of the biggest families of definitions is deeply flawed. If we ignore this, I worry our moral positions are phantasmal and will evaporate when we try to grasp them in a time of need.

  3. Does it make sense to use the term “conscious” in the phrase “release those storms that tear at your conscious”?
    I feel that I accidentally used the wrong word to describe one’s mental state. “Consciousness” is what I believe to be the correct word, but can I use the latter in its place to describe the typical state of a person’s mind?
