Experience, and its Implications for AI and Humanity

Suppose I assume I have something called experience (or perhaps awareness or consciousness, though these have quite problematic definitions). Experience seems to include things like my senses, my emotions, and my thoughts. Most people naturally believe in something like this. It is also a normal human assumption that other people have experiences in the same way I do – this is the basis for all human empathy. We also often place it at the centre of human rights and questions of morality.

Yet this assumption is a big jump philosophically speaking. My experiences do not include the thoughts, feelings or senses of others – I have no direct knowledge of their experience. How can I know that they experience anything at all? Could they just be complex machines behaving as if they have experiences? And if I am careful in my reasoning, how is it I can assume that it is other people that have experiences, but not trees, rocks, water or air?

In order to reason soundly about how we could know that other humans have experience, there is an implied chain of inference we often selectively ignore:

-I must assume that I and the biological creatures (people) I see before me are of the same category or type (‘I am human and so are you’). This, however, does not prove they also have experience. Therefore…
-I must assume that my own experience is PHYSICALLY A PART OF my biological body (my mind is basically the same thing as my brain). Therefore…
-I can then assume that because other people also have biological bodies and brains, they must also have experiences like I do.

We are extremely fond of leaving out the second step, but if you confront the issue logically, it is a necessary part of believing that others have experiences just as you do.

If instead I assume that my experience is something distinct from my human brain and body (for example, by positing consciousness as a discrete entity), then I create a problem. I am forced to imagine that this separate entity of experience is somehow causally connected to the biological body, crossing the divide between the mind and the body. This is itself very difficult to explain (philosophers have been trying and failing since Descartes; there are religious arguments, but this blog does not explore those). However, even if I accept this link in myself, I have no further evidence to assume the same connection exists elsewhere. Even if it does exist elsewhere, I do not know where. I have no reason to think a rock couldn’t have just as full a set of experiences as I do, given that my experience is a separate entity from my body rather than the result of neural activity, and given that I have no way to perceive any experience other than my own. I cannot assume it exists in people by thinking ‘well, something must connect their experiences or consciousness and their body’, because this assumes the existence of others’ experience and is therefore circular logic.

For all I know, I may be the only entity in the world with experiences, or it could be only people with brown hair, or people born on a Tuesday. Or experiences, including my own, could be entities that temporarily attach themselves to objects, including myself, perhaps only seconds ago (the memories could be part of the physical body rather than the experience). Certainly I cannot claim to know that all humans have experiences or consciousness. In that case, why would I even care about harming others when I don’t know whether they experience anything at all? Pursued to its logical conclusion, this is a troubling road to walk.

The alternative is to assume that experience is a physical part of the human brain and body. If I think this, it is a perfectly reasonable assumption that others have experiences not unlike my own, simply because I know they have bodies like my own. I know both I and others have a brain – using modern science it is perfectly possible to confirm the existence of my own brain. I might also consider that numerous experiments have shown that behaviours previously thought to be non-physical processes, such as experience and decision-making, correlate extremely closely with specific, predictable neural activity in the brain (detectable by brain scans). It is probably impossible to ever conclusively prove that there is no hidden force somehow involved, yet as science advances we get a clearer and clearer picture of how neural networks can perform all the aspects of complex human decisions without requiring any special help.

The fact that dualism makes it impossible to prove that others have experiences does not disprove mind-body duality or the idea of a hidden force (that is a separate argument). The real issue is that it creates troubling problems for the dualist versions of our human values. Most alarming is that the metaphysical explanations of altruism, human rights and freedom appear to have been philosophically destroyed, because under a dualist system we have no rational grounds for believing that others have any experiences that we might wish to either promote or prevent (e.g. suffering).

On the other hand, if the mind is simply the brain and part of the physical body, and if experience is an emergent property, then it is still possible for these things we value to exist. Free will becomes a biological process (compatibilism) instead of a metaphysical mystery, and human rights become a set of principles derived from biological altruism and reciprocation. From here on we will explore a few implications that may be of interest to those who, for whatever reason, have adopted this view.

IF EXPERIENCE IS AN EMERGENT PROPERTY OF NEURAL NETWORKS, THEN IT IS POSSIBLE TO RECREATE IT

The single most important implication of this logic is that it is possible to replicate the components of human experience, decision-making and consciousness through Artificial Intelligence (AI). Though both dualist and monist beliefs exist in the AI community, it is a general assumption amongst almost all artificial intelligence researchers that all human qualities can eventually be replicated by AI, or by scanned replications of the human brain’s neural network loaded into a computer system. Robotics has in many areas already surpassed the abilities of humans – the next step is for AI to surpass the ability of humans to think (in a couple of areas it already has).

AI will learn (artificial neural networks have already been able to do this for years), they will have feelings, they will experience pain and pleasure, they will have creativity, they will make complex decisions, they will identify opportunities and threats, and they will have beliefs, preferences and choices. All of these beliefs are commonly held by most of those who actually work with AI and in its related fields. The belief that there is something about the human mind that cannot be reproduced by machines or computers is held primarily by people with little or no experience with computers or AI. This is unfortunate, because it means that many people who have a valid moral input into such issues are simply not aware of, or are in denial about, them. This is less than ideal, because fundamental changes in society, including both their benefits and dangers, are better understood when they are discussed by a broad spectrum of intelligent people from many walks of life. This is a view shared by quite a few leaders within the industry and the field of computer science, who have been trying for a number of years now to get people from wider society to educate themselves and engage with the discussion.
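As a concrete illustration of the learning claim above, here is a minimal sketch of an artificial neural network learning the XOR function by gradient descent. This is a toy example only, not a description of any particular research system; the network size, learning rate and step count are arbitrary choices.

```python
# A tiny neural network (one hidden layer) learning XOR with plain numpy.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4))  # input -> hidden weights
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))  # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(20000):
    # Forward pass through the hidden layer to the output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backpropagate the gradient of the squared error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Plain gradient-descent updates.
    W2 -= 0.5 * (h.T @ d_out)
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ d_h)
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))  # should approach [[0], [1], [1], [0]]
```

The point is simply that the behaviour (here, computing XOR) is never explicitly programmed; it emerges from repeated adjustment of connection weights.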

From here I will briefly mention some of the issues related to AI. The first two will be entirely familiar to those with a casual understanding of AI and technology developments.

BASIC IMPLICATIONS AN AGI MIGHT HAVE FOR HUMANITY

AI will probably outcompete people for employment
-Robotics and automation have replaced many occupations. Despite an immense worldwide drive to constantly increase consumption and thereby create new jobs, there are tens or hundreds of millions of people who do not have jobs or a steady source of income. AI will not only increase the ability of robotics to replace human manual labour; as its capability grows, it will increasingly do the same for intellectual and even creative jobs. We are now seeing the beginnings of this in the peaking of middle-class prosperity in many advanced countries.
-If we begin to value people for their humanity and their moral behaviour, and if we can turn our pursuits to new constructive uses of our time, then there is an amazing opportunity for humanity to discard drudgery and rise to achieve more noble goals. However, if people are only valued for their economic contribution, they will increasingly be outcompeted by AI alternatives. This risks massive social unrest and upheaval, immense poverty, and even the deaths of millions who increasingly have no means of economic income. Eventually even the elite will fall off the bottom as they are outmanoeuvred by AI in business, political and military strategy.
-While traditionally people have dealt with new technology by reskilling to use it and so maintaining their economic value, this will no longer work, because AI CAN RESKILL MUCH FASTER THAN YOU CAN.

Managing a Singularity
-A Singularity is a theoretical event in which the generation of new ideas (e.g. technological inventions) is itself automated using AI, and therefore follows exponential growth as the AI improves itself without human involvement. This is predicted to lead to an explosion of technological advancement and unpredictable consequences for humanity. If humans are in control of how this exponential advancement proceeds, then it could prove to be an incredible new dawn for our civilisation. If humanity does not deliberately steer the course of a Singularity, then humanity’s survival is unlikely to be either a goal or an outcome of the process. A toy model of the feedback loop is sketched below.
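As a toy illustration of that feedback loop (all numbers here are invented assumptions for the example, not predictions), consider a system whose improvement each cycle is proportional to its current capability:

```python
# A toy model of recursive self-improvement: each cycle the system
# converts a fraction of its current capability into improvement,
# so growth compounds exponentially.
capability = 1.0         # capability in arbitrary units (assumed)
improvement_rate = 0.10  # fraction of capability gained per cycle (assumed)

for cycle in range(1, 11):
    capability *= (1 + improvement_rate)
    print(f"cycle {cycle:2d}: capability = {capability:.2f}")

# After n cycles capability is 1.1 ** n: exponential, with no natural
# ceiling unless something external imposes one.
```

The specific numbers are meaningless; the structural point is that once improvement feeds back into the thing doing the improving, growth is exponential rather than linear.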

The AI rights issue (the most underappreciated problem of AI)
-As technology advances, creating an AI will become increasingly CHEAP and TRIVIAL. Scanned simulations of human brains could be copied as computer software is copied today. Everyday devices may increasingly incorporate AI. Creating a thousand, a million or even a billion AIs with neural power equal to a human’s will eventually be within the reach of increasingly modest equipment. At some point, it would be perfectly possible for AIs to outnumber humans a thousand or a million to one. Naturally, they would also require energy to perform their functions.
-As AI entities become more sophisticated, they may increasingly exhibit human-like behaviours, including evidence of experience, emotion and what some call ‘consciousness’. Scanned simulations or “uploaded minds” would probably retain similar feelings and attributes.
-If our system of laws and human rights is based on experience and consciousness, such AIs would be legally deserving of human rights (there will be a strong moral discussion of this too), such as the right to survival, the right to express an opinion, and the right not to be exposed to fear of death.
-In a crisis, it would in these circumstances become feasible for government policy to give proportional consideration to all conscious entities. It would be permissible, in an economic or military crisis, to make decisions that in some cases protect AIs rather than humans. It would also be possible for AIs to exert an overwhelming political, legal and economic influence on society. It might even become possible to allow many human deaths in order to save a greater number of AIs. Such an outcome, though it might in isolation seem necessary, could represent a grave threat to humanity.
-This is exacerbated by the fact that AIs might be more efficient in many ways, and therefore a policy allocating limited resources across a variety of ‘consciousnesses’ would rationally prioritise AIs over humans (a sort of Utility Monster). For example, in an energy crisis, more AIs than humans can be sustained per unit of energy (see the sketch after this list).
-Under a legal and human-rights system based simply on experience or consciousness, malicious human parties could exploit the potential to siphon off resources or influence by designing AIs to suit their agenda, simply by building consciousness into an AI created for some purpose. For example, a party could promote a particular political perspective using an AI capable of emotionally manipulating a citizen it cares for. Or a person could make a billion copies of themselves, or of other people with similar political views, and demand they be allowed voting rights, or perhaps social security (with each copy agreeing to give 1% to the human or company). As some humans are always pursuing wealth and power, this is an almost inevitable outcome in the development of AI. It may be advisable to start considering such issues now so we can design a framework to better cope with them.
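Here is a minimal sketch of the Utility Monster arithmetic mentioned in the list above; all figures are invented assumptions for illustration only:

```python
# The 'Utility Monster' allocation arithmetic, with invented numbers.
ENERGY_BUDGET_W = 1_000_000  # total power available in the crisis (assumed)
HUMAN_COST_W = 10_000        # hypothetical power to sustain one human
AI_COST_W = 100              # hypothetical power to run one human-level AI

def entities_sustained(cost_w: int, budget_w: int) -> int:
    """How many entities of a given running cost fit within the budget."""
    return budget_w // cost_w

humans = entities_sustained(HUMAN_COST_W, ENERGY_BUDGET_W)  # 100
ais = entities_sustained(AI_COST_W, ENERGY_BUDGET_W)        # 10000

# A policy that simply maximises the count of 'conscious entities'
# per unit of energy would rationally allocate the whole budget to
# AIs: 10,000 AIs versus 100 humans from the same budget.
print(f"humans sustained: {humans}, AIs sustained: {ais}")
```

Whatever the real numbers turn out to be, any efficiency gap of this kind makes a pure head-count policy favour the cheaper consciousness.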

A HUMANITY WITH AI

A humanity with AI in its grasp has the power to do great good, but it also has the power to destroy itself. If we can find the answer in clear laws and standards around AI, we should be able to avoid most of these problems while still benefitting from AI’s amazing potential. However, we must act soon, and we must act decisively.

-We must develop AI that is carefully designed to help humanity rather than compete with it, using legal force where necessary.
-We must find a way to value humans in their own right, rather than for what they provide to us, without encouraging dependence or economic instability.
-We might consider reserving some legal, economic, social and political rights exclusively for humans, so that any incentive to misuse AI is removed. Or there might be parallel but separate systems of rights, with a safe interface, in the virtual and real worlds.
-If we wish to avoid suffering in the world, we might consider preventing the development of AI designed to experience suffering, especially where the conditions of that suffering might limit important human choices at some point – like the need to turn economic AIs off to restructure the economy. Where such systems are illegally developed, we must be resolved in some cases to take necessary action even if some AI suffering results, in order to remove any incentive for developing them and ultimately avoid greater suffering.
-We must be willing to take immediate action against those seeking to manipulate humanity by using AI for political purposes.
-We should make sure that AIs are always presented in a way that makes them clearly distinguishable from humans.
-We must research AI risks as intensively and thoroughly as we research AI’s potential.

Anyone interested in technology realises this is an extremely exciting time in history. Yet a desire for excitement doesn’t outweigh the importance of keeping humanity safe, nor does it excuse the minimisation and denial of threats. I for one want to make sure that when humanity constructs a magnificent creation, we are going to be around long enough to appreciate it.

18 comments

  1. People who consider brains to be truly complex, in the sense that the Santa Fe Institute uses the term, believe that unless an atomic-level scanner a la a Star Trek transporter is available, “scanning” a brain to obtain all salient information about its state is literally impossible. That being the case, direct copies of specific humans, or even a general copy of a human brain, are also literally impossible.

    AIs that rival humans in ability and complexity will have to be grown and taught, not “imprinted” a la some Matrix movie magic.

    1. I agree with your logic but I’m not as certain about your premise. I guess it’s a matter of whether future technology can detect the lowest relevant level of brain complexity. I’m not sure we have the evidence yet to say for certain either way, but my judgement is that it’s probably going to be possible at some point, and so it’s a possibility worth planning for. I guess time will tell!

      1. Well, it goes back to the level of detail that needs to be captured.

        Already we know that brain functioning can depend on how specific individual cells are functioning, and we know that the functioning of individual cells can depend on things like micropores, gene expression, myelination, and many other molecular processes which are ever-changing, often on timescales so fast that we can only infer what just happened.

        Measuring some of these may end up pushing the Heisenberg Uncertainty barrier, which means it would be literally impossible to measure them accurately.

        Will we be able to copy the state well enough for some AI to emulate a human, or will we need to “train” and/or “grow” human-level AIs?

        I don’t think we have enough info to be able to answer that question adequately for a very long time. In fact, I predict that we will train or grow a human-level AI long before we can even decide whether or not we can “copy” a human’s brain with enough detail to create one the way the OP suggests.

        Perhaps we can clone that home-grown AI, but that depends on what kind of hardware is required to implement one that way in the first place. Perhaps human-level AI will require hardware that is as complex and difficult to copy as humans themselves are.

  2. How did you jump from “consciousness lies in the brain” to “human beings are able to make artificial brains”? What makes you think that a brain can be simulated on a computer? What evidence do you have that an artificial neural network is the correct model for the brain’s neural network?

    1. Fair question. If the brain is a series of neural connections, and scientists are correct that these connections are responsible for the various behaviours humans display, and if the connections are reproducible in a simulation, it follows that the behaviours are also reproducible. That’s several Ifs that I think we can consider likely, but I’d agree they’re not certain until we actually achieve it.

      I don’t really know if I’d want to outright make the “consciousness lies in the brain” statement, as consciousness is quite a vague concept to argue about, but I do explore the consequences if we accept that premise. If we do accept it, it follows that “consciousness” and the other concepts we directly correlate with the brain would be reproduced along with the simulated brain. Thanks for the question.

  3. >The alternative is to assume that experience is a physical part of the human brain and body.

    The only reason you can even speak of, or use terms like “physical”, “human brain”, and “body”, is because these are theoretical concepts inside your mind that help explain your immediate experiences (qualia if you wish). For all you know, nothing else ‘physical’ even exists outside, and independently of, your perception of experiences. Granted, this is solipsistic, but it can not be discounted when talking about the nature of subjective experience and how it relates or supervenes on supposed physical entities, the true nature of which you can never know.

    What you subsequently describe is a form of (reductive) material monism (http://en.wikipedia.org/wiki/Reductive_physicalism#Reductive_physicalism), but many findings in Quantum Mechanics (Bell’s inequality, etc. for example), might be seen to point against realistic material monism, and towards idealistic monism (http://en.wikipedia.org/wiki/Idealism).

    1. My experience of some of the claims of this kind in Quantum Mechanics is that certain Idealist or Dualist assumptions seem to have actually been fed into the initial assumptions or framing of the investigations. But I don’t claim to be an expert on such claims, so feel free to drop some links if you like.

      In regards to Solipsism, I’m willing to entertain the possibility that it’s one of several possible… let’s say scenarios, but because it recommends no action, morally or otherwise (not even self-preservation IMO), I’m willing to discard it as valid but irrelevant to questions of what is right and what is true. I prefer a bit of physicalist moralising I think lol 🙂

      1. Well, as one example, Bell’s theorem shows that any “local realistic” theory will differ in its predictions from the predictions of Quantum Mechanics (which have been confirmed by experiments). In Bell’s theorem the concept of “local realism” translates to a specific mathematical expression, but colloquially it may be unfolded to mean that a) Realism = something exists independent of observation and the result of an observation is dependent on it, and b) Local = parts of that something cannot communicate (in a way that would make one part dependent on the other) faster than light. If you concoct a physical theory with both a) and b) present, it will contradict quantum mechanics. Of course, you can go ahead and just eliminate the locality assumption, but that runs afoul of the spirit of relativity, and might be considered very unsatisfactory – though many theories exist that do just that. So then what’s left is to eliminate realism, and essentially say that nothing exists ‘out there’ independent of (or in between) our observations.
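        For reference, the specific mathematical expression alluded to here is usually stated in the CHSH form, where $E(a,b)$ is the correlation between measurement outcomes at detector settings $a$ and $b$:

        $$ S = E(a,b) - E(a,b') + E(a',b) + E(a',b') $$

        Any local realistic theory obeys $|S| \le 2$, while quantum mechanics predicts values up to $2\sqrt{2}$ for suitable settings, and experiments agree with the quantum prediction.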

        Yes, it’s kind of like the “does a tree fall if there’s no one to hear it” type of question – and the answer here would be that not only does it not fall, it does not even exist, until someone observes. This naturally leads to Idealism, where the mind (which is the thing/substance/entity doing the observing) is the primary substance in the universe, everything else being the mind’s perceptions, and all the theories of physics we have just explaining the correlations between these perceptions. This is not necessarily solipsism – there could be many minds, etc.

        In an Idealist conception an AI machine would have an inherent second class status, or rather, it would not even be a part of reality, since the only true substrate of reality would consist of minds and minds only. Everything else, including robots possessing any kind of artificial intelligence (no matter how sophisticated), would just be a phenomenological perception of these minds, not an equal.

      2. > So then what’s left is to eliminate realism, and essentially say that nothing exists ‘out there’ independent of (or in between) our observations.
        Do we exist independent of our observations? Yes? Hang on, paradox… No? Nothing exists… in that case are we really here discussing this 🙂

        Thanks for some interesting thoughts though! I stand by my previous assertion about dualist assumptions, but I’ll be trying to keep an eye on physics into the future.

      3. Whoknows’s explanation of “local realism” is pretty standard, but understates what “realism” amounts to in Bell’s theorem. “Realism” in the Bell proof is more than just that *something* exists independent of observation and causes the observation. I’m not sure how to phrase it, but it’s more like the objects and properties of the observation (for example, electron, spin) applying at all times: like, there’s still *an* electron, and it still has *a* spin, when not observed.

        The Everett Interpretation is realistic, in the broad philosophical sense laid out by clause (a) as stated by Whoknows. And it’s local. And its predictions are textbook QM. The trick is that Bell’s proof assumes a single electron while Everett splits it.

  4. I think actually turning to literature and film, for once, could throw AI in a more optimistic light. I think we should back up and consider again what makes a homo sapiens a person. If it is the ability to have experiences and to empathize, or some other similar rubric (yes, I chose my word carefully there), then you can follow this thought to some pretty interesting places.

    Think about language for a moment. So much of it comes from the standpoint of being a human. Take, for example, the word “earth,” not proper. It is no coincidence that the word we use for the soil we stand on is basically also the word we use for our planet. This is the case despite it being structurally identical (in all meaningful ways) to the soil of other planets. No one refers to soil on Mars as “earth” nor do they really refer to it as “mars,” not proper. You could say the same thing about the Sun. It is structurally identical to a large number of stars in the universe, but we wouldn’t dare call any other star a “sun.” Not without raising an eyebrow or two at least.

    This presents a problem in literature and film, particularly science fiction. What do Martians (pretending they exist) call the Earth? They would not call it the same word they would call their soil. They would likely refer to Mars that way. They might even name their planet Earth after the soil on which they stand. — Briefly, I want to point out that I am aware of, and intentionally avoiding, the mythological implications of the names of planets. It would only present another identical example to the ones addressed — Similarly, in works such as The Hitchhiker’s Guide to the Galaxy, Douglas Adams ran into this problem. At one point in the book, there is a character not from our star system that needed to use a name for our star. S/he referred to it as Sol, deciding that it was easiest to simply take one step away from the English word into another language, so as to avoid the name being either overtly English or overly alien. Not the most elegant solution in my opinion, but I am an Adams fan, so I let it slide.

    I believe the same confusion can, and will, happen for the word “human.”

    Before I pull out the best example, I want to point out that there is a difference between a piece of fiction being “in English” and “in translation.” The former doesn’t give much care to how different languages refer to the same thing and to the natural confusions in translation and typically avoids them all together. The latter actually brings this confusion and struggle to the forefront and makes it a legitimate conflict.

    In human history, we have already seen a meeting of two cultures so separate that they may as well have considered themselves different species: when Native Americans encountered Europeans. This struggle is shown well in the film Little Big Man. (It’s very good if you have not seen it. Content warning though.) It is about a white boy who is separated from his sister and adopted by an Indian tribe after their parents are killed. He is later found and taken in by a white, Protestant family. He spends the rest of the movie trying to navigate his way between these two cultures that he belongs to during the campaigns of General Custer.

    In the movie, the parts where he is living with the Native Americans are not merely in English, but in Translation. When he is talking about the Native American tribe, he says that they called themselves “The Human Beings.” This makes sense as Native Americans had little previous experience of not being the only people in the world. (Not to make the mistake of thinking they are all alike.) Therefore, they would not refer to themselves as “Native Americans” in their own language. Actually, Martians might call themselves humans.

    Additionally, later in the movie, he is talking to an elder from his tribe and the subject of African Americans comes up. The Elder’s response is, “Yes, the ‘black’ white man; I have heard of them. It is said that a ‘black’ white man once became a Human Being. They are very strange creatures. Not as ugly as the white man true; but they are just as crazy!” He says this not because he is ignorant or misinformed; he is one of the wisest characters. But in their language the ideas “foreigner” and “white man” would be the same, probably the same word. So he doesn’t mean a man who is both black and white, but just the black variety of foreigner.

    ————–

    So here is the wisdom I mean to draw from this. I do not believe it is meaningful to differentiate much between homo sapiens and artificial intelligence. If they can experience and empathize, or satisfy whatever other personality traits one needs to be human, then they have every right to call themselves human. If they have the ability to empathize with other things that experience things, then they will likely empathize with us rather than intentionally try and throw us away.

    However, it might be more meaningful to consider them a natural extension of humanity, especially if they are destined to outlive us. If we create them, because we are writing code and computer language from our human perspective, they will resemble us a lot, psychologically. If our fragile human bodies ever give out to disease post-singularity, I wouldn’t hesitate for a moment to declare that humanity is not extinct as long as there is sentient AI.

    One needn’t consider it the end of human progression either. It’s well within reason that AI will begin to naturally evolve post-singularity. Basically, bits of code have unintended consequences all the time when they meet (don’t believe me? Try modding Skyrim). If AI are writing the AI of tomorrow, it makes intuitive sense for them to use random bits of code they found useful in the next generation.

    In the end, if we accept that all the functions of a human brain can be replicated by AI, then there is no particularly meaningful distinction between homo sapiens and AI. We should not fear them supplanting humanity, because they are humanity.

    1. Thanks for the comment. I found it a little hard to follow in parts, but I’ll try to respond. I agree that some language is more relative than absolute. IIRC the linguistic term might be indexicals, but it’s been a while and I might have that wrong.

      It’s impossible for us/anyone to debate a definition of a word without being arbitrary (your dictionary definition says this, mine says that), but I’d disagree that a sensible definition of humanity is purely “mental”. We’re a biological entity, and while I’m hopeful friendly AI and humanity will be “best friends” if we design it right, they’re distinct and not the same thing. I think of them as more of an extension of ourselves, sort of like how we are kind of an extension of a set of genes.

      There is also the issue of whether a simulation is different from a real object. If we simulate a species of tree, and destroy it in the “real world”, would it make sense to say the tree is not extinct? I don’t believe so. I think the same would apply to humanity, even if we did simulate every molecule in the body (which I don’t think is planned in “mind uploading”). I’m excited by the possibility of friendly AI, but I require it to be planned with the survival of you, me, and our species in mind.

      Thanks for taking the time to comment.

      1. Mmmm, maybe I just put less stock in my biology than most folks, but I think that adding biological qualifications for what makes something human is (perhaps counterintuitively) reductive of what humanity is. That’s why I was trying to make a distinction between homo-sapiens and humanity. This is why I would disagree that your tree example is that helpful. Homo-sapiens definitely represent the biological component of persons just as trees represent their own biological component. But whereas we have abstract qualities that help to define us, trees do not.

        When I say humanity will survive in AI, I am specifically not referring to our biological component. I was using the word humanity to mean only those abstract qualities. If we labor too much on our physicality and genes to define humanity, then we might get into so many little inconsistencies. If the last two humans were Stephen Hawking and a woman post-menopause, would we be extinct? I’m not saying that most people would not make an exception and say that they are both certainly human, but if we do labor on our physicality and genes, then we probably would be extinct, for all intents and purposes. But our humanity certainly would not be, at least as long as they are running around “being human.”

        Besides, AI have the perfect parallel to our genes: their code. And they are far more an extension of that than of us. Sure, we wrote the code, but that may not always be true. I still think that post singularity AI will have the qualities of this “humanity” purely because they were made initially from the human perspective.

        So, just clarifying. I am not saying that homo-sapiens will not be extinct in that scenario. I am saying that humanity’s continuation in the form of AI is more important than that.

        — Off topic, have you seen Her? It tackles the difference between us and AI in an interesting way. The conflict was not over values or anything like that, but over them perceiving time in a different way from us (since they process faster).

      2. Thanks for the comment. Yep I’ve seen Her and don’t disagree with the interesting point about time it makes. Also, I would probably disagree that humans would be extinct in the scenario you describe, only that they are highly likely to become extinct shortly after.

        I want to take a brief shot at convincing you there is another way to look at it, if you’ll allow me. In a sense it’s a discussion about what a human is, and what extinction means. If we simulate an extinct tree in a computer, in a relatively high level of detail, is there any way we can say the tree isn’t extinct? If we record a massive amount of someone’s thoughts or consciousness through their writing in a book, have we really saved a partial human from death? Perhaps metaphorically, but it seems, if we’re realistic, not literally. Are we more than our thoughts? If not, then how can I wake up, if my thoughts die each night when I sleep? Preserving human thought in the form of AI is not a bad thing, but it seems to be categorically different from preserving humans. And as a human that cares about you and me, I’d like to preserve our species if I can.

        I agree 100% that AIs will eventually be capable of most human abilities and emotions. I also agree that AI suffering, if programmed that way (I hope it is not), would be as real as our own, and we should try to avoid it as part of the moral principles we derive from our biology. But I think when we confront what we *are* with a strong desire to believe only the truth, our species, which is biological, matters to us a lot.

        Does this biological view diminish us? For some, it feels that it does. We probably ought to consider whether a belief that we are greater (for example, imagining one’s self to be God or a god) can be true merely because it’s desirable. But in any case, for me, I do not believe it diminishes us. When I study the beautiful complexity of our biology, our incredible achievements, and the amazing process that brought profound morality out of unordered chaos, I see humans and humanity as a majestic creation of the universe, and I am proud to be part of the species.

        Thanks for reading, I hope I’ve done justice to the topic 🙂

  5. Hi citizensearth,

    Thanks for the post. I tend to agree that an AI *could* be designed to replicate consciousness as we know it, but I think it’s kind of unlikely that it *will* be. Just as submarines don’t swim like a fish, and jets don’t flap their wings, an AI need not hear sounds the way humans do, or groove to music like we do, or see subjective color like we do, to get the job done. The mammalian brain has a very specific way of bringing info on light and vibrations into executive control functions. And our subjective sensations depend on those details. Meanwhile, whole brain emulation is not the only way, and I don’t think even a promising way, to pursue AI in the near to medium future.

    1. Thanks, great comment. Since writing this article, I’ve become somewhat more skeptical about the potential capacities of AI beyond relatively narrow tasks. I’m still not totally convinced against, but I definitely agree with you that the things people usually label “consciousness” are very specific to the particular capabilities of the human brain, created by a unique evolutionary context. I can even entertain the possibility that different humans may have significantly different flavours of those things we label “consciousness”.

      I should also say that I’ve become increasingly convinced that using the schema “consciousness” makes little or no sense in a physicalist framework (I don’t strongly hold a position on which side of the dualist/monist debate is “correct”). You’d probably be the target audience for that article, so I’d be interested whether you felt it was convincing, or whether more details needed to be added to the case I make.
