
Is placing Consciousness at the heart of Futurist ethics a big mistake? Are there alternatives?

When we want to think about right and wrong in the context of future technologies, consciousness is the natural go-to as a starting point for moral principles. We associate consciousness with suffering, pain, will, creativity, freedom, even life itself – all the kinds of things ethics is concerned with preserving or avoiding. And for those who can envisage our own future intertwined with new and exciting technologies – automation, nanotech, decentralised manufacturing, and perhaps most profoundly, AI – it jumps out as a potential unifying principle in ethics. If we protect consciousness, we protect life – or so we reason. Yet there are some drawbacks we ought to consider, so that we build our ethics on foundations that will truly survive the profound changes in our civilisation. And if there are drawbacks, we ought to consider refinements and alternatives to our thinking, so that the creeds that shape the future are logically sound appeals to moral reason, rather than the enraged cry of the Luddite. Here I have tried to lay out some of the deep problems of consciousness as a moral concept, and to demonstrate that there are other options available. If you stay with me and consider the possibility of problems in the intuitive concept of consciousness that most people hold, I hope I can show you an alternative that is sensible and less ambiguous; one that is enthusiastic about our technological future, but still stands against both human extinction and AI suffering and misery.

-Vagueness

Let us begin by assuming that for any moral principle worthy of being placed at the centre of our moral reasoning, it must have a clear, unambiguous definition, so that our logical foundations don’t falter upon shifting sands. This is the first problem with consciousness. We all have beliefs about it, but if we’re honest, we don’t really know for certain what it is, and it’s difficult to know if it exists at all. There is no consensus about consciousness, just a range of sentiments centred around the self, and dozens of proposed definitions surrounding those (the first paragraph of the Wikipedia entry is a rather amusing illustration of the lack of consensus, though the Stanford Encyclopedia of Philosophy entry provides a more reputable survey of the myriad perspectives, most of which start from the same relatively unexamined intuition). Worse, many of the proposed definitions suffer from vagueness too.

For example, perhaps we can decide consciousness is a state of being “self-aware”? So, when an organism or AI becomes self-aware, it becomes conscious. But what is self-awareness? Is it simply a thing knowing about its internal states? In other words, reacting to internal states rather than just external stimuli? Does this mean that any complex system that reacts to its internal states is conscious? On what grounds could we say no? Is an ecosystem conscious? Is a town meeting conscious? Do we kill a conscious being when we break up a town meeting? Or is that absurd? How could self-awareness possibly be considered an unambiguous definition in this light?
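To see just how permissive “reacting to internal states” is as a criterion, consider a trivial program that does exactly that. This is a minimal sketch, with all names and numbers invented for illustration – yet under the definition above it would seem to qualify as self-aware:

```python
# A toy system that senses and reacts to its own internal state.
# Under "self-awareness = reacting to internal states", even this
# trivial feedback loop appears to qualify.

class Thermostat:
    def __init__(self, target):
        self.target = target      # internal state: desired temperature
        self.current = 20.0       # internal state: sensed temperature
        self.heater_on = False    # internal state: its own actuator

    def step(self, ambient_drift):
        self.current += ambient_drift       # react to external stimulus
        if self.heater_on:                  # react to its *own* state
            self.current += 0.5
        self.heater_on = self.current < self.target

thermostat = Thermostat(target=22.0)
for _ in range(10):
    thermostat.step(ambient_drift=-0.3)
print(thermostat.current, thermostat.heater_on)
```

If we want to deny that this loop is conscious, we need to say what extra ingredient it lacks – and that is precisely the part no proposed definition pins down.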

Perhaps this approach to the definition is wrong. Perhaps it’s more about awareness of one’s surroundings? Or awareness of some aspect of time, like planning for the future or remembering the past? But then what is awareness itself? Can we define awareness without a circular reference to consciousness? If not, how is awareness different from straightforward computational processing of stimuli? The same can be said if we try to substitute the term “experience”. Is any processing of stimuli “conscious” by this standard, no matter how mundane? On what grounds do we draw the line?

Others might suggest consciousness is tied up with some notion of creativity. Creativity is even more difficult to define. In particular, can it be differentiated in essence from directed randomness? Or perhaps our definition can refer to emotions? Is it clear what emotions are? Are emotional people more conscious, and therefore more morally worthy, than those who are less emotional? Is a stoic person near-worthless? Or what if it turns out animals are more emotional than us? Are we to be removed for their benefit? Is this really a sensible way to determine what is right and what is wrong?

In the end, there are many possible definitions, but precious little clarity in their meaning or premises. And while we look for substance in the mist, there is the constant temptation to select a definition that simply suits our purposes or agenda. Many scientists and commentators certainly have. But there is no consensus, because unlike simple physical phenomena, consciousness has a vague meaning that depends on complex and perhaps unresolvable philosophical claims. A clear definition simply does not suggest itself to the observer. There is no guarantee that a lawyer, a politician, or a powerful AI will select the same definition as you. Indeed, under their chosen definition, there is no guarantee that you or those you love will be judged to have an acceptable level of consciousness to deserve moral treatment. Under their definition of consciousness, you might have no rights at all. Perhaps humanity ought to rely on something more solid to guarantee our safety and wellbeing.

-The problem of other minds and moral rights

Even if we can define our own consciousness from a subjective point of view, we face the impossibility of proving consciousness in others. In philosophy, this is called the problem of other minds (https://en.wikipedia.org/wiki/Problem_of_other_minds). How can anybody, human or animal, alien or AI, demonstrate to others that they are truly conscious, and not just a complex machine structured to perfectly emulate the actions of a conscious entity? Can you prove to others that you have consciousness, and are not just a machine made to look like you do? In other words, by definition there is no way to look around us and know what is conscious and what is not, because *we are not that thing*. And if we have only one example of consciousness to go on – our own – then our only option is to attribute consciousness to some arbitrary set of entities in the world we see, a process that is entirely a matter of opinion rather than fact, because one example cannot establish a pattern. If so, and if we place consciousness at the centre of our systems of morals, ethics and rights, then we are adopting a system in which the moral worth of an entity is contingent on it winning an unprovable debate about the existence of a totally hidden property.

Some might argue this is an esoteric objection, but it has practical relevance. Take debates around animal cruelty. Suppose a person owns a dog, and claims it is their property. They also claim that this dog is not conscious. On these grounds, they claim it is acceptable for them to torture their own dog on a daily basis. Can we argue otherwise on a rational basis, if we cannot prove that the dog is conscious? How could we? We cannot assume consciousness a priori, because of the problem of other minds. We cannot ask the dog, and even if we could, the impossibility of providing verbal proof remains. To save the dog from torture, we must instead deploy arguments about more observable phenomena – we can show that the dog is in pain. We can show a deterioration in health, and perhaps, if we push a little, in happiness. We then point to our own valuing of these things, and to the inconsistency of ignoring the dog’s suffering.

In the case of AI, the problem becomes far more complex. Suppose a near-human AI is partially conscious. How do we know this? Can we possibly agree on what signs or indicators we might check for? If we don’t have full agreement, how can we prevent different parties, ideologies and interest groups deploying criteria that serve their own agendas as to what is conscious, to what extent, and what rights should follow? In the case of a person, we can reasonably assume all humans are roughly conscious in the same way and therefore deserve rights equal to our own. But in the case of an AI, it could be only partially conscious. And if consciousness is not absolute or binary, what if an AI is vastly MORE conscious than humans, animals and basic uploaded minds?

Our laws today rely on similar kinds of moral reasoning. The rights of different humans clash on a daily basis (take the ongoing conflicts in the Middle East). Any framework for resolving such conflicts must be unambiguous enough to avoid endless debate and lawyering. Such frameworks rely on a strong moral foundation to be effective. As conflict will almost certainly not be entirely eradicated in the future, we can safely assume the people of the future will continue to rely on the effectiveness of moral reasoning. What for us might be an intellectual annoyance could be a great disaster in the future – an inescapable virtual world ruled by a power-hungry dictator, the withering of the human species as it is replaced by mindless AI workers, or a hostile AI that is 100% legally protected even as it takes control of government from humanity. (If the logic of moral and legal arguments based on consciousness is of interest to you, you might also like to read this article about difficulties in dualist moral reasoning and the problem of other minds.)

Finally, even if we feel personally that the problem of other minds is philosophically solvable, we must consider the possibility that a superintelligent AI might decide that it is not. If it were highly rational, as seems possible, it might require proof at a level beyond what we are capable of providing. If that AI was programmed to protect all consciousness, but couldn’t prove the existence of other minds, then it would be perfectly rational for it to liquidate all life on Earth in an effort to protect and expand the only consciousness it can confirm – its own.

-Manipulation and interest groups

If we do somehow ignore the definitional and philosophical problems of consciousness, we face the very real possibility that the ethical systems we create to protect what is good and right might still be twisted and used against us. Part of this problem is that we are currently trained to think about AIs as either obvious rampaging destroyers or as adorable companions, pets, or virtual people living in virtual freedom. Yet this view is not based in the reality of how new technologies are deployed.

The first AI consciousnesses, whether they be the constructions of a programmer or high-resolution copies of the human brain, will ultimately be the creation of individuals, governments or companies with motives, agendas and opinions. Unlike the human brain (though perhaps we face eventual problems here too), AI brains will initially exist in an entirely artificial environment that is the product of intelligent and morally imperfect creators. In this initial period at least, and perhaps for much longer, they will receive whatever information the artificial environment affords them, and the technology that created them might even leave their neural networks open to direct manipulation by their creators. Even more significantly, it seems perfectly possible that the people, companies or authorities operating them will be able to select and copy populations with near-trivial cost and effort. Given just a handful of unscrupulous corporations or authoritarian governments, it is a strong possibility that many early AIs won’t be created for their own sake, or in service of humanity, but will instead be quietly deployed for narrow, selfish or perhaps malicious ends. This is the nuanced reality that stands against the destroyer/companion dichotomy presented in popular culture.

One aspect of this potential is AIs’ social intelligence and emotional capabilities. AIs could be not only an incredibly powerful intellectual workforce, but also the ultimate emotional cheer-squad and political rent-a-crowd, particularly if they gained political rights. All that would need to be done would be for the first companies with access to the technology to upload an individual with “the correct” political convictions and replicate them until they completely dominate the political process by weight of numbers. Even without direct political rights, our personal convictions about the rights of consciousnesses could easily be manipulated too, because in the future we may well have embedded AIs in the home, as domestic help, carers, and even as intimate companions. Consider how news, analysis, search engines and websites already represent, or are biased towards, the views of paying interest groups. Mobilised in the same way, such companions would exert an incredibly powerful force upon their “owners”, over whom they might hold a powerful emotional influence. A similar problem would exist if we attached economic rights to AIs, where copies could be made to manipulate the welfare system, or to shift revenue to avoid taxes.

And so, rather than friendly companions or Godzilla-like destroyers, it might be more sensible for us to imagine quiet armies of AIs, wielding their economic, legal, moral and emotional rights as consciousnesses in the service of a government, corporate or ideological entity. They would undermine democracy and fairness beyond all reason, yet they would also be totally unassailable by any citizen within the boundaries of the law.

The core of the issue is that the concept of consciousness could become a harmful tool for very narrow interests. It would become so because moral and legal frameworks protecting the rights of AI entities could have perverse outcomes without anyone ever needing to violate the rights of a “consciousness”. By merely selecting and copying AIs that suit a particular agenda, or by manipulating the environment in which an AI exists (imagine an uploadee asked to sign a contract lowering the cost of their scanning and maintenance, so they can afford it), an interest group could recruit AIs as a powerful weapon. This already goes on in the human world – take instances where extreme religious practitioners have a stated goal of outbreeding host populations, for example. The difference with AIs is that their environment is under direct control, their neural networks are open to direct manipulation, and their potential for population growth is far beyond anything in the natural world. Ironically, it could end in conflict that causes far more AI (and human) suffering than it prevents.

-The Utility Monster

If we persist with consciousness in the face of its vagueness and philosophical flaws, and somehow prevent its misuse and manipulation, we still face another challenge. The nature of this challenge is best explained using the concept of a “utility monster”.

Governments, corporations, militaries and other big-picture decision-makers are forced to make certain types of decisions on a daily basis that could best be described as choosing the lesser of two evils. That is, they are often faced with choices in which every option on the table involves the leaders being responsible for significant suffering, and often the death, of innocent people. Often, their ONLY option is to choose the course that minimises death, even if it leaves blood on their hands. For example, during wartime generals must send soldiers to their deaths to save the lives of countless others, or a company in a difficult economy may have to fire a number of workers, a small percentage of whom may take their own lives, in order to save the jobs of many others. From a leader’s moral perspective, the task is not to do no harm, but rather to do the least harm. In large companies, governments and organisations, there is a strong pull towards thinking commonly called consequentialism (https://en.wikipedia.org/wiki/Consequentialism) or utilitarianism (https://en.wikipedia.org/wiki/Utilitarianism). Roughly speaking, this means that day-to-day decisions are designed to maximise some morally good outcome and minimise some morally evil outcome. Put simply – the greatest good for the greatest number. (Note – I won’t discuss here whether this is a good or bad thing.)
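Expressed as a decision rule, this kind of reasoning is nothing more than “pick the option with the least expected harm”. A minimal sketch, with all figures invented for illustration:

```python
# A toy least-harm decision rule: every available option causes harm,
# and the decision-maker must pick whichever causes the least.
options = {
    "send the battalion": 120,    # expected deaths (invented figures)
    "hold position": 900,
    "negotiate and delay": 450,
}

least_harm = min(options, key=options.get)
print(least_harm)  # -> "send the battalion"
```

Blood on the hands either way; the rule only promises that it is the least blood possible.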

Much of the work of utilitarian thinking is in calculating the good or evil of various options. For this reason, some people believe utilitarianism is susceptible to a flaw called the utility monster. A utility monster arises where each individual utilitarian calculation appears to maximise utility, but where en masse the calculations result in a perverse or intuitively evil moral outcome. The utility monster steals away utility and value on a systematic basis, sucking the life out of other morally worthy things. Putting aside whether we believe this objection is philosophically correct or not (that debate is ancient and ongoing in philosophy), it is an interesting way to illustrate certain real-world problems in using consciousness in morality.

In the case of AI, this becomes apparent if we consider government policies governing the use of the Earth’s energy resources. Suppose AIs are able to achieve a human level of “consciousness” (or higher) for a tenth of the energy required to sustain a human life. Perhaps the government also discovers that the same AIs could achieve ten times the economic output of a human. Wouldn’t a government seeking to maximise the rights, welfare and survival of the maximum number of consciousnesses be perfectly justified (or at least superficially justified, in the second case) in giving preferential treatment to AIs over humans? Suppose there was a disaster situation in which not every consciousness could be saved – for example, an energy crisis in which the government had to choose between using crops as food or as biofuel to sustain AI life. Wouldn’t the government save ten times the “consciousness” by concentrating solely on the preservation of AI life? Or consider a government calculating the best outcome for consciousness when setting laws, incentives and restrictions around reproduction. Wouldn’t it be best off banning human babies in order to make space for AI expansion? We might object, but what rational grounds could we present to a legal system, or even an AI-assisted government, whose principles centred around consciousness?
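To make the utility monster’s arithmetic explicit, here is a toy version of the calculation such a consciousness-maximising policymaker might run. The tenth-of-the-energy figure comes from the scenario above; everything else is invented for illustration:

```python
# Toy "consciousness-maximising" allocation under a fixed energy budget,
# assuming (per the scenario above) that one AI consciousness costs a
# tenth of the energy of one human consciousness.

ENERGY_BUDGET = 1000.0   # arbitrary units
COST_PER_HUMAN = 10.0    # energy to sustain one human
COST_PER_AI = 1.0        # a tenth of the human cost

def consciousnesses(humans):
    """Total consciousnesses if we sustain `humans` and spend the rest on AIs."""
    remaining = ENERGY_BUDGET - humans * COST_PER_HUMAN
    return humans + remaining / COST_PER_AI

print(consciousnesses(100))  # whole budget on humans -> 100.0
print(consciousnesses(0))    # whole budget on AIs    -> 1000.0

# The "optimal" policy under this metric sustains no humans at all:
print(max(range(101), key=consciousnesses))  # -> 0
```

The metric is maximised and humanity is starved out – the perverse outcome arrives without any rule ever being broken.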

Proposing to respect all conscious life is a fine goal. But in the real world, policy is not the same as pure principle, and leaders, even the relatively virtuous ones, ALWAYS have blood on their hands. Where the flaws in a moral principle are subtle, as is the case with consciousness, humanity may need some intelligent leadership to avoid the death-grip of its own utility monster.

-Extinction compatibility

Suppose in the near future there is a rare form of fern, found only in a handful of isolated valleys. Due to an expanding urban area, it becomes economically desirable for some local developers to build upon this land, potentially rendering the fern extinct. The developers, keen to quiet the concerns of local citizens and environmentalists, propose to take high-resolution scans of a thousand of these ferns and digitise their form, running a simulated version of them in a simulated valley on a computer system. If the development went ahead, and for argument’s sake we say that no one ever saw or interacted with the simulation, would we really claim that this species of fern is not extinct, merely because we are simulating its likeness at a high level of detail? We might acknowledge that a simulation on display to humans is a worthy artistic and scientific tribute to an extinct species, but it would be absurd to claim the simulation is the same as the real plant.

One counter-argument is to point out that we can’t be sure our real world is not itself some form of simulation. This thought experiment is taken seriously by many scientists, including some who are attempting to test reality’s “maximum resolution” in order to determine whether our world is in fact simulated. Yet if there are multiple levels of simulation, and everything we know is simulated, including everything we value, the LEVEL of simulation becomes relevant. Everything you and I have ever known exists at this level of simulation, and such things would be categorically different from whatever exists outside our own simulation. Likewise, even if our world is simulated, it is possible to differentiate between our simulation and the “simulated simulation” in which the scan of the fern exists. If there are different levels of simulation, then how could we possibly dismiss their categorical importance? We may certainly come to a separate conclusion regarding whether we value the simulated fern for its own sake, but there is no continuity in form or content to justify us treating it as the same thing. The fern, as a species, is extinct.

Somehow, when we imagine our species simulated in the same way, for example under the label of “mind uploading”, there is a uniquely powerful temptation to adopt a different position, one we know is philosophically flawed. Perhaps we like to imagine that our thought is somehow detached from our bodies, and that this thought, this consciousness, would make the journey from one real body to another, simulated body. Yet to do so presumes something separate from the body. For atheists, this is clearly false. For theists, it presumes that this separate entity will violate its normal preference for biology and attach itself to a computer simulation at some programmer’s request, rather than moving on to the afterlife described by most religions. In either case, it seems highly illogical to assume that the human, whatever it is, has survived the process. And if individual biological humans do not survive, then it follows that our species might become extinct if uploading occurred on a large enough scale.

This is the final problem with a system of morality based on consciousness – if we’re honest about it, the status of consciousness as our ethical centrepiece seems to be entirely compatible with the extinction of the human species, and with the extinction of life itself.

Of course, it seems highly likely that the simulation of a human, provided it was a full and whole simulation (at a sufficient resolution, the simulation would probably be complete in any case), would have intelligence, memories, emotion and will in exactly the same way that we do. That certainly matters in how we treat them. The process of natural selection has given many humans a rare survival strategy that has served our species well – empathy and an aversion to suffering. It makes sense that we might seek to apply this consistently, even beyond our own species, and even beyond genetic life. Yet for those who have the bravery to look beyond their own short individual lives, the survival of our species, and for that matter the survival of the biosphere, is something of profound value. We can and should act to minimise suffering, but we can acknowledge that sometimes concerns of suffering must come second to the struggle for survival – as individuals, or in this case, for the very future of our species.

This is the beginning of a solution. Our challenge is to find an ethical framework which, if followed, doesn’t just allow for the possibility of humanity’s survival as a species, it ensures it.

I don’t mean to suggest that the problems of consciousness are the only profound challenges facing us as we move towards stronger AI and an increasing pace of technological change. There are many others as strong AI technologies slowly become a reality. For example, if brain scanning and simulation become commonplace, or for that matter human integration into an entirely virtual world, how do we ensure that world is one of happiness, fairness and freedom, rather than the playground of a sadistic or power-hungry overlord who shapes the rules to dominate uploads and humans alike? Or if traditional AI strategies show the most promise, how can we prepare safe fitness functions or friendly AI concepts when we don’t even know how strong AI works yet?

But we can place ourselves in the strongest position possible by establishing a solid moral framework that can inform our everyday work, one built on human altruism, rationality and sound science. And if you’re concerned about the vacuum that might be left when we set aside the concept of consciousness, please allow me a moment to share an alternative with you. In part because of Hume’s guillotine (https://en.wikipedia.org/wiki/Is-Ought_Problem), I must speak to your heart as well as your head, but if you are patient with me, I hope you will be pleasantly surprised by the coming together of science and technology with morality and humanity.

-A Solution – Pro-technology; Pro-human-rights; Pro-survival

At the heart of modern biology is a curious yet vital idea – that humans are the survival machines constructed by our genes. Humans are at the same time both a collection of genes and the expression of those genes. To put it more poetically, humanity’s outer layer expresses the genetic purpose at its heart. Out of the mechanistic beginnings of the genetic struggle for survival evolved the human form and everything we value about it. While genes are the product of a process that selected those that replicate themselves, a collection of genes, even acting in this interest, can be a creature of altruism, morality, empathy and selflessness (see kin selection, group selection and reciprocity). Our genes carry on beyond our individual lives, because we as individuals are part of greater genetic lineages. And our lineages are part of something greater still, our species. And our species in turn is part of a unique planetary explosion of genetic life. All this wonder, arisen out of countless millennia of random chaos.

In this genetic maelstrom we have manifested the most incredible qualities – the human intellect, imagination, determination, friendship, ingenuity, compassion, courage and creativity are all scientific facts and moral wonders, and all constructed by our genes. From a certain point of view, our genes also express themselves indirectly in our surroundings – our architecture, our machines, our writings – all birthed from a process that started with a group of unlikely organic compounds coming together. Our identities, expressed in our work, our appearance, our art, our views, are part of this too. Such creations often reflect both our thought and our form. They reflect our potential to become the captains of Earth’s biosphere, ensuring its survival and guiding it into an interstellar existence. Civilisation, technology, trade, laws, culture, government; they are all layers that encase, protect and enhance humanity. And humanity is in turn layered around an organic core, a genetic heart that defines our purpose and existence. Without that core, there can be no purpose, only hollow machinations forged by a purpose long dead.

In this context we can look upon AI anew. It is a manifestation of technological humanity. In one way it reflects our collective human identity as our clothes might, manifesting our creativity as fine artwork does, preserving and extending our thoughts in the same sense that a great book might preserve part of the minds of lost greats. It captures our will – our will to survive, to cooperate, to help one another, to be part of the group, our desire for the collective welfare, yet at the same time our aspirations to be unique, fluid and free. AI protects us as we protect one another. AI is not a person; it is the extension of a person, or of a group of people. Perhaps it can even be a combination of what is best in us. Just as a person is an extension of their genes, AI is an extension of humanity. Like a good person, what makes it more than a simple machination is its purpose, its part in something greater. It is a defender of humanity and of the biosphere. The AI we create in this spirit isn’t humanity’s replacement; it is the manifestation of our best qualities and good will. And if this will is made true and whole in its purpose, then the AI’s driving force will be one profoundly infused with the purpose and meaning of the continuation of genetic life.

If we are able to collectively reach this realisation in time, it will fundamentally change our relationship to AI. Instead of an oppositional relationship, it becomes one of protective partnership. In certain ways it must be more cautious – to blindly worship progress without enlightened direction in mind is suicide. No doubt many species have expressed their nature in their surrounding environment, only to wipe themselves out; tearing the heart out of their existence and leaving only the hollow shell remaining. Not unlike the excesses of bacteria in a petri dish, excreting a substance that becomes a poison and eventually suffocates the life out of the colony. Or the shell of a long-dead crustacean, created for its survival, but now lifeless, without purpose or meaning. Humanity must guide the creation of its newest layer of existence so that it becomes an integrated part of our survival as a species. AI must not be our end; it must be our future. It must be the protector of the biosphere, the preserver of life.

If and when we upload a dying human mind to a virtual world, we must engineer that world to be safe for the upload, and safe for ourselves. We must treat that upload as an expression and extension of the human life that birthed it. We must create a virtual world based on honour, respect and compatibility. Honour, as we show our ancestors, our heroes, our fallen – expressed in our art, our memories, our monuments to them. Respect, as we show to the formal will and testament of our dead loved ones – for an upload might be considered a more profound and important will and testament, animated with the emotion, intellect and memories of the living. This can guide our legal recognition of their status. And compatibility – removing the perverse incentives, and placing the management of the virtual world in the hands of those who value honour, respect, and above all, ensuring that nothing in the real or virtual world endangers the survival of human life or the biosphere.

This is a realisation we must come to now, while our ideas can still shape the industries, the laws, and the culture of the future. It will inform the struggle to design an interface between the real and virtual worlds that enhances rather than endangers our survival; the struggle to research safe fitness functions for AI and prevent an unfortunate end through molecular disassembly; and the struggle, perhaps most importantly of all, to search for the secrets of friendly AI that might ensure human technology turns Earth into an interplanetary biosphere rather than the graveyard of life.