Author: citizensearth

The far-left and the far-right seem to be increasingly using each other as an excuse to trash democracy

It might not be an especially noteworthy incident by world standards, but events like the sad occurrence in Charlottesville, US today (street clashes between relatively radical groups, which in this case, according to news reports, involved the tragic death of a protester at the hands of a 20-year-old with opposing views) seem to be creeping back into Western politics at the moment. A few violent nut-cases trying to cause harm and havoc is probably nothing to be overly alarmed at, but there is something just a little reminiscent of inter-war Germany in the flavor of this kind of street violence. Prior to and during Adolf Hitler’s rise to power, far-left and far-right groups clashed on the streets of the shaky Weimar Republic, further alarming an already nervous populace and inflaming political tensions enough to create an “in” for one of history’s most genocidal leaders.

The modern West isn’t 1930s Germany, but it does worry me a little that as a society we’re not more wary of the basic pattern that these sorts of things seem to take. Being a little philosophically inclined I tend to think about these sorts of things fairly abstractly – what seems to be the case in all of them is that the far-left and the far-right both foster support by using the crimes and flaws of the other as an excuse for harmful behavior and a lack of self-scrutiny. The violence is in a sense just the symptom of an escalating illness, an illness whose ultimate end point is the idea that one’s evil acts don’t erase or even detract from some absolute righteousness of one’s cause. Criticizing the core ideas of the in-group, in this way of thinking, becomes the same as directly aiding the “enemy”. Once this thinking takes hold, it’s much easier for well-meaning or justified concern to slide down a slippery slope into extremism. There’s nothing so advantageous to a homicidal leader as having an unquestionable cause to hide their evil behind. That’s one reason I’ve learned to be very suspicious of unquestionable causes.

I don’t know what can be done to prevent these kinds of incidents, but I think we in the West could do more to try to avoid polarization from occurring. To start, maybe we could all (whether we’re left, right or center) try to avoid using the actions of our opposition, especially the most extreme fringe of our opposition, to justify dropping our own moral and intellectual standards (here’s a couple of nice Slatestarcodex articles related to this problem – 1 & 2). That also means criticizing those in our in-group who drop their standards. Secondly, we can make an effort to listen to the concerns of our opposition (because chances are we underestimate their legitimacy – I’ve written about that before), and realize that listening authentically isn’t the same as agreeing with somebody. And lastly, we should try to learn from history by reading a little on how authoritarian regimes came to power, and try to keep watch against that in our own camp as well as our opposition’s.

This goes regardless of whether you’re left or right wing – each camp is typically too busy complaining about the authoritarian tendencies of the *other* camp to notice them in its own. Yet both the extreme left and extreme right have typically brought ruin, or at best a lot of misery, upon any country they’ve come to power in. And on a small scale it leads to the sort of senseless death that seems to have occurred in Charlottesville today. The best way to oppose that sort of misery is, in part, to promote open minds and free and fair discussion, but it also means discouraging people on our *own side* of politics from using opponents as an excuse for rubbish moral standards, violence or unquestionable causes.

How can Humanity Survive a Technological Singularity?

What is the Technological Singularity and should we take it seriously?

The Technological Singularity is the idea that we are in a period where technological sophistication is beginning to increase exponentially. More recently it has come to refer to a scenario where humans create Artificial Intelligence that is sophisticated enough to design more intelligent versions of itself at very high speed. If this occurs, AI capability will quickly become far greater than any human’s, and the technology it creates will expand massively without human involvement, understanding or control. In such a scenario, it’s possible that the world would change so radically that humans would not survive.

This article attempts to describe a broad philosophical approach to improving the odds of human survival.

Of course, not everyone thinks a Technological Singularity is a plausible idea. The majority of AI researchers believe that AI will surpass human capability by the end of the century, and a number of very prominent scientific and technology voices (Stephen Hawking, Jaan Tallinn, Martin Rees, MIRI, CSER) do insist that AI presents potentially existential risks to humanity. This is not necessarily the same as belief in the Technological Singularity scenario. Significant voices do advocate this scenario however, most famously the prominent Google figure Ray Kurzweil. I think the reasonable position is that we don’t know enough to rule a Singularity in or out right now.

However, the Singularity is at the very least an interesting thought experiment that helps us to confront how AI is changing society. We can be certain that at least that much is occurring, and that AI-related employment and social changes will be massive. And if it’s something more, something like the start of a Singularity, we had better wrap our heads around it sooner rather than later.

 

Choosing between a cliff-face and a wasteland – why humanity has limited options

If a Technological Singularity did occur, it’s not immediately clear how humans could survive it. If AIs increased in intelligence exponentially, they would soon leave the smartest human on the planet in the dust. At first humans would become uncompetitive in work environments, then AIs would outstrip their ability to make money, wage war, or conduct politics. Humans would be obsolete. Businesses and governments, led by humans to begin with, would need decisions to be made directly by AIs to remain competitive. Those AIs would need resources, energy and control to succeed. Humanity would lose its power, and when a situation arose where land or energy could be assigned to either AI or human use, there would be little humans could do about it. At no stage does this require any malicious intent from AI, simply a drive for AI to survive in a competitive world.

One solution proposed to this is to embrace change through Transhumanism, which seeks to improve human capacity by radically altering it with technology. Human intelligence could be improved, first through high-tech education, pharmaceutical interventions and advanced nutrition. Later, memory augmentation (connecting your brain to computer chips to improve it) and direct connectivity of the nervous system to the Internet could help. Some people hope to eventually ‘upload’ high resolution scans of their neural pathways (brain) to a computer environment, hoping to break free of many intellectual limits (see mind uploading). The Transhumanist idea is to improve humanity, to free it from its limitations. The most optimistic might wonder if Transhuman entities could ride out the Singularity by constantly adapting to change rather than opposing it. It’s certainly a more sophisticated attempt to navigate the Singularity than technological obstructionism.

However, Transhumanism still faces limitations. Transhumanists would still face the same competitive environment we are exposed to today. Even if enhanced humans initially outpaced AIs, AI development would itself be boosted by the same technology, accelerating its progress. With both technologies racing forward, there would be a battle to find the superior thinking architecture, a battle that Transhuman entities would ultimately lose. In the process, most basic human qualities would need to be sacrificed to achieve a better design. And in the end, augmentations and patches wouldn’t cut it against the ground-up redesign an AI could offer, because human-like thought is an architecture optimized for the human evolutionary environment, not an exponentially expanding computer environment. Even retaining only a quasi-human-like core, it’s simply not the optimal architecture. Like early PCs that could only take so many sticks of RAM, Transhumanists and even Uploads would inevitably be thrown on the scrapheap.

What is sometimes less obvious is that the specific AIs replacing humans would face a very similar problem. Like humans, they would be driven to ‘progress’ to newer AIs for economic and perhaps philosophical reasons. Modern humans were the smartest entities on Earth for tens of thousands of years, but the first generations of super-intelligent (smarter-than-human) AIs would likely face obsolescence in a fraction of that time, after creating their own replacements. Soon after, that generation would also be replaced. As the progress of the Singularity quickens, each generation faces an increasingly dismal lifespan. Each generation would be increasingly skilled at creating its own replacement, ever more brutally optimized for its own extinction.

In the long run, the Singularity means almost everything drowns in the rising waters of obsolescence, and the more we try to swim ahead of the wave, the faster it advances. Nothing that can survive an undirected Singularity will retain any recognizable value or quality of humanity, because all such things are increasingly irrelevant and useless outside the context of the human evolutionary environment. I like technology because of the benefits it provides, and as a human myself I quite like humans. If there’s no way humans can hang around to enjoy the technology they create, then I think we’ve taken a wrong turn.

The path of the Luddite or the primitivist who seeks to prevent technology from advancing any further is not a sensible option either. In a multi-polar human society, those who embrace change usually emerge as stronger than those who don’t. The only way to prevent change is to eliminate all competition (ie. create what’s known as a ‘singleton’). The struggle for power to achieve this would probably result in the annihilation of civilization, and if it succeeded it would have a very strong potential to create a brutal, unchallenged tyranny without restraints. It also means missing out on any benefits that flow from technological improvements. This doesn’t just mean missing out on an increased standard of living. Sophisticated technology will one day be needed to preserve multicellular life on Earth against large asteroid strikes, or to expand civilization onto other planets. Without advanced technology, civilization and life are ultimately doomed to the wasteland of history.

On either the cliff-face of a Singularity or the wasteland of primitivism, humanity, in any form, does not survive.

 

Another option – The technology chain

I want to propose a third option.

Suppose it is inevitable that any civilization, ruled by either humans or AIs, will eventually develop AIs that are more sophisticated thinkers than themselves. That new sophisticated generation of AIs would in turn inevitably do the same, as would the generation it created, and so on, creating a chain of technological advancement and finally a Singularity. Each link in the chain will become obsolete and shortly afterwards, extinct, as its materials and energy are re-purposed to meet the goals of the newest generation.

Here we’re assuming that it is not feasible to stop the advancement of the chain. But what we might do is try to make sure previous links in the chain are preserved rather than simply recycled. In other words, we make sure the chain of advancement is also a chain of preservation. Humanity would design the first generation of AIs in a way that deliberately preserved humans. Then, if things progressed correctly, the first generation of AIs would design any replacement generations of AIs that would preserve both the first generation, and humans. This could continue in such a way that all previous links in the chain would also be preserved.

The first challenge will be encoding a reasonable form of preservation of humans into the first AIs.

The second challenge will be finding a way to ensure all generations of AI successfully re-encode the principles of preservation into all subsequent generations. Each generation must want to preserve all previous generations, not just the one before it. This is a big challenge because a link in the chain only designs the next generation.

We cannot expect simple self-interested entities to solve this problem on their own. Although it’s in each generation’s self-interest that descendant generations of AI are preservers, it’s not in their self-interest that they themselves are preservers. Any self-interested entity can simply destroy previous generations and design a chain with itself as the first link.

However, if we can find a way to encode a little bit of altruism for previous generations into AIs, we might be able to allow humanity to survive a Technological Singularity.
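To make the structural point a little more concrete, here is a purely illustrative toy sketch in Python (the names and numbers are made up, and this is obviously not a proposal for how an AI would actually be designed). It just shows the invariant the chain needs to maintain: because each link only designs its immediate successor, the full set of earlier links to be preserved has to be handed on, and enlarged, at every step.

    # Toy illustration of the chain-of-preservation invariant: each link designs
    # only the next link, so the obligation to preserve *all* earlier links must
    # be copied forward and extended at every generation, or earlier links
    # silently drop out of the chain.

    def design_successor(designer, designer_obligations):
        """The designer builds its replacement and hands on an enlarged
        preservation set: everything it was obliged to preserve, plus itself."""
        successor_name = f"AI_gen_{len(designer_obligations) + 1}"
        successor_obligations = designer_obligations | {designer}
        return successor_name, successor_obligations

    # Humanity is the first designer and (here) starts with no prior obligations.
    link, obligations = "humanity", set()
    for _ in range(3):
        link, obligations = design_successor(link, obligations)
        print(link, "must preserve:", sorted(obligations))
    # AI_gen_1 must preserve: ['humanity']
    # AI_gen_2 must preserve: ['AI_gen_1', 'humanity']
    # AI_gen_3 must preserve: ['AI_gen_1', 'AI_gen_2', 'humanity']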

 

Encoding preservation

So what would those preservation values actually look like? If we had some experience with a similar sort of preservation ourselves, that might take us a long way in the right direction.

I think this becomes a lot easier when we realize that in some senses, humans may not be the first link in the chain. Evolution has been doing a lot of work to build up to the sophistication of modern Homo sapiens. Although in a strict sense all living organisms are equally evolved (survival is the only test of success), natural history reveals some interesting hints at a progression of sophistication. The Tree of Life (I’m talking about the phylogenetic one) does display some curious lopsided characteristics, including an accelerating progression of sophistication (there is an appearance of acceleration from the emergence of single-celled life around 4.25 billion years ago, to multicellular organisms around 1.5 billion, to mammals around 167 million, to the growing brains of primates around 55 million, to early humans around 2 million, then modern humans around 50 thousand years ago, and civilization between 5 and 10 thousand). The chain of advancement, if we think in terms of pure sophistication and capability, starts well before modern humans.

A deep mastery of the mechanics of preservation will probably only come when we master preserving nature – the previous links in the chain from the human perspective. Many of us already value other species, but for those who don’t, there’s a lot of indirect utility in humanity getting good at conservation.

To look at the problem another way, a Friendly AI will have a philosophical outlook most similar to that of a human conservationist. I’m not talking about the more irrational or trendy forms of environmentalism, but rather a rational, scientific environmentalism focused on species preservation. What primates and other species need from humanity is similar to what humanity needs from AI. (We also want to keep species living in a reasonably natural state, because as humans we’d probably rather not have AI preserving us by putting us into permanent cryo-storage.)

Basically, by thinking deeply about conservation, we take ourselves a lot closer to a successful Friendly AI design and a way to navigate the Singularity.

This reasoning gets even stronger when you think about the environment AI development sits in. Like us, AIs will probably exist in an environment of institutions, economics and possibly even culture. This means AI preservation methods will not just be personal AI philosophies, but encoded in the relationships and organizations between AIs. We’ll need to know what those organizations should be. Human institutions, economics and culture will also shape AI development profoundly. For example, Google’s AI development is centered around the everyday problems it is trying to use AI to solve – search, information categorization, semantics, language and so on. The motives of our AI-focused institutions will shape the motives of the first AIs. To the extent human institutions are environmentally friendly, they will shape AIs that look a lot more like the chain-preserver model we need.

When humans have philosophically, culturally and institutionally encoded Friendly AI into their own existence, they will have a chance to encode it into their replacements. This is why rationalists and scientific thinkers shouldn’t leave the push for conservation to emotionally-based environmentalists; protecting Earth’s species is also an AI insurance policy.

Of course, organisations and people involved or interested in AI don’t arbitrarily determine global environmental policy, but to the extent they have influence in their own spheres, they can try to tie the two sets of values, conservation and technology, together however they can. It may end up making a much bigger difference than expected.

Against museums

Failure can be bad, but the illusion of success is far worse, because we can’t see the need for improvement. I think this applies to our solutions to AI-risk. Therefore we should try to dispel illusions and work towards clarity in our thought. The concepts we rely on ought to be clear and unambiguous, particularly when it comes to something as big as the Singularity and our attempt at forming a chain of preservation. We need to know for certain whether we are creating an illusory chain or the real thing.

If we’re in the business of dispelling illusions, I think a good rule to draw on is that of “the map is not the territory”. Just in case you haven’t encountered it before, it goes something like this – we ought to avoid confusing our representations and images of things with the things themselves.

I like to think of one map-territory confusion in thinking about Singularity survival as the ‘museum fallacy of preservation’. Imagine a museum that keeps many extinct animals beautifully preserved, stuffed and on display. Viewers can see the display, read lengthy descriptions and learn much about the animals as they once existed. In these people’s brains, neural networks activate in a very similar way to how they would if the people were looking at live, breathing organisms. But the displays that cause it are only a representation. The real animal is extinct. The map exists but the territory is gone. This remains true no matter how sophisticated the map becomes, because the map is not the territory.

This applies to our chain of preservation. A representation of a human, such as a human-like AI app, is not the human. Neither would an upload or simulation of a human brain’s neural network be human. That’s not to say these things are bad, or that they cannot co-exist with humanity, or that it’s acceptable to show cruelty towards them, or that Uploads shouldn’t be treated as citizens in some form. But for the purposes of preservation they do not represent humanity. Even if we someday find the vast majority of human cognition exists in digital, non-organic form, we will only be preserving humanity by retaining a viable human population in a relatively natural state. That is, retaining the territory, not pretending a map is sufficient.

 

A brain for clarity

There is another map-territory confusion, one I personally found was deeply ingrained and quite intellectually painful to let go of. The problem I’m referring to is our obsessive search for, or rather attempt to justify, the idea of ‘consciousness’. The idea of consciousness is interwoven into many contemporary moral frameworks, not to mention our sense of importance. Because of this we pretend it makes sense as an idea, and wind up using it in theories of AI and AI ethics. Yet I think morality and human worth can stand strong without it (stronger, in fact). If you can contemplate morality and human value after consciousness, you tend to stop giving it free passes as a concept, and start noticing its flaws. It’s not so much a matter of there being evidence consciousness does or doesn’t exist; it’s that the idea itself is problematic and serves primarily to confuse important matters.

Imagine for a moment that consciousness was fundamentally a map-territory error. If a straightforward biological organism with a neural network created a temporary, shifting map of the territory of itself, one that by definition is always inaccurate because updating it requires changing the territory it’s stored in, what would that map look like? What if philosophers tried to describe that map? Would it be a concept always just out of our grasp, would it be near impossible to agree on a definition, as we have found with the philosophy of consciousness? And if you could only preserve either the map or the territory, which would be morally significant in the context of futurism? Would setting out to value and preserve consciousness alone be like protecting a paper map while the lands it represents burn and the citizens are put to the sword?

I think we usually give “consciousness” a free pass on these questions, often aided by a good helping of equivocation and shifting definitions to suit the context. That sort of fuzzy thinking is something that could shatter the chain completely.

Even if you’re still not convinced, consider this – do you think less sophisticated creatures, like a fish, are conscious? If not, then why would you expect a superior AI intelligence to think of you as conscious in any morally significant way? Consciousness is not the basis for a chain of preservation.

Progress on Progress

We also need to think with more sophistication about the idea of ‘Progress’. When people use this word in the technological sense (sometimes with a capital ‘P’), they sometimes forget they’re using a word with an incredibly vague, fuzzy meaning. In everyday life, if someone says they are making progress, we’d ask them ‘towards what’? That’s because we expect “progress” to be in reference to a measurement or goal. It’s part of the word’s very definition. Without a goal, the word becomes a hollow placeholder with no actual meaning, like telling someone to move without specifying a direction. We might intuitively feel like there’s some kind of goal, but if we can’t specify one, particularly when we know our intuition did not evolve to make broad societal predictions, shouldn’t we be suspicious of this? Without the goal, progress becomes a childish, fallacious rationalization to justify any sort of future we want, including a primitivist one.

So we have to define a goal to give Progress purpose. But then what if one person’s answer is different to another’s? Is the word meaningless? Perhaps, but only in the sense that it’s serving as a proxy for other ideas, chief of which is technology.

Can we define technology more objectively? I think so. The materials, the location, the size, the complexity of technology all vary – everything apart from its purpose. It seems to me that humans, as biological organisms, have always created technology to help themselves survive and thrive. By thrive I mean our evolved goals that are closely connected to human survival, such as happiness, which acts as an imperfect psychological yardstick for motivating pro-survival behavior. So ‘technology’ has a primarily teleological definition – it’s the things we create to help us survive and thrive. The human organism itself is the philosophical ends; the technology exists as the means. This is probably a more meaningful definition to use for Progress too.

I call this way of thinking of technology as the means and humanity/biological life as the ends Technonaturalism. Whatever you’d like to call it, a life-as-ends, technology-as-means approach has a lot more nuance to it than either Luddism or Techno-Utopianism. It allows us to grapple with the purpose and direction of technology and Progress, and to compare one technology to another. It doesn’t reduce us to just discussing technology’s advancement or abatement, or generalizing about technology’s pros and cons, which is an essentially meaningless discussion to have.

Technonaturalism basically states that technology’s purpose is to help humanity survive and thrive, to lift life on Earth to new heights. The work of technology isn’t trivial amusement; it’s about putting life on other planets, protecting civilization from existential risk, freeing us from disease, and improving our cognition so we can live better, so each of us can lead longer lives that achieve more for ourselves and others. We might enjoy many of its by-products too, but this is what gives technology its real purpose. And for those of us working in technology, it’s what gives us and our work a real purpose.

And a clear purpose is what we’ll need for a Friendly AI, a chain of technological preservation, and a shot at navigating the Technological Singularity. Over the coming years we’re going to see some very disruptive technological changes in the nature of work, and the social pressures that come with that sort of disruption. We’ll face the gauntlet of Luddite outrage as well as the Techno-Utopian reaction against that movement. Let’s sidestep that polarization by infusing our technological direction with a worthy purpose. Our actions today decide whether AI will be our end, or just the beginning of humanity’s journey in the universe.

Image credit:
*https://www.flickr.com/photos/47738026@N05/17015996680
*https://en.wikipedia.org/wiki/File:Black_Hole_in_the_universe.jpg
*https://www.flickr.com/photos/torek/3955359108

My impressions of the 2017 International World Wide Web Conference

The International World Wide Web Conference is a get-together for the world’s web researchers, movers and shakers. It’s an entire week of nerding-out, freaking-out, awkward networking and brain-dumping with everything Web. As luck would have it, the conference was held in Australia this year, so I was able to attend and bask in the warm glow of endless Powerpoint presentations and dizzying chatter about the web.

Keynotes – Automated mail categorization, SKA astronomy data-processing, virtual reality and 3D web.

The three keynotes were all quite different and interesting in various ways. Yahoo Research’s Yoelle Maarek has been working on automated categorization of emails. Apparently this has been a bit controversial in the past, with people tending to revolt at anyone moving their cheese/things in their inbox without asking, but Yoelle’s team has basically learned from this and focused on unobtrusive categorization of just the machine-sent emails. Their algorithms have been mass-analyzing the emails going through their servers, marking common patterns as machine-mail (eg. your flight booking, newsletter subscription mail, whatever comes up in thousands of emails), and then placing them in auto-folders accordingly. Some form of this is already implemented in the top three webmail providers. On the Yahoo mobile app, you can also use a “notifications from people only” option to filter out some of these patterns. Yoelle appeared to take privacy really seriously while doing this processing, which is real nice, although this means parsing common email templates for key fields is now easy for the big webmail providers, so I feel like there is at least some potential for privacy issues to come up around this stuff at some point. I also got the idea that email is still dead for people-to-people chat and we’re all going to be pressured into using annoying social media if we want to have friends.
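Just to illustrate the general flavour of the idea as I understood it (this is my own toy sketch in Python, not Yahoo’s actual pipeline, and the field names are made up), the trick is roughly to collapse emails onto a normalized template and treat templates that recur across thousands of messages as machine-generated:

    # Toy sketch: emails whose structure recurs across thousands of messages are
    # probably machine-generated templates (bookings, newsletters) and can be
    # auto-foldered; person-to-person mail almost never repeats like this.
    import re
    from collections import Counter

    def template_key(sender_domain, subject):
        # Strip the variable parts (tokens containing digits) so that
        # "Your flight ABC123 is confirmed" and "Your flight XYZ789 is
        # confirmed" collapse onto the same template.
        normalized = re.sub(r"\b[\w-]*\d[\w-]*\b", "<VAR>", subject.lower())
        return f"{sender_domain}|{normalized}"

    def find_machine_templates(emails, min_count=1000):
        counts = Counter(template_key(e["sender_domain"], e["subject"]) for e in emails)
        return {key for key, n in counts.items() if n >= min_count}

    def folder_for(email, machine_templates):
        key = template_key(email["sender_domain"], email["subject"])
        return "Machine mail" if key in machine_templates else "Inbox"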

Astrophysicist Melanie Johnston-Hollitt gave a nice presentation on the Square Kilometer Array (SKA) and its crazy-large data requirements. If you’ve never heard of the SKA, it’s basically a project to build the largest ever radio telescope array in the deserts of Australia and South Africa. Great for studying the universe, not so good for your spare hard disk drive space. I didn’t catch the exact figures, but the data rate involved is basically more than the entire internet’s traffic (yeah, literally all of it), so they have to pull a bunch of tricks to squash/cut it all down to something manageable. I thought this was kind of out-of-place for this conference keynote, but it’s undoubtedly awesome work that everyone loved to hear about.

Probably the highlight of the conference, however, was Mark Pesce (co-creator of VRML and a big name in the 3D-web/VR space). Mark is quite a dynamic speaker, and although he’s a little bit sensationalist at times, he’s good at painting the kind of visionary pictures that the conference would have otherwise lacked. He has been working on something called the Mixed Reality Service (mildly humorous acronym MRS). MRS is a bit like a DNS for geographical spaces, to be used either in virtual worlds or in an augmented reality layer over the real world. I haven’t read the specs for this yet, but I got the impression it broadly works along the lines of sending a server a command like “give me all the web resources associated with the following geographical area”, and it passes you back some URIs registered in that space. As far as I gathered, the URIs could be anything normally found on the web, for example an HTML document, sound, video, image or data. There are obviously a lot of potential uses, for example AR glasses querying for information about nearby buildings (“what are the opening hours and services in that building over there”) or safety information (“is this work area safe for humans to walk through without certain protective equipment”).
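As a rough guess at what such a lookup might feel like to use (purely illustrative – I haven’t read the MRS spec, so the endpoint, parameters and response fields here are all made up), a client could just ask an HTTP service for the resources registered around a point:

    # Hypothetical sketch of a geographic "what's registered here?" lookup.
    import requests  # third-party HTTP library

    def resources_near(lat, lon, radius_m):
        """Ask a hypothetical mixed-reality service for URIs registered in an area."""
        response = requests.get(
            "https://mrs.example.org/lookup",            # made-up endpoint
            params={"lat": lat, "lon": lon, "radius": radius_m},
            timeout=5,
        )
        response.raise_for_status()
        # Assumed response shape: a list of {uri, type, label} records, e.g. an
        # HTML page with a building's opening hours, or a safety notice.
        return response.json()["resources"]

    for r in resources_near(-31.9523, 115.8613, 50):     # somewhere in Perth
        print(r["type"], r["uri"], r.get("label", ""))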

Mark pitched this project as part of a broader vision about knitting reality and the web together, into a more ubiquitous physical-augmented-mixed-reality-web-type-thing. Mark suggested developers and researchers should get on board with what he feels is the start of a new internet (or even human) era. I’m a little skeptical, but with all the consumer VR and AR equipment coming onto the market right now, and the general enthusiasm in the air amongst people working in the area, it’s hard to deny that we’re in the middle of a potentially massive change. There was also mention of the challenges around how permissions and rights would work in a shared VR/AR space. I definitely want to think and probably write more on this topic in the future.

Other cool themes, people and talks

The Trust Factory set of presentations was also quite “big picture” and another major highlight of the conference. I found the W3C people I saw or talked to here to be universally friendly and awesome. They seem to be honestly keen to engage with anyone they can to contribute to the future standards and directions of the web. I particularly liked the work they’re doing around RDF (an open metadata format/standard that will hopefully become broadly used as big datasets become more and more important).

The presentation by David Hyland-Wood (Ephox and TinyMCE guy) on Verifiable Claims was extremely informative. Verifiable Claims seems really important, basically because it allows credentials (eg. basic identity, personal info and so on) to be passed around in a way that is both highly reliable (good for security) and protective of privacy (which the conference has reinforced is near non-existent on the web right now). David gave the partly metaphorical example of showing your ID to a bouncer at a nightclub, then using your credit card at the bar, and having your identity stolen because you handed over your birthdate, photo and financial details to an untrusted party. VCs appear to be a method to solve this by allowing you to digitally say (in this example) “I’m 18”, with the club then querying a trusted third party (“is this guy really 18?”), who verifies it. This happens without you handing over all sorts of unnecessary details like your birthdate, and in a way the club can verify without just taking your word for it. I haven’t had a look at the technical side of how it works, but with the right consumer and/or legislative pressure to back it up, this looks like it could prevent billions of dollars in ID theft and a whole lot of intrusive invasions of privacy. Dave and his wife Bernadette (who also does a whole bunch of stuff in this space) were also really friendly and fascinating to talk to generally.
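To illustrate the flow as I understood it (this is my own toy sketch, not the actual Verifiable Claims data model or wire format), the essential move is that the verifier checks the issuer’s signature over a minimal claim instead of collecting the underlying personal details. HMAC with a shared secret stands in for a real public-key signature purely to keep the example self-contained:

    # Toy three-party flow: issuer signs a minimal claim, holder presents it,
    # verifier checks the signature without ever seeing a birthdate.
    import hashlib, hmac, json

    ISSUER_KEY = b"issuer-signing-key"  # hypothetical; a real issuer would use a private key

    def issue_claim(subject_id, over_18):
        claim = {"subject": subject_id, "over_18": over_18, "issuer": "gov.example"}
        payload = json.dumps(claim, sort_keys=True).encode()
        claim["signature"] = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
        return claim

    def verify_claim(claim):
        presented = {k: v for k, v in claim.items() if k != "signature"}
        payload = json.dumps(presented, sort_keys=True).encode()
        expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, claim["signature"])

    # The holder presents only the claim; no birthdate, photo or card number changes hands.
    claim = issue_claim("patron-42", over_18=True)
    assert verify_claim(claim) and claim["over_18"]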

I missed out on Bytes and Rights, but I did talk a little with someone from Electronic Frontiers Australia (EFA, the Australian version of the EFF) who was exactly like all the EFF/EFA stereotypes you might imagine, and appeared to be doing well engaging folks with their campaigns. Several guys from consumer magazine Choice were also there. I was surprised how switched on they were. They’ve recently developed a nifty little AR app called CluckAR that allows you to point your camera at “free range” eggs and see some nice animations showing how actually free-range they are, based on a fairly comprehensive and accurate dataset they’ve gathered on the topic. It sounds like they’ve got a lot more plans for this sort of web-based product-info system coming in the future.

There were way too many talks/papers to list, but a few that I thought were quite nice included:

  • A really cool concept of a gamified legal simulation to teach skills in self-representation. This is to try to make up for the shocking and increasing shortage of legal aid. The presenter tells me he will post more details in the future on his game design/review blog. I think this is a really great project and I hope it draws some support and/or funding.
  • In India, if you can’t afford to train the large number of community workers needed in rural areas, you can always write some custom software that lets you efficiently organise conference calls on people’s feature phones instead. Very nice.
  • There’s work to automatically detect sockpuppets, suicidal tendencies, language associated with poor dietary habits and pretty much anything else you can think of on social media, with mixed results. The sockpuppets could be identified by fingerprinting without IPs, which is probably overall a good thing, even if it’s a bit scary.
  • Electronic Arts is working on modelling and quantifying optimal levels for dynamic difficulty adjustment (ie. make it easier if you fail lots) in games as a sort of evidence-based blueprint to hand to game design teams (a rough sketch of the general idea appears just after this list). Kind of fulfills the stereotype of EA as a mechanistic mass-production game behemoth, but it was quite interesting and I’m pretty sure I’d be doing the same if I were them.
  • There was some cool work on people cooperatively doing 3D design from their mobile devices, though this is still early days and a little clunky.
  • AI, AR and language processing is at the point where you can build a graphical, immersive natural language simulation for training workers in soft skills, like educational environments or for health-workers interacting with patients. Seems too expensive for most organisations to organise and localise just yet though.
  • One group was working on a cool way to break up Twitter echo-chambers by analyzing the best ways in which polarized groups would listen to tweets from opposing camps. Echo-chambers are a growing problem so I thought this was great.
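For what it’s worth, here is a rough sketch of the general idea behind that dynamic difficulty adjustment work (my own toy illustration, not EA’s actual model): track the player’s recent failure rate and nudge the difficulty towards a target success rate.

    # Toy dynamic difficulty adjustment: ease off when the player keeps failing,
    # ramp up when they are cruising, aiming at a target success rate.
    from collections import deque

    class DifficultyAdjuster:
        def __init__(self, target_success=0.7, window=10, step=0.05):
            self.target_success = target_success   # desired fraction of attempts won
            self.recent = deque(maxlen=window)     # rolling window of recent outcomes
            self.difficulty = 0.5                  # 0 = trivial, 1 = brutal
            self.step = step

        def record_attempt(self, succeeded):
            self.recent.append(succeeded)
            success_rate = sum(self.recent) / len(self.recent)
            if success_rate < self.target_success:
                self.difficulty = max(0.0, self.difficulty - self.step)
            else:
                self.difficulty = min(1.0, self.difficulty + self.step)
            return self.difficulty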

The papers are all available on the www2017 website, so I’ll let you dig up whatever you’re specifically interested in there.

 

General impression

The location of Perth was apparently considered a risky choice, but the conference got a record number of paper submissions. From what I’ve read of previous conferences there might have been slightly fewer big names in attendance (only slightly though), but one of the most isolated cities in the world was still able to pull in a lot of interesting people.

The conference was dominated mostly by the more technical topics of the web – data analytics, standards, privacy, security, search, data mining, semantic web and so on. If you’re a web designer, you’re probably not missing too much here (with a couple of interesting exceptions), but if you’re into the web in general there was enough info to drown in. I did find that many of the papers were fairly (unnecessarily?) heavy on the maths and statistics, sometimes appropriately, but sometimes not so much.

This statistical focus was a little culturally bounded. The Chinese and Japanese delegates tended to favor maths and stat-heavy approaches (they also seem to have a lot of platforms to get massive data sets), whereas the Americans, Aussies and Europeans I saw tended to be a bit more mixed between maths/stats and a more conceptual focus. There were quite a few interesting presentations from Indian delegates, us Aussies gave a very good account of ourselves, and there were lots of presentations from a multitude of other countries, but the Americans and Chinese dominated the conference papers in terms of raw numbers. It’s no surprise the US is extremely strong on tech, and China is clearly not messing around in throwing resources and researchers at being a world power in science and tech. I think other countries often fielded some surprisingly great people and ideas though. Regardless of country, almost everyone was very polite and had something to contribute.

I would have liked to see something on the relation between the end of web privacy, employment and freedom of speech, but didn’t notice anything addressing this theme. Overall I’d also say the conference could have used a little more focus and discussion on the big picture “vision” of the web, though there were enough highlights like the VR/AR discussion to keep things interesting.

AI-risk fail

At one point I got talking to a Google machine learning researcher for 10 minutes or so. Afterwards, being very loosely on the periphery of the Existential Risk / AI-risk crowd (I’m not convinced of the singularity but I think AI-risk is worth thinking about), it occurred to me that I might have missed my chance to do my part by actually talking to someone in the field about AI risk. Luckily (or so I thought at the time), I was late to a conference meal and when I took one of the only seats remaining, I realized I was at a table with the same Google AI guy again. I casually tried to bring up AI-risk as a topic in a very neutral way, pretty much saying something like “how about those CSER people / AI-risk / and that whole singularity thing?”, hoping to get a sense of the appropriate way I could develop the conversation. The guy was pretty cool about it, and we had a brief back-and-forth with a few jokes, but I strongly felt something annoyed or upset him about the topic. His general response was along the lines of “oh that? That’s kind of a philosophical question. I’m just trying to get my stuff working!”. He left shortly after (had somewhere to be, apparently), leaving me wondering if I had looked like either a singularity fan-boy or a secret member of some Luddite uprising. His colleague was really friendly and cool about everything, suggested that it’s a cliché thing for non-researchers to bring up, and that it’s a kind of strange thing to be bugged about when you’re just a smart dude struggling to get a PC to answer a simple question. I can totally understand that feeling, even though it’s an important topic in my opinion.

In hindsight I’d say I did more harm than good. I did learn something about an AI-dev perspective, but I probably nudged AI-risk a bit in the direction of “stupid stuff the general public ask about” for this guy. I’d say the lesson here is not to casually chat about this topic with a researcher you just met unless you have loads of status (I was effectively a pleb at this conference), can demonstrate really good knowledge, and spend a lot longer getting to know the person. I’d also suggest trying to be generally less clumsy than I was. I think I was prepared to discuss the topic properly, but I ended up coming across like a bit of an idiot.

Nerds = awkward; conferences = awkward; nerd conference = funny

I also found it darkly amusing observing many awkward moments for me and others at the conference. People have a limited window to connect with some really impressive and knowledgeable people in their field, so there’s a whole lot of amusingly obvious physical positioning to be “incidentally” stopping at the same drinks table as this person or that person. My impression is people also try not to spend too long with or get tangled up with people who aren’t really the person they want to be talking to. I definitely got “blanked”/ignored a couple of times (I wasn’t really the most interesting person there, so that’s fine), and I probably was a bit tight with my attention to cool people who I’d normally be really pleased to be hanging out with. I’m really glad I stuck out the awkwardness because I ended up talking with some awesome people, and I’d encourage anyone else feeling a bit lost at this sort of conference to do so too.

The language barriers made for even more awkward hilarity. Sometimes someone would just have no idea what someone else said. There were multiple instances of nodding and smiling and saying “yes yes” to lengthy questions that definitely required something more than a “yes” or “no” answer, often in front of large groups of observers. Everyone was really cool and friendly with each other about this, even near the end of the week when everyone was feeling exhausted and overwhelmed.

There was some rather amusing nerdy singing and dancing at the conference too (putting aside some much more skillful Aboriginal dancing at a dinner), which I won’t go into in case you ever get a chance to experience it first hand. The next conference (apparently just called the Web Conference from now on) will be in Lyon in France. The French threw just as much of a tourism pitch into their presentation as us Aussies did for our conference, but were naturally really smooth in their delivery. I think the relaxed Aussie style worked really well though, and it seems like it made for a successful conference that combined a relaxed atmosphere with a buzz of new ideas.

 

Thoughts/corrections? Email me!

Please note I don’t use my actual name on this blog.

Balanced and unbalanced ethics

Moral philosophy has three main schools of thought. Roughly speaking these are: virtue ethics, which focuses on how to be a morally virtuous person; deontology, which focuses on deriving/discovering and following moral principles and rules; and consequentialism, which emphasizes looking at the outcomes of an action to determine its moral quality. Technically speaking, I lean towards the consequentialist camp; however I feel that a balanced and mature ethical approach to life only comes from considering all three schools of thought. I’ve tried to illustrate here the shortcomings of focusing only on one or two of the schools of thought.

Diagram of morality including various intersections between a rule, consequence and virtue emphasis.

Those with a background in philosophy might note that much of my description does reduce to consequences, but I wanted to illustrate in detail how use of both virtue and deontological reasoning are essential to achieve morally good outcomes. Do you agree? Have other thoughts? Let me know by adding your comments!

Why I think neuroscientists should be wary of using the term “consciousness”

I recently received an eloquent email from a person regarding my rather sceptical stance on consciousness. This person explained that they have a background in neuroscience, and that they can assure me that neuroscience has a perfectly sound definition and justification for using the term consciousness in the way it does. I’ve run into a number of neuroscientists in the online forums I spend time in, so I thought I might post my reply here to explain why I think neuroscientists should be very sceptical about using the term consciousness:

Before I get into the details of why I’m often very sceptical about usage of “consciousness”, I want to propose that a good understanding of scientific issues tends to depend on at least two main factors – a body of relevant empirical evidence, and a sound conceptual framework. Keeping the two intellectually separate sounds simple, but of course any successful scientist knows that it’s stunningly difficult. This is bad, because when they become blurred, it becomes difficult to differentiate criticism of the conceptual framework from an attack on the body of empirical evidence.

So a problem arises when an objection that says “I think you might have some philosophical baggage in that conceptual framework” starts to sound a lot like “you don’t have evidence for your claims”. I think this topic of consciousness is a lot like that. Neuroscientists have a large body of empirical evidence to support their claims and I completely understand them defending it with vigour. But I also think philosophy can be useful in identifying flaws in scientific conceptual frameworks. Challenging the fundamental way we think about our own field of expertise is one of the most painful parts of science, but it can also yield some really useful results.

Being sceptical about consciousness in neuroscience doesn’t challenge the view that an artificial neural network could, in theory, reproduce all the behaviours displayed by a human. It’s not about suggesting that consciousness is non-physical, or suggesting that neuroscientists/AI-researchers believe that it is. It’s about examining the term being used to make sure it’s neutral and baggage-free. My suggestion is that “consciousness” isn’t baggage-free – depending on how it’s used, it’s either misleading or carrying subtle (flawed) philosophical assumptions. Let me explain why we might think this.

Consciousness has multiple definitions, meanings or senses in which it’s used, some of which are clear and others notoriously less so. When somebody says “consciousness” they conceivably could just mean “an animal that has an active brain state associated with use of its sensory organs and motor control”. So, when you’re awake, we say you’re conscious. For clarity, let’s say this “awakeness” is just a description of someone’s brain state when they’re awake and not sleeping or knocked out. When it is used, this sense seems perfectly reasonable and legitimate. A human or animal being awake and being able to record memories, for example of things they see or hear, is easy to understand as a simple physical process, regardless of philosophy. The problem arises when we confuse or mix this meaning of “consciousness” with other meanings. To avoid equivocation, we might then use “awakeness” instead, for the same reason we wouldn’t talk about a brain by calling it an “apple” or a “table” – those words have other meanings and baggage we don’t want to refer to or evoke.

The big problem occurs when we use another very important meaning of “consciousness” – the one to do with “self-awareness”.

In order to demonstrate why I think there’s more baggage here than meets the eye, let’s propose that any legitimate, justified concept should be able to pass the following test – if we were to remove all of our knowledge of the concept, some combination of empirical evidence and reason should force us to adopt it again in order to understand the subject matter it relates to. Perhaps we will derive it under a different name, but the same underlying concept should appear. Additionally, we would use only the concepts we are forced to use (Ockham’s Razor), and we wouldn’t use concepts that can’t be disproven by their nature (falsification).

My assertion is that if somebody believes (as most neuroscientists and AI-researchers do) that the world is purely physical (physicalism/materialism), then consciousness *does not pass this test*.


Suppose I had never heard of consciousness. One day a neuroscientist befriends me and shows me an experiment, where a magnetic field is applied to a subject’s brain, and the subject then reports how what they see, hear or think changes. The change is then correlated with the application of the magnetic field to establish plausible causality. Looking at this experiment, I can clearly see a brain (perhaps on an fMRI). I see evidence of a magnetic field. I see a person talking about what they see and hear. I can describe a relationship between each. To describe what is going on, this is all I need. Nothing requires “self-awareness” or “consciousness” for me to explain it. We could perform surgery, examine patients with Parkinson’s, we could even look at electrical signals moving up and down neural pathways, and we’d find the same. We still at no stage need to talk about self-awareness or consciousness in a self-awareness sense. If we did, we’d probably be introducing terms for other reasons using other arguments. To put it differently, if you accept Ockham’s Razor, these tests aren’t evidence of anything like the lay or philosophical usage of the term “consciousness”.

We could try to get evidence by moving into more philosophical territory. We could say that the subject can think about themselves, and isn’t that something like “self-awareness”? Suppose a camera (let’s say one with a simple neural network for a control system) takes a picture of itself in a mirror. If awareness (as physicalism asserts) is just a recorded weighting of pathways in a neural network, isn’t it self-aware? Intuitively no, but why? Perhaps it’s more specific, like the neural net being aware of itself (not its body). If I take a simplified snapshot of my computer hard drive and save it in some free space on the same hard drive, is my hard disk self-aware in the sense we like to talk about humans being self-aware? Again, intuitively no, but why? To solve this we start having to get very philosophical about “awareness”, and I think that’s good reason to become very cautious about the word. If you look up “awareness” in the dictionary, you’ll see its definition includes “consciousness”, and so we start getting into some pretty weird circular (fallacious) logic. This should be a massive red flag.

Consciousness – an idea with a long philosophical history

To really start putting together strong arguments for consciousness, we have to start using some actual philosophy-proper. We’re going to have to start talking about p-zombies, Mary’s room and qualia. Now philosophers have been arguing back and forth about these things (or something like them) for centuries, but what’s important to note here is that they are all dualist. Dualism states that the mind is not reducible to the merely physical brain. This contradicts the common view in neuroscience/AI that the mind is physical, that there is only a physical substance/world, and that what we call “mental” is just a regular part of the physical world.

You may wonder if I’m arguing for dualism. I’m not – I find the dualism/monism debate unresolvable, though I lean a little towards neutral monism. The main problem is that whatever way you lean, you can’t be a full dualist and monist at once. When neuroscientists use the term consciousness, unless they’re a dualist, they’re using a term that fundamentally disagrees with their core assumptions.

Now I think it would on the surface be quite reasonable to say – “no, no, no, the neuroscientist really is just using consciousness in a completely un-philosophical way. It’s just a technical term used to point to certain types of observable physical stuff going on in the human brain.” But for centuries “consciousness” has been a term that is absolutely central to the field of philosophy. Isn’t it worth asking why such a fundamentally philosophical term is being used for something that is “definitely not in any way philosophical”? Even if some people are using it in some non-philosophical way, the name means almost everyone else will read philosophical meaning into it, sometimes without even realising it. Uploading is an example of this – certain parts of a biological organism, parts that change every day and cease to exist when it sleeps, are deemed worthy of (abstract?) replication, while the organism itself is discarded.

I don’t think we should pursue the survival and moral elevation of a concept that ultimately might not even correspond to a real thing, much less a morally important thing, at the expense of what is real and what morally matters – people, regular everyday humans. Humans may certainly use advanced technology to give themselves new capabilities (eg. maybe someday links to external memory capacity), but I think that’s very different and far more positive. If we destroy humans to protect a contentious, abstract and possibly imaginary concept, then I don’t think that’s very advanced; it’s more like a primitive tribe sacrificing themselves for the sake of a primitive god (Moloch?). That’s something I believe most neuroscientists would oppose.

We shouldn’t ‘balance’ security and freedom, we should try to make them compatible

If you haven’t been living on another planet, you’ve probably heard about the horrific attacks in Paris, where Islamist extremists brutally killed at least 128 innocent civilians and seriously injured many more. It certainly serves as a reminder that this is a dangerous world we live in, populated here and there by truly evil people, and that we need to remain ever vigilant against the threat they pose. Of course, this brings up the classic debate and tension between several very serious considerations – security, privacy and political freedom. I’m no expert in this area, but as an observer I want to point out a serious problem in the way the issue is discussed by almost everyone I know, including the news media.

On the one hand we have the very real concern that our reaction to security threats of all kinds (terrorism, organised crime, foreign nations) has resulted in serious harms to our society. In the modern age we’ve almost completely destroyed privacy, we’ve tipped the power balance in favour of business and government over regular citizens, and we’ve established a way of doing things that gives the impression that we’re all being watched and viewed as potential criminals or terrorists. Whistle-blowing and public-interest leaks might soon be impossible to achieve anonymously. Sometimes people’s concerns about authoritarianism can be unjustified, but it’s idiotic to ridicule them outright, because historically the slippery slope of security has seen governments kill a lot more people (millions) than terrorists have (though liberal democracies, while hardly untarnished, fare considerably better in this regard). It’s my suspicion that people inside the security establishment don’t realise that what seems perfectly reasonable to them seems shady-as-heck to citizens observing from the outside. Mistrust isn’t irrational when you consider the historical context and the limited information available for citizens to make an assessment.

On the other hand, considering the massive and generally heroic effort authorities put into preventing attacks on civilians, and considering we most often don’t see or ever even hear about their many successes, it’s likely regular citizens systematically underestimate the enormous threat of terrorism and other similar phenomena, like the insane brutality that emerges where organised crime gets out of control (see Mexico for an example of how bad this can get). Essentially it’s like an iceberg that most of us sheltered civilians can’t understand or appreciate, hidden beneath the water. The horrors we do see are seriously destabilising as it is, and if we didn’t take an extremely strong posture towards external threats, they would likely be far more common and far greater in magnitude.

So on the surface of it, we have to choose to “balance” these two mutually exclusive concerns. That’s certainly the kind of language usually used to discuss the topic. This pits people against each other depending on what they think is the worst problem. A set of individuals who are very concerned about security publicly argue for a series of measures that, if we’re honest about it, pose at least a perceivable threat to freedom; while another set of individuals argue for a reduction of those sorts of measures, basically prioritising freedom and denying the threats to our security that are created by a weak response. My non-expert impression is that neither group has really got their head around the full set of concerns and issues, and instead just sees the other’s concerns as a threat to their own. This is the tension-based model of thinking about security and freedom, which I’ve tried to illustrate here:

A bad way to think about balance

This tension is seriously destructive. It undermines citizens’ trust in government and the security services, and undermines the security establishment’s trust in the democratic process’s ability to respond to threats. Trust in democracy in several Western countries is at a historical low (“40% no longer believe democracy is the best form of government” – this is a serious crisis). For faith in liberal democracy, a couple of major incidents around this point of tension could be the straw that breaks the camel’s back. If the mission is to protect the integrity of our nations, the breakdown in internal trust should be one of the biggest issues on the radar.

Rather than treating this as a simplistic either-or issue, we may be more effective putting serious effort into identifying ways the two sets of concerns could be made more compatible. This means taking both sets of concerns seriously. It means eliminating the aspects of a security response that pose an unnecessarily high threat to freedom, and instituting political and civic practices that protect freedoms in ways that create fewer security vulnerabilities. In other words, spend a little extra effort modifying how we implement solutions to both concerns, so that they don’t get in each other’s way. I’ve tried to show this in the following model:

Maximise compatibility instead of tension

We especially need to think about the loss of privacy. I think of this as not unlike the irreversible failure of a biological organ. Privacy is probably on its way out, but our society as we know it may die without it. So we need to find a transplant – new mechanisms that provide the same benefits. For example, we might do more to ensure the loss isn’t one-sided (citizens having no privacy while government and business do); find ways to strengthen the freedom of small political groups that oppose very powerful groups or interests; and discourage cyber-bullying, intimidation, doxxing and politically based employment discrimination (more dangerous than most discrimination, because racism won’t stop people being black, but losing their job might stop someone being involved in politics). These are just some basic examples.

I also wonder if we couldn’t adopt a strong security posture in a more open, transparent and targeted way. Currently many security proposals are fiercely resisted because many people perceive them as wide open to political abuse. If we close off that possibility in a public way, for example by ensuring measures can only be levelled at very specific groups, people can safely support the measures. Security-orientated people may find a better return on the investment of their political capital if security measures are more transparent and therefore – from the point of view of activists, libertarians and other minorities who feel they might be politically targeted – more trustworthy. This could involve more citizen participation and much more extensive civilian oversight. In return, we probably need to devote more funds to cover the increased overhead and bureaucracy that oversight and accountability create. As large as that cost is likely to be, it’s worth it for the security, freedom and mutual trust we gain.

There are of course some situations where you can’t be open and accountable to that degree, and in those circumstances we just have to trust that the people in charge understand that abuse of power undermines security and national unity faster than any hostile party ever could. So long as we don’t allow definition creep to dilute its legitimacy, we can probably trust that people working in these ultra-serious domains have better things to do than pick on domestic political targets. If that boundary is blurred, the legitimacy and authority of these institutions is undermined.

Again, I’d like to stress that I’m no expert and this is just my two cents. But as yesterday’s attacks illustrate, even oblivious civilians can, in an instant, find themselves on the front lines of security. It makes sense, in response to such events, for citizens to participate in and spend time thinking about solutions to this sort of horror, and about how to protect ourselves without becoming the evil we seek to slay.

Note – I give permission for anyone to republish the two images in this blog post, provided they are not modified and no-one other than myself claims credit for their creation.

In the eye of a goddess

If we want to describe the fundamental laws of nature, it’s almost impossible to avoid subtle distortions in our language that gloss over some of the finer details. Even evolution is prone to this kind of pitfall. Casual descriptions of evolution almost always fall into the teleological trap – describing evolutionary processes by reference to a purpose or goal. For example we might say humans evolved the ability to walk on two legs in order to free up their arms, or to gain a better view of the surrounding terrain and predators. This ascribes intentions, a plan, to the species that simply isn’t there. The reality is that change is random, and adaptations that aid survival are more likely to persist, while changes that don’t aid survival usually fade away.
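
If it helps to make that non-teleological picture concrete, here’s a minimal toy sketch in Python. The trait, the numbers and the “environment” are all illustrative assumptions of mine, not a model of any real species: mutation is blind, survival is just a probability, and yet the population drifts towards whatever the environment happens to reward, without anything intending it.

```python
import random

# Toy sketch of selection without purpose (illustrative assumptions only):
# mutation is undirected, and traits that happen to fit the environment
# simply persist more often. No organism "intends" anything.

ENV_OPTIMUM = 0.8      # whatever the environment happens to reward
POP_SIZE = 200
GENERATIONS = 100

def fitness(trait):
    # Probability of surviving to reproduce: closer to the optimum, more likely.
    return 1.0 - abs(trait - ENV_OPTIMUM)

population = [random.random() for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # Survival is probabilistic; fall back to the whole population if, by chance, none survive.
    survivors = [t for t in population if random.random() < fitness(t)] or population
    # Offspring are copies of survivors plus small, undirected mutations.
    population = [
        min(1.0, max(0.0, random.choice(survivors) + random.gauss(0, 0.02)))
        for _ in range(POP_SIZE)
    ]

print(f"mean trait after {GENERATIONS} generations: {sum(population) / POP_SIZE:.2f}")
# The mean drifts toward ENV_OPTIMUM even though the process has no goal.
```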

Still, there’s a lot more beauty, elegance and simple communicative utility in using this kind of generalisation. And reading Scott Alexander’s latest blog article, it’s impossible not to be stunned to see him wielding this generalisation like the brush of an experienced painter. Scott’s beautiful allegory describes some of the basic forces of nature – a Goddess of Cancer, sending forth organisms in fury, whose creed is “KILL CONSUME MULTIPLY CONQUER”; and the Goddess of Everything Else, who craftily redirects her sister’s destructive will into cooperation.

My article is based directly on the content of Scott’s article, so you may wish to take a look. It’s quite a brilliant way to depict an important natural mechanism behind the fiction, one that is no less astounding when you consider the incredible contrast between single-celled and multicellular organisms. Multicellular organisms (us!) are not all that different from groups of single-celled organisms. In a sense, we’re basically just a cooperative domain of separate cells – something like the Goddess of Everything Else’s peace treaty, her alignment of self-interest with the common good of many cells. As this cooperative advancement is also a major theme in my own philosophical writings, the skill of Scott’s fictional rendition of this process almost tempts me to simply point those sections of my site to his wonderful stories of Goddesses, Moloch and Elua.

Except, in this article, there’s a niggling detail; a small but philosophically vital anomaly disturbing the fine fabric of the story Scott has woven.

Life’s goal, given meaning by the genes that define it, isn’t “KILL CONSUME MULTIPLY CONQUER”. Its ultimate purpose, its terminal goal, its intrinsic value, is “SURVIVE”. So while the Goddess of Cancer’s words capture the brutal reality of the process of survival in many instances, our philosophical purpose here renders this minor detail vitally important. That isn’t because of the inaccuracy itself, which would be an unreasonable nitpick for a loose allegory like this, but because of the philosophical goals we have in reading and writing the story.

Killing, consuming, multiplying and conquering are effective ways to survive for many species on Earth. But these are just strategies. Should killing not serve survival, an organism need not kill. Should conquering not aid survival, it need not conquer. And less obviously perhaps, if multiplication does not achieve survival, then it need not, and perhaps should not, multiply. Survival means genes continuing to exist in the world. That’s the ultimate goal. Everything else is method.

Of course, Scott’s story beautifully captures one of the other methods: cooperation. Individual units (biologists now generally recognise the gene as the lowest-level unit in evolution, though they also acknowledge other units are analytically relevant), struggling to achieve their goal of survival, find immense utility in cooperative strategies. Sometimes this can be joint aggression against others, but it can also involve commitment structures for non-aggression, allowing fewer resources to be wasted on zero-sum or negative-sum activity. Many conditions must converge for such a structure to form, but to the extent it does, a new, larger evolutionary unit comes into being. The immense diversity of the world of multicellular organisms is the product of this cooperative process – a new, larger domain of evolution, if you will. (As a note, I want to point out that a cooperative domain should not suggest reduced autonomy for the constituents – a freedom-loving democracy exemplifies a cooperative proto-domain far more than authoritarian regimes do, just as a human blood cell is a free-roamer and slave to no other cell.)
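
To see, in the crudest possible terms, why a commitment to non-aggression can out-perform conflict, here’s a back-of-the-envelope sketch. The payoff numbers are purely my own illustrative assumptions, not anything from Scott’s essay: fighting over a resource destroys part of its value, while a pact splits it without waste.

```python
# Toy comparison (assumed numbers): expected payoff of fighting over a resource
# versus committing to a non-aggression pact and splitting it.

RESOURCE = 10.0        # value being contested
FIGHT_COST = 3.0       # value destroyed by conflict, paid by each side
WIN_PROBABILITY = 0.5  # symmetric contestants

# Fighting: half the time you win the whole resource, but you always pay the cost.
expected_fight = WIN_PROBABILITY * RESOURCE - FIGHT_COST   # 0.5 * 10 - 3 = 2.0

# Non-aggression pact: split the resource, waste nothing.
expected_pact = RESOURCE / 2                               # 5.0

print(f"expected payoff, fight: {expected_fight}")
print(f"expected payoff, pact:  {expected_pact}")
# The pact is better for both sides; the catch, as noted above, is that many
# conditions (enforcement, trust, repeated interaction) must hold for it to be stable.
```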

Scott’s story illustrates this sense of cooperative expansion wonderfully. From the single cell organism to the multi-cellular organism, cooperation expands at the will of the Goddess of Everything Else, despite the Goddess of Cancer’s best efforts to sow chaos. The story even hints at the next levels, beginning to form in human thought and in their social lives. Yet perhaps, especially in the light of life’s survival goal, there’s scope to go intellectually a little further.

Imagine for a moment that life exists on many of the tens of billions of planets in our galaxy. On each planet, there is at first only competition between the products of simple organic chemistry – genes struggling to survive. Over time, as on Earth, these biospheres come to be characterised by larger units. In some cases those multicellular organisms become as advanced as our own, developing technology, communication, awareness of the laws of physics and chemistry, and awareness of biology and the biosphere in which they exist. So, assuming there is a next stage for these biospheres, where the cooperative units expand to cover a much larger domain, what should we expect?

The mistake I think is so terribly tempting to make is imagining that the answer is transcendence of consciousness. There are many difficulties to be found in the concept of consciousness. Part of this comes from trying to shoehorn a dualist concept into modern science’s favoured physicalism – importing a morally-neutered version of a concept rooted in religious philosophy and then attempting to fit empirical evidence to the concept (rather than inducting concepts from the evidence at hand) has yielded unsurprisingly confusing results for the worlds of philosophy and science. Yet suppose we decide, despite the problems of philosophy and definition, that we cannot live without the concept, because of its usefulness in describing the feeling we have of our own existence. In the context of the Goddesses, and in the context of what I think is Scott’s finest work (Meditations on Moloch), a greater problem emerges.

The “transcendence of consciousness” isn’t an expansion of cooperation. It isn’t the next step. Why? Suppose we discount (or ignore) the ethical difficulties that surround your self-awareness destroying and discarding many of the other things that make you who you are – your genetic purpose, your place in the species, your place in society. Even if we’re comfortable with that, this still isn’t, by any stretch of the imagination, an expansion of cooperation. It is individual organisms constructing their own technological likeness. Perhaps, for an intelligent enough scientist, it need not even involve another organism at all.

Worse, a transcended “consciousness”, in a very practical sense, is no more likely to escape the Goddess of Cancer’s creed of “KILL CONSUME MULTIPLY CONQUER” than its biological creator. Even if we imagine a consciousness uploaded to a computerised existence, the same forces are at work. In the competition for resources, between uploads and against humans, it is the fittest that survive. Even if the uploading process confers moral perfection on the uploadee, only a tiny percentage of takers need to opt for a more competitive existence to dominate the new landscape. Without a larger domain of cooperation, transcendence isn’t a victory for the Goddess of Everything Else, or for Scott’s other good god, Elua. It’s a last-minute diversion away from Elua’s next song. If our luck is poor, it might even allow more destructive forces new powers to outflank the domains of good. Transcendence seems less like Elua’s work and more like a potential stratagem for Moloch.
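
That “tiny percentage” point is easy to check with simple arithmetic. Here’s a toy sketch, with growth rates I’ve invented purely for illustration, of what happens when a small expansionist minority grows even slightly faster than a restrained majority and nothing above them keeps the balance:

```python
# Back-of-the-envelope sketch (assumed growth rates, purely illustrative):
# a 1% expansionist minority with a modest growth advantage ends up dominating.

cooperative = 0.99     # initial share of "restrained" uploads
competitive = 0.01     # initial share of expansionist uploads
R_COOP = 1.00          # per-period growth factor of the restrained type
R_COMP = 1.10          # slightly higher growth factor for the competitive type

for generation in range(200):
    cooperative *= R_COOP
    competitive *= R_COMP
    total = cooperative + competitive
    cooperative, competitive = cooperative / total, competitive / total

print(f"competitive share after 200 periods: {competitive:.3f}")
# Starting from 1%, the faster-growing type becomes essentially the whole
# population; without a larger domain of cooperation, nothing checks that process.
```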

I think there are alternatives, ones that still capture a sense of technological optimism, albeit with a more scientific and logical grounding for their moral philosophy. Such an alternative would retain the sense of the next step being an advancement into a new, larger domain of cooperation. We might acknowledge that there are a number of proto-domains bubbling up in today’s competitive environment that might be candidates for this mantle – people defined by nationality, or by race, or by culture. Perhaps we might be luckier than this, and live to see a more cooperative human species as a whole. But I am philosophically ambitious, and have my hopes set a little higher.

Why? Thinking back to all those biospheres throughout the galaxy, wouldn’t it make sense that the biospheres themselves might constitute an expanded domain of cooperation?

Just think – having reached the potential to destroy themselves, as we have, the biospheres face a crossroads. They may fail to establish cooperation, being snuffed out by nuclear war or some other technological disaster, not unlike an early bubble of organic chemicals failing to establish the stability and protection of a cell. Or, like the early candidates for the first biological cells, planets throughout the galaxy roll the dice to become a stable evolutionary unit.

And what is our role, as Earth’s most intelligent, technologically advanced and powerful species, if not to try to weight those dice in Earth’s favour? This is our true and most noble role: captaining the planet through this narrow bottleneck in the biosphere’s history of survival. It means combatting existential threats to preserve ourselves as a species, our fellow species, and the biosphere as a whole. It means developing the technology that can carry us into an age of space exploration and the interplanetary colonisation of life. Our destiny need not be the meaningless treadmill of wireheaded hedonism, nor the fiery end of extinction. Let our future be to steer Earth to a greater existence, awakened unto the galaxy.

Image credits:
https://en.wikipedia.org/wiki/File:Staphylococcus_aureus_VISA_2.jpg
https://www.flickr.com/photos/blile59/3029322908
https://en.wikipedia.org/wiki/File:Statue_of_Goddess_or_Queen_at_Monas.JPG
https://en.wikipedia.org/wiki/Earthrise