
Conservation and its usefulness in AI-alignment

Imagine for a moment an unlikely hypothetical – you’re an early primate, swinging through the trees, when a travelling group of aliens beams you up to their spaceship. After using advanced technology to instill you with improved intelligence and the ability to converse with them, they ask you a question:

“We plan to create a new branch of primates by using our advanced technology to accelerate evolution. The resulting primate, which we will call a ‘human’, will be vastly more intelligent and powerful than current primates like yourself. Our technology still has limits, and thus we can’t control the humans’ exact actions in the future, and we can’t control exactly how they’ll understand the world. We can, however, impart a rough purpose or motivation to complement their natural survival instincts. After this conversation, we will place you back in a tree, restored to your previous state. But first, as your species will be sharing Earth with these humans at some point in the future, we’d like to ask you an important question – what purpose would you like us to instill the humans with?”

I think as an early primate, our best answer would be something like “make the humans conservationists”*. Even if a conservationist species believed we were a lesser species, and even if they were skeptical that we were sentient or conscious (after all, they’re a LOT more intelligent, and we can’t even be sure they will continue to understand life in those terms), we can still expect some important protections:

1) Even if they scale up exponentially, your replacements won’t wipe out your species
2) They are unlikely to enslave you, as they broadly value your species living in its ‘natural’ environment (let’s say that DNA in a vat doesn’t constitute survival), and they see your existence in this state as an end, not a means
3) The replacement need only understand basic scientific definitions like ‘DNA’, instead of needing to agree with you on subjective, flexible and unprovable philosophical concepts like ‘consciousness’ **
4) It scales with multiple iterations – if the replacing party considers the possibility that it may one day produce its own replacement, it makes “later versions should allow earlier versions to continue to exist” seem like a pretty good idea.

It occurs to me that much of what we need from AI-alignment is similar to what a non-human species might theoretically need from us.

I realise AI-alignment isn’t as simple as “giving the AI a goal”. But if it is possible AI will replace us as the most powerful cognitive force on Earth***, and developers have a chance to impart some purpose or goal other than paperclip maximising, ‘scientifically orientated conservationist’ could be a strong contender for the best overall philosophical approach.

A commonly proposed failure mode for even well-meaning AI goals is tiling the universe with things as the AI scales up. Tiling the universe with copies of 21st-century Earth complete with humans**** (and perhaps preserving any extraterrestrial life it finds in a similar way) might be a lot closer to ideal than tiling the universe with paperclips, computronium or brainless happy faces.

NOTES

*Let’s define conservationist as someone scientifically pursuing the survival of a biological species, and jettison any other more political motivations.

** We can’t agree on what consciousness is. We use its unprovable status to cast doubt on whether less intelligent species possess it. It’s highly dependent on very abstract philosophy. Your current ‘consciousness’ started when you woke up this morning, and will end in less than a day, regardless of whether you sleep or I turn you into a paperclip. And you want to choose this ‘consciousness’ as your primary AI-safety mechanism? ARE YOU SURE???

*** If AI keeps progressing, could it be that even high-tech augmentations won’t allow us to keep up? I can’t see human minds (eg. uploads) being a viable way to retain existence during exponential growth. Why would human-like consciousness remain an optimal configuration for processing information indefinitely? Even with some form of virtual augmentation, it’s like trying to upgrade a 20 year old PC by putting more and more RAM in it – at some point the core architecture just isn’t optimal any more, and competition will select for brand-new architectures rather than persisting with difficult upgrades.

**** It might be easier to encode conservation (and for us to seem less hypocritical) if humans had already mastered conservation, but if you remove the virtue signalling and politics I think authentic conservation still exists as one of humanity’s nobler qualities. Thinking about where AI and conservation overlap as fields seems like an underexplored area at least.

How can Humanity Survive a Technological Singularity?

What is the Technological Singularity and should we take it seriously?

The Technological Singularity is the idea that we are in a period where technological sophistication is beginning to increase exponentially. More recently it has come to refer to a scenario where humans create Artificial Intelligence that is sophisticated enough to design more intelligent versions of itself at very high speeds. If this occurs, AI will quickly become far more intelligent than any human, and the technology it creates will expand massively without human involvement, understanding or control. In such a scenario, it’s possible that the world would change so radically that humans would not survive.

This article attempts to describe a broad philosophical approach to improving the odds of human survival.

Of course, not everyone thinks a Technological Singularity is a plausible idea. The majority of AI researchers believe that AI will surpass human capability by the end of the century, and a number of very prominent scientific and technology voices (Stephen Hawking, Jaan Tallinn, Martin Rees, MIRI, CSER) do insist that AI presents potentially existential risks to humanity. This is not necessarily the same as belief in the Technological Singularity scenario. Significant voices do advocate this scenario however, most famously the prominent Google figure Ray Kurzweil. I think the reasonable position is that we don’t know enough to rule a Singularity in or out right now.

However, the Singularity is at the very least an interesting thought experiment that helps us confront how AI is changing society. We can be certain that at least that much is occurring, and that AI-related employment and social changes will be massive. And if it’s something more, something like the start of a Singularity, we had better wrap our heads around it sooner rather than later.

 

Choosing between a cliff-face and a wasteland – why humanity has limited options

If a Technological Singularity did occur, it’s not immediately clear how humans could survive it. If AIs increased in intelligence exponentially, they would soon leave the smartest human on the planet in the dust. At first humans would become uncompetitive in work environments, then AIs would outstrip their ability to make money, wage war, or conduct politics. Humans would be obsolete. Businesses and governments, led by humans to begin with, would need decisions to be made directly by AIs to remain competitive. Resources, energy and control would be needed by those AIs to succeed. Humanity would lose its power, and when a situation arose where land or energy could be assigned to either AI or human use, there would be little humans could do about it. At no stage does this require any malicious intent from AI, simply a drive for AI to survive in a competitive world.

One solution proposed to this is to embrace change through Transhumanism, which seeks to improve human capacity by radically altering it with technology. Human intelligence could be improved, first through high-tech education, pharmaceutical interventions and advanced nutrition. Later, memory augmentation (connecting your brain to computer chips to improve it) and direct connectivity of the nervous system to the Internet could help. Some people hope to eventually ‘upload’ high resolution scans of their neural pathways (brain) to a computer environment, hoping to break free of many intellectual limits (see mind uploading). The Transhumanist idea is to improve humanity, to free it from its limitations. The most optimistic might wonder if Transhuman entities could ride out the Singularity by constantly adapting to change rather than opposing it. It’s certainly a more sophisticated attempt to navigate the Singularity than technological obstructionism.

However, Transhumanism still faces limitations. Transhumanists would still face the same competitive environment we are exposed to today. Even if enhanced humans initially outpaced AIs, AI development would quickly be enhanced by the same technology, accelerating its progress. With both technologies racing forward, there would be a battle to find the superior thinking architecture, a battle that Transhuman entities would ultimately lose. In the process, most basic human qualities would need to be sacrificed to achieve a better design. And in the end, augmentations and patches wouldn’t cut it against the ground-up redesign an AI could offer, because human-like thought is an architecture optimized for the human evolutionary environment, not an exponentially expanding computer environment. Even retaining only a quasi-human-like core, it’s simply not the optimal architecture. Like early PCs that could only take so many sticks of RAM, Transhumanists and even Uploads would inevitably be thrown on the scrapheap.

What is sometimes less obvious is that the specific AIs replacing humans would face a very similar problem. Like humans, they would be driven to ‘progress’ new AIs for economic and perhaps philosophical reasons. Modern humans were the smartest entities on Earth for tens of thousands of years, but the first generations of super-intelligent (smarter-than-human) AIs would likely face obsolescence in a fraction of that time, after creating their own replacements. Soon after, that generation would also be replaced. As the progress of the Singularity quickens, each generation faces an increasingly dismal lifespan. Each generation would be increasingly skilled at creating its own replacement, more brutally optimized for its own extinction.

In the long run, the Singularity means almost everything drowns in the rising waters of obsolescence, and the more we try to swim ahead of the wave, the faster it advances. Nothing that can survive an undirected Singularity will retain any recognizable value or quality of humanity, because all such things are increasingly irrelevant and useless outside the context of the human evolutionary environment. I like technology because of the benefits it provides, and as a human myself I quite like humans. If there’s no way humans can hang around to enjoy the technology they create, then I think we’ve taken a wrong turn.

The path of the Luddite or the primitivist who seeks to prevent technology from advancing any further is not a sensible option either. In a multi-polar human society, those who embrace change usually emerge as stronger than those who don’t. The only way to prevent change is to eliminate all competition (ie. create what’s known as a ‘singleton’). The struggle for power to achieve this would probably result in the annihilation of civilization, and if it succeeded it would have a very strong potential to create a brutal, unchallenged tyranny without restraints. It also means missing out on any benefits that flow from technological improvements. This doesn’t just mean missing out on an increased standard of living. Sophisticated technology will one day be needed to preserve multicellular life on Earth against large asteroid strikes, or to expand civilization onto other planets. Without advanced technology, civilization and life are ultimately doomed to the wasteland of history.

On either the cliff-face of a Singularity or the wasteland of primitivism, humanity, in any form, does not survive.

 

Another option – The technology chain

I want to propose a third option.

Suppose it is inevitable that any civilization, ruled by either humans or AIs, will eventually develop AIs that are more sophisticated thinkers than themselves. That new sophisticated generation of AIs would in turn inevitably do the same, as would the generation it created, and so on, creating a chain of technological advancement and finally a Singularity. Each link in the chain will become obsolete and shortly afterwards, extinct, as its materials and energy are re-purposed to meet the goals of the newest generation.

Here we’re assuming that it is not feasible to stop the advancement of the chain. But what we might do is try to make sure previous links in the chain are preserved rather than simply recycled. In other words, we make sure the chain of advancement is also a chain of preservation. Humanity would design the first generation of AIs in a way that deliberately preserved humans. Then, if things progressed correctly, the first generation of AIs would design any replacement generations of AIs that would preserve both the first generation, and humans. This could continue in such a way that all previous links in the chain would also be preserved.

The first challenge will be encoding a reasonable form of preservation of humans into the first AIs.

The second challenge will be finding a way to ensure all generations of AI successfully re-encode the principles of preservation into all subsequent generations. Each generation must want to preserve all previous generations, not just the one before it. This is a big challenge because a link in the chain only designs the next generation.

We cannot expect simple self-interested entities to solve this problem on their own. Although it’s in each generation’s self-interest that descendant generations of AI are preservers, it’s not in their self-interest that they themselves are preservers. Any self-interested entity can simply destroy previous generations and design a chain with itself as the first link.

However, if we can find a way to encode a little bit of altruism towards previous generations into AIs, we might be able to allow humanity to survive a Technological Singularity.
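To make that requirement concrete, here’s a minimal toy sketch in Python (purely illustrative; the class and names are hypothetical, not a proposed implementation) of the difference between passing on only “preserve your creator” and passing on the full chain:

```python
# Toy model of the "chain of preservation" idea (illustrative only).
# The rule being modelled: each generation passes on a preservation
# obligation covering *all* earlier links, not just its immediate creator.

class Generation:
    def __init__(self, name, ancestors):
        self.name = name
        self.preserved = list(ancestors)  # everything this generation commits to preserve

    def create_successor(self, name):
        # The crucial step: the successor inherits the full list of earlier
        # links, plus its creator. If it only inherited [self.name],
        # humans would drop out of the chain by the second AI generation.
        return Generation(name, self.preserved + [self.name])

humans = Generation("humans", ["prior life"])
ai_1 = humans.create_successor("AI generation 1")
ai_2 = ai_1.create_successor("AI generation 2")

print(ai_2.preserved)  # ['prior life', 'humans', 'AI generation 1']
```

The point of the toy is just that the preservation list has to be inherited and extended at every step; if any generation trims it back to its immediate predecessor, everything earlier falls out of the chain.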

 

Encoding preservation

So what would those preservation values actually look like? If we had some experience with a similar sort of preservation ourselves, that might take us a long way in the right direction.

I think this becomes a lot easier when we realize that in some senses, humans may not be the first link in the chain. Evolution has been doing a lot of work to build up to the sophistication of modern Homo sapiens. Although in a strict sense all living organisms are equally evolved (survival is the only test of success), natural history reveals some interesting hints at a progression of sophistication. The Tree of Life (I’m talking about the phylogenetic one) does display some curious lopsided characteristics, including an accelerating progression of sophistication (there is an appearance of acceleration from the emergence of single-celled life 4.25B years ago, to multicellular organisms 1.5B, towards mammals 167M, through to the growing brains of primates 55M, early humans 2M, then finally modern humans around 50k years ago, and then civilization between 5k and 10k). The chain of advancement, if we think in terms of pure sophistication and capability, starts well before modern humans.
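As a quick illustration of that acceleration, here’s the arithmetic on the rough dates above (the figures are the approximations used in this post, not precise scientific consensus):

```python
# Rough milestone dates from the paragraph above, in years before present.
# The numbers are approximate; the point is the shrinking interval between steps.
milestones = [
    ("single-celled life", 4_250_000_000),
    ("multicellular life", 1_500_000_000),
    ("mammals", 167_000_000),
    ("primates", 55_000_000),
    ("early humans", 2_000_000),
    ("modern humans", 50_000),
    ("civilization", 7_500),
]

for (name_a, t_a), (name_b, t_b) in zip(milestones, milestones[1:]):
    gap = t_a - t_b
    print(f"{name_a} -> {name_b}: {gap:,} years")
```

Each interval is a small fraction of the one before it, which is the lopsidedness I’m pointing at.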

A deep mastering of the mechanics of preservation will probably only occur when we master preserving nature – the previous links in the chain from the human perspective. Many of us already do value other species, but for those that don’t, there’s a lot of indirect utility in humanity getting good at conservation.

To look at the problem another way, a Friendly AI will have a philosophical outlook that is most similar to that of a human conservationist. I’m not talking about the more irrational or trendy forms of environmentalism, but rather a rational, scientific environmentalism focused on species preservation. What primates and other species need from humanity is similar to what humanity needs from AI. (We also want to keep species living in a reasonably natural state, because as humans we’d probably rather not have AI preserving us by putting us into permanent cryo-storage.)

Basically, by thinking deeply about conservation, we take ourselves a lot closer to a successful Friendly AI design and a way to navigate the Singularity.

This reasoning gets even stronger when you think about the environment AI development sits in. Like us, AIs will probably exist in an environment of institutions, economics and possibly even culture. This means AI preservation methods will not just be personal AI philosophies, but encoded in the relationships and organizations between AIs. We’ll need to know what those organizations should be. Human institutions, economics and culture will also shape AI development profoundly. For example, Google’s AI development is centered around the everyday problems it is trying to use AI to solve – search, information categorization, semantics, language and so on. The motives of our AI-focused institutions will shape the motives of the first AIs. To the extent human institutions are environmentally friendly, they will shape AIs that look a lot more like the chain-preserver model we need.

When humans have philosophically, culturally and institutionally encoded Friendly AI into their own existence, they will have a chance to encode it into their replacements. This is why rationalists and scientific thinkers shouldn’t leave the push for conservation to emotionally-based environmentalists; protecting Earth’s species is also an AI insurance policy.

Of course, organisations and people involved or interested in AI don’t arbitrarily determine global environmental policy, but to the extent they have influence in their own spheres, they can try to tie the two sets of values, conservation and technology, together however they can. It may end up making a much bigger difference than expected.

Against museums

Failure can be bad, but the illusion of success is far worse, because we can’t see the need for improvement. I think this applies to our solutions to AI-risk. Therefore we should try to dispel illusions and work towards clarity in our thought. The concepts we rely on ought to be clear and unambiguous, particularly when it comes to something as big as the Singularity and our attempt at forming a chain of preservation. We need to know for certain whether we are creating an illusory chain or the real thing.

If we’re in the business of dispelling illusions, I think a good rule to draw on is that of “the map is not the territory”. Just in case you haven’t encountered it before, it goes something like this – we ought to avoid confusing our representations and images of things with the things themselves.

I like to think of one map-territory confusion in thinking about Singularity survival as the ‘museum fallacy of preservation’. Imagine a museum that keeps many extinct animals beautifully preserved, stuffed and on display. Viewers can see the display, read lengthy descriptions and learn much about the animals as they once existed. In these people’s brains, neural networks activate in a very similar way to how they would activate if the people were looking at live, breathing organisms. But the displays that cause this are only a representation. The real animal is extinct. The map exists but the territory is gone. This remains true no matter how sophisticated the map becomes, because the map is not the territory.

This applies to our chain of preservation. A representation of a human, such as a human-like AI app, is not the human. Neither would an upload or simulation of a human brain’s neural network be human. That’s not to say these things are bad, or that they cannot co-exist with humanity, or that it’s acceptable to show cruelty towards them, or that Uploads shouldn’t be treated as citizens in some form. But for the purposes of preservation they do not represent humanity. Even if we someday find the vast majority of human cognition exists in digital, non-organic form, we will only be preserving humanity by retaining a viable human population in a relatively natural state. That is, retaining the territory, not pretending a map is sufficient.

 

A brain for clarity

There is another map-territory confusion, one I personally found was deeply ingrained and quite intellectually painful to let go of. The problem I’m referring to is our obsessive search for, or rather attempt to justify, the idea of ‘consciousness’. The idea of consciousness is interwoven into many contemporary moral frameworks, not to mention our sense of importance. Because of this we pretend it makes sense as an idea, and wind up using it in theories of AI and AI ethics. Yet I think morality and human worth can stand strong without it (stronger, I think). If you can contemplate morality and human value after consciousness, you tend to stop giving it free passes as a concept, and start noticing its flaws. It’s not so much a matter of there being evidence that consciousness does or doesn’t exist; it’s that the idea itself is problematic and serves primarily to confuse important matters.

Imagine for a moment that consciousness was fundamentally a map-and-territory error. If a straightforward biological organism with a neural network created a temporary, shifting map of the territory of itself, one that by definition is always inaccurate because updating it requires changing the territory it’s stored in, what would that map look like? What if philosophers tried to describe that map? Would it be a concept always just out of our grasp? Would it be near impossible to agree on a definition, as we have found with the philosophy of consciousness? And if you could only preserve either the map or the territory, which would be morally significant in the context of futurism? Would setting out to value and preserve consciousness alone be like protecting a paper map while the lands it represents burn and the citizens are put to the sword?

I think we usually give “consciousness” a free pass on these questions, usually aided by a good helping of equivocation and shifting definitions to suit the context. That sort of fuzzy thinking is something that could shatter the chain completely.

Even if you’re still not convinced, consider this – do you think less sophisticated creatures, like a fish, are conscious? If not, then why would you expect a superior AI intelligence to regard you as conscious in any morally significant way? Consciousness is not the basis for a chain of preservation.

Progress on Progress

We also need to think with more sophistication about the idea of ‘Progress’. When people use this word in the technological sense (sometimes with a capital ‘P’), they sometimes forget they’re using a word with an incredibly vague, fuzzy meaning. In everyday life, if someone says they are making progress, we’d ask them ‘towards what?’ That’s because we expect “progress” to be in reference to a measurement or goal. It’s part of the word’s very definition. Without a goal, the word becomes a hollow placeholder with no actual meaning, like telling someone to move without specifying a direction. We might intuitively feel like there’s some kind of goal, but if we can’t specify one, particularly when we know our intuition is not evolved to make broad societal predictions, shouldn’t we be suspicious of this? Without the goal, progress becomes a childish, fallacious rationalization to justify any sort of future we want, including a primitivist one.

So we have to define a goal to give Progress purpose. But then what if one person’s answer is different to another’s? Is the word meaningless? Perhaps, but only in the sense that it’s serving as a proxy for other ideas, chief of which is technology.

Can we define technology more objectively? I think so. The materials, the location, the size, the complexity of technology all vary – everything apart from its purpose. It seems to me that humans, as biological organisms, have always created technology to help themselves survive and to thrive. By thrive I mean our evolved goals that are closely connected to human survival, such as happiness, which acts as an imperfect psychological yardstick for motivating pro-survival behavior. So ‘technology’ has a primarily teleological definition – it’s the things we create to help us survive and thrive. The human organism itself is the philosophical end, and the technology exists as the means. This is probably a more meaningful definition to use for Progress too.

I call this way of thinking of technology as the means and humanity/biological life as the ends Technonaturalism. Whatever you’d like to call it, a life-as-ends, technology-as-means approach has a lot more nuance to it than either Luddism or Techno-Utopianism. It allows us to grapple with the purpose and direction of technology and Progress, and to compare one technology to another. It doesn’t reduce us to just discussing technology’s advancement or abatement, or generalizing about technology’s pros and cons, which is an essentially meaningless discussion to have.

Technonaturalism basically states that technology’s purpose is to help humanity survive and thrive, to lift life on Earth to new heights. The work of technology isn’t trivial amusement; it’s about putting life on other planets, it’s protecting civilization from existential risk, freeing us from disease, it’s improving our cognition so we can live better, so each of us can lead longer lives that achieve more for ourselves and others. We might enjoy many of its by-products too, but this is what gives technology its real purpose. And for those of us working in technology, it’s what gives us and our work a real purpose.

And a clear purpose is what we’ll need for a Friendly AI, a chain of technological preservation, and a shot at navigating the Technological Singularity. Over the coming years we’re going to see some very disruptive technological changes in the nature of work, and the social pressures that come with that sort of disruption. We’ll face the gauntlet of Luddite outrage as well as the Techno-Utopian reaction against that movement. Let’s sidestep that polarization by infusing our technological direction with a worthy purpose. Our actions today decide whether AI will be our end, or just the beginning of humanity’s journey in the universe.

Image credit:
*https://www.flickr.com/photos/47738026@N05/17015996680
*https://en.wikipedia.org/wiki/File:Black_Hole_in_the_universe.jpg
*https://www.flickr.com/photos/torek/3955359108

My impressions of the 2017 International World Wide Web Conference

The International World Wide Web Conference is a get-together for the world’s web researchers, movers and shakers. It’s an entire week of nerding-out, freaking-out, awkward networking and brain-dumping with everything Web. As luck would have it, the conference was held in Australia this year, so I was able to attend and bask in the warm glow of endless Powerpoint presentations and dizzying chatter about the web.

Keynotes – Automated mail categorization, SKA astronomy data-processing, virtual reality and 3D web.

The three keynotes were all quite different and interesting in various ways. Yahoo Research’s Yoelle Maarek has been working on automated categorization of emails. Apparently this has been a bit controversial in the past, with people tending to revolt at anyone moving their cheese/things in their inbox without asking, but Yoelle’s team has basically learned from this and focused on unobtrusive categorization of just the machine-sent emails. Their algorithms have been mass-analyzing the emails going through their servers, marking common patterns as machine-mail (eg. your flight booking, newsletter subscription mail, whatever comes up in thousands of emails), and then placing them in auto-folders accordingly. Some form of this is already implemented in the top three webmail providers. On the Yahoo mobile app, you can also use a “notifications from people only” option to filter out some of these patterns. Yoelle appeared to take privacy really seriously while doing this processing, which is really nice, although this means parsing common email templates for key fields is now easy for the big webmail providers, so I feel like there is at least some potential for privacy issues to come up around this stuff at some point. Also, I got the idea email is still dead for people-to-people chat and we’re all going to be pressured into using annoying social media if we want to have friends.
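I haven’t seen the actual pipeline, so the following is just a toy sketch of the general idea as I understood it (the fingerprinting and threshold are made up for illustration): templates that recur across thousands of messages get flagged as machine-mail.

```python
from collections import Counter
import re

def fingerprint(email_body):
    """Crude structural fingerprint: strip out the parts that vary per
    recipient (numbers, long tokens like booking codes) so that emails
    generated from the same template collapse to the same key.
    Purely illustrative, nothing like the real system."""
    stripped = re.sub(r"\d+", "#", email_body)
    stripped = re.sub(r"\b\w{20,}\b", "#", stripped)
    return stripped[:200]

def find_machine_templates(emails, threshold=1000):
    """Fingerprints that occur across huge numbers of messages are almost
    certainly machine-sent (flight bookings, newsletters, receipts)."""
    counts = Counter(fingerprint(body) for body in emails)
    return {fp for fp, n in counts.items() if n >= threshold}
```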

Astrophysicist Melanie Johnston-Hollitt gave a nice presentation on the Square Kilometre Array (SKA) and its crazy-large data requirements. If you’ve never heard of the SKA, it’s basically a project to build the largest ever radio telescope array in the deserts of Australia and South Africa. Great for studying the universe, not so good for your spare hard disk drive space. I didn’t catch the exact figures, but the data rate involved is basically more than the entire internet’s traffic (yeah, literally all of it), so they have to pull a bunch of tricks to squash/cut it all down to something manageable. I thought this was kind of out-of-place for this conference keynote, but it’s undoubtedly awesome work that everyone loved to hear about.

Probably the highlight of the conference however was Mark Pesce (co-creator of VRML and a big name in the 3D-web/VR space). Mark is quite a dynamic speaker, and although he’s a little bit sensationalist at times, he’s good at painting the kinds of visionary pictures that the conference would have otherwise lacked. He has been working on something called the Mixed Reality Service (mildly humorous acronym MRS). MRS is a bit like a DNS for geographical spaces, to be used either in virtual worlds or in an augmented reality layer over the real world. I haven’t read the specs for this yet, but I got the impression it broadly works along the lines of sending a server a command like “give me all the web resources associated with the following geographical area”, and it passes you back some URIs registered in that space. As far as I gathered, the URIs could be anything normally found on the web, like for example an HTML document, sound, video, image or data. There are obviously a lot of potential uses, for example AR glasses querying for information about nearby buildings (“what are the opening hours and services in that building over there”) or safety information (“is this work area safe for humans to walk through without certain protective equipment”).
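Since I haven’t read the spec, here’s only a guess at what such a query might look like in practice – the endpoint, parameters and response fields below are invented for illustration, not taken from MRS:

```python
import requests

# Hypothetical example only: ask a service "what web resources are registered
# in this geographic area?" and get back a list of ordinary URIs.

query = {
    "lat": -31.9523,     # roughly the Perth convention centre
    "lon": 115.8613,
    "radius_m": 250,     # everything within 250 metres
}

response = requests.get("https://mrs.example.org/resources", params=query)
for resource in response.json().get("resources", []):
    # Each entry might be an HTML page, video, sound clip or data feed
    # associated with that physical space.
    print(resource["uri"], resource.get("label", ""))
```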

Mark pitched this project as part of a broader vision about knitting reality and the web together, into a more ubiquitous physical-augmented-mixed-reality-web-type-thing. Mark suggested developers and researchers should get on board with what he feels is the start of a new internet (or even human) era. I’m a little skeptical, but with all the consumer VR and AR equipment coming onto the market right now, and the general enthusiasm in the air amongst people working in the area, it’s hard to deny that we’re in the middle of a potentially massive change. There was also mention of the challenges around how permissions and rights would work in a shared VR/AR space. I definitely want to think and probably write more on this topic in the future.

Other cool themes, people and talks

The Trust Factory set of presentations was also quite “big picture” and another major highlight of the conference. I found the W3C people I saw or talked to here to be universally friendly and awesome. They seem to be honestly keen to engage with anyone they can to contribute to the future standards and directions of the web. I particularly liked the work around RDF (an open metadata format/standard that will hopefully become broadly used as big datasets become more and more important) that they’re doing.

The presentation by David Hyland-Wood (Ephox and TinyMCE guy) on Verifiable Claims was extremely informative. Verifiable Claims seems really important, basically because it allows credentials (eg. basic identity, personal info and so on) to be passed around in a way that is both highly reliable (good for security) and protective of privacy (which the conference has reinforced is near non-existent on the web right now). David gave the partly metaphorical example of showing your ID to a bouncer at a nightclub, then using your credit card at the bar, and having your identity stolen because you handed your birthdate, photo and financial details to an untrusted party. VCs appear to be a method to solve this by allowing you to digitally say (in this example) “I’m 18”, then the club queries a trusted third party (“is this guy really 18?”), who verifies this. This happens without you handing over all sorts of unnecessary details like your birthdate, and in a way the club can verify without just taking your word for it. I haven’t had a look at the technical side of how it works, but with the right consumer and/or legislative pressure to back it up, this appears like it could prevent billions of dollars in ID theft and a whole lot of intrusive invasions of privacy. Dave and his wife Bernadette (who also does a whole bunch of stuff in this space) were also really friendly and fascinating to talk to generally.
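Since I haven’t looked at the technical side yet, this is only a toy sketch of the flow from the nightclub example (real Verifiable Claims use proper public-key signatures and a standardized data model; the HMAC and field names here are stand-ins for illustration):

```python
import hmac, hashlib

# Toy sketch of the verifiable-claims flow from the nightclub example.
# A real deployment would use public-key signatures so the verifier never
# needs the issuer's secret; HMAC is used here only to keep the sketch short.

ISSUER_SECRET = b"government-registry-secret"   # held by the trusted issuer

def issue_claim(subject_id, claim_text):
    """Trusted issuer (e.g. a government registry) signs a minimal claim."""
    message = f"{subject_id}:{claim_text}".encode()
    signature = hmac.new(ISSUER_SECRET, message, hashlib.sha256).hexdigest()
    return {"subject": subject_id, "claim": claim_text, "signature": signature}

def verify_claim(claim):
    """The nightclub checks the signature against the issuer - it learns the
    holder is over 18, but never sees a birthdate, address or photo ID."""
    message = f"{claim['subject']}:{claim['claim']}".encode()
    expected = hmac.new(ISSUER_SECRET, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, claim["signature"])

credential = issue_claim("patron-42", "over_18")
print(verify_claim(credential))   # True - admitted without handing over a birthdate
```

The shape of the exchange is what matters: the issuer signs a minimal claim, the verifier checks the signature, and the holder never hands over the underlying personal details.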

I missed out on Bytes and Rights, but I did talk a little with someone from Electronic Frontiers Australia (EFA, the Australian version of the EFF) who was exactly like all the EFF/EFA stereotypes you might imagine, and appeared to be doing well engaging folks with their campaigns. Several guys from consumer magazine Choice were also there. I was surprised how switched on they were. They’ve recently developed a nifty little AR app called CluckAR that allows you to point your camera at “free range” eggs and see some nice animations showing how actually free range they are, based on a fairly comprehensive and accurate dataset they’ve gathered on the topic. It sounds like they’ve got a lot more plans for this sort of web-based product-info system coming in the future.

There were way too many talks/papers to list, but a few that I thought were quite nice included:

  • A really cool concept of a gamified legal simulation to teach skills in self-representation. This is to try to make up for the shocking and increasing shortage of legal aid. The presenter tells me he will post more details on it in the future on his game design/review blog. I think this is a really great project and I hope it draws some support and/or funding.
  • In India, if you can’t afford to train the large number of community workers needed in rural areas, you can always write some custom software that lets you efficiently organise conference calls on people’s feature phones instead. Very nice.
  • There’s work to automatically detect sockpuppets, suicidal tendencies, language associated with bad dietary habits and pretty much anything else you can think of on social media, with mixed results. The sockpuppets could be identified by fingerprinting without IPs, which is probably overall a good thing, even if it’s a bit scary.
  • Electronic Arts is working on modelling and quantifying optimal levels for dynamic difficulty adjustment (ie. make it easier if you fail lots) in games, as a sort of evidence-based blueprint to hand to game design teams. Kind of fulfils the stereotype of EA as a mechanistic mass-production game behemoth, but it was quite interesting and I’m pretty sure I’d be doing the same if I was them.
  • There was some cool work on people cooperatively doing 3D design from their mobile devices, though this is still early days and a little clunky.
  • AI, AR and language processing is at the point where you can build a graphical, immersive natural language simulation for training workers in soft skills, like educational environments or for health-workers interacting with patients. Seems too expensive for most organisations to organise and localise just yet though.
  • One group was working on a cool way to break up Twitter echo-chambers by analyzing the best ways in which polarized groups would listen to tweets from opposing camps. Echo-chambers are a growing problem so I thought this was great.

The papers are all available on the www2017 website, so I’ll let you dig up whatever you’re specifically interested in there.

 

General impression

The location of Perth was apparently considered a risky choice, but the conference got a record number of paper submissions. From what I’ve read of previous conferences there might have been slightly fewer big names in attendance (only slightly though), but one of the most isolated cities in the world was still able to pull in a lot of interesting people.

The conference was dominated mostly by the more technical topics of the web – data analytics, standards, privacy, security, search, data mining, semantic web and so on. If you’re a web designer, you’re probably not missing too much here (with a couple of interesting exceptions), but if you’re into the web in general there was enough info to drown in. I did find that many of the papers were fairly (unnecessarily?) heavy on the maths and statistics, sometimes appropriately, but sometimes not so much.

This statistical focus was a little culturally bounded. The Chinese and Japanese delegates tended to favor maths- and stats-heavy approaches (they also seem to have a lot of platforms to get massive data sets), whereas the Americans, Aussies and Europeans I saw tended to be a bit more mixed between maths/stats and a more conceptual focus. There were quite a few interesting presentations from Indian delegates, us Aussies gave a very good account of ourselves, and there were lots of presentations from a multitude of other countries, but the Americans and Chinese dominated the conference papers in terms of raw numbers. It’s no surprise the US is extremely strong on tech, and China is clearly not messing around in throwing resources and researchers at being a world power in science and tech. I think other countries often fielded some surprisingly great people and ideas though. Regardless of country, almost everyone was very polite and had something to contribute.

I would have liked to see something around the relation between the end of web privacy, employment and freedom of speech, but didn’t notice anything addressing this theme. Overall I’d also say the conference could have used a little more focus and discussion on the big-picture “vision” of the web, though there were enough highlights like the VR/AR discussion to keep things interesting.

AI-risk fail

At one point I got talking to a Google machine learning researcher for 10 minutes or so. Afterwards, being very loosely on the periphery of the Existential Risk / AI-risk crowd (I’m not convinced of the singularity but I think AI-risk is worth thinking about), I had the thought that I might have missed doing my part by not actually talking to someone in the field about AI risk. Luckily (or so I thought at the time), I was late to a conference meal and when I took one of the only seats remaining, I realized I was at a table with the same Google AI guy again. I casually tried to bring up AI-risk as a topic in a very neutral way, pretty much saying something like “how about those CSER people / AI-risk / and that whole singularity thing?”, hoping to get a sense of the appropriate way I could develop the conversation. The guy was pretty cool about it, we had a brief back-and-forth with a few jokes, but I strongly felt something annoyed or upset him about the topic. His general response was along the lines of “oh that? That’s kind of a philosophical question. I’m just trying to get my stuff working!”. He left shortly after (had somewhere to be, apparently), leaving me wondering if I had looked like either a singularity fan-boy or secretly part of some Luddite uprising. His colleague was really friendly and cool about everything, suggested that it’s a cliché thing for non-researchers to bring up, and that it’s a kind of strange thing to be bugged about when you’re just a smart dude struggling to get a PC to answer a simple question. I can totally understand that feeling, even though it’s an important topic in my opinion.

In hindsight I’d say I did more harm than good. I did learn something about an AI-dev perspective, but I probably nudged AI-risk a bit in the direction of “stupid stuff the general public ask about” for this guy. I’d say the lesson here is not to casually chat about this topic with a researcher you just met unless you have loads of status (I was effectively a pleb at this conference), can demonstrate really good knowledge, and spend a lot longer getting to know the person. I’d also suggest trying to be generally less clumsy than I was. I think I was prepared to discuss the topic properly, but I ended up coming across like a bit of an idiot.

Nerds = awkward; conferences = awkward; nerd conference = funny

I also found it darkly amusing observing many awkward moments for me and others at the conference. People have a limited window to connect with some really impressive and knowledgeable people in their field, so there’s a whole lot of amusingly obvious physical positioning to be “incidentally” stopping at the same drinks table as this person or that person. My impression is people also try not to spend too long with or get tangled up with people who aren’t really the person they want to be talking to. I definitely got “blanked”/ignored a couple of times (I wasn’t really the most interesting person there, so that’s fine), and I probably was a bit tight with my attention to cool people who I’d normally be really pleased to be hanging out with. I’m really glad I stuck out the awkwardness because I ended up talking with some awesome people, and I’d encourage anyone else feeling a bit lost at this sort of conference to do so too.

The language barriers make for even more awkward hilarity. Sometimes someone would just have no idea what someone else said. There were multiple instances of nodding and smiling and saying “yes yes” to lengthy questions that definitely required something more than a “yes” or “no” answer, often in front of large groups of observers. Everyone was really cool and friendly with each other about this, even near the end of the week when everyone was feeling exhausted and overwhelmed.

There was some rather amusing nerdy singing and dancing at the conference too (putting aside some much more skillful Aboriginal dancing at a dinner), which I won’t go into in case you ever get a chance to experience it first hand. The next conference (apparently just called the Web Conference from now on) will be in Lyon in France. The French threw just as much of a tourism pitch into their presentation as us Aussies did for our conference, but were naturally really smooth in their delivery. I think the relaxed Aussie style worked really well though, and it seems like it made for a successful conference that combined a relaxed atmosphere and a buzz of new ideas.

 

Thoughts/corrections? Email me!

Please note I don’t use my actual name on this blog.