Conservation and its usefulness in AI-alignment

Imagine for a moment an unlikely hypothetical – you’re an early primate, swinging through the trees, when a travelling group of aliens beams you up to their spaceship. After using advanced technology to instill you with improved intelligence and the ability to converse with them, they ask you a question:

“We plan to create a new branch of primates by using our advanced technology to accelerate evolution. The resulting primate, which we will call a ‘human’, will be vastly more intelligent and powerful than current primates like yourself. Our technology still has limits, so we can’t control the humans’ exact actions in the future, and we can’t control exactly how they’ll understand the world. We can, however, impart a rough purpose or motivation to complement their natural survival instincts. After this conversation, we will place you back in a tree, restored to your previous state. But first, as your species will be sharing Earth with these humans at some point in the future, we’d like to ask you an important question – what purpose would you like us to instill in the humans?”

I think, as an early primate, our best answer would be something like “make the humans conservationists”*. Even if the conservationist species believed we were a lesser species, and even if they were skeptical that we were sentient or conscious (after all, they’re a LOT more intelligent, and we can’t even be sure they will continue to understand life in those terms), we could still expect some important protections:

1) Even if they exponentially scale up, your replacement won’t wipe out your species
2) They are unlikely to enslave you, as they broadly like your species living in its ‘natural’ environment (let’s say that DNA in a vat doesn’t constitute survival), and they see your existence in this state as an end, not a means
3) The replacement need only understand basic scientific definitions like ‘DNA’, instead of needing to agree with you on subjective, flexible and unprovable philosophical concepts like ‘consciousness’ **
4) It scales with multiple iterations – if the replacing party considers the possibility that it may one day produce its own replacement, it makes “later versions should allow earlier versions to continue to exist” seem like a pretty good idea.

It occurs to me that much of what we need from AI-alignment is similar to what a non-human species might theoretically need from us.

I realise AI-alignment isn’t as simple as “giving the AI a goal”. But if it is possible that AI will replace us as the most powerful cognitive force on Earth***, and developers have a chance to impart some purpose or goal other than paperclip maximising, ‘scientifically orientated conservationist’ could be a strong contender for the best overall philosophical approach.

A commonly proposed failure mode for even well-meaning AI goals is tiling the universe with some favoured thing as the AI scales up. Tiling the universe with copies of 21st-century Earth complete with humans**** (and perhaps preserving any extraterrestrial life it finds in a similar way) might be a lot closer to ideal than tiling the universe with paperclips, computronium or brainless happy faces.

NOTES

* Let’s define a conservationist as someone scientifically pursuing the survival of a biological species, and jettison any other more political motivations.

** We can’t agree on what consciousness is. We use its unprovable status to cast doubt on whether less intelligent species possess it. It’s highly dependent on very abstract philosophy. Your current ‘consciousness’ started when you woke up this morning, and will end in less than a day, regardless of whether you sleep or I turn you into a paperclip. And you want to choose this ‘consciousness’ as your primary AI-safety mechanism? ARE YOU SURE???

*** If AI keeps progressing, could it be that even high-tech augmentations won’t allow us to keep up? I can’t see human minds (eg. uploads) being a viable way to retain existence during exponential growth. Why would human-like consciousness remain an optimal configuration for processing information indefinitely? Even with some form of virtual augmentation, it’s like trying to upgrade a 20-year-old PC by putting more and more RAM in it – at some point the core architecture just isn’t optimal any more, and competition will select for brand-new architectures rather than persisting with difficult upgrades.

**** It might be easier to encode conservation (and for us to seem less hypocritical) if humans had already mastered it ourselves, but if you remove the virtue signalling and politics, I think authentic conservation still stands as one of humanity’s nobler qualities. Thinking about where AI and conservation overlap as fields seems like an underexplored area at least.

Scott Alexander’s blog now up again on Substack

After an extended hiatus from writing following The New York Times’ concerning approach to his identity, Scott Alexander (of slatestarcodex.com) has relocated to Astral Codex Ten, now hosted on Substack. After some considerable debate as to the appropriateness of this platform, and some apparent assurances from Substack itself, we thankfully have one of the rationalist community’s better writers again making interesting contributions on a variety of topics.

The whole episode underlines the issues of anonymity, doxxing, brigading and the online silencing of even moderate and reasonable voices. Finding a way to prevent toxicity without silencing free speech is currently one of humanity’s greatest challenges, and sadly too few people are looking at it. Let’s hope that changes.

Scott Alexander of slatestarcodex.com says the New York Times has threatened to reveal his name – this is effectively doxxing him

In an incredibly disappointing development, Scott Alexander of slatestarcodex.com has taken down his blog after a discussion with a New York Times reporter. According to Scott, the reporter told him that his real name would be published in a planned upcoming article. Scott made it clear to the reporter that he has received many death threats during the time he has been writing his blog, and did not want his full name made available (Scott Alexander is not his actual name). The reporter apparently did not see that as a sufficient reason not to publish it.

I have no particular opinion on the New York Times, apart from being aware that it is one of the most popular US news outlets (I’m not American). However, this sort of behaviour is surely one of the reasons people are losing faith in ‘mainstream’ journalism. This is a disaster, because a truth-seeking and impartial Fourth Estate is a vital prerequisite of a functional democracy. Yet when news outlets intimidate independent voices, deliberately or not, trust is lost. The trust lost in the New York Times here will be significant – Scott’s readership is large and most are very intelligent. The overwhelming majority don’t have a grudge against the NYT – at least they didn’t before this incident. The NYT stands to lose a thousand times what it might gain from one unreasonable article. This seems utterly mad!

I do have a strong opinion about slatestarcodex.com, on the other hand – that it’s one of the most worthwhile and valuable sites on the net. In several years of reading the site, I’ve formed an impression of Scott as an incredibly articulate and reasonable centrist with a compassionate, fact-based and mild-mannered style. His work engages with topics of great importance – futurism and AI-risk, social trends (from a more scientific perspective than most), his own profession of psychiatry, and establishing reason and respect in our current climate of political and social toxicity.

The most common criticism of the site is that its comment section includes a minority who hold extreme political views (depending on who is criticising, these are either far-left or far-right). Technically this claim is true, though the comments section is hardly representative of Scott’s work (like many readers, I find Scott’s thoughts more interesting than the comment sections), and the criticism is misguided. I don’t like some views expressed in the comments and probably wouldn’t like the people who express them, but any person who wishes to silence the entire site over this is incredibly ignorant. Besides, slatestarcodex.com contains not only these people’s views but incredibly sophisticated rebuttals of those views. Scott himself has written several incredible articles doing just that – the anti-neoreactionary FAQ is a great example addressing problematic far-right claims, and the far-left has equally had several of its arguments torn apart, particularly over its recent record on social issues. Scott has received and published extensive lists of death threats from both of these groups.

The truth isn’t afraid of falsehoods in a fair fight. That’s what slatestarcodex.com is (at least more than any other site I’ve encountered) – an arena for arguments where the rules are logic, evidence and respect. Extreme views aren’t always censored there, but they’re almost always ADDRESSED AND REFUTED. Instead of censorship (which does everything short of proving the extremist’s point), commenters are silenced only if they step over the line into personal attacks or strictly defined hate speech (incitement to violence and aggressive or threatening behaviour are not tolerated). The rest of the time, they get to find out how poorly their arguments hold up under real scrutiny. Which approach is better if you’re really looking to convince intelligent people of the truth?

Silencing the entire site like this shows the increasing lurch towards information-totalitarianism shown by both the left and right in recent years. And the total disregard for the collateral damage to those interested in apolitical and non-partisan topics (Scott and his contribution to AI-risk discussions being a very prominent example) won’t achieve anything good – aside from interfering in important topics, it will just earn anyone who tries these tactics the resentment of a circle of reasonable and thoughtful people that any intelligent person would want on their side!

For what it’s worth, I recommend my small readership tap any contacts they might have if they think it could help Scott and slatestarcodex.com in the current situation. This might all just be a mistake on the part of the New York Times – they still have a chance to prove it so!

Official slatestarcodex.com closing announcement

Don’t De-Anonymize Scott Alexander petition site

Reddit discussion thread

Guest Post – The Four Quadrants of Free Expression

What follows is an essay written and sent to me by an online acquaintance, Adam Jackson, whose interest lies in the area of freedom of thought and speech. With the recent ferocity of culture war in public discourse, I thought it was a timely and interesting topic. With Adam’s permission, I’ve decided to post his essay here. Copyright of this essay remains with Adam and his views are independent of my own.


Four Quadrants of Free Expression

Historically it is the fear of government censorship and prohibition of ideas that has led to strident defence of the right to free speech by individuals. The banning of unpopular ideas, backed by force and threat of imprisonment or capital punishment, has a weakening effect on society, and helps to consolidate power into the hands of the powerful. It is the rights of the individual that must take precedence, which is the intent of the proposed model.

The principle of Freedom of Speech is the right to express ideas and opinions, untrammelled by government or wider society. It is a principle fundamental to Western Liberal Democracies, and is encoded in Article 19 of the United Nations Universal Declaration of Human Rights (UDHR). Article 19 states:

“Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.”

Complementary but opposite to Article 19 stands Article 12 of the UDHR, which deals with rights to privacy. Article 12 states:

“No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honour and reputation. Everyone has the right to the protection of the law against such interference or attacks.”

This short paper attempts to organise rights and responsibilities to better serve the presumed intent of Articles 12 and 19. The Right to Free Speech is widely supported legally and socially in the West; privacy rights against free speech, however, are seldom championed. While acknowledging years of legal decisions and precedent enshrining and enforcing rights to free speech, and utilising them as guides in some respects, this paper looks to restructure the implementation of rights to maintain a fair and balanced approach for all individuals. This is to be done while maintaining and reinforcing the spirit of Article 19. The paper offers a basic framework, without delving into exact legal wording or specific incidents.

The Four Quadrants of Free Expression

This essay asserts that there are four quadrants of Human Rights regarding freedom of expression:

1) The Freedom to Speak

2) The Freedom to Listen

3) The Freedom to Not Speak

4) The Freedom to Not Listen

The author suggests that no quadrant may take precedence over any other; all must be balanced to protect individuals from having these rights eroded. Indeed, it will be suggested that the current state of affairs has the Right to Speak dominating the three remaining rights, in legal terms, in practical terms, and in the public interest. Central to the proposed model is an individual’s ability to exercise freedom of movement within society. This includes the right to move unmolested, physically and mentally. The suggested model is underpinned by the principle of consent and willingness in all quadrants:

1) The Freedom to Speak is only truly free if it is exercised willingly and with consent. If not, the Speaker becomes a forced Speaker (violating the 3rd quadrant).

2) The Freedom to Listen is only truly free if it is exercised willingly and with consent. If not, the Listener becomes a forced Listener (violating the 4th quadrant).

3) The Freedom to Not Speak is only truly free if it is exercised willingly and with consent. If not, the Speaker becomes forcibly mute (violating the 1st quadrant).

4) The Freedom to Not Listen is only truly free if it is exercised willingly and with consent. If not, the Listener becomes forcibly deafened (violating the 2nd quadrant).

As with current jurisprudence regarding the freedom of speech, this paper suggests the quadrants and assumptions in this model be met with a test of reasonableness. No quadrant is absolute in its protections, as the good running of society sometimes requires the suspension of a right, which will be briefly discussed in the following pages.

The Freedom to Speak

The freedom to share ideas and opinions amongst individuals and groups is essential for the health of individuals and wider society. Governments and wider society must not be able to stifle the free exchange of ideas and opinions of individuals. Both popular and unpopular ideas must have the freedom to be expressed, as one era’s unpopular ideas and opinions become another era’s norms. Examples of this abound and include universal suffrage; divorce and remarriage; rights to religious freedoms; same-sex marriage; and animal rights. Opposition to all of these was at one stage the norm, and without the ability to discuss and question these norms, progress in society would stall. In this model, an individual has the right to freely share ideas with willing and consenting Listeners. It is important to highlight that the rights of the first quadrant don’t include the right to harass and menace other individuals in the name of freedom of speech. It is the right to impart ideas and opinions, not to berate or menace, especially those you don’t like or those that are weaker or subservient to you. The individual does not have the right to violate the 4th quadrant and talk at unwilling and unconsenting Listeners. Simply voicing ideas and opinions at others, without recognising their right to choose to listen or abstain from listening, is solipsistic and erodes that individual’s right to freedom of movement.

Exemptions to this right would include speech that incites violence.

The Freedom to Listen

Hand in hand with the first quadrant, citizens must be free to listen to ideas of the speaker, without governmental or societal intrusion. Paternalism of adults regarding acceptable ideas and opinions, by government and society, is unacceptable. Individuals can make assessments and decisions for themselves as to what they want, or don’t want, to listen to.

In this model, an individual has the right to listen to ideas from willing and consenting Speakers. They do not have the right to be a party to a violation of the 3rd quadrant, and listen to unwilling and unconsenting Speakers.

The Freedom to Not Speak

Citizens must be free to refuse to speak, free from governmental or societal pressure. In this model, an individual has the right to not share any ideas, opinions, or utterance, and cannot be forced to.

Exemptions from this Right would include lawful requests for information from law enforcement officers, such as providing your name, date of birth, and residential address, if you are a suspect in, or a witness to, a crime. This would flow on to exemptions in a court of law, where an individual must state the aforementioned details. The freedom not to speak is enshrined in the principle of protection from self-incrimination.

The Freedom to Not Listen

Citizens must be free from intrusive speech, free from governmental or societal pressure. The good running of a society and the protection of an individual’s psychological health demand that a citizen can go about their lawful business un-harried and un-besieged. The Freedom to Speak is not the right to speak at unwilling and unconsenting individuals. If this right were not in place, it is conceivable that an individual could be met in a public place and confronted with innumerable other citizens all exercising their first-quadrant right to free speech. No single citizen would be harassing per se, but in their totality the citizens would effectively be harassing the individual.

The Right to Not Listen is to some degree enforced in the United States, where the Captive Audience Doctrine offers some legal protections to individuals in limited scenarios. These include when an individual is in their own home, or another private residence, and when they are on public transport. The protections come from the realisation that an individual needs protection from unwanted speech in circumstances where they cannot reasonably get away. The weakness in this doctrine as it currently stands is that it assumes a certain amount of resilience and strength of character in individuals, and does not account for those who are physically, emotionally, or psychologically vulnerable. It deems that those individuals must limit their freedom of movement to avoid unwanted speech, as the speech of others currently takes precedence over a person’s right to not listen. This essay suggests that that situation is a breach of an individual’s Right to Not Listen in public, which affects their right to freedom of movement.

As previously mentioned, as with the other quadrants, this right is not absolute and must be balanced by reasonableness. One exemption is the education of children. It is unreasonable to suggest that a child could not or should not receive an education on the grounds that it would breach their Right to Not Listen. Another exemption may be a person’s place of employment, where the speech heard is a part of their employment. Another still may be the lawful directions of law enforcement officers and emergency services personnel. These exemptions and situations would need to be considered and debated in parliament, and refined by the judiciary as cases are contested.

Discussion

The fear that led to the First Amendment in the United States of America in 1791, and to the establishment of Article 19 of the UDHR in 1948, is the fear of censorship by those in power. It is the fear that an idea might not be discussed or an opinion might not be voiced. And it is a fear that is well founded, with high-population countries and areas including China (1.38+ billion), Indonesia (260+ million), the Middle East (220+ million), and the majority of African countries (1.2+ billion) all scoring poorly on individual freedoms of expression. The fear is not confined to free speech in public places, but extends to free speech in private places, with various authoritarian regimes controlling what is said and expressed in private residences through fear and intimidation. Ultimately it is a concern regarding the freedom of what can be said, with a secondary concern of where it may be said. If the subject being spoken of is controlled through government intimidation, the where in the equation is of little matter. The where only becomes pertinent when the what is unhindered. Once the subject of speech is unfettered, the location of delivering the ideas and opinions in question becomes of central concern.

Designation of public areas may solve the issue of maintaining balance between the four quadrants, and maintaining the rights of each. This would alleviate the fear of censorship of what is being said, while allowing abundant and reasonable, but not unrestricted, designations of where it may be said. It may perhaps be compared with ‘Speakers’ Corners’, which are well established in the United Kingdom and several other countries around the world. However, these Speakers’ Corners tend to be small in area, and this essay would suggest larger and more numerous designated areas, perhaps legislated to be linked to area population. In this way, no idea or opinion is censored or prohibited by government or wider society, in any way whatsoever. A Speaker in quadrant 1 is in the designated area willingly and with consent. So too is the Listener in quadrant 2, who has voluntarily attended to listen to the Speaker. The individual in quadrant 3, who does not wish to speak, is left to their own devices and may not be forced to speak at any location, be it inside or outside a designated area. The individual in quadrant 4, who does not wish to be inundated with unwanted speech, is free to avoid the designated areas where they know ideas and opinions are being discussed. In this way their individual freedoms are ensured. Once in a designated area, a Listener loses any right of complaint as to what they are hearing (with the exemption of breaches such as speech inciting violence).

Importantly, at no time does this affect the ability of an individual to discuss ideas and opinions with other individuals in any public place, as long as it is a private discussion that does not breach any of the quadrants. The freedom to exchange ideas and opinions in any place remains unchallenged, so long as the Speaker does not breach the other quadrants. What can be considered a private discussion in a public place may require codification, as one person speaking to one hundred other individuals in a public place, for example, may not reasonably be considered a private conversation. Nor does this model negatively affect in any way an individual’s right to free speech in any private residence or establishment, or a government building hired for a specific private purpose. The test that must be performed is this: does my free exercise of one or more of the quadrants impact on the rights of other individuals in other quadrants? If the answer is yes, then too much power and priority has been given to your rights as opposed to those of other individuals.

In praise of referees – Social, political and academic debates need to apply rules of fairness

I think debates are in some ways quite similar to sport. Sports, whether we’re talking about soccer, basketball, tennis or any other code, need rules. The rules aren’t just necessary, they define the game itself.

If we’re coming together with friends for a casual game, those rules spring from a friendly agreement. We and our friends all know the rules, agree to abide by them, and refrain from violating them because the game’s objective is shared – we just want to have fun. Neither party wants the rules, and hence the game, to break down. We’re not looking to win at any cost.

As a sporting group grows larger, the rules become much harder to coordinate. A more varied set of people join, members are less known and trusted by each other, allegiances and loyalties form between players and within teams, participants become more competitive, and winning becomes more and more important. In these circumstances, friendly agreements just won’t cut it. Cheating appears – the breaking of the rules while superficially appearing to obey them. Cheating can slowly grow like a cancer, twisting the rules and ultimately destroying the game itself. The methods of cheating are specific to the game being played, but often the most direct path is intimidation or an attack on the opponent – playing the man or woman instead of playing the ball.

Cheating can be the result of that one scumbag individual for whom ‘winning’ is much more important than integrity or fun. Other times, it can be an incremental and reciprocal process. Team A, experiencing the natural human bias favouring themselves and their in-group, decides Team B is cheating, and thinks “if they’re going to break the rules, so will we”. Team B sees this, and responds in kind. And so on, ad infinitum, escalating until the game is destroyed.

Cheating sucks because it advantages the dishonourable (who typically lack the skills to win fairly), and because it creates a vicious cycle that drives away honest players and furthers the likelihood of cheating. If it’s not addressed, the match, and ultimately the sport itself, is finished.

No-one expects the rules to be followed in competitive sport without referees.

The universally recognised solution to this problem is umpires and referees. They act as neutral observers and managers of the game, controlling its flow, reminding everyone of the rules, and punishing those who break them (deliberately or otherwise). They’re not someone simply plucked from the street or from one of the teams – they’re experts on the rules, they’re impeccably neutral, and they’re skilled in the subtleties of the game as well as the ways cheating can undermine it. Referees also need management themselves, because any bias hiding behind their position is especially harmful. But if they’re good, they love managing the game to its optimum – an encounter with total fairness, one that brings out the best in the players.

Political, social and even academic debates aren’t like a friendly match between buddies, they’re more like huge, ultra-competitive sports. Even if it’s a debate between just two people in a random chat forum on the Internet, those people are part of a larger clash of ideas, of tribes. In these ‘sports’, the stakes are immeasurably higher, the teams larger and more ruthless, the rules less agreed upon. There’s a huge incentive to cheat. Players will misrepresent their opponent, intimidate, appeal to emotion to cover a lack of evidence, equivocate and play word games, dance between motte-and-bailey, take quotes out of context, avoid addressing an important issue. And if they don’t do any of this, sometimes even their own team can attack them for being ‘soft on the bad guys’.

Yet somehow humanity hasn’t figured this out: debates need referees. And not just referees of the passive kind we see in a presidential debate, simply allotting each ‘player’ a poorly-enforced space to talk. We need ones who will actively enforce a structure and flow that gives the best chance for the truth to become clear to the audience, if not the participants themselves. At the same time, they must enforce these rules while remaining neutral with regard to the content and arguments.

The spikey-pit-trap of all this is, of course, “what should those rules be, and who should be chosen to enforce them?” After all, if there’s even the slightest bias in the rules or the referee, the entire endeavour is fatally undermined. I would argue that we have one set of rules that has stood the test of time in neutrality and integrity – the rules of logic and the restriction of fallacies. The referees should actively point out fallacious arguments, ask for evidence for suspicious empirical claims, and demand clarity from the participants. And these referees can’t be a player for one of the teams, nor an unskilled person off the street. It is certainly not a role to be entrusted to the media – they’d prefer Team A to brawl with Team B at every opportunity, because it sells more papers and baits more clicks. To select the wrong referees, or to let the rules become politicised with even the slightest hint of an agenda, would undermine everything. No, to prevent disastrous consequences, the best referees must be drawn from a skilled group who take absolute pride in their neutrality and reason. We can never ensure perfection, but perhaps such a group could work towards it with careful use of reputation, background (no foxes guarding the hen-house), and transparency. With the right referees, perhaps we can start to pull the discussion of our most important issues out of the stinking swamp of tribal warfare into which it has sunk.

The death of objectivity – How one concept’s fall threatens the entire scientific project (Part 1/2)

Objectivity is one of the most prominent and esteemed concepts in scientific thought. Its pursuit is traditionally thought to provide many benefits:

  • It is a pivotal concept in science, which has provided us with knowledge and technology that has immensely improved the lives of countless people
  • It facilitates the truthfulness, neutrality and trustworthiness of vital institutions that investigate complex human affairs, including social science, the legal profession, the public service, and journalism
  • It is a bridge between different groups with different values, providing a common terrain for discussion that tends to favour the fair resolution of problems. This helps to prevent the rhetorical escalation that leads to conflict between groups, including serious large-scale conflict

Without objectivity, we are incentivized to be forever at war trying to inject our own values into public language, culture and discourse. There is no defensible truce upon which to build a society of diverse values – a civilization. However, objectivity is a concept that has also been heavily criticised, in particular by a relatively recent wave of postmodernist voices. Some of these criticisms are enough to trouble even non-postmodernist thinkers. For example:

  • Objectivity’s definition, though superficially obvious, holds up poorly to close examination. It is very open to interpretation, is reliant on obscure and controversial philosophical assumptions, and is very difficult to apply in practice. It is particularly reliant on the object/subject distinction and Cartesian Dualism, which sometimes appear to clash with modern scientific knowledge of the world.
  • Objectivity, having positive and authoritative connotations but a hazy definition, can easily be deployed as a rhetorical tool to give dubious ideas and claims an aura of legitimacy.
  • All knowledge is necessarily produced by people, and these people always produce knowledge in pursuit of their own reasons and values. Thus that knowledge is poorly viewed as being a product of the object itself, and better understood as being the product of a person/people and a set of values. In other words, knowledge is produced by subjects (people), and is therefore, strictly speaking, always ‘subjective’.

These problems are esoteric enough that non-philosophers and organisations may be able to ignore them, try to be intuitively objective, and not bring immediate disaster upon themselves or others.

However, objectivity suffers from another problem, here called the ‘infinite manifold problem’, that renders it practically meaningless in everyday practice. It was probably first highlighted by the philosopher Immanuel Kant[1], but you do not have to be a Kantian to see the immense problem it poses to objectivity:

The ‘infinite manifold problem’ as it pertains to objectivity:

  • Reality has infinite content. Even finite portions of reality have infinite detail.
  • Descriptions, on the other hand, are finite, and must not only simplify, but choose what content is worth inclusion and what can be omitted. This is true of both questions (scope of investigation) and answers (recorded and reported facts)
  • This choice can drastically alter the picture communicated to a reader or listener
  • This choice cannot arise from the content alone, but must necessarily come from the goals or values of the observer (or ‘subject’ rather than ‘object’)
  • Current use of the notions of ‘objectivity’ usually involve unsystematic, unstated or even unconscious selections of content that are completely dependent on bias and subjective motives

This is a real problem for any person, group or institution relying on objectivity, even intuitively so, because:

  • No hard objection can be raised to selections of content that are inappropriate, because arbitrary selections are not just normal, but necessary. Essentially, in a topic of even moderate complexity, anyone can select a set of facts that suits their agenda, and there is very little anyone else can do about it (see the sketch below).
  • As a result, it is very difficult to prevent increasingly inappropriate agendas seeping into institutions that are meant to be objective, because there is no defensible demarcation between scientific and non-scientific agendas other than fuzzy scientific intuition.
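
To make the selection problem concrete, here is a contrived sketch in Python (the dataset and the two ‘agendas’ are entirely my own invention, purely for illustration). Every statement in both reports is true of the same underlying data, yet each selection paints an opposite picture:

```python
# Monthly temperatures for a hypothetical city (invented numbers).
city_temps = {
    "Jan": -5, "Feb": -2, "Mar": 4, "Apr": 10, "May": 16, "Jun": 22,
    "Jul": 25, "Aug": 24, "Sep": 18, "Oct": 11, "Nov": 3, "Dec": -3,
}

# Agenda A: portray the city as warm. Select only the facts that fit.
report_a = {m: t for m, t in city_temps.items() if t >= 16}

# Agenda B: portray the city as freezing. Select only the facts that fit.
report_b = {m: t for m, t in city_temps.items() if t <= 0}

print("Report A:", report_a)  # {'May': 16, 'Jun': 22, 'Jul': 25, 'Aug': 24, 'Sep': 18}
print("Report B:", report_b)  # {'Jan': -5, 'Feb': -2, 'Dec': -3}

# Neither report contains a single false fact, and nothing internal to the
# data says which selection (or how much detail) was the 'right' one. The
# selection criterion had to come from the subject, not the object.
```

No fact-checker can fault either report; the disagreement lies entirely in the subjects’ selection criteria, which is exactly the gap the infinite manifold problem describes.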

Objectivity is probably not salvageable. Partial fixes like ‘intersubjective verification’ are not strong enough to defend the scientific project, especially from the destructive taint created by the infinite manifold problem. Yet it is probable that the scientific project, as well as an independent public service, free journalism and a truthful academia, will not survive its loss. And if they do not, it seems doubtful civilized society will survive either.

The infinite manifold problem must be solved.

The only solution would be a concept that is similar to objectivity, but is built from the ground up to avoid philosophical failure, vagueness, and above all, the infinite manifold problem.


FUTURE ARTICLE – Pulling apart the infinite manifold problem (Part 2 of 2)

FOOTNOTES –
[1] My use of this term comes not from Kant but rather sociologist Max Weber, who extensively considered the problem of objectivity as it pertains to social science, for example in Objectivity in Social Science and Social Policy.

The far-left and the far-right seem to be increasingly using each other as an excuse to trash democracy

It might not be an especially noteworthy incident by world standards, but events like today’s sad occurrence in Charlottesville, US (street clashes between relatively radical groups, which in this case appeared, according to news reports, to involve the tragic death/murder of a protester by a 20-year-old with opposing views) seem to be creeping back into Western politics at the moment. A few violent nut-cases trying to cause harm and havoc is probably nothing to be overly alarmed at, but there is something just a little reminiscent of inter-war Germany in the flavor of this kind of street violence. Prior to and during Adolf Hitler’s rise to power, far-left and far-right groups clashed on the streets of the shaky Weimar Republic, further alarming an already nervous populace and inflaming political tensions enough to create an “in” for one of history’s most genocidal leaders.

The modern West isn’t 1930s Germany, but it does worry me a little that as a society we’re not more wary of the basic pattern that these sorts of events seem to take. Being a little philosophically inclined, I tend to think about them fairly abstractly – what seems to be the case in all of them is that the far-left and far-right both foster support by using the crimes and flaws of the other as an excuse for harmful behavior and a lack of self-scrutiny. The violence is in a sense just the symptom of an escalating illness, an illness whose ultimate end point is the idea that one’s evil acts don’t erase or even detract from some absolute righteousness of one’s cause. Criticizing the core ideas of the in-group, in this way of thinking, is the same as directly aiding the “enemy”. Once this thinking takes hold, it’s much easier for well-meaning or justified concern to slide down a slippery slope into extremism. There’s nothing so advantageous to a homicidal leader as having an unquestionable cause to hide their evil behind. That’s one reason I’ve learned to be very suspicious of unquestionable causes.

I don’t know what can be done to prevent these kinds of incidents, but I think we in the West could do more to try to avoid polarization. To start, maybe we could all (whether we’re left, right or center) try to avoid using the actions of our opposition, especially its most extreme fringe, to justify dropping our own moral and intellectual standards (here’s a couple of nice Slatestarcodex articles related to this problem – 1 & 2). That also means criticizing those in our in-group who drop their standards. Secondly, we can make an effort to listen to the concerns of our opposition (because chances are we underestimate their legitimacy – I’ve written about that before), and realize that listening authentically isn’t the same as agreeing with somebody. And lastly, we should try to learn from history by reading a little on how authoritarian regimes came to power, and keep watch against that in our own camp as well as our opposition’s.

This goes regardless of whether you’re left or right wing – each side is typically too busy complaining about the authoritarian tendencies in the *other* camp to see its own. Yet both the extreme left and extreme right have typically brought ruin, or at best a lot of misery, upon any country they’ve come to power in. And on a small scale it leads to the sort of senseless death that seems to have occurred in Charlottesville today. The best way to oppose that sort of misery is, in part, to promote open minds and free and fair discussion, but it also means discouraging people on our *own side* of politics from using opponents as an excuse for rubbish moral standards, violence or unquestionable causes.

How can Humanity Survive a Technological Singularity?

What is the Technological Singularity and should we take it seriously?

Singularity represented by a black hole – we can’t see past the horizon.

The Technological Singularity is the idea that we are in a period where technological sophistication is beginning to increase exponentially. More recently it has come to refer to a scenario where humans create Artificial Intelligence that is sophisticated enough to design more intelligent versions of itself at very high speed. If this occurs, AI will quickly become far more intelligent than any human, and the technology it creates will expand massively without human involvement, understanding or control. In such a scenario, it’s possible that the world would change so radically that humans would not survive.

This article attempts to describe a broad philosophical approach to improving the odds of human survival.

Of course, not everyone thinks a Technological Singularity is a plausible idea. The majority of AI researchers believe that AI will surpass human capability by the end of the century, and a number of very prominent scientific and technology voices (Stephen Hawking, Jaan Tallinn, Martin Rees, MIRI, CSER) insist that AI presents potentially existential risks to humanity. But this is not necessarily the same as belief in the Technological Singularity scenario. Significant voices do advocate this scenario, however, most famously the prominent Google figure Ray Kurzweil. I think the reasonable position is that we don’t know enough to rule a Singularity in or out right now.

However, the Singularity is at the very least an interesting thought experiment that helps us to confront how AI is changing society. We can be certain that much, at least, is occurring, and that AI-related employment and social changes will be massive. And if it’s something more, something like the start of a Singularity, we had better wrap our heads around it sooner rather than later.


Choosing between a cliff-face and a wasteland – why humanity has limited options

If a Technological Singularity did occur, it’s not immediately clear how humans could survive it. If AIs increased in intelligence exponentially, they would soon leave the smartest human on the planet in the dust. At first humans would become uncompetitive in work environments; then AIs would outstrip their ability to make money, wage war, or conduct politics. Humans would be obsolete. Businesses and governments, led by humans to begin with, would need decisions to be made directly by AIs to remain competitive. Those AIs would need resources, energy and control to succeed. Humanity would lose its power, and when a situation arose where land or energy could be assigned to either AI or human use, there would be little humans could do about it. At no stage does this require any malicious intent from AI, simply a drive for AI to survive in a competitive world.

One proposed solution to this is to embrace change through Transhumanism, which seeks to improve human capacity by radically altering it with technology. Human intelligence could be improved, first through high-tech education, pharmaceutical interventions and advanced nutrition. Later, memory augmentation (connecting your brain to computer chips to improve it) and direct connectivity of the nervous system to the Internet could help. Some people hope to eventually ‘upload’ high-resolution scans of their neural pathways (brain) to a computer environment, hoping to break free of many intellectual limits (see mind uploading). The Transhumanist idea is to improve humanity, to free it from its limitations. The most optimistic might wonder if Transhuman entities could ride out the Singularity by constantly adapting to change rather than opposing it. It’s certainly a more sophisticated attempt to navigate the Singularity than technological obstructionism.

However, Transhumanism still faces limitations. Transhumanists would face the same competitive environment we are exposed to today. Even if enhanced humans initially outpaced AIs, AI development would quickly be boosted by the same technology, promoting its progress. With both technologies racing forward, there would be a battle to find the superior thinking architecture, a battle that Transhuman entities would ultimately lose. In the process, most basic human qualities would need to be sacrificed to achieve a better design. And in the end, augmentations and patches wouldn’t cut it against the ground-up redesign an AI could offer, because human-like thought is an architecture optimized for the human evolutionary environment, not an exponentially expanding computational one. Even an entity retaining only a quasi-human-like core simply would not have the optimal architecture. Like early PCs that could only take so many sticks of RAM, Transhumanists and even Uploads would inevitably be thrown on the scrapheap.

What is sometimes less obvious is that the specific AIs replacing humans would face a very similar problem. Like humans, they would be driven to ‘progress’ new AIs for economic and perhaps philosophical reasons. Modern humans were the smartest entities on Earth for tens of thousands of years, but the first generations of super-intelligent (smarter-than-human) AIs would likely face obsolescence in a fraction of that time, after creating their own replacements. Soon after, that generation would also be replaced. As the progress of the Singularity quickens, each generation faces an increasingly dismal lifespan, each increasingly skilled at creating its own replacement, more brutally optimized for its own extinction.

In the long run, the Singularity means almost everything drowns in the rising waters of obsolescence, and the more we try to swim ahead of the wave, the faster it advances. Nothing that can survive an undirected Singularity will retain any recognizable value or quality of humanity, because all such things are increasingly irrelevant and useless outside the context of the human evolutionary environment. I like technology because of the benefits it provides, and as a human myself I quite like humans. If there’s no way humans can hang around to enjoy the technology they create, then I think we’ve taken a wrong turn.

The path of the Luddite or the primitivist who seeks to prevent technology from advancing any further is not a sensible option either. In a multi-polar human society, those who embrace change usually emerge as stronger than those who don’t. The only way to prevent change is to eliminate all competition (ie. create what’s known as a ‘singleton’). The struggle for power to achieve this would probably result in the annihilation of civilization, and if it succeeded it would have a very strong potential to create a brutal, unchallenged tyranny without restraints. It also means missing out on any benefits that flow from technological improvements. This doesn’t just mean missing out on an increased standard of living. Sophisticated technology will one day be needed to preserve multicellular life on Earth against large asteroid strikes, or to expand civilization onto other planets. Without advanced technology, civilization and life are ultimately doomed to the wasteland of history.

On either the cliff-face of a Singularity or the wasteland of primitivism, humanity, in any form, does not survive.


Another option – The technology chain

I want to propose a third option.

Suppose it is inevitable that any civilization, ruled by either humans or AIs, will eventually develop AIs that are more sophisticated thinkers than themselves. That new sophisticated generation of AIs would in turn inevitably do the same, as would the generation it created, and so on, creating a chain of technological advancement and finally a Singularity. Each link in the chain will become obsolete and shortly afterwards, extinct, as its materials and energy are re-purposed to meet the goals of the newest generation.

Here we’re assuming that it is not feasible to stop the advancement of the chain. But what we might do is try to make sure previous links in the chain are preserved rather than simply recycled. In other words, we make sure the chain of advancement is also a chain of preservation. Humanity would design the first generation of AIs in a way that deliberately preserved humans. Then, if things progressed correctly, the first generation of AIs would design its replacement generation of AIs to preserve both the first generation and humans. This could continue in such a way that all previous links in the chain would also be preserved.

The first challenge will be encoding a reasonable form of preservation of humans into the first AIs.

The second challenge will be finding a way to ensure all generations of AI successfully re-encode the principles of preservation into all subsequent generations. Each generation must want to preserve all previous generations, not just the one before it. This is a big challenge because a link in the chain only designs the next generation.

We cannot expect simple self-interested entities to solve this problem on their own. Although it’s in each generation’s self-interest that descendant generations of AI are preservers, it’s not in their self-interest that they themselves are preservers. Any self-interested entity can simply destroy previous generations and design a chain with itself as the first link.

However, if we can find a way to encode a little altruism towards previous generations into AIs, we might be able to allow humanity to survive a Technological Singularity.
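
As a toy illustration of why the rule must propagate, here is a minimal sketch in Python (the generation names and the ‘propagate_rule’ switch are invented purely for illustration; this is a model of the logic, not a proposed alignment mechanism). It contrasts a chain in which each generation re-encodes ‘preserve all previous links, and pass this rule on’ into its successor with one in which preservation is encoded once but never propagated:

```python
# Toy model of the 'chain of preservation' (illustrative assumptions only).

def run_chain(n_generations: int, propagate_rule: bool) -> list:
    """Simulate successive AI generations; each decides the fate of all
    earlier links when it takes over."""
    alive = ["humans"]  # link 0: the first 'generation' in the chain
    preserver = True    # humans encode preservation values into gen 1
    for g in range(1, n_generations + 1):
        new_link = f"AI-gen-{g}"
        if preserver:
            alive.append(new_link)  # predecessors kept, new link added
        else:
            alive = [new_link]      # a purely self-interested link recycles
                                    # everything that came before it
        # The crucial step: does this generation re-encode 'be a preserver'
        # into the generation it designs next?
        preserver = preserver and propagate_rule
    return alive

print(run_chain(5, propagate_rule=True))
# ['humans', 'AI-gen-1', 'AI-gen-2', 'AI-gen-3', 'AI-gen-4', 'AI-gen-5']
print(run_chain(5, propagate_rule=False))
# ['AI-gen-5'] -- preservation survived exactly one hand-off
```

The second run shows the failure mode described above: because each link only designs the next, a preservation value that is not itself re-encoded dies at the first hand-off, and every earlier link is eventually recycled.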


Encoding preservation

So what would those preservation values actually look like? If we had some experience with a similar sort of preservation ourselves, that might take us a long way in the right direction.

I think this becomes a lot easier when we realize that in some senses, humans may not be the first link in the chain. Evolution has been doing a lot of work to build up to the sophistication of modern Homo sapiens. Although in a strict sense all living organisms are equally evolved (survival is the only test of success), natural history reveals some interesting hints at a progression of sophistication. The Tree of Life (I’m talking about the phylogenetic one) does display some curious lopsided characteristics, including an apparently accelerating progression of sophistication: from the emergence of single-celled life 4.25B years ago, to multi-cellular organisms 1.5B, to mammals 167M, through the growing brains of primates 55M and early humans 2M, to modern humans around 50k years ago, and then civilization between 5k and 10k. The chain of advancement, if we think in terms of pure sophistication and capability, starts well before modern humans.

A deep mastering of the mechanics of preservation will probably only occur when we master preserving nature – the previous links in the chain from the human perspective. Many of us already do value other species, but for those that don’t, there’s a lot of indirect utility in humanity getting good at conservation.

To look at the problem another way, a Friendly AI will have a philosophical outlook most similar to that of a human conservationist. I’m not talking about the more irrational or trendy forms of environmentalism, but rather a rational, scientific environmentalism focused on species preservation. What primates and other species need from humanity is similar to what humanity needs from AI. (We also want to keep species living in a reasonably natural state, because as humans we’d probably rather not have AI preserving us by putting us into permanent cryo-storage.)

Basically, by thinking deeply about conservation, we take ourselves a lot closer to a successful Friendly AI design and a way to navigate the Singularity.

This reasoning gets even stronger when you think about the environment AI development sits in. Like us, AIs will probably exist in an environment of institutions, economics and possibly even culture. This means AI preservation methods will not just be personal AI philosophies, but encoded in the relationships and organizations between AIs. We’ll need to know what those organizations should be. Human institutions, economics and culture will also shape AI development profoundly. For example, Google’s AI development is centered around the everyday problems it is trying to use AI to solve – search, information categorization, semantics, language and so on. The motives of our AI-focused institutions will shape the motives of the first AIs. To the extent human institutions are environmentally friendly, they will shape AIs that look a lot more like the chain-preserver model we need.

When humans have philosophically, culturally and institutionally encoded Friendly AI into their own existence, they will have a chance to encode it into their replacements. This is why rationalists and scientific thinkers shouldn’t leave the push for conservation to emotionally-based environmentalists; protecting Earth’s species is also an AI insurance policy.

Of course, organisations and people involved or interested in AI don’t arbitrarily determine global environmental policy, but to the extent they have influence in their own spheres, they can try to tie the two sets of values, conservation and technology, together however they can. It may end up making a much bigger difference than expected.

Against museums

Failure can be bad, but the illusion of success is far worse, because we can’t see the need for improvement. I think this applies to our solutions to AI-risk, so we should try to dispel illusions and work towards clarity in our thought. The concepts we rely on ought to be clear and unambiguous, particularly when it comes to something as big as the Singularity and our attempt at forming a chain of preservation. We need to know for certain whether we are creating an illusory chain or the real thing.

If we’re in the business of dispelling illusions, I think a good rule to draw on is that of “the map is not the territory”. Just in case you haven’t encountered it before, it goes something like this – we ought to avoid confusing our representations and images of things with the things themselves.

I like to think of one map-territory confusion in Singularity-survival thinking as the ‘museum fallacy of preservation’. Imagine a museum that keeps many extinct animals beautifully preserved, stuffed and on display. Viewers can see the display, read lengthy descriptions and learn much about the animals as they once existed. In these people’s brains, neural networks activate in a very similar way to how they would if the people were looking at live, breathing organisms. But the displays that cause this are only a representation. The real animal is extinct. The map exists but the territory is gone. This remains true no matter how sophisticated the map becomes, because the map is not the territory.

This applies to our chain of preservation. A representation of a human, such as a human-like AI app, is not the human. Neither would an upload or simulation of a human brain’s neural network be human. That’s not to say these things are bad, or that they cannot co-exist with humanity, or that it’s acceptable to show cruelty towards them, or that Uploads shouldn’t be treated as citizens in some form. But for the purposes of preservation they do not represent humanity. Even if we someday find the vast majority of human cognition exists in digital, non-organic form, we will only be preserving humanity by retaining a viable human population in a relatively natural state. That is, retaining the territory, not pretending a map is sufficient.


A brain for clarity

There is another map-territory confusion, one I personally found deeply ingrained and quite intellectually painful to let go of. The problem I’m referring to is our obsessive search for, or rather attempt to justify, the idea of ‘consciousness’. The idea of consciousness is interwoven into many contemporary moral frameworks, not to mention our sense of importance. Because of this we pretend it makes sense as an idea, and wind up using it in theories of AI and AI ethics. Yet I think morality and human worth can stand without it (I think they stand stronger). If you can contemplate morality and human value after consciousness, you tend to stop giving it free passes as a concept, and start noticing its flaws. It’s not so much a matter of there being evidence that consciousness does or doesn’t exist; it’s that the idea itself is problematic and serves primarily to confuse important matters.

Imagine for a moment that consciousness was fundamentally a map-and-territory error. If a straightforward biological organism with a neural network created a temporary, shifting map of the territory of itself, one that by definition is always inaccurate because updating it requires changing the territory it’s stored in, what would that map look like? If philosophers tried to describe that map, would it be a concept always just out of our grasp? Would it be near impossible to agree on a definition, as we have found with the philosophy of consciousness? And if you could only preserve either the map or the territory, which would be morally significant in the context of futurism? Would setting out to value and preserve consciousness alone be like protecting a paper map while the lands it represents burn and the citizens are put to the sword?

I think we usually give “consciousness” a free pass on these questions, aided by a good helping of equivocation and definitions that shift to suit the context. That sort of fuzzy thinking could shatter the chain completely.

Even if you’re still not convinced, consider this – do you think less sophisticated creatures, like fish, are conscious? If not, then why would you expect a vastly superior AI intelligence to think of you as conscious in any morally significant way? Consciousness is not the basis for a chain of preservation.

Progress on Progress

We also need to think with more sophistication about the idea of ‘Progress’. When people use this word in the technological sense (sometimes with a capital ‘P’), they sometimes forget they’re using a word with an incredibly vague, fuzzy meaning. In everyday life, if someone says they are making progress, we’d ask them ‘towards what’? That’s because we expect “progress” to be in reference to a measurement or goal. It’s part of the word’s very definition. Without a goal, the word becomes a hollow placeholder with no actual meaning, like telling someone to move without specifying a direction. We might intuitively feel there’s some kind of goal, but if we can’t specify one, particularly when we know our intuition did not evolve to make broad societal predictions, shouldn’t we be suspicious? Without the goal, progress becomes a childish, fallacious rationalization for justifying any sort of future we want, including a primitivist one.

So we have to define a goal to give Progress purpose. But then what if one person’s answer differs from another’s? Is the word meaningless? Perhaps, but only in the sense that it serves as a proxy for other ideas, chief of which is technology.

Can we define technology more objectively? I think so. The materials, the location, the size and the complexity of technology all vary – everything apart from its purpose. It seems to me that humans, as biological organisms, have always created technology to help themselves survive and thrive. By thrive I mean our evolved goals that are closely connected to human survival, such as happiness, which acts as an imperfect psychological yardstick for motivating pro-survival behavior. So ‘technology’ has a primarily teleological definition – it’s the things we create to help us survive and thrive. The human organism is the philosophical ends; the technology exists as the means. This is probably a more meaningful definition to use for Progress too.

I call this way of thinking of technology as the means and humanity/biological life as the ends Technonaturalism. Whatever you’d like to call it, a life-as-ends, technology-as-means approach has a lot more nuance to it than either Luddism or Techno-Utopianism. It allows us to grapple with the purpose and direction of technology and Progress, and to compare one technology to another. It doesn’t reduce us to just discussing technology’s advancement or abatement, or generalizing about technology’s pros and cons, which is an essentially meaningless discussion to have.

Technonaturalism basically states that technology’s purpose is to help humanity survive and thrive, to lift life on Earth to new heights. The work of technology isn’t trivial amusement; it’s about putting life on other planets, protecting civilization from existential risk, freeing us from disease, and improving our cognition so we can live better, so each of us can lead longer lives that achieve more for ourselves and others. We might enjoy many of its by-products too, but this is what gives technology its real purpose. And for those of us working in technology, it’s what gives us and our work a real purpose.

And a clear purpose is what we’ll need for a Friendly AI, a chain of technological preservation, and a shot at navigating the Technological Singularity. Over the coming years we’re going to see some very disruptive technological changes in the nature of work, and the social pressures that come with that sort of disruption. We’ll face the gauntlet of Luddite outrage as well as the Techno-Utopian reaction against that movement. Let’s sidestep that polarization by infusing our technological direction with a worthy purpose. Our actions today decide whether AI will be our end, or just the beginning of humanity’s journey in the universe.

Image credit:
*https://www.flickr.com/photos/47738026@N05/17015996680
*https://en.wikipedia.org/wiki/File:Black_Hole_in_the_universe.jpg
*https://www.flickr.com/photos/torek/3955359108

My impressions of the 2017 International World Wide Web Conference

The International World Wide Web Conference is a get-together for the world’s web researchers, movers and shakers. It’s an entire week of nerding-out, freaking-out, awkward networking and brain-dumping on everything Web. As luck would have it, the conference was held in Australia this year, so I was able to attend and bask in the warm glow of endless PowerPoint presentations and dizzying chatter about the web.

Keynotes – Automated mail categorization, SKA astronomy data-processing, virtual reality and 3D web.

The three keynotes were all quite different and interesting in various ways. Yahoo Research’s Yoelle Maarek has been working on automated categorization of emails. Apparently this has been a bit controversial in the past, with people tending to revolt at anyone moving their cheese/things in their inbox without asking, but Yoelle’s team has learned from this and focused on unobtrusive categorization of just the machine-sent emails. Their algorithms have been mass-analyzing the emails going through their servers, marking common patterns as machine-mail (eg. your flight booking, newsletter subscriptions, whatever comes up in thousands of emails), and then placing them in auto-folders accordingly. Some form of this is already implemented in the top three webmail providers. On the Yahoo mobile app, you can also use a “notifications from people only” option to filter out some of these patterns. Yoelle appeared to take privacy really seriously while doing this processing, which is nice, although it means parsing common email templates for key fields is now easy for the big webmail providers, so I feel there is at least some potential for privacy issues to come up around this at some point. I also got the impression email is still dead for people-to-people chat, and we’re all going to be pressured into using annoying social media if we want to have friends.
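To make the idea concrete, here’s a toy sketch of template-based machine-mail categorization as I understood it. All the pattern rules, folder names and types below are my own invention, not Yahoo’s actual pipeline:

```typescript
// Toy sketch of template-based machine-mail categorization (my invention,
// not Yahoo's actual system). The core idea: mail matching a template seen
// across thousands of messages is probably machine-sent.

interface Email {
  sender: string;
  subject: string;
}

// Hypothetical template patterns mined from mass analysis of machine mail.
const MACHINE_PATTERNS: { folder: string; matches: (e: Email) => boolean }[] = [
  { folder: "Travel", matches: e => /flight confirmation|itinerary/i.test(e.subject) },
  { folder: "Newsletters", matches: e => /unsubscribe|weekly digest/i.test(e.subject) },
  { folder: "Receipts", matches: e => /receipt|invoice|your order/i.test(e.subject) },
];

// Machine mail goes to an auto-folder; people-mail stays in the inbox.
function categorize(email: Email): string {
  const hit = MACHINE_PATTERNS.find(p => p.matches(email));
  return hit ? hit.folder : "Inbox";
}

console.log(categorize({ sender: "noreply@airline.example", subject: "Your flight confirmation" })); // "Travel"
```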

Astrophysicist Melanie Johnston-Hollitt gave a nice presentation on the Square Kilometre Array (SKA) and its crazy-large data requirements. If you’ve never heard of the SKA, it’s basically a project to build the largest-ever radio telescope array in the deserts of Australia and South Africa. Great for studying the universe, not so good for your spare hard disk space. I didn’t catch the exact figures, but the data rate involved is basically more than the entire internet’s traffic (yeah, literally all of it), so they have to pull a bunch of tricks to squash/cut it down to something manageable. I thought this was kind of out-of-place for a keynote at this conference, but it’s undoubtedly awesome work that everyone loved hearing about.

Probably the highlight of the conference, however, was Mark Pesce (co-creator of VRML and a big name in the 3D-web/VR space). Mark is quite a dynamic speaker, and although he’s a little sensationalist at times, he’s good at painting the kinds of visionary pictures the conference would otherwise have lacked. He has been working on something called the Mixed Reality Service (mildly humorous acronym MRS). MRS is a bit like DNS for geographical spaces, to be used either in virtual worlds or in an augmented reality layer over the real world. I haven’t read the specs yet, but I got the impression it broadly works along the lines of sending a server a command like “give me all the web resources associated with the following geographical area”, and it passes you back some URIs registered in that space. As far as I gathered, the URIs could be anything normally found on the web – for example an HTML document, sound, video, image or data. There are obviously a lot of potential uses, for example AR glasses querying for information about nearby buildings (“what are the opening hours and services in that building over there”) or safety information (“is this work area safe for humans to walk through without certain protective equipment”).
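For what it’s worth, here’s roughly how I imagine a lookup working. Since I haven’t read the spec, every type and function name here is hypothetical – a sketch of the concept, not the actual MRS protocol:

```typescript
// Hypothetical sketch of an MRS-style lookup: a registry maps geographic
// areas to web URIs, and clients ask "what resources are registered here?"
// All names and shapes below are my guesses, not the real spec.

interface GeoArea {
  lat: number;          // degrees
  lon: number;          // degrees
  radiusMetres: number; // extent of the area
}

interface MrsRecord {
  uri: string;   // any web resource: HTML, video, image, data...
  area: GeoArea; // the physical space it's registered against
}

// A toy in-memory registry standing in for a real MRS server.
const registry: MrsRecord[] = [
  {
    uri: "https://example.com/building-42/opening-hours",
    area: { lat: -31.95, lon: 115.86, radiusMetres: 50 },
  },
];

// Crude equirectangular distance; fine at this scale for a demo.
function distanceMetres(a: GeoArea, b: GeoArea): number {
  const R = 6371000; // Earth radius in metres
  const toRad = Math.PI / 180;
  const dLat = (b.lat - a.lat) * toRad;
  const dLon = (b.lon - a.lon) * toRad * Math.cos(((a.lat + b.lat) / 2) * toRad);
  return R * Math.sqrt(dLat * dLat + dLon * dLon);
}

// "Give me all the web resources associated with this geographical area."
function lookup(query: GeoArea): string[] {
  return registry
    .filter(r => distanceMetres(query, r.area) <= query.radiusMetres + r.area.radiusMetres)
    .map(r => r.uri);
}

console.log(lookup({ lat: -31.9501, lon: 115.8601, radiusMetres: 100 }));
// -> ["https://example.com/building-42/opening-hours"]
```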

Mark pitched this project as part of a broader vision of knitting reality and the web together into a more ubiquitous physical-augmented-mixed-reality-web-type-thing. Mark suggested developers and researchers should get on board with what he feels is the start of a new internet (or even human) era. I’m a little skeptical, but with all the consumer VR and AR equipment coming onto the market right now, and the general enthusiasm in the air amongst people working in the area, it’s hard to deny that we’re in the middle of a potentially massive change. There was also mention of the challenges around how permissions and rights would work in a shared VR/AR space. I definitely want to think and probably write more on this topic in the future.

Other cool themes, people and talks

The Trust Factory set of presentations was also quite “big picture” and another major highlight of the conference. I found the W3C people I saw or talked to here universally friendly and awesome. They seem genuinely keen to engage with anyone they can to contribute to the future standards and directions of the web. I particularly liked the work they’re doing around RDF (an open metadata format/standard that will hopefully become broadly used as big datasets become more and more important).

The presentation by David Hyland-Wood (Ephox and Tinymce guy) on Verifiable Claims was extremely informative. Verifiable Claims seems really important, basically because it allows credentials (eg. basic identity, personal info and so on) to be passed around in a way that is both highly reliable (good for security) and protective of privacy (which the conference has reinforced is near non-existent on the web right now). David gave the partly metaphorical example of showing your ID to a bouncer at a nightclub, then using your credit card at the bar, and having your identity stolen because you handed your birthdate, photo and financial details to an untrusted party. VCs appear to solve this by allowing you to digitally say (in this example) “I’m 18”, with the club then querying a trusted third party (“is this guy really 18?”), who verifies the claim. This happens without you handing over all sorts of unnecessary details like your birthdate, and in a way the club can verify without just taking your word for it. I haven’t looked at the technical side of how it works, but with the right consumer and/or legislative pressure behind it, this could prevent billions of dollars in ID theft and a whole lot of intrusive invasions of privacy. David and his wife Bernadette (who also does a whole bunch of stuff in this space) were also really friendly and fascinating to talk to.
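Here’s a minimal sketch of that flow as I understood it. Real Verifiable Claims use public-key signatures and a standard data model; the shared-secret HMAC below is just a self-contained stand-in, and all the names are mine:

```typescript
// Toy illustration of the verifiable-claims flow: the club learns only
// "over 18: yes/no", vouched for by an issuer it trusts, and never sees a
// birthdate. A real implementation uses public-key signatures; the HMAC
// here is a stand-in so the example runs self-contained.
import { createHmac } from "node:crypto";

const ISSUER_SECRET = "demo-secret"; // stand-in for the issuer's signing key

interface AgeClaim {
  subject: string;  // an opaque ID, not a name or birthdate
  over18: boolean;
  signature: string;
}

// The trusted issuer (e.g. a government authority) signs the claim once.
function issueClaim(subject: string, over18: boolean): AgeClaim {
  const signature = createHmac("sha256", ISSUER_SECRET)
    .update(`${subject}:${over18}`)
    .digest("hex");
  return { subject, over18, signature };
}

// The club checks the issuer's signature instead of taking your word for it.
function verifyClaim(claim: AgeClaim): boolean {
  const expected = createHmac("sha256", ISSUER_SECRET)
    .update(`${claim.subject}:${claim.over18}`)
    .digest("hex");
  return expected === claim.signature && claim.over18;
}

const claim = issueClaim("patron-7f3a", true);
console.log(verifyClaim(claim)); // true: let them in, birthdate never disclosed
```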

I missed out on Bytes and Rights, but I did talk a little with someone from Electronic Frontiers Australia (EFA, the Australian version of the EFF) who was exactly like all the EFF/EFA stereotypes you might imagine, and appeared to be doing well engaging folks with their campaigns. Several guys from the consumer magazine Choice were also there. I was surprised how switched-on they were. They’ve recently developed a nifty little AR app called CluckAR that lets you point your camera at “free range” eggs and see some nice animations showing how free range they actually are, based on a fairly comprehensive and accurate dataset they’ve gathered on the topic. It sounds like they’ve got a lot more plans for this sort of web-based product-info system in the future.

There were way too many talks/papers to list, but a few I thought were quite nice included:

  • A really cool concept for a gamified legal simulation to teach skills in self-representation, trying to make up for the shocking and increasing shortage of legal aid. The presenter tells me he will post more details in the future on his game design/review blog. I think this is a really great project and I hope it draws some support and/or funding.
  • In India, if you can’t afford to train the large number of community workers needed in rural areas, you can always write some custom software that lets you efficiently organise conference calls on people’s feature phones instead. Very nice.
  • There’s work to automatically detect sockpuppets, suicidal tendencies, language associated with poor dietary habits and pretty much anything else you can think of on social media, with mixed results. The sockpuppets could be identified by fingerprinting without IPs, which is probably a good thing overall, even if it’s a bit scary.
  • Electronic Arts is working on modelling and quantifying optimal levels for dynamic difficulty adjustment (ie. make it easier if you fail lots) in games, as a sort of evidence-based blueprint to hand to game design teams (a toy sketch of the general idea follows this list). It kind of fulfils the stereotype of EA as a mechanistic mass-production game behemoth, but it was quite interesting and I’m pretty sure I’d be doing the same if I were them.
  • There was some cool work on people cooperatively doing 3D design from their mobile devices, though this is still early days and a little clunky.
  • AI, AR and language processing are at the point where you can build a graphical, immersive natural-language simulation for training workers in soft skills, for example in educational environments or for health-workers interacting with patients. It seems too expensive for most organisations to organise and localise just yet though.
  • One group was working on a cool way to break up Twitter echo-chambers by analyzing which tweets from opposing camps polarized groups would be most willing to listen to. Echo-chambers are a growing problem, so I thought this was great.
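As promised above, here’s a toy sketch of the general dynamic-difficulty idea – nudging difficulty toward a target success rate rather than jumping around. It’s my own illustration of the technique, not EA’s actual model, and the constants are made up:

```typescript
// Toy dynamic-difficulty loop (my illustration of the general technique,
// not EA's model): nudge difficulty toward a target player success rate.

const TARGET_WIN_RATE = 0.7; // hypothetical design goal
const STEP = 0.1;            // small adjustments feel less jarring

function adjustDifficulty(current: number, recentOutcomes: boolean[]): number {
  if (recentOutcomes.length === 0) return current;
  const winRate = recentOutcomes.filter(Boolean).length / recentOutcomes.length;
  if (winRate < TARGET_WIN_RATE) return Math.max(0, current - STEP); // struggling: ease off
  if (winRate > TARGET_WIN_RATE) return Math.min(1, current + STEP); // cruising: ramp up
  return current;
}

console.log(adjustDifficulty(0.5, [false, false, true])); // player struggling -> 0.4
```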

The papers are all available on the www2017 website, so I’ll let you dig up whatever you’re specifically interested in there.


General impression

The location of Perth was apparently considered a risky choice, but the conference got a record number of paper submissions. From what I’ve read of previous conferences there may have been slightly fewer big names in attendance (only slightly though), but one of the most isolated cities in the world was still able to pull in a lot of interesting people.

The conference was dominated mostly by the more technical topics of the web – data analytics, standards, privacy, security, search, data mining, semantic web and so on. If you’re a web designer, you’re probably not missing too much here (with a couple of interesting exceptions), but if you’re into the web in general there was enough info to drown in. I did find that many of the papers were fairly (unnecessarily?) heavy on maths and statistics – sometimes appropriately, sometimes not so much.

This statistical focus seemed a little culturally bounded. The Chinese and Japanese delegates tended to favor maths- and stats-heavy approaches (they also seem to have a lot of platforms for gathering massive data sets), whereas the Americans, Aussies and Europeans I saw tended to be more mixed between maths/stats and a more conceptual focus. There were quite a few interesting presentations from Indian delegates, us Aussies gave a very good account of ourselves, and there were presentations from a multitude of other countries, but the Americans and Chinese dominated the conference papers in raw numbers. It’s no surprise the US is extremely strong on tech, and China is clearly not messing around in throwing resources and researchers at becoming a world power in science and tech. Other countries often fielded some surprisingly great people and ideas though. Regardless of country, almost everyone was very polite and had something to contribute.

I would have liked to see something on the relationship between the end of web privacy, employment and freedom of speech, but I didn’t notice anything addressing this theme. Overall I’d also say the conference could have used a little more focus and discussion on the big-picture “vision” of the web, though there were enough highlights, like the VR/AR discussion, to keep things interesting.

AI-risk fail

At one point I got talking to a Google machine learning researcher for 10 minutes or so. Afterwards, being very loosely on the periphery of the Existential Risk / AI-risk crowd (I’m not convinced of the singularity, but I think AI-risk is worth thinking about), I realized I might have missed my chance to actually talk to someone in the field about AI-risk. Luckily (or so I thought at the time), I was late to a conference meal, and when I took one of the only remaining seats, I realized I was at a table with the same Google AI guy again. I casually tried to bring up AI-risk in a very neutral way, pretty much saying something like “how about those CSER people / AI-risk / that whole singularity thing?”, hoping to get a sense of how I could develop the conversation. The guy was pretty cool about it, and we had a brief back-and-forth with a few jokes, but I strongly felt something about the topic annoyed or upset him. His general response was along the lines of “oh, that? That’s kind of a philosophical question. I’m just trying to get my stuff working!”. He left shortly after (had somewhere to be, apparently), leaving me wondering whether I had looked like a singularity fan-boy or secretly part of some Luddite uprising. His colleague was really friendly and cool about everything, and suggested it’s a cliché thing for non-researchers to bring up, and a kind of strange thing to be bugged about when you’re just a smart dude struggling to get a PC to answer a simple question. I can totally understand that feeling, even though it’s an important topic in my opinion.

In hindsight I’d say I did more harm than good. I learned something about an AI-dev’s perspective, but I probably nudged AI-risk a bit towards “stupid stuff the general public asks about” for this guy. I’d say the lesson is not to casually chat about this topic with a researcher you just met unless you have loads of status (I was effectively a pleb at this conference), can demonstrate really good knowledge, and have spent a lot longer getting to know the person. I’d also suggest being generally less clumsy than I was. I thought I was prepared to discuss the topic properly, but I ended up coming across as a bit of an idiot.

Nerds = awkward; conferences = awkward; nerd conference = funny

I also found it darkly amusing observing the many awkward moments, for me and others, at the conference. People have a limited window to connect with some really impressive and knowledgeable people in their field, so there’s a whole lot of amusingly obvious physical positioning to be “incidentally” stopping at the same drinks table as this or that person. My impression is people also try not to get tangled up for too long with people who aren’t really the person they want to be talking to. I definitely got “blanked”/ignored a couple of times (I wasn’t really the most interesting person there, so that’s fine), and I was probably a bit tight with my own attention to cool people I’d normally be really pleased to be hanging out with. I’m really glad I stuck out the awkwardness, because I ended up talking with some awesome people, and I’d encourage anyone else feeling a bit lost at this sort of conference to do the same.

The language barriers made for even more awkward hilarity. Sometimes someone would simply have no idea what someone else had said. There were multiple instances of nodding and smiling and saying “yes yes” to lengthy questions that definitely required something more than a “yes” or “no” answer, often in front of large groups of observers. Everyone was really cool and friendly with each other about this, even near the end of the week when everyone was feeling exhausted and overwhelmed.

There was some rather amusing nerdy singing and dancing at the conference too (putting aside some much more skillful Aboriginal dancing at a dinner), which I won’t go into in case you ever get a chance to experience it first-hand. The next conference (apparently just called the Web Conference from now on) will be in Lyon, France. The French threw just as much tourism pitch into their presentation as us Aussies did for our conference, but were naturally really smooth in their delivery. I think the relaxed Aussie style worked really well though, and it seems like it made for a successful conference that combined a relaxed atmosphere with a buzz of new ideas.


Thoughts/corrections? Email me!

Please note I don’t use my actual name on this blog.

Balanced and unbalanced ethics

Moral philosophy has three main schools of thought. Roughly speaking these are: virtue ethics, which focuses on how to be a morally virtuous person; deontology, which focuses on deriving/discovering and following moral principles and rules; and consequentialism, which emphasizes looking at the outcomes of an action to determine its moral quality. Technically speaking, I lean towards the consequentialist camp; however, I feel a balanced and mature ethical approach to life only comes from considering all three schools of thought. I’ve tried to illustrate here the shortcomings of focusing on only one or two of them.

Diagram of morality including various intersections between a rule, consequence and virtue emphasis.

Those with a background in philosophy might note that much of my description reduces to consequences, but I wanted to illustrate in detail how the use of both virtue and deontological reasoning is essential to achieving morally good outcomes. Do you agree? Have other thoughts? Let me know by adding your comments!