For those interested in artificial intelligence and existential risk, I posted these recently:
- Some thoughts on a friendly AGI
- If we knew about all the ways an Intelligence Explosion could go wrong, would we be able to avoid it?
- Transhumanists, if you had to choose between Hugo De Garis’ “Cosmists” and “Terrans”, which path would you take? (reddit discussion)
I also read a really interesting piece on politics over on slatestarcodex.com. If you're ever banging your head against a wall wondering why people are so stupid, this is a pretty good read on the topic! I also posted a philosophical skeleton version of my moral argument, and got reddit-hated pretty hard.
- A theory of moral truth (Arbor Vitae)
- I Can Tolerate Anything Except The Outgroup (from Slate Star Codex)