Recently I’ve been working on some Technonaturalist philosophy. For me the heart of the task is getting at what really matters in human values, and separating that from the proxies we surround them with – our instrumental values and our abstract representations. It sounds like fairly dry stuff, and perhaps in parts it is, but it’s a salient point given the increasing importance of technology in our society, and the philosophies that develop as a result. What’s important about the philosophical part of the discussion? It affects our ability to develop a vision of technological development that is compatible with a future for the human species and our fellow species on this planet.
So details, details, details – I’ve been getting into some really interesting discussions with some fine people around this topic:
- On SlateStarCodex, a brief exchange contrasting consciousness-centred “transhumanity” with a more biological approach
- On the futurology subreddit, a much more detailed and (for me at least) extremely interesting discussion exploring consciousness versus biological humanity in moral philosophy. I argue there are many good reasons to think that consciousness, while a convenient concept, is just a proxy – it’s not in itself morally significant. Should be a fairly brief conversation, right? 🙂
There’s a fundamental difference between the map and the territory. I hope that, with a bit of deeper thought, we can avoid burning the territory to ashes in an effort to make a really nice-looking map.