femmenietzsche:

balioc:

femmenietzsche:

Thinkers from Confucius to the Stoics to the early modern natural rights philosophers all tried to derive the Good from human nature, since human nature was an element in some divine plan or purpose and thus you could infer basic ethical principles from it. Which is sensible enough, given the presuppositions. It raises some difficulties – I could never manage to get through the Meditations of Marcus Aurelius, since he seemed to entirely elide the problem of what Right Action means in a universe where all actions ultimately bend towards the Good, and working that out seems foundational to the Stoic project, so without it you’re left with nothing much at all – but they’re generally superable if you’re willing to apply modern sensibilities. Even if you don’t believe in an ultimate Good or a divine plan, it’s still easy enough to use human nature as your foundational text when thinking about practical ethics. “Nobody likes starving to death, so let’s try to make that happen less” is entirely reasonable even for the moral non-realist, assuming basic empathy.

So it’s not actually as problematic to kick away the ladder of religion once you’ve climbed it and reached the quasi-utilitarian heights of modernity as some would have you believe. But things become trickier when human nature is no longer held constant. Genetic engineering, cyborg brain implants, and AI all raise ethical questions that can’t be answered simply by asking what sort of society works best for humans. You’re now outside those bounds. You can, in a Yudkowsky style, argue that the development of post-humanity should be approached with the aim of further refining our current values, which is fine as far as it goes, although it’s almost impossible to imagine that our current values don’t radically underdetermine the possible paths we could take. And human values are inconsistent, so you can’t avoid judgment calls when deciding between them. But it’s perhaps at least not totally wrong.

It becomes even trickier if you come to the conclusion that (some) human values are fundamentally incompatible with the nature of the universe. Not just in terms of their correctness, but in terms of their implications for the long-term survival of humanity as well. Perhaps not all values can be reworked in the way that I described natural rights being reworked above. Some values may just be Bad, assuming you value consistency and/or sustainability. And if it’s impossible to be a human being who is both fully informed about the nature of the world and happy about it, then the same could well be true of posthuman beings, at least if there is no radical rupture between our values and theirs. And if those values rest on beliefs that are that incorrect, surely they have to go sooner or later. “That which can be destroyed by the truth should be.” (I’m obviously treating truth as some sort of meta-value which can constrain other values.)

Now you could argue that the wholesale removal of basic human ethical beliefs is still compatible with Yudkowsky’s idea of refining human values, and you’d be right, but at the very least the emphasis has changed substantially enough to warrant notice. The arbiter of what post-humanity should be is less humanity today and more the brute nature of reality. But, importantly, reality probably underdetermines the space of workable ethical systems even more than human nature does, so if you’re tossing out huge chunks of our intuitions, that doesn’t mean there will be an obvious replacement waiting in the wings. Far from it. And even if you’re only throwing out, say, our intuitions about population ethics, that could open up a gap big enough to radically alter the future development of all other values. Yudkowsky himself is obviously aware that even a small missing piece of the moral puzzle can lead to dystopia. I don’t know if all the alternative futures are awful, but I’d bet a bunch of them would consider each other to be hellish dystopias, at the very least.

Still, the laws of physics plus correct philosophy plus the fragments of human nature able to survive the scourging of a truthful fire are much more of a grounding for the future than nothing at all.

But things become trickier when human nature is no longer held constant. Genetic engineering, cyborg brain implants, and AI all raise ethical questions that can’t be answered simply by asking what sort of society works best for humans. You’re now outside those bounds.

Highlighting this point, because it’s important…

…and reminding everyone that we don’t have to be at the point of wrestling with transhumanist technologies in order to be facing down this exact kind of problem.  We are facing it down right now.

To a certain extent, political ideologies and philosophies-of-morality are designed to (a) perceive the boundaries of human nature, and (b) react to those perceptions by creating societies that are good for humans-as-they-are.  But only to a certain extent.  Political ideologies and philosophies-of-morality are also designed to change people, to fit people’s utility functions to circumstances, to bring people’s values in line.  

Tradcons don’t actually want to force women, kicking and screaming, into the kitchen and the nursery; they mostly want to teach women to value being helpmeets and mothers.  Identitarian leftists want to teach people to be less repulsed by various kinds of alien-ness, and simultaneously to be more repulsed by racism and bigotry.  Paul Graham wants people to want to keep their identities small.  It’s all about values-shifting, and it’s not entirely helpful to respond with “people aren’t always like that,” because the goal is to make people be like that.

The term “culture war” is singularly apt.  

Indeed, though as with various failed utopian schemes, there are limits to what can be done (if not to what people wish to be done).

Oh, yeah, absolutely.  Changing people’s values is super hard, and when you try, it rarely goes the way you want it to go.

But people still spend a lot of time and effort trying.