A Challenge: Question 2

bambamramfan:

balioc:

bambamramfan:

balioc:

bambamramfan:

balioc:

bambamramfan:

bambamramfan:

My favorite thing about the ITT (Ideological Turing Test) is the way people answer question 2: “What is the true reason, deep down, that you believe what you believe? What piece of evidence, test, or line of reasoning would convince you that you’re wrong about your ideology?”

I’m certain every single respondent has thought deeply about what they believe. They’ve seen studies that back them up, thought about ethical principles, and seen the effects of oppression firsthand. But when asked “why do you really believe this? What swayed you so much that it would change your mind if it were contradicted?”… they often dissolve into vagueness and “everything shows I’m right!” “Everything,” of course, can never be disproven.

It’s a fascinating insight into how ideology works. Ideology isn’t formed by realizing our terminal values or by reading a study; it’s a much more osmotic experience than that. It involves quasi-believing things because so many other people we know believe them, and not questioning them *too* much because doing so is uncomfortable (both socially and to our own identity as a good person). Like Ra, ideology hates it when you try to pin down terms and reasons too precisely.

So. Let’s do that. Here is my challenge to any rebloggers: What is the true reason, deep down, that you believe what you believe? What evidence could convince you that you were wrong?

I’ll start.


This tumblr is arguing the humanist viewpoint, so I’ll focus on why I’m a humanist, and on what could sway me into other philosophies (specifically: parochial tribalism, anti-human universalism, or rights-based liberalism and materialism).

My terminal value is not special. It’s basically happy people, with an emphasis on complex and interesting lives and societies. My own personal goal is to find a button that increases human happiness no matter how much you push it, and to keep pushing it until it breaks.

It turns out that most of the things we think increase human happiness, such as having better living conditions or more money, don’t really. And even our attempts to build up economies so that people have more stuff are horribly complicated and unpredictable. I am distinctly unimpressed with a lot of the rationalist projects in this regard, and I suspect they will spend decades trying to find ways to improve the happiness of others with material interventions, and rarely feel they have had much success. There will still be misery everywhere, even after billions are spent. (If rationalist interventions started making a measurable and sizable impact on the amount of misery in the world, that would be evidence to change my view.) (Yes, I saw Scott’s chart about malaria interventions. I approve of malaria interventions. And the euphoria in the comments only emphasized to me how many rationalists are insecure about whether this project of theirs is having any results.)

Additionally, a lot of the rules we set down about how we should treat each other are supposed to increase human happiness, but they mostly make humans miserable as they fight over the rules, and the rules are enforced haphazardly, with some receiving the extreme brunt of enforcement and others being afraid there isn’t enough enforcement. That is why I am skeptical of rights-based liberalism, and will continue to be until it is shown to be a better social technology than primitive tribalism.

The button that does work, the one that in my experience does make people reliably feel better, is listening to them and one-on-one interaction. Humans are social animals, and each human has a unique individual experience. Respecting that individual complexity, and giving people social validation, seems the most reliable way to increase happiness, even if only on a very small scale.

If listening and validation are shown to be, in the long run, net-negative in happiness (if, for instance, they operate like a drug that gives you a high you then grow tolerant to), then I would be skeptical of that button.

If there is no button that can reliably increase human happiness, well, that would say a great deal about the chaotic nature of the human condition, which fundamentally validates my anti-categorical humanism.

But basically… if any button on the human psyche is shown to have reliable results - peer-reviewed and consistently replicated - about how to affect people and make them happy, I would throw my philosophy out the window and pursue that. My current stance is a result of failure to find anything like that.

Now, humanism might just be speciesist, and it’s possible I don’t give enough credit to non-humans and dehumanized subjects. By appreciating complexity, I may be favoring people who’ve had interesting lives over people who have been so beaten down by the system that they will always be boring to me. This is a real risk, and it’s why I dabble in universalism elsewhere. But for now, my interactions with humans have shown that no matter how degraded they have been by society, they’re still as intelligent and social as the richest person I’ve met once you just listen to them for 10 minutes. If this were shown not to be the case statistically, I’d feel guilty about the inherent elitism of humanism, and I’d focus more on a philosophy that tries to exalt the most degraded and inhuman subjects.

Similarly for species: there seems to be a large gap in cognitive quality between humans and any other creature. If some species existed that was just somewhat less intelligent than humans but still identifiable as having a subjective experience in there, I’d have to look into a much more gradient-focused definition of sentience and moral agency.

@jadagul’s statement

I have a basically unshakeable conviction that people are mostly decent, and will treat most people well, and would like me to be happy. And following that belief has served me well.
If I ever lost that belief, I would have to seriously rethink my ethics–my current position of trust-by-default would make a lot less sense, and I would probably find it much harder to sustain universal love and acceptance.

from their (he? she? I don’t know) post here reminded me of the above challenge. More people should do it!

“But basically… if any button on the human psyche is shown to have reliable results - peer-reviewed and consistently replicated - about how to affect people and make them happy, I would throw my philosophy out the window and pursue that. My current stance is a result of failure to find anything like that.”

…I think this means that you have to have an answer to “why not wireheading?”

Yeah, I know, it’s the most sophomoric of objections.  But wireheading is a thing, and it does reliably make people happy without them developing a tolerance.  The first-order problem with it for philosophers, in cynical terms, is that from the outside it creates something that looks less like our normal models of happiness than like drug addiction.  If you have non-hedonic terminal values, it’s very easy to explain why it’s not good enough.  But you seem to be working at a project where any conceivable result is going to look very weird from the standpoint of a normal person with normal assumptions.  So, uh, what’s wrong with the answer you’ve got?

I’d want to see the literature on existing wireheading technology as it is now. I’m not really aware of it, and how it works would matter a lot.

The perspective I mention is more to say “lots of people say their idea makes humanity better off, but it then does not hold up under empirical verification on a consistent basis.” This is a frequent enough occurrence that it’s a good default stance toward a lot of social engineering and direct neurochemical futzing. Wearing the humanist mask, I find this fact existentially interesting.

Faced with the actual wireheading dilemma, I would probably say that the complexity of life has to exist to be happy. If wireheading makes you a simpleton that’s no more interesting than a lab rat, intellectually and emotionally speaking… well, I sorta want you to be happy, but only as much as I want lab rats to be happy.

If wireheading just makes you euphoric while you still live a full life, then yeah, go ahead; why ever not?

“…the complexity of life has to exist to be happy.”

This seems like an obvious thing to think, like a position sufficiently normal that you can just toss it off – that’s not a dig at you, it seems that way to me too, in a very visceral way – but in fact it’s a dodge.  It is the fundamental dodge that underlies all non-crazypants hedonic philosophies. 

*****

We have a strong instinct that happiness is a sort of warm emotional blanket that ought to cover life-in-general, that we ought to be basically happy most of the time unless something is wrong.  I don’t know whether that instinct is baked into the human psyche, or whether it arises from living in a modern civilized society where our basic needs are all easily met, or what.  There are enough conflicting arguments on that score to go around. 

But from a brain-standpoint, from the standpoint of evo-psych, it is of course total nonsense.  Happiness is a mental prodding device, like pain and hunger and fear, that was “designed” to guide us through complex situations by activating only under specific, limited conditions.  It’s the “ding!” that tells us that we did something right in a not-to-be-taken-for-granted way, and that we should try to do that thing again.

With resources and engineering skill, you can structure a life to maximize happiness, in the same way that you could structure a life to maximize hunger or fear or pain.  But it’s not going to look very much like normal human existence.  One way or another, it’s going to be totally built around exploits (in the cheating-at-video-games sense); it’s going to rely on superstimuli and brain glitches to keep normality at bay, because normality is the hedonic treadmill.  It’s going to be something very much like wireheading, even if it isn’t wireheading exactly.  It’s going to offend your aesthetic sensibilities, it’s going to look and feel wrong, because the lessons we learned about what looks right are all rooted in methods of existence that rely on happiness being a sometimes food. 

*****

OK, having said all that: I am not at all convinced that I believe it.  But it’s certainly a possibility, in the least convenient of all possible worlds.  Building your system of ethics on a feature of the human brain means that you have to be prepared for neurology to work in a way that you wish it wouldn’t.

Or you can just define “happiness” in some wonky way that doesn’t basically map to a human brainstate.  That’s the standard move amongst mainstream utilitarian philosophers, as far as I can tell.  But it is what we call a lie, and leads to some truly unconvincing contortions as the philosophers in question try to hide the fact that they’re basically advocating for their own aesthetic preferences about life to be put into practice. 

If I’m reading you right, you’re saying happiness is more like an optimization mechanism than a stable state that you can be in or out of.

I agree. I probably misphrased myself originally. As an empirical matter, there seemed to be no easy way to “just keep people happy”. That is probably related to your explanation above.

But I tried that first (and read about people trying it a lot), and the conclusion seems to be that it is really incredibly hard, and in no way as simple as “if my guy wins elections, her policies will make this happiness happen.”

I think I didn’t explain myself very clearly, for which I apologize.

The point of the wireheading example is: there is an easy way to “just keep people happy.”  All it takes is a little piece of metal, a generator, and some brain surgery.  We have everything we need to do it right now. 

Of course, the result you get out of that methodology is icky.  It is not at all what you want!  (Probably.  You might be some kind of ethical werewolf or something.  Or maybe you’ve done enough fiddling with the aesthetics that you can appreciate the gods on their lotus thrones.) 

This is not because wireheading gives you some kind of fake not-good-enough happiness.  This is because happiness isn’t actually the thing you want.  As you come closer and closer to maximizing happiness, through any methodology at all, you’re going to converge on a result that will bother you in the same way that wireheading bothers you.  A sufficiently reliable happiness engine will inevitably produce imbecile simplicity, because the dribs-and-drabs scarcity of happiness – and the consequent scramble to attain it – is one of the key factors in the kind of appealingly complicated life that you’ve learned to value.

(Fictional evidence isn’t trustworthy, but for sheer conceptual punchiness on this exact topic as it addresses your interests, I think it’s useful to turn to David Foster Wallace.  Infinite Jest has, as its central MacGuffin, something that is basically wireheading-in-videotape-form: a short movie that produces addictive euphoria when watched.  And the content of that movie is perfect, infinitely-accessible emotional validation of the viewer.)

There are lots of ways to address this problem, lots of alternate philosophical paths you can take.  But so long as you think you’re aiming to maximize a single value, you’re just going to push farther and farther into maximized-value territory until you suddenly discover the Monster at the End of the Book and freak out.

No, I think I got you. (Is anyone else even following this argument anymore, besides @cloakofshadow?)

If the wireheading just eventually leads you to being a blissed-out lab rat, then I stand by my “complexity has diminished” caveat. It doesn’t really matter what got you there; what matters is that at that point you are more like a thing that is dead than a human that is alive, and I’m only so glad you’re happy.

But to bite the bullet more: I think we are just using different measures of happiness. I want people to be not afraid, not anxious, and not in pain. I think that human oppression is caused not by greed but by fear, and that if you remove fear, people’s natural generosity and altruism make them treat each other well.

If wireheading doesn’t have that effect because it doesn’t remove fear and anxiety, then it’s not the happiness my goal is based around.

If wireheading doesn’t have that effect because even without fear and anxiety we are still selfish dicks who oppress each other, then I have to substantially rethink my understanding of the world. As I said, I’d need to see current literature.

If wireheading does have that effect, then I am all in favor of it and sign me up now. I don’t want to be afraid and anxious anymore. 

I don’t think that’s the case, though.

I mean…at the very least, it’s easy enough to posit a wireheading technology that has that effect. 

(My understanding is that real-life wireheaders at our tech level, who generally are not just allowed to keep the machine on at the top setting until they die, are afraid of and anxious about exactly one thing: that their access to the stimulus will be taken away.  But of course that’s usually not what people mean when they bring up the philosophical hypothetical.  And it’s not hard to rig the setup such that this is not a worry any more than starvation is a worry for us.)

And what do you have then?  You’re not anxious or afraid, because you’re a blissed-out lab rat.  Turns out that’s what not being anxious or afraid, in a serious sustainable way, gets you.  This is not a circle that can be squared.  The kind of human life to which you instinctively assign value runs on an engine of fear and pain. 

*****

To reiterate: I am not actually convinced that this is true.  But I think it’s a real possibility that you have to take into account before you can commit to hedonic consequentialism in a principled way.