January 2017

raggedjackscarlet:

was anyone else honestly disturbed by the Passengers discourse?

I just can’t fathom any human being with a scrap of empathy saying “well if you were a GOOD PERSON your response to a lifetime of solitary confinement would be to INSTANTLY become a HYPER-IDEALIZED FATHER FIGURE with ZERO EMOTIONAL NEEDS”

So I should start by noting: I haven’t actually seen Passengers, or encountered very much of the discourse to which you refer.  I know the basic plot concept from Internet spoilers, and I’ve seen a couple of cutesy Tumblr fanfics about the Chris Pratt character making some different choices.  But if there are relevant subtleties that can be drawn out of the text itself, or facets of the discourse that feature people being especially callous, I’m totally missing them.

That said –

It sounds as though, if you read it along the grain, Passengers is not a moral-dilemma story so much as a temptation story.  And a very powerful one.  (Unusually so, given that these days we don’t have a lot of cultural energy invested in the concept of resisting temptation.) 

“You are in a monstrous, terrible, mind-and-soul-destroying situation.  For reasons that are not your fault, you have been sentenced to lifelong solitary confinement.  Your only hope of ever knowing any human contact…is to condemn another innocent to your situation so that at least you can have a cellmate.  At best, this will be awful for her, a cruelty that she didn’t do anything to deserve – a life-with-two is better than a life-with-one, but it’s still not what anyone wants, it’s still the destruction of all her hopes and dreams and joys for the sake of your own sanity.  Will you inflict that on someone else?  How much will you suppress your own needs for the sake of others?” 

There is clearly a correct answer here, a heroic answer, according to Normal-Person Conventional Morality.  Heroes are un-selfish enough that they’re willing to suffer harm to save others.  That’s pretty much the choice being presented here. 

Given that, it’s unfair to call the Chris Pratt character a monster.  He may have failed his moral test, but it was an astonishingly difficult one, and there’s an awful lot of room between “hero” and “monster.” 

It is also unfair to castigate people for fantasizing about the version of the story where he is that heroic, and manages to pull off his heroism with style and grace.  The Hyper-Idealized Father Figure version of the character is a totally legit moral exemplar, and I’m not opposed to people telling admiring stories about moral exemplars. 

One way or another, it seems like a bad plan to insist that this Extreme Space Trolley Problem is a direct metaphor for real-life much-smaller-bore conflicts of personal interest.  

A Challenge: Question 2

bambamramfan:

balioc:

bambamramfan:

balioc:

bambamramfan:

balioc:

bambamramfan:

bambamramfan:

My favorite thing about the ITT is the way people answer question 2: “What is the true reason, deep down, that you believe what you believe? What piece of evidence, test, or line of reasoning would convince you that you’re wrong about your ideology?”

I’m certain every single respondent has thought deeply about what they believe. They’ve seen studies that back them up, thought about ethical principles, and seen the effects of oppression first hand. But when asked “why do you really believe this? what swayed you so much that it would change your mind if it was contradicted”… they often dissolve into vagueness and “everything shows I’m right!” Everything, of course, can never be disproven.

It’s a fascinating insight into how ideology works. Ideology isn’t formed by realizing our terminal values, or reading a study, it’s a much more osmotic experience than that. It involves quasi-believing things because so many other people we know believe them, and not questioning them *too* much because doing so is uncomfortable (both socially, and to our own identity as a good person.) Like Ra, ideology hates it when you try to pin down terms and reasons too precisely.

So. Let’s do that. Here is my challenge to any rebloggers: What is the true reason deep down that you believe what you believe? What evidence could convince you that you were wrong?

I’ll start.


This tumblr is arguing the humanist viewpoint, so I’ll focus on why I’m a humanist, and what could sway me into other philosophies (specifically: parochial tribalism, anti-human universalism, or rights based liberalism and materialism.)

My terminal value is not special. It’s basically happy people, with an emphasis on complex and interesting lives and societies. My own personal goal is to find a button that increases human happiness no matter how much you push it, and to keep pushing it until it breaks.

It turns out that most of the things that we think increase human happiness, such as having better living conditions or more money, don’t really. And even our attempts to build up economies so that people have more stuff are horribly complicated and unpredictable. I am distinctly unimpressed with a lot of the rationalist projects in this regard, and I suspect they will spend decades trying to find ways to improve the happiness of others with material interventions, and rarely feel they have had much success. There will still be misery everywhere, even after billions are spent. (If rationalist interventions started making a measurable and sizable impact in the amount of misery in the world, that would be evidence to change my view.) (Yes I saw Scott’s chart about malaria interventions. I approve of malaria interventions. And the euphoria in the comments only emphasized to me how many rationalists are insecure about whether this project of theirs is having any results.)

Additionally, a lot of the rules we set down about how we should treat each other should increase human happiness, but mostly make humans miserable as they fight over the rules, and the rules are enforced haphazardly, with some receiving the extreme brunt of enforcement and others being afraid there isn’t enough enforcement. Which is why I am skeptical of rights based liberalism, and will continue to be until it is shown to be a better social technology than primitive tribalism.

The button that does work, that in my experience does make people reliably feel better, is listening to them and one-on-one interaction. Humans are social animals, and humans have very unique individual experiences. Respecting that individual complexity, and giving them social validation, seems the most reliable way to increase happiness, even if only on a very small scale.

If listening and validation are shown to be in the long run net-negative in happiness (if, for instance, they operate like a drug that gives you a high that you then grow tolerant to) then I would be skeptical of that button.
If there is no button that can reliably increase human happiness, well that would say a great deal about the chaotic nature of the human condition, which fundamentally validates my anti-categorical humanism.

But basically… if any button on the human psyche is shown to have reliable results - peer reviewed and consistently replicated - about how to affect people and make them happy, I would throw my philosophy out the window and pursue that. My current stance is a result of failure to find anything like that.

Now, humanism might just be speciesist, and it’s possible I don’t give enough credit to non-humans and dehumanized subjects. By appreciating complexity, I may be favoring people who’ve had interesting lives over people who have been so beaten down by the system that they will always be boring to me. This is a real risk and why I dabble in universalism elsewhere. But for now, my interactions with any human have shown that no matter how degraded they have been by society, they’re still as intelligent and social as the richest person I’ve met when you just listen to them for 10 minutes. If this were shown not to be the case statistically, I’d feel guilty about the inherent elitism of humanism, and I’d focus more on a philosophy that tries to exalt the most degraded and inhuman subjects.

Similarly for species, there seems a large gap in cognitive quality between humans and any other creature. If some species existed that were just somewhat less intelligent than humans but still identifiable as having a subjective experience in there, I’d have to look into a much more gradient focused definition of sentience and moral agency.

@jadagul’s statement

I have a basically unshakeable conviction that people are mostly decent, and will treat most people well, and would like me to be happy. And following that belief has served me well.
If I ever lost that belief, I would have to seriously rethink my ethics–my current position of trust-by-default would make a lot less sense, and I would probably find it much harder to sustain universal love and acceptance.

from their (he? she? I don’t know) post here, reminded me of the above challenge. More people should do it!

“But basically… if any button on the human psyche is shown to have reliable results - peer reviewed and consistently replicated - about how to affect people and make them happy, I would throw my philosophy out the window and pursue that. My current stance is a result of failure to find anything like that.”

…I think this means that you have to have an answer to “why not wireheading?”

Yeah, I know, it’s the most sophomoric of objections.  But wireheading is a thing, and it does reliably make people happy without them developing a tolerance.  The first-order problem with it for philosophers, in cynical terms, is that from the outside it creates something that looks less like our normal models of happiness than like drug addiction.  If you have non-hedonic terminal values, it’s very easy to explain why it’s not good enough.  But you seem to be working at a project where any conceivable result is going to look very weird from the standpoint of a normal person with normal assumptions.  So, uh, what’s wrong with the answer you’ve got?

I’d want to see the literature on existing wireheading technology as it is now. I’m not really aware of it, and how it works would matter a lot.

The perspective I mention is more to say “lots of people say their idea makes humanity better off, but then it does not hold up under empirical verification on a consistent basis.” This is a frequent enough occurrence that it’s a good default stance towards a lot of social engineering and direct neurochemical futzing. Wearing the humanist mask, I find this fact existentially interesting.

Faced with the actual wireheading dilemma, I would probably say that the complexity of life has to exist to be happy. If wireheading makes you a simpleton that’s no more interesting than a labrat, intellectually and emotionally speaking… well I sorta want you to be happy, but only as much as I want labrats to be happy.

If wireheading just makes you euphoric while still living a full life, then yeah, go ahead, why ever not.

“…the complexity of life has to exist to be happy.”

This seems like an obvious thing to think, like a position sufficiently normal that you can just toss it off – that’s not a dig at you, it seems that way to me too, in a very visceral way – but in fact it’s a dodge.  It is the fundamental dodge that underlies all non-crazypants hedonic philosophies. 

*****

We have a strong instinct that happiness is a sort of warm emotional blanket that ought to cover life-in-general, that we ought to be basically happy most of the time unless something is wrong.  I don’t know whether that instinct is baked into the human psyche, or whether it arises from living in a modern civilized society where our basic needs are all easily met, or what.  There are enough conflicting arguments on that score to go around. 

But from a brain-standpoint, from the standpoint of evo-psych, it is of course total nonsense.  Happiness is a mental prodding device, like pain and hunger and fear, that was “designed” to guide us through complex situations by activating in specific limited situations.  It’s the “ding!” that tells us that we did something right in a not-to-be-taken-for-granted way, and that we should try to do that thing again. 

With resources and engineering skill, you can structure a life to maximize happiness, in the same way that you could structure a life to maximize hunger or fear or pain.  But it’s not going to look very much like normal human existence.  One way or another, it’s going to be totally built around exploits (in the cheating-at-video-games sense); it’s going to rely on superstimuli and brain glitches to keep normality at bay, because normality is the hedonic treadmill.  It’s going to be something very much like wireheading, even if it isn’t wireheading exactly.  It’s going to offend your aesthetic sensibilities, it’s going to look and feel wrong, because the lessons we learned about what looks right are all rooted in methods of existence that rely on happiness being a sometimes food. 

*****

OK, having said all that: I am not at all convinced that I believe it.  But it’s certainly a possibility, in the least convenient of all possible worlds.  Building your system of ethics on a feature of the human brain means that you have to be prepared for neurology to work in a way that you wish it wouldn’t.

Or you can just define “happiness” in some wonky way that doesn’t basically map to a human brainstate.  That’s the standard move amongst mainstream utilitarian philosophers, as far as I can tell.  But it is what we call a lie, and leads to some truly unconvincing contortions as the philosophers in question try to hide the fact that they’re basically advocating for their own aesthetic preferences about life to be put into practice. 

If I’m reading you right, you’re saying happiness is more like an optimization mechanism than a stable state itself that you can be in or out of.

I agree. I probably misphrased myself originally. As an empirical matter, there seemed to be no easy way to “just keep people happy”. That is probably related to your explanation above.

But I tried that first (and read about people trying it a lot), and the conclusion seems to be that it is really incredibly hard, nowhere more so than with “if my guy wins elections her policies will make this happiness happen”.

I think I didn’t explain myself very clearly, for which I apologize.

The point of the wireheading example is: there is an easy way to “just keep people happy.”  All it takes is a little piece of metal, a generator, and some brain surgery.  We have everything we need to do it right now. 

Of course, the result you get out of that methodology is icky.  It is not at all what you want!  (Probably.  You might be some kind of ethical werewolf or something.  Or maybe you’ve done enough fiddling with the aesthetics that you can appreciate the gods on their lotus thrones.) 

This is not because wireheading gives you some kind of fake not-good-enough happiness.  This is because happiness isn’t actually the thing you want.  As you come closer and closer to maximizing happiness, through any methodology at all, you’re going to converge on a result that will bother you in the same way that wireheading bothers you.  A sufficiently-reliable happiness engine will inevitably produce imbecile simplicity, because the dribs-and-drabs scarcity of happiness – and the consequent scramble to attain it – is one of the key factors in the kind of appealingly complicated life that you’ve learned to value.

(Fictional evidence isn’t trustworthy, but for sheer conceptual punchiness on this exact topic as it addresses your interests, I think it’s useful to turn to David Foster Wallace.  Infinite Jest has, as its central MacGuffin, something that is basically wireheading-in-videotape-form: a short movie that produces addictive euphoria when watched.  And the content of that movie is perfect, infinitely-accessible emotional validation of the viewer.) 

There are lots of ways to address this problem, lots of alternate philosophical paths you can take.  But so long as you think you’re aiming to maximize a single value, you’re just going to push farther and farther into maximized-value territory until you suddenly discover the Monster at the End of the Book and freak out.

No, I think I got you. (Is anyone else even following this argument anymore, besides @cloakofshadow ?)

If the wireheading just eventually leads you to being a blissed out lab rat, then I stand by my “complexity has diminished” caveat. It doesn’t really matter what got you there, what matters is at that point you are more like a thing that is dead than a human that is alive, and I’m only so glad you’re happy.

But to bite the bullet more, I think we are just using different measures of happiness. I want people to be not afraid, not anxious, and not in pain. I think that human oppression is caused not by greed, but by fear, and if you remove fear, people’s natural generosity and altruism makes them treat each other well.

If wireheading doesn’t have that effect because it doesn’t remove fear and anxiety, then it’s not the happiness my goal is based around.

If wireheading doesn’t have that effect because even without fear and anxiety we are still selfish dicks who oppress each other, then I have to substantially rethink my understanding of the world. As I said, I’d need to see current literature.

If wireheading does have that effect, then I am all in favor of it and sign me up now. I don’t want to be afraid and anxious anymore. 

I don’t think that’s the case though.

I mean…at the very least, it’s easy enough to posit a wireheading technology that has that effect. 

(My understanding is that real-life wireheaders at our tech level, who generally are not just allowed to keep the machine on at the top setting until they die, are afraid and anxious of exactly one thing: that their access to the stimulus will be taken away.  But of course that’s usually not what people mean when they bring up the philosophical hypothetical.  And it’s not hard to rig the setup such that this is not a worry any more than starvation is a worry for us.)

And what do you have then?  You’re not anxious or afraid, because you’re a blissed-out lab rat.  Turns out that’s what not being anxious or afraid, in a serious sustainable way, gets you.  This is not a circle that can be squared.  The kind of human life to which you instinctively assign value runs on an engine of fear and pain. 

*****

To reiterate: I am not actually convinced that this is true.  But I think it’s a real possibility that you have to take into account before you can commit to hedonic consequentialism in a principled way.

A Challenge: Question 2

bambamramfan:

balioc:

bambamramfan:

balioc:

bambamramfan:

bambamramfan:

My favorite thing about the ITT is the way people answer question 2 “ What is the true reason, deep down, that you believe what you believe? What piece of evidence, test, or line of reasoning would convince you that you’re wrong about your ideology?”

I’m certain every single respondent has thought deeply about what they believe. They’ve seen studies that back them up, thought about ethical principles, and seen the effects of oppression first hand. But when asked “why do you really believe this? what swayed you so much that it would change your mind if it was contradicted”… they often dissolve into vagueness and “everything shows I’m right!” Everything, of course, can never be disproven.

It’s a fascinating insight into how ideology works. Ideology isn’t formed by realizing our terminal values, or reading a study, it’s a much more osmotic experience than that. It involves quasi-believing things because so many other people we know believe them, and not questioning them *too* much because doing so is uncomfortable (both socially, and to our own identity as a good person.) Like Ra, ideology hates it when you try to pin down terms and reasons too precisely.

So. Let’s do that. Here is my challenge to any rebloggers: What is the true reason deep down that you believe what you believe? What evidence could convince you that you were wrong?

I’ll start.


This tumblr is arguing the humanist viewpoint, so I’ll focus on why I’m a humanist, and what could sway me into other philosophies (specifically: parochial tribalism, anti-human universalism, or rights based liberalism and materialism.)

My terminal value is not special. It’s basically happy people, with an emphasis on complex and interesting lives and societies. My own personal goal is to find a button that increases human happiness no matter how much you push it, and to keep pushing it until it breaks.

It turns out that most of the things that we think increase human happiness, such as having better living conditions or more money, don’t really. And even our attempts to build up economies so that people have more stuff, are horribly complicated and unpredictable. I am distinctly unimpressed with a lot of the rationalist projects in this regard, and I suspect they will spend decades trying to find ways to improve the happiness of others with material interventions, and rarely feel they have made much success. There will still be misery everywhere, even after billions are spent. (If rationalist interventions started making a measurable and sizable impact in the amount of misery in the world, that would be evidence to change my view.) (Yes I saw Scott’s chart about malaria interventions. I approve of malaria interventions. And the euphoria in the comments only emphasized to me how many rationalists are insecure about whether this project of theirs is having any results.)

Additionally, a lot of the rules we set down about how we should treat each other should increase human happiness, but mostly make humans miserable as they fight over the rules, and the rules are enforced haphazardly, with some receiving the extreme brunt of enforcement and others being afraid there isn’t enough enforcement. Which is why I am skeptical of rights based liberalism, and will continue to be until it is shown to be a better social technology than primitive tribalism.

The button that does work, that in my experience does make people reliably feel better, is listening to them and one-on-one interaction. Humans are social animals, and humans have very unique individual experiences. Respecting that individual complexity, and giving them social validation, seems the most reliably way to increase happiness, even if only on a very small scale.

If listening and validation are shown to be in the long run net-negative in happiness (if for instance, they operate like a drug that gives you a high that you then grow tolerant for) then I would be skeptical of that button.
If there is no button that can reliably increase human happiness, well that would say a great deal about the chaotic nature of the human condition, which fundamentally validates my anti-categorical humanism.

But basically… if any button on the human psyche is shown to have reliable results - peer reviewed and consistently replicated - about how to affect people and make them happy, I would throw my philosophy out the window and pursue that. My current stance is a result of failure to find anything like that.

Now, humanism might just be speciesist, and it’s possible I don’t give enough credit to non-humans and dehumanized subjects. By appreciating complexity, I may be favoring people who’ve had interesting lives over people who have been so beaten down by the system that they will always be boring to me. This is a real risk and why I dabble in universalism elsewhere. But for now, my interactions with any human have shown that no matter how degraded they have been by society, they’re still as intelligent and social as the richest person I’ve met when you just listen to them for 10 minutes. If this were shown not to be the case statistically, I’d feel guilty about the inherent elitism of humanism, and I’d focus more on a philosophy that tries to exalt the most degraded and inhuman subjects.

Similarly for species, there seems a large gap in cognitive quality between humans and any other creature. If some species existed that were just somewhat less intelligent than humans but still identifiable as having a subjective experience in there, I’d have to look into a much more gradient focused definition of sentience and moral agency.

@jadagul ‘s statement

I have a basically unshakeable conviction that people are mostly decent, and will treat most people well, and would like me to be happy. And following that belief has served me well.
If I ever lost that belief, I would have to seriously rethink my ethics–my current position of trust-by-default would make a lot less sense, and I would probably find it much harder to sustain universal love and acceptance.

from their (he? she? I don’t know) post here, reminded me of the above challenge. More people should do it!

“But basically… if any button on the human psyche is shown to have reliable results - peer reviewed and consistently replicated - about how to affect people and make them happy, I would throw my philosophy out the window and pursue that. My current stance is a result of failure to find anything like that.”

…I think this means that you have to have an answer to “why not wireheading?”

Yeah, I know, it’s the most sophomoric of objections.  But wireheading is a thing, and it does reliably make people happy without them developing a tolerance.  The first-order problem with it for philosophers, in cynical terms, is that from the outside it creates something that looks less like our normal models of happiness than like drug addiction.  If you have non-hedonic terminal values, it’s very easy to explain why it’s not good enough.  But you seem to be working at a project where any conceivable result is going to look very weird from the standpoint of a normal person with normal assumptions.  So, uh, what’s wrong with the answer you’ve got?

I’d want to see the literature on existing wireheading technology as it is now. I’m not really aware of it, and how it works would matter a lot.

The perspective I mention is more to say “lots of people say their idea makes humanity better off, but then does not hold up under empirical verification on a consistent basis.” This is a frequent enough occurrence that it’s a good default stance towards a lot of social engineering and direct neurochemical futzing. Wearing the humanist mask, I find this fact existentially interesting.

Faced with the actual wireheading dilemma, I would probably say that the complexity of life has to exist to be happy. If wireheading makes you a simpleton that’s no more interesting than a labrat, intellectually and emotionally speaking… well I sorta want you to be happy, but only as much as I want labrats to be happy.

If wireheading just makes you euphoric but still living a full life. Then yeah go ahead, why ever not.

“…the complexity of life has to exist to be happy.”

This seems like an obvious thing to think, like a position sufficiently normal that you can just toss it off – that’s not a dig at you, it seems that way to me too, in a very visceral way – but in fact it’s a dodge.  It is the fundamental dodge that underlies all non-crazypants hedonic philosophies. 

*****

We have a strong instinct that happiness is a sort of warm emotional blanket that ought to cover life-in-general, that we ought to be basically happy most of the time unless something is wrong.  I don’t know whether that instinct is baked into the human psyche, or whether it arises from living in a modern civilized society where our basic needs are all easily met, or what.  There are enough conflicting arguments on that score to go around. 

But from a brain-standpoint, from the standpoint of evo-psych, it is of course total nonsense.  Happiness is a mental prodding device, like pain and hunger and fear, that was “designed” to guide us through complex situations by activating in specific limited situations.  It’s the “ding!” that tells us that we did something right in a not-to-be-taken-for-granted way, and that we should try to do that thing again. 

With resources and engineering skill, you can structure a life to maximize happiness, in the same way that you could structure a life to maximize hunger or fear or pain.  But it’s not going to look very much like normal human existence.  One way or another, it’s going to be totally built around exploits (in the cheating-at-video-games sense); it’s going to rely on superstimuli and brain glitches to keep normality at bay, because normality is the hedonic treadmill.  It’s going to be something very much like wireheading, even if it isn’t wireheading exactly.  It’s going to offend your aesthetic sensibilities, it’s going to look and feel wrong, because the lessons we learned about what looks right are all rooted in methods of existence that rely on happiness being a sometimes food. 

*****

OK, having said all that: I am not at all convinced that I believe it.  But it’s certainly a possibility, in the least convenient of all possible worlds.  Building your system of ethics on a feature of the human brain means that you have to be prepared for neurology to work in a way that you wish it wouldn’t.

Or you can just define “happiness” in some wonky way that doesn’t basically map to a human brainstate.  That’s the standard move amongst mainstream utilitarian philosophers, as far as I can tell.  But it is what we call a lie, and leads to some truly unconvincing contortions as the philosophers in question try to hide the fact that they’re basically advocating for their own aesthetic preferences about life to be put into practice. 

If I’m reading you right, you’re saying happiness is more like an optimization mechanism, than a stable state itself that you can be in or out of.

I agree. I probably misphrased myself originally. As an empirical matter, there seemed to be no easy way to “just keep people happy”. That is probably related to your explanation above.

But I tried that first (and read about people trying it a lot), and the conclusion seems to be that it is really incredibly hard no way more than “if my guy wins elections her policies will make this happiness happen”.

I think I didn’t explain myself very clearly, for which I apologize.

The point of the wireheading example is: there is an easy way to “just keep people happy.”  All it takes is a little piece of metal, a generator, and some brain surgery.  We have everything we need to do it right now. 

Of course, the result you get out of that methodology is icky.  It is not at all what you want!  (Probably.  You might be some kind of ethical werewolf or something.  Or maybe you’ve done enough fiddling with the aesthetics that you can appreciate the gods on their lotus thrones.) 

This is not because wireheading gives you some kind of fake not-good-enough happiness.  This is because happiness isn’t actually the thing you want.  As you come closer and closer to maximizing happiness, through any methodology at all, you’re going to converge on a result that will bother you in the same way that wireheading bothers you.  A sufficiently-reliable happiness engine will inevitably produce imbecile simplicity, because the dribs-and-drabs scarcity of happiness – and the consequent scramble to attain it – is one of the key factors in the kind of appealingly complicated life that you’ve learned to value.

(Fictional evidence isn’t trustworthy, but for sheer conceptual punchiness on this exact topic as it addresses your interests, I think it’s useful to turn to David Foster Wallace.  Infinite Jest has, as its central Macguffin, something that is basically wireheading-in-video-tape-form: a short movie that produces addictive euphoria when watched.  And the content of that movie is perfect infinitely-accessible emotional validation of the viewer.) 

There are lots of ways to address this problem, lots of alternate philosophical paths you can take.  But so long as you think you’re aiming to maximize a single value, you’re just going to push farther and farther into maximized-value territory until you suddenly discover the Monster at the End of the Book and freak out.

A Challenge: Question 2

bambamramfan:

balioc:

bambamramfan:

bambamramfan:

My favorite thing about the ITT is the way people answer question 2: “What is the true reason, deep down, that you believe what you believe? What piece of evidence, test, or line of reasoning would convince you that you’re wrong about your ideology?”

I’m certain every single respondent has thought deeply about what they believe. They’ve seen studies that back them up, thought about ethical principles, and seen the effects of oppression first hand. But when asked “why do you really believe this? what swayed you so much that it would change your mind if it was contradicted”… they often dissolve into vagueness and “everything shows I’m right!” Everything, of course, can never be disproven.

It’s a fascinating insight into how ideology works. Ideology isn’t formed by realizing our terminal values, or reading a study, it’s a much more osmotic experience than that. It involves quasi-believing things because so many other people we know believe them, and not questioning them *too* much because doing so is uncomfortable (both socially, and to our own identity as a good person.) Like Ra, ideology hates it when you try to pin down terms and reasons too precisely.

So. Let’s do that. Here is my challenge to any rebloggers: What is the true reason deep down that you believe what you believe? What evidence could convince you that you were wrong?

I’ll start.


This tumblr is arguing the humanist viewpoint, so I’ll focus on why I’m a humanist, and what could sway me into other philosophies (specifically: parochial tribalism, anti-human universalism, or rights based liberalism and materialism.)

My terminal value is not special. It’s basically happy people, with an emphasis on complex and interesting lives and societies. My own personal goal is to find a button that increases human happiness no matter how much you push it, and to keep pushing it until it breaks.

It turns out that most of the things that we think increase human happiness, such as having better living conditions or more money, don’t really. And even our attempts to build up economies so that people have more stuff, are horribly complicated and unpredictable. I am distinctly unimpressed with a lot of the rationalist projects in this regard, and I suspect they will spend decades trying to find ways to improve the happiness of others with material interventions, and rarely feel they have made much success. There will still be misery everywhere, even after billions are spent. (If rationalist interventions started making a measurable and sizable impact in the amount of misery in the world, that would be evidence to change my view.) (Yes I saw Scott’s chart about malaria interventions. I approve of malaria interventions. And the euphoria in the comments only emphasized to me how many rationalists are insecure about whether this project of theirs is having any results.)

Additionally, a lot of the rules we set down about how we should treat each other should increase human happiness, but mostly make humans miserable as they fight over the rules, and the rules are enforced haphazardly, with some receiving the extreme brunt of enforcement and others being afraid there isn’t enough enforcement. Which is why I am skeptical of rights based liberalism, and will continue to be until it is shown to be a better social technology than primitive tribalism.

The button that does work, that in my experience does make people reliably feel better, is listening to them and one-on-one interaction. Humans are social animals, and humans have very unique individual experiences. Respecting that individual complexity, and giving them social validation, seems the most reliable way to increase happiness, even if only on a very small scale.

If listening and validation are shown to be in the long run net-negative in happiness (if for instance, they operate like a drug that gives you a high that you then grow tolerant for) then I would be skeptical of that button.
If there is no button that can reliably increase human happiness, well that would say a great deal about the chaotic nature of the human condition, which fundamentally validates my anti-categorical humanism.

But basically… if any button on the human psyche is shown to have reliable results - peer reviewed and consistently replicated - about how to affect people and make them happy, I would throw my philosophy out the window and pursue that. My current stance is a result of failure to find anything like that.

Now, humanism might just be speciesist, and it’s possible I don’t give enough credit to non-humans and dehumanized subjects. By appreciating complexity, I may be favoring people who’ve had interesting lives over people who have been so beaten down by the system that they will always be boring to me. This is a real risk and why I dabble in universalism elsewhere. But for now, my interactions with any human have shown that no matter how degraded they have been by society, they’re still as intelligent and social as the richest person I’ve met when you just listen to them for 10 minutes. If this were shown not to be the case statistically, I’d feel guilty about the inherent elitism of humanism, and I’d focus more on a philosophy that tries to exalt the most degraded and inhuman subjects.

Similarly for species, there seems a large gap in cognitive quality between humans and any other creature. If some species existed that were just somewhat less intelligent than humans but still identifiable as having a subjective experience in there, I’d have to look into a much more gradient focused definition of sentience and moral agency.

@jadagul ‘s statement

I have a basically unshakeable conviction that people are mostly decent, and will treat most people well, and would like me to be happy. And following that belief has served me well.
If I ever lost that belief, I would have to seriously rethink my ethics–my current position of trust-by-default would make a lot less sense, and I would probably find it much harder to sustain universal love and acceptance.

from their (he? she? I don’t know) post here, reminded me of the above challenge. More people should do it!

“But basically… if any button on the human psyche is shown to have reliable results - peer reviewed and consistently replicated - about how to affect people and make them happy, I would throw my philosophy out the window and pursue that. My current stance is a result of failure to find anything like that.”

…I think this means that you have to have an answer to “why not wireheading?”

Yeah, I know, it’s the most sophomoric of objections.  But wireheading is a thing, and it does reliably make people happy without them developing a tolerance.  The first-order problem with it for philosophers, in cynical terms, is that from the outside it creates something that looks less like our normal models of happiness than like drug addiction.  If you have non-hedonic terminal values, it’s very easy to explain why it’s not good enough.  But you seem to be working at a project where any conceivable result is going to look very weird from the standpoint of a normal person with normal assumptions.  So, uh, what’s wrong with the answer you’ve got?

I’d want to see the literature on existing wireheading technology as it is now. I’m not really aware of it, and how it works would matter a lot.

The perspective I mention is more to say “lots of people say their idea makes humanity better off, but then does not hold up under empirical verification on a consistent basis.” This is a frequent enough occurrence that it’s a good default stance towards a lot of social engineering and direct neurochemical futzing. Wearing the humanist mask, I find this fact existentially interesting.

Faced with the actual wireheading dilemma, I would probably say that the complexity of life has to exist to be happy. If wireheading makes you a simpleton that’s no more interesting than a labrat, intellectually and emotionally speaking… well I sorta want you to be happy, but only as much as I want labrats to be happy.

If wireheading just makes you euphoric while you still live a full life? Then yeah, go ahead, why ever not.

“…the complexity of life has to exist to be happy.”

This seems like an obvious thing to think, like a position sufficiently normal that you can just toss it off – that’s not a dig at you, it seems that way to me too, in a very visceral way – but in fact it’s a dodge.  It is the fundamental dodge that underlies all non-crazypants hedonic philosophies. 

*****

We have a strong instinct that happiness is a sort of warm emotional blanket that ought to cover life-in-general, that we ought to be basically happy most of the time unless something is wrong.  I don’t know whether that instinct is baked into the human psyche, or whether it arises from living in a modern civilized society where our basic needs are all easily met, or what.  There are enough conflicting arguments on that score to go around. 

But from a brain-standpoint, from the standpoint of evo-psych, it is of course total nonsense.  Happiness is a mental prodding device, like pain and hunger and fear, that was “designed” to guide us through complex situations by activating in specific limited situations.  It’s the “ding!” that tells us that we did something right in a not-to-be-taken-for-granted way, and that we should try to do that thing again. 

With resources and engineering skill, you can structure a life to maximize happiness, in the same way that you could structure a life to maximize hunger or fear or pain.  But it’s not going to look very much like normal human existence.  One way or another, it’s going to be totally built around exploits (in the cheating-at-video-games sense); it’s going to rely on superstimuli and brain glitches to keep normality at bay, because normality is the hedonic treadmill.  It’s going to be something very much like wireheading, even if it isn’t wireheading exactly.  It’s going to offend your aesthetic sensibilities, it’s going to look and feel wrong, because the lessons we learned about what looks right are all rooted in methods of existence that rely on happiness being a sometimes food. 

*****

OK, having said all that: I am not at all convinced that I believe it.  But it’s certainly a possibility, in the least convenient of all possible worlds.  Building your system of ethics on a feature of the human brain means that you have to be prepared for neurology to work in a way that you wish it wouldn’t.

Or you can just define “happiness” in some wonky way that doesn’t basically map to a human brainstate.  That’s the standard move amongst mainstream utilitarian philosophers, as far as I can tell.  But it is what we call a lie, and leads to some truly unconvincing contortions as the philosophers in question try to hide the fact that they’re basically advocating for their own aesthetic preferences about life to be put into practice. 

brazenautomaton:

dagothcares:

chroniclesofrettek:

dog-of-ulthar:

Trump is an awful person, but he isn’t a competent politician.  Pence is possibly a worse person and a very competent politician.  There is already a conservative plan to impeach Trump so Pence can take the presidency.  

A Pence presidency probably has less chance of accidental nuclear war, but a much greater chance of extremely socially conservative legislation which would be devastating to civil rights in America.  Trump just wants to make money and feel important; Pence has an agenda.

Nixon resigned under threat of impeachment for corruption.  We already know Trump is corrupt.  Bill Clinton was impeached for lying about inappropriate sexual conduct.  We already know Trump has lied about inappropriate sexual conduct.

It’s only a matter of time before he lies about something too big for his press aides to smooth over and Congress gets to have its way with him, because he is a mess of a man and makes mistakes, and makes them loud.

If Pence is allowed to take his place, Pence will not make the same mistakes.

Pence is a career politician.  He’s well-spoken.  He’s relatively attractive.  His positions are clear and well-established.  He has a law degree.  He’s on the conservative end of Republicans, but he’s a committed member of the party.  In everything that made Trump unpopular among Republicans as well as Democrats, Pence is the opposite.

There were jokes comparing Trump to Emperor Palpatine, and a good rebuttal of them.  Compared to Pence, Trump is Jar-Jar Binks.  Pence is Palpatine.

Trump is wildly unpopular because both liberals and the Republican establishment don’t like him.  He’s ugly and crass and obviously incompetent, he’s the perfect figurehead to rally against.

Pence is not.  Pence is dangerous in different ways, and one of the worst is that he won’t make nearly as good a symbol for his opponents.  If Trump gets kicked out, we have to keep protesting, even as people try to say that we got what we wanted, that everything can go back to normal.  

Remember this, when the impeachment happens, whether it’s a month from now or three years.  Trump is bad.  Pence might be worse.

“A Pence presidency probably has less chance of accidental nuclear war,”

“Trump is bad. Pence might be worse.”

It seems like some people are less bothered by the idea of nuclear war than my friends and I are. 

This is the “flu during pregnancy doubles chances of schizophrenic children” thing. The base odds are so low we’re not really concerned by them.

Pence was extruded out of the Generic Religious Conservative Republican Machine

he is a generic religious conservative republican

if the generic conservative republican is existentially terrifying, then the problem is with you. either you are so astonishingly self-centered that dealing with any opposition makes you enraged, or you are so astonishingly bad at politics that you have no ability to oppose measures you think are harmful and no ability to convince anyone that doesn’t already agree with you to do this, or both (for today’s left it is both)

pence will not be worse. pence has some measure of control over his emotions. pence has the ability to let minor symbolic slights against him pass beneath his notice. pence has comprehension of why human beings consider it important to say things that are true. 

the only things pence wants to do are roll back some portion of the political victories your side won to an earlier state. a state where your team used to be able to do the actual work of convincing people, and attain those political victories. pence is not going to do damage in ways you cannot imagine and pence is not going to do damage to things you cannot imagine being damaged. pence will not do things that benefit absolutely nobody because he cannot control his emotions. pence will not use the force of the government to hunt down and punish people for saying mean things about him on twitter. pence is capable of perceiving some aspect of reality besides “how much do people like me, and should they be rewarded or punished for it”

these things cannot be said of trump

but the left will never notice this because the left is not capable of noticing this because all is devoured and all is lost. the left is devoured. entropy cannot be reversed. they are now a machine whose sole purpose is to give social power and emotional rewards to people that already have them. they will never again do something because it is useful or because it is correct. they will never again notice anything about the world beyond “the emotions of popular people”. they will not notice how trump is actually dangerous to american democracy in real life in the world because to them “dangerous to american democracy” is just a noise, like all other noises they make, that only means “give me power and emotional rewards”. all is lost. entropy cannot be reversed. all is lost.

So, absolutely agreed and endorsed: Mike Pence is 100% Generic Extruded Religious-Conservative Republican Product.  He is not any kind of new unique monster.  He is not particularly different from any number of normal state-level Republican politicians. 

Also agreed and endorsed: Trump is some kind of new unique monster, and his rise to power carries terrifying dangers that would not be posed by any normal politician, including his VP. 

That said, I don’t think this analysis is fair.  I think you are underselling the extent to which it is possible to think, with integrity and with good reason, that Normal Partisan-Style Politics really matters.

Like, there are three categories of Horrible Things that we have to fear from the Trump presidency:

1. Horrible Things that we apparently have to fear from any presidency of any party because, Christ, apparently the system will co-opt anyone, even a guy like Barack Obama is not going to be your savior (e.g.: drone strikes, total failure to rein in abuses by the financial sector)

2. Horrible Things that might arise from the personal idiosyncrasies of Donald Trump (e.g.: the streets of New York running red with blood because of a pogrom whose origins lay in a Twitter dustup with Taylor Swift)

3. Horrible Things that could be expected from any contemporary Republican administration but not from a Democratic administration

“How big is Category 3, especially compared to the other categories?” is a legitimate question whose answer is not inherently obvious. 

I do in fact think that Category 3 is large.  I think that, for structural reasons that are not very hard to trace, the Republican Party has collectively gone off the deep end.  I think that standard Republican Party orthodoxy at this point involves policies – notably environmental, economic, and social-welfare policies – that are likely to be catastrophically bad for America and for the world (as opposed to Democratic Party orthodoxy, which is merely mildly bad).  I am honestly unsure whether the low-but-still-hideously-high threat of a Trump-fueled nuclear war matters more or less than the likelihood that Trump will act like a normal Republican most of the time.

…you can point out, correctly, that we’re in a first-past-the-post system where the Republicans are one of the two major establishment parties, and therefore that they’re predictably going to win about half the things overall.  You can point out that about half the voters in the US, a group that obviously includes many many many good and decent people, are going to end up voting Republican for those same standard structural reasons.  None of these things is inconsistent with any of the others. 

I certainly think it all adds up to good and sufficient reason for feeling existential horror. 

Excrucians

bambamramfan:

The Deceivers live outside the world; they think that we have built the world out of lies. They think the whole of Creation is a jungle of deceit that we have put up to keep from seeing ourselves the way we really are. 

They love us but they love not that lie.

Nobilis: the Essentials, Volume 1 (Kindle Locations 4145-4157)

Which reminds me so much of Lacan

Kinder Surprise, one of the most popular chocolate products on sale all around Central Europe, are empty egg shells made of chocolate and wrapped up in brightly colored foil; after one unwraps the egg and cracks the chocolate shell open, one finds in it a small plastic toy (or small parts from which a toy is to be put together). A child who buys this chocolate egg often nervously unwraps it and immediately breaks the chocolate, not bothering to eat it at first and worrying only about the toy in the center. Is such a chocolate-lover not a perfect case of French psychoanalyst Jacques Lacan’s dictum “I love you, but, inexplicably, I love something in you more than yourself, and, therefore, I destroy you”? And, effectively, is this toy not l’objet petit a at its purest, the small object filling in the central void of our desire, the hidden treasure, agalma, in the center of the thing we desire?


Zizek

So, Zizek can laugh it up if he wants, but…

I’d never heard of Kinder Surprise until the year after college, when a friend of mine who’d just traveled to Europe introduced me to the concept.  It sounded kind of fun in a campy way, and there was a nearby imported-foods store, so I bought one on a lark.

The toy inside was a little plastic rolly miniature of Cardinal Richelieu with what I can describe only as “Power Leering Action.” 

So, hells yeah, the thing in the middle of a Kinder Surprise egg is l’objet petit a.  Quite rightly and sanely.  How could anything beat that, in terms of unimagined cosmic joy?  What better satiation of the vast fathomless hunger of human existence could there be? 

(Needless to say, I never got anything nearly that awesome out of a Kinder Surprise egg again.)

As far as I can tell, there are two kinds of writers in the world: the ones who dream of being great writers, and the ones who dream of being the kind of people they write about.

brazenautomaton:

orbispelagium:

orbispelagium:

I’m a third of the way through Mascots (available on Netflix), and it’s like it was designed to appeal to me specifically.

It’s nonstop dense, deadpan, absurdist dialogue that rarely belabors a joke, and it focuses on a group of weirdos without point-and-laugh cruelty, and it’s got a genuine affection for them (even in an interlude with furries appearing at the mascot competition).

It’s from the creator of This Is Spinal Tap, which I would have liked more if I was more versed in the last days of glam rock. This doesn’t have as many quotable classic jokes, but it’s good solid comedy throughout.

Also, Zach Woods has such great energy/cadence/appearance for this kind of awkward deadpan comedy and he really needs to be in more things (he was amazing in In The Loop, and he even got to be in the new Ghostbusters as the tour guide at the start)

I hated it, because I didn’t feel genuine affection at all. I felt that movie held all its characters save for the hedgehog guy in complete contempt; we are constantly seeing them do things the movie wants us to regard as dumb without letting us see them pay off or be vindicated. And unlike folk music or dog shows, mascotting is not built-up or glamorous, so the movie is looking down at people who are not highly regarded and saying “I need to take these guys down a few pegs.” Like, the sequence where one of the judges just goes on and on, unprompted, about having a small penis – and that’s it, that’s the joke, he has a small penis. For fuck’s sake, movie.

I dunno.  I think that Guest’s work generally runs on an engine of “just sitting there and watching someone for enough time will humanize him and make you care, even if the narrative doesn’t provide a redemptive stinger.”  And I think it often works.  Not always, but often. 

In Mascots, the plumber guy is probably the best example of this.  He’s a loser, we never see him being not-a-loser, and indeed we’re invited to laugh at the smallness of his world and the dumb things he cares about…but we see him trying hard, we see him reaching out to make a human connection, we see him being humiliated in a moment that should have been a triumph, and it’s sad.  It’s funny to some extent, but it’s definitely also sad, because by that point we’ve mapped some sad-sack part of ourselves onto this sad-sack dude and there is empathetic hurting. 

You get the same thing to a lesser extent with Parker Posey’s character, who is obviously insane and clueless, but who is nonetheless shrouded in a weird humanistic dignity by the earnest purity of her passion.  (Don’t you wish you were that insulated from existential dread?, you can hear Guest whispering.  And, yeah, I kinda do.)  You even get it with the husband in the terrible marriage, because the script hammers you with “imagine if you were in a marriage like that,” and yeah, it’s terrible.

The Fist is indeed basically just a walking joke, though.

BALIOC’S READING LIST, 2016 EDITION

This list counts only published books, consumed in published-book format, that I read for the first time and finished.  No rereads, nothing abandoned halfway through, no Internet detritus of any kind, etc.

1. The Shepherd’s Crown, Terry Pratchett
2. Why We Love Serial Killers, Scott Bonn
3. Zeus Grants Stupid Wishes, Cory O'Brien
4. The Instructions, Adam Levin
5. Otaku: Japan’s Database Animals, Hiroki Azuma
6. Gentleman Jole and the Red Queen, Lois McMaster Bujold
7. The Bands of Mourning, Brandon Sanderson
8. Mistborn: The Secret History, Brandon Sanderson
9. The Great Exception: The New Deal and the Limits of American Politics, Jefferson Cowie
10. Calamity, Brandon Sanderson
11. American Pastoral, Philip Roth
12. This Census-Taker, China Miéville
13. The Secrets of Drearcliff Grange School, Kim Newman
14. Nations and Nationalism, Ernest Gellner
15. The Guns of Ivrea, Clifford Beal
16. Shadow’s Edge, Brent Weeks
17. The Library at Mount Char, Scott Hawkins
18. Phantastes: A Faerie Romance for Men and Women, George MacDonald
19. The Book of the Beast, Tanith Lee
20. Beyond the Shadows, Brent Weeks
21. The Book of the Dead, Tanith Lee
22. The Book of the Mad, Tanith Lee
23. Courtesans and Fishcakes: The Consuming Passions of Classical Athens, James N. Davidson
24. American Nations: A History of the Eleven Rival Regional Cultures of North America, Colin Woodard
25. The Girl Who Circumnavigated Fairyland In a Ship of Her Own Making, Catherynne Valente
26. The Girl Who Fell Beneath Fairyland and Led the Revels There, Catherynne Valente
27. The Girl Who Soared Over Fairyland and Cut the Moon In Two, Catherynne Valente
28. Children of Earth and Sky, Guy Gavriel Kay
29. Being Mortal: Medicine and What Matters In the End, Atul Gawande
30. A Voyage To Arcturus, David Lindsay
31. In the Labyrinth of Drakes, Marie Brennan
32. The Gun Seller, Hugh Laurie
33. Never Let Me Go, Kazuo Ishiguro
34. The Goblin Emperor, Katherine Addison
35. Penric and the Shaman, Lois McMaster Bujold
36. The Hallowed Hunt, Lois McMaster Bujold
37. The Great Ordeal, R. Scott Bakker
38. The Book of Heroes, Miyuki Miyabe
39. Psmith In the City, P. G. Wodehouse
40. Last First Snow, Max Gladstone
41. Harry Potter and the Cursed Child, Jack Thorne
42. The Drug Wars in America, 1940-1973, Kathleen J. Frydl
43. Sapiens: A Brief History of Humankind, Yuval Noah Harari
44. Four Roads Cross, Max Gladstone
45. Best. State. Ever.: A Florida Man Defends His Homeland, Dave Barry
46. Midnight in the Garden of Good and Evil, John Berendt
47. For All the Tea in China: How England Stole the World’s Favorite Drink and Changed History, Sarah Rose
48. The Politeness of Princes and Other School Stories, P. G. Wodehouse
49. Tales of St. Austin’s, P. G. Wodehouse
50. When a Fan Hits the Shit: The Rise and Fall of a Phony Charity, Jeanine Renne
51. Killer of Men, Christian Cameron
52. Marathon, Christian Cameron
53. Poseidon’s Spear, Christian Cameron
54. Debt: The First 5,000 Years, David Graeber
55. The Great King, Christian Cameron
56. Salamis, Christian Cameron
57. The Ill-Made Knight, Christian Cameron
58. The Long Sword, Christian Cameron
59. Strangers in Their Own Land: Anger and Mourning on the American Right, Arlie Russell Hochschild
60. Penric’s Mission, Lois McMaster Bujold
61. Mister Monkey, Francine Prose
62. Hell’s Angels: A Strange and Terrible Saga, Hunter S. Thompson
63. Decline and Fall: The End of Empire and the Future of Democracy in 21st Century America, John Michael Greer
64. Rage of Ares, Christian Cameron
65. The Dragon’s Path, Daniel Abraham

“Full-length” works consumed in 2016: 54

Works consumed in 2016 that are maybe too short to count (novellas, etc.): 11

Plausible works of improving nonfiction consumed in 2016: 14

Works consumed in 2016 written by women: 18

Works consumed in 2016 written by men: 47

Balioc’s Choice Award, fiction division: The Instructions

Balioc’s Choice Award, nonfiction division: Debt: The First 5,000 Years

…I feel like the takeaway here should be “less genre stuff and more Serious Books About the Real World.”  Seeing my improving-nonfiction count sitting at 14 is painful.  But, man, reading fantasy novels is great and it’s hard to feel too bad about doing a lot of it.