I feel like my feelings about “weird EA” are hopelessly biased by the fact that Brian Tomasik is a great person
I genuinely think the whole “do fundamental particles suffer” thing is a reasonable line of thought. I think I might be hopelessly biased by the fact that all the counter arguments I’ve seen* have just been bald-faced absurdity bias. (I really wanted to make a scathing comment about the quality of the arguments but I’m trying to follow through on my commitment to not engage in satire.)
* Not all the counter arguments I can imagine, mind you.
I don’t know if you think my blogging today has qualified as bald-faced absurdity bias, but it does reflect my basic objection to that sort of argumentation. I can’t speak for shlevy, but I suspect the point of his original post was also to respond specifically to this sort of thing.
I decided to go find the actual article rather than joking summaries, and I found this post. And it goes so completely off the rails in the introduction, in the first paragraph, and in fact in the first sentence that it seems hard to engage with productively.
The very first sentence of the argument is:
In order to reduce suffering, we have to decide which things can suffer and how much.
And by that point he’s already made the mistake I’ve been criticizing. He’s lost the plot. He has:
1. Decided that the purpose of ethics is to reduce suffering.
2. Detached that decision from the reason he came to that conclusion in the first place, leaving it as an unmoored axiom.
3. Started worrying about whether that axiom might apply to things he hadn’t thought of originally.
4. And, when deciding that it does have radically different implications, thrown away his original goals and run with the crazy axiom, rather than using that feedback to modify or restrict its scope.
This is exactly the process I was criticizing in the other numbered list I put in a post today. (At least I’m apparently consistent in my criticism).
Like, my fundamental response to that essay is to ask Brian Tomasik: “Why do you care about suffering?” There’s the tautological answer, which is “I think suffering is bad.” And if that’s the only answer, then there’s no way to argue with it, but there’s also not terribly much content there.
But realistically, people get their feeling that suffering is bad from somewhere. And Tomasik knows this, and points to it in sentence number two (don’t worry, I’m stopping here, not going sentence by sentence through the entire piece):
Suffering by humans and animals tugs our heartstrings and is morally urgent, but we also have an obligation to make sure that we’re not overlooking negative subjective experiences in other places.
I would rewrite the first half of that sentence a bit, into “Suffering by humans and animals tugs our heartstrings, and is [therefore] morally urgent.” We feel that suffering is bad because when we see other people suffer, that makes us sad. We think it’s a bad thing. We want to make it stop. And therefore we value reducing suffering.
But when I see a mosquito suffering, I don’t feel bad. (Except insofar as it’s suffering, and therefore not dead, and dead mosquitoes are far preferable to live ones). I don’t feel good, either. I don’t care. I don’t even know what mosquito suffering looks like!
And most people don’t. Most people don’t have any objections to mosquitoes dying, or to video game characters losing lives, or to two protons repelling each other. And most of the people who do get upset by those things are people who have trained themselves to do so, through the sort of confused quasi-argument that I have been critiquing.
And this is the reason I find this sort of logical slide truly objectionable, rather than amusing. This argument takes things that are perfectly harmless and gets people upset about them, so that it generates actual suffering in actual people. I want people, and especially the people I like, to be happy. I am not pleased when people find more ways to make themselves sad.
When you start with “I don’t like it when my friends are unhappy”, try to formalize that, and wind up with “but it’s even worse when electrons are forced to be near each other”, you have lost something. Your model is not matching the thing you’re trying to model. Your axioms don’t generate the system you were envisioning. Your delivered product does not meet the design specs. You have fucked up.
Ozy says that Brian Tomasik is a lovely person. I have never met him, but I genuinely 100% believe that. I would like him to be happy, and so I wish he would not follow confused arguments to give himself more sources of stress.
I think you’re missing the case for trying to rationalify our moral intuitions beyond just “what tugs at my heartstrings”.
If I watched a movie about a cute kid who was starving in Sudan, it would tug at my heartstrings and I would want to help the kid.
In fact, I haven’t watched this movie, and I haven’t thought about the famine in Sudan in months.
I could take a really immediate/self-regarding view of ethics, where the whole point of ethics is to make myself happy. In that case, maybe if I saw the movie, I would donate to help the particular kid in the movie, but no other kid, and my heartstrings would feel better, and so there would be no problem. Maybe I would even just avoid seeing the movie, since I know it would make me want to give away money (which, since my heartstrings are currently untugged, seems to present-me like a waste). Maybe I would give away money to the cute kids in Sudan (who tug at my heartstrings) but not to non-cute kids (who don’t).
But if I think morality is about anything other than making my conscience shut up, it seems like I should accept some common-sense axioms. Like “even though cute kids tug at your heartstrings, and non-cute kids don’t, probably cuteness shouldn’t determine worthiness of help”. Or “just because they made a movie about only one kid, doesn’t mean that the other kids don’t matter and wouldn’t be just as sympathetic if you saw movies about them”. These seem kind of like basic logical actions: “well, there’s no real difference between Movie Kid and other kids, and I don’t endorse treating equivalent people differently for no reason, so I guess I’ll care about all the kids.”
I feel like if you don’t think you should apply basic logical actions to your moral system, and you endorse morality just being “making my conscience shut up about this specific incident”, then you’re at LEAST as morally weird as people who worry about insect suffering.
But if you do apply basic logical actions to your morality, then you get a slippery slope question of which logical actions to take and which ones not to. If you try to do all the actions and be logically consistent, you end up as Brian Tomasik.
So I guess I’m positing a trilemma, one branch of which you have to take:
1. Agree that morality is just about placating your random desires, to the point where, if you want to help a cute kid in a movie, it’s almost nonsensical to ask whether that implies you should help identical but less cute kids who weren’t in movies.
2. Accept that morality is subject to the normal rules of logic, but then make a cowardly retreat from this as soon as it becomes inconvenient.
3. Become Brian Tomasik.
All of these options seem about equally unappealing to me, so I think I try to be an awkward three-footed mutant with one foot in each bin.
First of all, I would never want to be read as implying that I’m not incredibly morally weird. There’s a reason Nietzschean Übermensch narratives have a lot of resonance for me. (“Those who write new values on new tablets, etc.”) But I think your trilemma is wildly overstated.
This is partly repeating the post I made while you were presumably writing this one, so apologies for that.
My ethics are about making me happy and satisfied with myself. Because that’s all that ethics can be, because there is no God and there is no great judge. My ethics are an answer to the question “what do I want to do?”
But of course I have things I want other than “to get the biggest possible dopamine hit in the next thirty seconds.” I want to be the sort of person I want to be. I want to be happy with the decisions I’ve made. And because of {mumble about biology and evolution} a lot of what I want is for other people to be happy and healthy and free.
I like people. I want them to be happy. And thus my ethics call for me to do things that make other people happy.
A big ingredient of this is some sort of reflective equilibrium. Once you have preferences about the sort of person you are, that imposes some consistency constraints (and this is where I’m slightly repetitive). If I feel an urge to help “that” toddler, but also feel like I don’t want to be the person who only helps toddlers who happen to be shoved in my face, then I might respond accordingly, by helping some other toddlers instead.
I, personally, want to be the sort of person who doesn’t respond to emotionally manipulative pleas at all, but does take calculated and optimized approaches to achieving things effectively. And this is why I’m fundamentally sympathetic to/supportive of groups like GiveWell.
But a reflective equilibrium is reflective and it’s an equilibrium. It’s reflective because there’s a feedback loop. You have an initial set of values, and then you take steps to harmonize them. And that changes things subtly, or sometimes not-so-subtly. (Leaving aside situations where you actually change your mind for other reasons).
But at every step of this reflection you need to connect back to your actual grounding. And eventually you settle down to where all these things are relatively consistent, and then you have a stable equilibrium to work from.
I feel like what happens with Tomasik-like arguments is a loss of that reflective grounding. Rather than having a feedback loop from intuitions to principles to intuitions to principles, he has a straight chain from intuitions to principles and then discards the intuitions like a booster rocket. And that lets his new principles get really divorced from where they started out.
This process is totally internally consistent. But if you just want to be internally consistent, that’s easy. You can be internally consistent and believe anything. (Arguably, you can even be internally consistent while believing two contradictory things, as long as you also believe, like the tortoise, that believing two contradictory things is not a contradiction).
Internal consistency and reflective equilibrium don’t demand that you abandon your foundations.
You call the slide from “normal morality” to “Brian Tomasik” a slippery slope. I see it more as a jagged terrain with a bunch of awkward obstacles that Tomasik seems determined to vault.
I see why people want to extend the logic of “don’t kill people” to “don’t kill chickens either”. I disagree, but I see the pull of it, because most people actually do care about whether, say, animals are suffering in front of them. There’s a coherent reason to use the word “suffer” to describe humans in pain and also puppies in pain.
Once you extend that to insects, though, you’re stretching the word beyond what people actually care about. You’re making the Worst Argument in the World, and saying “this thing is like that thing in one way, and thus this thing is bad like that thing is.” And Tomasik keeps making that move, from animals to insects to video game characters to subatomic particles. There is no reason to do that.
I guess maybe the short version is this. I don’t think that applying logic to your ethical code is a bad thing. But I think people have a bad habit of generating a principle, equivocating on the definitions of the words in that principle, and thus getting a new principle out of it that is basically unrelated in any reasonable way to where they started.
And that’s why I want to talk about grounding your feedback loop in the intuitions and values you started with. It protects you from this sort of equivocation and linguistic trickery.
And this is a fundamental problem of model-building in general. You build a model. You extrapolate. You get a lot of interesting conclusions. And then you go back, and you check to make sure that in the situations where you know the answer, your model gives you the right one. If it doesn’t, that means your model is broken. Not that the world is.
…I suspect that this boils down to “most EA-types (and indeed most humans) believe in something much closer to moral realism than you do.”
Even the most hardcore preference utilitarian will generally hew to some abstract idea like “morality = satisfying preferences = good in the world,” rather than a more personal one like “I support satisfying preferences because this tickles my self-conception.” Many EA-types report experiencing a really visceral horror and pain at the idea that any entities are suffering, as the output of a hyperactive imagination/empathic faculty, which is clearly not being processed through any kind of self-image-processing mental module. All this bespeaks a certain desire to find the “moral facts” [which may be mostly unknown] and act on them, rather than using morality in the way you describe, as something like a tool in the toolkit of self-definition.
Which is not crazy or surprising, when you realize how much EA culture generally is steeped in the concept of progress. Rationalism generally tends to take a stance of “the present is wildly different from the past, even the near future is going to be wildly different from the present, we need to ride on top of that onrushing change rather than letting it flatten us.” And, in terms of socially-defined morality, it’s been widely and correctly noted that we have in fact seen change that looks a lot like a technological-progress curve over the course of the past several centuries. Much of morality used to be [SHIT MOST OF US DON’T CARE ABOUT AT ALL RIGHT NOW], but we gradually started acting on the twin ideas that are “people’s lives shouldn’t be awful” and “more and more entities should be treated as morally equivalent.” Now we’ve gotten to the point where racism and sexism and straight-up contempt for the poor will give rise to nigh-universal censure, where we’re actually engaging seriously with ideas like UBI and trans acceptance and animal suffering mattering, and all of this looks like it’s just a continuation of the curve – like we’re (ahem) “making progress,” like we’re expanding moral technology in the way that we’ve expanded actual technology. Seen through that lens, the desire to “get ahead of the curve” and to know as many of the facts-that-will-be-discovered as soon as you can is eminently understandable.
All that said, I’m pretty much on your side here in terms of metaethics. Which should not be a surprise, given the extent to which my worldview revolves around the creation and maintenance of narratives. (Something something, hem hem poke poke.)
I do think you’re underselling the extent to which, for a given (weird) person, hewing to his desired self-conception can allow and require him to “fly off the rails.” I assume it matters vastly more to Brian Tomasik that he be someone who cares about all suffering in all its conceivable forms than that he avoid being shackled to the strictures of a weird morality.
(I myself can cop to a much-less-extensive version of this. I am not a hedonic utilitarian, and I am willing not to care at all about the suffering of entities that do not “think” by my fairly-rigorous standards, which is why e.g. I am not especially incensed about factory farms. But I do care about creating a decent existence* for all thinking beings, and that stance is actually pretty damn important to me, important enough that the right facts might cause it to overthrow major aspects of my life that are ultimately less important than it is. If it were demonstrated to me that chickens were capable of abstract reflective thought, that the existence of a chicken language and chicken literature had been hidden from me, Doing Something about factory farming would zoom up to become one of the central priorities of my life – even though I would, all else being equal, much rather exist as my current self than as an anti-factory-farming crusader who doesn’t get to enjoy eating delicious chicken. At some point, your reflective feedback loop doesn’t actually tell you to stop.)
* also defined in non-strictly-hedonic terms, for those who care