jadagul:

fatpinocchio:

jadagul:

fatpinocchio:

jadagul:

wirehead-wannabe:

maddeningscientist:

Less snarkily: I don’t think that looking at “what people want” is the same thing as determining what’s actually good. Someone with an intense desire to make themselves as miserable as possible (not in the “pleasure from masochism” sense, like really genuinely bad) seems obviously misguided.

ehhh? in the end there is nothing else to draw “good” from than what people want at some level, unless you’re into like, strong moral objectivism, which seems like a straightforwardly ridiculous position. “what would you do without morality” etc.

i’d probably agree with modifying the pain-creature to want things i think are good, just as much as i’d agree with modifying a paperclipper to want things i think are good.  i don’t really think “misguided” has meaning here, except in the sense of being mistaken about what you value (which is easy to be, but doesn’t sound like what you mean)

This is probably another instance of disagreement about moral realism/objectivism, because to me the idea of “what would you do without morality” is like… I’d go be homeless and hang out at the library all day and probably eventually die of hypothermia? Like, it’s asking me what I’d do if I found out that my sense that pain is bad and pleasure is good went away. I would say “I wouldn’t function”, but who cares about functioning in a scenario like that! Why should we give that hypothetical any weight whatsoever if it gives us no guidance as to how to act or what to believe? Why wouldn’t you focus on the 0.0000000000001% chance that moral realism is true, since that’s where 100% of the value and disvalue lie?

Do you not, like, have things you want?

Why should I do what I want to do?

On the one hand, this question gets something wrong at the outset. Though I reject both “whim-worship” and preference-fulfillment theories of well-being, there’s a strong connection between an agent’s values and their reasons for action. That’s how morality is ultimately grounded. On the other hand, asking “What would you do without morality?” stipulates that this grounding fails. What would you do if there were nothing you ought to do - not even what you want?

Why shouldn’t you do what you want to do?

It’s not that you should do what you want to do. It’s that you want to do it. So you do it, unless you have a reason not to do it.

And if you have a reason not to do it, that means you don’t want to do it any more. Because that’s what “want” and “reason” mean.

If you want to define “reason” that narrowly, sure. But, pre-theoretically, “Why should I do what I want to do?” is a coherent question, and “You shouldn’t necessarily do what you want” is a coherent position. Also, if I should do something, it’s because I have a reason to do it, and if all reasons are founded in desires, then it follows that I should do what I want. “I should do what I should do” is a tautology, but “I should do what I want” is a substantive claim.

It may be that drilling down into the relevant concepts reveals a foundational connection between obligation and desire, and/or that a desire is a prima facie reason for action and there are no defeaters, but that involves some degree of commitment to a metaethical theory (or at least a vague family of theories).

“What would you do without morality?” acknowledges the pre-theoretical question/position, but rejects the theory (and all alternative theories).

I don’t think the word “should” is as clear or as ontologically primitive as you’re treating it.

Like, I don’t think I have a better gloss of “you should do this” than “On reflection, you would agree with me that you actually want to do this.” If you make me define “should”, I would define it in terms of “want”. I don’t really know what it could mean outside of that.

@balioc’s comments here are probably relevant.

Heh.  This is one of those cases where it turns out to be relevant that human language is first and foremost a tool of social manipulation rather than a tool of objective-truth-delineation. 

“You should do this” has a very clear gloss in a practical sense: “I want you to do this, and I am willing to use the language of moral demands to try and make you.”  (With definite overtones of “…and if you don’t comply, I may be driven to denounce you to a coalition of my allies.”) 

If you insist on translating it into coherent metaethical language, I suspect you end up with something like, “According to my ethical schema, it would be better for you to do this thing; either allow yourself to be convinced by my logic, or understand that we are irreconcilably opposed on this issue.” But, as with so many things, it turns out that this social technology can serve its purpose just fine without being reducible to a single consistent thing on the root-logic level, and so…it’s just fundamentally inconsistent, and regularly employed by people who use it inconsistently in that sense.