35 Comments

I think that deception is generally a bad policy for sophisticated consequentialist reasons, but for the reasons I describe in the linked post, enshrining its wrongness into the fundamental moral law is super implausible.

I agree that on consequentialism deception is generally a bad policy, and I agree that there are serious puzzles about deontic constraints, but I’m more confident in some of these object-level judgements than in these new-fangled paradoxes.

Solve any of the puzzles, then we'll talk!

Some strike me as close to proofs. In addition, I think we can tell a debunking story: of course you'd buy into the rules society has taught you about how to be a good person since you were 2 years old.

It seemed to me I could see the rightness of this rule in a way I don’t see the rightness of the rule to, e.g., respect my elders, follow the law, etc. I agree that will be the explanation for my intuition if utilitarianism is false, but the debunking story isn’t especially plausible independently.

Well this rule is more thoroughly taught to you by society!

The rule against lying in general is more thoroughly taught by society, but that’s easy to see through; the rule against religious deception or whatever is not something I’ve ever been told, except by Alexander Pruss — how would you specify the rule you think I’ve been taught in a way that makes it plausible society would’ve taught it to me more than, e.g., “don’t break the law”?

Don't lie or mislead people on important matters.

I recognize that this was a parable illustrating why utilitarianism is false, but just as a matter of virtue you came off as extremely based.

I actually held off from ousting this for a while bc it totally just felt like virtue signalling 💀

*posting

I think this might just be another common desire satisfactionism W.

What about desire-based act U? (or objective list theory that heavily values desires of this kind)

It would seem they’d agree that you shouldn’t lie to the Muslims, because they would be severely harmed, albeit unbeknownst to themselves.

They would both solve it — the preference utilitarian solution is esp. clear cut.

moral particularism will triumph 😈

Mate wtf I've been to Efes

I should clarify, I didn’t work at Efes!

Fair. Also moral philosophy is cool and all but when are we going to talk about your pseudo-Russellian theory of Turkish waiters

Well done, excellent essay. We gotta get these fellas into the public eye more, and continue to wrestle with their ideas.

I think Bentham is still right. In any real-world version of this scenario, there’s no way to be certain about things like customers never finding out about the deception, and if they did, it could easily collapse the fragile consequentialist logic here into net-loss territory. But in a pure hypothetical where end consequences are definitely fixed, different choices make sense. Imo a lot of anti-utilitarian arguments seem to emerge from this dynamic, where under conditions of total consequence certainty (which are basically impossible in the real world), utilitarianism can require unintuitive behaviour. Happy to accept that it’s right to deceive in a hypothetical scenario where consequences are fixed, but it’d basically never be the same in the real world, hence our instinctive aversion to the utilitarian conclusion.

""It struck me that I was in a moral dilemma that divided consequentialism from deontology."

I don't think that's right. The bad consequence of lying is that it results in a lie! The relevant consequence of intentionally deceiving customers is the act of intentional deception itself. This should add -50 consequence points (arbitrary scale) into your moral calculus! Any plausible consequentialism will count lying as finitely intrinsically bad (utilitarianism be darned, which is presumably the point of this blog post, though).

Although the people were in no way harmed by being lied to in this scenario.

Well you said it yourself: "My intuition had nothing to do with beliefs about what might happen [...]: had to do with the obvious wrongness". When you read anyone arguing ethical systems, if you read between the lines, the real master is always the intuition. Formal systems, utilitarian or otherwise, are fine models, but the moment they say something that offends our intuition, in the bin they go.

This shouldn't really surprise anyone. The fact that we have a moral sense, i.e. the ability to feel "this is wrong", is a result of evolution. As it happens with these things, it has plenty of parameters to be tweaked and filled in by culture (another form of evolution), and then by personal experience or reflection or influences received.

Given these kinds of messy origins, there's no reason at all to expect the resulting sense of right/wrong to conform to some simple logic that can easily be made explicit and formalized. And given lack of convergence in the endless attempts to make it so, I'd say it's pretty much confirmed at this point that it doesn't.

Does that tell us what to do? Of course not; but this intuition is a rather special one, because it occasionally gives us hints not only on how we can improve our behavior or the world around us, but also on how we can improve our moral sense itself. The fact that it can operate on itself is the seed of a sense of direction, which opens the possibility that moral progress may be real, without depending on the fantasy of some firm logical foundations, which are truly nowhere to be found.

TLDR: yay anti-foundationalism!

Examples like this are part of why I'm a preference utilitarian, though there may be a game theory way for hedonists to get out of it. Though, on the other hand, game theory solutions seem more natural on a preference account of well-being.

"Don't tell a lie about the food not being hahal" is a moral imperative, not a utilitarian one.

A utility/overall well-being perspective tells us nothing about whether we should lie in this case.

You and the owner would obviously be better off if you lied: you'd get money from salary and customers respectively.

As for the customers' well-being, who is to say it will suffer from eating non-halal food unknowingly? Perhaps they'd just be better off being lied to: they get to blissfully enjoy their meal, instead of having to put it off, or go to the trouble of finding another restaurant. Who measured their relative enjoyment/well-being, to know for certain that unknowingly enjoying their food won't give them more happiness than being told the truth would?

And even if they find out, who is to say that they won't just shrug it off (like the food blogger you mentioned), or that it won't help drive them to the conclusion that non-halal food was delicious anyway and start relaxing their guard about it? It might even prompt them to check out the deliciousness of bacon, something they perhaps always wondered about.

In the end, this isn't about objective harm to their well-being, but about lying to them, which is a moral issue, not a utilitarian one.

If we equate utility with being moral, then there's no separate meaning to utilitarianism, just morals (whatever their source, e.g. religion, tradition, personal convictions, etc). And if we don't, then it seems to me that utilitarianism is just doing what you'd have done anyway, but with extra steps.

I'm treating utilitarianism as a theory of morality: morality is what we ought to do, utilitarianism fills that in.

Nov 6 (edited)

That's just the morality we already wanted to follow with extra steps. The definition of utilitarianism as about "increasing the overall well-being", taken to mean doing what's moral, resolves to just "do what's moral".

And of course, since well-being is quite open-ended, we also get to pick and choose whether telling the truth, or not hurting the other's feelings, or not causing some other secondary issue, or helping someone over somebody else, or several other things are more important in each case. In the end, we just do what we rationalize to do.

As per the examples, in the end, we can find whatever justifications we want to conclude that some X course of action is the "utilitarian" one. Two people can even come to the exact opposite X as the proper "utilitarian" action, even starting from the same moral principles.

In the end, utilitarianism and other such logics are an attempt to simplify and reduce the complexity of life, and the decisions our moral agency has to make in each case, to some mechanistic rules and syllogisms.

“The definition of utilitarianism as about "increasing the overall well-being", taken to mean doing what's moral, resolves to just "do what's moral".”

Right, but even if utilitarianism is in fact what’s moral, “do what’s moral” doesn’t analytically entail “act according to utilitarianism”.

Not too inclined towards utilitarianism in general (although this is partly because analytic philosophy often flies over my head :P), but I've always had some trouble seeing the difference between rule-utilitarianism and deontology (philosophy noob ... if only that prevented me from pontificating: https://thelurkingophelia.substack.com/p/musings-on-morality). At least, I think there's usually significant overlap in those schools of thought.

Couldn't you still be a rule-utilitarian in this situation? If you were to apply the dilemma as a general moral rule (to deceive or not to deceive), I think there could be an argument that public distrust would be disastrous for businesses, lowering overall welfare. Even with deontology, if an act's morality is contingent upon its suitability as a golden rule / categorical imperative, doesn't one still need to factor in whether they'd want to live in a world where the act is the rule?

Sorry if this is completely incoherent!

Edit: I forgot that Bentham was the utilitarian philosopher, and thought that this was you sundering your friendship with BB--thank goodness for being wrong...

I think JS Mill would still consider you to be a utilitarian based on your experience in the restaurant, and he would approve of your qualms.

I don’t know if you read Mill as a rule-utilitarian, but if you read him as an act-utilitarian like me, surely he would’ve approved of the fact that I had the qualms but thought — given that I understand utilitarianism/can conceptually separate what a useful moral heuristic recommends from what I ought to do in a given situation — that my qualms were misfiring in this case, and that I had no reason to respect them *for reasons having to do with the particular customers I’d be lying to*.

I'm thinking of stuff like this

"if it may possibly be doubted whether a noble character is always the happier for its nobleness, there can be no doubt that it makes other people happier, and that the world in general is immensely a gainer by it. Utilitarianism, therefore, could only attain its end by the general cultivation of nobleness of character, even if each individual were only benefited by the nobleness of others, and his own, so far as happiness is concerned, were a sheer deduction from the benefit. "

I don't think he would approve of you letting your moral standards slip because the bigger utility is in having people of good character in society.

This sounds a little like some bastard offspring of rule utilitarianism and virtue ethics…
