Thanks for stopping by! If you want to support me—a lowly, lowly student dressed in sackcloth and ashes, tears in my eyes, etc.—consider upgrading your subscription and growing Awol’s Army! Paid subscriptions are the best defence against a time-gobbling side gig, freeing me up to publish more. My current prices, following a 7-day free trial, are: $6 per month, $60 per year, or—if you really want to be in my good books—you can become a founding member with a one-time payment of $150 or more! Now, on with the show…
Classical utilitarianism is a slurry of silly talk, a basket of bunkum, a fog of flapdoodle.
Less polemically, I’m reasonably confident that utilitarianism — hedonic act-utilitarianism, anyway — is false, and I’ve felt this way for some time.
Utilitarianism is the view that one always ought to promote overall wellbeing1. Hedonic act-utilitarianism, in particular, says the rightness of an action depends on whether it best promotes overall wellbeing, where wellbeing = pleasure and ill-being = pain. (Rule-utilitarianism, in contrast, assesses acts on the basis of whether they conform to a moral rule which — if internalised by the vast majority of people — would produce the best outcomes. For the sake of snappy prose, I’ll cut the ‘act’ and refer just to ‘utilitarianism’, even though hedonic act-utilitarianism is my target. Rule-utilitarianism is a cool theory, and deserves a post of its own.)
Over the years — from overexposure to a certain Substack, among others — I’ve warmed up to utilitarianism much more than I’d care to admit; still, when I yank my head out of the clouds, it seems clear to me that utilitarianism can’t be true.

I remember the exact moment that I turned my back on Bentham.
In the summer before my first term at Oxford, I worked as a Turkish waiter. Despite appearances, I’m not Turkish by blood or culture; but the restaurant was Turkish and I was a waiter, so I think that more or less counts.
After a while, I got pretty chummy with my boss, a nice man we’ll call Semir. One day, in between lunch and dinner hours when the restaurant was almost vacant, Semir jokingly told me that the meat wasn’t really halal, despite what he’d had me tell customers.
These fucking Muslims, he laughed. How can they expect halal meat here — there isn’t a halal butcher for miles!2
I tried suggesting that we shouldn’t trick Muslims into violating what they take to be an important religious precept, but he laughed it off, telling me to give the ‘Is this halal?’ questions to him if I wasn’t comfortable lying.
As he straightened chairs in the back half of the restaurant, I did the same in the front half, thinking about what to do. It struck me that I was in a moral dilemma that divided consequentialism from deontology.
If I lied, or passed the lying duties off to Semir, there would be no bad consequences in expectation. B-b-but, the usual cope proceeds, Muslim customers might find out and be unhappy and come after the restaurant that lied to them!
Sure, they might, but that wasn’t going to happen: there was no feasible way that Muslim customers would find out, and no feasible story on which this low-profile restaurant would come under fire if they did.
My intuitions had nothing to do with beliefs about what might happen in the 1% chance of X: they had to do with the obvious wrongness of tricking devout religious people into unintentionally violating their religious commitments, entirely for the sake of profit.
When Semir came over to chat to me, I told him I was quitting, because I couldn’t lie about the meat and I couldn’t co-operate with his lying about it either.
If actual-result act-consequentialism is true — that is, if the rightness of an action depends on its actual consequences, regardless of what the actor knew or intended — then I did the right thing: Semir didn’t want to lose me, probably because I’m extremely handsome and have a Substack worthy of Olympus, so he promised not to lie about the meat.
But I couldn’t have known that in advance: in expectation, walking out looked like a bad bet, since I thought, correctly, that it could cost me up to £2,000 in forfeited wages.
It turns out that he wasn’t lying about that promise. The day after, a Muslim food-blogger came in, asking if he could bring his family later, blog about the food on Instagram, and oh, is the meat halal.
Since Semir wasn’t in, the head chef handled the request, and told him yes: halal, everything! (Semir hadn’t circulated the ‘we’re not lying anymore’ memo to the other staff.)
When Semir came back, I told him what had happened, and asked if he’d tell the food-blogger the truth when he came in, and if — pushing my luck here — I could be there when he did.
Semir agreed right away, and did exactly as he’d promised. (The food-blogger didn’t really care, it turned out, so his family enjoyed a nice meal, and he didn’t cancel us afterwards.)
There are non-crazy reasons to adopt utilitarianism. Deontology — even in its weakest forms — confronts a number of gnarly paradoxes, and I have no clue what to say about most of them. Still, I haven’t thought about them too much, and the best ones haven’t been around for very long. I’m optimistic that the worst deontic paradoxes will go the way of most paradoxes, and solutions will be found that deontologists can adopt without unbearable cost.
Still, when I consider the standard counterexamples to utilitarianism — especially examples involving treachery, like the one above — it seems clear that it’s the wrong way up the mountain.
1. At least when ‘ought’ is in the moral register. One could be a utilitarian — in principle — but think there are reason-giving oughts besides moral ones (prudential oughts, aesthetic oughts), and that the moral oughts needn’t always outweigh the others in terms of what an agent has most reason to do.

2. Paraphrasing.