1. Introduction
Classical Utilitarianism (which I’ll call “utilitarianism”, because brevity = longevity, and I like a good abreve) says that what each of us has most reason to do, morally, is act in ways that produce the greatest amount of happiness and the least amount of pain, and that we should do so no matter what. So utilitarians are of the view that we should give more to charity, stop throwing rocks at children, and kill one person if doing so would save five others from death.
Utilitarianism has many critics. Among the most important was the late Bernard Williams—Oxford philosopher, White’s Professor of Moral Philosophy, and sayer of insightful things about morality.
In his famous debate book with J. J. C. Smart, Utilitarianism: For and Against, Williams argued that utilitarianism implausibly downplays the moral value of integrity—roughly, the unified sense of self that comes from following one’s values and living out one’s deepest aspirations—and that this makes the theory untenable.
Here, I’m going to flesh out Williams’ Integrity Objection, and question its rational integrity.
2. The Integrity Objection
To get a hold of the Integrity Objection, it helps to start—as Williams does—with two cases.
The first concerns a chemist, George, who receives a job offer from a factory manufacturing chemical weapons. George is morally opposed to chemical warfare, but he knows that if he declines the job, another person will take it and perform it more efficiently, so that more people will end up dying.
The second concerns Jim, a botanist who arrives in the central square of a South American town. Twenty Indians are tied up against a wall. If Jim kills one of them, he’s told, the other nineteen will live. If he does not, all twenty will be killed by the government.
Utilitarianism says George should take the job and Jim should kill the Indian. Williams does not dispute these results. (Indeed, he thinks that in Jim’s case, “the utilitarian is probably right” [1].) The objection Williams draws from these cases is not an objection to the answers utilitarianism gives about them, but to the way it arrives at those answers, and to what it ignores along the way.
In these cases, it isn’t obvious what Jim and George should do. On the one hand, if George takes the job and Jim kills the prisoner, both will do a stonking amount of good: the results will be better for many and worse only for Jim and George.
On the other, George taking the job (even though he’s strongly opposed to chemical warfare) and Jim killing the prisoner (even though he’s strongly opposed to killing the innocent) compromise both men’s integrity. A plausible moral theory, Williams thinks, will take both values into account—integrity and consequences—and give them both a lot of weight.
But utilitarianism gives almost no weight to Jim and George’s integrity. The value of Jim and George’s actions aligning with their deepest moral convictions, core values, and aspirations amounts only to the pleasure they take in doing so. And in cases like the two described, the hedonic benefits of integrity are completely swamped by the hedonic benefits of the utilitarian decision. Utilitarianism, Williams claims, is committed to saying that the right responses to these cases are obvious. But, he contends, they are not.
That’s the first aspect of the objection: Utilitarianism says that in cases where pursuing our most important projects and living in accordance with our deepest values comes at the cost of severely negative consequences, it’s obvious that we ought to abandon our values and projects. Williams thinks it isn’t.
The second aspect, building on the first, is that this reveals utilitarianism to be absurdly demanding. It may be permissible—even praiseworthy—to jettison one’s core values and deepest aspirations to benefit others. But the correct moral theory shouldn’t say that it’s immoral to pursue one’s core values and live out one’s opposition to chemical weapons in cases where the utility of not doing so is ever so slightly higher.
Thus, writes Williams: “It is absurd to demand of such a man, when the sums come in from the utility network which the projects of others have in part determined, that he should just step aside from his own project and decision and acknowledge the decision which utilitarian calculation requires” [2].
That is the Integrity Objection. Utilitarianism affords no value to the pursuit of one’s fundamental projects and the maintenance of one’s moral integrity besides hedonic value. And it demands that we abandon our deepest aspirations and give up our attempts to live in tandem with our core values in cases like those of Jim and George. All of which, says Williams, is implausible. The correct moral theory would not imply such things.
3. Three Qualifications
It’s worth making three qualifications about what the Integrity Objection does and doesn’t say.
First, when Williams distinguishes between the decision demanded by a person’s projects and the decision demanded by the utilitarian calculus, he doesn’t say that these are essentially distinct. It could be that your projects are all about being the World’s Best Utilitarian. The point is that you could legitimately have other projects besides this, and pursue them even at the expense of maximizing utility.
Second, when Williams defends the moral importance of our deepest aspirations, he doesn’t mean any and all aspirations, no matter how trivial. The desire to walk to work with spotless white shoes is an aspiration. It involves a plan for the future. But an aspiration like that wouldn’t give you the right to walk past a drowning child. The aspirations Williams cares about are the kinds of deep-seated projects that shape our identity and give our lives meaning.
Third, by ‘integrity’, Williams doesn’t just mean moral integrity, even though his examples may give this impression. Rather, as Elizabeth Ashford explains, by ‘integrity’ “Williams is referring to its classical meaning of ‘wholeness,’ and is using it to denote the agent's unified sense of self. Giving up ground projects with which the agent is identified would result in a degree of psychological fragmentation.” [3] The relevant sense of psychological fragmentation is that which results from acting in accordance with a set of external dictates that run contrary to one’s character, values, or fundamental projects.
4. Objection Time (Two of Them!)
The first dubious thing Williams says is that utilitarianism implies that it’s obvious that George should take the job and Jim should kill the prisoner. As Roger Crisp writes of the Objection: “Williams suggests that utilitarians—act utilitarians, that is—would say not only that George should take the job and that Jim should shoot the Indian, but that it is obvious that this is so.” [4]
To this, I reply: there’s a difference between utilitarianism obviously implying something, and utilitarianism implying that something is obvious. Obviously, utilitarianism implies that George should take the job and Jim should kill the prisoner. But that doesn’t mean utilitarianism implies that these verdicts are obvious.
Utilitarianism—to be the best moral theory—need not imply that it itself is obviously true, nor that all its implications are obvious. No moral theory is obviously true, so it’s unfair to demand that a theory only imply things that are obvious.
This is but a minor hitch, specific to Williams’ presentation of the argument. The Integrity Objection, to get off the ground, doesn’t need the claim that utilitarianism’s truth implies that what it says about Jim and George should be obvious. All it needs is the claim that utilitarianism devalues George and Jim’s integrity to an implausible degree.
My other concern with Williams’ argument—and this is the big one—lies in the examples he uses to motivate the objection. Both are cases in which the internal reasons the agents have for not making the utilitarian decision are concerns about violating a deontic principle. In George’s case, the deontic principle he wants to adhere to is that one should not be complicit in evil, even if one’s complicity is causally irrelevant. In Jim’s case, it’s the principle that one shouldn’t kill one person to prevent more killings.
But, of course, the utilitarian thinks these principles are false to begin with. The utilitarian’s claim is precisely that neither principle is true, even if they seem true to Jim and George. In a similar fashion to Williams, we could construct parody problem cases for the deontologist in which an agent, Gerry, has utilitarian intuitions about a case, but in which the deontologist is committed to saying that Gerry should act against them.
In a case like that, the deontologist should say the following: Perhaps, if it really seems on reflection to Gerry that the utilitarian course of action is right, Gerry subjectively ought to take the utilitarian course of action, even if the action is objectively wrong. Gerry can only be blamed for doing the wrong thing in expectation; if Gerry is sincerely wrong about the moral facts, he is morally blameless, even if the decision he makes is objectively wrong.
Utilitarians should say the same about Jim and George. If it really seems to them that taking the utilitarian action would violate a deontic principle they are wrongly (but blamelessly) committed to, then they are morally off the hook for refusing to take it—even if that refusal is objectively wrong.
Williams’s examples muddy the waters. They needlessly run together our intuitions about what Jim and George subjectively ought to do and our intuitions about what they objectively ought to do. When we ask ourselves “What should Jim and George do?”, we end up thinking about both questions at once. This feature of the cases clouds our intuitions about them, making them imprecise and less revealing.
Fortunately, this is, again, a problem specific to Williams’ presentation—it’s not a general problem for the Integrity Objection. The objection can be made with different examples, which don’t trade on this ambiguity.
For instance, consider an example where the relevant aspiration is not about adhering to a deontic principle, but about pursuing a passion. Suppose Jackie has a choice between working as a fundraiser for the Against Malaria Foundation (a job she knows she’ll hate) and working as a close-up magician (a job in which she’ll do less good, but that’s all she’s ever wanted to do).
In Jackie’s case, Williams can argue that utilitarianism unacceptably ignores the value of Jackie’s projects, and absurdly claims that she is morally required to take the job at the charity instead. He doesn’t need intuition-confusing examples to make his case.
But now, when we switch the examples, the objection loses a touch of its force. It’s not quite as obvious that Morality doesn’t require Jackie to sacrifice some of her career satisfaction to save a large number of lives.
This seems especially unobvious when you look at things from the perspective of the people Jackie could save. From their perspective, and not just Jackie’s, it seems a bit less wild to think she is required to give up on her career goals, given that doing so would save all those people’s lives.
Obviously, the claim that Jackie should be a fundraiser rather than a magician, even if you accept it, doesn’t get you to full-on, utilitarian-level demandingness. The Jackie case is one in which human lives are at stake. The utilitarian also has to say that if Jackie hates comedy, but would be much more successful as a comedian than as a magician, and would be able to cheer up thousands of people every night doing that, she ought to put the cards aside and become a comic. And that seems pretty wild.
But all things considered, if you strip out the intuition-confounding variables that Williams needlessly introduces, the utilitarian position doesn’t seem as bonkers as it seemed at the start. It still seems demanding, to an unintuitive degree, but things aren’t as bad as Williams makes out.
[1] Williams, B. 1973. “A Critique of Utilitarianism”. In Smart, J. J. C., and Williams, B., Utilitarianism: For and Against. Cambridge: Cambridge University Press, p. 117.
[2] Ibid., p. 116.
[3] Ashford, E. 2000. “Utilitarianism, Integrity, and Partiality”. Journal of Philosophy 98(8), p. 422.
[4] Crisp, R. 1997. Mill on Utilitarianism. Oxford: Routledge, p. 138.
I think that in the Jackie case it just seems totally obvious that Jackie should not be a magician. It seems obvious that one should sacrifice one’s career goals to save many lives. Here are a few thought experiments to buttress this.
1) Suppose Jackie were on her way to a dream magician job and came across drowning children whom she could save. It seems she’d be required to save them. If we accept Singer’s reasoning, there’s no relevant difference between this case and the original one (proximity, for example, doesn’t matter).
2) The following principle, given by Singer, is plausible: one should prevent very bad things from happening if one can do so without sacrificing anything of comparable moral value. But a nice job is not of comparable value to multiple lives.
3) If we’re scalar utilitarians, we don’t think there are firm obligations at all. Thus, Jackie wouldn’t be obligated to take the fundraising job; she’d just have far stronger reasons to take it, and would be morally better if she did. But that seems plausible: of course Jackie has more reason to save multiple lives than to land a cushy magician job.
4) When we imagine things from the perspective of the victims, the non-demanding morality seems horrifyingly callous. Compare “morality requires you to give up your career to save multiple lives” with “morality requires that you die, along with your entire family, six friends, and nine other people, so that Jackie can achieve her aspiration to be a magician.” The only reason utilitarianism seems so demanding is that it places demands on the wealthy and affluent few. But we’d expect the richest 0.1% globally to face significant moral demands if they can save literally hundreds of lives.