My favourite thing about university is the philosophy conferences: five times a week or thereabouts, philosophers come to Oxford to present their latest work, and—if they’re lucky—have it flame-grilled by Timothy Williamson, who roams from talk to talk, dropping the decisive objection to each one.
Oddly, almost no undergraduates go to these talks. I go because I like to people-watch. (For context, philosophers are the cutest of all God’s creatures—their mannerisms send me over the edge.) Often, I go to talks on topics I know I won’t be able to understand just to sit in on the Q&A. When the objections start, I decide who’s wrong based on whose body language is more dominant and whose replies most deferential.
One talk, I can’t remember how long ago, significantly undermined my confidence in moral realism: the view that there are stance-independent moral truths. (Or, put differently, the view that there are objective, mind-independent moral facts, facts like “torture is wrong” or “pleasure is good”. According to moral realists like me, these claims would be true even if everyone in the world denied them, and felt passionately that they were wrong.)
The talk had nothing to do with moral realism: it was about the ‘problem of moral luck’. Moral luck is one of the gnarliest problems in ethical theory. To get a handle on the problem, suppose you’re driving drunk, swerve to the right, and run over a little girl named Peter. Obviously, in that case, you did something severely wrong, and as a result, you’re severely blameworthy.
Now suppose you drive drunk, swerve right, but don’t run over Peter (by luck, the road was empty). Intuitively, while you’re still blameworthy for taking the moral risk of drunk driving, you aren’t as blameworthy as you would’ve been had you actually mowed Peter down.
But whether Peter happens to be in the road is something beyond your control—a matter of luck. And intuitively, it seems right that you can only be blameworthy or praiseworthy for things that are under your control.
Therein lies the puzzle. On the one hand, we want to say you’re less blameworthy if you don’t mow down Peter than if you do. On the other, we want to say that events beyond your control—events like a child wandering into the road—can’t affect your degree of blameworthiness in principle. But these intuitions collide headlong, just as you collided headlong into Peter, you disgraceful bastard. Hence, the problem of moral luck.
Back in the day, I was happy to bite the bullet and concede that you’re just as blameworthy in both cases, whether or not you run over a child. The principle that your degree of blameworthiness can’t be affected by facts beyond your control (what philosophers call the ‘Control Principle’) just seemed too obvious to give up.
But then came Anna Nyman. In a talk titled “Moral Principles: A Challenge for Deniers of Moral Luck” (the fruits of which have now been published in the journal Ergo), Nyman argued that the Control Principle faces a potent counter-example: namely, moral principles themselves.
Consider: if it’s wrong to drive drunk, it’s going to be wrong because it violates some moral principle. But when you chose to drive drunk, you didn’t also choose which moral principles are true. Such matters are beyond your control. But the truth of a moral principle can obviously affect your degree of blameworthiness or praiseworthiness, since its truth is the thing in virtue of which an action is blameworthy or praiseworthy.
Hence, we have a counter-example to the Control Principle on our hands, one that looks pretty devastating. The Control Principle says that facts beyond our control can never affect our degree of moral responsibility—the degree to which we’re blameworthy and praiseworthy. But moral principles seem to do just that. Hence, the Control Principle is false. Moral luck, to some degree, exists—at least if morality does.
I won’t defend Nyman’s argument here. (If you’re curious, check out the paper I linked above. It covers many of the objections you might already be cooking up—e.g., that the Control Principle should only apply to contingent things beyond our control, while moral principles are necessary and fixed.) All I’m saying is I found it persuasive, and can’t think of a good reply.
In light of Nyman’s argument, I’m stuck with the following two ‘seemings’:
1. The Control Principle seems obviously true; a core and indispensable feature of morality.
2. The Control Principle seems false, because it faces a devastating counterexample.
These seemings are at loggerheads. They can’t both carry the day. A moral realist, like me, should try to resolve these warring seemings. But right now, I don’t know how to do that.
A moral anti-realist, by contrast—someone who denies that there are objective, mind-independent moral truths—should feel no need to resolve these conflicting seemings. If moral statements are all either false, implicitly subjective (“killing babies is wrong for me”), or merely expressions of emotion (“I hate baby-killing!”), we’d have no reason to expect our core moral intuitions to cohere with each other.
Put differently, if moral realism were true, we’d expect morality to be internally consistent (just as we’d expect mathematics to be internally consistent if mathematical realism were true). In contrast, if moral anti-realism were true, we should have no such expectation. If moral utterances are devoid of propositional content, only subjectively true or false, or just objectively false, there’d be no reason to expect the moral principles we endorse to hang together coherently.
Hence, whenever we’re faced with a seemingly intractable ethical puzzle—like the problem of moral luck—the apparent incoherence of our deepest-seated moral intuitions is evidence for moral anti-realism over moral realism, since anti-realism predicts this apparent incoherence better.
The worry goes beyond moral luck. Philosophers have been struggling with seemingly intractable moral paradoxes, and inventing new ones, for decades. These include: the mere addition paradox, the non-identity problem, paradoxes of moderate deontology, paradoxes of demandingness, paradoxes of infinite ethics, and so on. On a meta-level, there has been no substantial convergence of opinion on how to solve these paradoxes. All are alive and kicking. And on a paradox-specific level, though there isn’t time to examine any of these paradoxes individually, many of them strike me as deeply troubling. Each of them consists of innocent-looking premises that are extremely hard to reject.
The realist seems committed to denying at least one extremely plausible moral proposition, in each of these paradoxes, in order to hang on to the other extremely plausible moral propositions. But the anti-realist has a simpler story: none of the propositions in question are stance-independently true, so there were no real paradoxes to start with. The existence of apparently unsolvable moral paradoxes is only surprising on moral realism; on anti-realism, this is more or less par for the course.
I think this is the best argument for anti-realism, and I don’t have a satisfying response to it. The satisfying response, of course, would be to satisfactorily solve every moral paradox. But that’s a very tall order, and I don’t know how to do it. (Nor, I think, do you, though the odds you’ll solve all these paradoxes are significantly higher than the odds of the average person, since you’re the kind of person who reads Going Awol—a freakishly smart, extremely clever person, with looks that could fix a broken mirror.)
So, why aren’t I an anti-realist? First, the moral puzzles that puzzle me most tend to be the ones I’ve thought/read about least. Of the ones I’ve thought about most (e.g., the non-identity problem), I’ve come to accept solutions that satisfy me reasonably well. This gives me some hope that the other puzzles are solvable too.
Second, the argument has what J. L. Mackie called “companions in guilt”. That is, if this argument should lead us to anti-realism about the moral realm, it should also lead us to accept anti-realism about other domains that seem even more indispensable than morality. Take logic. For every moral paradox you can name, you can name an equally troublesome logical paradox—the Sorites Paradox, the Liar Paradox, Curry’s Paradox, etc.—to which logicians lack an agreed-upon solution. But while these paradoxes have led logicians to endorse radically counter-intuitive principles, or even to revise classical logic altogether, few respond by denying that there are stance-independent logical truths. Provisionally, I think we should take the same approach to morality.1
When I raised this point to Christopher Cowie (who’s writing a book defending this argument, and has a good paper on it here), he said he was open to the idea that the same reasoning could be applied to logic, time, the infinite, and other domains about which there seem to be deeply intractable paradoxes.
You say realism is the view that there are stance-independent moral truths. But if that's *all* it is, it's very hard to see why you'd expect it to have any consequences concerning the consistency of our moral intuitions.
Rather, it seems to me you're implicitly building into realism not just metaphysics, but epistemology too: there are stance-independent moral truths, *and* moral reasoning is a (somewhat?) reliable way of learning those truths. Maybe you think those are a natural package--why have realist metaphysics without optimistic epistemology?
Still, I think this is an important distinction because some familiar anti-realist strategies focus on just this gap. E.g., Sharon Street style reasoning involves arguing that if realism is true, then we shouldn't expect moral reasoning to be a good way of learning about the stance-independent truths. And in my view it's natural to see earlier anti-realist arguments (e.g., Harman) as mining a similar vein. (Very roughly, he argues that stance-independent moral truths form no part of the explanations for why we end up holding the moral views we do.)
Once we draw this distinction, it also makes me wonder why you don't take the appearance of irresolvable moral paradoxes to count primarily in favor of realism + skepticism--i.e., there are stance-independent moral truths, but we're pretty hopeless at figuring out which ones they are--rather than taking it to support the bigger move to anti-realism.
You’ve built a Cartesian element into your realism. Just as Descartes thought that god wouldn’t deceive us about our clear and distinct perceptions, you seem to think that the world wouldn’t give us inconsistent intuitions. It’s a dubious assumption, but a very common one among realists and rarely acknowledged.