I subscribed to Bentham's Bulldog, and it's ruined my life. None of my friends reply to me anymore. Everything smells bad. I haven't had an erection in 3 weeks
Here's my best try at arguing against:
If aether accurately predicts the behavior of light & gravity, then any belief "light will go from [x] to [y] successfully" or "celestial body [a] will do [b]" (because of aether) is justified & true—but we're not gonna have a great time trying to do space travel or discover any more things about the universe. Having a *good, true model for why something happens* (knowledge) isn't important to the truth of this particular belief—but it is important to inform all the next beliefs that I could/should hold.
To your analogy: if I accidentally subscribed to Bentham's Newsletter, I might go back and try to subscribe to you some other way. (And again, hold the JTB that my inbox will be filled.) But, if the substack genie is still substack genie-ing, I would accidentally subscribe to Bentham and then think about killing myself again... The knowledge that a genie is getting in the way is very clearly important for all my future actions.
Thanks for this! Wouldn't a justified true belief that a genie is getting in the way be just as action-guiding? (If you complicate the scenario further with another Gettier-case, I imagine I'll make the same move again.) More generally, I don't see why you couldn't Gettier the "good, true model for why something happens" analysis of knowledge in a way that doesn't allow me to make my argument again.
I think this works much better in a correspondence theory of truth, but not so much in a coherence framework, where sets of “knowledge holdings” condition other bits of data, groupings of data, and models of the world that *abstract away* said data, in interaction with the larger system. The Duhem-Quine thesis comes to mind here.
Your example is just as interesting, if in an epistemically low-stakes sort of way: if I subscribed to Bentham’s Substack instead of yours, *how* I am wrong is just as important as that I am wrong at the particular decision point. Did I subscribe in error to a totally random Substack? Is yours connected to his? If you’re both smart-asses, then that would certainly condition the flavor of my having made “a mistake”. If so, how? Etc. Knowing that I was wrong as a result of a practical joke you two colluded on is surely more epistemically valuable than merely knowing that I was wrong, and would govern how I course-correct. It might turn out that the correct answer ends up being: don’t go back and correctly subscribe to yours, but instead just follow Gwyneth Paltrow, who would never lie to me.
Knowledge seems to be this higher-order, spanning understanding that puts all of these “good enough” justified true beliefs into actions that roll into other actions, and begin to satisfy the ask of Duhem-Quine.
If the world of engagement consists of pre-established cows and pre-established fields with pre-established roles, then the farmer getting lucky is good enough to keep doing his chores. But that hardly pertains to the way we feel our way through interconnected belief points in a system full of ambiguities just beyond that next cow pasture.
So... it's Gettier cases all the way down? That seems fine to me, but also, why shouldn't we call it knowledge at that point? If you have some JTB and some true explanation for that JTB, even if the explanation is a Gettier-case (i.e., not knowledge), the JTB itself is 'known'.
If a Gettier-case is only troubling because a JTB's explanation is false, does it matter why a true explanation is true? Why can't a JTB be satisfactorily explained by a Gettier-case?
I think they should expel you for heresy.
I'm worried about a slight tension between caring about blamelessness and not caring about getting lucky; in particular, it seems blameworthy to not care about getting lucky.
Here's one sloppy way of sketching this. Suppose I just care (primarily) about beliefs being true, which makes me care instrumentally about (say) acting like someone who forms true beliefs. The latter looks like blamelessness! For instance, it excuses error in cases where you're tricked. But it also condemns skeptics who luckily avoid error in those cases. So that's one reason we might care secondarily about blamelessness, by caring primarily about truth.
But whatever it means to care about being safe from error (such that not caring is interesting and heterodox, which I think you're after), this secondary norm (roughly, justification) seems to require you to care, insofar as it's conducive to complying with the primary norm (truth). Roughly, this view seems to want to self-efface in favor of a knowledge norm, since accepting a knowledge norm is more truth-conducive than accepting a truth norm. Repent!
Now, you might think that an ersatz knowledge norm is okay — care about knowledge just insofar as it's conducive to JTB — but a full-blooded one isn't. ('Granted, you don’t want to intentionally leave your beliefs up to luck. You want to do the best you can.') But then it's hard for me to see why we shouldn't also accept only an ersatz justification norm, as sketched above — care about blamelessness just insofar as it's conducive to truth. If I reject error-safety as liberal fantasy, I also become disillusioned with blamelessness.
Here's an analogy: suppose I promise to be at the fountain outside the philosophy building at 3:00 PM. My phone's time zone glitched back to Eastern Time, so I think it's only 10:00 AM, but I happen to walk by (say, on my way to a maths lecture in the building next door). I might care about being at the fountain, and being blameless with respect to promise-keeping (i.e., acting like someone who keeps promises); but to not care about situations where I merely luck into technically fulfilling my promise (i.e., I don't even really *keep* my promise) seems blameworthy in the relevant / analogous way. Of course, I could solve this by not caring about blamelessness anymore, but it's better to just care that I *keep* my promises (rather than just that they happen to be fulfilled, or caring about that plus blamelessness), and still say that lucky cases are better than nothing.
(Again, sloppy, so don't hold me to this!)
(Set aside the 'liberal fantasy' paragraph, in light of that other thread!)
“Roughly, this view seems to want to self-efface in favor of a knowledge norm, since accepting a knowledge norm is more truth-conducive than accepting a truth norm. Repent!”
What if I don’t want to repent.
So, as you anticipated, I agree that a truth-only norm will want to self-efface into a knowledge norm, but my claim is that it’ll want just as badly to self-efface into a truth + justification norm, which is the norm I was defending.
Why not then accept an ersatz justification norm? Idk, I’m inclined to say because either one or both of the following are non-instrumentally desirable: (a) respecting deontic constraints on belief-formation, or (b) being epistemically virtuous. (Haven’t thought much about how to cash these out, like I said to Daniel. [I know you said to ignore this part of your comment but typing the reasoning out helps me think, so sue me.]). You might also have my view and think an ersatz justification norm is fine, that an ersatz knowledge norm is fine, but that a truth norm + an ersatz justification norm is simpler than an ersatz justification norm. But if you’re a knowledge-firster you won’t buy that, so I default to the other stuff I said.
On promises: I wonder if I just don’t have the relevant intuition… Suppose I rock up to the fountain and learn that I got the right consequences and didn’t do anything blameworthy in the process: is there anything about the past you think it’s fitting for me to regret or think was bad? (WRT states of affairs, not the subjective aspect of my decision-making.) I don’t have the intuition (yet) that there is. But if there isn’t, then in what sense should I care about promise-keeping *itself* (as opposed to doing my best + getting the desired result)? Can you say more to prime me for this intuition you have?
'You might also have my view and think an ersatz justification norm is fine, that an ersatz knowledge norm is fine, but that a truth norm + [a direct] justification norm is simpler than an ersatz justification norm.' — Is this what was intended? If so, then everything you've said seems right (and I won't buy this because I think the latter is simpler than the former).
Anyway, yeah, the tension is more awkward-middle-spot than outright contradiction. But I think (1) you should buy an ersatz knowledge/error-safety norm (because the blamelessness norm says to buy it), and then (2) if your norms are truth + blamelessness (+ ersatz knowledge/error-safety), it's simpler to just go with a knowledge-norm (which entails truth, and gets ersatz blamelessness).
Right, on the value of keeping a promise, above just doing your best + it being fulfilled (I think the desired result here is *keeping* the promise, not just fulfilling it, but of course you won't accept that): I think it's bad that I was in very significant danger of breaking my promise, even through no fault of my own, in a similar way that it would be bad for me to almost get hit by a runaway bus. I think there's something important in being able to say 'I would not break a promise', or 'If I wasn't going to be here, I wouldn't have promised' (where I'm understanding 'would' as a local necessity operator, roughly 'in all contextually relevant cases'; Williamson defends this in detail, and Ichikawa has applied it in the case of knowledge.) (This might be a bit far afield, but I think the value in non-domination might just be safety from interference, above simple non-interference — don't know if that helps.) But if someone decides upon reflection that they just don't care about unrealized danger, I don't think I could do much.
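For concreteness, here's a rough way to write down the 'would' reading I have in mind (just a sketch of the Williamson-style gloss, with R(c) as a made-up label for the set of contextually relevant cases):

\[
\text{``I would not break the promise''} \;\approx\; \forall w \in R(c):\ \neg\,\mathrm{Broken}(w)
\]

On that reading the fountain case comes out badly, since plenty of cases in R(c) are ones where I never happen to walk past the fountain and the promise gets broken.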
1. It's not what I originally had in mind, but it's a different route to (a less interesting version of) my conclusion. (Also, I typed something wrong. What I meant to say is "a truth norm + [a direct] justification norm is simpler than an ersatz *knowledge* norm" not "a truth norm + [a direct] justification norm is simpler than an ersatz *justification* norm." You'll disagree with that because you don't think knowledge is complex, so that argument's not for you.)
2. OK, so I think we've teased out the disagreement: I'm not sold on the idea that a pure risk of failing to get to the fountain on time, or a pure risk of missing out on the truth, is undesirable, bad, or worth caring about, whereas you think these pure risks are undesirable, bad, or worth caring about. Does that sound right?
(1) Ah, that makes much more sense.
(2) Yeah, that's right. I do worry that this might be something I'm pushed to by theory (propaganda), though — I don't have super strong convictions here (except when I think about the safety/sensitivity 'would' sentences: I feel fairly strongly that you should care about these intrinsically, and doing your best wrt promises requires caring about these).
Nice! Going to eat dinner, will think about this after
Either Gettier is wrong or he is trivially correct. If true beliefs can be divorced from the reliability of the process by which they are apprehended, then they are not justified. But if justification requires perfect certainty for arriving at the belief, then JTBs are very few and generally not interesting. For example, I could never have a JTB that Bentham's Bulldog crushes Going Awol in any head-to-head comparison, for perhaps an evil Russian hacker is constantly switching which posts show up under which account. There is always a chance my causal map of justification has some contingency in it.
So I am less interested in JTBs and more interested in probabilistically justified inductions, because that's the limit of my ability to discuss how trashy Going Awol is.
Another view: JTB is overrated, and instead of talking about knowledge we’re better served talking in terms of how good our explanations are and how to make better ones. Usually with progress we don’t reach a state of knowledge, we just err less often. The real epistemological key is error correction.
Justification is overrated. If you’re right, who cares about the rest? (My two-sentence imitation of Zagzebski’s swamping problem).
How about: 'Justification seems important because (either) not violating deontic constraints on epistemic practice seems important, or because epistemic virtue seems important' - is that implausible?
Well then you’d need to explain why those are important. And that explanation probably can’t be a fancy version of ‘because they are truth-apt,’ since the problem will just recur.
I don't have an explanation (haven't thought about it). Can't I just say they're intrinsically valuable with no deeper explanation, same as I want to do with pleasure?
Started writing my long comment before this thread appeared and deprecated [a part of] it, can't believe I got pre-moved like that :(
This doesn't seem like a particularly difficult bullet to bite IMO, at least if you flesh out the distinction by way of devices like "in your next reincarnated life, would you rather have all true but very subtly unjustified beliefs, or would you rather have all justified but mostly false beliefs?" Arguably most people would choose the former; certainly I would.
That said, the more common answer you might receive is that adhering to certain justificatory epistemic norms is (we believe) a lot more likely to help one get to the primary thing one actually cares about, namely, truth. You could try to set up some parallel with knowledge here, but I don't think that'll work, insofar as it's not getting you anything "extra." If Alice wants JTB and Bob wants knowledge, ceteris paribus I don't think they're going to end up disagreeing about anything aside from (trivially) certain abstract epistemic questions about knowledge. They'll both end up adjusting their beliefs in the same ways once they discover they're in Gettier-type scenarios and the like.
This is all well and good, but my issue is that "Justified True Belief Is Power" does not fit onto a t-shirt nearly as easily, and that's a very big problem with your argument 👕👺
To go further, is justified true belief so important? Perhaps justified belief is good enough.
I mean, epistemic virtue and truth seem somewhat important!
Isn't the game given away with the admission that "you don’t want to intentionally leave your beliefs up to luck"? If you don't AIM at (merely) justified true beliefs, what do you aim at? The natural answer is: knowledge. That's certainly a way of "caring" about knowledge!
Your account here seems to depend on an internalist view of justification: you are concerned with whether your belief-producing mechanisms are "blameworthy or criticisable," but not with whether they are "faulty or unreliable." From a first-person perspective, that seems appropriate, but if a third party is wondering whether to trust Amos Wollen's claims, it matters a lot whether AW's beliefs are produced by reliable mechanisms.
The Gettier problem is closely related to the problem of "moral luck." Discussions of both problems tend to introduce many of the same misunderstandings, because people fail to distinguish between first-person and third-person meanings of "should."
Did Plato prove that knowledge isn't justified true belief in the Theaetetus?
Did he? I thought he accepted that analysis
Plato says knowledge is ‘true belief with an account of the reason why.’ Unclear if that maps to the 20th-century JTB concept. And I think it’s in the Meno.
Well, I had a random memory about a super complicated text after rolling out of bed, so yes, certainly.