1) If you are not too superstitious – I almost am – imagine for a moment that you suffer from cancer. Imagine that you do not yet know if the course of treatment you have undergone will save you or not. You sit down at your doctor’s desk, all tense, aware that at this point there might be only interim news – indications that a good or a bad outcome is likely. The doctor starts with “well, there are reasons to be optimistic”. Though you are still very tense, you perk up and you feel warm and light all over. You ask what the reasons are. In response, the doctor hands you a piece of paper: ironclad scientific results showing that optimism is good for the health of cancer patients.
Your heart sinks. You feel like a victim of the cruelest jest. I’ll stop here.
Some philosophers would regard what happens to you as being provided with practical reasons to believe (that everything will turn out alright) when one desperately hoped for epistemic reasons to believe (that everything will turn out alright). I, as per my previous post, think what happens is being provided with practical reasons to make oneself believe that everything will turn out alright (take an optimism course, etc.) when one desperately hoped for epistemic reasons to believe that everything will turn out alright – I think the doctor says a false sentence to you when she says there are reasons to be optimistic. For today's purposes it does not matter which explanation is the correct one. The point is that when we theorize about epistemic reasons and putative practical reasons to believe, the philosopher's love of symmetry can draw us towards emphasizing similarities between them (not to mention views like Susanna Rinard's, on which all reasons are practical), but any theory of practical and epistemic reasons heading in that direction needs a way to explain the Sinking Heart intuition – the fact that in some situations, receiving practical reasons when one wants epistemic reasons is so incredibly disappointing, so beside the point, that even if the practical reasons are really, really good, the best ever, it is absurd to expect that any comfort, even nominal comfort, be thereby provided to the seeker of epistemic reasons. What the doctor seems to offer and what she delivers seem incredibly different. The apparent depth of this difference needs to be addressed by anyone who emphasizes the idea that a reason is a reason is a reason.
To move away from extreme situations, anyone who is prone to anxiety attacks knows how $@#%! maddening it is when a more cheerful spouse or friend tells you that it's silly of you to worry and then, when your ears perk up, hoping for good news you have overlooked, gives you the following reason not to worry: "there is absolutely nothing you can DO about it at this point". Attention, well-meaning cheerful friends and partners the world over: the phrase "there is nothing you can do about it" never helps a card-carrying anxious person worry less. I mean never. It makes us worry more, as it points out quasi-epistemic reasons to be more anxious. "There's nothing you can do about it now" is bad news, and being offered bad news as a reason to calm down seems patently laughable to us. Thank you! At the basis of this common communication glitch is, I suspect, the fact that practical advice on cognitive matters is only useful to the extent that you are the kind of person who can manipulate her cognitive apparatus consciously and with relative ease (I'm thinking of things like steering one's mind away from dwelling on certain topics). The card-carrying worrier lacks that ability, to the point that the whole idea of practical advice about cognitive matters isn't salient to her. As a result, a statement like "it's silly of you to worry" raises in her the hope of hearing epistemic reasons not to worry, which is then painfully dashed.
2) A quick thought about believing at will. When people asked me why I think beliefs are not voluntary in the way that actions are, I used to answer in a banal way by referring to a (by now) banal thought experiment: if offered a billion dollars to believe that two plus two equals five, or threatened with death unless I believe that two plus two equals five, I would not be able to do it at will. A standard answer to the banal thought experiment points out that actions one has overwhelming reasons not to do can also be like that. Jumping in front of a bus would be voluntary, but most people feel they can't do it at will. But I think this answer, in a way that one would not expect from those who like the idea of practical reasons to believe, greatly overestimates how much people care about the truth. Let me set aside the "two plus two" trope, as not believing that two plus two equals four probably requires a terrifying sort of mental disability, and take a more minor false belief – say, that all marigolds are perennial. If offered a billion dollars, I would still not be able to believe at will that all marigolds are perennial. To say that this is like the bus case would pay me a compliment I cannot accept. One has a hard time jumping in front of a bus because one really, really does not want to be hit by a bus. One wants very badly to avoid such a fate. Do I want to believe nothing but the truth as badly as I want to avoid being hit by a bus? Do I want to believe nothing but the truth so badly that I'd rather turn down a billion dollars than have a minor false belief? Ok, a false and irrational belief. Do I care about truth and rationality so much that I'd turn down a billion dollars not to have a false and irrational belief? Nope. Alternatively, one might hold that jumping in front of a bus is so hard because evolution made it frightening. But am I so scared of believing that all marigolds are perennial? No, not really.
I am sure I have plenty of comparable beliefs and I manage. I'd rather have the money. I would believe at will if I could. It's just that I can't. We are brokenhearted when we get practical reasons when we want epistemic reasons, but the reason isn't that we are as noble as all that.