Epistemic Life is Unfair!

So you are considering quitting your secure middle-class job and going to Tahiti to become a painter. You have a strong hunch that once you go there, you’ll flourish as an artist and produce truly great work. Let’s take morality out of the equation: you are not deserting dependents. You are just considering a high risk of bankruptcy and, just as bad as far as you are concerned, ridicule. If you go to Tahiti, are you being rational in so doing?

Bernard Williams suggests that there is no fact of the matter until you have already gone to Tahiti and succeeded or failed. That is, in some respects, a truly scary idea. I have proposed an idea which is not as scary, but might be, to some philosophers, more annoying: there is a fact of the matter, but you, the agent in the story, can’t know it – not before you succeed or fail and, in many circumstances, not afterwards either. When I say you can’t know it, I do mean to imply that no theory of rationality can guide you into this kind of knowledge. This, however, does not mean that there can’t be a good theory of whether, given that certain beliefs, desires, emotions, etc. are in your head, you would be rational or not in going to Tahiti. If I know what’s in your mind – perhaps because I am an omniscient narrator and I made you up – then I do, given the right theory, know before you leave the house whether or not you are being rational. Since you are not akratic in this story, the more precise question is likely whether your action is based on an irrational belief about your talent or the chances that the journey to Tahiti would help.

A rule that tells you not to start a career as a painter unless you are reasonably convinced that you are a great painter, says Williams, would be pretty much unusable. To continue his thought: it would be unusable because being “reasonably convinced” is indistinguishable, in terms of how it feels to the agent, from being unreasonably convinced. Even the best artists are not reliable or rational witnesses to the quality of works they produce, and being convinced of your greatness through wishful thinking – perhaps intertwined with some midlife crisis and being sick of your job – does not always feel any different from being convinced rationally. It is in the nature of epistemic irrationality – for the moment, let’s stick to epistemic irrationality – that there are limits on your ability to know if you are irrational or not, to the point that sometimes it’s simply impossible for you to know it. Think about the sort of irrationality that originates in depression or anxiety or insecurity, the sort that originates in intoxication or sleep deprivation, the sort that originates in schizophrenia. Take depression as an example:

Tristan: I am a terrible person.

You: Why?

Tristan: I forgot to buy milk today.

You: That doesn’t make you a horrible person.

Tristan: You are just saying it to be nice.

You: My roommate also forgot to buy milk yesterday. Does it mean she is a terrible person?

Tristan: No!

You: Well, then –

Tristan: I don’t know your roommate. She is probably just fine. But given all I know about myself, forgetting the milk is just a symptom of how horrible a person I am.

You: What you know about yourself? Like what?

Tristan: I used a horrible mixed metaphor in pro-seminar today. It was embarrassing. I am clearly wasting the money of the people paying for my fellowship. I should stop committing this crime.

You: You are a really good student. All your teachers say so.

Tristan: They are wrong. No, seriously, I have given a lot of thought to that.

You can argue till the cows come home, but Tristan is, as far as he can tell, “reasonably convinced” that he is an all-around horrible person and a failure at all he does. He has thought about it a lot. Advice along the lines of “do not quit the program unless you are reasonably convinced that you are not a good student” would be wasted on him.

There are, to be sure, some heuristics that improve a moderately irrational person’s chances of diagnosing herself. A lovely Eastern European saying I was taught as a child was “if three people say you are drunk, go to sleep!”. I am sure the saying has rescued some people who knew it from major debacles. It also failed to rescue many others who knew it: perhaps they were so drunk they could no longer count to three, or perhaps they were merely tipsy enough to think “Oh, yes, three people say I’m drunk, sure, but Yuan and Liz always agree with each other, so they really should count as the same person, right?”. So basically I’m saying that no putative “rational agent’s manual” can be expected to guarantee its follower rational belief (and thus, action based on rational belief) because it cannot guarantee that the agent won’t be drunk, or depressed, or any number of other things that can sneak up on her, at the time she consults the manual.

So, I’m worried about the claim that all epistemic norms need to be “follow-able” or that when they are unfollow-able to one, one is not to be charged with irrationality for not, well, following them. One reason I decline to adopt the bright shiny new expression “epistemically blameworthy” in place of the dry-as-dust, old-style expression “epistemically irrational” is that it obscures an unfortunate Williams-esque fact: epistemic life is unfair. Epistemic irrationality is both a failure to respond to reasons and a predicament that can be forced on one – say by putting a drug in one’s coffee or by taking away the prescription drug one usually puts in one’s coffee. We feel compassion for Tristan and do not, hopefully, “blame” him for anything, as his condition is “not his fault”, but we do treat the reasoning implied in “I forgot to buy milk so I’m a terrible person” as flawed and as a symptom of irrationality.

Some would find it disturbing – not just annoying – to think that unfairness is implied by epistemic norms. But should it really be so disturbing? It shouldn’t be remotely as disturbing as a suggestion that unfairness is implied by moral norms. The connection between fairness and morality is pre-theoretical and intuitive, at least in the sense that people would agree that being fair is part of being moral, an unfair action is immoral, and fairness is particularly important when it comes to punishment and other actions related to blame, as in moral blame. It “just seems” unfair to say that something is ever both (morally) blameworthy and a predicament that isn’t the agent’s doing (“not her fault”), and you don’t need to be a philosopher to think that. On the other hand, the idea that it is always unfair to say that something is both epistemically irrational and not the agent’s doing is an idea rarely spotted in the wild, a postulate of (only some) sophisticated theories of normativity that require that epistemology and ethics be similar, analogous, with isomorphic components: blame here and blame there. Non-philosophers would raise their eyebrows at the sentence “it’s not her fault she is blameworthy”, but “it’s not his fault that he is irrational” would seem fine to them. The asymmetry that bothers some theorists won’t normally be an issue for them. Judgments of irrationality can be “fair” or “unfair” in the sense of “accurate” or “inaccurate”, or in the sense of “biased” or “unbiased”, but when we say Tristan is irrational, even though he didn’t bring his depression about, we are not being unfair – we are just pointing out that life is.

P.S. I think one complication is that one ultimately needs to distinguish rationality from intelligence, and to distinguish drugs that promote or impair the one from drugs that promote or impair the other. A 13-year-old is mostly smarter than a 10-year-old, but less rational. See: https://theviewfromtheowlsroost.com/2017/08/13/raw-reflections-on-rationality-and-intelligence-plus-two-cat-pictures/

P.P.S. Can “epistemically blameworthy” be a good title for a person who neglects to google, go to the library, or deliberate long enough as she tries to figure something out? After all, she neglected to do something that was under her control. Well, I can see why one might want to use the term this way, but I think deep down the problem with her is that she is practically irrational in her search for knowledge. See: https://theviewfromtheowlsroost.com/2017/10/29/epistemology-and-sandwiches/

Epistemology and Sandwiches

To Sophie Horowitz we owe the following question: if having enough blood sugar contributes to our cognitive abilities, does it mean that we sometimes have an epistemic duty to eat a sandwich?

The last three posts in Owl’s Roost concerned some reasons to think that there are no practical reasons to believe and no duties of any sort to believe (or conclude, or become less confident, etc) – at least if we interpret “duty” in a way that’s friendly to deontologists in ethics. But never mind practical reasons to believe.  Can there be epistemic reasons to act? Or, for that matter, epistemic duties to act? For example, deliberating is an action, something that you can intentionally do. One can also intentionally act to review the evidence in a complicated case, to ask experts for their opinions, or to check the origins of that article forwarded to one on Facebook that claims that cats cause schizophrenia. Do we have epistemic duties and epistemic reasons to do these things?

If we want money, there are things that we have reasons – practical reasons – to do. Call these practical reasons “financial reasons”. Similarly, there are things that we have reasons – practical reasons – to do if we want to be healthy. Call these practical reasons “health reasons”. “Health reasons” are practical reasons that apply to people whose goal is health, “financial reasons” are practical reasons that apply to people whose goal is money. Now, suppose your goal is not health, or wealth, or taking a trip on the Trans-Siberian Railroad, or breeding Ocicats in all 12 of the official colors, or doing well on the philosophy job market without losing your mind. Your goal happens to be knowledge – in general or in a specific topic. Or maybe your goal is to be epistemically rational – you dislike superstition, wishful thinking, paranoia, and so on – or to have justified beliefs if you can. Just like health, wealth, train riding or Ocicat breeding, knowledge and justified beliefs are goals that give rise to practical reasons. Practical reasons to deliberate well, to review evidence, to avoid some news outlets, and even, at times, to eat a sandwich. Can these be called “epistemic reasons”? Yes, but in a sense parallel to “financial reasons” and “health reasons”, not in a sense that contrasts them with practical reasons. Is “eat a sandwich when hunger clouds your cognitive capacities” an epistemic norm? Yes and no. Yes in the sense that “make sure to clean your brushes when you paint pictures” is an aesthetic norm, and no in the sense that “orange doesn’t go with purple” is an aesthetic norm. Cleaning one’s brushes is conducive to creating beauty but it’s not in itself beautiful. To the extent that one wants to create beauty using paintbrushes, one usually has a reason to clean them. To the extent that one wants to come up with nice valid arguments in one’s paper, one has a reason to eat enough before writing. That does not make eating that sandwich epistemically rational in the same sense that believing what the evidence suggests is epistemically rational. Eating that sandwich is a way to make yourself epistemically rational, which happens to be what you want. In short, the adjective “epistemic”, now added to a larger number of nouns than ever before in the history of philosophy, can signify a type of normativity, a kind of reason, or it can signify a type of object to which a norm applies, the stuff that a reason is concerned with, which happens to be knowledge or belief rather than money, paintings, Ocicats, cabbages or kings. I think the distinction needs to be kept in mind.

So… “Epistemic duties to act” is just another name for “practical reasons that one has to act if one is after knowledge or justified belief”. Or is it really? Some might argue that there might be epistemic duties that do not depend on what one is after. Knowledge, truth, justified belief, or epistemic rationality are, say, objectively good things and one has a duty to pursue them – we’re talking something resembling a categorical imperative, not a hypothetical one. Perhaps we should seek knowledge, or at least some intrinsically valuable kinds thereof, regardless of what we happen to want. But “one should seek knowledge”, as well as “one should seek epistemic rationality” and “one should seek the truth”, are only epistemic norms in the sense that they happen to be about knowledge, rationality, etc. They are not different in kind from the practical directives “one should seek beauty”, “one should seek pleasure”, “one should seek honor”, and so on. Why one might want to seek knowledge is an issue for epistemologists, but it is also an issue for old-fashioned ethicists, theorists of the good life, who try to figure out what, in general, is worth pursuing. It makes sense to say that Sherlock Holmes, who refuses to pursue any knowledge that isn’t directly relevant to the cases he encounters as a detective, is missing out on something good, or on a part of the good life, and it makes sense to say (though I am not the type to say it) that he is irrational in refusing to pursue more knowledge. But to say that he is thereby epistemically irrational is odd. He is Holmes. He is as epistemically rational as it gets. If he is irrational, it’s a practical irrationality – albeit not in the colloquial sense of “practical” – in refusing to pursue episteme outside the realm of crime detection.

Epistemic Norms Aren’t Duties, Epistemic Irrationality Isn’t Blame

The words “duty” and “blame” can be used in many ways. You can blame the computer for an evening ruined by a computer failure, but computers are not morally blameworthy. You can talk about Christa Schroeder, Hitler’s secretary, performing her duties, but morally speaking her duty was – what? To work somewhere else? To become a spy? To screw up her work as much as possible? When I say that there is no epistemic blame and there are no epistemic duties, I mean to say that epistemic norms behave very differently from moral duties as deontologists talk about them, and epistemic irrationality behaves very differently from moral blame as free will/moral psychology people talk about it. I do not intend to deny that epistemology is normative, but that it is normative does not imply that for every ethical concept there is an epistemological concept that is exactly isomorphic to it.

I talked in previous posts about why I think there are no practical reasons to believe, though there can be practical reasons to make yourself believe. At this point I will say nothing about duties to make yourself believe things and stick to putative duties to believe or not believe things – say, not to believe contradictions.

The thing about deontology is that you get an A for effort. Suppose, for example, Violet has a duty to return Kiyoshi’s book, which she promised to give him back on March 16th. However, a large snow storm causes her flight to be cancelled, and despite all her efforts, she can only get back to Kiyoshi on the 17th. Any Kantian will hold that even though the book does not get back to its owner on time, Violet’s good will shines like a jewel as long as she has tried. Some would say that ought implies can and therefore Violet did nothing wrong. She discharged her duty. Others would say that she has done wrong, but is exempt from blame.

Compare that to an epistemic norm: one ought not believe contradictions. Suppose Violet, who is in my intro ethics class, tries hard not to believe contradictions (my opening spiel on the first day of class often includes a reference to the importance of not believing them). She tries especially hard with regard to the class material, which she studies feverishly, rehashes in office hours, etc. Still, in the final paper, she writes sincerely, in response to one question, that all morality is relative to culture and, in response to another, that murder is “absolutely” wrong, regardless of circumstances. Violet’s valiant efforts not to believe contradictions do not result in her getting an A for effort – not literally, in my class, and not figuratively, vis-à-vis epistemic norms. If it is epistemically irrational to believe contradictions, Violet is irrational in believing one, regardless of how hard she tries not to be. She is not the least bit less irrational because of her efforts.

It might seem that epistemic norms too grant As for effort, because they do sometimes grant an A for a process of responding to reasons that results in a false belief. For example, a person I met at a conference guessed quickly, plausibly and wrongly that my name is Turkish. The error makes sense – there is a place in Turkey called Arpali, and the man made an inference from that – and, assuming that the man’s degree of credence was proportional to evidence, he does get “an A” for his reasoning or for his response to epistemic reasons. But despite the tempting analogy, it’s important that the A is not literally for effort. As it happened, no effort was involved – the man’s guess seems to have come to him quickly. He made a good inference, and it doesn’t matter whether it came to him through effort or not – in fact, some might regard him as more epistemically virtuous because no effort was needed. On the other hand, if, instead of making a good guess, he were to stand there and wrinkle his forehead and come up with a guess that makes no sense at all, no amount of effort on his part would make us treat his inference as less bad.

It is an interesting question whether the effort thing is a problem for epistemic virtue talk, as opposed to duty talk. While trying hard to return a book may discharge your duty to return it, trying hard to acquire the virtue of courage, say, does not mean that you have the virtue of courage, and trying to act generously does not automatically imply acting generously. Virtue ethics does not generally give As for effort (whether it gives some credit for effort is a different question).

Here is a related asymmetry regarding blame and the charge of epistemic irrationality. Suppose Anna attacks her professor because she believes, during a psychotic episode, that he is the devil. Anna is exempt from moral blame: “you can’t blame her, she was psychotic”. She is not exempt from the charge of epistemic irrationality that can be leveled at her belief that the professor is the devil. (“She’s not epistemically irrational, she is psychotic”? Doesn’t work. Having a psychotic episode is one way of being irrational).

Someone might ask: Ok, so we don’t have duties to believe, or not to believe,  or to infer, but don’t we have other epistemic duties – say, a duty to deliberate well, or to avoid watching fake news if we can help it? Can’t we incur epistemic blame if we fail to discharge these duties? Ok, to be continued. I mean it.

Don’t Worry, Be Practical: Two Raw Ideas and a PSA

1) If you are not too superstitious – I almost am – imagine for a moment that you suffer from cancer. Imagine that you do not yet know if the course of treatment you have undergone will save you or not.  You sit down at your doctor’s desk, all tense, aware that at this point there might be only interim news – indications that a good or a bad outcome is likely. The doctor starts with “well, there are reasons to be optimistic”. Though you are still very tense, you perk up and you feel warm and light all over. You ask what the reasons are. In response, the doctor hands you a piece of paper: ironclad scientific results showing that optimism is good for the health of cancer patients.

Your heart sinks. You feel like a victim of the cruelest jest. I’ll stop here.

Some philosophers would regard what happens to you as being provided with practical reasons to believe (that everything will turn out alright) when one desperately hoped for epistemic reasons to believe (that everything will turn out alright). I, as per my previous post, think what happens is being provided with practical reasons to make oneself believe that everything will turn out alright (take an optimism course, etc.) when one desperately hoped for epistemic reasons to believe that everything will turn out alright – I think the doctor says a false sentence to you when she says there are reasons to be optimistic. For the purpose of today it does not matter which explanation is the correct one. The point is that when we theorize about epistemic reasons and putative practical reasons to believe, the philosopher’s love of symmetry can draw us towards emphasizing similarities between them (not to mention views like Susanna Rinard’s, on which all reasons are practical), but any theory of practical and epistemic reasons heading in that direction needs a way to explain the Sinking Heart intuition – the fact that in some situations, receiving practical reasons when one wants epistemic reasons is so incredibly disappointing, so not to the point, and even if the practical reasons are really, really good, the best ever, it is absurd to expect that any comfort, even nominal comfort, be thereby provided to the seeker of epistemic reasons. What the doctor seems to offer and what she delivers seem incredibly different. The seeming depth of this difference needs to be addressed by anyone who emphasizes the idea that a reason is a reason is a reason.

To move away from extreme situations, anyone who is prone to anxiety attacks knows how $@#%! maddening it is when a more cheerful spouse or friend tells you that it’s silly of you to worry and then, when your ears perk up, hoping for good news you have overlooked, gives you the following reason not to worry: “there is absolutely nothing you can DO about it at this point”. Attention, well-meaning cheerful friends and partners the world over: the phrase “there is nothing you can do about it” never helps a card-carrying anxious person worry less. I mean never. It makes us worry more, as it points out quasi-epistemic reasons to be more anxious. “There’s nothing you can do about it now” is bad news, and being offered bad news as a reason to calm down seems patently laughable to us. Thank you! At the basis of this common communication glitch is, I suspect, the fact that practical advice on cognitive matters is only useful to the extent that you are the kind of person who can manipulate her cognitive apparatus consciously and with relative ease (I’m thinking of things like steering one’s mind away from dwelling on certain topics). The card-carrying worrier lacks that ability, to the point that the whole idea of practical advice about cognitive matters isn’t salient to her. As a result, a statement like “it’s silly of you to worry” raises in her the hope of hearing epistemic reasons not to worry, which is then painfully dashed.

2) A quick thought about believing at will. When people asked me why I think beliefs are not voluntary in the way that actions are, I used to answer in a banal way by referring to a (by now) banal thought experiment: if offered a billion dollars to believe that two plus two equals five, or threatened with death unless I believe that two plus two equals five, I would not be able to do it at will. A standard answer to the banal thought experiment points out that actions one has overwhelming reasons not to do can also be like that. Jumping in front of a bus would be voluntary, but most people feel they can’t do it at will. But I think this answer, in a way that one would not expect from those who like the idea of practical reasons to believe, greatly overestimates how much people care about the truth. Let me take back the “two plus two” trope, as not believing that two plus two equals four probably requires a terrifying sort of mental disability, and take a more minor false belief – say, that all marigolds are perennial. If offered a billion dollars, I would still not be able to believe at will that all marigolds are perennial. To say that this is like the bus case would pay me a compliment I cannot accept. One has a hard time jumping in front of a bus because one really, really does not want to be hit by a bus. One wants very badly to avoid such a fate. Do I want to believe nothing but the truth as badly as I want to avoid being hit by a bus? Do I want to believe nothing but the truth so badly that I’d rather turn down a billion dollars than have a minor false belief? Ok, a false and irrational belief. Do I care about truth and rationality so much that I’d turn down a billion dollars not to have a false and irrational belief? Nope. Alternately, one might hold that jumping in front of a bus is so hard because evolution made it frightening. But am I so scared of believing that all marigolds are perennials? No, not really. I am sure I have plenty of comparable beliefs and I manage. I’d rather have the money. I would believe at will if I could. It’s just that I can’t. We are brokenhearted when we get practical reasons when we want epistemic reasons, but the reason isn’t that we are as noble as all that.

Beliefs, Erections and Tears, Or: Where are my Credence-Lowering Pills?

People who are not philosophers sometimes “come to believe” (“I’ve come to believe that cutting taxes is not the way to go”) or “start to think” (“I’m starting to think that Jolene isn’t going to show up”). Philosophers “form beliefs”. People who are not philosophers sometimes say “I have read the report, and now I’m less confident that the project will succeed”, and sometimes write “reading the report lowered my confidence in the success of the project”. Philosophers say “I have read the report and lowered my credence that the project will succeed”.

In other words, philosophers talk about routine changes in credence as if they were voluntary: I, the agent, am doing the “forming” and the “lowering”. That is kind of curious, because most mainstream epistemologists do not think beliefs are voluntary. Some think they sometimes are – perhaps in cases of self-deception, Pascal’s wager, and so on – but most take them to be non-voluntary by default. Very, very few hold that just like I decided to write now, I also decided to believe that my cats are sleeping by my computer. Yet if I were their example, they would say “Nomy looks at her desk and forms the belief that her cats are sleeping by the computer”. If I say anything to them about this choice of words, they say it’s just a convenient way to speak.

John Heil, wishing to be ecumenical in “Believing What One Ought”, says that even if we think that belief is not, strictly speaking, voluntary, we can agree that it is a “harmless shorthand” to talk about belief as if it is voluntary (talk about it using the language of action, decision, responsibility, etc). Why? Because it is generally possible for a person to take indirect steps to “eventually” bring about the formation of a belief.

OK then – so erections are voluntary! Let’s use the language of action, decision, and responsibility in talking about them! No, seriously. It is possible for a man to take indirect voluntary steps to bring it about that he has an erection. And yet, if I teach aliens or children an introduction to human nature, I do something terribly misleading if I talk about erections as if they were voluntary.

Another example: suppose Nomy is a sentimental sop and hearing a certain song can reliably bring her to tears. Heck, just playing the song in her head can bring her to tears. There are steps Nomy can take to cause herself to burst out in tears. Still, again, in that Intro Human Nature course one cannot talk about outbursts of tears as if they were voluntary without being misleading.

Last, would any self-respecting action theorist say that the phrase ‘she raised her arm’ can be a ‘harmless shorthand’ way to describe a person who caused her arm to jerk upwards by hooking it to an electrode? Or, worse, a “harmless shorthand” by which to describe a person whose arm jerked due to a type of seizure but who easily could have caused such jerking through using electrodes or prevented it through taking a medication?

In all of these cases, a “shorthand” would not be harmless – for two reasons. The more banal one is that when eliciting intuitions about birds, you don’t want the ostrich to be your paradigm case. Most erections, outbursts of tears, and arm-convulsions are not the result of intentional steps taken to bring them about, and there is a limit to what we can learn about them from the few cases that are. The same is true for most beliefs. Many of my beliefs are the result of no intentional action at all – my belief that the cats are here came into being as soon as they showed up in my field of vision, my belief that there are no rats in Alberta came into being as soon as Tim Schroeder told me – and other beliefs I have are the result of deliberation, which is an action, but not an intentional step taken to bring about a particular belief. (So my belief that the ontological argument fails was not the result of intentional steps taken to make myself believe that the ontological argument fails.) Whatever intuitions one might have about non-paradigmatic cases, like Pascal’s wager, could easily fail to apply to the many cases in which no step-taking has preceded a belief.

But to talk of erections and tears as if they were voluntary is also dangerous for a deeper reason, a reason that has nothing to do with the frequency of indirect-steps cases. Even if the majority of tears, erections, or convulsions were indirectly self-induced, there is still a difference between the action taken to induce the erection, tears or seizure and the thing that results from the action. If the former is voluntary, that alone doesn’t make the latter voluntary. Similarly, even if we routinely had the option of making ourselves believe something through the pressing of a button, and we routinely took advantage of this option, there would still be a difference between the act of pressing the button – an action – and the state of believing itself. If we make “pressing the button” a mental action – analogous to intentionally calling to mind images that are likely to produce an erection or an outburst of tears – it hardly matters: the mental action that produces a belief would still be different from its outcome.

Why does it matter? Because we only have practical reasons to do things voluntarily. It seems quite clear that, on those occasions, whatever their number, in which we can, in fact, for real, form a belief, there can be practical reasons to form that belief or not to form it, and it seems only slightly less clear that sometimes it could be rational to form a belief that clashes with the evidence. This, however, is taken to mean that there are practical reasons to believe. I am working on a paper arguing that this does not work.  We have practical reasons to take steps to bring a belief about. We sometimes have practical reasons to make ourselves believe something, but that’s not the same as practical reasons to believe it. No such thing. Category mistake. This is where one could ask: why not call reasons to create a belief in oneself “reasons to believe?” Isn’t that a harmless shorthand?

I don’t think so. I agree that “reasons to stay out of debt” is a nice shorthand for “reasons not to perform actions that can lead to one’s being in debt and to perform actions that are conducive to staying out of it”, but while “reasons to stay out of debt” just are reasons for various actions and inactions, you can have “reasons to believe” something without any implied reasons for any actions or inactions. Jamal’s being out of debt is an instance of rationality (or response to reasons) on his part iff Jamal’s being out of debt is the intended result of a rational course of action taken by Jamal. Gianna’s belief that Israelis don’t speak Yiddish can be perfectly rational (as in, there for good reasons) even if it’s not the result of any rational course of action taken by her. Perhaps the belief occurred in her mind as the result of unintentionally hearing a reliable person’s testimony, no action required, or was the result of an irrational action like reading Wikipedia while driving; it can still be as rational a belief as they come. When we say “reasons to believe” it is not normally a shorthand for “reasons to make oneself believe”, and so to shorthand “reasons to make oneself believe” to “reasons to believe” is not harmless, but very confusing. To be continued.

Philosophy: Truth or Dare?

Many years ago I was having a long chat with someone who later became a well-known philosopher. His work was already way cool, but looking at the theses he defended, I told him he must be aiming for the Annual David Lewis Award for Best-Defended Very Weird View. He told me that he did not always believe the views he defended. He was most interested in seeing how far he could go defending an original, counter-intuitive proposition as well as he could. What did I think? I said that it seems to me that some philosophers seek the Truth but others choose Dare.

I am more of a Truth Philosopher than a Dare Philosopher, but I doubt it’s a matter of principle, given that my personality is skewed towards candor. I’m just not a natural for writing things in which I don’t have high credence at the time of writing. However, if you are human, should you ever have high credence in a view like, say, compatibilism, which has, for a long time, been on one side of not only a peer disagreement but a veritable peer staring contest? Looking at it from one angle, the mind boggles at the hubris.

Zach Barnett, a Brown graduate student, has been working on this recently and has a related paper in Mind. I asked him to write about it for Owl’s Roost and he obliged. Here goes:

I want to discuss a certain dilemma that we truth-philosophers seem to face. The dilemma arises when we consider disagreement-based worries about the epistemic status of our controversial philosophical beliefs. For example:

Conciliationism: Believing in the face of disagreement is not justified – given that certain conditions are met.

Applicability: Many/most disagreements in philosophy do meet the relevant conditions.

————————————————————————————————

No Rational Belief: Many/most of our philosophically controversial beliefs are not rational.

Both premises of this argument are, of course, controversial. But suppose they’re correct. How troubling should we find this conclusion? One’s answer may depend on the type of philosopher one is. 

The dare-philosopher needn’t be troubled at all. She might think of philosophy as akin to formal debate: We choose a side, somehow or other, and defend it as well as we can manage. Belief in one’s views is nowhere required.

The truth-philosopher, however, might find the debate analogy uncomfortable. If we all viewed philosophy this way, it might seem to her that something important would be missing – namely, the sincerity with which many of us advocate for our preferred positions. She might protest: “When I do philosophy, I’m not just ‘playing the game.’ I really mean it!”

At this point, it is tempting to think – provided No Rational Belief is really true – that the truth-philosopher is just stuck: If she believes her views, she is irrational; if she withholds belief, then her views will lack a form of sincerity she deems valuable.

As someone who identifies with this concern for sincerity, I find the dilemma gripping. But I’d like to explore a way out. Perhaps the requisite sort of sincerity doesn’t require belief. An analogy helps to illustrate what I have in mind.

Logic Team: You’re on a five-player logic team. The team is to be given a logic problem with possible answers p and not-p. There is one minute allotted for each player to work out the problem alone, followed by a ten-second voting phase, during which team members vote one by one. The answer favored by a majority of your team is submitted.

      Initially, you arrive at p. During the voting phase, your teammate Vi – who, in the past, has been more reliable than you on problems like this one – votes first, for not-p. You’re next. Which way should you vote?

Based on your knowledge of Vi’s stellar past performance, you might suspect that you made a mistake on this occasion. Perhaps you will cease to believe that your original answer is correct. Indeed, you might well become more confident of Vi’s answer than you are of your own.

It doesn’t follow, though, that you should vote for Vi’s answer of not-p. If all you care about is the accuracy of your team’s verdict, it may still be better to vote for your original answer of p.

Why? In short, the explanation of this fact is that there is some value in having team members reach independent verdicts. To the extent that team members defer to the best player, independence is diminished. This relates to a phenomenon known as “wisdom of the crowd,” and it relates more directly to Condorcet’s Jury Theorem. But all of this, while interesting, is beside the point.
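To make the independence point concrete, consider a toy calculation (the 70% and 80% accuracy figures below are merely assumed for illustration; they are not from the Logic Team case itself):

```python
# Toy sketch: why keeping votes independent can beat having everyone
# defer to the single most reliable teammate. Accuracy numbers are invented.
from math import comb

def majority_accuracy(p: float, n: int) -> float:
    """Probability that a majority of n independent voters, each correct
    with probability p, gets the right answer (n odd)."""
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k)
               for k in range(n // 2 + 1, n + 1))

# Five teammates, each right 70% of the time, voting independently:
print(round(majority_accuracy(0.7, 5), 3))  # ~0.837

# If everyone simply defers to Vi, who is right 80% of the time,
# the team's verdict is only as accurate as Vi alone: 0.8 < 0.837.
```

On these made-up numbers, five middling but independent voters outperform the single best voter – a Condorcet-style reason not to simply echo Vi.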

In light of the above observations, suppose that you do decide to vote for your original answer, despite not having much confidence in it. Still, there is an important kind of sincerity associated with your vote: in a certain sense, p seems right to you; your thinking led you there; and, if you were asked to justify your answer, you’d have something direct to say in its defense. (In defense of not-p, you could only appeal to the fact that Vi voted for it.) So you retain a kind of sincere attachment to your original answer, even though you do not believe, all things considered, that it is correct.

To put the point more generally: In at least some collaborative, truth-seeking settings, it can make sense for a person to put forward a view she does not believe, and moreover, her commitment can still be sincere, in an important sense. Do these points hold for philosophy, too? I’m inclined to think so. Consider an example.

Turning Tide: You find physicalism more compelling than its rivals (e.g. dualism). The arguments in favor seem persuasive; you are unmoved by the objections. Physicalism also happens to be the dominant view.

      Later, the philosophical tide turns in favor of dualism. Perhaps new arguments are devised; perhaps the familiar objections to physicalism simply gain traction. You remain unimpressed. The new arguments for dualism seem weak; the old objections to physicalism continue to seem as defective to you as ever. 

Given the setup, it seems clear that you’re a sincere physicalist at all points of this story. But let’s add content to the case: You’re extremely epistemically humble and have great respect for the philosophers of mind/metaphysics of your day. All things considered, you come to consider dualism more likely than physicalism, as it becomes the dominant view. Still, this doesn’t seem to me to undermine the sincerity of your commitment to physicalism. What matters isn’t your all-things-considered level of confidence, but rather, how things sit with you, when you think about the matter directly (i.e. setting aside facts about relative popularity of the different views). When you confront the issues this way, physicalism seems clearly right to you. In philosophy, too, sincerity does not seem to require belief (or high confidence).

In sum, perhaps it is true that we cannot rationally believe our controversial views in philosophy. Still, when we think through the controversial issues directly, certain views may strike us as most compelling. Our connection to these views will bear certain hallmarks of sincerity: the views will seem right to us; our thinking will have led us to them; and, we will typically have something to say in their defense. These are the views we should advocate and identify with – at least, if we value sincerity. 

I find the proposed picture of philosophy attractive. It offers us a way of doing philosophy that is immune to worries from disagreement, while allowing for a kind of sincerity that seems worth preserving. As an added bonus, it might even make us collectively more accurate, in the long run.

That was Zach Barnett. Do I agree with him? As is usual when I talk to conciliationists, I don’t know what to think!

Raw Reflections on Rationality and Intelligence + Two Cat Pictures

So I have two cats. One is a British Shorthair named after the very English Philippa Foot. The other is an Ocicat named Catullus, after Gaius Valerius Catullus, an ancient Roman poet some of whose stuff is decidedly Not Safe For Work. I often refer to the two as “the irrational animals” – as in “the irrational animals are hungry” or “thank you for taking care of the irrational animals” – but I suspect this is just an Aristotelian slur. They are probably more rational than I am, though I am surely smarter.

Can you be smarter but less rational? I hear epistemologists talk as if you can’t. But you can, easily. Consider a mentally healthy child of 11. Imagine the same child at 14. She has gotten smarter, but probably less rational.
