Reflections on the Concept of Mental Disorder

“Nomy, our purpose here is to help you become just like everyone else”.

That’s what the school counselor told me when I was a kid. And then another school counselor. And then another. I am not paraphrasing, or dramatizing, or anything. Translating from Hebrew to English is all. They all referred my parents to psychologists and psychiatrists, who would help me even better toward this goal of being like everyone else, which they firmly assumed I shared, no matter what I said. Little wonder, then, that by the time I became a teenager, I was certain that the concept of a mental disorder was nothing but a tool of oppression used against unusual people by those who want everyone to become just like everyone else.

(PSA: if you have a child who is beaten up by the other kids because she’s reading Great Expectations at 11, like I did, or because she looks unusual, or because of some incidental vagary of child social dynamics, or even because she has bad social skills, do think carefully before sending her to some kind of shrink. You need to make sure your child does not get the message “the other kids beat me up because there is something wrong with me, and my parents agree, so they are sending me to be fixed”.)

Years later I had to give up my Szasz-ian extremism, because depression, along with hypomania and anxiety, threatened to kill me. Slowly it dawned on me that while the professionals of my childhood were wrong to try to cure me of reading Great Expectations, there was a case for calling some things mental disorders. Seeing my roommate react with fear and trembling to a small spider provided one datum: there was no way that her suffering was “socially constructed” in the English department sense of the term. It was real, and the term “disorder” seemed to fit it. It also seemed to fit my depression, hypomania and anxiety.

So what is a mental disorder, then? I knew what I wanted cured: my suffering. So, was I going to call any extended mental state or brain state constituting or leading to significant distress a mental disorder? That used to be pretty close to the DSM definition, and many shrinks will still tell you that if it causes either distress or disruption in functioning, it is a mental disorder. But this plausible-sounding theory is pretty terrible upon reflection. My love of Spinoza as a teenager caused me significant distress, because it caused kids to beat me up. It also interrupted my functioning, because it’s hard to function when you are beaten up. Still, loving Spinoza is not a mental disorder. Being gay in the 1970s caused one enormous suffering – everything from self-hatred to trouble with the law – and that helped keep homosexuality in the DSM till 1973 and “ego-dystonic homosexuality” (homosexuality, provided that one wants it “cured” in oneself) till much later. The distress-or-bad-functioning definition of mental disorder, in other words, does not block the term from being used, in the oppressive manner typical of the school counselors of my childhood, by those who want to tell gay people to be just like everyone else.

Some later DSM writers tried to solve the problem associated with defining a mental disorder as a state of mind/brain/behavior/whatever that causes distress or trouble functioning by simply adding to it a disclaimer along the lines of “the problem has to be with the individual, not with a conflict between the individual and society”. That didn’t work, because it is the job of a definition of a mental disorder to tell us when there is a problem “with the individual” and when there isn’t. Presently, we don’t have a definition that can do this job. The reason we no longer think that a woman who refuses to be a homemaker is showing a problem with functioning is not that our definition of mental disorder improved. It’s that our moral outlook did.

Part of the trouble with defining mental disorder the way philosophers try to define things is that this would be at cross purposes with what the writers of DSM are trying to do. Philosophers look for the true, or at least the coherent. Shrinks look for the useful. Let me explain. The Diagnostic and Statistical Manual of Mental Disorders is becoming a thicker and thicker book. More and more things are called mental disorders. There is a cynical hypothesis about the cause: shrinks want people to go to them and give them money. However, there is also a charitable hypothesis: shrinks want to help people, and nowadays, however obvious a person’s suffering, she can’t get the insurance company to fund psychiatric help if the suffering isn’t defined as a disorder. If a person who suffers from grief wants to take some pills to help with insomnia or make it easier to go back to work, her grief needs to be redefined as a major depressive episode, and therefore a disorder. You have to call something a mental disorder if people are to receive help for it.  Thus, however they define “mental disorder” in the introductions to their books, when you look at the long list of things that are classified as mental disorders you see that the one thing they have in common is popular demand for insurance coverage. The trouble is, of course, that “something is a mental disorder iff people want psychiatrists to help them with it” does not sound like a definition that captures a natural kind. It is basically another incarnation of the distress/bad functioning thing.

What about natural language? “Mentally ill” replaced terms like “insane”, “crazy” and “nuts”, which are, in many ways, colloquial ways to say “patently irrational”. The things that were considered forms of insanity or forms of “neurosis” when Freud was alive and are still considered paradigmatic mental illnesses today basically are forms of gross irrationality, or cause gross irrationality. These would be: psychosis that leads a person to think, irrationally, that he is Napoleon; depression that becomes so bad that the person thinks that the fact that she forgot to buy milk makes her as despicable as a Nazi, or, against all evidence, that her family will be delighted to see her dead; mania that leads a person to spend all his money and run off with his secretary to pursue a business deal that he is normally plenty smart enough to see is nonsense; terrible fear of tiny, harmless spiders; etc. To this day, being told that one’s thoughts or feelings or actions are symptoms of a mental disorder can be insulting or reassuring in a way that only being told one is being grossly irrational can be. Let me explain.

Maybe I’ll start with the reassuring. Suppose you are really afraid that there is a monster under your bed – literally or figuratively – and someone convinces you that your fear is a symptom of a mental disorder. That can be wonderful news. When I express fearful or self-hating thoughts, being told “Nomy, that sounds crazy” can be music to my ears.  I am irrational! The fact that I forgot to answer an email from the secretary does not mean I am worthless! My fear or sadness is unwarranted!  Now for the insulting: if you are very sad purely because your attempts to make your country a democracy have failed, and someone refers to your sadness as a clinical problem, it can be infuriating. No, you think, I am not irrational. I am responding appropriately to reasons. Calling it a disorder is refusing to see that. This is one reason people were angry when the latest DSM amended the definition of depression in such a way that it now includes many grieving people. People who don’t want their grief in the DSM do not deny that they are suffering and do not always deny that they could use some professional help. What they want to deny is that there’s anything grossly irrational about their grief. Of course, rationality and irrationality can be woven fine. A person can start being depressed because he lost his job – presumably a reason to be sad – but then, despite the fact that he was fired due to a recession, start feeling worthless or bad because he’s unemployed, and that’s where irrationality can creep in – even gross enough irrationality for the person to count as having a disorder.

So paradigmatic mental disorders involve serious irrationality. To that you can add conditions often thought of as disabilities rather than disorders, in which the problem is not irrationality but cognitive impairment of some sort (e.g., low intelligence, lack of some kind of know-how). Perhaps they too belong in some divine version of the DSM.  But what about conditions that do not grossly affect one’s rationality and involve no cognitive impairment? My hunch is that there is something very problematic in calling them mental disorders, as opposed to problems, troubles, eccentricities, ways of being neuro-atypical, or sometimes even vices. If you think the DSM, considered from the aspect of truth and not insurability, is getting too thick, this just might be what’s bothering you.  But to be continued.

Epistemology and Sandwiches

To Sophie Horowitz we owe the following question: if having enough blood sugar contributes to our cognitive abilities, does it mean that we sometimes have an epistemic duty to eat a sandwich?

The last three posts in Owl’s Roost concerned some reasons to think that there are no practical reasons to believe and no duties of any sort to believe (or conclude, or become less confident, etc) – at least if we interpret “duty” in a way that’s friendly to deontologists in ethics. But never mind practical reasons to believe.  Can there be epistemic reasons to act? Or, for that matter, epistemic duties to act? For example, deliberating is an action, something that you can intentionally do. One can also intentionally act to review the evidence in a complicated case, to ask experts for their opinions, or to check the origins of that article forwarded to one on Facebook that claims that cats cause schizophrenia. Do we have epistemic duties and epistemic reasons to do these things?

If we want money, there are things that we have reasons – practical reasons – to do. Call these practical reasons “financial reasons”. Similarly, there are things that we have reasons – practical reasons – to do if we want to be healthy. Call these practical reasons “health reasons”. “Health reasons” are practical reasons that apply to people whose goal is health, “financial reasons” are practical reasons that apply to people whose goal is money. Now, suppose your goal is not health, or wealth, or taking a trip on the trans-Siberian train, or breeding Ocicats in all 12 official colors, or doing well on the philosophy job market without losing your mind. Your goal happens to be knowledge – in general or in a specific topic. Or maybe your goal is to be epistemically rational – you dislike superstition, wishful thinking, paranoia, and so on – or to have justified beliefs if you can. Just like health, wealth, train riding or Ocicat breeding, knowledge and justified beliefs are goals that give rise to practical reasons. Practical reasons to deliberate well, to review evidence, to avoid some news outlets, and even, at times, to eat a sandwich. Can these be called “epistemic reasons”? Yes, but in a sense parallel to “financial reasons” and “health reasons”, not in a sense that contrasts them with practical reasons. Is “eat a sandwich when hunger clouds your cognitive capacities” an epistemic norm? Yes and no. Yes in the sense that “make sure to clean your brushes when you paint pictures” is an aesthetic norm, and no in the sense that “orange doesn’t go with purple” is an aesthetic norm. Cleaning one’s brushes is conducive to creating beauty but it’s not in itself beautiful. To the extent that one wants to create beauty using paintbrushes, one usually has a reason to clean them. To the extent that one wants to come up with nice valid arguments in one’s paper, one has a reason to eat enough before writing. That does not make eating that sandwich epistemically rational in the same sense that believing what the evidence suggests is epistemically rational. Eating that sandwich is a way to make yourself epistemically rational, which happens to be what you want. In short, the adjective “epistemic”, now added to a larger number of nouns than ever before in the history of philosophy, can signify a type of normativity, a kind of reasons, or it can signify a type of object to which a norm applies, the stuff that a reason is concerned with, which happens to be knowledge or belief rather than money, paintings, Ocicats, cabbages or kings. I think the distinction needs to be kept in mind.

So….  “Epistemic duties to act” is just another name for “practical reasons that one has to act if one is after knowledge or justified belief”. Or is it really? Some might argue that there might be epistemic duties that do not depend on what one is after. Knowledge, truth, justified belief, or epistemic rationality are, say, objectively good things and one has a duty to pursue them – we’re talking something resembling a categorical imperative, not a hypothetical one. Perhaps we should seek knowledge, or at least some intrinsically valuable kinds thereof, regardless of what we happen to want. But “one should seek knowledge”, as well as “one should seek epistemic rationality” and “one should seek the truth” are only epistemic norms in the sense that they happen to be about knowledge, rationality, etc. They are not different in kind from the practical directives “one should seek beauty”, “one should seek pleasure”, “one should seek honor”, and so on. Why one might want to seek knowledge is an issue for epistemologists, but it is also an issue for old-fashioned ethicists, theorists of the good life, who try to figure out what, in general, is worth pursuing. It makes sense to say that Sherlock Holmes, who refuses to pursue any knowledge that isn’t directly relevant to the cases he encounters as a detective, is missing out on something good, or on a part of the good life, and it makes sense to say (though I am not the type to say it) that he is irrational in refusing to pursue more knowledge. But to say that he is thereby epistemically irrational is odd. He is Holmes. He is as epistemically rational as it gets. If he is irrational, it’s a practical irrationality – albeit not in the colloquial sense of “practical” – in refusing to pursue episteme outside the realm of crime detection.

Epistemic Norms Aren’t Duties, Epistemic Irrationality Isn’t Blame

The words “duty” and “blame” can be used in many ways. You can blame the computer for an evening ruined by a computer failure, but computers are not morally blameworthy. You can talk about Christa Schroeder, Hitler’s secretary, performing her duties, but morally speaking her duty was – what? To work somewhere else? To become a spy? To screw up her work as much as possible? When I say that there is no epistemic blame and there are no epistemic duties, I mean to say that epistemic norms behave very differently from moral duties as deontologists talk about them and epistemic irrationality behaves very differently from moral blame as free will/moral psychology people talk about it. I do not intend to deny that epistemology is normative, but that it is normative does not imply that for every ethical concept there is an epistemological concept that is exactly isomorphic to it.

I talked in previous posts about why I think there are no practical reasons to believe, though there can be practical reasons to make yourself believe. At this point I will say nothing about duties to make yourself believe things and stick to putative duties to believe or not believe things – say, not to believe contradictions.

The thing about deontology is that you get an A for effort. Suppose, for example, Violet has a duty to return Kiyoshi’s book, which she promised to give him back on March 16th.  However, a large snow storm causes her flight to be cancelled, and despite all her efforts, she can only get back to Kiyoshi on the 17th. Any Kantian will hold that even though the book does not return to its owner on time, Violet’s good will shines like a jewel as long as she has tried. Some would say that ought implies can and therefore Violet did nothing wrong. She discharged her duty. Others would say that she has done wrong, but is exempt from blame.

Compare that to an epistemic norm: one ought not believe contradictions. Suppose Violet, who is in my intro ethics class, tries hard not to believe contradictions (my opening spiel in the first class often includes a reference to the importance of not believing them). She tries especially hard with regard to the class material, which she studies feverishly, rehashes in office hours, etc. Still, in the final paper, she writes sincerely, in response to one question, that all morality is relative to culture and, in response to another, that murder is “absolutely” wrong, regardless of circumstances. Violet’s valiant efforts not to believe contradictions do not result in her getting an A for effort – not literally, in my class, and not figuratively, vis-à-vis epistemic norms. If it is epistemically irrational to believe contradictions, Violet is irrational in believing one, regardless of how hard she tries not to be. She is not the least bit less irrational because of her efforts.

It might seem that epistemic norms too grant As for effort, because they do sometimes grant an A for a process of responding to reasons that results in a false belief. For example, a person I met at a conference guessed quickly, plausibly and wrongly that my name is Turkish. The error makes sense – there is a place in Turkey called Arpali, and the man made an inference from that – and, assuming that the man’s degree of credence was proportional to evidence, he does get “an A” for his reasoning or for his response to epistemic reasons. But despite the tempting analogy, it’s important that the A is not literally for effort. As it happened, no effort was involved – the man’s guess seems to have come to him quickly. He made a good inference, and it doesn’t matter whether it came to him through effort or not – in fact, some might regard him as more epistemically virtuous because no effort was needed. On the other hand, if, instead of making a good guess, he were to stand there and wrinkle his forehead and come up with a guess that makes no sense at all, no amount of effort on his part would make us treat his inference as less bad.

It is an interesting question whether the effort thing is a problem for epistemic virtue talk, as opposed to duty talk.  While trying hard to return a book may discharge your duty to return it, trying hard to acquire the virtue of courage, say, does not mean that you have the virtue of courage, and trying to act generously does not automatically imply acting generously. Virtue ethics does not generally give As for effort (whether it gives some credit for effort is a different question).

Here is a related asymmetry regarding blame and the charge of epistemic irrationality. Suppose Anna attacks her professor because she believes, during a psychotic episode, that he is the devil. Anna is exempt from moral blame: “you can’t blame her, she was psychotic”. She is not exempt from the charge of epistemic irrationality that can be leveled at her belief that the professor is the devil. (“She’s not epistemically irrational, she is psychotic”? Doesn’t work. Having a psychotic episode is one way of being irrational).

Someone might ask: Ok, so we don’t have duties to believe, or not to believe,  or to infer, but don’t we have other epistemic duties – say, a duty to deliberate well, or to avoid watching fake news if we can help it? Can’t we incur epistemic blame if we fail to discharge these duties? Ok, to be continued. I mean it.

Don’t Worry, Be Practical: Two Raw Ideas and a PSA

1) If you are not too superstitious – I almost am – imagine for a moment that you suffer from cancer. Imagine that you do not yet know if the course of treatment you have undergone will save you or not.  You sit down at your doctor’s desk, all tense, aware that at this point there might be only interim news – indications that a good or a bad outcome is likely. The doctor starts with “well, there are reasons to be optimistic”. Though you are still very tense, you perk up and you feel warm and light all over. You ask what the reasons are. In response, the doctor hands you a piece of paper: ironclad scientific results showing that optimism is good for the health of cancer patients.

Your heart sinks. You feel like a victim of the cruelest jest. I’ll stop here.

Some philosophers would regard what happens to you as being provided with practical reasons to believe (that everything will turn out alright) when one desperately hoped for epistemic reasons to believe (that everything will turn out alright). I, as per my previous post, think what happens is being provided with practical reasons to make oneself believe that everything will turn out alright (take an optimism course, etc.) when one desperately hoped for epistemic reasons to believe that everything will turn out alright – I think the doctor says a false sentence to you when he says there were reasons to be optimistic. For the purpose of today it does not matter which explanation is the correct one. The point is that when we theorize about epistemic reasons and putative practical reasons to believe, the philosopher’s love of symmetry can draw us towards emphasizing similarities between them (not to mention views like Susanna Rinard’s, on which all reasons are practical), but any theory of practical and epistemic reasons heading in that direction needs a way to explain the Sinking Heart intuition – the fact that in some situations, receiving practical reasons when one wants epistemic reasons is so incredibly disappointing, so not to the point, and even if the practical reasons are really, really good, the best ever, it is absurd to expect that any comfort, even nominal comfort, be thereby provided to the seeker of epistemic reasons. What the doctor seems to offer and what she delivers seem incredibly different. The seeming depth of this difference needs to be addressed by anyone who emphasizes the idea that a reason is a reason is a reason.

To move away from extreme situations, anyone who is prone to anxiety attacks knows how $@#%! maddening it is when a more cheerful spouse or friend tells you that it’s silly of you to worry and then, when your ears perk up, hoping for good news you have overlooked, gives you the following reason not to worry: “there is absolutely nothing you can DO about it at this point”. Attention, well-meaning cheerful friends and partners the world over: the phrase “there is nothing you can do about it” never helps a card-carrying anxious person worry less. I mean never. It makes us worry more, as it points out quasi-epistemic reasons to be more anxious. “There’s nothing you can do about it now” is bad news, and being offered bad news as a reason to calm down seems patently laughable to us. Thank you!  At the basis of this common communication glitch is, I suspect, the fact that practical advice on cognitive matters is only useful to the extent that you are the kind of person who can manipulate her cognitive apparatus consciously and with relative ease (I’m thinking of things like steering one’s mind away from dwelling on certain topics).  The card-carrying worrier lacks that ability, to the point that the whole idea of practical advice about cognitive matters isn’t salient to her. As a result, a statement like “it’s silly of you to worry” raises in her the hope of hearing epistemic reasons not to worry, which is then painfully dashed.

2) A quick thought about believing at will.  When people asked me why I think beliefs are not voluntary in the way that actions are, I used to answer in a banal way by referring to a (by now) banal thought experiment: if offered a billion dollars to believe that two plus two equals five, or threatened with death unless I believe that two plus two equals five, I would not be able to do it at will. A standard answer to the banal thought experiment points out that actions one has overwhelming reasons not to do can also be like that. Jumping in front of a bus would be voluntary, but most people feel they can’t do it at will. But I think this answer, in a way that one would not expect from those who like the idea of practical reasons to believe, greatly overestimates how much people care about the truth. Let me take back the “two plus two” trope, as not believing that two plus two equals four probably requires a terrifying sort of mental disability, and take a more minor false belief – say, that all marigolds are perennial. If offered a billion dollars, I would still not be able to believe at will that all marigolds are perennial. To say that this is like the bus case would pay me a compliment I cannot accept. One has a hard time jumping in front of a bus because one really, really does not want to be hit by a bus. One wants very badly to avoid such a fate. Do I want to believe nothing but the truth as badly as I want to avoid being hit by a bus? Do I want to believe nothing but the truth so badly that I’d rather turn down a billion dollars than have a minor false belief? Ok, a false and irrational belief. Do I care about truth and rationality so much that I’d turn down a billion dollars not to have a false and irrational belief? Nope. Alternately, one might hold that jumping in front of a bus is so hard because evolution made it frightening. But am I so scared of believing that all marigolds are perennial? No, not really. I am sure I have plenty of comparable beliefs and I manage. I’d rather have the money. I would believe at will if I could. It’s just that I can’t. We are brokenhearted when we get practical reasons when we want epistemic reasons, but the reason isn’t that we are as noble as all that.

Beliefs, Erections and Tears, Or: Where are my Credence-Lowering Pills?

People who are not philosophers sometimes “come to believe” (“I’ve come to believe that cutting taxes is not the way to go”) or “start to think” (“I’m starting to think that Jolene isn’t going to show up”). Philosophers “form beliefs”. People who are not philosophers sometimes say “I have read the report, and now I’m less confident that the project will succeed”, and sometimes write “reading the report lowered my confidence in the success of the project”. Philosophers say “I have read the report and lowered my credence that the project will succeed”.

In other words, philosophers talk about routine changes in credence as if they were voluntary: I, the agent, am doing the “forming” and the “lowering”. That is kind of curious, because most mainstream epistemologists do not think beliefs are voluntary. Some think they sometimes are – perhaps in cases of self-deception, Pascal’s wager, and so on – but most take them to be non-voluntary by default. Very, very few hold that just like I decided to write now, I also decided to believe that my cats are sleeping by my computer. Yet if I were their example, they would say “Nomy looks at her desk and forms the belief that her cats are sleeping by the computer”. If I say anything to them about this choice of words, they say it’s just a convenient way to speak.

John Heil, wishing to be ecumenical in “Believing What One Ought”, says that even if we think that belief is not, strictly speaking, voluntary, we can agree that it is a “harmless shorthand” to talk about belief as if it is voluntary (talk about it using the language of action, decision, responsibility, etc). Why? Because it is generally possible for a person to take indirect steps to “eventually” bring about the formation of a belief.

OK then – so erections are voluntary! Let’s use the language of action, decision, and responsibility in talking about them! No, seriously. It is possible for a man to take indirect voluntary steps to bring it about that he has an erection. And yet, if I teach aliens or children an introduction to human nature, I do something terribly misleading if I talk about erections as if they were voluntary.

Another example: suppose Nomy is a sentimental sop and hearing a certain song can reliably bring her to tears. Heck, just playing the song in her head can bring her to tears. There are steps Nomy can take to cause herself to burst out in tears. Still, again, in that Intro Human Nature course one cannot talk about outbursts of tears as if they were voluntary without being misleading.

Last, would any self-respecting action theorist say that the phrase ‘she raised her arm’ can be a ‘harmless shorthand’ way to describe a person who caused her arm to jerk upwards by hooking it to an electrode? Or, worse, a “harmless shorthand” by which to describe a person whose arm jerked due to a type of seizure but who easily could have caused such jerking through using electrodes or prevented it through taking a medication?

In all of these cases, a “shorthand” would not be harmless – for two reasons. The more banal one is that when eliciting intuitions about birds, you don’t want the ostrich to be your paradigm case. Most erections, outbursts of tears, and arm-convulsions are not the result of intentional steps taken to bring them about, and there is a limit to what we can learn about them from the fewer cases that are. The same is true for most beliefs. Many of my beliefs are the result of no intentional action at all – my belief that the cats are here came into being as soon as they showed up in my field of vision, my belief that there are no rats in Alberta came into being as soon as Tim Schroeder told me – and other beliefs I have are the result of deliberation, which is an action, but not an intentional step taken to bring about a particular belief. (So my belief that the ontological argument fails was not the result of intentional steps taken to make myself believe that the ontological argument fails). Whatever intuitions one might have about non-paradigmatic cases, like Pascal’s wager, could easily fail to apply to the many cases in which no step-taking has preceded a belief.

But to talk of erections and tears as if they were voluntary is also dangerous for a deeper reason, a reason that has nothing to do with the frequency of indirect-steps cases. Even if the majority of tears, erections, or convulsions were indirectly self-induced, there is still a difference between the action taken to induce the erection, tears or seizure and the thing that results from the action. If the former is voluntary, that alone doesn’t make the latter voluntary. Similarly, even if we routinely had the option of making ourselves believe something through the pressing of a button, and we routinely took advantage of this option, there would still be a difference between the act of pressing the button – an action – and the state of believing itself. If we make “pressing the button” a mental action – analogous to intentionally calling to mind images that are likely to produce an erection or an outburst of tears – it hardly matters: the mental action that produces a belief would still be different from its outcome.

Why does it matter? Because we only have practical reasons to do things voluntarily. It seems quite clear that, on those occasions, whatever their number, in which we can, in fact, for real, form a belief, there can be practical reasons to form that belief or not to form it, and it seems only slightly less clear that sometimes it could be rational to form a belief that clashes with the evidence. This, however, is taken to mean that there are practical reasons to believe. I am working on a paper arguing that this does not work.  We have practical reasons to take steps to bring a belief about. We sometimes have practical reasons to make ourselves believe something, but that’s not the same as practical reasons to believe it. No such thing. Category mistake. This is where one could ask: why not call reasons to create a belief in oneself “reasons to believe?” Isn’t that a harmless shorthand?

I don’t think so. I agree that “reasons to stay out of debt” is a nice shorthand for “reasons not to perform actions that can lead to one’s being in debt and to perform actions that are conducive to staying out of it”, but while “reasons to stay out of debt” just are reasons for various actions and inactions, you can have “reasons to believe” something without any implied reasons for any actions or inactions. Jamal’s being out of debt is an instance of rationality (or response to reasons) on his part iff Jamal’s being out of debt is the intended result of a rational course of action taken by Jamal. Gianna’s belief that Israelis don’t speak Yiddish can be perfectly rational (as in, there for good reasons) even if it’s not the result of any rational course of action taken by her. Perhaps the belief occurred in her mind as the result of unintentionally hearing a reliable person’s testimony, no action required, or was the result of an irrational action like reading Wikipedia while driving; it can still be as rational a belief as they come. When we say “reasons to believe” it is not normally a shorthand for “reasons to make oneself believe”, and so to shorthand “reasons to make oneself believe” to “reasons to believe” is not harmless, but very confusing. To be continued.

Philosophy: Truth or Dare?

Many years ago I was having a long chat with someone who later became a well-known philosopher. His work was already way cool, but looking at the theses he defended, I told him he must be aiming for the Annual David Lewis Award for Best-Defended Very Weird View. He told me that he did not always believe the views he defended. He was most interested in seeing how far he could go defending an original, counter-intuitive proposition as well as he could. What did I think? I said that it seems to me that some philosophers seek the Truth but others choose Dare.

I am more of a Truth Philosopher than a Dare Philosopher, but I doubt it’s a matter of principle, given that my personality is skewed towards candor. I’m just not a natural for writing things in which I don’t have high credence at the time of writing. However, if you are human, should you ever have high credence in a view like, say, compatibilism, which has, for a long time, been on one side of not only a peer disagreement but a veritable peer staring contest? Looking at it from one angle, the mind boggles at the hubris.

Zach Barnett, a Brown graduate student, has been working on this recently and has a related paper in Mind. I asked him to write about it for Owl’s Roost and he obliged. Here goes:

I want to discuss a certain dilemma that we truth-philosophers seem to face. The dilemma arises when we consider disagreement-based worries about the epistemic status of our controversial philosophical beliefs. For example:

Conciliationism: Believing in the face of disagreement is not justified – given that certain conditions are met.

Applicability: Many/most disagreements in philosophy do meet the relevant conditions.

————————————————————————————————

No Rational Belief: Many/most of our philosophically controversial beliefs are not rational.

Both premises of this argument are, of course, controversial. But suppose they’re correct. How troubling should we find this conclusion? One’s answer may depend on the type of philosopher one is. 

The dare-philosopher needn’t be troubled at all. She might think of philosophy as akin to formal debate: We choose a side, somehow or other, and defend it as well as we can manage. Belief in one’s views is nowhere required.

The truth-philosopher, however, might find the debate analogy uncomfortable. If we all viewed philosophy this way, it might seem to her that something important would be missing – namely, the sincerity with which many of us advocate for our preferred positions. She might protest: “When I do philosophy, I’m not just ‘playing the game.’ I really mean it!”

At this point, it is tempting to think – provided No Rational Belief is really true – that the truth-philosopher is just stuck: If she believes her views, she is irrational; if she withholds belief, then her views will lack a form of sincerity she deems valuable.

As someone who identifies with this concern for sincerity, I find the dilemma gripping. But I’d like to explore a way out. Perhaps the requisite sort of sincerity doesn’t require belief. An analogy helps to illustrate what I have in mind.

Logic Team: You’re on a five-player logic team. The team is to be given a logic problem with possible answers p and not-p. There is one minute allotted for each player to work out the problem alone followed by a ten-second voting phase, during which team members vote one by one. The answer favored by a majority of your team is submitted.

      Initially, you arrive at p. During the voting phase, your teammate Vi – who, in the past, has been more reliable than you on problems like this one – votes first, for not-p. You’re next. Which way should you vote?

Based on your knowledge of Vi’s stellar past performance, you might suspect that you made a mistake on this occasion. Perhaps you will cease to believe that your original answer is correct. Indeed, you might well become more confident of Vi’s answer than you are of your own.

It doesn’t follow, though, that you should vote for Vi’s answer of not-p. If all you care about is the accuracy of your team’s verdict, it may still be better to vote for your original answer of p.

Why? In short, the explanation of this fact is that there is some value in having team members reach independent verdicts. To the extent that team members defer to the best player, independence is diminished. This relates to a phenomenon known as “wisdom of the crowd,” and it relates more directly to Condorcet’s Jury Theorem. But all of this, while interesting, is beside the point.
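
(Beside the point, but for the curious: here is a minimal sketch of that arithmetic – my own illustration, with made-up accuracy numbers, not part of the Logic Team case itself – of why independent votes can beat deferring. If everyone defers to Vi, the team is exactly as reliable as Vi; if everyone votes independently, the majority can do better than any single member.)

```python
from itertools import product

def majority_accuracy(accuracies):
    """Probability that a majority vote of independent voters is right,
    given each voter's individual chance of being right."""
    total = 0.0
    for outcome in product([True, False], repeat=len(accuracies)):
        p = 1.0
        for correct, acc in zip(outcome, accuracies):
            p *= acc if correct else 1 - acc
        if sum(outcome) > len(accuracies) / 2:  # strict majority is right
            total += p
    return total

# Made-up numbers: four teammates are each right 70% of the time, Vi 80%.
# If everyone defers to Vi, the team's accuracy is just Vi's: 0.800.
# If everyone votes independently, the team does better:
print(majority_accuracy([0.7, 0.7, 0.7, 0.7, 0.8]))  # ~0.863
```

(The gap tends to grow as more independent voters are added, which is the Jury Theorem at work.)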

In light of the above observations, suppose that you do decide to vote for your original answer, despite not having much confidence in it. Still, there is an important kind of sincerity associated with your vote: in a certain sense, p seems right to you; your thinking led you there; and, if you were asked to justify your answer, you’d have something direct to say in its defense. (In defense of not-p, you could only appeal to the fact that Vi voted for it.) So you retain a kind of sincere attachment to your original answer, even though you do not believe, all things considered, that it is correct.

To put the point more generally: In at least some collaborative, truth-seeking settings, it can make sense for a person to put forward a view she does not believe, and moreover, her commitment can still be sincere, in an important sense. Do these points hold for philosophy, too? I’m inclined to think so. Consider an example.

Turning Tide: You find physicalism more compelling than its rivals (e.g. dualism). The arguments in favor seem persuasive; you are unmoved by the objections. Physicalism also happens to be the dominant view.

      Later, the philosophical tide turns in favor of dualism. Perhaps new arguments are devised; perhaps the familiar objections to physicalism simply gain traction. You remain unimpressed. The new arguments for dualism seem weak; the old objections to physicalism continue to seem as defective to you as ever. 

Given the setup, it seems clear that you’re a sincere physicalist at all points of this story. But let’s add content to the case: You’re extremely epistemically humble and have great respect for the philosophers of mind/metaphysics of your day. All things considered, you come to consider dualism more likely than physicalism, as it becomes the dominant view. Still, this doesn’t seem to me to undermine the sincerity of your commitment to physicalism. What matters isn’t your all-things-considered level of confidence, but rather, how things sit with you, when you think about the matter directly (i.e. setting aside facts about relative popularity of the different views). When you confront the issues this way, physicalism seems clearly right to you. In philosophy, too, sincerity does not seem to require belief (or high confidence).

In sum, perhaps it is true that we cannot rationally believe our controversial views in philosophy. Still, when we think through the controversial issues directly, certain views may strike us as most compelling. Our connection to these views will bear certain hallmarks of sincerity: the views will seem right to us; our thinking will have led us to them; and, we will typically have something to say in their defense. These are the views we should advocate and identify with – at least, if we value sincerity. 

I find the proposed picture of philosophy attractive. It offers us a way of doing philosophy that is immune to worries from disagreement, while allowing for a kind of sincerity that seems worth preserving. As an added bonus, it might even make us collectively more accurate, in the long run.

That was Zach Barnett. Do I agree with him? As is usual when I talk to conciliationists, I don’t know what to think!

Raw Reflections on Virtue, Blame and Baseball

In a much argued-about verse in the Hebrew Bible, we are told that Noah was a righteous man and “perfect in his generations” or “blameless among his contemporaries” or something like that (I grew up on the Hebrew, and so I can say: the weirdness is in the original). The verse has been treated as an interpretative riddle because it’s not clear what being “blameless among one’s contemporaries” amounts to. Was the guy really a righteous person (as is suggested by the subsequent text telling us that he walked with God) or was he a righteous person only by comparison to his contemporaries, who were dreadful enough to bring a flood on themselves?

My friend Tim Schroeder would probably have suggested that, given his time, Noah must have had an excellent Value Over Replacement Moral Agent. It’s kinda like Value Over Replacement Player. Here’s how Wikipedia explains the concept of Value Over Replacement Player:

In baseball, value over replacement player (or VORP) is a statistic (…) that demonstrates how much a hitter contributes offensively or how much a pitcher contributes to his team in comparison to a fictitious “replacement player” (…) A replacement player performs at “replacement level,” which is the level of performance an average team can expect when trying to replace a player at minimal cost, also known as “freely available talent.”

Tim and I have been toying with the idea that while rightness, wrongness and permissibility of actions are not the sort of things that depend on what your contemporaries are doing, ordinary judgments of the virtue of particular people (“she’s a really good person”, “he’s a jerk”, and so on) are really about something akin to a person’s Value Over Replacement Moral Agent or VORMA. The amount of blame one deserves for a wrong action or credit for a right action also seems to be at least partially a matter of VORMA. Thus a modest person who is thanked profusely for his good action might wave it off by saying “come on, anyone would have done this in my place”, while a defensive person blamed emphatically for her bad action might protest that “I’m no worse than the next person”. Both statements allude to a comparison to a sort of moral “replacement player” – an agent who would, morally speaking, perform at “replacement level”, the level we would expect from a random stranger, or, more likely, a random stranger in a similar time, place, context – whom we would regard as neither morally good nor morally bad.
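
To make the comparative structure explicit, here is a toy sketch – my own, with invented numbers, not anything Tim or Wikipedia has committed to – of the calculation that both VORP and VORMA gesture at: value is performance measured against the average of the relevant replacement pool.

```python
def value_over_replacement(performance, replacement_pool):
    """Toy VORP/VORMA: performance relative to the average performance
    of a 'freely available' replacement from the relevant pool."""
    replacement_level = sum(replacement_pool) / len(replacement_pool)
    return performance - replacement_level

# Invented numbers on an arbitrary 0-10 'righteousness' scale.
# Noah's absolute score is middling, but his flood-worthy
# contemporaries set a very low replacement level:
print(value_over_replacement(5, [1, 2, 1, 2]))  # 3.5
# The same absolute score among decent moderns rates as negative:
print(value_over_replacement(5, [6, 7, 6, 7]))  # -1.5
```

Same raw score, opposite verdicts: the judgment tracks the replacement pool, not the absolute performance.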

I have been reading a cool paper by Gideon Rosen on doing wrong things under duress. A person who commits a crime under a credible threat of being shot if she refuses to commit it seems to be excused from blame, Rosen says, even if, as Aristotle would have it, the person acted freely, or, as contemporary agency theorists would have it, the person acted autonomously. The person who commits a crime so as not to be killed is not necessarily acting under conditions of reduced agency, so where does the excuse come from? Rosen thinks, like I do, that excuses are about quality of will, and argues that the person who acts immorally under (bad enough) duress does not, roughly, show a great enough lack of moral concern to justify our blaming her in the Scanlonian sense of “blame” – that is, socially distancing ourselves from her. Simply falling short of the ideal of having enough moral concern to never do anything wrong does not justify such distancing.

Without getting into the details of Rosen’s view, I would not be surprised if this has something to do with VORMA as well. Even in cases in which a person who commits a crime to avoid being killed acts wrongly, and I agree with Rosen there are many such cases, the wrongdoer does not usually show negative VORMA. If I were to shun the wrongdoer, I would arguably be inconsistent in so far as I do not shun, well, typical humanity, who would have acted the same way.  I suspect that even if I happened to be unusually courageous, a major league moral agent, and escape my own criteria for shunning, there would still be something very problematic about shunning typical humanity.

VORMA might also explain the ambivalence we feel towards some people whom it is not utterly crazy to describe as “perfect in their generations” or “blameless among their contemporaries”, like Noah. “My grandfather was a really, really good person!”, says your friend. She forgets, when she says it, that she thinks her grandfather was sexist in various ways – though, to be sure, a lot less so than his neighbors. Heck, she forgets that by her own standards, eating meat is immoral, and her grandfather sure had a lot of it. But unlike the Replacement Player in baseball, who is clearly defined in terms of average performance of players you would find in second tier professional teams, our choice of pool of imagined Replacement Moral Agents seems inevitably sensitive to pragmatics and contexts. Your friend’s grandfather had magnificent VORMA if all the bad things he did were done by almost everyone in his demographics and time period and if he often acted well where almost none of them would have. While we might have useful ideals of virtuous people who always do the right thing, the phrase “wonderful person” when applied to a real human might normally mean something more analogous to a star baseball player. As we know, such players get it wrong a lot of the time!

PS Eric Schwitzgebel has very interesting related work about how we want “a grade of B” in morality.

PPS for why I don’t think the grandfather is simply excused from his sexism by moral ignorance, see my paper “Huckleberry Finn Revisited”.

Kantianism vs Cute Things

“We love everything over which we have a decisive superiority, so we can toy with it, while it has a pleasant cheerfulness about it: little dogs, birds, grandchildren”.

Immanuel Kant

I don’t normally argue for or against Kant, recognizing that figuring out exactly what he means takes expertise I don’t have. I normally argue with contemporary Kantians, because if I don’t get what they mean, I can email them and ask, or they can tell me I’m wrong in Q&A. Yet I can’t resist the quote above. It is, of course, offensive to grandparents everywhere, and to anyone who has ever valued the love of a grandparent. See, your grandparents “loved” you because you were so small and weak and they could toy with you and relish being on the right side of the power imbalance between you. It doesn’t sound like love to me. It sounds like some kind of chaste perversion.


The Problem With Imagining (2): Simulation, Tragedy and Farce

When you try to understand a person, you imagine yourself in her situation, and some psychologists call it “simulation”. I tentatively use the term “Runaway Simulation” to describe the countless cases when a reasonable working assumption – “the other person thinks and feels the way I would have thought and felt if I were in their situation” – morphs into a stubborn belief that persists despite loads of glaring counter-evidence.

Sometimes it’s nearly harmless: you love looking at pictures of your children and can’t imagine anyone could fail to enjoy pictures of your children, so you post too many baby pictures on Facebook. You are a ravenous person and so you doubt anyone, however generally honest, who claims to be full after a salad. You are an organized person and you ask someone like me for her flight itinerary six months in advance, despite your experience with her disorderly lifestyle. But things can get trickier. You meet a person who claims not to want children, and you can’t imagine not wanting children, so you come up with some other explanation for her having no children and claiming she doesn’t want them. Perhaps she had a bad mother and is afraid she might be a bad one too? Perhaps she is afraid of commitment in general? Perhaps her romantic partner is wrong for her, and not wanting children is her unconscious’s way to tell her the relationship isn’t working? You violate Ockham’s Razor like nobody’s business, because the best explanation is under your nose: she just doesn’t want children. This however you can’t imagine, and we humans trust our imaginations a lot. Like a twisted Holmes, you accept an improbable story because the alternative seems impossible, and some profound misunderstandings begin that way.


What Kantianism Gets Wrong

With regard to moral theory I have two hunches. One is that the wellbeing of one’s fellow humans is an intrinsic moral value. Intrinsic not only in that a moral agent will care about it for its own sake but also in that its value is not derivative from other values, like, say, that of rational agency. So Kantianism is false. The other is that the wellbeing of one’s fellow humans isn’t the only intrinsic moral value. There are virtues that are independent of benevolence, and respect – of the sort that makes paternalism wrong – is one of them. So utilitarianism doesn’t work either.

But after decades of Kantian dominance in analytical ethics, some of us have become used to thinking of concern for the wellbeing of others as a somehow coarse, primitive virtue befitting swine and Jeremy Bentham, unless it is somehow mediated by, derived from or explained through something more complicated and refined, like the value of rational agency.

Suppose one is roughly Kantian. Reverence for rational agency is the one basis of morality as far as one is concerned, where rational agency is thought of roughly as the capacity to set ends. What to do with the sense that benevolence is a major part of morality? The answer seems to be “think of benevolence in terms of a duty to adopt and promote other people’s ends”. Now suppose that, as many contemporary Kantians do,  you reject the idea that adopting and promoting a person’s ends is the same thing as protecting her wellbeing – after all, most humans have some ends for which they are willing to sacrifice some wellbeing. In this case, what you say is that at the heart of benevolence we have a duty to adopt and promote people’s ends. We also have a duty to protect human wellbeing because, even though it’s not the only thing people care about, it is a very important end for all agents.

I don’t think this works, though. My argument goes like this:

  1. If the reason protecting a person’s wellbeing is important is purely the fact that her wellbeing is an important end to her and we have a duty to adopt her ends, then it would be of at least equal moral importance to protect any end that is at least equally important to her.
  2. Protecting an agent’s wellbeing is something we are morally called upon to do in some cases where, other things being equal, we would not be called upon to protect her pathway to achieving another equally important (to her!) end.

Therefore, it is false that the reason protecting a person’s wellbeing is important is purely the fact that her wellbeing is an important end to her and we have a duty to adopt her ends.

Let me talk about premise 2 and why it’s plausible.

Take a case where an economically comfortable person, let’s call her Mercedes, is asked for help by her desperate acquaintance, Roger. She can, by paying 50 dollars, rescue him from being beaten up. If beaten up, Roger would suffer pain and then have to spend some days in a hospital, but he is not going to be killed. I am trying to stick to 1000 words so let me just promise you I have a half-way-realistic case.  Now imagine an alternative scenario in which a person – call him Leonard – asks Mercedes for 50 dollars because without them, a great opportunity to travel and spread his Mennonite religion will have to be relinquished.  Leonard’s end (spreading his religion) is at least as important to him as Roger’s end (not being beaten up) is important to Roger, and more important to Leonard than Leonard’s own wellbeing – he is willing to suffer for it if needed. For all Mercedes knows, spreading Leonard’s religion is itself strictly morally neutral – she has no particular reason to spread it independently of him.

There is an asymmetry between the cases. In the first scenario, Mercedes would display a lack of benevolence – perhaps of decency! – if she were to refuse to rescue Roger from a beating by giving him $50, given that this would be easy for her, no harm would be caused by it to anyone, etc. In the second scenario there is no such presumption. If Mercedes likes Leonard’s cause, it makes sense for her to make a donation. If she’s indifferent to his cause, no compelling reason to donate is provided by the very fact that Leonard would be ready, if worst comes to worst, to suffer for his cause. Unless she does fear for his wellbeing – fears, for example, that Leonard is in bad shape and will plunge into a horrible depression if she declines – Mercedes is not any less of a good Samaritan, certainly isn’t a sub-decent Samaritan, for not wanting to donate to another’s morally neutral cause, however crucial her donation would be to the cause.

If all that made Roger’s wellbeing matter morally was its importance to him as an end, she would have had as much of a duty to help Leonard.

Some Kantians would reply that what matters here isn’t protecting Roger’s wellbeing but the fact that Roger might lose rational agency. Roger, however, is not in danger of death or brain damage. He might suffer pain, but it takes a truly extreme amount of suffering to deprive someone of basic human rationality. His ability to perform successful actions will be impaired for a few days, but being a rational agent is not about being a successful performer of actions – it is about being responsive to practical reasons. It would be quite wrong to say that anyone with whom the world does not collaborate – because of an injury, or due to being in chains for that matter – is thereby not a rational agent. Furthermore, preventing a few days of suffering is more morally urgent than preventing a few days of involuntary deep sleep with no significant harm expected, though involuntary sleep deprives you of agency if anything does.

There is something special about wellbeing.