Notes from a Character

When was the last time you tried to get rid of one habit you had that you didn’t like? How easy was that?

That’s normally my first response when people ask me whether I think we can change our characters intentionally, through trying. I don’t know why this question is so often addressed to me. Perhaps it’s because people think I’m a virtue ethicist. Maybe it’s because I have gone through a more thorough intentional change in my character than most people I know have. Those of us who successfully made such a change, perhaps even more than those who tried and failed, know that many philosophers speak about “cultivating” or “managing” our characters in too casual a way. Intentionally changing character is hard. It’s God damn hard. It’s @#$%&! hard.

My name is Nomy and I’m too candid. However, I am, it seems, employable (even interview-able! If you are also too candid, you know that can be much harder). I also have great friends. These things weren’t true back when I was a teenager capable of telling a person that he is ugly without feeling anger at him or expecting him to be insulted. Mine was a case of grand social incompetence that today would have gotten me diagnosed as “on the spectrum” very quickly (erroneously, I hasten to add: my problem was bad upbringing, in a broad neo-Aristotelian sense). Every step of the many-years-long journey from there to minimal practical wisdom was the result of gargantuan effort. It sounds like I’m bragging, and in a way I am, but what I want to get across most of all is the frustrating difficulty of it, the fumbling, the repeated not-even-close failures, the times you think you have finally become an agreeable human only to discover that you have once more offended someone you hadn’t the slightest desire to hurt, or that yet another person said “ah, her? we thought she might be difficult” – without your having any idea why. It was exhausting – and we’re talking getting from utterly clueless to merely too candid; we are not talking becoming a person worthy of raising a flag with a red maple leaf on it, say, or the kind of diplomatic person that a woman is still (unfairly) expected to be.

So, intentional character change: possible but insanely hard, requires help from others, isn’t just a matter of practice through repeated action, and should not be talked about lightly – as in suggesting that every time a person is blameworthy for an akratic action, what they are really blameworthy for is not having, some years earlier, done the obvious thing and gone to character school, where remedial courses are always available. But sometimes people ask me whether my (and Schroeder’s) view allows for people intentionally becoming more virtuous. For Tim and me, to be virtuous is to have good will – to want the right things, de re – and not to want the wrong things, de re. If you don’t like desires, you can have a pretty similar view involving concerns or cares otherwise interpreted. Your intrinsic desire situation likely matters not only to your patterns of behavior but to your cognition as well (e.g. if you want humans not to suffer, you are more likely to notice the sad person standing in the corner; if you want equality, you are more likely to notice that a movie is racist), but nobody, strictly speaking, is morally virtuous just because of a cognitive talent or morally vicious just because of a cognitive fault. Being capable of noticing the sad person in the corner because you’re an observant novelist scores you no moral points, and being incapable of noticing racism in a movie because you came from far away and don’t get the cultural references does not lose you any. By this measure, it is likely that I did not become more virtuous than I used to be. My quality of will didn’t change – my cognitions and habits did. So, in the strict Arpaly/Schroeder sense, is it possible to change from a not-so-virtuous person into a virtuous one, intentionally?

Seems like that would be impossible, paradoxical, self-defeating. The virtuous person is defined by the things she intrinsically desires, or if you prefer, what she cares about. She desires, let’s say, that humans be safe from suffering, that people be treated equally, that she not lie – the details would vary depending on what the best normative theory tells us morality is about. Simply acting out of a desire to be virtuous (de dicto) is not virtuous. In fact, even acting out of a desire to be virtuous de re is not virtuous: the right reason to save a person from a fire is that he not suffer or die, not that you, the agent, be compassionate, and thus the virtuous person would act out of a desire to prevent suffering or death, not out of a desire to have the virtue of compassion. Self-defeating, right?

Except not really. Peter Railton pointed out that this kind of thing looks paradoxical in theory only if we ignore the various ways in which we can act upon ourselves in practice (his examples: the hedonist who decides he needs non-selfish desires in order to be happy, the ambitious tennis player who needs a bit less focus on winning and a bit more love of the game in order to win). Imagine a person who wants to be virtuous, who roughly knows (or has true beliefs regarding) what virtue is about (more on the person who doesn’t some other time), but does not have the desires or cares of the virtuous person. More realistically, she has some of them, to some extent, but she falls short of what we are willing to call virtue. At first, her actions will not be expressions of virtue, but intrinsic desires do change, however slowly or gradually. They often develop spontaneously out of a more derivative form of desire: you want to learn philosophy in order to do well in law school, and by the end of the course you want it intrinsically. You start playing baseball to please your parents and find yourself continuing to do it long after they have died. If virtue is about desires or cares, it stands to reason that sometimes you can start out volunteering at a homeless shelter because you get a warm and fuzzy feeling from thinking of yourself as virtuous, or even because you get a warm and fuzzy feeling about others believing that you are virtuous, and then find yourself attracted instead to the grateful looks of some of the people in the shelter, and who knows, as the makeup of your motives shifts, find yourself moved to help when nobody is there to praise you. In ancient Jewish sources, much importance is attributed to studying the Torah for its own sake, the only praiseworthy way to do it. However, the advice for the person who cannot muster such pious motivation is to start by mustering ulterior motives, and the intrinsic ones will “come”. I like this attitude. It doesn’t always work, oh no, but it strikes me as more likely to work than the practice of scrutinizing people’s motives – one’s own or others’ – and verbally skewering them if one suspects any “virtue signaling” in the mix. Incidentally, Thomas Hill has a great article on how even Kant, the guy who brought us moral worth, didn’t like the scrutinizing thing – and he didn’t even believe in mixed motives!

So, granted: hard as it is to intentionally acquire or ditch habits of thought or action, it seems even harder to intentionally acquire or ditch an intrinsic desire. Ever tried making yourself a lover of movies when you totally aren’t one, or getting rid of that desire to be tall? But there is no paradox involved, merely an “empirical” difficulty. Such difficulties can be tragic enough, but there is no need to deny that sometimes people intentionally become somewhat more virtuous than they were before. Not by sheer act of will, but by such things as hanging out with virtuous people and having it rub off on you, finding optimistic types who “believe in you” and seeing if you will automatically rise to meet their expectations, following the Talmudic advice to start from exciting ulterior motives and hope for the best, and reading and watching memorable, vivid representations of the point of view of those whom your actions affect. Prosaic takes on human nature, which take moral motivation to be similar to philosophy-studying motivation or baseball-playing motivation or whatever, can be depressing, but they can be rather comforting on those occasions in which prosaic methods work. I can’t pretend any other kind of method worked for me.

The Man Who Was Afraid of the Weasel – A Rant About Mental Disorders

“But the man who was afraid of the weasel had a disease”.

Who? What? Where did this come from? Aristotle doesn’t tell us. As usual, he’s not talking to us – presumably he’s lecturing to some free citizens of his city-state, of which there are – what, 35,000? – who presumably all know the same gossip. With these people he can refer to “the man who was afraid of the weasel” just like you could refer, talking to people at your academic institution, to “the student who published that article in defense of sweatshops a few years back”.

I admit it: I have always been oddly curious about this example. Who was that man who was afraid of the weasel? What was the incident like? Why did Aristotle think it was a disease? For a while, whenever I met an Aristotle expert, I asked them if anything further was known about this story. They always told me that no, it is lost to posterity, and so I am left with my fantasy version of the event, no doubt influenced by my childhood in Israel: a very dark night, an Athenian military encampment, some soldiers sneaking a weasel into a sleeping man’s tent, or whatever they had back then instead of a tent, the man shrieking with terror and looking silly as he jumps to his feet and runs away. Later, the embarrassment, the humiliations reserved for a man who failed a harsh ideal of masculinity (either you are the type who is willing to die in battle, or you’re a coward!) in a tightly knit society. He wonders if he’ll ever live it down, and would not be totally dismissive of the thought that people might still read about him 2,500 years later.

[Photo: a weasel (Mustela nivalis) at the British Wildlife Centre. By Keven Law – originally posted to Flickr as On the lookout…, CC BY-SA 2.0.]

Sadly, we will never know what happened, but the question of what makes a fear – or some other mental state or behavior – a disease or a symptom of disease is still with us. Suppose we try to settle the question of whether someone should count as having a mental illness or not. Is this child what some anti-intellectual cultures call “a nerd”, or does she have a mild version of “autism spectrum disorder”? Is this man grieving for his late wife, or does he suffer from a “major depressive episode” triggered by her death? I wrote in my post before last that very often, we still go about answering this question in a way that has nothing to do with science or with what metaphysicians call “carving the universe at the joints”. Those who are in favor of taking the nerdy child or the grieving man to have a mental disorder argue for it on the basis of the premise that the girl or the man could receive some help if so categorized. The girl could use some breaks at school, the man could use some therapy and some compassion from his employer, say. They often cannot, in this society, receive these things if we don’t call them mentally ill, so it’s practically a moral imperative to call them mentally ill. Therefore, they are mentally ill. If the concept of mental illness or mental disorder is to be anywhere near scientific, this is a pretty bad argument. True, wanting to help is a good motive. We are not talking some evil pharma companies plotting to include the grieving man in the DSM so that they can sell him pills. But it’s a bad argument, all the same.

Those who hold that the girl or the man does not have a mental disorder also use arguments that have nothing to do with science or whatever joints the universe might have, though they, too, have good intentions. “I don’t want this child to be stigmatized as having a mental disorder just because she is nerdy, it will make her feel bad, therefore she does not have a mental disorder” is one. “It is insulting to me and to the memory of my wife to call my grief a mental disorder, so I don’t have one” is another. Full disclosure: I am often intuitively sympathetic to the conclusions of these bad arguments. Calling fairly ordinary aspects of grief an illness sounds problematic to me. I have a few doubts (I do mean doubts, not certainties) about the idea of an “autism spectrum”, as it seems questionable to me that a child who is so terrified of human closeness that she refuses, from infancy, to be hugged or touched by mom or dad, and a child who craves affection as much as anyone but fails to make friendships with peers because he can’t figure out how one starts a conversation, suffer from two varieties and/or degrees of the same problem or trait. Still, that does not allow me, or anyone else, to argue that a diagnosis is dubious simply because it’s insulting. Some people who are so depressed that they spend most of their time crying on their apartment floors are firmly convinced that their depression is due to their superior insight into the nature of the world, or the fact that they have figured out that happiness is not valuable and only shallow people think it is. Such people often take offense if you suggest that the problem is their neurotransmitters, which made them gloomy long before they could spell “nihilism”. Still, it might very well be true – speaking as a person who has had pretty bad depressive episodes herself – that the insulting diagnosis is, for some of them, correct.

So what? So “mental disorder” is not a scientific concept as long as we decide who “gets” to have a disorder or not to have it on the basis of practical rather than theoretical considerations. This is a problem, because ultimately, seriously scientific research into mental disorders is the best way to help those who have them, and for that we need “mental disorder”, as well as “autism”, “depression” etc. to be theoretically respectable concepts. What has to go, I think – not that I know how to make it go! – is the medicalization of suffering. By that I mean not merely the fact that more problems are considered diseases than before – that’s a mixed bag, as it is plenty good that epileptics are viewed as sick rather than possessed by the devil. Nor do I merely mean that some problems are considered diseases which are probably not diseases. When I say that suffering has been medicalized, I mean that a person who is suffering can increasingly receive neither help nor sympathy unless her suffering is regarded as a medical problem, and “medical” suffering is somehow perceived as more “real” and more deserving of remedy than ordinary suffering. In a morally ideal world, a person whose life is a mess because she’s in the middle of a divorce could come to her employer, explain her situation, and get a bit of slack from her. In this world she needs to go to a doctor, tell the doctor exactly what she would have told her employer, and, on the basis of that, get a note that says she has clinical depression – a disorder – and needs, well, exactly the type of consideration she would have asked for. There is something ridiculous about this.

The predicament of a kid who is bullied by everyone in class or who is simply friendless through K-12 is a bad one, both in terms of experience and of impact, and should be treated seriously. If I had to reincarnate as a child and had the choice, I would take a mild bona fide medical condition over this predicament. If the nerdy girl from my example is facing it, and if there is anything we adults can do without making things worse in another way (there often isn’t), we should do it. It shouldn’t matter one iota if the problem is literally a matter of health. Health is not the only good! Illness is not the only bad! If we remember that, perhaps we can approach the question of whether she is best described as having a mental disorder, or autism spectrum disorder in particular, in the spirit of inquiry unfettered by the sense that if we say no, we thereby deprive her of help or sympathy, and so yes is the only decent thing to say. Such inquiry can bring with it better help for both those with autism and those without it. But the blogger who was obsessed with the man who was afraid of the weasel has finished her rant.

Back by Popular Demand

Metaethics! Guitar: Michael Smith.

 

Lyrics:

It ain’t necessarily so

It ain’t necessarily so

What ethicists say

Can sound good in a way

But it ain’t necessarily so

 

Morality trumps other oughts

Morality trumps other oughts

Morality trumps other oughts

No rational action

Can be an infraction

Morality trumps other oughts

 

For Eudaemonia

(You get the idea)

Be virtuous by day and night!

Departures from virtue

Are all gonna hurt you!

Sometimes I wanna say “yeah, right!”

 

We always give laws to ourselves

We always give laws to ourselves

We lose our potential

For being agential

When we break them laws

From ourselves

 

I say it ain’t necessarily so

It ain’t necessarily so

I’ll say this, though, frankly

They’ll stare at me blankly:

It ain’t necessarily so!

 

Reflections on the Concept of Mental Disorder

“Nomy, our purpose here is to help you become just like everyone else”.

That’s what the school counselor told me when I was a kid. And then another school counselor. And then another. I am not paraphrasing, or dramatizing, or anything. Translating from Hebrew to English is all. They all referred my parents to psychologists and psychiatrists, who would help me even better toward this goal – being like everyone else – which they firmly assumed I shared, no matter what I said. Little wonder, then, that by the time I became a teenager, I was certain that the concept of a mental disorder was nothing but a tool of oppression used against unusual people by those who want everyone to become just like everyone else.

(PSA: if you have a child who is beaten up by the other kids because she’s reading Great Expectations at 11, like I did, or because she looks unusual, or because of some incidental vagary of child social dynamics, or even because she has bad social skills, do think carefully before sending her to some kind of shrink. You need to make sure your child does not get the message “the other kids beat me up because there is something wrong with me, and my parents agree, so they are sending me to be fixed”.)

Years later I had to give up my Szasz-ian extremism, because depression, along with hypomania and anxiety, threatened to kill me. Slowly it dawned on me that while the professionals of my childhood were wrong to try to cure me of reading Great Expectations, there was a case for calling some things mental disorders. Seeing my roommate react with fear and trembling to a small spider provided one datum: there was no way that her suffering was “socially constructed” in the English department sense of the term. It was real, and the term “disorder” seemed to fit it. It also seemed to fit my depression, hypomania and anxiety.

So what is a mental disorder, then? I knew what I wanted cured: my suffering. So, was I going to call any extended mental state or brain state constituting or leading to significant distress a mental disorder? That used to be pretty close to the DSM definition, and many shrinks will still tell you that if it causes either distress or disruption in functioning, it is a mental disorder. But this plausible-sounding theory is pretty terrible upon reflection. My love of Spinoza as a teenager caused me significant distress, because it caused kids to beat me up. It also interrupted my functioning, because it’s hard to function when you are beaten up. Still, loving Spinoza is not a mental disorder. Being gay in the 1970s caused one enormous suffering – everything from self-hatred to trouble with the law – and that helped keep homosexuality in the DSM till 1973 and “ego-dystonic homosexuality” (homosexuality, provided that one wants it “cured” in oneself) till much later. The distress/bad-functioning definition of mental disorder, in other words, does not block the term from being used, in the oppressive manner typical of the school counselors of my childhood, by those who want to tell gay people to be just like everyone else.

Some later DSM writers tried to solve the problem associated with defining a mental disorder as a state of mind/brain/behavior/whatever that causes distress or trouble functioning by simply adding to it a disclaimer along the lines of “the problem has to be with the individual, not with a conflict between the individual and society”. That didn’t work, because it is the job of a definition of a mental disorder to tell us when there is a problem “with the individual” and when there isn’t. Presently, we don’t have a definition that can do this job. The reason we no longer think that a woman who refuses to be a homemaker is showing a problem with functioning is not that our definition of mental disorder improved. It’s that our moral outlook did.

Part of the trouble with defining mental disorder the way philosophers try to define things is that this would be at cross purposes with what the writers of the DSM are trying to do. Philosophers look for the true, or at least the coherent. Shrinks look for the useful. Let me explain. The Diagnostic and Statistical Manual of Mental Disorders is becoming a thicker and thicker book. More and more things are called mental disorders. There is a cynical hypothesis about the cause: shrinks want people to go to them and give them money. However, there is also a charitable hypothesis: shrinks want to help people, and nowadays, however obvious a person’s suffering, she can’t get the insurance company to fund psychiatric help if the suffering isn’t defined as a disorder. If a person who suffers from grief wants to take some pills to help with insomnia or make it easier to go back to work, her grief needs to be redefined as a major depressive episode, and therefore a disorder. You have to call something a mental disorder if people are to receive help for it. Thus, however they define “mental disorder” in the introductions to their books, when you look at the long list of things that are classified as mental disorders, you see that the one thing they have in common is popular demand for insurance coverage. The trouble is, of course, that “something is a mental disorder iff people want psychiatrists to help them with it” does not sound like a definition that captures a natural kind. It is basically another incarnation of the distress/bad-functioning thing.

What about natural language? “Mentally ill” replaced terms like “insane”, “crazy” and “nuts”, which are, in many ways, colloquial ways to say “patently irrational”. The things that were considered forms of insanity or forms of “neurosis” when Freud was alive and are still considered paradigmatic mental illnesses today basically are forms of gross irrationality, or cause gross irrationality. These would be: psychosis that leads a person to think, irrationally, that he is Napoleon; depression that becomes so bad that the person thinks that the fact that she forgot to buy milk makes her as despicable as a Nazi, or, against all evidence, that her family will be delighted to see her dead; mania that leads a person to spend all his money and run off with his secretary to pursue a business deal that he is normally plenty smart enough to see is nonsense; terrible fear of tiny, harmless spiders; etc. To this day, being told that one’s thoughts or feelings or actions are symptoms of a mental disorder can be insulting or reassuring in a way that only being told one is being grossly irrational can be. Let me explain.

Maybe I’ll start with the reassuring. Suppose you are really afraid that there is a monster under your bed – literally or figuratively – and someone convinces you that your fear is a symptom of a mental disorder. That can be wonderful news. When I express fearful or self-hating thoughts, being told “Nomy, that sounds crazy” can be music to my ears. I am irrational! The fact that I forgot to answer an email from the secretary does not mean I am worthless! My fear or sadness is unwarranted! Now for the insulting: if you are very sad purely because your attempts to make your country a democracy have failed, and someone refers to your sadness as a clinical problem, it can be infuriating. No, you think, I am not irrational. I am responding appropriately to reasons. Calling it a disorder is refusing to see that. This is one reason people were angry when the latest DSM amended the definition of depression in such a way that it now includes many grieving people. People who don’t want their grief in the DSM do not deny that they are suffering and do not always deny that they could use some professional help. What they want to deny is that there’s anything grossly irrational about their grief. Of course, rationality and irrationality can be woven fine. A person can start being depressed because he lost his job – presumably a reason to be sad – but then, despite the fact that he was fired due to a recession, start feeling worthless or bad because he’s unemployed, and that’s where irrationality can creep in – even gross enough irrationality for the person to count as having a disorder.

So paradigmatic mental disorders involve serious irrationality. To that you can add conditions often thought of as disabilities rather than disorders, in which the problem is not irrationality but cognitive impairment of some sort (e.g. low intelligence, lack of some kind of know-how). Perhaps they too belong in some divine version of the DSM. But what about conditions that do not grossly affect one’s rationality and involve no cognitive impairment? My hunch is that there is something very problematic in calling them mental disorders, as opposed to problems, troubles, eccentricities, ways of being neuro-atypical, or sometimes even vices. If you think the DSM, considered from the aspect of truth and not insurability, is getting too thick, this just might be what’s bothering you. But to be continued.

Epistemology and Sandwiches

To Sophie Horowitz we owe the following question: if having enough blood sugar contributes to our cognitive abilities, does it mean that we sometimes have an epistemic duty to eat a sandwich?

The last three posts in Owl’s Roost concerned some reasons to think that there are no practical reasons to believe and no duties of any sort to believe (or conclude, or become less confident, etc.) – at least if we interpret “duty” in a way that’s friendly to deontologists in ethics. But never mind practical reasons to believe. Can there be epistemic reasons to act? Or, for that matter, epistemic duties to act? For example, deliberating is an action, something that you can intentionally do. One can also intentionally review the evidence in a complicated case, ask experts for their opinions, or check the origins of that article forwarded to one on Facebook that claims that cats cause schizophrenia. Do we have epistemic duties and epistemic reasons to do these things?

If we want money, there are things that we have reasons – practical reasons – to do. Call these practical reasons “financial reasons”. Similarly, there are things that we have reasons – practical reasons – to do if we want to be healthy. Call these practical reasons “health reasons”. “Health reasons” are practical reasons that apply to people whose goal is health; “financial reasons” are practical reasons that apply to people whose goal is money. Now, suppose your goal is not health, or wealth, or taking a trip on the trans-Siberian train, or breeding Ocicats in all 12 official colors, or doing well on the philosophy job market without losing your mind. Your goal happens to be knowledge – in general or in a specific topic. Or maybe your goal is to be epistemically rational – you dislike superstition, wishful thinking, paranoia, and so on – or to have justified beliefs if you can. Just like health, wealth, train riding or Ocicat breeding, knowledge and justified beliefs are goals that give rise to practical reasons. Practical reasons to deliberate well, to review evidence, to avoid some news outlets, and even, at times, to eat a sandwich. Can these be called “epistemic reasons”? Yes, but in a sense parallel to “financial reasons” and “health reasons”, not in a sense that contrasts them with practical reasons. Is “eat a sandwich when hunger clouds your cognitive capacities” an epistemic norm? Yes and no. Yes in the sense that “make sure to clean your brushes when you paint pictures” is an aesthetic norm, and no in the sense that “orange doesn’t go with purple” is an aesthetic norm. Cleaning one’s brushes is conducive to creating beauty but is not in itself beautiful. To the extent that one wants to create beauty using paintbrushes, one usually has a reason to clean them. To the extent that one wants to come up with nice valid arguments in one’s paper, one has a reason to eat enough before writing. That does not make eating that sandwich epistemically rational in the same sense that believing what the evidence suggests is epistemically rational. Eating that sandwich is a way to make yourself epistemically rational, which happens to be what you want. In short, the adjective “epistemic”, now added to a larger number of nouns than ever before in the history of philosophy, can signify a type of normativity, a kind of reason, or it can signify a type of object to which a norm applies – the stuff that a reason is concerned with, which happens to be knowledge or belief rather than money, paintings, Ocicats, cabbages or kings. I think the distinction needs to be kept in mind.

So… “epistemic duties to act” is just another name for “practical reasons that one has to act if one is after knowledge or justified belief”. Or is it really? Some might argue that there might be epistemic duties that do not depend on what one is after. Knowledge, truth, justified belief, or epistemic rationality are, say, objectively good things and one has a duty to pursue them – we’re talking something resembling a categorical imperative, not a hypothetical one. Perhaps we should seek knowledge, or at least some intrinsically valuable kinds thereof, regardless of what we happen to want. But “one should seek knowledge”, as well as “one should seek epistemic rationality” and “one should seek the truth”, are epistemic norms only in the sense that they happen to be about knowledge, rationality, etc. They are not different in kind from the practical directives “one should seek beauty”, “one should seek pleasure”, “one should seek honor”, and so on. Why one might want to seek knowledge is an issue for epistemologists, but it is also an issue for old-fashioned ethicists, theorists of the good life, who try to figure out what, in general, is worth pursuing. It makes sense to say that Sherlock Holmes, who refuses to pursue any knowledge that isn’t directly relevant to the cases he encounters as a detective, is missing out on something good, or on a part of the good life, and it makes sense to say (though I am not the type to say it) that he is irrational in refusing to pursue more knowledge. But to say that he is thereby epistemically irrational is odd. He is Holmes. He is as epistemically rational as it gets. If he is irrational, it’s a practical irrationality – albeit not in the colloquial sense of “practical” – in refusing to pursue episteme outside the realm of crime detection.

Epistemic Norms Aren’t Duties, Epistemic Irrationality Isn’t Blame

The words “duty” and “blame” can be used in many ways. You can blame the computer for an evening ruined by a computer failure, but computers are not morally blameworthy. You can talk about Christa Schroeder, Hitler’s secretary, performing her duties, but morally speaking her duty was – what? To work somewhere else? To become a spy? To screw up her work as much as possible? When I say that there is no epistemic blame and there are no epistemic duties, I mean that epistemic norms behave very differently from moral duties as deontologists talk about them, and epistemic irrationality behaves very differently from moral blame as free will/moral psychology people talk about it. I do not intend to deny that epistemology is normative, but that it is normative does not imply that for every ethical concept there is an epistemological concept that is exactly isomorphic to it.

I talked in previous posts about why I think there are no practical reasons to believe, though there can be practical reasons to make yourself believe. At this point I will say nothing about duties to make yourself believe things and stick to putative duties to believe or not believe things – say, not to believe contradictions.

The thing about deontology is that you get an A for effort. Suppose, for example, Violet has a duty to return Kiyoshi’s book, which she promised to give back on March 16th. However, a large snowstorm causes her flight to be cancelled, and despite all her efforts, she can only get the book back to Kiyoshi on the 17th. Any Kantian will hold that even though the book does not return to its owner on time, Violet’s good will shines like a jewel as long as she has tried. Some would say that ought implies can and therefore Violet did nothing wrong. She discharged her duty. Others would say that she has done wrong, but is exempt from blame.

Compare that to an epistemic norm: one ought not believe contradictions. Suppose Violet, who is in my intro ethics class, tries hard not to believe contradictions (my opening spiel in the first class often includes a reference to the importance of not believing them). She tries especially hard with regard to the class material, which she studies feverishly, rehashes in office hours, etc. Still, in the final paper, she writes sincerely, in response to one question, that all morality is relative to culture and, in response to another, that murder is “absolutely” wrong, regardless of circumstances. Violet’s valiant efforts not to believe contradictions do not result in her getting an A for effort – not literally, in my class, and not figuratively, vis-à-vis epistemic norms. If it is epistemically irrational to believe contradictions, Violet is irrational in believing one, regardless of how hard she tries not to be. She is not the least bit less irrational because of her efforts.

It might seem that epistemic norms too grant As for effort, because they do sometimes grant an A for a process of responding to reasons that results in a false belief. For example, a person I met at a conference guessed quickly, plausibly and wrongly that my name is Turkish. The error makes sense – there is a place in Turkey called Arpali, and the man made an inference from that – and, assuming that the man’s degree of credence was proportioned to the evidence, he does get “an A” for his reasoning, or for his response to epistemic reasons. But despite the tempting analogy, it’s important that the A is not literally for effort. As it happened, no effort was involved – the man’s guess seems to have come to him quickly. He made a good inference, and it doesn’t matter whether it came to him through effort or not – in fact, some might regard him as more epistemically virtuous because no effort was needed. On the other hand, if, instead of making a good guess, he were to stand there and wrinkle his forehead and come up with a guess that makes no sense at all, no amount of effort on his part would make us treat his inference as less bad.

It is an interesting question whether the effort thing is a problem for epistemic virtue talk, as opposed to duty talk. While trying hard to return a book may discharge your duty to return it, trying hard to acquire the virtue of courage, say, does not mean that you have the virtue of courage, and trying to act generously does not automatically imply acting generously. Virtue ethics does not generally give As for effort (whether it gives some credit for effort is a different question).

Here is a related asymmetry regarding blame and the charge of epistemic irrationality. Suppose Anna attacks her professor because she believes, during a psychotic episode, that he is the devil. Anna is exempt from moral blame: “you can’t blame her, she was psychotic”. She is not exempt from the charge of epistemic irrationality that can be leveled at her belief that the professor is the devil. (“She’s not epistemically irrational, she is psychotic”? Doesn’t work. Having a psychotic episode is one way of being irrational).

Someone might ask: OK, so we don’t have duties to believe, or not to believe, or to infer, but don’t we have other epistemic duties – say, a duty to deliberate well, or to avoid watching fake news if we can help it? Can’t we incur epistemic blame if we fail to discharge these duties? OK, to be continued. I mean it.

Don’t Worry, Be Practical: Two Raw Ideas and a PSA

1) If you are not too superstitious – I almost am – imagine for a moment that you suffer from cancer. Imagine that you do not yet know if the course of treatment you have undergone will save you or not.  You sit down at your doctor’s desk, all tense, aware that at this point there might be only interim news – indications that a good or a bad outcome is likely. The doctor starts with “well, there are reasons to be optimistic”. Though you are still very tense, you perk up and you feel warm and light all over. You ask what the reasons are. In response, the doctor hands you a piece of paper: ironclad scientific results showing that optimism is good for the health of cancer patients.

Your heart sinks. You feel like a victim of the cruelest jest. I’ll stop here.

Some philosophers would regard what happens to you as being provided with practical reasons to believe (that everything will turn out alright) when you desperately hoped for epistemic reasons to believe (that everything will turn out alright). I, as per my previous post, think what happens is being provided with practical reasons to make yourself believe that everything will turn out alright (take an optimism course, etc.) when you desperately hoped for epistemic reasons to believe that everything will turn out alright – I think the doctor says something false when she says there are reasons to be optimistic. For the purpose of today it does not matter which explanation is the correct one. The point is that when we theorize about epistemic reasons and putative practical reasons to believe, the philosopher’s love of symmetry can draw us towards emphasizing similarities between them (not to mention views like Susanna Rinard’s, on which all reasons are practical), but any theory of practical and epistemic reasons heading in that direction needs a way to explain the Sinking Heart intuition – the fact that in some situations, receiving practical reasons when one wants epistemic reasons is so incredibly disappointing, so not to the point, that even if the practical reasons are really, really good, the best ever, it is absurd to expect that any comfort, even nominal comfort, be thereby provided to the seeker of epistemic reasons. What the doctor seems to offer and what she delivers seem incredibly different. The seeming depth of this difference needs to be addressed by anyone who emphasizes the idea that a reason is a reason is a reason.

To move away from extreme situations: anyone who is prone to anxiety attacks knows how $@#%! maddening it is when a more cheerful spouse or friend tells you that it’s silly of you to worry and then, when your ears perk up, hoping for good news you have overlooked, gives you the following reason not to worry: “there is absolutely nothing you can DO about it at this point”. Attention, well-meaning cheerful friends and partners the world over: the phrase “there is nothing you can do about it” never helps a card-carrying anxious person worry less. I mean never. It makes us worry more, as it points out quasi-epistemic reasons to be more anxious. “There’s nothing you can do about it now” is bad news, and being offered bad news as a reason to calm down seems patently laughable to us. Thank you! At the basis of this common communication glitch is, I suspect, the fact that practical advice on cognitive matters is only useful to the extent that you are the kind of person who can manipulate her cognitive apparatus consciously and with relative ease (I’m thinking of things like steering one’s mind away from dwelling on certain topics). The card-carrying worrier lacks that ability, to the point that the whole idea of practical advice about cognitive matters isn’t salient to her. As a result, a statement like “it’s silly of you to worry” raises in her the hope of hearing epistemic reasons not to worry, which is then painfully dashed.

2) A quick thought about believing at will. When people asked me why I think beliefs are not voluntary in the way that actions are, I used to answer in a banal way by referring to a (by now) banal thought experiment: if offered a billion dollars to believe that two plus two equals five, or threatened with death unless I believe that two plus two equals five, I would not be able to do it at will. A standard answer to the banal thought experiment points out that actions one has overwhelming reasons not to do can also be like that. Jumping in front of a bus would be voluntary, but most people feel they can’t do it at will. But I think this answer, in a way that one would not expect from those who like the idea of practical reasons to believe, greatly overestimates how much people care about the truth. Let me take back the “two plus two” trope, as not believing that two plus two equals four probably requires a terrifying sort of mental disability, and take a more minor false belief – say, that all marigolds are perennial. If offered a billion dollars, I would still not be able to believe at will that all marigolds are perennial. To say that this is like the bus case would pay me a compliment I cannot accept. One has a hard time jumping in front of a bus because one really, really does not want to be hit by a bus. One wants very badly to avoid such a fate. Do I want to believe nothing but the truth as badly as I want to avoid being hit by a bus? Do I want to believe nothing but the truth so badly that I’d rather turn down a billion dollars than have a minor false belief? OK, a false and irrational belief. Do I care about truth and rationality so much that I’d turn down a billion dollars not to have a false and irrational belief? Nope. Alternately, one might hold that jumping in front of a bus is so hard because evolution made it frightening. But am I so scared of believing that all marigolds are perennial? No, not really. I am sure I have plenty of comparable beliefs and I manage. I’d rather have the money. I would believe at will if I could. It’s just that I can’t. We are brokenhearted when we get practical reasons when we want epistemic reasons, but the reason isn’t that we are as noble as all that.

Beliefs, Erections and Tears, Or: Where are my Credence-Lowering Pills?

People who are not philosophers sometimes “come to believe” (“I’ve come to believe that cutting taxes is not the way to go”) or “start to think” (“I’m starting to think that Jolene isn’t going to show up”). Philosophers “form beliefs”. People who are not philosophers sometimes say “I have read the report, and now I’m less confident that the project will succeed”, and sometimes write “reading the report lowered my confidence in the success of the project”. Philosophers say “I have read the report and lowered my credence that the project will succeed”.

In other words, philosophers talk about routine changes in credence as if they were voluntary: I, the agent, am doing the “forming” and the “lowering”. That is kind of curious, because most mainstream epistemologists do not think beliefs are voluntary. Some think they sometimes are – perhaps in cases of self-deception, Pascal’s wager, and so on – but most take them to be non-voluntary by default. Very, very few hold that just like I decided to write now, I also decided to believe that my cats are sleeping by my computer. Yet if I were their example, they would say “Nomy looks at her desk and forms the belief that her cats are sleeping by the computer”. If I say anything to them about this choice of words, they say it’s just a convenient way to speak.

John Heil, wishing to be ecumenical in “Believing What One Ought”, says that even if we think that belief is not, strictly speaking, voluntary, we can agree that it is a “harmless shorthand” to talk about belief as if it were voluntary (to talk about it using the language of action, decision, responsibility, etc.). Why? Because it is generally possible for a person to take indirect steps to “eventually” bring about the formation of a belief.

OK then – so erections are voluntary! Let’s use the language of action, decision, and responsibility in talking about them! No, seriously. It is possible for a man to take indirect voluntary steps to bring it about that he has an erection. And yet, if I teach aliens or children an introduction to human nature, I do something terribly misleading if I talk about erections as if they were voluntary.

Another example: suppose Nomy is a sentimental sop and hearing a certain song can reliably bring her to tears. Heck, just playing the song in her head can bring her to tears. There are steps Nomy can take to cause herself to burst out in tears. Still, again, in that Intro Human Nature course one cannot talk about outbursts of tears as if they were voluntary without being misleading.

Last, would any self-respecting action theorist say that the phrase ‘she raised her arm’ can be a ‘harmless shorthand’ way to describe a person who caused her arm to jerk upwards by hooking it to an electrode? Or, worse, a “harmless shorthand” by which to describe a person whose arm jerked due to a type of seizure but who easily could have caused such jerking through using electrodes or prevented it through taking a medication?

In all of these cases, a “shorthand” would not be harmless – for two reasons. The more banal one is that when eliciting intuitions about birds, you don’t want the ostrich to be your paradigm case. Most erections, outbursts of tears, and arm convulsions are not the result of intentional steps taken to bring them about, and there is a limit to what we can learn about them from the fewer cases that are. The same is true for most beliefs. Many of my beliefs are the result of no intentional action at all – my belief that the cats are here came into being as soon as they showed up in my field of vision, my belief that there are no rats in Alberta came into being as soon as Tim Schroeder told me – and other beliefs I have are the result of deliberation, which is an action, but not an intentional step taken to bring about a particular belief. (So my belief that the ontological argument fails was not the result of intentional steps taken to make myself believe that the ontological argument fails.) Whatever intuitions one might have about non-paradigmatic cases, like Pascal’s wager, could easily fail to apply to the many cases in which no step-taking has preceded a belief.

But to talk of erections and tears as if they were voluntary is also dangerous for a deeper reason, a reason that has nothing to do with the frequency of indirect-steps cases. Even if the majority of tears, erections, or convulsions were indirectly self-induced, there would still be a difference between the action taken to induce the erection, tears or seizure and the thing that results from the action. If the former is voluntary, that alone doesn’t make the latter voluntary. Similarly, even if we routinely had the option of making ourselves believe something through the pressing of a button, and we routinely took advantage of this option, there would still be a difference between the act of pressing the button – an action – and the state of believing itself. If we make “pressing the button” a mental action – analogous to intentionally calling to mind images that are likely to produce an erection or an outburst of tears – it hardly matters: the mental action that produces a belief would still be different from its outcome.

Why does it matter? Because we have practical reasons only for what we do voluntarily. It seems quite clear that, on those occasions, whatever their number, in which we can, in fact, for real, form a belief, there can be practical reasons to form that belief or not to form it, and it seems only slightly less clear that sometimes it could be rational to form a belief that clashes with the evidence. This, however, is taken to mean that there are practical reasons to believe. I am working on a paper arguing that this does not work. We have practical reasons to take steps to bring a belief about. We sometimes have practical reasons to make ourselves believe something, but that’s not the same as practical reasons to believe it. No such thing. Category mistake. This is where one could ask: why not call reasons to create a belief in oneself “reasons to believe”? Isn’t that a harmless shorthand?

I don’t think so. I agree that “reasons to stay out of debt” is a nice shorthand for “reasons not to perform actions that can lead to one’s being in debt and to perform actions that are conducive to staying out of it”, but while “reasons to stay out of debt” just are reasons for various actions and inactions, you can have “reasons to believe” something without any implied reasons for any actions or inactions. Jamal’s being out of debt is an instance of rationality (or response to reasons) on his part iff Jamal’s being out of debt is the intended result of a rational course of action taken by Jamal. Gianna’s belief that Israelis don’t speak Yiddish can be perfectly rational (as in, there for good reasons) even if it’s not the result of any rational course of action taken by her. Perhaps the belief occurred in her mind as the result of unintentionally hearing a reliable person’s testimony, no action required, or was the result of an irrational action like reading Wikipedia while driving; it can still be as rational a belief as they come. When we say “reasons to believe” it is not normally a shorthand for “reasons to make oneself believe”, and so to shorthand “reasons to make oneself believe” to “reasons to believe” is not harmless, but very confusing. To be continued.

Philosophy: Truth or Dare?

Many years ago I was having a long chat with someone who later became a well-known philosopher. His work was already way cool, but looking at the theses he defended, I told him he must be aiming for the Annual David Lewis Award for Best-Defended Very Weird View. He told me that he did not always believe the views he defended. He was most interested in seeing how far he could go defending an original, counterintuitive proposition as well as he could. What did I think? I said that it seems to me that some philosophers seek the Truth but others choose Dare.

I am more of a Truth Philosopher than a Dare Philosopher, but I doubt it’s a matter of principle, given that my personality is skewed towards candor. I’m just not a natural for writing things in which I don’t have high credence at the time of writing. However, if you are human, should you ever have high credence in a view like, say, compatibilism, which has, for a long time, been on one side of not only a peer disagreement but a veritable peer staring contest? From one angle, the mind boggles at the hubris.

Zach Barnett, a Brown graduate student, has been working on this and has a related recent paper in Mind. I asked him to write about it for Owl’s Roost and he obliged. Here goes:

I want to discuss a certain dilemma that we truth-philosophers seem to face. The dilemma arises when we consider disagreement-based worries about the epistemic status of our controversial philosophical beliefs. For example:

Conciliationism: Believing in the face of disagreement is not justified – given that certain conditions are met.

Applicability: Many/most disagreements in philosophy do meet the relevant conditions.

————————————————————————————————

No Rational Belief: Many/most of our philosophically controversial beliefs are not rational.

Both premises of this argument are, of course, controversial. But suppose they’re correct. How troubling should we find this conclusion? One’s answer may depend on the type of philosopher one is. 

The dare-philosopher needn’t be troubled at all. She might think of philosophy as akin to formal debate: We choose a side, somehow or other, and defend it as well as we can manage. Belief in one’s views is nowhere required.

The truth-philosopher, however, might find the debate analogy uncomfortable. If we all viewed philosophy this way, it might seem to her that something important would be missing – namely, the sincerity with which many of us advocate for our preferred positions. She might protest: “When I do philosophy, I’m not just ‘playing the game.’ I really mean it!”

At this point, it is tempting to think – provided No Rational Belief is really true – that the truth-philosopher is just stuck: If she believes her views, she is irrational; if she withholds belief, then her views will lack a form of sincerity she deems valuable.

As someone who identifies with this concern for sincerity, I find the dilemma gripping. But I’d like to explore a way out. Perhaps the requisite sort of sincerity doesn’t require belief. An analogy helps to illustrate what I have in mind.

Logic Team: You’re on a five-player logic team. The team is to be given a logic problem with possible answers p and not-p. There is one minute allotted for each player to work out the problem alone, followed by a ten-second voting phase, during which team members vote one by one. The answer favored by a majority of your team is submitted.

      Initially, you arrive at p. During the voting phase, your teammate Vi – who, in the past, has been more reliable than you on problems like this one – votes first, for not-p. You’re next. Which way should you vote?

Based on your knowledge of Vi’s stellar past performance, you might suspect that you made a mistake on this occasion. Perhaps you will cease to believe that your original answer is correct. Indeed, you might well become more confident of Vi’s answer than you are of your own.

It doesn’t follow, though, that you should vote for Vi’s answer of not-p. If all you care about is the accuracy of your team’s verdict, it may still be better to vote for your original answer of p.

Why? In short, the explanation of this fact is that there is some value in having team members reach independent verdicts. To the extent that team members defer to the best player, independence is diminished. This relates to a phenomenon known as “wisdom of the crowd,” and it relates more directly to Condorcet’s Jury Theorem. But all of this, while interesting, is beside the point.
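(For the curious, a toy calculation makes the independence effect concrete – the accuracies below are invented purely for illustration. If each of five teammates independently solves a problem correctly with probability 0.7, the majority verdict is right about 84% of the time, whereas a team in which everyone defers to a single member who is right 80% of the time is, of course, right exactly 80% of the time. A minimal Python sketch of the arithmetic, under those assumptions:)

```python
import itertools

# Toy illustration with invented accuracies: majority voting among
# independent solvers vs. everyone deferring to the strongest member.

def majority_accuracy(accuracies):
    """Probability that a strict majority of independent voters is correct."""
    total = 0.0
    for outcome in itertools.product([True, False], repeat=len(accuracies)):
        # Probability of this exact pattern of right/wrong answers.
        p = 1.0
        for correct, acc in zip(outcome, accuracies):
            p *= acc if correct else 1 - acc
        if sum(outcome) > len(accuracies) / 2:  # a majority got it right
            total += p
    return total

team = [0.7] * 5   # five teammates, each right 70% of the time, voting independently
vi_alone = 0.8     # if everyone defers to Vi, the team is exactly as good as Vi

print(f"independent majority vote: {majority_accuracy(team):.3f}")  # ~0.837
print(f"everyone defers to Vi:     {vi_alone:.3f}")                 # 0.800
```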

In light of the above observations, suppose that you do decide to vote for your original answer, despite not having much confidence in it. Still, there is still an important kind of sincerity associated with your vote: in a certain sense, p seems right to you; your thinking led you there; and, if you were asked to justify your answer, you’d have something direct to say in its defense. (In defense of not-p, you could only appeal to the fact that Vi voted for it.) So you retain a kind of sincere attachment to your original answer, even though you do not believe, all things considered, that it is correct.

To put the point more generally: In at least some collaborative, truth-seeking settings, it can make sense for a person to put forward a view she does not believe, and moreover, her commitment can still be sincere, in an important sense. Do these points hold for philosophy, too? I’m inclined to think so. Consider an example.

Turning Tide: You find physicalism more compelling than its rivals (e.g. dualism). The arguments in favor seem persuasive; you are unmoved by the objections. Physicalism also happens to be the dominant view.

      Later, the philosophical tide turns in favor of dualism. Perhaps new arguments are devised; perhaps the familiar objections to physicalism simply gain traction. You remain unimpressed. The new arguments for dualism seem weak; the old objections to physicalism continue to seem as defective to you as ever. 

Given the setup, it seems clear that you’re a sincere physicalist at all points of this story. But let’s add content to the case: You’re extremely epistemically humble and have great respect for the philosophers of mind/metaphysics of your day. All things considered, you come to consider dualism more likely than physicalism, as it becomes the dominant view. Still, this doesn’t seem to me to undermine the sincerity of your commitment to physicalism. What matters isn’t your all-things-considered level of confidence, but rather, how things sit with you, when you think about the matter directly (i.e. setting aside facts about relative popularity of the different views). When you confront the issues this way, physicalism seems clearly right to you. In philosophy, too, sincerity does not seem to require belief (or high confidence).

In sum, perhaps it is true that we cannot rationally believe our controversial views in philosophy. Still, when we think through the controversial issues directly, certain views may strike us as most compelling. Our connection to these views will bear certain hallmarks of sincerity: the views will seem right to us; our thinking will have led us to them; and we will typically have something to say in their defense. These are the views we should advocate and identify with – at least, if we value sincerity.

I find the proposed picture of philosophy attractive. It offers us a way of doing philosophy that is immune to worries from disagreement, while allowing for a kind of sincerity that seems worth preserving. As an added bonus, it might even make us collectively more accurate, in the long run.

That was Zach Barnett. Do I agree with him? As is usual when I talk to conciliationists, I don’t know what to think!

Raw Reflections on Virtue, Blame and Baseball

In a much argued-about verse in the Hebrew Bible, we are told that Noah was a righteous man and “perfect in his generations” or “blameless among his contemporaries” or something like that (I grew up on the Hebrew, and so I can say: the weirdness is in the original). The verse has been treated as an interpretative riddle because it’s not clear what being “blameless among one’s contemporaries” amounts to. Was the guy really a righteous person (as is suggested by the subsequent text telling us that he walked with God) or was he a righteous person only by comparison to his contemporaries, who were dreadful enough to bring a flood on themselves?

My friend Tim Schroeder would probably have suggested that, given his time, Noah must have had an excellent Value Over Replacement Moral Agent. It’s kinda like Value Over Replacement Player. Here’s how Wikipedia explains the concept of Value Over Replacement Player:

In baseball, value over replacement player (or VORP) is a statistic (…) that demonstrates how much a hitter contributes offensively or how much a pitcher contributes to his team in comparison to a fictitious “replacement player” (…) A replacement player performs at “replacement level,” which is the level of performance an average team can expect when trying to replace a player at minimal cost, also known as “freely available talent.”

Tim and I have been toying with the idea that while rightness, wrongness and permissibility of actions are not the sort of things that depend on what your contemporaries are doing, ordinary judgments of the virtue of particular people (“she’s a really good person”, “he’s a jerk”, and so on) are really about something akin to a person’s Value Over Replacement Moral Agent or VORMA. The amount of blame one deserves for a wrong action or credit for a right action also seems to be at least partially a matter of VORMA. Thus a modest person who is thanked profusely for his good action might wave it off by saying “come on, anyone would have done this in my place”, while a defensive person blamed emphatically for her bad action might protest that “I’m no worse than the next person”. Both statements allude to a comparison to a sort of moral “replacement player” – an agent who would, morally speaking, perform at “replacement level”, the level we would expect from a random stranger, or, more likely, a random stranger in a similar time, place, context – whom we would regard as neither morally good nor morally bad.
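Since VORMA is meant to work like VORP, a minimal sketch may help. Every number here is invented, and the “scores” are a cartoon, not a claim that moral performance is really quantifiable: value over replacement is just observed performance minus the performance of a freely available stand-in, so the very same conduct can come out above or below replacement depending on the comparison pool.

```python
# Toy sketch with invented numbers: "value over replacement" is observed
# performance minus the performance of a freely available stand-in.
def value_over_replacement(agent: float, replacement: float) -> float:
    return agent - replacement

# Baseball (VORP-style): a hitter creating 55 runs vs. a 40-run replacement.
print(value_over_replacement(55, 40))  # 15 runs above replacement

# Morality (VORMA-style): the same fictional "conduct score" of 6, measured
# against different replacement pools.
print(value_over_replacement(6, 2))  # +4: stellar next to dreadful contemporaries
print(value_over_replacement(6, 8))  # -2: below replacement by a later pool's standards
```

Which replacement pool supplies the right comparison class is exactly the pragmatics- and context-sensitive question that the grandfather case below turns on.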

I have been reading a cool paper by Gideon Rosen on doing wrong things under duress. A person who commits a crime under a credible threat of being shot if she refuses seems to be excused from blame, Rosen says, even if, as Aristotle would have it, the person acted freely, or, as contemporary agency theorists would have it, the person acted autonomously. The person who commits a crime so as not to be killed is not necessarily acting under conditions of reduced agency, so where does the excuse come from? Rosen thinks, as I do, that excuses are about quality of will, and argues that the person who acts immorally under (bad enough) duress does not, roughly, show a great enough lack of moral concern to justify our blaming her in the Scanlonian sense of “blame” – that is, socially distancing ourselves from her. Simply falling short of the ideal of having enough moral concern to never do anything wrong does not justify such distancing.

Without getting into the details of Rosen’s view, I would not be surprised if this has something to do with VORMA as well. Even in cases in which a person who commits a crime to avoid being killed acts wrongly (and I agree with Rosen that there are many such cases), the wrongdoer does not usually show negative VORMA. If I were to shun the wrongdoer, I would arguably be inconsistent insofar as I do not shun, well, typical humanity, who would have acted the same way. I suspect that even if I happened to be unusually courageous, a major league moral agent, and escaped my own criteria for shunning, there would still be something very problematic about shunning typical humanity.

VORMA might also explain the ambivalence we feel towards some people whom it is not utterly crazy to describe as “perfect in their generations” or “blameless among their contemporaries”, like Noah. “My grandfather was a really, really good person!”, says your friend. She forgets, when she says it, that she thinks her grandfather was sexist in various ways – though, to be sure, a lot less so than his neighbors. Heck, she forgets that by her own standards, eating meat is immoral, and her grandfather sure had a lot of it. But unlike the Replacement Player in baseball, who is clearly defined in terms of the average performance of players you would find in second-tier professional teams, our choice of a pool of imagined Replacement Moral Agents seems inevitably sensitive to pragmatics and context. Your friend’s grandfather had magnificent VORMA if all the bad things he did were done by almost everyone in his demographic and time period and if he often acted well where almost none of them would have. While we might have useful ideals of virtuous people who always do the right thing, the phrase “wonderful person” when applied to a real human might normally mean something more analogous to a star baseball player. As we know, such players get it wrong a lot of the time!

PS Eric Schwitzgebel has very interesting related work about how we want “a grade of B” in morality.

PPS For why I don’t think the grandfather is simply excused from his sexism by moral ignorance, see my paper “Huckleberry Finn Revisited”.