(I hope they deliver). Happy new year to all readers of The View from the Owl’s Roost!
So you are considering quitting your secure middle class job and going to Tahiti to become a painter. You have a strong hunch that once you go there, you’ll flourish as an artist and produce truly great work. Let’s take morality out of the equation: you are not deserting dependents. You are just considering a high risk of bankruptcy and, just as bad as far as you are concerned, ridicule. If you go to Tahiti, are you being rational in so doing?
Bernard Williams suggests that there is no fact of the matter until you have already gone to Tahiti and succeeded or failed. That is, in some respects, a truly scary idea. I have proposed an idea which is not as scary, but might be, to some philosophers, more annoying: there is a fact of the matter, but you, the agent in the story, can’t know it – not before you succeed or fail and, in many circumstances, not afterwards either. When I say you can’t know it, I do mean to imply that no theory of rationality can guide you into this kind of knowledge. This, however, does not mean that there can’t be a good theory of whether, given that certain beliefs, desires, emotions, etc. are in your head, you would be rational or not in going to Tahiti. If I know what’s in your mind – perhaps because I am an omniscient narrator and I made you up – then I do, given the right theory, know before you leave the house whether or not you are being rational. Since you are not akratic in this story, the more precise question is likely whether your action is based on an irrational belief about your talent or the chances that the journey to Tahiti would help.
A rule that tells you not to start a career as a painter unless you are reasonably convinced that you are a great painter, says Williams, would be pretty much unusable. To continue his thought: it would be unusable because being “reasonably convinced” is indistinguishable, in terms of how it feels to the agent, from being unreasonably convinced. Even the best artists are not reliable or rational witnesses to the quality of works they produce, and being convinced of your greatness through wishful thinking – perhaps intertwined with some midlife crisis and being sick of your job – does not always feel any different from being convinced rationally. It is in the nature of epistemic irrationality – for the moment, let’s stick to epistemic irrationality – that there are limits on your ability to know if you are irrational or not, to the point that sometimes it’s simply impossible for you to know it. Think about the sort of irrationality originated by depression or anxiety or insecurity, the sort originated by intoxication or sleep deprivation, the sort originated in schizophrenia. Take depression as an example:
Tristan: I am a terrible person.
Tristan: I forgot to buy milk today.
You: That doesn’t make you a horrible person.
Tristan: You are just saying it to be nice.
You: My roommate also forgot to buy milk yesterday. Does it mean she is a terrible person?
You: well, then –
Tristan: I don’t know your roommate. She is probably just fine. But given all I know about myself, forgetting the milk is just a symptom of how horrible a person I am.
You: What you know about yourself? Like what?
Tristan: I used a horrible mixed metaphor in pro-seminar today. It was embarrassing. I am clearly wasting the money of the people paying for my fellowship. I should stop committing this crime.
You: You are a really good student. All your teachers say so.
Tristan: They are wrong. No, seriously, I have given a lot of thought to that.
You can argue till the cows come home, but Tristan is, as far as he can tell, “reasonably convinced” that he is an all-around horrible person and a failure at all he does. He has thought about it a lot. Advice along the lines of “do not quit the program unless you are reasonably convinced that you are not a good student” would be wasted on him.
There are, to be sure, some heuristics that improve a moderately irrational person’s chances of diagnosing herself. A lovely eastern European saying I was taught as a child was “if three people say you are drunk, go to sleep!”. I am sure the saying has rescued some people who knew it from major debacles. It also failed to rescue many others who knew it: perhaps they were so drunk they could no longer count to three, or perhaps they were merely tipsy enough to think “Oh, yes, three people say I’m drunk, sure, but Yuan and Liz always agree with each other, so they really should count as the same person, right?”. So basically I’m saying that no putative “rational agent’s manual” can be expected to guarantee its follower rational belief (and thus, action based on rational belief) because it cannot guarantee that the agent won’t be drunk, or depressed, or any number of things that can sneak up on her, at the time she consults the manual.
So, I’m worried about the claim that all epistemic norms need to be “follow-able” or that when they are unfollow-able to one, one is not to be charged with irrationality for not, well, following them. One reason I decline to adopt the bright shiny new expression “epistemically blameworthy” in place of the dry-as-dust, old-style expression “epistemically irrational” is that it obscures an unfortunate Williams-esque fact: epistemic life is unfair. Epistemic irrationality is both a failure to respond to reasons and a predicament that can be forced on one – say by putting a drug in one’s coffee or by taking away the prescription drug one usually puts in one’s coffee. We feel compassion for Tristan and do not, hopefully, “blame” him for anything, as his condition is “not his fault”, but we do treat the reasoning implied in “I forgot to buy milk so I’m a terrible person” as flawed and as a symptom of irrationality.
Some would find it disturbing – not just annoying – to think that unfairness is implied by epistemic norms. But should it really be so disturbing? It shouldn’t be remotely as disturbing as a suggestion that unfairness is implied by moral norms. The connection between fairness and morality is pre-theoretical and intuitive, at least in the sense that people would agree that being fair is part of being moral, an unfair action is immoral, and fairness is particularly important when it comes to punishment and other actions related to blame, as in moral blame. It “just seems” unfair to say that something is ever both (morally) blameworthy and a predicament that isn’t the agent’s doing (“not her fault”), and you don’t need to be a philosopher to think that. On the other hand, the idea that it is always unfair to say that something is both epistemically irrational and not the agent’s doing is an idea rarely spotted in the wild, a postulate of (only some) sophisticated theories of normativity that require that epistemology and ethics be similar, analogous, with isomorphic components: blame here and blame there. Non-philosophers would raise their eyebrows at the sentence “it’s not her fault she is blameworthy”, but “it’s not his fault that he is irrational” would seem fine to them. The asymmetry that bothers some theorists won’t normally be an issue for them. Judgments of irrationality can be “fair” or “unfair” in the sense of “accurate” or “inaccurate”, or in the sense of “biased” or “unbiased”, but when we say Tristan is irrational, even though he didn’t bring his depression about, we are not being unfair – we are just pointing out that life is.
P.S. I think one complication is that one ultimately needs to distinguish rationality from intelligence, and drugs that promote or impair one of them or the other. A 13-year-old is mostly smarter than a 10-year-old, but less rational. See: https://theviewfromtheowlsroost.com/2017/08/13/raw-reflections-on-rationality-and-intelligence-plus-two-cat-pictures/
P.P.S. Can “epistemically blameworthy” be a good title for a person who neglects to google, go to the library, or deliberate long enough as she tries to figure something out? After all, she neglected to do something that was under her control. Well, I can see why one might want to use the term this way, but I think deep down the problem with her is that she is practically irrational in her search for knowledge. See: https://theviewfromtheowlsroost.com/2017/10/29/epistemology-and-sandwiches/
When was the last time you tried to get rid of one habit you had that you didn’t like? How easy was that?
That’s normally my first response when people ask me whether I think we can change our characters intentionally, through trying. I don’t know why this question is so often addressed to me. Perhaps it’s because people think I’m a virtue ethicist. Maybe it’s because I have gone through a more thorough intentional change in my character than most people I know have. Those of us who successfully made such a change, perhaps even more than those who tried and failed, know that many philosophers speak about “cultivating” our characters or “managing” our characters in too casual a way. Intentionally changing character is hard. It’s God damn hard. It’s @#$%&! hard.
My name is Nomy and I’m too candid. However, I am, it seems, employable (even interview-able! If you are also too candid, you know that can be much harder). I also have great friends. These things weren’t true back when I was a teenager capable of telling a person that he is ugly without feeling anger at him or expecting him to be insulted. Mine was a case of grand social incompetence that today would have gotten me diagnosed as “on the spectrum” very quickly (erroneously, I hasten to add. My problem was bad upbringing, in a broad neo-Aristotelian sense). Every step of the many-years-long journey from there to minimal practical wisdom was the result of gargantuan effort. It sounds like I’m bragging, and in a way I am, but what I want to get across most of all is the frustrating difficulty of it, the fumbling, the repeated not-even-close failures, the times you think you have finally become an agreeable human only to discover that you once more offended someone whom you hadn’t the slightest desire to hurt, or that yet another person said “ah, her? we thought she might be difficult” – without you having any idea why. It was exhausting – and we’re talking getting from utterly clueless to merely too candid; we are not talking becoming a person worthy of raising a flag with a red maple leaf on it, say, or the kind of diplomatic person that a woman is still (unfairly) expected to be.
So, intentional character change: possible but insanely hard, requires help from others, isn’t just a matter of practice through repeated action, and should not be talked about lightly, as in suggesting that every time a person is blameworthy for an akratic action what they are really blameworthy for is not having, some years earlier, done the obvious thing and gone to character school, where remedial courses are always available for free. But sometimes people ask me about whether my (and Schroeder’s) view allows for people intentionally becoming more virtuous. For Tim and me, to be virtuous is to have good will – want the right things, de re – and not to want the wrong things, de re. If you don’t like desires, you can have a pretty similar view involving concerns or cares otherwise interpreted. Your intrinsic desire situation likely matters not only to your patterns of behavior but to your cognition as well (e.g. if you want humans not to suffer, you are more likely to notice the sad person standing in the corner; if you want equality, you are more likely to notice that a movie is racist), but nobody, strictly speaking, is morally virtuous just because of a cognitive talent or morally vicious just because of a cognitive fault. Being capable of noticing the sad person in the corner because you’re an observant novelist scores you no moral points, and being incapable of noticing racism in a movie because you came from far away and don’t get the cultural references does not lose you any. By this measure, it is likely that I did not become more virtuous than I used to be. My quality of will didn’t change – my cognitions and habits did. So, in the strict Arpaly/Schroeder sense… is it possible to change from a not-so-virtuous person into a virtuous one, intentionally?
Seems like that would be impossible, paradoxical, self-defeating. The virtuous person is defined by the things she intrinsically desires, or if you prefer, what she cares about. She desires, let’s say, that humans be safe from suffering, that people be treated equally, that she doesn’t lie – the details would vary depending on what the best normative theory tells us morality is about. Simply acting out of a desire to be virtuous (de dicto) is not virtuous. In fact, even acting out of a desire to be virtuous de re is not virtuous: the right reason to save a person from a fire is that he not suffer or die, it’s not that you, the agent, be compassionate, and thus the virtuous person would act out of a desire to prevent suffering or death, not out of a desire to have the virtue of compassion. Self-defeating, right?
Except not really. Peter Railton pointed out that this kind of thing looks paradoxical in theory only if we ignore the various ways in which we can act upon ourselves in practice (his examples: the hedonist who decides he needs non-selfish desires in order to be happy, the ambitious tennis player who needs a bit less focus on winning, a bit more love of the game in order to win). Imagine a person who wants to be virtuous, who roughly knows (or has true beliefs regarding) what virtue is about (some other time about the person who doesn’t), but does not have the desires or cares of the virtuous person. More realistically, she has some of them, to some extent, but she falls short of what we are willing to call virtue. At first, her actions will not be expressions of virtue, but intrinsic desires do change, however slowly or gradually. They often spontaneously develop out of a more derivative form of desire: you want to learn philosophy in order to do well in law school, and by the end of the course you want it intrinsically. You start playing baseball to please your parents and find yourself continuing to do it long after they have died. If virtue is about desires or cares, it stands to reason that sometimes you can start out volunteering at a homeless shelter because you get a warm and fuzzy feeling from thinking of yourself as virtuous, or even because you get a warm and fuzzy feeling about others believing that you are virtuous, and then find yourself attracted instead to the grateful looks of some of the people in the shelter, and who knows, as the makeup of your motives shifts, find yourself moved to help when nobody is there to praise you. In ancient Jewish sources, much importance is attributed to studying the Torah for its own sake, the only praiseworthy way to do it. However, the advice for the person who cannot muster such pious motivation is to start by mustering ulterior motives and the intrinsic ones will “come”. I like this attitude. 
It doesn’t always work, oh no, but it strikes me as more likely to work than the practice of scrutinizing people’s motives – oneself or others – and verbally skewering them if one suspects any “virtue signaling” in the mix. Incidentally, Thomas Hill has a great article on how even Kant, the guy who brought us moral worth, didn’t like the scrutinizing thing – and he didn’t even believe in mixed motives!
So, granted: hard as it is to intentionally acquire or ditch habits of thought or action, it seems even harder to intentionally acquire or ditch an intrinsic desire. Ever tried making yourself a lover of movies when you totally aren’t one, or getting rid of that desire to be tall? But there is no paradox involved, merely an “empirical” difficulty. Such difficulties can be tragic enough, but there is no need to deny that sometimes people intentionally become somewhat more virtuous than they were before. Not by sheer act of will, but by such things as hanging out with virtuous people and having it rub off on you, finding optimistic types who “believe in you” and seeing if you will automatically rise to meet their expectations, following the Talmudic advice to start from exciting ulterior motives and hope for the best, reading and watching memorable and vivid representations of the point of view of those whom your actions affect. Prosaic takes on human nature, which take moral motivation to be similar to philosophy-studying motivation or baseball-playing motivation or whatever, can be depressing, but they can be rather comforting on those occasions in which prosaic methods work. I can’t pretend any other kind of method worked for me.
“But the man who was afraid of the weasel had a disease”.
Who? What? Where did this come from? Aristotle doesn’t tell us. As usual, he’s not talking to us – presumably he’s lecturing to some free citizens of his city state, of which there are – what, 35,000? – who presumably all know the same gossip. With these people he can refer to “the man who was afraid of the weasel” just like you could refer, talking to people at your academic institution, to “the student who published that article in defense of sweat shops a few years back”.
I admit it: I have always been oddly curious about this example. Who was that man who was afraid of the weasel? What was the incident like? Why did Aristotle think it was a disease? For a while, whenever I met an Aristotle expert, I asked them if anything further was known about this story. They always told me that no, it is lost to posterity, and so I am left with my fantasy version of the event, no doubt influenced by my childhood in Israel: a very dark night, an Athenian military encampment, some soldiers sneaking a weasel into a sleeping man’s tent, or whatever they had back then instead of a tent, the man shrieking with terror and looking silly as he jumps to his feet and runs away. Later, the embarrassment, the humiliations reserved for a man who failed a harsh ideal of masculinity (either you are the type who is willing to die in battle, or you’re a coward!) in a tightly knit society. He wonders if he’ll ever live it down, and would not be totally dismissive of the thought that people might still read about him 2,500 years later.
Sadly, we will never know what happened, but the question of what makes a fear – or some other mental state or behavior – a disease or a symptom of disease is still with us. Suppose we try to settle the question of whether someone should count as having a mental illness or not. Is this child what some anti-intellectual cultures call “a nerd”, or does she have a mild version of “autism spectrum disorder”? Is this man grieving for his late wife, or does he suffer from a “major depressive episode” triggered by her death? I wrote in my post before last that very often, we still go about answering this question in a way that has nothing to do with science or with what metaphysicians call “carving the universe at the joints”. Those who are in favor of taking the nerdy child or the grieving man to have a mental disorder argue for it on the basis of the premise that the girl or the man could receive some help if so categorized. The girl could use some breaks at school, the man could use some therapy and some compassion from his employer, say. They often cannot, in this society, receive these things if we don’t call them mentally ill, so it’s practically a moral imperative to call them mentally ill. Therefore, they are mentally ill. If the concept of mental illness or mental disorder is to be anywhere near scientific, this is a pretty bad argument. True, wanting to help is a good motive. We are not talking some evil pharma companies plotting to include the grieving man in DSM so that they can sell him pills. But it’s a bad argument, all the same.
Those who hold that the girl or the man does not have a mental disorder also use arguments that have nothing to do with science or whatever joints the universe might have, though they, too, have good intentions. “I don’t want this child to be stigmatized as having a mental disorder just because she is nerdy, it will make her feel bad, therefore she does not have a mental disorder” is one. “It is insulting to me and to the memory of my wife to call my grief a mental disorder, so I don’t have one” is another. Full disclosure: I am often intuitively sympathetic to the conclusions of these bad arguments. Calling fairly ordinary aspects of grief an illness sounds problematic to me. I have a few doubts (I do mean doubts, not certainties) about the idea of “autism spectrum”, as it seems questionable to me that a child who is so terrified of human closeness that she refuses, from infancy, to be hugged or touched by mom or dad and a child who craves affection as much as anyone but fails to make friendships with peers because he can’t figure out how one starts a conversation suffer from two varieties and/or degrees of the same problem or trait. Still, that does not allow me, or anyone else, to argue that a diagnosis is dubious simply because it’s insulting. Some people who are so depressed that they spend most of their time crying on their apartment floors are firmly convinced that their depression is due to their superior insight into the nature of the world, or the fact that they have figured out that happiness is not valuable and only shallow people think it is. Such people often take offense if you suggest that the problem is their neurotransmitters, which made them gloomy long before they could spell “nihilism”. Still, it might very well be true – spoken as a person who had pretty bad depressive episodes herself – that the insulting diagnosis is, for some of them, correct.
So what? So “mental disorder” is not a scientific concept as long as we decide who “gets” to have a disorder or not to have it on the basis of practical rather than theoretical considerations. This is a problem, because ultimately, seriously scientific research into mental disorders is the best way to help those who have them, and for that we need “mental disorder”, as well as “autism”, “depression” etc. to be theoretically respectable concepts. What has to go, I think – not that I know how to make it go! – is the medicalization of suffering. By that I mean not merely the fact that more problems are considered diseases than before – that’s a mixed bag, as it is plenty good that epileptics are viewed as sick rather than possessed by the devil. Nor do I merely mean that some problems are considered diseases which are probably not diseases. When I say that suffering has been medicalized, I mean that a person who is suffering can increasingly receive neither help nor sympathy unless her suffering is regarded as a medical problem, and “medical” suffering is somehow perceived as more “real” and more deserving of remedy than ordinary suffering. In a morally ideal world, a person whose life is a mess because she’s in the middle of a divorce could come to her employer, explain her situation, and get a bit of slack from her. In this world she needs to go to a doctor, tell the doctor exactly what she would have told her employer, and, on the basis of that, get a note that says she has clinical depression – a disorder – and needs, well, exactly the type of consideration she would have asked for. There is something ridiculous about this.
The predicament of a kid who is bullied by everyone in class or who is simply friendless through K-12 is a bad one, both in terms of experience and of impact, and should be treated seriously. If I had to reincarnate as a child and had the choice, I would take a mild bona fide medical condition over this predicament. If the nerdy girl from my example is facing it, and if there is anything we adults can do without making things worse in another way (there often isn’t), we should do it. It shouldn’t matter one iota if the problem is literally a matter of health. Health is not the only good! Illness is not the only bad! If we remember that, perhaps we can approach the question of whether she is best described as having a mental disorder, or autism spectrum disorder in particular, in the spirit of inquiry unfettered by the sense that if we say no, we thereby deprive her of help or sympathy, and so yes is the only decent thing to say. Such inquiry can bring with it better help for both those with autism and those without it. But with that, the blogger who was obsessed with the man who was afraid of the weasel has finished her rant.
Metaethics! Guitar: Michael Smith.
It ain’t necessarily so
it ain’t necessarily so
What ethicists say
Can sound good in a way
But it ain’t necessarily so
Morality trumps other oughts
Morality trumps other oughts
No rational action
Can be an infraction
Morality trumps other oughts
(You get the idea)
Be virtuous by day and night!
Departures from virtue
Are all gonna hurt you!
Sometimes I wanna say “yeah, right!”
We always give laws to ourselves
We always give laws to ourselves
We lose our potential
For being agential
When we break them laws
I say it ain’t necessarily so
It ain’t necessarily so
I’ll say this, though, frankly
They’ll stare at me blankly:
It ain’t necessarily so!
“Nomy, our purpose here is to help you become just like everyone else”.
That’s what the school counselor told me when I was a kid. And then another school counselor. And then another. I am not paraphrasing, or dramatizing, or anything. Translating from Hebrew to English is all. They all referred my parents to psychologists and psychiatrists, who would help me even better toward this goal, being like everyone else, which they firmly assumed I shared, no matter what I said. Little wonder, then, that by the time I became a teenager, I was certain that the concept of a mental disorder was nothing but a tool of oppression used against unusual people by those who want everyone to become just like everyone else.
(PSA: if you have a child who is beaten up by the other kids because she’s reading Great Expectations at 11, like I did, or because she looks unusual, or because of some incidental vagary of child social dynamics, or even because she has bad social skills, do think carefully before sending her to some kind of shrink. You need to make sure your child does not get the message “the other kids beat me up because there is something wrong with me, and my parents agree, so they are sending me to be fixed”.)
Years later I had to give up my Szasz-ian extremism, because depression, along with hypomania and anxiety, threatened to kill me. Slowly it dawned on me that while the professionals of my childhood were wrong to try to cure me of reading Great Expectations, there was a case for calling some things mental disorders. Seeing my roommate react with fear and trembling to a small spider provided one datum: there was no way that her suffering was “socially constructed” in the English department sense of the term. It was real, and the term “disorder” seemed to fit it. It also seemed to fit my depression, hypomania and anxiety.
So what is a mental disorder, then? I knew what I wanted cured: my suffering. So, was I going to call any extended mental state or brain state constituting or leading to significant distress a mental disorder? That used to be pretty close to the DSM definition, and many shrinks will still tell you that if it causes either distress or disruption in functioning, it is a mental disorder. But this plausible-sounding theory is pretty terrible upon reflection. My love of Spinoza as a teenager caused me significant distress, because it caused kids to beat me up. It also interrupted my functioning, because it’s hard to function when you are beaten up. Still, loving Spinoza is not a mental disorder. Being gay in the 1970s caused one enormous suffering – everything from self-hatred to trouble with the law – and that helped keep homosexuality in the DSM till 1973 and “ego-dystonic homosexuality” (homosexuality, provided that one wants it “cured” in oneself) till much later. The distress/bad-functioning definition of mental disorder, in other words, does not block the term from being used, in the oppressive manner typical of the school counselors of my childhood, by those who want to tell gay people to be just like everyone else.
Some later DSM writers tried to solve the problem associated with defining a mental disorder as a state of mind/brain/behavior/whatever that causes distress or trouble functioning by simply adding to it a disclaimer along the lines of “the problem has to be with the individual, not with a conflict between the individual and society”. That didn’t work, because it is the job of a definition of a mental disorder to tell us when there is a problem “with the individual” and when there isn’t. Presently, we don’t have a definition that can do this job. The reason we no longer think that a woman who refuses to be a homemaker is showing a problem with functioning is not that our definition of mental disorder improved. It’s that our moral outlook did.
Part of the trouble with defining mental disorder the way philosophers try to define things is that this would be at cross purposes with what the writers of the DSM are trying to do. Philosophers look for the true, or at least the coherent. Shrinks look for the useful. Let me explain. The Diagnostic and Statistical Manual of Mental Disorders is becoming a thicker and thicker book. More and more things are called mental disorders. There is a cynical hypothesis about the cause: shrinks want people to go to them and give them money. However, there is also a charitable hypothesis: shrinks want to help people, and nowadays, however obvious a person’s suffering, she can’t get the insurance company to fund psychiatric help if the suffering isn’t defined as a disorder. If a person who suffers from grief wants to take some pills to help with insomnia or make it easier to go back to work, her grief needs to be redefined as a major depressive episode, and therefore a disorder. You have to call something a mental disorder if people are to receive help for it. Thus, however they define “mental disorder” in the introductions to their books, when you look at the long list of things that are classified as mental disorders you see that the one thing they have in common is popular demand for insurance coverage. The trouble is, of course, that “something is a mental disorder iff people want psychiatrists to help them with it” does not sound like a definition that captures a natural kind. It is basically another incarnation of the distress/bad-functioning thing.
What about natural language? “Mentally ill” replaced terms like “insane”, “crazy” and “nuts”, which are, in many ways, colloquial ways to say “patently irrational”. The things that were considered forms of insanity or forms of “neurosis” when Freud was alive and are still considered paradigmatic mental illness today basically are forms of gross irrationality, or cause gross irrationality. These would be: psychosis that leads a person to think, irrationally, that he is Napoleon; depression that becomes so bad that the person thinks that the fact that she forgot to buy milk makes her as despicable as a Nazi, or, against all evidence, that her family will be delighted to see her dead; mania that leads a person to spend all his money and run off with his secretary to pursue a business deal that he is normally plenty smart enough to see is nonsense; terrible fear of tiny, harmless spiders; etc. To this day, being told that one’s thoughts or feelings or actions are symptoms of a mental disorder can be insulting or reassuring in a way that only being told one is being grossly irrational can be. Let me explain.
Maybe I’ll start with the reassuring. Suppose you are really afraid that there is a monster under your bed – literally or figuratively – and someone convinces you that your fear is a symptom of a mental disorder. That can be wonderful news. When I express fearful or self-hating thoughts, being told “Nomy, that sounds crazy” can be music to my ears. I am irrational! The fact that I forgot to answer an email from the secretary does not mean I am worthless! My fear or sadness is unwarranted! Now for the insulting: if you are very sad purely because your attempts to make your country a democracy have failed, and someone refers to your sadness as a clinical problem, it can be infuriating. No, you think, I am not irrational. I am responding appropriately to reasons. Calling it a disorder is refusing to see that. This is one reason people have been angry when the last DSM amended the definition of depression in such a way that it now includes many grieving people. People who don’t want their grief in the DSM do not deny that they are suffering and do not always deny that they could use some professional help. What they want to deny is that there’s anything grossly irrational about their grief. Of course, rationality and irrationality can be woven fine. A person can start being depressed because he lost his job – presumably a reason to be sad – but then, despite the fact that he was fired due to a recession, start feeling worthless or bad because he’s unemployed, and that’s where irrationality can creep in – even gross enough irrationality for the person to count as having a disorder.
So paradigmatic mental disorders involve serious irrationality. To that you can add conditions often thought of as disabilities rather than disorders, in which the problem is not irrationality but cognitive impairment of some sort (e.g. low intelligence, lack of some kind of know-how). Perhaps they too belong in some divine version of the DSM. But what about conditions that do not grossly affect one’s rationality and involve no cognitive impairment? My hunch is that there is something very problematic in calling them mental disorders, as opposed to problems, troubles, eccentricities, ways of being neuro-atypical, or sometimes even vices. If you think the DSM, considered from the aspect of truth and not insurability, is getting too thick, this just might be what’s bothering you. But to be continued.
To Sophie Horowitz we owe the following question: if having enough blood sugar contributes to our cognitive abilities, does it mean that we sometimes have an epistemic duty to eat a sandwich?
The last three posts in Owl’s Roost concerned some reasons to think that there are no practical reasons to believe and no duties of any sort to believe (or conclude, or become less confident, etc) – at least if we interpret “duty” in a way that’s friendly to deontologists in ethics. But never mind practical reasons to believe. Can there be epistemic reasons to act? Or, for that matter, epistemic duties to act? For example, deliberating is an action, something that you can intentionally do. One can also intentionally act to review the evidence in a complicated case, to ask experts for their opinions, or to check the origins of that article forwarded to one on Facebook that claims that cats cause schizophrenia. Do we have epistemic duties and epistemic reasons to do these things?
If we want money, there are things that we have reasons – practical reasons – to do. Call these practical reasons “financial reasons”. Similarly, there are things that we have reasons – practical reasons – to do if we want to be healthy. Call these practical reasons “health reasons”. “Health reasons” are practical reasons that apply to people whose goal is health, “financial reasons” are practical reasons that apply to people whose goal is money. Now, suppose your goal is not health, or wealth, or taking a trip on the Trans-Siberian Railroad, or breeding Ocicats in all 12 official colors, or doing well on the philosophy job market without losing your mind. Your goal happens to be knowledge – in general or in a specific topic. Or maybe your goal is to be epistemically rational – you dislike superstition, wishful thinking, paranoia, and so on – or to have justified beliefs if you can. Just like health, wealth, train riding or Ocicat breeding, knowledge and justified beliefs are goals that give rise to practical reasons. Practical reasons to deliberate well, to review evidence, to avoid some news outlets, and even, at times, to eat a sandwich. Can these be called “epistemic reasons”? Yes, but in a sense parallel to “financial reasons” and “health reasons”, not in a sense that contrasts them with practical reasons. Is “eat a sandwich when hunger clouds your cognitive capacities” an epistemic norm? Yes and no. Yes in the sense that “make sure to clean your brushes when you paint pictures” is an aesthetic norm, and no in the sense that “orange doesn’t go with purple” is an aesthetic norm. Cleaning one’s brushes is conducive to creating beauty but it’s not in itself beautiful. To the extent that one wants to create beauty using paintbrushes, one usually has a reason to clean them. To the extent that one wants to come up with nice valid arguments in one’s paper, one has a reason to eat enough before writing.
That does not make eating that sandwich epistemically rational in the same sense that believing what the evidence suggests is epistemically rational. Eating that sandwich is a way to make yourself epistemically rational, which happens to be what you want. In short, the adjective “epistemic”, now added to a larger number of nouns than ever before in the history of philosophy, can signify a type of normativity, a kind of reasons, or it can signify a type of object to which a norm applies, the stuff that a reason is concerned with, which happens to be knowledge or belief rather than money, paintings, Ocicats, cabbages or kings. I think the distinction needs to be kept in mind.
So…. “Epistemic duties to act” is just another name for “practical reasons that one has to act if one is after knowledge or justified belief”. Or is it really? Some might argue that there might be epistemic duties that do not depend on what one is after. Knowledge, truth, justified belief, or epistemic rationality are, say, objectively good things and one has a duty to pursue them – we’re talking something resembling a categorical imperative, not a hypothetical one. Perhaps we should seek knowledge, or at least some intrinsically valuable kinds thereof, regardless of what we happen to want. But “one should seek knowledge”, as well as “one should seek epistemic rationality” and “one should seek the truth” are only epistemic norms in the sense that they happen to be about knowledge, rationality, etc. They are not different in kind from the practical directives “one should seek beauty”, “one should seek pleasure”, “one should seek honor”, and so on. Why one might want to seek knowledge is an issue for epistemologists, but it is also an issue for old-fashioned ethicists, theorists of the good life, who try to figure out what, in general, is worth pursuing. It makes sense to say that Sherlock Holmes, who refuses to pursue any knowledge that isn’t directly relevant to the cases he encounters as a detective, is missing out on something good, or on a part of the good life, and it makes sense to say (though I am not the type to say it) that he is irrational in refusing to pursue more knowledge. But to say that he is thereby epistemically irrational is odd. He is Holmes. He is as epistemically rational as it gets. If he is irrational, it’s a practical irrationality – albeit not in the colloquial sense of “practical” – in refusing to pursue episteme outside the realm of crime detection.
The words “duty” and “blame” can be used in many ways. You can blame the computer for an evening ruined by a computer failure, but computers are not morally blameworthy. You can talk about Christa Schroeder, Hitler’s secretary, performing her duties, but morally speaking her duty was – what? To work somewhere else? To become a spy? To screw up her work as much as possible? When I say that there is no epistemic blame and there are no epistemic duties I mean to say that epistemic norms behave very differently from moral duties as deontologists talk about them and epistemic irrationality behaves very differently from moral blame as free will/moral psychology people talk about it. I do not intend to deny that epistemology is normative, but that it is normative does not imply that for every ethical concept there is an epistemological concept that is exactly isomorphic to it.
I talked in previous posts about why I think there are no practical reasons to believe, though there can be practical reasons to make yourself believe. At this point I will say nothing about duties to make yourself believe things and stick to putative duties to believe or not believe things – say, not to believe contradictions.
The thing about deontology is that you get an A for effort. Suppose, for example, Violet has a duty to return Kiyoshi’s book, which she promised to give him back on March 16th. However, a large snow storm causes her flight to be cancelled, and despite all her efforts, she can only get back to Kiyoshi on the 17th. Any Kantian will hold that even though the book does not return to its owner on time, Violet’s good will shines like a jewel as long as she has tried. Some would say that ought implies can and therefore Violet did nothing wrong. She discharged her duty. Others would say that she has done wrong, but is exempt from blame.
Compare that to an epistemic norm: one ought not believe contradictions. Suppose Violet, who is in my intro ethics class, tries hard not to believe contradictions (my opening spiel in the first class often includes a reference to the importance of not believing them). She tries especially hard with regard to the class material, which she studies feverishly, rehashes in office hours, etc. Still, in the final paper, she writes sincerely, in response to one question, that all morality is relative to culture and, in response to another, that murder is “absolutely” wrong, regardless of circumstances. Violet’s valiant efforts not to believe contradictions do not result in her getting an A for effort – not literally, in my class, and not figuratively, vis-à-vis epistemic norms. If it is epistemically irrational to believe in contradictions, Violet is irrational in believing one, regardless of how hard she tries not to be. She is not the least bit less irrational because of her efforts.
It might seem that epistemic norms too grant As for effort, because they do sometimes grant an A for a process of responding to reasons that results in a false belief. For example, a person I met at a conference guessed quickly, plausibly and wrongly that my name is Turkish. The error makes sense – there is a place in Turkey called Arpali, and the man made an inference from that – and, assuming that the man’s degree of credence was proportional to evidence, he does get “an A” for his reasoning or for his response to epistemic reasons. But despite the tempting analogy, it’s important that the A is not literally for effort. As it happened, no effort was involved – the man’s guess seems to have come to him quickly. He made a good inference, and it doesn’t matter whether it came to him through effort or not – in fact, some might regard him as more epistemically virtuous because no effort was needed. On the other hand, if, instead of making a good guess, he were to stand there and wrinkle his forehead and come up with a guess that makes no sense at all, no amount of effort on his part would make us treat his inference as less bad.
It is an interesting question whether the effort thing is a problem for epistemic virtue talk, as opposed to duty talk. While trying hard to return a book may discharge your duty to return it, trying hard to acquire the virtue of courage, say, does not mean that you have that virtue of courage, and trying to act generously does not automatically imply acting generously. Virtue ethics does not generally give As for effort (whether it gives some credit for effort is a different question).
Here is a related asymmetry regarding blame and the charge of epistemic irrationality. Suppose Anna attacks her professor because she believes, during a psychotic episode, that he is the devil. Anna is exempt from moral blame: “you can’t blame her, she was psychotic”. She is not exempt from the charge of epistemic irrationality that can be leveled at her belief that the professor is the devil. (“She’s not epistemically irrational, she is psychotic”? Doesn’t work. Having a psychotic episode is one way of being irrational).
Someone might ask: Ok, so we don’t have duties to believe, or not to believe, or to infer, but don’t we have other epistemic duties – say, a duty to deliberate well, or to avoid watching fake news if we can help it? Can’t we incur epistemic blame if we fail to discharge these duties? Ok, to be continued. I mean it.
1) If you are not too superstitious – I almost am – imagine for a moment that you suffer from cancer. Imagine that you do not yet know if the course of treatment you have undergone will save you or not. You sit down at your doctor’s desk, all tense, aware that at this point there might be only interim news – indications that a good or a bad outcome is likely. The doctor starts with “well, there are reasons to be optimistic”. Though you are still very tense, you perk up and you feel warm and light all over. You ask what the reasons are. In response, the doctor hands you a piece of paper: ironclad scientific results showing that optimism is good for the health of cancer patients.
Your heart sinks. You feel like a victim of the cruelest jest. I’ll stop here.
Some philosophers would regard what happens to you as being provided with practical reasons to believe (that everything will turn out alright) when one desperately hoped for epistemic reasons to believe (that everything will turn out alright). I, as per my previous post, think what happens is being provided with practical reasons to make oneself believe that everything will turn out alright (take an optimism course, etc.) when one desperately hoped for epistemic reasons to believe that everything will turn out alright – I think the doctor says a false sentence to you when she says there were reasons to be optimistic. For the purpose of today it does not matter which explanation is the correct one. The point is that when we theorize about epistemic reasons and putative practical reasons to believe, the philosopher’s love of symmetry can draw us towards emphasizing similarities between them (not to mention views like Susanna Rinard’s, on which all reasons are practical), but any theory of practical and epistemic reasons heading in that direction needs a way to explain the Sinking Heart intuition – the fact that in some situations, receiving practical reasons when one wants epistemic reasons is so incredibly disappointing, so not to the point, and even if the practical reasons are really, really good, the best ever, it is absurd to expect that any comfort, even nominal comfort, be thereby provided to the seeker of epistemic reasons. What the doctor seems to offer and what she delivers seem incredibly different. The seeming depth of this difference needs to be addressed by anyone who emphasizes the idea that a reason is a reason is a reason.
To move away from extreme situations, anyone who is prone to anxiety attacks knows how $@#%! maddening it is when a more cheerful spouse or friend tells you that it’s silly of you to worry and then, when your ears perk up, hoping for good news you have overlooked, gives you the following reason not to worry: “there is absolutely nothing you can DO about it at this point”. Attention, well-meaning cheerful friends and partners the world over: the phrase “there is nothing you can do about it” never helps a card-carrying anxious person worry less. I mean never. It makes us worry more, as it points out quasi-epistemic reasons to be more anxious. “There’s nothing you can do about it now” is bad news, and being offered bad news as a reason to calm down seems patently laughable to us. Thank you! At the basis of this common communication glitch is, I suspect, the fact that practical advice on cognitive matters is only useful to the extent that you are the kind of person who can manipulate her cognitive apparatus consciously and with relative ease (I’m thinking of things like steering one’s mind away from dwelling on certain topics). The card-carrying worrier lacks that ability, to the point that the whole idea of practical advice about cognitive matters isn’t salient to her. As a result, a statement like “it’s silly of you to worry” raises in her the hope of hearing epistemic reasons not to worry, which is then painfully dashed.
2) A quick thought about believing at will. When people asked me why I think beliefs are not voluntary in the way that actions are, I used to answer in a banal way by referring to a (by now) banal thought experiment: if offered a billion dollars to believe that two plus two equals five, or threatened with death unless I believe that two plus two equals five, I would not be able to do it at will. A standard answer to the banal thought experiment points out that actions one has overwhelming reasons not to do can also be like that. Jumping in front of a bus would be voluntary, but most people feel they can’t do it at will. But I think this answer, in a way that one would not expect from those who like the idea of practical reasons to believe, greatly overestimates how much people care about the truth. Let me take back the “two plus two” trope, as not believing that two plus two equals four probably requires a terrifying sort of mental disability, and take a more minor false belief – say, that all marigolds are perennial. If offered a billion dollars, I would still not be able to believe at will that all marigolds are perennial. To say that this is like the bus case would pay me a compliment I cannot accept. One has a hard time jumping in front of a bus because one really, really does not want to be hit by a bus. One wants very badly to avoid such a fate. Do I want to believe nothing but the truth as badly as I want to avoid being hit by a bus? Do I want to believe nothing but the truth so badly that I’d rather turn down a billion dollars than have a minor false belief? Ok, a false and irrational belief. Do I care about truth and rationality so much that I’d turn down a billion dollars not to have a false and irrational belief? Nope. Alternatively, one might hold that jumping in front of a bus is so hard because evolution made it frightening. But am I so scared of believing that all marigolds are perennial? No, not really.
I am sure I have plenty of comparable beliefs and I manage. I’d rather have the money. I would believe at will if I could. It’s just that I can’t. We are brokenhearted when we get practical reasons when we want epistemic reasons, but the reason isn’t that we are as noble as all that.
People who are not philosophers sometimes “come to believe” (“I’ve come to believe that cutting taxes is not the way to go”) or “start to think” (“I’m starting to think that Jolene isn’t going to show up”). Philosophers “form beliefs”. People who are not philosophers sometimes say “I have read the report, and now I’m less confident that the project will succeed”, and sometimes write “reading the report lowered my confidence in the success of the project”. Philosophers say “I have read the report and lowered my credence that the project will succeed”.
In other words, philosophers talk about routine changes in credence as if they were voluntary: I, the agent, am doing the “forming” and the “lowering”. That is kind of curious, because most mainstream epistemologists do not think beliefs are voluntary. Some think they sometimes are – perhaps in cases of self-deception, Pascal’s wager, and so on – but most take them to be non-voluntary by default. Very, very few hold that just like I decided to write now, I also decided to believe that my cats are sleeping by my computer. Yet if I were their example, they would say “Nomy looks at her desk and forms the belief that her cats are sleeping by the computer”. If I say anything to them about this choice of words, they say it’s just a convenient way to speak.
John Heil, wishing to be ecumenical in “Believing What One Ought”, says that even if we think that belief is not, strictly speaking, voluntary, we can agree that it is a “harmless shorthand” to talk about belief as if it were voluntary (that is, talk about it using the language of action, decision, responsibility, etc.). Why? Because it is generally possible for a person to take indirect steps to “eventually” bring about the formation of a belief.
OK then – so erections are voluntary! Let’s use the language of action, decision, and responsibility in talking about them! No, seriously. It is possible for a man to take indirect voluntary steps to bring it about that he has an erection. And yet, if I teach aliens or children an introduction to human nature, I do something terribly misleading if I talk about erections as if they were voluntary.
Another example: suppose Nomy is a sentimental sop and hearing a certain song can reliably bring her to tears. Heck, just playing the song in her head can bring her to tears. There are steps Nomy can take to cause herself to burst out in tears. Still, again, in that Intro Human Nature course one cannot talk about outbursts of tears as if they were voluntary without being misleading.
Last, would any self-respecting action theorist say that the phrase “she raised her arm” can be a “harmless shorthand” way to describe a person who caused her arm to jerk upwards by hooking it to an electrode? Or, worse, a “harmless shorthand” by which to describe a person whose arm jerked due to a type of seizure but who easily could have caused such jerking by using electrodes or prevented it by taking a medication?
In all of these cases, a “shorthand” would not be harmless – for two reasons. The more banal one is that when eliciting intuitions about birds, you don’t want the ostrich to be your paradigm case. Most erections, outbursts of tears, and arm-convulsions are not the result of intentional steps taken to bring them about, and there is a limit to what we can learn about them from the fewer cases that are. The same is true for most beliefs. Many of my beliefs are the result of no intentional action at all – my belief that the cats are here came into being as soon as they showed up in my field of vision, my belief that there are no rats in Alberta came into being as soon as Tim Schroeder told me – and other beliefs I have are the result of deliberation, which is an action, but not an intentional step taken to bring about a particular belief. (So my belief that the ontological argument fails was not the result of intentional steps taken to make myself believe that the ontological argument fails.) Whatever intuitions one might have about non-paradigmatic cases, like Pascal’s wager, could easily fail to apply to the many cases in which no step-taking has preceded a belief.
But to talk of erections and tears as if they were voluntary is also dangerous for a deeper reason, a reason that has nothing to do with the frequency of indirect-steps cases. Even if the majority of tears, erections, or convulsions were indirectly self-induced, there is still a difference between the action taken to induce the erection, tears or seizure and the thing that results from the action. If the former is voluntary, that alone doesn’t make the latter voluntary. Similarly, even if we routinely had the option of making ourselves believe something through the pressing of a button, and we routinely took advantage of this option, there would still be a difference between the act of pressing the button – an action – and the state of believing itself. If we make “pressing the button” a mental action – analogous to intentionally calling to mind images that are likely to produce an erection or an outburst of tears – it hardly matters: the mental action that produces a belief would still be different from its outcome.
Why does it matter? Because we only have practical reasons to do things that are voluntary. It seems quite clear that, on those occasions, whatever their number, in which we can, in fact, for real, form a belief, there can be practical reasons to form that belief or not to form it, and it seems only slightly less clear that sometimes it could be rational to form a belief that clashes with the evidence. This, however, is taken to mean that there are practical reasons to believe. I am working on a paper arguing that this does not work. We have practical reasons to take steps to bring a belief about. We sometimes have practical reasons to make ourselves believe something, but that’s not the same as practical reasons to believe it. No such thing. Category mistake. This is where one could ask: why not call reasons to create a belief in oneself “reasons to believe”? Isn’t that a harmless shorthand?
I don’t think so. I agree that “reasons to stay out of debt” is a nice shorthand for “reasons not to perform actions that can lead to one’s being in debt and to perform actions that are conducive to staying out of it”, but while “reasons to stay out of debt” just are reasons for various actions and inactions, you can have “reasons to believe” something without any implied reasons for any actions or inactions. Jamal’s being out of debt is an instance of rationality (or response to reasons) on his part iff Jamal’s being out of debt is the intended result of a rational course of action taken by Jamal. Gianna’s belief that Israelis don’t speak Yiddish can be perfectly rational (as in, there for good reasons) even if it’s not the result of any rational course of action taken by her. Perhaps the belief occurred in her mind as the result of unintentionally hearing a reliable person’s testimony, no action required, or was the result of an irrational action like reading Wikipedia while driving; it can still be as rational a belief as they come. When we say “reasons to believe” it is not normally a shorthand for “reasons to make oneself believe”, and so to shorthand “reasons to make oneself believe” to “reasons to believe” is not harmless, but very confusing. To be continued.