Sympathy and Accidentality

Mill famously pointed out that a great way to make a moral theory look bad is to assume “universal idiocy” along with it. It’s an old trick, and yet sometimes contemporary Kantians (and others) seem to pull off something pretty similar when they make compassion look bad.

Consider a celebrated example from Barbara Herman. If I see a man leaving the art museum in the middle of the night, struggling with a heavy package, then sympathy, says Herman – if unchecked by duty, which she takes to involve concern for morality de dicto – would move me to help him carry his package. Unaware of Mill’s saying, an undergrad of mine responded to the example by saying that there would have been nothing to check in his case: “I’m not an idiot. I wouldn’t have sympathized with an art thief!” Thus he denied what some Kantians imply – that when we help for reasons other than concern for moral duty or self-interest, our motivation is something they call “sympathy”, which seems to be a childish impulse, perhaps even a compulsion, to remove suffering – an impulse almost entirely indifferent to the context in which the apparent suffering occurs. Christine Korsgaard, who does not accept this view, refers to it as Kant’s belief that emotions are stupid. I call it the Sympathy Myth.

Zoe Johnson King, Keshav Singh and Paulina Sliwa seem to hold a related view: that when I help someone, or do something else morally right, while thinking it wrong, I am at most moved by one of the moral considerations relevant to the case (such as “someone is suffering”) and not by any of the others. If that were true, it would make it quite the accident that I do the right thing in any case to which more than one moral consideration applies. Perhaps Huck Finn seems to have an appropriate reaction to Jim’s predicament – a person being deprived of freedom is prima facie a bad thing – but it is still an accident that he does the right thing because, for all we know, he would have helped Jim even if he were not an escaped slave but an escaped serial killer (worse than an art thief!), or a man suffering from severe psychosis whose pursuers only wanted to provide him with decent treatment, or in any number of possible situations in which helping him escape would not have been the right thing to do.


Accidentally On Purpose

The message came through loud and clear on the train’s PA system. The voice sounded like that of a fairly young man. The text went like this:

We have reached
Washington, D.C.
The nation’s capital!
Capital of the free world!
Mind the gap.

I thought it was great. Which gap should we pay attention to? Rich and poor? Free and unfree? Propaganda and reality? And what happens if we fail to heed the ominous warning? The ambiguity was delicious.

Except to this day I have no idea whether I witnessed poetry in motion or only ran into found poetry, so called. The train conductor (or whatever exactly his job was) could have pulled off a subtle piece of performance art, or he could have simply prefaced a rushed repetition of the routine warning about the gap between the train and the platform with some random clichés, or with clichés expressing real awe at Washington. In order to create a poem (or a piece of performance art) you need to think of yourself as creating one. You need to “know what you’re doing”. While in the field of poetry there might be varieties and degrees of such “knowledge” – poems can come to people in dreams, for example – it seems clear that if the guy had no idea, conscious or otherwise, that his text was to have an ironic touch, he does not get any credit for producing that ironic touch. Why? Because he produced it accidentally.


Epistemic Song

Jamie Dreier, Nomy Arpaly, Dave Estlund

(To the tune of “Mambo No. 5”; new lyrics by Arpaly, Dreier, Estlund)

Point 1, point 2, point 3, point 4
Everybody grab a credence, then grab some more
Point 5, point 6, point 7, wait!
Man you’re crazy if you think that you can hit .8!
It’s the latest dance, the best in town:
Grab a word like “epistemic”, then add a noun
Like “angst” or “insouciance” or “indulgence” or “greed”
If you want a paper topic that’s all you need!

CHORUS (all 3 sing)
Epistemic trespass on my lawn,
some epistemic charity, bring it on!
Injustice epistemic 123,
don’t give me your polemic: not for me!

Epistemic duties, off my back
Epistemic peers, gone off the track
Epistemic deference? outta town!
Epistemic democrats, vote em down!

Forget old virtue, vice and blame
Every word remotely ethics-like is now fair game
If “duty” doesn’t really seem to hit the spot
Epistemic consequentialism’s also hot

Epistemic disrespect or were you just teasin’?
Epistemic rationality and epistemic reason
If you think this construction is epistemic fine
Then you must be out of your epistemic mind!

Kantianism for Wimps

In a paper that’s going to be in the supplemental volume of the Proceedings of the Aristotelian Society – or maybe it’s already there? – I argue that the moral imperative to help those who are doing badly cannot be properly accounted for through positing a Kantian duty to adopt and promote the ends of other agents. I pick on contemporary Kantians, not on Kant, and I don’t defend utilitarianism or Aristotelianism, neither of which is my view. Taking it to be true that agents have a lot of ends that are different from their wellbeing – even ignoring specifically moral ends – I argue that a person’s end of avoiding ill-being is significant to moral people in a way that her other ends are not, and that the difference cannot be traced simply to the fact that for most people, avoiding misery is a particularly important end. That is, we are often morally obligated to – or at least have a significant pro tanto reason to – protect a person’s wellbeing where we would have no such obligation or pro tanto reason to protect another end she has, even if the agent values and prioritizes that end as highly as anyone does their wellbeing, and more than she cares about her own wellbeing. The 1000(ish) word version of the argument is here on this blog, under the heading What Kantianism Gets Wrong.

Now, Herman offers an ingenious Kantian explanation of why we seem to have “a duty of easy rescue” – a duty to help the person bleeding by the side of the road if it is not difficult to do so. My undergrads point out that it would be very weird to say here that our duty to help others is “imperfect”, as one cannot avoid the duty of easy rescue simply by having already performed a hundred other easy rescues this year. To such an undergrad Herman says: fair enough, the imperfect duty to provide mutual aid does not provide a good explanation for the duty of easy rescue. The duty of easy rescue is a separate one; it cannot be avoided through “I already gave at the office”, and it kicks in in places where, without your help, the person in question is in danger of losing her rational agency. In my paper I argue that this move would not be enough by way of an answer to me, because there are cases in which we are obligated to help a person, if we can do so easily, even though she does not face a danger of losing her rational agency. She is not going to die, she is not going to suffer from a severe neurological or psychiatric disorder; she’s just going to suffer serious ill-being if you don’t, say, call 911. Furthermore, there are cases that fall short of emergencies in which we still have significant reasons to help alleviate misery – and no comparable reason to protect or promote any number of ends other than wellbeing that the agent might have. We still need an explanation for the importance of wellbeing beyond the importance of ends.

This is where many a Kantian has told me the following: the wellbeing of one’s fellow humans is morally valuable because wellbeing is required for rational agency.

Such a view, I fear, might insult some badly-off people.

If one is to think that rational agency is required for full-fledged moral status (by which I mean having all the rights that adults have, including rights not to be “paternalized”), one cannot have too demanding an idea of rational agency in mind. There is a legitimate sense of “rational” in which very, very few people are rational, but it cannot be that only these few have full-fledged moral status. Pretty much everyone whom it is immoral to hospitalize forcibly has full-fledged moral status, and so pretty much everyone whom it is immoral to hospitalize forcibly should count, for Kantian purposes, as a rational agent.

There are, to be sure, cases where either a person suffers such extreme ill-being that she loses her rational agency or – more often, I suspect – the same extreme condition causes both ill-being and loss of rational agency. Torture is exhibit A. There is also malnutrition bad enough to harm the brain and severe mental conditions such as psychosis or severe depression.

However, protecting a person’s wellbeing is a morally important consideration in cases that are not like that at all. My example of Roger in the prior post is not an unusual one. If it’s not too hard, I have a duty to help a person who has a broken leg or who, without my intervention, might needlessly break a leg. Breaking a leg does not make anyone lose their rational agency.

Humans are capable of staying no-less-rational-than-average in pretty extreme conditions. It seems pretty clear from Primo Levi’s memoirs of Auschwitz that he was, while there, a rational agent. Admittedly, Levi strikes the reader as heroic, and admittedly, he became depressed and suicidal in his old age, in what is easy to imagine as trauma catching up with him, but you get my point. There are many people on this earth who are quite miserable and as capable of responding to practical reasons as any citizen in the kingdom of ends. If you can respond properly to practical reasons, aka set ends (which isn’t the same as achieving your ends, which requires the world collaborating with you), then your rational agency isn’t lost.

Perhaps a Kantian might say that for ill-being to be morally important it needn’t be the case that the sufferer lose their rational agency: it is enough that the sufferer be seriously impaired in their ability to exercise rational agency – that is, their ability to promote many of their ends. This kind of move certainly adds more conditions to our list of conditions under which you have a strong reason to help a person. A person struck by a severe physical illness that doesn’t hurt the brain retains her rational agency but loses efficacy in achieving a large number of ends that she might have. That’s one reason there’s a drop in her wellbeing in the first place. However, it is not that hard to imagine cases in which a person’s predicament only seriously impairs him in achieving one of his ends: his wellbeing. That’s true in the case of Roger, and it can be true even in cases of people whose ill-being is lifelong. Some humans who are deeply miserable – due to having gone through a tragedy, for example, or, in a different way, due to chronic pain – achieve a lot of rationally set ends, and we do not wish to be insulting and doubt their basic rationality, their achievements, or their misery.

In the prior post I say that it would be more urgent to prevent a suffering-inducing insect bite than to prevent a sleep-inducing insect bite, though it’s sleep that clearly cancels out agency, not suffering. Since writing the prior post, it has been pointed out to me that sometimes it would be more important to prevent involuntary sleep than it would be to prevent pain. However, quite often, preventing involuntary sleep, which sure deprives you of agency, is less morally important than preventing a pain that isn’t quite strong enough to deprive you of your agency (or even of your efficacy – perhaps it doesn’t last that long, or it’s in a period where you don’t do much anyway, or you are one of the relatively stoical people I just mentioned). Furthermore, loss of agency or efficacy through suffering is worse than equal loss of agency or efficacy that isn’t accompanied by suffering, or so we think in medical contexts.

This reminds me a bit of Nagel saying that the main bad thing about non-conventional weapons that cause severe pain is that they hurt the victim’s dignity. One can harm a person’s dignity (and efficacy, and rationality) quite badly by deceptively getting them too drunk to walk, or by hypnotizing them into clucking like a chicken. I’m not saying that won’t be evil, especially if I were to make people do something of which they seriously disapprove (as per David Sussman), but using very painful means instead of the booze or the hypnosis has a whole additional, different dimension of terribleness. To be honest, back in 1990, when I was a teenager and Saddam Hussein threatened to use chemical weapons on the part of Israel I was in – it didn’t sound like an idle threat at the time – I didn’t think very much about the potential effect on my dignity, or on my rational agency. I was terrified of the potential suffering and that was basically it. But then again, maybe I’m just a wimp.

Anyway, my purpose here is not to deny that rational agency is morally important. I think it matters quite a bit in terms of justice and rights. I have argued that as far as benevolence goes, wellbeing is important in a way that’s independent of rational agency, and here I’m defending the view that in order to see someone as meriting compassion, meriting benevolence, or even generating a duty of easy rescue, we do not need to ask to what extent her ill-being will interrupt her rational agency or her efficacy in achieving rationally set ends. Now there’s one thought too many.


Excuse My Technical Term

I used to think that the main problem with moral psychologists’ use of ‘autonomy’ is that ‘autonomy’ is too ambiguous a word. There is the autonomy that all rational beings supposedly have, vs. the autonomy of which I would have more if I could drive a car, to mark only one ambiguity. In my first book, I listed 8 different ways in which the term is used and I would not be surprised if someone else finds 10 more. It is very easy to conflate at least 2 “autonomies”, but now I think the ambiguous nature of the word ‘autonomy’ is not the only problem with using it. It’s equally bad that ‘autonomy’ as used by some of us moral psychologists is a term of art that is used as if it were a “natural language” term. Agent autonomy is, correspondingly, a theoretical construct about which we are expected to have pre-theoretical intuitions. The technical nature of the term ‘autonomy’ (and often of related and even fancier terms like ‘agential authority’) can easily become invisible to those who use it regularly, much the way I imagine some songwriters no longer notice that “self” does not, in fact, rhyme with “else”.

I will grant that ‘autonomy’ has various uses in natural language: there are autonomous vehicles, after all, and a Basque Autonomous Community. One can also grant that ‘autonomy’ meaning something like “the right to make decisions for oneself, free of coercion, especially paternalistic coercion” is almost natural English – American medical and nursing students take to it very quickly. However, ‘personal autonomy’ as used by moral psychologists is no more ordinary English than, say, ‘internal reason’. Saying that ‘autonomy’ means “self rule” isn’t helpful. ‘Self rule’ is only used in ordinary language with regard to nations, not individuals. ‘Agential authority’ – or ‘agential’, for that matter – is clearly philosophers’ talk; my spellchecker won’t even let me write “agential”. Even ‘agent’ is a term of art, unless we are talking about the sort of agent who spies or the sort who might help you break into the entertainment industry. Non-philosophers who are plenty educated enough to bandy about such words as ‘irrational’ or ‘bad faith’ never, and I mean never, say “I wonder if, when you scream at me, it’s an autonomous action on your part” or “he is so in love that his self-rule is compromised”, or “I figure her belief in astrology expresses no agential authority”. ‘Self-control’ is the closest natural English term we have.

Why does it matter? Terms of art are legit, of course, and philosophy is not all about natural language nor all about intuitions. However, it is an error to use terms of art, steeped as they are in theory, to elicit intuitions. We do not have pre-theoretical intuitions about which individuals and actions are autonomous.

The word “autonomy” does have natural connotations. It is a suggestive term. If asked who is more autonomous, a master or a slave, any guessing undergrad will notice that the slave sounds less autonomous. But if you raise such a question as “who is more autonomous, a slave with perfect self-control or a master who suffers from chronic, ubiquitous, terrible weakness of will?” – do not expect natural language or pre-theoretical intuition to give you the answer. Are rational beliefs more autonomous? Do they form in a more autonomous way? If I were the guessing undergrad I would say “yes”, because autonomy sounds like a good thing and rationality is presumably a good thing. Thus it sounds more plausible that they go together than that they conflict. Why be pessimistic? Beyond the positive connotation shared by ‘rationality’ and ‘autonomy’, I think there is only one honest pre-theoretical answer to the question whether rational beliefs form in a more autonomous way, and the answer is “I don’t know”. This can be followed by: what exactly do you mean by ‘autonomous’, and how is it different from what you call ‘rational’?

A term of art with rich connotations is a treacherous thing. Consider by contrast the term “internal reason” – a boring, bloodless, un-suggestive, connotation-free technical term. The words “internal” and “external” are massively overused by analytical philosophers. I often wish philosophers called their views more original and imaginative names than “internalism” or “externalism”. For one, there are too many internalisms and externalisms and it’s mighty confusing. I could go on further, but I admit it’s basically a stylistic and pedagogical issue. A person with a better verbal memory than mine and a much greater patience with monotony might find nothing amiss with the way we call things “internalism” and “externalism”.

‘Autonomy’ has a deeper problem, as can be shown by the fact that so many people argue about what makes an action autonomous (or not). People don’t disagree that way about which reasons are internal. If you happen to think that Bernard Williams, when he lists the sort of things that can give rise to internal reasons, includes things that don’t belong together – say, desires and values – you do not as a rule argue that Williams made an error and called some reasons “internal” that in fact aren’t (as we can all intuit!). You say that the distinction needs to be redrawn or that a new distinction needs to be added. On the other hand, two philosophers could easily come to argue as to how autonomous an agent Homer Simpson is, and then it is often understandably hard for them to keep their hands off their intuition pumps.

I have been contemplating the term ‘agential authority’ because I have been asked if I don’t think (“don’t you think?”) that irrational beliefs express less agential authority than rational beliefs. The question was asked as if intuition would be enough, or almost enough, to show me that as well. ‘Agential authority’ is another term of art which is sometimes treated as natural language. It does not have as many meanings and connotations, but it is metaphorically evocative in a potentially misleading way.

Imagine that you are trying hard to grade papers despite feeling urges to do just about anything else – play with the cat, watch Netflix, go for a walk (they tell me some people even clean). In such moments, it is natural for you to feel as if your psyche resembles a country, your deliberating self is like a legitimate government, and whatever it is inside you that doesn’t follow your best judgment (the urges? the fraction of your inner “nation” that supports these urges? not clear, really, but whatever it is) is like an organization that defies the authority of the government. Since grading the papers is usually the rational thing to do, such experiences can lend plausibility to the idea that rationality in general is like good government and irrationality in general like crime or insurrection. Now, some of you might recall that elsewhere I take the analogy between your deliberating self and the government to be a bad one even in the case of akratic action. It strikes me as similar to the analogy we make between people and kettles when we feel that by expressing anger we “blow off steam” and thus save ourselves from bigger anger: very intuitive but, despite years of great minds accepting it, ultimately mistaken (PSA: science shows that when you “let out” the anger you feel you increase it. Nothing “blows off”. Kettles have nothing to do with it). This isn’t the place to get into my arguments against people being like countries, but they are not based on denying the intuitive appeal of the self-government trope when you contemplate akrasia.

However, I think that analogizing irrationality in general, and epistemic irrationality in particular, to failure of government invites doubts of the sort that Hume expressed when he considered the theologians’ analogy between the world and a complex artifact that surely cannot be there by accident. Hume asked, essentially, what non-religious reasons we have to see the world as analogous to a lovely object made by an artisan and not, say, to a lovely plant growing from the ground. In a similar vein, take ordinary irrational belief formation. I do not see a particular (pre-theoretical) reason to liken the person who irrationally comes to believe that Elvis is alive to a country experiencing insurrection or infiltration (i.e., a failure of authority) and not to a plant suffering from rot, or a heat-seeking missile guided the wrong way, or Starbucks putting Leonard Cohen’s Hallelujah in its cheery Christmas music mix, or any number of other things going wrong. So no, I don’t see any (pre-theoretical) reason to take irrational beliefs to express less ‘agential authority’ than others. Why not think that they express more, at least sometimes? Who is more authorial of, or has more authority over, her beliefs, the one who boldly tortures the data until they confess, producing an original conspiracy theory, or the wimp who surrenders to the power of the data? I can see why we might call the latter more rational, but I see no strong pre-theoretical, intuitive pull to calling her more “agential”. Or to calling her less “agential”. Or even to thinking, as my undergrads would put it, that belief showing “agential authority” is a thing.

P.S. Is epistemic blame a thing? Here are some thoughts. Is ‘identification’ any better than ‘autonomy’ or ‘agential authority’? Here is a link – it’s called Just the Booze Talking. Tim Schroeder and I also have a paper about identification, but it isn’t funny, really.


The Cool Dude or: I am Not a Virtue Ethicist

Aristotle doesn’t talk about the Moral Person. He talks about the Cool Dude!

Thus said one of my undergrads after reading the Nicomachean Ethics. What did he mean? Partially, he meant something similar to what Anscombe said when she said the following:

If someone professes to be expounding Aristotle and talks in a modern fashion about “moral” such-and-such he must be very imperceptive if he does not constantly feel like someone whose jaws have somehow got out of alignment: the teeth don’t come together in a proper bite.

The label “cool dude” seems, according to my undergrad, to fit Aristotle’s excellent person more than the label “moral person” does. After all, he has high self-esteem, an expensive house in impeccable taste, a penchant for giving great parties, not to mention a wonderful sense of when to tell jokes, when not to brag, how to accept honors without being “ranking-obsessed” (as undergrads called the honor-lover that year). The label “moral person” does not seem to have much to do with these things. When I told a friend about my undergrad’s comments, the friend suggested that someone translate the Nicomachean Ethics for undergrads. Whenever it says “virtuous person” we should say “cool guy”. Whenever it says “fine” or “noble” we should say “awesome”. That way we can quote Aristotle as saying:

The cool guy does awesome things because they are awesome.

In one of my talks, someone shouted “you mean awesome de re”, but that’s already interpretation.

I don’t just admire Aristotle. I often enjoy Aristotle. I enjoy his use of cases, and the way he, long before “analytical” philosophy was officially a thing, digs so masterfully into the truth beyond common beliefs and the paradoxes they create. But I am not a virtue ethicist. Everywhere I travel people tell me, critically or approvingly, that I am a virtue ethicist. Come on! A person is allowed to have the word “virtue” in the title of her book without being a virtue ethicist! Kant wrote The Doctrine of Virtue. He talked about virtue. A lot. Hume, of course, talked about virtues. We might owe it to 20th century virtue ethicists that talk of virtue has regained an important place in ethics, but we can’t give virtue ethicists a monopoly over the word “virtue”. To be fair, “virtue ethics” is not used as a super-precise term, but I don’t think it can be stretched to fit me. Here’s (the short version of) why not:

1. When I say “a virtuous person” I don’t mean “the cool dude”, the fine human specimen, the person who is excellent at being a person, or the person with arete. I also don’t mean the phronimos. The natural language term that I have before me is “good person” (or “very good person”) and its counterparts in Hebrew and in Basque (also in German, I suppose, though I forgot my German; the point is that ancient Greek has nothing obvious to do with it). I also like the ever-so-dull and dreary term “moral person” and use it interchangeably with “virtuous person” (though I have been known to refer to the opposite number of the virtuous person with the less dreary term “asshole”, the word “vice” having been regrettably drained of its moral force by being used to refer to eating ice cream). At least according to the dictionary, this is a legit use of the term “virtuous person”, but some people who identify as virtue ethicists don’t like it. Others do own up to talking about the moral person, someone having a long time ago discreetly removed wit and magnificence from their list of virtues and put charity and honesty in, but still think of her, the virtuous agent, also as a fine human specimen and a phronimos. I don’t. Some of my reasons not to have been better articulated by others; some are coming up.

2. I think you can be a perfectly good person and still not always do the right thing in the right circumstances. That’s because doing the right thing sometimes requires that you be smart in addition to being good, and sometimes it also requires that your judgment not be clouded by, say, depression or anxiety, or that you not be saddled with autistic difficulties in understanding other people’s emotional cues, or that you have life experience. Perhaps you cannot be unintelligent and still be the cool dude (what do I know about coolness? Ask my undergrad!) and perhaps being below a certain level of intelligence makes it harder to have a good life, but even if these things are true, I don’t think anyone should ever be regarded as less morally good because she is not very smart – it’s like saying one is less moral because one is blind or deaf. Granted, to be morally good, a person needs to be conceptually sophisticated enough to have concepts like “harm” and “truth”, which rules out my cats as potential moral agents. You might protest that the phronimos possesses wisdom, not smartness, but wisdom comes on top of smartness; it seems to require quite a bit of smartness to get off the ground. Now, even in a smart person, conditions like autism, depression, anxiety, or lack of experience seem to interfere with the possibility of acting with practical wisdom. I need not deny that some of these conditions can interfere with your chances of a good life, but morality again is something else. I know so many people who think they are somehow bad people because of their depression or autism or anxiety or cluelessness! Typically, we tell them that they are mistaken. These conditions are morally neutral and while there are some situations where a person of strong virtuous motivation can overcome their influence, she often cannot.
I agree with the Kantian intuition that an honest-to-God good will shines through regardless of just about everything, most especially including cognitive limitations and suchlike obstacles. There’s just this small disagreement about what “will” is… (All of this has been argued for in my last book and in a paper called “Duty, Desire and the Good Person”.)

3. And then there’s eudaemonia. Or happiness. Or wellbeing. In a forthcoming paper I argue that while virtue ethicists such as Hursthouse have said convincing things to the effect that if you want to flourish, you shouldn’t let yourself become a narcissist, a Nazi, a cynic or a purely selfish sort, this does not – as Copp and Sobel already pointed out – amount to an argument in favor of being downright good as an obvious path to flourishing. Granted, some parts of Wolf’s “Moral Saints” seem to depict Ned Flanders rather than an actual moral paragon (Father Knows Best? Really?) and I don’t agree that even very moral people are automatically uncool men and women. Still, I argue that a person who is somewhere between “decent” and “jerk” – a morally mediocre agent – often has no reason to pursue further moral virtue, and has some reasons not to pursue it, insofar as what she wants is to flourish (she might, of course, have other reasons to be a better person). Imagine a morally mediocre person who has loving relationships with other individuals, good health, decent material conditions, and cool (though not necessarily morally significant) things he can do, with a bit of work, using his talents and abilities. Is he likely to become happier if he starts noticing injustice in the news and feeling the resulting heartache, or if he starts working in a soup kitchen during time that he would have spent playing jazz with his friends? I argue that it’s often not the case. Some virtue ethicists would argue that you cannot have good, truly loving relationships with other individuals unless you are all-around kind and honest. That’s, on my view, like saying that you can’t be genuinely, lovingly devoted to your dog, cat, or other cherished nonhuman animal unless you are also a vegetarian: it sounds like it should be true, yet it is clearly false.

4. Umm, yes, I forgot to mention that I don’t think that the reason helping the person bleeding by the side of the road is the right action is that it is what a good or benevolent person would characteristically do. I am still trying to figure out what the reason is, but right now I imagine something along the lines of “it is an action that protects the bleeding person from severe ill-being without violating any rights (…)”. I don’t think any facts about the habits or concerns of moral agents are built into the right-making features of this action: the relevant facts are about what it does or doesn’t do to moral patients.

There’s more, but ok, that’s enough. So why do I like to talk about virtues anyway? Because I think our intuitions about the rightness of moral actions and the worth of moral motives are deeply and interestingly intertwined, and these in turn are intertwined with our intuitions of the goodness and badness of persons, and each of the three topics is fascinating in its own right. Even if one does not endorse defining the right action through reference to the virtuous agent, as I do not, one can still see that some of the best clues to what morality is about come from looking at what we expect a moral person to be concerned with (or an immoral person not to be concerned with).

A very tall man was once at a talk I gave. When it was time for questions, he got up, towering over me in the small room, and said “I’m sorry, the reason I’m standing up is not to be intimidating but to make eye contact”. He then asked: “Do we really care about the inner lives of agents? When I talk to my kids, I teach them right and wrong actions. Nothing about motives”. I pointed out that if we didn’t care about the inner lives of agents, it would have been enough for him to do the right thing – stand up – and he needn’t have bothered to make sure that I knew he was doing it in order to make eye contact – a fact about his inner life. Where there is natural talk of right and wrong actions, there is also talk of good people and assholes. You don’t have to be a virtue ethicist to be interested in these things. Or, I suspect, an ethicist at all. If you like gossip, you’re probably in.

Epistemic Life is Unfair!

So you are considering quitting your secure middle class job and going to Tahiti to become a painter. You have a strong hunch that once you go there, you’ll flourish as an artist and produce truly great work. Let’s take morality out of the equation: you are not deserting dependents. You are just considering a high risk of bankruptcy and, just as bad as far as you are concerned, ridicule. If you go to Tahiti, are you being rational in so doing?

Bernard Williams suggests that there is no fact of the matter until you have already gone to Tahiti and succeeded or failed. That is, in some respects, a truly scary idea. I have proposed an idea which is not as scary, but might be, to some philosophers, more annoying: there is a fact of the matter, but you, the agent in the story, can’t know it – not before you succeed or fail and, in many circumstances, not afterwards either. When I say you can’t know it, I do mean to imply that no theory of rationality can guide you into this kind of knowledge. This, however, does not mean that there can’t be a good theory of whether, given that certain beliefs, desires, emotions, etc. are in your head, you would be rational or not in going to Tahiti. If I know what’s in your mind – perhaps because I am an omniscient narrator and I made you up – then I do, given the right theory, know before you leave the house whether or not you are being rational. Since you are not akratic in this story, the more precise question is likely whether your action is based on an irrational belief about your talent or the chances that the journey to Tahiti would help.

A rule that tells you not to start a career as a painter unless you are reasonably convinced that you are a great painter, says Williams, would be pretty much unusable. To continue his thought: it would be unusable because being “reasonably convinced” is indistinguishable, in terms of how it feels to the agent, from being unreasonably convinced. Even the best artists are not reliable or rational witnesses to the quality of works they produce, and being convinced of your greatness through wishful thinking – perhaps intertwined with some midlife crisis and being sick of your job – does not always feel any different from being convinced rationally. It is in the nature of epistemic irrationality – for the moment, let’s stick to epistemic irrationality – that there are limits on your ability to know whether you are irrational or not, to the point that sometimes it’s simply impossible for you to know it. Think about the sort of irrationality that originates in depression or anxiety or insecurity, the sort that originates in intoxication or sleep deprivation, the sort that originates in schizophrenia. Take depression as an example:

Tristan: I am a terrible person.

You: Why?

Tristan: I forgot to buy milk today.

You: That doesn’t make you a horrible person.

Tristan: You are just saying it to be nice.

You: My roommate also forgot to buy milk yesterday. Does it mean she is a terrible person?

Tristan: No!

You: Well, then –

Tristan: I don’t know your roommate. She is probably just fine. But given all I know about myself, forgetting the milk is just a symptom of how horrible a person I am.

You: What you know about yourself? Like what?

Tristan: I used a horrible mixed metaphor in pro-seminar today. It was embarrassing. I am clearly wasting the money of the people paying for my fellowship. I should stop committing this crime.

You: You are a really good student. All your teachers say so.

Tristan: They are wrong. No, seriously, I have given a lot of thought to that.

You can argue till the cows come home, but Tristan is, as far as he can tell, “reasonably convinced” that he is an all-around horrible person and a failure at all he does. He has thought about it a lot. Advice along the lines of “do not quit the program unless you are reasonably convinced that you are not a good student” would be wasted on him.

There are, to be sure, some heuristics that improve a moderately irrational person’s chances of diagnosing herself. A lovely Eastern European saying I was taught as a child was “if three people say you are drunk, go to sleep!”. I am sure the saying has rescued some people who knew it from major debacles. It also failed to rescue many others who knew it: perhaps they were so drunk they could no longer count to three, or perhaps they were merely tipsy enough to think “Oh, yes, three people say I’m drunk, sure, but Yuan and Liz always agree with each other, so they really should count as the same person, right?”. So basically I’m saying that no putative “rational agent’s manual” can be expected to guarantee its follower rational belief (and thus, action based on rational belief) because it cannot guarantee that the agent won’t be drunk, or depressed, or any number of things that can sneak up on her, at the time she consults the manual.

So, I’m worried about the claim that all epistemic norms need to be “follow-able” or that when they are unfollow-able to one, one is not to be charged with irrationality for not, well, following them. One reason I decline to adopt the bright shiny new expression “epistemically blameworthy” in place of the dry-as-dust, old-style expression “epistemically irrational” is that it obscures an unfortunate Williams-esque fact: epistemic life is unfair. Epistemic irrationality is both a failure to respond to reasons and a predicament that can be forced on one – say by putting a drug in one’s coffee or by taking away the prescription drug one usually puts in one’s coffee. We feel compassion for Tristan and do not, hopefully, “blame” him for anything, as his condition is “not his fault”, but we do treat the reasoning implied in “I forgot to buy milk so I’m a terrible person” as flawed and as a symptom of irrationality.

Some would find it disturbing – not just annoying – to think that unfairness is implied by epistemic norms. But should it really be so disturbing? It shouldn’t be remotely as disturbing as a suggestion that unfairness is implied by moral norms. The connection between fairness and morality is pre-theoretical and intuitive, at least in the sense that people would agree that being fair is part of being moral, an unfair action is immoral, and fairness is particularly important when it comes to punishment and other actions related to blame, as in moral blame. It “just seems” unfair to say that something is ever both (morally) blameworthy and a predicament that isn’t the agent’s doing (“not her fault”), and you don’t need to be a philosopher to think that. On the other hand, the idea that it is always unfair to say that something is both epistemically irrational and not the agent’s doing is an idea rarely spotted in the wild, a postulate of (only some) sophisticated theories of normativity that require that epistemology and ethics be similar, analogous, with isomorphic components: blame here and blame there. Non-philosophers would raise their eyebrows at the sentence “it’s not her fault she is blameworthy”, but “it’s not his fault that he is irrational” would seem fine to them. The asymmetry that bothers some theorists won’t normally be an issue for them. Judgments of irrationality can be “fair” or “unfair” in the sense of “accurate” or “inaccurate”, or in the sense of “biased” or “unbiased”, but when we say Tristan is irrational, even though he didn’t bring his depression about, we are not being unfair – we are just pointing out that life is.

P.S. I think one complication is that one ultimately needs to distinguish rationality from intelligence, and drugs that promote/impair one of them or the other. A 13-year-old is mostly smarter than a 10-year-old, but less rational. See:

P.P.S. Can “epistemically blameworthy” be a good title for a person who neglects to google, go to the library, or deliberate long enough as she tries to figure something out? After all, she neglected to do something that was under her control. Well, I can see why one might want to use the term this way, but I think deep down the problem with her is that she is practically irrational in her search for knowledge. See:

Notes from a Character

When was the last time you tried to get rid of one habit you had that you didn’t like? How easy was that?

That’s normally my first response when people ask me whether I think we can change our characters intentionally, through trying. I don’t know why this question is so often addressed to me. Perhaps it’s because people think I’m a virtue ethicist. Maybe it’s because I have gone through a more thorough intentional change in my character than most people I know have. Those of us who successfully made such a change, perhaps even more than those who tried and failed, know that many philosophers speak about “cultivating” our characters or “managing” our characters in too casual a way. Intentionally changing character is hard. It’s God damn hard. It’s @#$%&! hard.

My name is Nomy and I’m too candid. However, I am, it seems, employable (even interview-able! If you are also too candid, you know that can be much harder). I also have great friends. These things weren’t true back when I was a teenager capable of telling a person that he is ugly without feeling anger at him or expecting him to be insulted. Mine was a case of grand social incompetence that today would have gotten me diagnosed as “on the spectrum” very quickly (erroneously, I hasten to add: my problem was bad upbringing, in a broad neo-Aristotelian sense). Every step of the many-years-long journey from there to minimal practical wisdom was the result of gargantuan effort. It sounds like I’m bragging, and in a way I am, but what I want to get across most of all is the frustrating difficulty of it, the fumbling, the repeated not-even-close failures, the times you think you have finally become an agreeable human only to discover that you once more offended someone you hadn’t the slightest desire to hurt, or that yet another person said “ah, her? we thought she might be difficult” – without your having any idea why. It was exhausting – and we’re talking getting from utterly clueless to merely too candid; we are not talking becoming a person worthy of raising a flag with a red maple leaf on it, say, or the kind of diplomatic person that a woman is still (unfairly) expected to be.

So, intentional character change: possible but insanely hard, requires help from others, isn’t just a matter of practice through repeated action, and should not be talked about lightly, as in suggesting that every time a person is blameworthy for an akratic action what they are really blameworthy for is not having, some years earlier, done the obvious thing and gone to character school, where remedial courses are always available for free. But sometimes people ask me whether my (and Schroeder’s) view allows for people intentionally becoming more virtuous. For Tim and me, to be virtuous is to have good will – want the right things, de re – and not to want the wrong things, de re. If you don’t like desires, you can have a pretty similar view involving concerns or cares otherwise interpreted. Your intrinsic desire situation likely matters not only to your patterns of behavior but to your cognition as well (e.g., if you want humans not to suffer, you are more likely to notice the sad person standing in the corner; if you want equality, you are more likely to notice that a movie is racist), but nobody, strictly speaking, is morally virtuous just because of a cognitive talent or morally vicious just because of a cognitive fault. Being capable of noticing the sad person in the corner because you’re an observant novelist scores you no moral points, and being incapable of noticing racism in a movie because you came from far away and don’t get the cultural references does not lose you any. By this measure, it is likely that I did not become more virtuous than I used to be. My quality of will didn’t change – my cognitions and habits did. So, in the strict Arpaly/Schroeder sense, is it possible to change from a not-so-virtuous person into a virtuous one, intentionally?

Seems like that would be impossible, paradoxical, self-defeating. The virtuous person is defined by the things she intrinsically desires, or if you prefer, what she cares about. She desires, let’s say, that humans be safe from suffering, that people be treated equally, that she not lie – the details would vary depending on what the best normative theory tells us morality is about. Simply acting out of a desire to be virtuous (de dicto) is not virtuous. In fact, even acting out of a desire to be virtuous de re is not virtuous: the right reason to save a person from a fire is that he not suffer or die, not that you, the agent, be compassionate, and thus the virtuous person would act out of a desire to prevent suffering or death, not out of a desire to have the virtue of compassion. Self-defeating, right?

Except not really. Peter Railton pointed out that this kind of thing looks paradoxical in theory only if we ignore the various ways in which we can act upon ourselves in practice (his examples: the hedonist who decides he needs non-selfish desires in order to be happy, the ambitious tennis player who needs a bit less focus on winning, a bit more love of the game in order to win). Imagine a person who wants to be virtuous, who roughly knows (or has true beliefs regarding) what virtue is about (some other time about the person who doesn’t), but does not have the desires or cares of the virtuous person. More realistically, she has some of them, to some extent, but she falls short of what we are willing to call virtue. At first, her actions will not be expressions of virtue, but intrinsic desires do change, however slowly or gradually. They often spontaneously develop out of a more derivative form of desire: you want to learn philosophy in order to do well in law school, and by the end of the course you want it intrinsically. You start playing baseball to please your parents and find yourself continuing to do it long after they have died. If virtue is about desires or cares, it stands to reason that sometimes you can start out volunteering at a homeless shelter because you get a warm and fuzzy feeling from thinking of yourself as virtuous, or even because you get a warm and fuzzy feeling about others believing that you are virtuous, and then find yourself attracted instead to the grateful looks of some of the people in the shelter, and who knows, as the makeup of your motives shifts, find yourself moved to help when nobody is there to praise you. In ancient Jewish sources, much importance is attributed to studying the Torah for its own sake, the only praiseworthy way to do it. However, the advice for the person who cannot muster such pious motivation is to start with ulterior motives; the intrinsic ones will “come”. I like this attitude.
It doesn’t always work, oh no, but it strikes me as more likely to work than the practice of scrutinizing people’s motives – oneself or others – and verbally skewering them if one suspects any “virtue signaling” in the mix. Incidentally, Thomas Hill has a great article on how even Kant, the guy who brought us moral worth, didn’t like the scrutinizing thing – and he didn’t even believe in mixed motives!

So, granted: hard as it is to intentionally acquire or ditch habits of thought or action, it seems even harder to intentionally acquire or ditch an intrinsic desire. Ever tried making yourself a lover of movies when you totally aren’t one, or getting rid of that desire to be tall? But there is no paradox involved, merely an “empirical” difficulty. Such difficulties can be tragic enough, but there is no need to deny that sometimes people intentionally become somewhat more virtuous than they were before. Not by sheer act of will, but by such things as hanging out with virtuous people and having it rub off on you, finding optimistic types who “believe in you” and seeing if you will automatically rise to meet their expectations, following the Talmudic advice to start from exciting ulterior motives and hope for the best, reading and watching memorable and vivid representations of the point of view of those whom your actions affect. Prosaic takes on human nature, which take moral motivation to be similar to philosophy-studying motivation or baseball-playing motivation or whatever, can be depressing, but they can be rather comforting on those occasions in which prosaic methods work. I can’t pretend any other kind of method worked for me.