Don’t Worry, Be Practical: Two Raw Ideas and a PSA

1) If you are not too superstitious – I almost am – imagine for a moment that you suffer from cancer. Imagine that you do not yet know if the course of treatment you have undergone will save you or not.  You sit down at your doctor’s desk, all tense, aware that at this point there might be only interim news – indications that a good or a bad outcome is likely. The doctor starts with “well, there are reasons to be optimistic”. Though you are still very tense, you perk up and you feel warm and light all over. You ask what the reasons are. In response, the doctor hands you a piece of paper: ironclad scientific results showing that optimism is good for the health of cancer patients.

Your heart sinks. You feel like a victim of the cruelest jest. I’ll stop here.

Some philosophers would regard what happens to you as being provided with practical reasons to believe (that everything will turn out alright) when one desperately hoped for epistemic reasons to believe (that everything will turn out alright). I, as per my previous post, think what happens is being provided with practical reasons to make oneself believe that everything will turn out alright (take an optimism course, etc.) when one desperately hoped for epistemic reasons to believe that everything will turn out alright – I think the doctor says a false sentence to you when she says there were reasons to be optimistic. For the purpose of today it does not matter which explanation is the correct one. The point is that when we theorize about epistemic reasons and putative practical reasons to believe, the philosopher’s love of symmetry can draw us towards emphasizing similarities between them (not to mention views like Susanna Rinard’s, on which all reasons are practical), but any theory of practical and epistemic reasons heading in that direction needs a way to explain the Sinking Heart intuition – the fact that in some situations, receiving practical reasons when one wants epistemic reasons is so incredibly disappointing, so not to the point, that even if the practical reasons are really, really good, the best ever, it is absurd to expect that any comfort, even nominal comfort, be thereby provided to the seeker of epistemic reasons. What the doctor seems to offer and what she delivers seem incredibly different. The seeming depth of this difference needs to be addressed by anyone who emphasizes the idea that a reason is a reason is a reason.

To move away from extreme situations, anyone who is prone to anxiety attacks knows how $@#%! maddening it is when a more cheerful spouse or friend tells you that it’s silly of you to worry and then, when your ears perk up, hoping for good news you have overlooked, gives you the following reason not to worry: “there is absolutely nothing you can DO about it at this point”. Attention, well-meaning cheerful friends and partners the world over: the phrase “there is nothing you can do about it” never helps a card-carrying anxious person worry less. I mean never. It makes us worry more, as it points out quasi-epistemic reasons to be more anxious. “There’s nothing you can do about it now” is bad news, and being offered bad news as a reason to calm down seems patently laughable to us. Thank you! At the basis of this common communication glitch is, I suspect, the fact that practical advice on cognitive matters is only useful to the extent that you are the kind of person who can manipulate her cognitive apparatus consciously and with relative ease (I’m thinking of things like steering one’s mind away from dwelling on certain topics). The card-carrying worrier lacks that ability, to the point that the whole idea of practical advice about cognitive matters isn’t salient to her. As a result, a statement like “it’s silly of you to worry” raises in her the hope of hearing epistemic reasons not to worry, which is then painfully dashed.

2) A quick thought about believing at will. When people asked me why I think beliefs are not voluntary in the way that actions are, I used to answer in a banal way by referring to a (by now) banal thought experiment: if offered a billion dollars to believe that two plus two equals five, or threatened with death unless I believe that two plus two equals five, I would not be able to do it at will. A standard answer to the banal thought experiment points out that actions one has overwhelming reasons not to do can also be like that. Jumping in front of a bus would be voluntary, but most people feel they can’t do it at will. But I think this answer, in a way that one would not expect from those who like the idea of practical reasons to believe, greatly overestimates how much people care about the truth. Let me take back the “two plus two” trope, as not believing that two plus two equals four probably requires a terrifying sort of mental disability, and take a more minor false belief – say, that all marigolds are perennial. If offered a billion dollars, I would still not be able to believe at will that all marigolds are perennial. To say that this is like the bus case would pay me a compliment I cannot accept. One has a hard time jumping in front of a bus because one really, really does not want to be hit by a bus. One wants very badly to avoid such a fate. Do I want to believe nothing but the truth as badly as I want to avoid being hit by a bus? Do I want to believe nothing but the truth so badly that I’d rather turn down a billion dollars than have a minor false belief? OK, a false and irrational belief. Do I care about truth and rationality so much that I’d turn down a billion dollars not to have a false and irrational belief? Nope. Alternatively, one might hold that jumping in front of a bus is so hard because evolution made it frightening. But am I so scared of believing that all marigolds are perennial? No, not really. I am sure I have plenty of comparable beliefs and I manage. I’d rather have the money. I would believe at will if I could. It’s just that I can’t. We are brokenhearted when we get practical reasons when we want epistemic reasons, but the reason isn’t that we are as noble as all that.

Beliefs, Erections and Tears, Or: Where are my Credence-Lowering Pills?

People who are not philosophers sometimes “come to believe” (“I’ve come to believe that cutting taxes is not the way to go”) or “start to think” (“I’m starting to think that Jolene isn’t going to show up”). Philosophers “form beliefs”. People who are not philosophers sometimes say “I have read the report, and now I’m less confident that the project will succeed”, and sometimes write “reading the report lowered my confidence in the success of the project”. Philosophers say “I have read the report and lowered my credence that the project will succeed”.

In other words, philosophers talk about routine changes in credence as if they were voluntary: I, the agent, am doing the “forming” and the “lowering”. That is kind of curious, because most mainstream epistemologists do not think beliefs are voluntary. Some think they sometimes are – perhaps in cases of self-deception, Pascal’s wager, and so on – but most take them to be non-voluntary by default. Very, very few hold that just like I decided to write now, I also decided to believe that my cats are sleeping by my computer. Yet if I were their example, they would say “Nomy looks at her desk and forms the belief that her cats are sleeping by the computer”. If I say anything to them about this choice of words, they say it’s just a convenient way to speak.

John Heil, wishing to be ecumenical in “Believing What One Ought”, says that even if we think that belief is not, strictly speaking, voluntary, we can agree that it is a “harmless shorthand” to talk about belief as if it were voluntary (that is, to talk about it using the language of action, decision, responsibility, etc.). Why? Because it is generally possible for a person to take indirect steps to “eventually” bring about the formation of a belief.

OK then – so erections are voluntary! Let’s use the language of action, decision, and responsibility in talking about them! No, seriously. It is possible for a man to take indirect voluntary steps to bring it about that he has an erection. And yet, if I teach aliens or children an introduction to human nature, I do something terribly misleading if I talk about erections as if they were voluntary.

Another example: suppose Nomy is a sentimental sop and hearing a certain song can reliably bring her to tears. Heck, just playing the song in her head can bring her to tears. There are steps Nomy can take to cause herself to burst out in tears. Still, again, in that Intro Human Nature course one cannot talk about outbursts of tears as if they were voluntary without being misleading.

Last, would any self-respecting action theorist say that the phrase ‘she raised her arm’ can be a ‘harmless shorthand’ way to describe a person who caused her arm to jerk upwards by hooking it to an electrode? Or, worse, a “harmless shorthand” by which to describe a person whose arm jerked due to a type of seizure but who easily could have caused such jerking through using electrodes or prevented it through taking a medication?

In all of these cases, a “shorthand” would not be harmless – for two reasons. The more banal one is that when eliciting intuitions about birds, you don’t want the ostrich to be your paradigm case. Most erections, outbursts of tears, and arm-convulsions are not the result of intentional steps taken to bring them about, and there is a limit to what we can learn about them from the few cases that are. The same is true for most beliefs. Many of my beliefs are the result of no intentional action at all – my belief that the cats are here came into being as soon as they showed up in my field of vision, my belief that there are no rats in Alberta came into being as soon as Tim Schroeder told me – and other beliefs I have are the result of deliberation, which is an action, but not an intentional step taken to bring about a particular belief. (So my belief that the ontological argument fails was not the result of intentional steps taken to make myself believe that the ontological argument fails.) Whatever intuitions one might have about non-paradigmatic cases, like Pascal’s wager, could easily fail to apply to the many cases in which no step-taking has preceded a belief.

But to talk of erections and tears as if they were voluntary is also dangerous for a deeper reason, a reason that has nothing to do with the frequency of indirect-steps cases. Even if the majority of tears, erections, or convulsions were indirectly self-induced, there is still a difference between the action taken to induce the erection, tears or seizure and the thing that results from the action. If the former is voluntary, that alone doesn’t make the latter voluntary. Similarly, even if we routinely had the option of making ourselves believe something through the pressing of a button, and we routinely took advantage of this option, there would still be a difference between the act of pressing the button – an action – and the state of believing itself. If we make “pressing the button” a mental action – analogous to intentionally calling to mind images that are likely to produce an erection or an outburst of tears – it hardly matters: the mental action that produces a belief would still be different from its outcome.

Why does it matter? Because we only have practical reasons to do things voluntarily. It seems quite clear that, on those occasions, whatever their number, in which we can, in fact, for real, form a belief, there can be practical reasons to form that belief or not to form it, and it seems only slightly less clear that sometimes it could be rational to form a belief that clashes with the evidence. This, however, is taken to mean that there are practical reasons to believe. I am working on a paper arguing that this does not work.  We have practical reasons to take steps to bring a belief about. We sometimes have practical reasons to make ourselves believe something, but that’s not the same as practical reasons to believe it. No such thing. Category mistake. This is where one could ask: why not call reasons to create a belief in oneself “reasons to believe?” Isn’t that a harmless shorthand?

I don’t think so. I agree that “reasons to stay out of debt” is a nice shorthand for “reasons not to perform actions that can lead to one’s being in debt and to perform actions that are conducive to staying out of it”, but while “reasons to stay out of debt” just are reasons for various actions and inactions, you can have “reasons to believe” something without any implied reasons for any actions or inactions. Jamal’s being out of debt is an instance of rationality (or response to reasons) on his part iff Jamal’s being out of debt is the intended result of a rational course of action taken by Jamal. Gianna’s belief that Israelis don’t speak Yiddish can be perfectly rational (as in, there for good reasons) even if it’s not the result of any rational course of action taken by her. Perhaps the belief occurred in her mind as the result of unintentionally hearing a reliable person’s testimony, no action required, or was the result of an irrational action like reading Wikipedia while driving; it can still be as rational a belief as they come. When we say “reasons to believe” it is not normally a shorthand for “reasons to make oneself believe”, and so to shorthand “reasons to make oneself believe” to “reasons to believe” is not harmless, but very confusing. To be continued.

Philosophy: Truth or Dare?

Many years ago I was having a long chat with someone who later became a well-known philosopher. His work was already way cool, but looking at the theses he defended, I told him he must be aiming for the Annual David Lewis Award for Best-Defended Very Weird View. He told me that he did not always believe the views he defended. He was most interested in seeing how far he could go defending an original, counter-intuitive proposition as well as he could. What did I think? I said that it seemed to me that some philosophers seek the Truth but others choose Dare.

I am more of a Truth Philosopher than a Dare Philosopher, but I doubt it’s a matter of principle, given that my personality is skewed towards candor. I’m just not a natural for writing things in which I don’t have high credence at the time of writing. However, if you are human, should you ever have high credence in a view like, say, compatibilism, which has, for a long time, been on one side of not only a peer disagreement but a veritable peer staring contest? Looking at it from one angle, the mind boggles at the hubris.

Zach Barnett, a Brown graduate student, has been working on this and has a related paper in Mind. I asked him to write about it for Owl’s Roost and he obliged. Here goes:

I want to discuss a certain dilemma that we truth-philosophers seem to face. The dilemma arises when we consider disagreement-based worries about the epistemic status of our controversial philosophical beliefs. For example:

Conciliationism: Believing in the face of disagreement is not justified, provided that certain conditions are met.

Applicability: Many/most disagreements in philosophy do meet the relevant conditions.

————————————————————————————————

No Rational Belief: Many/most of our philosophically controversial beliefs are not rational.

Both premises of this argument are, of course, controversial. But suppose they’re correct. How troubling should we find this conclusion? One’s answer may depend on the type of philosopher one is. 

The dare-philosopher needn’t be troubled at all. She might think of philosophy as akin to formal debate: We choose a side, somehow or other, and defend it as well as we can manage. Belief in one’s views is nowhere required.

The truth-philosopher, however, might find the debate analogy uncomfortable. If we all viewed philosophy this way, it might seem to her that something important would be missing – namely, the sincerity with which many of us advocate for our preferred positions. She might protest: “When I do philosophy, I’m not just ‘playing the game.’ I really mean it!”

At this point, it is tempting to think – provided No Rational Belief is really true – that the truth-philosopher is just stuck: If she believes her views, she is irrational; if she withholds belief, then her views will lack a form of sincerity she deems valuable.

As someone who identifies with this concern for sincerity, I find the dilemma gripping. But I’d like to explore a way out. Perhaps the requisite sort of sincerity doesn’t require belief. An analogy helps to illustrate what I have in mind.

Logic Team: You’re on a five-player logic team. The team is to be given a logic problem with possible answers p and not-p. There is one minute allotted for each player to work out the problem alone, followed by a ten-second voting phase, during which team members vote one by one. The answer favored by a majority of your team is submitted.

      Initially, you arrive at p. During the voting phase, your teammate Vi – who, in the past, has been more reliable than you on problems like this one – votes first, for not-p. You’re next. Which way should you vote?

Based on your knowledge of Vi’s stellar past performance, you might suspect that you made a mistake on this occasion. Perhaps you will cease to believe that your original answer is correct. Indeed, you might well become more confident of Vi’s answer than you are of your own.

It doesn’t follow, though, that you should vote for Vi’s answer of not-p. If all you care about is the accuracy of your team’s verdict, it may still be better to vote for your original answer of p.

Why? In short, because there is some value in having team members reach independent verdicts. To the extent that team members defer to the best player, independence is diminished. This relates to the phenomenon known as “wisdom of the crowd,” and, more directly, to Condorcet’s Jury Theorem. But all of this, while interesting, is beside the point.
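
To see why independence matters, here is a minimal simulation sketch, with made-up numbers: assume (arbitrarily) that the four ordinary members each solve problems like this correctly with probability 0.75, that Vi solves them with probability 0.9, and that everyone reasons independently. Under those assumed reliabilities, a majority of five independent verdicts is right more often than a team that always defers to Vi:

import random

TRIALS = 100_000
P_ORDINARY = 0.75  # assumed reliability of each ordinary team member
P_VI = 0.90        # assumed reliability of Vi, the team's best solver

indep_wins = defer_wins = 0
for _ in range(TRIALS):
    # Each member independently reaches the right verdict with their own probability.
    ordinary_votes = [random.random() < P_ORDINARY for _ in range(4)]
    vi_vote = random.random() < P_VI
    # Strategy 1: everyone votes their own verdict; the majority of five decides.
    if sum(ordinary_votes) + vi_vote >= 3:
        indep_wins += 1
    # Strategy 2: everyone defers to Vi, so the team is exactly as accurate as Vi.
    if vi_vote:
        defer_wins += 1

print(f"independent majority: {indep_wins / TRIALS:.3f}")  # roughly 0.93
print(f"defer to Vi:          {defer_wins / TRIALS:.3f}")  # roughly 0.90

The numbers are illustrative only – with sufficiently unreliable teammates, deference to Vi would win instead – but they capture the Condorcet-style point: when each voter is reliably better than chance, aggregating independent verdicts can beat even the best individual.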

In light of the above observations, suppose that you do decide to vote for your original answer, despite not having much confidence in it. There is still an important kind of sincerity associated with your vote: in a certain sense, p seems right to you; your thinking led you there; and, if you were asked to justify your answer, you’d have something direct to say in its defense. (In defense of not-p, you could only appeal to the fact that Vi voted for it.) So you retain a kind of sincere attachment to your original answer, even though you do not believe, all things considered, that it is correct.

To put the point more generally: In at least some collaborative, truth-seeking settings, it can make sense for a person to put forward a view she does not believe, and moreover, her commitment can still be sincere, in an important sense. Do these points hold for philosophy, too? I’m inclined to think so. Consider an example.

Turning Tide: You find physicalism more compelling than its rivals (e.g. dualism). The arguments in favor seem persuasive; you are unmoved by the objections. Physicalism also happens to be the dominant view.

      Later, the philosophical tide turns in favor of dualism. Perhaps new arguments are devised; perhaps the familiar objections to physicalism simply gain traction. You remain unimpressed. The new arguments for dualism seem weak; the old objections to physicalism continue to seem as defective to you as ever. 

Given the setup, it seems clear that you’re a sincere physicalist at all points of this story. But let’s add content to the case: You’re extremely epistemically humble and have great respect for the philosophers of mind/metaphysics of your day. All things considered, you come to consider dualism more likely than physicalism, as it becomes the dominant view. Still, this doesn’t seem to me to undermine the sincerity of your commitment to physicalism. What matters isn’t your all-things-considered level of confidence but, rather, how things sit with you when you think about the matter directly (i.e., setting aside facts about the relative popularity of the different views). When you confront the issues this way, physicalism seems clearly right to you. In philosophy, too, sincerity does not seem to require belief (or high confidence).

In sum, perhaps it is true that we cannot rationally believe our controversial views in philosophy. Still, when we think through the controversial issues directly, certain views may strike us as most compelling. Our connection to these views will bear certain hallmarks of sincerity: the views will seem right to us; our thinking will have led us to them; and, we will typically have something to say in their defense. These are the views we should advocate and identify with – at least, if we value sincerity. 

I find the proposed picture of philosophy attractive. It offers us a way of doing philosophy that is immune to worries from disagreement, while allowing for a kind of sincerity that seems worth preserving. As an added bonus, it might even make us collectively more accurate, in the long run.

That was Zach Barnett. Do I agree with him? As is usual when I talk to conciliationists, I don’t know what to think!

Raw Reflections on Virtue, Blame and Baseball

In a much argued-about verse in the Hebrew Bible, we are told that Noah was a righteous man and “perfect in his generations” or “blameless among his contemporaries” or something like that (I grew up on the Hebrew, and so I can say: the weirdness is in the original). The verse has been treated as an interpretative riddle because it’s not clear what being “blameless among one’s contemporaries” amounts to. Was the guy really a righteous person (as is suggested by the subsequent text telling us that he walked with God) or was he a righteous person only by comparison to his contemporaries, who were dreadful enough to bring a flood on themselves?

My friend Tim Schroeder would probably have suggested that, given his time, Noah must have had an excellent Value Over Replacement Moral Agent. It’s kinda like Value Over Replacement Player. Here’s how Wikipedia explains the concept of Value Over Replacement Player:

In baseball, value over replacement player (or VORP) is a statistic (…) that demonstrates how much a hitter contributes offensively or how much a pitcher contributes to his team in comparison to a fictitious “replacement player” (…) A replacement player performs at “replacement level,” which is the level of performance an average team can expect when trying to replace a player at minimal cost, also known as “freely available talent.”

Tim and I have been toying with the idea that while rightness, wrongness and permissibility of actions are not the sort of things that depend on what your contemporaries are doing, ordinary judgments of the virtue of particular people (“she’s a really good person”, “he’s a jerk”, and so on) are really about something akin to a person’s Value Over Replacement Moral Agent or VORMA. The amount of blame one deserves for a wrong action or credit for a right action also seems to be at least partially a matter of VORMA. Thus a modest person who is thanked profusely for his good action might wave it off by saying “come on, anyone would have done this in my place”, while a defensive person blamed emphatically for her bad action might protest, “I’m no worse than the next person”. Both statements allude to a comparison to a sort of moral “replacement player” – an agent who would, morally speaking, perform at “replacement level”, the level we would expect from a random stranger, or, more likely, a random stranger in a similar time, place, and context – whom we would regard as neither morally good nor morally bad.

I have been reading a cool paper by Gideon Rosen on doing wrong things under duress. A person who commits a crime under a credible threat of being shot if she refuses to commit it seems to be excused from blame, Rosen says, even if, as Aristotle would have it, the person acted freely, or, as contemporary agency theorists would have it, the person acted autonomously. The person who commits a crime so as not to be killed is not necessarily acting under conditions of reduced agency, so where does the excuse come from? Rosen thinks, as I do, that excuses are about quality of will, and argues that the person who acts immorally under (bad enough) duress does not, roughly, show a great enough lack of moral concern to justify our blaming her in the Scanlonian sense of “blame” – that is, socially distancing ourselves from her. Simply falling short of the ideal of having enough moral concern to never do anything wrong does not justify such distancing.

Without getting into the details of Rosen’s view, I would not be surprised if this has something to do with VORMA as well. Even in cases in which a person who commits a crime to avoid being killed acts wrongly, and I agree with Rosen that there are many such cases, the wrongdoer does not usually show negative VORMA. If I were to shun the wrongdoer, I would arguably be inconsistent insofar as I do not shun, well, typical humanity, who would have acted the same way. I suspect that even if I happened to be unusually courageous, a major league moral agent, and escaped my own criteria for shunning, there would still be something very problematic about shunning typical humanity.

VORMA might also explain the ambivalence we feel towards some people whom it is not utterly crazy to describe as “perfect in their generations” or “blameless among their contemporaries”, like Noah. “My grandfather was a really, really good person!”, says your friend. She forgets, when she says it, that she thinks her grandfather was sexist in various ways – though, to be sure, a lot less so than his neighbors. Heck, she forgets that by her own standards, eating meat is immoral, and her grandfather sure had a lot of it. But unlike the Replacement Player in baseball, who is clearly defined in terms of the average performance of players you would find on second-tier professional teams, our choice of pool of imagined Replacement Moral Agents seems inevitably sensitive to pragmatics and contexts. Your friend’s grandfather had magnificent VORMA if all the bad things he did were done by almost everyone in his demographic and time period and if he often acted well where almost none of them would have. While we might have useful ideals of virtuous people who always do the right thing, the phrase “wonderful person” when applied to a real human might normally mean something more analogous to a star baseball player. As we know, such players get it wrong a lot of the time!

PS Eric Schwitzgebel has very interesting related work about how we want “a grade of B” in morality.

PPS For why I don’t think the grandfather is simply excused from his sexism by moral ignorance, see my paper “Huckleberry Finn Revisited”.

Kantianism vs Cute Things

“We love everything over which we have a decisive superiority, so we can toy with it, while it has a pleasant cheerfulness about it: little dogs, birds, grandchildren”.

Immanuel Kant

I don’t normally argue for or against Kant, recognizing that figuring out exactly what he means takes expertise I don’t have. I normally argue with contemporary Kantians, because if I don’t get what they mean, I can email them and ask, or they can tell me I’m wrong in Q&A. Yet I can’t resist the quote above. It is, of course, offensive to grandparents everywhere, and to anyone who has ever valued the love of a grandparent. See, your grandparents “loved” you because you were so small and weak and they could toy with you and relish being on the right side of the power imbalance between you. It doesn’t sound like love to me. It sounds like some kind of chaste perversion.


The Problem With Imagining (2): Simulation, Tragedy and Farce

When you try to understand a person, you imagine yourself in her situation – some psychologists call this “simulation”. I tentatively use the term “Runaway Simulation” for the countless cases in which a reasonable working assumption – “the other person thinks and feels the way I would have thought and felt if I were in their situation” – morphs into a stubborn belief that persists despite loads of glaring counter-evidence.

Sometimes it’s nearly harmless: you love looking at pictures of your children and can’t imagine anyone could fail to enjoy pictures of your children, so you post too many baby pictures on Facebook. You are a ravenous person and so you doubt anyone, however generally honest, who claims to be full after a salad. You are an organized person and you ask someone like me for her flight itinerary six months in advance, despite your experience with her disorderly lifestyle. But things can get trickier. You meet a person who claims not to want children, and you can’t imagine not wanting children, so you come up with some other explanation for her having no children and claiming she doesn’t want them. Perhaps she had a bad mother and is afraid she might be a bad one too? Perhaps she is afraid of commitment in general? Perhaps her romantic partner is wrong for her, and not wanting children is her unconscious’s way of telling her the relationship isn’t working? You violate Ockham’s Razor like nobody’s business, because the best explanation is under your nose: she just doesn’t want children. This, however, you can’t imagine, and we humans trust our imaginations a lot. Like a twisted Holmes, you accept an improbable story because the alternative seems impossible, and some profound misunderstandings begin that way.


What Kantianism Gets Wrong

With regard to moral theory I have two hunches. One is that the wellbeing of one’s fellow humans is an intrinsic moral value. Intrinsic not only in that a moral agent will care about it for its own sake but also in that its value is not derivative from other values, like, say, that of rational agency. So Kantianism is false. The other is that the wellbeing of one’s fellow humans isn’t the only intrinsic moral value.  There are virtues that are independent of  benevolence, and respect, of the sort that makes paternalism wrong, is one of them. So utilitarianism doesn’t work either.

But after decades of Kantian dominance in analytical ethics, some of us have become used to thinking of concern for the wellbeing of others as a coarse, primitive virtue befitting swine and Jeremy Bentham, unless it is somehow mediated by, derived from, or explained through something more complicated and refined, like the value of rational agency.

Suppose one is roughly Kantian. Reverence for rational agency is the one basis of morality as far as one is concerned, where rational agency is thought of roughly as the capacity to set ends. What to do with the sense that benevolence is a major part of morality? The answer seems to be “think of benevolence in terms of a duty to adopt and promote other people’s ends”. Now suppose that, as many contemporary Kantians do,  you reject the idea that adopting and promoting a person’s ends is the same thing as protecting her wellbeing – after all, most humans have some ends for which they are willing to sacrifice some wellbeing. In this case, what you say is that at the heart of benevolence we have a duty to adopt and promote people’s ends. We also have a duty to protect human wellbeing because, even though it’s not the only thing people care about, it is a very important end for all agents.

I don’t think this works, though. My argument goes like this:

  1. If the reason protecting a person’s wellbeing is important is purely the fact that her wellbeing is an important end to her and we have a duty to adopt her ends, then it would be of at least equal moral importance to protect any end that is at least equally important to her.
  2. Protecting an agent’s wellbeing is something we are morally called upon to do in some cases where, other things being equal, we would not be called upon to protect her pathway to achieving another equally important (to her!) end.

Therefore, it is false that the reason protecting a person’s wellbeing is important is purely the fact that her wellbeing is an important end to her and we have a duty to adopt her ends.

Let me talk about premise 2 and why it’s plausible.

Take a case where an economically comfortable person, let’s call her Mercedes, is asked for help by her desperate acquaintance, Roger. She can, by paying 50 dollars, rescue him from being beaten up. If beaten up, Roger would suffer pain and then have to spend some days in a hospital, but he is not going to be killed. I am trying to stick to 1000 words so let me just promise you I have a half-way-realistic case. Now imagine an alternative scenario in which a person – call him Leonard – asks Mercedes for 50 dollars because without them, a great opportunity to travel and spread his Mennonite religion will have to be relinquished. Leonard’s end (spreading his religion) is at least as important to him as Roger’s end (not being beaten up) is to Roger, and more important to Leonard than Leonard’s own wellbeing – he is willing to suffer for it if needed. For all Mercedes knows, spreading Leonard’s religion is itself strictly morally neutral – she has no particular reason to spread it independently of him.

There is an asymmetry between the cases. In the first scenario, Mercedes would display a lack of benevolence – perhaps of decency! – if she were to refuse to rescue Roger from a beating by giving him $50, given that this would be easy for her, no harm would be caused by it to anyone, etc. In the second scenario there is no such presumption. If Mercedes likes Leonard’s cause, it makes sense for her to make a donation. If she’s indifferent to his cause, no compelling reason to donate is provided by the very fact that Leonard would be ready, if worst comes to worst, to suffer for his cause. Unless she fears for his wellbeing – fears, for example, that Leonard is in bad shape and will plunge into a horrible depression if she declines – Mercedes is not any less of a good Samaritan, and certainly isn’t a sub-decent Samaritan, for not wanting to donate to another’s morally neutral cause, however crucial her donation would be to the cause.

If all that made Roger’s wellbeing matter morally was its importance to him as an end, she would have as much of a duty to help Leonard.

Some Kantians would reply that what matters here isn’t protecting Roger’s wellbeing but the fact that Roger might lose rational agency. Roger, however, is not in danger of death or brain damage. He might suffer pain, but it takes a truly extreme amount of suffering to deprive someone of basic human rationality. His ability to perform successful actions will be impaired for a few days, but being a rational agent is not about being a successful performer of actions – it is about being responsive to practical reasons. It would be quite wrong to say that anyone with whom the world does not collaborate – because of an injury, or due to being in chains for that matter – is thereby not a rational agent. Furthermore, preventing a few days of suffering is more morally urgent than preventing a few days of involuntary deep sleep with no significant harm expected, though involuntary sleep deprives you of agency if anything does.

There is something special about wellbeing.

Just the Booze Talking

I still think, contra many agency theorists, that when you act and feel that it’s not really you acting, as happens when you think “it’s the anger talking, not me”, when you feel passive with regard to your action, “possessed” by some alien motivation – that feeling, however disturbing, doesn’t mean that your action is in any way not yours. Or less yours. Or less deeply yours. Or not a full-blooded action. It offers no deep insight into agency. It’s just…a feeling.

Does everyone experience this feeling, sometimes called “alienation”, salient to eminent philosophers from Harry Frankfurt on? No, some people assure me they don’t.


Raw Reflections on Rationality and Intelligence + Two Cat Pictures

So I have two cats. One is a British Shorthair named after the very English Philippa Foot. The other is an Ocicat named Catullus, after Gaius Valerius Catullus, an ancient Roman poet some of whose stuff is decidedly Not Safe For Work. I often refer to the two as “the irrational animals” – as in “the irrational animals are hungry” or “thank you for taking care of the irrational animals” – but I suspect this is just an Aristotelian slur. They are probably more rational than I am, though I am surely smarter.

Can you be smarter but less rational? I hear epistemologists talk as if you can’t. But you can, easily. Consider a mentally healthy child of 11. Imagine the same child at 14. She has gotten smarter, but probably less rational.


Motivation Without Charm

O Duty,
Why hast thou not the visage of a sweetie or a cutie?

 That’s Ogden Nash. Now Kant:

Duty! Sublime and mighty name that embraces nothing charming or insinuating but requires submission, and yet does not seek to move the will by threatening anything that would arouse natural aversion or terror in the mind but only holds forth a law that of itself finds entry into the mind and yet gains reluctant reverence (though not always obedience), a law before which all inclinations are dumb, even though they secretly work against it; what origin is there worthy of you, and where is to be found the root of your noble descent which proudly rejects all kinship with the inclinations, descent from which is the indispensable condition of that worth which human beings alone can give themselves?

 (Exhale)

Kant appeals powerfully to the sense that doing the right thing often feels different from doing something you want to do. The Neo-Humean – as in, one who thinks moral motivation, like other important motivation, is based on desire – is often asked: if, as a good person, you do right because you want to do right (de dicto or de re, doesn’t matter for the moment), why doesn’t doing right because you want to do right feel like going to the beach because you want to go to the beach?

Fair question. Let me have a go.
