Kantianism for Wimps

In a paper that’s going to be in the supplemental volume of the Proceedings of the Aristotelian Society – or maybe it’s already there? – I argue that the moral imperative to help those who are doing badly cannot be properly accounted for by positing a Kantian duty to adopt and promote the ends of other agents. I pick on contemporary Kantians, not on Kant, and I don’t defend utilitarianism or Aristotelianism, neither of which is my view. Taking it to be true that agents have many ends that are distinct from their wellbeing – even ignoring specifically moral ends – I argue that a person’s end of avoiding ill-being is significant to moral people in a way that her other ends are not, and that the difference cannot be traced simply to the fact that for most people, avoiding misery is a particularly important end. That is, we are often morally obligated to – or at least have a significant pro tanto reason to – protect a person’s wellbeing where we would have no such obligation or pro tanto reason to protect another end of hers, even if she values and prioritizes that end as highly as anyone does their wellbeing, and more than she cares about her own wellbeing. The 1000(ish)-word version of the argument is here on this blog, under the heading What Kantianism Gets Wrong.

Now, Herman offers an ingenious Kantian explanation of why we seem to have “a duty of easy rescue” – a duty to help the person bleeding by the side of the road if it is not difficult to do so. My undergrads point out that it would be very weird to say here that our duty to help others is “imperfect”, as one cannot avoid the duty of easy rescue simply by having already performed a hundred other easy rescues this year. To such an undergrad Herman says: fair enough, the imperfect duty to provide mutual aid does not provide a good explanation for the duty of easy rescue. The duty of easy rescue is a separate one, it cannot be avoided through “I already gave at the office”, and it kicks in in places where, without your help, the person in question is in danger of losing her rational agency. In my paper I argue that this move would not be enough by way of an answer to me, because there exist cases in which we are obligated to help a person if we can do so easily in which the person does not face a danger of losing her rational agency. She is not going to die, she is not going to suffer from a severe neurological or psychiatric disorder; she’s just going to suffer serious ill-being if you don’t, say, call 911. Furthermore, there are cases that fall short of emergencies in which we do still have significant reasons to help alleviate misery – and no comparable reason to protect or promote any number of ends other than wellbeing that the agent might have. We still need an explanation for the importance of wellbeing beyond the importance of ends.

This is where many a Kantian has told me the following: the wellbeing of one’s fellow humans is morally valuable because wellbeing is required for rational agency.

Such a view, I fear, might insult some badly-off people.

If one is to think that rational agency is required for full-fledged moral status (by which I mean having all the rights that adults have, including rights not to be “paternalized”), one cannot have too demanding an idea of rational agency in mind. There is a legitimate sense of “rational” in which very, very few people are rational, but it cannot be that only these few have full-fledged moral status. Pretty much everyone whom it is immoral to hospitalize forcibly has full-fledged moral status, and so pretty much everyone whom it is immoral to hospitalize forcibly should count, for Kantian purposes, as a rational agent.

There are, to be sure, cases where either a person suffers such extreme ill-being that she loses her rational agency or – more often, I suspect – the same extreme condition causes both ill-being and loss of rational agency. Torture is exhibit A. There is also malnutrition bad enough to harm the brain and severe mental conditions such as psychosis or severe depression.

However, protecting a person’s wellbeing is a morally important consideration in cases that are not like that at all. My example of Roger in the prior post is not an unusual one. If it’s not too hard, I have a duty to help a person who has a broken leg or who, without my intervention, might needlessly break a leg. Breaking a leg does not make anyone lose their rational agency.

Humans are capable of staying no-less-rational-than-average in pretty extreme conditions. It seems pretty clear from Primo Levi’s memoirs of Auschwitz that he was, while there, a rational agent. Admittedly, Levi strikes the reader as heroic, and admittedly, he became depressed and suicidal in his old age, in what is easy to imagine as trauma catching up with him, but you get my point. There are many people on this earth who are quite miserable and as capable of responding to practical reasons as any citizen in the kingdom of ends. If you can respond properly to practical reasons, aka set ends (which isn’t the same as achieving your ends, which requires the world collaborating with you), then your rational agency isn’t lost.

Perhaps a Kantian might say that for ill-being to be morally important it needn’t be the case that the sufferer lose their rational agency: it is enough that the sufferer be seriously impaired in their ability to exercise rational agency – that is, their ability to promote many of their ends. This kind of move certainly adds more conditions to our list of conditions under which you have a strong reason to help a person. A person struck by a severe physical illness that doesn’t hurt the brain retains her rational agency but loses efficacy in achieving a large number of ends that she might have. That’s one reason there’s a drop in her wellbeing in the first place. However, it is not that hard to imagine cases in which a person’s predicament only seriously impairs him in achieving one of his ends: his wellbeing. That’s true in the case of Roger, and it can be true even in cases of people whose ill-being is lifelong. Some humans who are deeply miserable – due to having gone through a tragedy, for example, or, in a different way, due to chronic pain – achieve a lot of rationally set ends, and we do not wish to be insulting and doubt their basic rationality, their achievements, or their misery.

In the prior post I say that it would be more urgent to prevent a suffering-inducing insect bite than to prevent a sleep-inducing insect bite, though it’s sleep that clearly cancels out agency, not suffering. Since writing the prior post, it has been pointed out to me that sometimes it would be more important to prevent involuntary sleep than it would be to prevent pain. However, quite often, preventing involuntary sleep, which sure deprives you of agency, is less morally important than preventing a pain that isn’t quite strong enough to deprive you of your agency (or even of your efficacy – perhaps it doesn’t last that long, or it comes in a period where you don’t do much anyway, or you are one of the relatively stoical people I just mentioned). Furthermore, loss of agency or efficacy through suffering is worse than equal loss of agency or efficacy that isn’t accompanied by suffering, or so we think in medical contexts.

This reminds me a bit of Nagel saying that the main bad thing about non-conventional weapons that cause severe pain is that they hurt the victim’s dignity. One can harm a person’s dignity (and efficacy, and rationality) quite badly by deceptively getting them too drunk to walk, or by hypnotizing them into clucking like a chicken. I’m not saying that wouldn’t be evil, especially if I were to make people do something of which they seriously disapprove (as per David Sussman), but using very painful means instead of the booze or the hypnosis has a whole additional, different dimension of terribleness. To be honest, back in 1990, when I was a teenager and Saddam Hussein threatened to use chemical weapons on the part of Israel I was in – it didn’t sound like an idle threat at the time – I didn’t think very much about the potential effect on my dignity, or on my rational agency. I was terrified of the potential suffering and that was basically it. But then again, maybe I’m just a wimp.

Anyway, my purpose here is not to deny that rational agency is morally important. I think it matters quite a bit in terms of justice and rights. I have argued that as far as benevolence goes, wellbeing is important in a way that’s independent of rational agency, and here I’m defending the view that in order to see someone as meriting compassion, meriting benevolence, or even generating a duty of easy rescue, we do not need to ask to what extent her ill-being will interrupt her rational agency or her efficacy in achieving rationally set ends. Now there’s one thought too many.


Raw Reflections on Virtue, Blame and Baseball

In a much argued-about verse in the Hebrew Bible, we are told that Noah was a righteous man and “perfect in his generations” or “blameless among his contemporaries” or something like that (I grew up on the Hebrew, and so I can say: the weirdness is in the original). The verse has been treated as an interpretative riddle because it’s not clear what being “blameless among one’s contemporaries” amounts to. Was the guy really a righteous person (as is suggested by the subsequent text telling us that he walked with God) or was he a righteous person only by comparison to his contemporaries, who were dreadful enough to bring a flood on themselves?

My friend Tim Schroeder would probably have suggested that, given his time, Noah must have had an excellent Value Over Replacement Moral Agent. It’s kinda like Value Over Replacement Player. Here’s how Wikipedia explains the concept of Value Over Replacement Player:

In baseball, value over replacement player (or VORP) is a statistic (…) that demonstrates how much a hitter contributes offensively or how much a pitcher contributes to his team in comparison to a fictitious “replacement player” (…) A replacement player performs at “replacement level,” which is the level of performance an average team can expect when trying to replace a player at minimal cost, also known as “freely available talent.”

Tim and I have been toying with the idea that while rightness, wrongness and permissibility of actions are not the sort of things that depend on what your contemporaries are doing, ordinary judgments of the virtue of particular people (“she’s a really good person”, “he’s a jerk”, and so on) are really about something akin to a person’s Value Over Replacement Moral Agent or VORMA. The amount of blame one deserves for a wrong action or credit for a right action also seems to be at least partially a matter of VORMA. Thus a modest person who is thanked profusely for his good action might wave it off by saying “come on, anyone would have done this in my place”, while a defensive person blamed emphatically for her bad action might protest that “I’m no worse than the next person”. Both statements allude to a comparison to a sort of moral “replacement player” – an agent who would, morally speaking, perform at “replacement level”, the level we would expect from a random stranger, or, more likely, a random stranger in a similar time, place, context – whom we would regard as neither morally good nor morally bad.
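For readers who like the baseball analogy taken literally: here is a purely illustrative toy sketch (my own invention, not part of any formal theory, and all the numbers are made up) of VORMA as the gap between an agent’s “moral performance” and the replacement level set by a contextually chosen pool of peers.

```python
# Toy illustration of VORMA (Value Over Replacement Moral Agent).
# Scores and the peer pools are invented for the sake of the analogy.

def replacement_level(peer_scores):
    """Replacement level: the performance we would expect from a
    random stranger drawn from the contextually relevant peer pool."""
    return sum(peer_scores) / len(peer_scores)

def vorma(agent_score, peer_scores):
    """How much better (or worse) the agent performs than a
    replacement-level peer from the chosen pool."""
    return agent_score - replacement_level(peer_scores)

# Against a dreadful flood-era pool, a modest absolute score yields
# a large VORMA...
print(vorma(6.0, [1.0, 2.0, 3.0]))  # 4.0
# ...while the same score against a decent pool yields none at all.
print(vorma(6.0, [5.0, 6.0, 7.0]))  # 0.0
```

The sketch makes vivid the point below: the same agent, with the same record, comes out “really good” or merely average depending on which pool of imagined replacement agents the context supplies.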

I have been reading a cool paper by Gideon Rosen on doing wrong things under duress. A person who commits a crime under a credible threat of being shot if she refuses to commit it seems to be excused from blame, Rosen says, even if, as Aristotle would have it, the person acted freely, or, as contemporary agency theorists would have it, the person acted autonomously. The person who commits a crime so as not to be killed is not necessarily acting under conditions of reduced agency, so where does the excuse come from? Rosen thinks, as I do, that excuses are about quality of will, and argues that the person who acts immorally under (bad enough) duress does not, roughly, show a great enough lack of moral concern to justify our blaming her in the Scanlonian sense of “blame” – that is, socially distancing ourselves from her. Simply falling short of the ideal of having enough moral concern to never do anything wrong does not justify such distancing.

Without getting into the details of Rosen’s view, I would not be surprised if this has something to do with VORMA as well. Even in cases in which a person who commits a crime to avoid being killed acts wrongly, and I agree with Rosen that there are many such cases, the wrongdoer does not usually show negative VORMA. If I were to shun the wrongdoer, I would arguably be inconsistent insofar as I do not shun, well, typical humanity, who would have acted the same way. I suspect that even if I happened to be unusually courageous, a major league moral agent, and so escaped my own criteria for shunning, there would still be something very problematic about shunning typical humanity.

VORMA might also explain the ambivalence we feel towards some people whom it is not utterly crazy to describe as “perfect in their generations” or “blameless among their contemporaries”, like Noah. “My grandfather was a really, really good person!”, says your friend. She forgets, when she says it, that she thinks her grandfather was sexist in various ways – though, to be sure, a lot less so than his neighbors. Heck, she forgets that by her own standards, eating meat is immoral, and her grandfather sure had a lot of it. But unlike the Replacement Player in baseball, who is clearly defined in terms of the average performance of players you would find in second-tier professional teams, our choice of pool of imagined Replacement Moral Agents seems inevitably sensitive to pragmatics and contexts. Your friend’s grandfather had magnificent VORMA if all the bad things he did were done by almost everyone in his demographic and time period and if he often acted well where almost none of them would have. While we might have useful ideals of virtuous people who always do the right thing, the phrase “wonderful person” when applied to a real human might normally mean something more analogous to a star baseball player. As we know, such players get it wrong a lot of the time!

PS Eric Schwitzgebel has very interesting related work about how we want “a grade of B” in morality.

PPS for why I don’t think the grandfather is simply excused from his sexism by moral ignorance, see my paper “Huckleberry Finn Revisited”.

What Kantianism Gets Wrong

With regard to moral theory I have two hunches. One is that the wellbeing of one’s fellow humans is an intrinsic moral value. Intrinsic not only in that a moral agent will care about it for its own sake but also in that its value is not derivative from other values, like, say, that of rational agency. So Kantianism is false. The other is that the wellbeing of one’s fellow humans isn’t the only intrinsic moral value. There are virtues that are independent of benevolence, and respect, of the sort that makes paternalism wrong, is one of them. So utilitarianism doesn’t work either.

But after decades of Kantian dominance in analytical ethics, some of us have become used to thinking of concern for the wellbeing of others as a somehow coarse, primitive virtue befitting swine and Jeremy Bentham, unless it is somehow mediated by, derived from or explained through something more complicated and refined, like the value of rational agency.

Suppose one is roughly Kantian. Reverence for rational agency is the one basis of morality as far as one is concerned, where rational agency is thought of roughly as the capacity to set ends. What to do with the sense that benevolence is a major part of morality? The answer seems to be “think of benevolence in terms of a duty to adopt and promote other people’s ends”. Now suppose that, as many contemporary Kantians do, you reject the idea that adopting and promoting a person’s ends is the same thing as protecting her wellbeing – after all, most humans have some ends for which they are willing to sacrifice some wellbeing. In this case, what you say is that at the heart of benevolence we have a duty to adopt and promote people’s ends. We also have a duty to protect human wellbeing because, even though it’s not the only thing people care about, it is a very important end for all agents.

I don’t think this works, though. My argument goes like this:

  1. If the reason protecting a person’s wellbeing is important is purely the fact that her wellbeing is an important end to her and we have a duty to adopt her ends, then it would be of at least equal moral importance to protect any end that is at least equally important to her.
  2. Protecting an agent’s wellbeing is something we are morally called upon to do in some cases where, other things being equal, we would not be called upon to protect her pathway to achieving another equally important (to her!) end.

Therefore, it is false that the reason protecting a person’s wellbeing is important is purely the fact that her wellbeing is an important end to her and we have a duty to adopt her ends.

Let me talk about premise 2 and why it’s plausible.

Take a case where an economically comfortable person, let’s call her Mercedes, is asked for help by her desperate acquaintance, Roger. She can, by paying 50 dollars, rescue him from being beaten up. If beaten up, Roger would suffer pain and then have to spend some days in a hospital, but he is not going to be killed. I am trying to stick to 1000 words, so let me just promise you I have a half-way-realistic case. Now imagine an alternative scenario in which a person – call him Leonard – asks Mercedes for 50 dollars because without them, a great opportunity to travel and spread his Mennonite religion will have to be relinquished. Leonard’s end (spreading his religion) is at least as important to him as Roger’s end (not being beaten up) is to Roger, and more important to Leonard than Leonard’s own wellbeing – he is willing to suffer for it if needed. For all Mercedes knows, spreading Leonard’s religion is itself strictly morally neutral – she has no particular reason to spread it independently of him.

There is an asymmetry between the cases. In the first scenario, Mercedes would display a lack of benevolence – perhaps of decency! – if she were to refuse to rescue Roger from a beating by giving him $50, given that this would be easy for her, no harm would be caused by it to anyone, etc. In the second scenario there is no such presumption. If Mercedes likes Leonard’s cause, it makes sense for her to make a donation. If she’s indifferent to his cause, no compelling reason to donate is provided by the very fact that Leonard would be ready, if worst comes to worst, to suffer for his cause. Unless she does fear for his wellbeing – fears, for example, that Leonard is in bad shape and will plunge into a horrible depression if she declines – Mercedes is not any less of a good Samaritan, certainly isn’t a sub-decent Samaritan, for not wanting to donate to another’s morally neutral cause, however crucial her donation would be to the cause.

If all that made Roger’s wellbeing matter morally were its importance to him as an end, she would have as much of a duty to help Leonard.

Some Kantians would reply that what matters here isn’t protecting Roger’s wellbeing but the fact that Roger might lose rational agency. Roger, however, is not in danger of death or brain damage. He might suffer pain, but it takes a truly extreme amount of suffering to deprive someone of basic human rationality. His ability to perform successful actions will be impaired for a few days, but being a rational agent is not about being a successful performer of actions – it is about being responsive to practical reasons. It would be quite wrong to say that anyone with whom the world does not collaborate – because of an injury, or due to being in chains for that matter – is thereby not a rational agent. Furthermore, preventing a few days of suffering is more morally urgent than preventing a few days of involuntary deep sleep with no significant harm expected, though involuntary sleep deprives you of agency if anything does.

There is something special about wellbeing.