Raw Reflections on Virtue, Blame and Baseball

In a much argued-about verse in the Hebrew Bible, we are told that Noah was a righteous man and “perfect in his generations” or “blameless among his contemporaries” or something like that (I grew up on the Hebrew, and so I can say: the weirdness is in the original). The verse has been treated as an interpretative riddle because it’s not clear what being “blameless among one’s contemporaries” amounts to. Was the guy really a righteous person (as is suggested by the subsequent text telling us that he walked with God) or was he a righteous person only by comparison to his contemporaries, who were dreadful enough to bring a flood on themselves?

My friend Tim Schroeder would probably have suggested that, given his time, Noah must have had an excellent Value Over Replacement Moral Agent. It’s kinda like Value Over Replacement Player. Here’s how Wikipedia explains the concept of Value Over Replacement Player:

In baseball, value over replacement player (or VORP) is a statistic (…) that demonstrates how much a hitter contributes offensively or how much a pitcher contributes to his team in comparison to a fictitious “replacement player” (…) A replacement player performs at “replacement level,” which is the level of performance an average team can expect when trying to replace a player at minimal cost, also known as “freely available talent.”

Tim and I have been toying with the idea that while rightness, wrongness and permissibility of actions are not the sort of things that depend on what your contemporaries are doing, ordinary judgments of the virtue of particular people (“she’s a really good person”, “he’s a jerk”, and so on) are really about something akin to a person’s Value Over Replacement Moral Agent or VORMA. The amount of blame one deserves for a wrong action or credit for a right action also seems to be at least partially a matter of VORMA. Thus a modest person who is thanked profusely for his good action might wave it off by saying “come on, anyone would have done this in my place”, while a defensive person blamed emphatically for her bad action might protest that “I’m no worse than the next person”. Both statements allude to a comparison to a sort of moral “replacement player” – an agent who would, morally speaking, perform at “replacement level”, the level we would expect from a random stranger, or, more likely, a random stranger in a similar time, place, and context – whom we would regard as neither morally good nor morally bad.

I have been reading a cool paper by Gideon Rosen on doing wrong things under duress. A person who commits a crime under a credible threat of being shot if she refuses to commit it seems to be excused from blame, Rosen says, even if, as Aristotle would have it, the person acted freely, or, as contemporary agency theorists would have it, the person acted autonomously. The person who commits a crime so as not to be killed is not necessarily acting under conditions of reduced agency, so where does the excuse come from? Rosen thinks, as I do, that excuses are about quality of will, and argues that the person who acts immorally under (bad enough) duress does not, roughly, show a great enough lack of moral concern to justify our blaming her in the Scanlonian sense of “blame” – that is, socially distancing ourselves from her. Simply falling short of the ideal of having enough moral concern to never do anything wrong does not justify such distancing.

Without getting into the details of Rosen’s view, I would not be surprised if this has something to do with VORMA as well. Even in cases in which a person who commits a crime to avoid being killed acts wrongly, and I agree with Rosen that there are many such cases, the wrongdoer does not usually show negative VORMA. If I were to shun the wrongdoer, I would arguably be inconsistent insofar as I do not shun, well, typical humanity, who would have acted the same way. I suspect that even if I happened to be unusually courageous, a major league moral agent, and escaped my own criteria for shunning, there would still be something very problematic about shunning typical humanity.

VORMA might also explain the ambivalence we feel towards some people whom it is not utterly crazy to describe as “perfect in their generations” or “blameless among their contemporaries”, like Noah. “My grandfather was a really, really good person!”, says your friend. She forgets, when she says it, that she thinks her grandfather was sexist in various ways – though, to be sure, a lot less so than his neighbors. Heck, she forgets that by her own standards, eating meat is immoral, and her grandfather sure had a lot of it. But unlike the Replacement Player in baseball, who is clearly defined in terms of the average performance of players you would find in second-tier professional teams, our choice of pool of imagined Replacement Moral Agents seems inevitably sensitive to pragmatics and context. Your friend’s grandfather had magnificent VORMA if all the bad things he did were done by almost everyone in his demographic and time period and if he often acted well where almost none of them would have. While we might have useful ideals of virtuous people who always do the right thing, the phrase “wonderful person” when applied to a real human might normally mean something more analogous to a star baseball player. As we know, such players get it wrong a lot of the time!

PS Eric Schwitzgebel has very interesting related work about how we want “a grade of B” in morality.

PPS For why I don’t think the grandfather is simply excused from his sexism by moral ignorance, see my paper “Huckleberry Finn Revisited”.

What Kantianism Gets Wrong

With regard to moral theory I have two hunches. One is that the wellbeing of one’s fellow humans is an intrinsic moral value. Intrinsic not only in that a moral agent will care about it for its own sake but also in that its value is not derived from other values, like, say, that of rational agency. So Kantianism is false. The other is that the wellbeing of one’s fellow humans isn’t the only intrinsic moral value. There are virtues that are independent of benevolence; respect, of the sort that makes paternalism wrong, is one of them. So utilitarianism doesn’t work either.

But after decades of Kantian dominance in analytical ethics, some of us have become used to thinking of concern for the wellbeing of others as a coarse, primitive virtue befitting swine and Jeremy Bentham, unless it is somehow mediated by, derived from, or explained through something more complicated and refined, like the value of rational agency.

Suppose one is roughly Kantian. Reverence for rational agency is the one basis of morality as far as one is concerned, where rational agency is thought of roughly as the capacity to set ends. What to do with the sense that benevolence is a major part of morality? The answer seems to be “think of benevolence in terms of a duty to adopt and promote other people’s ends”. Now suppose that, as many contemporary Kantians do,  you reject the idea that adopting and promoting a person’s ends is the same thing as protecting her wellbeing – after all, most humans have some ends for which they are willing to sacrifice some wellbeing. In this case, what you say is that at the heart of benevolence we have a duty to adopt and promote people’s ends. We also have a duty to protect human wellbeing because, even though it’s not the only thing people care about, it is a very important end for all agents.

I don’t think this works, though. My argument goes like this:

  1. If the reason protecting a person’s wellbeing is important is purely the fact that her wellbeing is an important end to her and we have a duty to adopt her ends, then it would be of at least equal moral importance to protect any end that is at least equally important to her.
  2. Protecting an agent’s wellbeing is something we are morally called upon to do in some cases where, other things being equal, we would not be called upon to protect her pathway to achieving another equally important (to her!) end.

Therefore, it is false that the reason protecting a person’s wellbeing is important is purely the fact that her wellbeing is an important end to her and we have a duty to adopt her ends.

Let me talk about premise 2 and why it’s plausible.

Take a case where an economically comfortable person, let’s call her Mercedes, is asked for help by her desperate acquaintance, Roger. She can, by paying 50 dollars, rescue him from being beaten up. If beaten up, Roger would suffer pain and then have to spend some days in a hospital, but he is not going to be killed. I am trying to stick to 1000 words so let me just promise you I have a halfway-realistic case. Now imagine an alternative scenario in which a person – call him Leonard – asks Mercedes for 50 dollars because without them, a great opportunity to travel and spread his Mennonite religion will have to be relinquished. Leonard’s end (spreading his religion) is at least as important to him as Roger’s end (not being beaten up) is important to Roger, and more important to Leonard than Leonard’s own wellbeing – he is willing to suffer for it if needed. For all Mercedes knows, spreading Leonard’s religion is itself strictly morally neutral – she has no particular reason to spread it independently of him.

There is an asymmetry between the cases. In the first scenario, Mercedes would display a lack of benevolence – perhaps of decency! – if she were to refuse to rescue Roger from a beating by giving him $50, given that this would be easy for her, no harm would be caused by it to anyone, etc. In the second scenario there is no such presumption. If Mercedes likes Leonard’s cause, it makes sense for her to make a donation. If she’s indifferent to his cause, no compelling reason to donate is provided by the very fact that Leonard would be ready, if worst comes to worst, to suffer for his cause. Unless she does fear for his wellbeing – fears, for example, that Leonard is in bad shape and will plunge into a horrible depression if she declines – Mercedes is not any less of a good Samaritan, and certainly isn’t a sub-decent Samaritan, for not wanting to donate to another’s morally neutral cause, however crucial her donation would be to the cause.

If all that made Roger’s wellbeing matter morally were its importance to him as an end, Mercedes would have as much of a duty to help Leonard as she has to help Roger.

Some Kantians would reply that what matters here isn’t protecting Roger’s wellbeing but the fact that Roger might lose rational agency. Roger, however, is not in danger of death or brain damage. He might suffer pain, but it takes a truly extreme amount of suffering to deprive someone of basic human rationality. His ability to perform successful actions will be impaired for a few days, but being a rational agent is not about being a successful performer of actions – it is about being responsive to practical reasons. It would be quite wrong to say that anyone with whom the world does not collaborate – because of an injury, or due to being in chains for that matter – is thereby not a rational agent. Furthermore, preventing a few days of suffering is more morally urgent than preventing a few days of involuntary deep sleep with no significant harm expected, though involuntary sleep deprives you of agency if anything does.

There is something special about wellbeing.