Remember that post about Efficient charitable giving? We got into a bit of a discussion about Peter Singer in the comments. My brother Screwy, who is a professional philosopher, read the discussion and wrote some comments in response. He's given me permission to post his argument here. Much more informative than me flailing about trying to discuss utilitarianism and other bits of moral philosophy without really knowing what I'm talking about!
I somewhat disagree with Screwy's final paragraph; I'm reasonably happy that if someone advocates killing children then their moral position is pretty obviously worthless to me. I don't feel obliged to spend my life carefully picking over their arguments to find exactly which false premise or false inference led them to what is to me a completely abhorrent conclusion. Especially since Singer is pretty obviously going to beat me in any philosophical debate, as he's had a lot more training. I am prepared to stick my neck out and say he's just wrong in spite of this. I agree that it's not necessarily helpful to just call him a monster or a baby-murderer; from all I understand, he's a reasonably pleasant chap. But I think he's so much in love with his clever argument that he's unable to notice that it rests on massive, morally unacceptable prejudice against disabled people.
I was reading your blog the other day and I had some things to say about Mr Singer and your reading of him. I really should post it to your blog but somehow I didn't want to get involved with what looked like a messy argument.
I have a few points:
I think your criticism of utilitarianism is fundamentally correct. The two reasons not to be a utilitarian are:
1) It's false (I think that's due to Jonathan Dancy).
2) It's immoral (that's due to Michael Morris).
Obviously those two reasons are not going to be convincing to a utilitarian. But, and this I take it is your objection, when you end up concluding that torturing a cat and enjoying it is better than torturing a cat and not enjoying it, something has gone wrong. It's even more terribly wrong when you are justifying killing people. You need to drop at least one of your premises and not accept your conclusion. As they say, one woman's modus ponens is another woman's modus tollens. The problem is it's tricky to work out what's gone wrong.
I think the best objections to utilitarianism that I have come across come from Bernard Williams in his 'Ethics and the Limits of Philosophy'.
What I think goes on with utilitarianism is that it appeals to people who have a broadly physicalist world view. That is to say, they think that what there is can be described in the language of a natural science. They then notice that practical reasons, of which moral reasons are a subset, are rather peculiar. They are peculiar because they compel agents to act. It looks like I cannot think that I have reason to be nice to my mother and not be nice to my mother. (Of course there is the phenomenon of weakness of the will, but then you have to accuse somebody of practical irrationality. One tempting solution to the problem of weakness of the will is to deny that agents really grasped the practical reasons. So, when I'm horrible to Mum, I must have forgotten that I have reason to be nice to her.) However, it doesn't look like the natural sciences, including psychology, can account for there being facts in the world such that recognition of those facts compels you to act. What is required is that you find the facts compelling, and that looks like a non-normative fact about your psychological make-up. As it happens, I am predisposed to find the fact that somebody is my mother motivating in terms of my behaviour towards her. Why do I find certain facts compelling? Well, I have certain desires. Those desires can be altruistic, but what I need to have is some desires. (Crude utilitarians deny that my desires could be altruistic in any deep sense. For some stupid reason they think that humans are only motivated by pleasure and fear of pain. There is no need to be a crude utilitarian, and indeed Peter Singer is not a crude utilitarian.)
A quick recap of the picture so far. There are no reason-giving facts because no natural science can account for normative status. Instead, there are physical facts and psychological facts. Because I have the desire set that I have (this desire set being describable by psychology), certain facts will cause me to act in the way I do. This is how we explain the idea that facts can provide reasons. They provide reasons for an agent.
The next move is to try to show that any agent, or at least any agent who is committed to being moral, will find the same facts compelling in the same ways. In particular, we need to show that we all ought to be committed to maximising desire satisfaction (modulo the different measures appealed to by different utilitarians). The way Singer tries to argue, if I remember correctly, is as follows (Incidentally the Singer argument is fairly standard as a way into utilitarianism):
1) I desire lots of outcomes to occur.
2) Other people desire lots of outcomes to occur.
3) I have no reason to privilege my desires over other people's desires.
4) It is irrational to privilege my desires over other people's.
5) If I am being rational, I will give equal consideration to all desires that there are.
6) I can only engage in moral reasoning if I am being rational.
7) I am engaging in moral reasoning.
8) I will give equal consideration to all desires that there are.
Of course anyone can run that argument, so anyone who is engaged in moral reasoning will end up committed to giving equal consideration to all desires that there are. I can't remember chapter and verse, but it is in the introduction to his 'Practical Ethics'. (Incidentally, I think it is important when dealing with controversial arguments that you take them seriously. It is easy to set someone up as a bogeyman.)
The problem with the argument is that premise three is false. What counts as a reason is in part determined by what desires an agent has. I am altruistic, but I'm not that altruistic, so I do have reason to privilege my desires over other people's desires. To be fair to Singer, I haven't seen him engage in the broadly physicalist line of reasoning that leads to the belief that what reasons I have are at least in part determined by what desires I in fact have. However, if he appeals to a robust notion of reason, it is still not obvious that three is true. The best reason to believe three would be that you are already committed to utilitarianism. That would make the above argument question-begging. Once you acknowledge, as I do in fact, that there are reason-giving facts out there, it's not at all obvious that some form of utilitarianism is the way to go. It looks like you need to appeal to a faculty of moral intuition to explain how we have access to those facts. That's a controversial claim, but if it can be defended, then because our intuitions tell strongly against utilitarianism, it looks like we have reason to deny three. Once you make space for a faculty of moral intuition, pointing out the counter-intuitive consequences of utilitarianism is no longer question-begging. Our intuitions provide prima facie reasons to reject utilitarianism. Three comes out false whichever way you look at it.
So much for utilitarianism. What really got me annoyed was all your commentators' knee-jerk assumption that all there was to morality was giving money to charity. There's just no way that that can be an outcome even of a utilitarian moral system. I would be surprised if Singer is committed to that view. I don't know who these rational-giving people are, or what Singer's relationship with them is, but I would be surprised if he is in fact sympathetic to their views. They sound extremely right-wing, and Singer, although not a revolutionary socialist, is not exactly a massive fan of libertarian capitalism. Singer has an argument that results in his claim that it is deeply immoral not to hand over cash to starving people. It is independent of his utilitarianism, and there's no suggestion that somebody who gives lots of money to charity has somehow disposed of all of their moral duties. The argument is simple and goes like this:
1) If I know of serious suffering, and, without undue detriment to myself, can do something to alleviate the suffering, and I don't do it, then I am behaving extremely badly.
2) I know of serious suffering. (Singer's example at the time was a famine in Bangladesh.)
3) Foregoing some luxuries will allow me to do something to alleviate the suffering without undue detriment to myself. (Singer's example was not buying a colour TV but instead giving the money to famine relief.)
4) If I don't forego some luxuries I am behaving extremely badly.
Singer points out that most of us don't give up luxuries, so, if we are to be rational, we ought either to deny 1) or give up some luxuries. I think it's a powerful argument and I think premise 1) is correct. So, I really ought to give up more luxuries. Note how we've not appealed to utilitarianism or concluded that my overriding moral duty is to give money to famine relief. It might even be possible to show that any disposable income I get from forgoing some luxuries ought to go elsewhere, and so, if I am behaving morally, I won't be in a position to alleviate suffering without undue detriment to myself. What we certainly don't have is an argument to show that morality starts and stops with giving to famine relief, or that all my charitable givings should go to famine relief. I think the latter disjunct comes about because the argument doesn't appeal to utilitarianism at all.
What about Singer, animals and infants? Here we need to be careful. What Singer holds is that the category of 'person' is not a morally significant category. Again, the argument does not immediately appeal to utilitarianism. Singer wants the defender of the thesis that people are somehow special to give him a reason to believe it. He then, rather crudely, points out that cognitive capacity won't do, because some humans have less cognitive capacity than some non-human animals. The obvious category is neonates. But you might argue they're going to develop into pretty sophisticated things. So Singer says 'Aha, what about people with learning disabilities?' It's clear that Singer hasn't spent much time with people with learning disabilities. I think he has to think that some people have the mental age of an X-year-old, and this is obviously silly. However, I suspect that, as a matter of empirical fact, it's going to turn out correct: some primates are going to have more cognitive capacities than some humans will ever have (if only because some of those humans are going to die young). He then has an argument against killing animals for food, which says that if you think we can kill non-human animals because they lack cognitive capacities, you ought to think it's ok to kill some human animals. You don't think it's ok to kill some human animals, so you'd better stop killing non-human animals for meat. I think the argument is ok but probably overreaches itself. He presents it in such a way that he seems to take it as obvious that only cognitive capacities are going to be relevant here, and that seems a bit odd and probably utilitarian. However, it certainly doesn't advocate killing disabled people.
However, Singer does think that it's sometimes ok to kill human beings. This is because he is a utilitarian. He has odd views about desires. I think the basic idea is that there are all these desires floating around in the universe, and our moral duty is to maximise desire satisfaction. The simple version of the argument is that the desires of the mother outweigh the desires of the infant, and so she can bump off her child. She can't do it willy-nilly, but if the child requires a lot of looking after, a lot of her desires won't be fulfilled, because she will have to spend all her time looking after the child. The argument is horrific. But the challenge is to explain what's morally relevant about the category of personhood.
I also think you should be a bit wary about calling him a murderer. It must be ok to explore and defend controversial arguments. I find Singer annoying because he is like a clever undergraduate who has found a powerful argument for a ridiculous conclusion and stops there. However, the right way to treat clever undergraduates and Peter Singer is to unpick the arguments and try to find out both what's compelling about them and where they've gone wrong. It's not helpful to call him names.
(no subject)
Date: 2012-09-04 06:01 pm (UTC)
You can say "I prefer A to B", "I prefer B to C" and still logically say "I prefer C to A" (since, at the very root, we're dealing with uncertainty and there are many cases where probabilities are non-transitive).
So, yes, there are cases where utilitarianism works, without strict transitivity guarantees, but if you're trying to build a sound, universal, framework, that transitivity needs to be guaranteed (or stated as an axiom).
As far as "does it pose a problem", of course it does. Humans are famously bad about reasoning about non-transitive systems. There may also be no "best" outcome (but, at least, there's almost always a "not the worst" choice).
(no subject)
Date: 2012-09-05 12:14 pm (UTC)
I think I know what you mean, but I'm scared if I try to verbalise it I'll get it a bit wrong and lead to a massive misunderstanding. Can you give an example of what sort of intransitivity you're thinking of? (I assume I understand the statistics, but want to know which things are non-transitive that you think would be/wouldn't be in a moral system.)
As far as "does it pose a problem", of course it does
Maybe that was too euphemistic, I didn't mean "present a problem" as in "difficult", I meant I didn't see how it was compatible with a moral system at all.
I assumed (you may tell me I'm wrong?) a moral system is supposed to tell you which choice to make when you have a choice to make. So I'd expect answers like "do this" or "the real world effects are too complicated to say" or "all the options suck so much there's no good answer".
But non-transitive choices just seem like they would produce contradictions like "do both X and not X" or something, which doesn't feel like "an imperfect moral system" but "a big pile of words that don't mean anything".
I think I've misunderstood what you're trying to say, but I'm not sure what you are trying to say?
(no subject)
Date: 2012-09-06 03:34 pm (UTC)
I assumed (you may tell me I'm wrong?) a moral system is supposed to tell you which choice to make when you have a choice to make. So I'd expect answers like "do this" or "the real world effects are too complicated to say" or "all the options suck so much there's no good answer".
Utilitarianism, classically, only allows you to choose between two actions (usually "do X" or "do not do X"). In many (but not all) real situations, you have more than two possible actions ("do X", "do Y", "do Z"). If you blindly compare these pairwise, you may well end up making a choice that depends heavily on the (probably essentially random) first elimination you did in your analysis, whereas the right answer would have been "this is too complex" (possibly implying "find an action D that is superior to all your intransitive choices"). But doing that would require actually engaging with the problem of non-transitivity.
And therein lies the problem. Utilitarianism, as it stands, requires transitivity to be meaningful, but neither states axiomatically that transitivity exists nor considers even proving it. And without that, it is, essentially, a pile of words that often (but not always) happen to actually give guidance.
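This order-dependence can be sketched concretely. Here is a minimal Python illustration (my own construction, not from the thread): the three "actions" are a standard set of non-transitive dice, and a naive pairwise-elimination chooser crowns a different "best" option depending purely on comparison order.

```python
from itertools import product

# Three dice, each summing to 30, that beat each other in a cycle:
# A beats B, B beats C, C beats A (each with probability 20/36).
DICE = {
    "A": [2, 2, 4, 4, 9, 9],
    "B": [1, 1, 6, 6, 8, 8],
    "C": [3, 3, 5, 5, 7, 7],
}

def beats(x, y):
    """True if a roll of die x exceeds a roll of die y more often than not."""
    wins = sum(1 for a, b in product(x, y) if a > b)
    return wins * 2 > len(x) * len(y)

def pairwise_champion(order):
    """Pick a 'best' option by sequential pairwise elimination."""
    champ = order[0]
    for challenger in order[1:]:
        if beats(DICE[challenger], DICE[champ]):
            champ = challenger
    return champ

# The 'winner' depends entirely on the order of comparisons:
print(pairwise_champion(["A", "B", "C"]))  # C
print(pairwise_champion(["B", "C", "A"]))  # A
print(pairwise_champion(["C", "A", "B"]))  # B
```

Each run is locally reasonable (every step keeps the pairwise winner), yet the procedure as a whole answers nothing, which is exactly the complaint above.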
So, yes, I think you understood me correctly, even if I was unintentionally abstruse (I sometimes forget that most people have not indefinitely suspended a study towards a master's in philosophy on essentially these grounds).
(no subject)
Date: 2012-09-06 03:42 pm (UTC)
:) LOL.
Thank you. Will come back and reply properly later.
(no subject)
Date: 2012-09-06 04:02 pm (UTC)
Can you explain more? I understood the basic idea of utilitarianism to be "calculate the utility of each outcome, then choose the one with the greatest utility". (Where "utility" is some real number representing how "good" or "desirable" an outcome is.)
That's not feasible in practice, because you can never actually calculate utilities except for comparing similar things (eg. two people dying being worse than one person dying, assuming "dying is bad").
But it seems to work equally well for multiple actions: choose the one out of three that has the highest utility.
Do you think utilitarianism says something else? Or is that not related to what you were saying?
(no subject)
Date: 2012-09-07 11:56 am (UTC)
The next evolution was to use more refined utility metrics. Even further sophistication only looks at the utility delta, ignoring any utility changes unrelated to the (very narrow) action(s) under inspection, specifically because it's hard to do whole-system inspections.
But that lands you right in the transitivity issue. And we're back to where we started.
As for "choose the one that has the highest utility", in the dice example, the dice all have the same sum, so they represent (essentially) three actions with the same utility and you can only choose one by pair-wise comparison.
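For concreteness, here is one standard equal-sum non-transitive set, checked in a few lines of Python (my numbers, not necessarily the ones from the original dice example): all three dice have the same expected value, so "pick the highest utility" ties, yet head-to-head they form a strict cycle.

```python
from itertools import product

# A standard non-transitive set: every die sums to 30,
# yet each beats the next in a cycle.
A = [2, 2, 4, 4, 9, 9]
B = [1, 1, 6, 6, 8, 8]
C = [3, 3, 5, 5, 7, 7]

def p_beats(x, y):
    """Probability that one roll of x exceeds one roll of y."""
    wins = sum(1 for a, b in product(x, y) if a > b)
    return wins / (len(x) * len(y))

assert sum(A) == sum(B) == sum(C) == 30  # identical "utility" by expectation
print(round(p_beats(A, B), 3))  # 0.556 -- A beats B
print(round(p_beats(B, C), 3))  # 0.556 -- B beats C
print(round(p_beats(C, A), 3))  # 0.556 -- C beats A
```

So a utility ranking says the three are indistinguishable, while pairwise comparison produces a cycle with no top element, which is the tension being described.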
(no subject)
Date: 2012-09-06 04:12 pm (UTC)
That was exactly the example I was thinking of. But I'm not sure how it relates to "prefer"?
I can think of intransitivity in human preferences in cases where it looks (to me) somewhat irrational. Eg. psychology experiments are full of cases where someone is given a choice between A and B and usually chooses B, but given a choice between A, B and C usually chooses A. The assumption seemed to be that if people had perfect information, or a determination to choose based on which they'd actually enjoy most later, they'd have a consistent preferred order of A, B and C, but that using heuristics and habits we usually use as shortcuts in decision making resulted in a "false" preference.
Are you thinking of things like that?
Or your comment about probability sounds like you think apparent paradoxes like the non-transitive dice directly lead to non-transitive choices in preference however rational or irrational we are -- are you saying that?
Do you think the dice can translate into a non-transitive preference, or was that just an example of non-transitivity generally?
(no subject)
Date: 2012-09-07 11:48 am (UTC)
I think it's hard to get a total order, and much easier to pick "the better of two", but that still leaves you open to this type of problem. I also believe that there are multiple situations where there simply is no best course of action (or more than one that is equally good, in the end).
There are even more interesting problems with (some) utilitarian systems, when you start integrating changes in utility between multiple actors.