Remember that post about Efficient charitable giving? We got into a bit of a discussion about Peter Singer in the comments. My brother Screwy, who is a professional philosopher, read the discussion and wrote some comments in response. He's given me permission to post his argument here. Much more informative than me flailing about trying to discuss utilitarianism and other bits of moral philosophy without really knowing what I'm talking about!
I somewhat disagree with Screwy's final paragraph; I'm reasonably happy that if someone advocates killing children then their moral position is pretty obviously worthless to me. I don't feel obliged to spend my life carefully picking over their arguments to find exactly which false premise or false inference led them to what is to me a completely abhorrent conclusion. Especially since Singer is pretty obviously going to beat me in any philosophical debate, as he's had a lot more training. I am prepared to stick my neck out and say he's just wrong in spite of this. I agree that it's not necessarily helpful to just call him a monster or a baby-murderer; from all I understand, he's a reasonably pleasant chap. But I think he's so much in love with his clever argument that he's unable to notice that it rests on massive, morally unacceptable prejudice against disabled people.
I was reading your blog the other day and I had some things to say about Mr Singer and your reading of him. I really should post it to your blog but somehow I didn't want to get involved with what looked like a messy argument.
I have a few points:
I think your criticism of utilitarianism is fundamentally correct. The two reasons not to be a utilitarian are:
1) It's false (I think that's due to Jonathan Dancy)
2) It's immoral (that's due to Michael Morris).
Obviously those two reasons are not going to be convincing to a utilitarian. But, and this I take it is your objection, when you end up concluding that torturing a cat and enjoying it is better than torturing a cat and not enjoying it, something has gone wrong. It's even more terribly wrong when you are justifying killing people. You need to drop at least one of your premises and not accept your conclusion. As they say, one woman's modus ponens is another woman's modus tollens. The problem is it's tricky to work out what's gone wrong.
I think the best objections to utilitarianism that I have come across come from Bernard Williams in his 'Ethics and the Limits of Philosophy'.
What I think goes on with utilitarianism is that it appeals to people who have a broadly physicalist world view. That is to say, they think that what there is can be described in the language of a natural science. They then notice that practical reasons, of which moral reasons are a subset, are rather peculiar. They are peculiar because they compel agents to act. It looks like I cannot think that I have reason to be nice to my mother and not be nice to my mother. (Of course there is the phenomenon of weakness of the will, but then you have to accuse somebody of practical irrationality. One tempting solution to the problem of weakness of the will is to deny that the agent really grasped the practical reasons. So, when I'm horrible to Mum I must have forgotten that I have reason to be nice to her.) However, it doesn't look like the natural sciences, including psychology, can account for there being facts in the world such that recognition of those facts compels you to act. What is required is that you find the facts compelling. That looks like a non-normative fact about your psychological make-up. As it happens, I am predisposed to find the fact that somebody is my mother motivating in terms of my behaviour towards her. Why do I find certain facts compelling? Well, I have certain desires. Those desires can be altruistic, but what I need to have is some desires. (Crude utilitarians deny that my desires could be altruistic in any deep sense. For some stupid reason they think that humans are only motivated by pleasure and fear of pain. There is no need to be a crude utilitarian, and indeed Peter Singer is not a crude utilitarian.)
A quick recap of the picture so far. There are no reason-giving facts because no natural science can account for normative status. Instead, there are physical facts and psychological facts. Because I have the desire set that I have (this desire set being describable by psychology), certain facts will cause me to act in the way I do. This is how we explain the idea that facts can provide reasons. They provide reasons for an agent.
The next move is to try to show that any agent, or at least any agent who is committed to being moral, will find the same facts compelling in the same ways. In particular, we need to show that we all ought to be committed to maximising desire satisfaction (modulo the different measures appealed to by different utilitarians). The way Singer tries to argue, if I remember correctly, is as follows (Incidentally the Singer argument is fairly standard as a way into utilitarianism):
1) I desire lots of outcomes to occur.
2) Other people desire lots of outcomes to occur.
3) I have no reason to privilege my desires over other people's desires.
4) It is irrational to privilege my desires over other people's.
5) If I am being rational, I will give equal consideration to all desires that there are.
6) I can only engage in moral reasoning if I am being rational.
7) I am engaging in moral reasoning.
8) I will give equal consideration to all desires that there are.
Of course anyone can run that argument, so anyone who is engaged in moral reasoning will end up committed to giving equal consideration to all desires that there are. I can't remember chapter and verse, but it is in the introduction to his Practical Ethics. (Incidentally, I think it is important when dealing with controversial arguments that you take them seriously. It is easy to set someone up as a bogeyman.)
The problem with the argument is that premise three is false. What counts as a reason is in part determined by what desires an agent has. I am altruistic but I'm not that altruistic, so I do have reason to privilege my desires over other people's desires. To be fair to Singer, I haven't seen him engage in the broadly physicalist line of reasoning that leads to the belief that what reasons I have are at least in part determined by what desires I in fact have. However, even if he appeals to a robust notion of reason, it is still not obvious that three is true. The best reason to believe three would be that you are already committed to utilitarianism, which would make the above argument question-begging. Once you acknowledge, as I do in fact, that there are reason-giving facts out there, it's not at all obvious that some form of utilitarianism is the way to go. It looks like you need to appeal to a faculty of moral intuition to explain how we have access to those facts. That's a controversial claim, but if it can be defended, then because our intuitions tell strongly against utilitarianism, it looks like we have reason to deny three. Once you make space for a faculty of moral intuition, pointing out the counter-intuitive consequences of utilitarianism is no longer question-begging. Our intuitions provide prima facie reasons to reject utilitarianism. Three comes out false whichever way you look at it.
So much for utilitarianism. What really got me annoyed was all your commentators' knee-jerk assumption that all there is to morality is giving money to charity. There's just no way that that can be an outcome even of a utilitarian moral system. I would be surprised if Singer is committed to that view. I don't know who these rational-giving people are, or what Singer's relationship with them is, but I would be surprised if he is in fact sympathetic to their views. They sound extremely right-wing, and Singer, although not a revolutionary socialist, is not exactly a massive fan of libertarian capitalism. Singer has an argument that results in his claim that it is deeply immoral not to hand over cash to starving people. It is independent of his utilitarianism, and there's no suggestion that somebody who gives lots of money to charity has somehow disposed of all of their moral duties. The argument is simple and goes like this:
1) If I know of serious suffering, and, without undue detriment to myself, can do something to alleviate the suffering, and I don't do it, then I am behaving extremely badly.
2) I know of serious suffering. (Singer's example at the time was a famine in Bangladesh.)
3) Forgoing some luxuries will allow me to do something to alleviate the suffering without undue detriment to myself. (Singer's example was not buying a colour TV but instead giving the money to famine relief.)
4) If I don't forgo some luxuries, I am behaving extremely badly.
Singer points out that most of us don't give up luxuries, so, if we are to be rational, we ought either to deny 1) or give up some luxuries. I think it's a powerful argument and I think premise 1) is correct. So, I really ought to give up more luxuries. Note how we've not appealed to utilitarianism or concluded that my overriding moral duty is to give money to famine relief. It might even be possible to show that any disposable income I get from forgoing some luxuries ought to go elsewhere, and so, if I am behaving morally, I won't be in a position to alleviate suffering without undue detriment to myself. What we certainly don't have is an argument to show that morality starts and stops with giving to famine relief, or that all my charitable giving should go to famine relief. I think the latter disjunct fails to follow precisely because the argument doesn't appeal to utilitarianism at all.
What about Singer, animals and infants? Here we need to be careful. What Singer holds is that the category of 'person' is not a morally significant category. Again, the argument does not immediately appeal to utilitarianism. Singer wants the defender of the thesis that people are somehow special to give him a reason to believe it. He then, rather crudely, points out that cognitive capacity won't do, because some humans have less cognitive capacity than some non-human animals. The obvious category is neonates. But you might argue they're going to develop into pretty sophisticated things. So Singer says 'Aha, what about people with learning disabilities?' It's clear that Singer hasn't spent much time with people with learning disabilities. I think he has to think that some people have the mental age of an X-year-old, and this is obviously silly. However, I suspect that, as a matter of empirical fact, it's going to turn out correct. Some primates are going to have more cognitive capacities than some humans will ever have (if only because some of those humans are going to die young). He then has an argument against killing animals for food, which says that if you think we can kill non-human animals because they lack cognitive capacities, you ought to think it's ok to kill some human animals. You don't think it's ok to kill some human animals, so you'd better stop killing non-human animals for meat. I think the argument is ok but probably overreaches itself. He presents it in such a way that he seems to take it as obvious that only cognitive capacities are going to be relevant here, and that seems a bit odd and probably utilitarian. However, it certainly doesn't advocate killing disabled people.
However, Singer does think that it's sometimes ok to kill human beings. This is because he is a utilitarian. He has odd views about desires. I think the basic idea is that there are all these desires floating around in the universe. Our moral duty is to maximise desire satisfaction. The simple version of the argument is that the desires of the mother outweigh the desires of the infant, and so she can bump off her child. She can't do it willy-nilly, but if the child requires a lot of looking after, a lot of her desires won't be fulfilled because she will have to spend all her time looking after the child. The argument is horrific. But the challenge is to explain what's morally relevant about the category of personhood.
I also think you should be a bit wary about calling him a murderer. It must be ok to explore and defend controversial arguments. I find Singer annoying because he is like a clever undergraduate who has found a powerful argument for a ridiculous conclusion and stops there. However, the right way to treat clever undergraduates and Peter Singer is to unpick the arguments and try to find out both what's compelling about them and where they've gone wrong. It's not helpful to call him names.
(no subject)
Date: 2012-09-02 08:34 pm (UTC)
I am interested to know what counts as "moral intuitions". If these are things that we are consciously aware of, that leap into consciousness without showing their working, then clearly they vary, both between people and within people. I can contemplate trolley problems, and sometimes feel that letting five people die is better than killing one, and vice versa, depending on what mood I am in. Clearly the fundamental nature of morality does not vary with my mood, unless you are to accept a very extreme form of moral relativism. Another thing which I find can have an effect on my intuitions, as defined above, is doing thought experiments. I find I have a problem replicating thought experiments; the results change depending on what else I have read. If by "moral intuitions" he means a sort of stable consensus amongst present-day professional philosophers, a subject defined at least in part by the departure of the natural sciences from it, well... If he means something else, then it would be interesting to know what it is that he is talking about, and how it is that we can know anything about it.
I am highly concerned about this rubbishing of utilitarian moral intuitions. See for example this abstract Selective impairment of cognitive empathy for moral judgment in adults with high functioning autism. "We conclude that greater prevalence of utilitarianism in HFA/AS is associated with difficulties in specific aspects of social cognition." This is a matter of some considerable personal interest.
ETA - I note that what I (and many cognitive psychologists) would call intuitions can be Just Plain Wrong. "A bat and ball together cost £1.10. The bat costs £1 more than the ball. How much does the ball cost?" There is an intuitive answer to this, which most people give, which is 10p. It was the first answer that came into my head, although I knew not to trust it. It is also the wrong answer. Do people have reasons for giving this answer? Arguably, yes; presumably the heuristics that give rise to the 10p answer give the right answer, or at any rate a good answer, in a great many circumstances. Many people may find that relying on these intuitions is a good life strategy - at least in a life where getting the right answer to tricky questions isn't very valuable.
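For what it's worth, the correct answer falls out of one line of algebra: if the ball costs x, then x + (x + 1.00) = 1.10, so x = 0.05. A throwaway check (my own sketch, nothing from the thread):

```python
# Bat-and-ball puzzle: ball costs x, bat costs x + 1.00,
# together they cost 1.10. Solve x + (x + 1.00) = 1.10.
total = 1.10
difference = 1.00

ball = (total - difference) / 2   # 2x = 0.10, so x = 0.05
bat = ball + difference

print(f"ball = {ball:.2f}, bat = {bat:.2f}")  # ball = 0.05, bat = 1.05

# The intuitive answer fails the constraint check:
intuitive_ball = 0.10
assert intuitive_ball + (intuitive_ball + difference) != total  # 1.20, not 1.10
```

The intuitive 10p answer satisfies the "£1 more" clause but silently breaks the "£1.10 together" clause, which is exactly the kind of unshown working the comment above is talking about.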
When people make those silly hypotheticals, they think they're dealing with nice isolated context-free problems... except have they considered that their intuitions, trained by life, may count as context? Have they considered that the answers given by people who are more given to reasoning in a decontextualised manner might be better answers to those problems than the answers given by those who habitually drag context into everything without really realising it?
(no subject)
Date: 2012-09-04 10:54 am (UTC)
FWIW, I assumed everyone agreed you can't "deduce ethics purely logically from first principles", however much you might want to. But I feel shocked that when I try to describe utilitarianism, that's how I sound to someone else.
I would have described moral systems as generally trying to give a generalisation of our moral intuitions, both to (a) point out where our intuitions are probably wrong and (b) provide a convenient shorthand when we can do the "right thing" without thinking everything through in advance.
I agree with you that utilitarianism disagrees with our intuitions on stuff like "having a greater responsibility towards our immediate family", and I provisionally agree that this is a way it falls short and needs to be replaced with something.
I agree that trolley problems aren't important except insofar as they help our intuition for other difficult situations, and I agree people often focus on them too much when concentrating on real life problems would be more productive, but I assume they _can_ be helpful (though I'm open to being convinced otherwise).
In a similar vein of "we should concentrate on real life ethics, but I feel like it's a good thing to have SOME thoughts about an underlying system", it sounds like you're advocating for ONLY using moral intuitions. Did I read that right, and do you mean that we never need worry that they may be wrong, or just that you think it's not the biggest practical concern at the moment?
(no subject)
Date: 2012-09-04 12:30 pm (UTC)
I object to the idea that logical consistency should be the ultimate aim. A moral system where I can't find any chink in the logic, but which leads people to do awful things, is worse than an approach which generally lets people try to be nice and not harmful as much as they can, even if they're somewhat inconsistent in the choices they make in different circumstances.
(no subject)
Date: 2012-09-04 01:23 pm (UTC)
"we don't have to think through everything from first principles"
I apologise for picking out this when it wasn't exactly what you were trying to say, but it seems related to the misunderstanding: aren't "first principles" exactly one of the things we're trying to decide?
"I sort of hate trolley problems"
I think I know what you mean, although I don't have the same visceral reaction. I think they are often useful, but I agree they can also be very overused, and "does my moral system work in an artificially extreme situation" is an interesting thought experiment, not a requirement!
"I object to the idea that logical consistency should be the ultimate aim"
Yes. I hope I didn't give the idea that I do think that?
I think a reasonable amount of consistency is needed, but the idea of a moral system which is consistent everywhere is unattainable, and it's more important to get one which matches our moral intuitions on the important things, and push all the inconsistency out to extreme examples we hope won't come up.
(no subject)
Date: 2012-09-04 10:57 am (UTC)
That said, I think you should have a go at reading some of the things Bentham and especially Mill actually wrote - if nothing else, the quotes on their wikipedia pages; there's plenty to disagree with there, but you may well be surprised at what their positions actually are, and it would mean you can argue against real positions rather than strawmen. I tend to regard utilitarianism as being a bit like the Golden Rule (or in scientific terms, a bit like Newtonian mechanics); it doesn't cover everything, it gets some things wrong, it can require a lot of interpretation in cases, and often it's better to follow the law or your customs or experiences or emotions or whatever, but it's a way of thinking about things. I find it better to think there's some unity to these things: that even if there isn't one pure foundational principle, things are simpler towards the bottom than they are at the top, and things connect up. To think of morality as a random jumble of arbitrary principles (a strawman, I know, it's an uncharitable interpretation of some of the things I see around me) that varies wildly from person to person and culture to culture is dispiriting to me. When I read Mill I can get a feeling that there's some sense to things, that there's some point to it all.
Again, apologies - I felt threatened, and I do have this bad habit of trying to lash out with logic when that happens. I should try to curtail it.
(no subject)
Date: 2012-09-04 11:56 am (UTC)
I don't have any particular animus against utilitarians; I was actually a little surprised to find that my brother was so ready to dismiss that whole school of thought as false and immoral. I don't find utilitarianism particularly appealing myself, but it's as useful a way as any other of deriving a moral system. I think it's likely I'd get on moderately well with Mill, and you're very likely right that the thing I'm disagreeing with isn't what real utilitarians actually believe. A lot of my impatience with utilitarianism comes from arguing with self-righteous teenagers on the internet who don't really understand the principles they're espousing either. I know there are plenty of utilitarians who succeed in not coming to pro-murder conclusions; it's the pro-murder that I have a problem with, not the underlying principles that are being applied.
I am not completely convinced that ethics and morality can be systematized in a way that's analogous to building up all of physics from a few fundamental laws. But it's not inherently awful to try, and I do have some sympathy for looking for something more satisfying than a random jumble of arbitrary principles. You strike me as someone who's really thoughtful about moral philosophy; indeed I think you're probably more successful at this kind of reasoning than I am.
(no subject)
Date: 2012-09-04 03:06 pm (UTC)
If your proposed system-of-ethics violates your moral intuitions, it is likely incorrect. If it conforms to your moral intuitions, you can only say that it conforms to your moral intuitions.
[ begin polite rant ]
Furthermore, utilitarianism (in all the guises I have seen it) requires transitivity in the value function. That is, it cannot EVER be the case that for three actions A, B and C, their respective utility function values (let's call them Ua, Ub, and Uc) satisfy Ua < Ub, Ub < Uc, Uc < Ua.
Now, I happen to think that that is pretty unlikely. There are a lot of cases involving human interaction that show non-transitivity, and while I cannot cite a specific action-triangle where it would happen, I seem to recall having constructed one (in the mid-90s, details are hazy...). So, before I trust a utilitarian system further than I can throw R M Hare's book on preference utilitarianism, I would want to see a comprehensive proof that the utility function used is, indeed, transitive. Because, without that, the resulting system is not sound.
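The check being asked for here is easy to state in code. Below is a minimal sketch (an invented example of mine, not a reconstruction of the mid-90s action-triangle): a pairwise preference with a deliberate cycle, and a brute-force transitivity test over all triples.

```python
from itertools import permutations

# An invented 'action-triangle': B is preferred to A, C to B, and A to C,
# so the relation deliberately cycles.
PREFERRED = {("B", "A"), ("C", "B"), ("A", "C")}

def prefers(a, b):
    """True when action a is ranked above action b."""
    return (a, b) in PREFERRED

def is_transitive(items, prefers):
    """Brute-force check: a > b and b > c must imply a > c for all triples."""
    return not any(
        prefers(a, b) and prefers(b, c) and not prefers(a, c)
        for a, b, c in permutations(items, 3)
    )

print(is_transitive(["A", "B", "C"], prefers))          # False: it cycles
print(is_transitive([1, 2, 3], lambda a, b: a > b))     # True: numbers are ordered
```

Any candidate preference ordering can be dropped into `is_transitive` the same way; orderings by a single real number pass automatically, which is why the demand below is for either a proof or an explicit axiom.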
[ end polite rant ]
(no subject)
Date: 2012-09-04 03:26 pm (UTC)
Also a really useful point about utilitarianism. Transitivity seems like a very worthwhile thing to check for, if you're trying to be logically consistent.
(no subject)
Date: 2012-09-04 03:48 pm (UTC)
This sounds about right to me.
I would want to see a comprehensive proof that the utility function used is, indeed, transitive.
I don't have much theoretical knowledge, but it seems like people trying to think about utility functions assume that, given two possible outcomes, we can say which is "preferable", and that this decision is inherently transitive.
It seems like attempts to make an explicit utility function are defined in terms of real numbers, so they would automatically fulfil the transitive condition, but are always horribly flawed. The idea of a non-total ordering on utility is interesting, but I don't know if people have tried it -- I think that in terms of making any decisions, you have to concentrate on a subset which is ordered.
But wouldn't a nontransitive set of three outcomes present problems for most systems of morality? Are there not some times when you choose "which outcome is best"?
(no subject)
Date: 2012-09-04 06:01 pm (UTC)
You can say "I prefer A to B", "I prefer B to C" and still logically say "I prefer C to A" (since, at the very root, we're dealing with uncertainty and there are many cases where probabilities are non-transitive).
So, yes, there are cases where utilitarianism works, without strict transitivity guarantees, but if you're trying to build a sound, universal, framework, that transitivity needs to be guaranteed (or stated as an axiom).
As far as "does it pose a problem", of course it does. Humans are famously bad about reasoning about non-transitive systems. There may also be no "best" outcome (but, at least, there's almost always a "not the worst" choice).
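Non-transitive probabilities of the kind mentioned here have a standard concrete illustration: so-called non-transitive dice. The set below is a textbook example (my choice of dice, not necessarily the set anyone in this thread had in mind); each die beats the next in the cycle with probability 5/9, computed exactly by enumerating all 36 face pairs.

```python
from itertools import product
from fractions import Fraction

# A textbook set of non-transitive dice (one of several known sets):
# each die beats the next one in the cycle with probability 5/9.
DICE = {"A": [2, 2, 4, 4, 9, 9],
        "B": [1, 1, 6, 6, 8, 8],
        "C": [3, 3, 5, 5, 7, 7]}

def p_beats(x, y):
    """Exact probability that die x rolls higher than die y."""
    wins = sum(1 for a, b in product(DICE[x], DICE[y]) if a > b)
    return Fraction(wins, 36)

for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(x, "beats", y, "with probability", p_beats(x, y))
# A beats B with probability 5/9
# B beats C with probability 5/9
# C beats A with probability 5/9
```

So "I prefer the die more likely to roll higher" is a perfectly reasonable-sounding preference that is provably non-transitive.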
(no subject)
Date: 2012-09-05 12:14 pm (UTC)
I think I know what you mean, but I'm scared that if I try to verbalise it I'll get it a bit wrong and lead to a massive misunderstanding. Can you give an example of what sort of intransitivity you're thinking of? (I assume I understand the statistics, but want to know which things are non-transitive that you think would be/wouldn't be in a moral system.)
As far as "does it pose a problem", of course it does
Maybe that was too euphemistic, I didn't mean "present a problem" as in "difficult", I meant I didn't see how it was compatible with a moral system at all.
I assumed (you may tell me I'm wrong?) a moral system is supposed to tell you which choice to make when you have a choice to make. So I'd expect answers like "do this" or "the real world effects are too complicated to say" or "all the options suck so much there's no good answer".
But non-transitive choices just seem like they would produce contradictions like "do both X and not X" or something, which doesn't feel like "an imperfect moral system" but "a big pile of words that don't mean anything".
I think I've misunderstood what you're trying to say, but I'm not sure what you are trying to say?
(no subject)
Date: 2012-09-06 03:34 pm (UTC)
"I assumed (you may tell me I'm wrong?) a moral system is supposed to tell you which choice to make when you have a choice to make. So I'd expect answers like "do this" or "the real world effects are too complicated to say" or "all the options suck so much there's no good answer"."
Utilitarianism, classically, only allows you to choose between two actions (usually "do X" or "do not do X"). In many (but not all) real situations, you have more than two possible actions ("do X", "do Y", "do Z"). If you blindly pair-try these, you may well end up making a choice that depends heavily on the (probably essentially random) first elimination you did in your analysis, whereas the right answer would have been "this is too complex" (possibly implying "find an action D that is superior to all your intransitive choices"). But doing that would require actually engaging with the problem of non-transitivity.
And therein lies the problem. Utilitarianism, as it stands, requires transitivity to be meaningful, but neither states axiomatically that transitivity exists nor considers even proving it. And without that, it is, essentially, a pile of words that often (but not always) happen to actually give guidance.
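The order-dependence described above is easy to demonstrate. A minimal sketch (with an invented cyclic preference, not anything drawn from a real utilitarian calculus): pairwise knockout elimination over three options returns a different "winner" for every comparison order.

```python
# With a cyclic preference (X beats Y, Y beats Z, Z beats X), running a
# knockout of pairwise comparisons crowns whichever option was compared
# last -- the 'winner' is an artefact of elimination order.
BEATS = {("X", "Y"), ("Y", "Z"), ("Z", "X")}  # invented cycle

def better(a, b):
    """Return the preferred of two options under the cyclic relation."""
    return a if (a, b) in BEATS else b

def knockout(order):
    """Compare options pairwise in the given order, keeping the winner."""
    winner = order[0]
    for challenger in order[1:]:
        winner = better(winner, challenger)
    return winner

print(knockout(["X", "Y", "Z"]))  # Z  (X beats Y, then Z beats X)
print(knockout(["Y", "Z", "X"]))  # X  (Y beats Z, then X beats Y)
print(knockout(["Z", "X", "Y"]))  # Y  (Z beats X, then Y beats Z)
```

Each run is locally defensible at every step, yet the overall choice is determined entirely by the essentially random order of elimination, which is the problem being pointed out.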
So, yes, I think you understood me correctly, even if I was unintentionally abstruse (I sometimes forget that most people have not indefinitely suspended a study towards a masters in philosophy on essentially these grounds).
(no subject)
Date: 2012-09-06 03:42 pm (UTC)
:) LOL.
Thank you. Will come back and reply properly later.
(no subject)
Date: 2012-09-06 04:02 pm (UTC)
Can you explain more? I understood the basic idea of utilitarianism to be "calculate the utility of each outcome, then choose the one with the greatest utility". (Where "utility" is some real number representing how "good" or "desirable" an outcome is.)
That's not possible in practice because you can never actually calculate utilities, except for comparing similar things (eg. two people dying being worse than one person dying, assuming "dying is bad").
But it seems to work equally well for multiple actions: choose the one out of three that has the highest utility.
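On that reading, with utility as a single real number per outcome, the multi-action case really is trivial: choosing is one argmax, and the induced ordering is automatically transitive. A sketch with made-up figures:

```python
# Invented utility figures; with real-valued utilities, choosing among
# any number of actions is a single argmax, and the ordering the numbers
# induce is automatically transitive.
utilities = {"do X": 3.0, "do Y": 7.5, "do Z": 5.2}

best = max(utilities, key=utilities.get)
print(best)  # do Y
```

The disagreement downthread is about whether preferences over outcomes can actually be summarised by one such number in the first place.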
Do you think utilitarianism says something else? Or is that not related to what you were saying?
(no subject)
Date: 2012-09-07 11:56 am (UTC)
The next evolution was to use more refined utility metrics. Even further sophistication only looks at the utility delta, ignoring any utility changes unrelated to the (very narrow) action(s) under inspection, specifically because it's hard to do whole-system inspections.
But that lands you right in the transitivity issue. And we're back to where we started.
As for "choose the one that has the highest utility", in the dice example, the dice all have the same sum, so they represent (essentially) three actions with the same utility and you can only choose one by pair-wise comparison.
(no subject)
Date: 2012-09-06 04:12 pm (UTC)
That was exactly the example I was thinking of. But I'm not sure how it relates to "prefer"?
I can think of intransitivity in human preferences in cases where it looks (to me) somewhat irrational. Eg. psychology experiments are full of cases where someone is given a choice between A and B and usually chooses B, but given a choice between A, B and C usually chooses A. The assumption seemed to be that if people had perfect information, or a determination to choose based on which they'd actually enjoy most later, they'd have a consistent preferred order of A, B and C, but that using heuristics and habits we usually use as shortcuts in decision making resulted in a "false" preference.
Are you thinking of things like that?
Or your comment about probability sounds like you think apparent paradoxes like the non-transitive dice directly lead to non-transitive choices in preference however rational or irrational we are -- are you saying that?
Do you think the dice can translate into a non-transitive preference, or was that just an example of non-transitivity generally?
(no subject)
Date: 2012-09-07 11:48 am (UTC)
I think it's hard to get a total order, much easier to pick "the better of two", but that still leaves you open to this type of problem. I also believe that there are multiple situations where there simply is no best course of action (or more than one that is equally good, in the end).
There are even more interesting problems with (some) utilitarian systems, when you start integrating changes in utility between multiple actors.
(no subject)
Date: 2012-09-02 09:56 pm (UTC)
Why would I be shocked by Singer's position when thousands of foetuses are being killed for being disabled every year in the UK alone?
(no subject)
Date: 2012-09-04 08:57 am (UTC)
But anyway, I don't think consistency is the ultimate moral virtue; it's not better for Singer to be consistent in wanting to remove disabled people from existence than for some less logically rigorous person to profess that it's better to be dead than disabled but not actually live their life applying that stated belief. Maybe the latter person is a hypocrite, but I would far rather someone who's hypocritical about their prejudices than someone who is consistent about their harmful prejudices!
(no subject)
Date: 2012-09-04 10:17 pm (UTC)
At least Singer puts forward his views in a clear logical manner which is relatively easy to engage with and refute, rather than just declaring that anyone who disagrees with them is a heartless misogynist. The reason there are fewer than half as many people with Down's Syndrome alive in this country as there should be isn't because of people like Singer, who make a flawed but clear logical argument; it's because of people who dress their hatred of disabled people up as kindness and refuse to even allow mention of the possibility that it's anything else. I think his prejudices are probably a lot less harmful than those of the majority of the population.
So, that's why I'm perplexed when people react so strongly to his views. To me they don't seem that different from those of the majority. I worry that people are just arguing that he's crossed a line as a way of reassuring themselves that their own position is on the correct side of the line.
(no subject)
Date: 2012-09-06 10:57 am (UTC)
I mentioned him partly because I think that being obsessed with financial efficiency as a gauge of morality can very easily lead to, or at least be connected with, being more concerned about the expense of allowing disabled people needed accommodations than about the value of human life. And partly because I was sending people to the Giving What We Can page, and I was concerned that some people in that discussion would be upset at being reminded of the argument that it's "kind" and "humane" to kill disabled people when they weren't expecting to encounter that stuff. I would have done the same if a page that was ostensibly about efficient giving had a prominent pro-abortion article making those kinds of arguments.
You're right, Singer's views are just a clearer statement of what a lot of people already believe. I am against the majority view in this respect, and I also think Singer gets far more kudos than he deserves for being "radical" and "maverick". In fact my problem with him is the opposite of what people seem to think is my problem with him: I don't think he's shocking or outrageous, I think he's going along with the existing hierarchy, which I believe does moral harm in the world and not only to unborn children.
(no subject)
Date: 2012-09-03 11:48 am (UTC)
My impression of what he said is like that of a mathematician who's discovered a "paradox". For the purposes of people interested in the truth, you can completely ignore it and anyone who believes it and you'll be right; but from the point of view of mathematicians finding the truth, it's interesting to dissect the argument and see (a) what it does say, (b) if the flaw is interestingly relevant to other arguments, (c) if they can work with the author to a mutual understanding.
Some clarification
Date: 2012-09-07 02:08 pm (UTC)
With respect to killing disabled people: Singer doesn't think you can just kill disabled people. Singer thinks that the relevant utility is desire satisfaction. He sees nothing morally relevant about the status of being a person. The only morally relevant consideration is how many desires get satisfied. He thinks that this gives women the right to terminate pregnancies when bringing up the child is going to result in fewer desires being satisfied. Pretty horrific, but the challenge is to show what is wrong with the argument.
I don't think that's a lot clearer.
YAB