[personal profile] liv
Remember that post about Efficient charitable giving? We got into a bit of a discussion about Peter Singer in the comments. My brother Screwy, who is a professional philosopher, read the discussion and wrote some comments in response. He's given me permission to post his argument here. Much more informative than me flailing about trying to discuss utilitarianism and other bits of moral philosophy without really knowing what I'm talking about!

I was reading your blog the other day and I had some things to say about Mr Singer and your reading of him. I really should post it to your blog but somehow I didn't want to get involved with what looked like a messy argument.

I have a few points:
I think your criticism of utilitarianism is fundamentally correct. The two reasons not to be a utilitarian are:
1) It's false (I think that's due to Jonathan Dancy)
2) It's immoral (that's due to Michael Morris).
Obviously those two reasons are not going to be convincing to a utilitarian. But, and this I take it is your objection, when you end up concluding that torturing a cat and enjoying it is better than torturing a cat and not enjoying it, something has gone wrong. It's even more terribly wrong when you are justifying killing people. You need to drop at least one of your premises and not accept your conclusion. As they say, one woman's modus ponens is another woman's modus tollens. The problem is it's tricky to work out what's gone wrong.

I think the best objections to utilitarianism that I have come across come from Bernard Williams in his 'Ethics and the Limits of Philosophy'.

What I think goes on with utilitarianism is that it appeals to people who have a broadly physicalist world view. That is to say they think that what there is can be described in the language of a natural science. They then notice that practical reasons, of which moral reasons are a subset, are rather peculiar. They are peculiar because they compel agents to act. It looks like I cannot think that I have reason to be nice to my mother and not be nice to my mother. (Of course there is the phenomenon of weakness of the will, but then you have to accuse somebody of practical irrationality. One tempting solution to the problem of weakness of the will is to deny that agents really grasped the practical reasons. So, when I'm horrible to Mum I must have forgotten that I have reason to be nice to her.) However, it doesn't look like the natural sciences, including psychology, can account for there being facts in the world such that recognition of those facts compels you to act. What is required is that you find the facts compelling. That looks like a non-normative fact about your psychological make-up. As it happens, I am predisposed to find the fact that somebody is my mother motivating in terms of my behaviour towards her. Why do I find certain facts compelling? Well, I have certain desires. Those desires can be altruistic, but what I need to have is some desires. (Crude utilitarians deny that my desires could be altruistic in any deep sense. For some stupid reason they think that humans are only motivated by pleasure and fear of pain. There is no need to be a crude utilitarian, and indeed Peter Singer is not a crude utilitarian.)

A quick recap of the picture so far. There are no reason-giving facts because no natural science can account for normative status. Instead, there are physical facts and psychological facts. Because I have the desire set that I have (this desire set being describable by psychology), certain facts will cause me to act in the way I do. This is how we explain the idea that facts can provide reasons. They provide reasons for an agent.

The next move is to try to show that any agent, or at least any agent who is committed to being moral, will find the same facts compelling in the same ways. In particular, we need to show that we all ought to be committed to maximising desire satisfaction (modulo the different measures appealed to by different utilitarians). The way Singer tries to argue, if I remember correctly, is as follows (incidentally, this argument is fairly standard as a way into utilitarianism):

1) I desire lots of outcomes to occur.
2) Other people desire lots of outcomes to occur.
3) I have no reason to privilege my desires over other people's desires.
4) It is irrational to privilege my desires over other people's.
5) If I am being rational, I will give equal consideration to all desires that there are.
6) I can only engage in moral reasoning if I am being rational.
7) I am engaging in moral reasoning.
8) I will give equal consideration to all desires that there are.

Of course anyone can run that argument, so anyone who is engaged in moral reasoning will end up committed to giving equal consideration to all desires that there are. I can't remember the chapter and verse but it is in his introduction to Practical Ethics. (Incidentally, I think it is important when dealing with controversial arguments that you take them seriously. It is easy to set someone up as a bogey man.)

The problem with the argument is that premise three is false. What counts as a reason is in part determined by what desires an agent has. I am altruistic but I'm not that altruistic, so I do have reason to privilege my desires over other people's desires. To be fair to Singer, I haven't seen him engage in the broadly physicalist line of reasoning that leads to the belief that what reasons I have are at least in part determined by what desires I in fact have. However, if he appeals to a robust notion of reason, it is still not obvious that three is true. The best reason to believe three would be because you are already committed to utilitarianism, which would make the above argument question-begging. Once you acknowledge, as I do in fact, that there are reason-giving facts out there, it's not at all obvious that some form of utilitarianism is the way to go. It looks like you need to appeal to a faculty of moral intuition to explain how we have access to those facts. That's a controversial claim, but if it can be defended, then because our intuitions tell strongly against utilitarianism, it looks like we have reason to deny three. Once you make space for a faculty of moral intuition, pointing out the counter-intuitive consequences of utilitarianism is no longer question-begging. Our intuitions provide prima facie reasons to reject utilitarianism. Three comes out false whichever way you look at it.

So much for utilitarianism. What really got me annoyed was all your commentators' knee-jerk assumption that all there was to morality was giving money to charity. There's just no way that that can be an outcome even of a utilitarian moral system. I would be surprised if Singer is committed to that view. I don't know who these rational-giving people are, or Singer's relationship with them, but I would be surprised if he is in fact sympathetic to their views. They sound extremely right-wing, and Singer, although not a revolutionary socialist, is not exactly a massive fan of libertarian capitalism. Singer has an argument that results in his claim that it is deeply immoral not to hand over cash to starving people. It is independent of his utilitarianism and there's no suggestion that somebody who gives lots of money to charity has somehow disposed of all of their moral duties. The argument is simple and goes like this:

1) If I know of serious suffering, and, without undue detriment to myself, can do something to alleviate the suffering, and I don't do it, then I am behaving extremely badly.
2) I know of serious suffering. (Singer's example at the time was a famine in Bangladesh.)
3) Forgoing some luxuries will allow me to do something to alleviate the suffering without undue detriment to myself. (Singer's example was not buying a colour TV but instead giving the money to famine relief.)
4) If I don't forgo some luxuries I am behaving extremely badly.

Singer points out that most of us don't give up luxuries, so, if we are to be rational, we ought either to deny 1) or give up some luxuries. I think it's a powerful argument and I think premise 1) is correct. So, I really ought to give up more luxuries. Note how we've not appealed to utilitarianism or concluded that my overriding moral duty is to give money to famine relief. It might even be possible to show that any disposable income I get from forgoing some luxuries ought to go elsewhere, and so, if I am behaving morally, I won't be in a position to alleviate suffering without undue detriment to myself. What we certainly don't have is an argument to show that morality starts and stops with giving to famine relief, or that all my charitable giving should go to famine relief. I think the latter disjunct comes about because the argument doesn't appeal to utilitarianism at all.

What about Singer, animals and infants? Here we need to be careful. What Singer holds is that the category of 'person' is not a morally significant category. Again, the argument does not immediately appeal to utilitarianism. Singer wants the defender of the thesis that people are somehow special to give him a reason to believe it. He then, rather crudely, points out that cognitive capacity won't do, because some humans have less cognitive capacity than some non-human animals. The obvious category is neonates. But you might argue they're going to develop into pretty sophisticated things. So Singer says 'Aha, what about people with learning disabilities?' It's clear that Singer hasn't spent much time with people with learning disabilities. I think he has to think that some people have the mental age of an X-year-old, and this is obviously silly. However, I suspect that, as a matter of empirical fact, it's going to turn out correct. Some primates are going to have more cognitive capacity than some humans will ever have (if only because some of those humans are going to die young). He then has an argument against killing animals for food, which says that if you think we can kill non-human animals because they lack cognitive capacities, you ought to think it's ok to kill some human animals. You don't think it's ok to kill some human animals, so you'd better stop killing non-human animals for meat. I think the argument is ok but probably overreaches itself. He presents it in such a way that he seems to take it as obvious that only cognitive capacities are going to be relevant here, and that seems a bit odd and probably utilitarian. However, it certainly doesn't advocate killing disabled people.

However, Singer does think that it's sometimes ok to kill human beings. This is because he is a utilitarian. He has odd views about desires. I think the basic idea is that there are all these desires floating around in the universe, and our moral duty is to maximise desire satisfaction. The simple version of the argument is that the desires of the mother outweigh the desires of the infant, and so she can bump off her child. She can't do it willy-nilly, but if the child requires a lot of looking after, a lot of her desires won't be fulfilled, because she will have to spend all her time looking after the child. The argument is horrific. But the challenge is to explain what's morally relevant about the category of personhood.

I also think you should be a bit wary about calling him a murderer. It must be ok to explore and defend controversial arguments. I find Singer annoying because he is like a clever undergraduate who has found a powerful argument for a ridiculous conclusion and stops there. However, the right way to treat clever undergraduates and Peter Singer is to unpick the arguments and try to find out both what's compelling about them and where they've gone wrong. It's not helpful to call him names.


I somewhat disagree with Screwy's final paragraph; I'm reasonably happy that if someone advocates killing children then their moral position is pretty obviously worthless to me. I don't feel obliged to spend my life carefully picking over their arguments to find exactly which false premise or false inference led them to what is to me a completely abhorrent conclusion. Especially since Singer is pretty obviously going to beat me in any philosophical debate, as he's had a lot more training. I am prepared to stick my neck out and say he's just wrong in spite of this. I agree that it's not necessarily helpful to just call him a monster or a baby-murderer; from all I understand, he's a reasonably pleasant chap. But I think he's so much in love with his clever argument that he's unable to notice that it rests on massive, morally unacceptable prejudice against disabled people.

(no subject)

Date: 2012-09-02 08:34 pm (UTC)
From: [personal profile] ptc24
I think the above argument has heightened my suspicion that people who strongly reject utilitarianism do so for reasons I would regard as unfair, or at least that I would reject. That said: your last paragraph. Partly this depends on what you construe as "children" - if by this you include unborn children (many people do) then you could be on a sticky wicket. However, I take the general point - Singer has got an awful lot of people very riled who really don't deserve it, therefore he must be doing something wrong. However, people can disagree with each other, and still be called "utilitarian".

I am interested to know what counts as "moral intuitions". If these are things that we are consciously aware of, that leap into consciousness without showing their working, then clearly they vary, both between people and within people. I can contemplate trolley problems, and sometimes feel that letting five people die is better than killing one, and vice versa, depending on what mood I am in. Clearly the fundamental nature of morality does not vary with my mood, unless you are to accept a very extreme form of moral relativism. Another thing which I find can have an effect on my intuitions, as defined above, is doing thought experiments. I find I have a problem replicating thought experiments; the results vary depending on what else I have read. If by "moral intuitions" he means a sort of stable consensus amongst present day professional philosophers, a subject defined at least in part by the departure of the natural sciences from it, well... If he means something else, then it would be interesting to know what it is that he is talking about, and how it is that we can know anything about it.

I am highly concerned about this rubbishing of utilitarian moral intuitions. See, for example, this abstract: "Selective impairment of cognitive empathy for moral judgment in adults with high functioning autism". "We conclude that greater prevalence of utilitarianism in HFA/AS is associated with difficulties in specific aspects of social cognition." This is a matter of some considerable personal interest.

ETA - I note that (what I, and many cognitive psychologists) would call intuitions can be Just Plain Wrong. "A bat and ball together cost £1.10. The bat costs £1 more than the ball. How much does the ball cost?" There is an intuitive answer to this, which most people give, which is 10p. It was the first answer that came into my head, although I knew not to trust it. It is also the wrong answer. Do people have reasons for giving this answer? Arguably, yes; presumably, the heuristics that give rise to the 10p answer give the right answer or at any rate a good answer in a great many circumstances, many people may find that relying on these intuitions is a good life strategy - at least a good life strategy in a life where getting the right answer to tricky questions isn't very valuable.
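For what it's worth, the algebra behind the right answer is a two-liner; a throwaway sketch in Python (my own, working in pence to dodge floating-point noise):

    # The bat and ball cost 110p together; the bat costs 100p more than the ball.
    # So: ball + (ball + 100) == 110  =>  2 * ball == 10  =>  ball == 5.
    total, difference = 110, 100
    ball = (total - difference) // 2   # 5p, not the intuitive 10p
    bat = ball + difference            # 105p
    assert bat + ball == total and bat - ball == difference
    print(ball, bat)  # 5 105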

When people make those silly hypotheticals, they think they're dealing with nice isolated context-free problems... except have they considered that their intuitions, trained by life, may count as context? Have they considered that the answers given by people who are more given to reasoning in a decontextualised manner might be better answers to those problems than answers given by those who habitually drag context into everything without really realising it?
Edited Date: 2012-09-02 10:49 pm (UTC)

(no subject)

Date: 2012-09-04 10:54 am (UTC)
From: [personal profile] jack
utilitarians regard people's moral intuitions as broken and inconsistent, so they're trying to derive ethics purely logically from first principles

FWIW, I assumed everyone agreed you can't "deduce ethics purely logically from first principles" however much you might want to. But I feel shocked that if I try to describe utilitarianism, that's how I sound to someone else.

I would have described moral systems as generally trying to give a generalisation of our moral intuitions, both to (a) point out where our intuitions are probably wrong and (b) provide a convenient shorthand when we can do the "right thing" without thinking everything through in advance.

I agree with you that utilitarianism disagrees with our intuitions on stuff like "having a greater responsibility towards our immediate family", and I provisionally agree that this is a way it falls short and needs to be replaced with something.

I agree that Trolley problems aren't important except insofar as they help our intuition for other difficult situations, and I agree people often focus on them too much when concentrating on real life problems would be more productive, but I assume it _can_ be helpful (though I'm open to being convinced otherwise).

In a similar vein of "we should concentrate on real life ethics, but I feel like it's a good thing to have SOME thoughts about an underlying system", it sounds like you're advocating for ONLY using moral intuitions. Did I read that right, and do you mean that we never need worry that they may be wrong, or just that you think it's not the biggest practical concern at the moment?

(no subject)

Date: 2012-09-04 11:35 am (UTC)
From: [personal profile] jack
*hugs* I'm sorry, that suddenly sounded very confrontational, I didn't mean it to be, I was just trying to cover a lot of argument in a small space. *hugs*

(no subject)

Date: 2012-09-04 01:23 pm (UTC)
From: [personal profile] jack
*hugs* Thank you. I'm glad we don't disagree much. I think I am very much in the position of my interest in theoretical morality outstripping my interest in day-to-day morality which is rather unfortunate, although I think it's still important to go on thinking about theoretical morality, even while trying to concentrate on real-world, helpful, decisions.

we don't have to think through everything from first principles

I apologise for picking out this when it wasn't exactly what you were trying to say, but it seems related to the misunderstanding: aren't "first principles" exactly one of the things we're trying to decide?

I sort of hate trolley problems

I think I know what you mean, although I don't have the same visceral reaction. I think they are often useful, but I agree they can also be very overused, and "does my moral system work in an artificially extreme situation" is an interesting thought experiment, not a requirement!

I object to the idea that logical consistency should be the ultimate aim

Yes. I hope I didn't give the idea that I do think that?

I think a reasonable amount of consistency is needed, but the idea of a moral system which is consistent everywhere is unattainable, and it's more important to get one which matches our moral intuitions on the important things, and push all the inconsistency out to extreme examples we hope won't come up.

(no subject)

Date: 2012-09-04 10:57 am (UTC)
From: [personal profile] ptc24
Apologies - my previous comment was intemperate and downright rude in places, and I've been worrying about it.

That said, I think you should have a go at reading some of the things Bentham and especially Mill actually wrote - if nothing else, the quotes on their wikipedia pages; there's plenty to disagree with there, but you may well be surprised at what their positions actually are, and it would mean you can argue against real positions rather than strawmen. I tend to regard utilitarianism as being a bit like the Golden Rule (or in scientific terms, a bit like Newtonian mechanics); it doesn't cover everything, it gets some things wrong, it can require a lot of interpretation in hard cases, often it's better to follow the law or your customs or experiences or emotions or whatever, but it's a way of thinking about things. I find it better to think there's some unity to these things: even if there isn't one pure foundational principle, things are simpler towards the bottom than they are at the top, and things connect up. To think of morality as a random jumble of arbitrary principles (a strawman, I know, it's an uncharitable interpretation of some of the things I see around me) that varies wildly from person to person and culture to culture is dispiriting to me. When I read Mill I can get a feeling that there's some sense to things, that there's some point to it all.

Again, apologies - I felt threatened, and I do have this bad habit of trying to lash out with logic when that happens. I should try to curtail it.

(no subject)

Date: 2012-09-04 03:06 pm (UTC)
From: [personal profile] vatine
From my former life as a student of philosophy (I was, for a few years, toying with the idea of getting an MSc in computer science and an MA in philosophy, for the price of one master's worth of study, but having to do two master's theses), the thorny subject of "moral philosophy" holds up moral intuition not necessarily as "truth" but at least as a guideline.

If your proposed system-of-ethics violates your moral intuitions, it is likely incorrect. If it conforms to your moral intuitions, you can only say that it conforms to your moral intuitions.

[ begin polite rant ]
Furthermore, utilitarianism (in all the guises I have seen it) requires transitivity in the value function. That is, it cannot EVER be the case that, for three actions A, B and C, their respective utility function values (let's call them Ua, Ub, and Uc) satisfy Ua < Ub, Ub < Uc, Uc < Ua.

Now, I happen to think that that is pretty unlikely. There are a lot of cases involving human interaction that show non-transitivity, and while I cannot cite a specific action-triangle where it would happen, I seem to recall having constructed one (in the mid-90s, details are hazy...). So, before I trust a utilitarian system further than I can throw R M Hare's book on preference utilitarianism, I would want to see a comprehensive proof that the utility function used is, indeed, transitive. Because, without that, the resulting system is not sound.
[ end polite rant ]
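To make the requirement concrete, here is a toy sketch (my own, not anything from Hare) of the check I would want: given pairwise verdicts that happen to form the cycle above, transitivity fails, so no real-valued assignment Ua, Ub, Uc could represent them.

    from itertools import permutations

    # Toy pairwise verdicts encoding the cycle Ua < Ub, Ub < Uc, Uc < Ua:
    # beats[(x, y)] is True iff action x is ranked strictly above action y.
    beats = {("B", "A"): True, ("C", "B"): True, ("A", "C"): True}

    def ranked_above(x, y):
        return beats.get((x, y), False)

    # Transitivity requires: x above y and y above z implies x above z.
    violations = [(x, y, z) for x, y, z in permutations("ABC", 3)
                  if ranked_above(x, y) and ranked_above(y, z)
                  and not ranked_above(x, z)]
    print(violations)  # non-empty: no consistent utility numbers exist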

(no subject)

Date: 2012-09-04 03:48 pm (UTC)
From: [personal profile] jack
If your proposed system-of-ethics violates your moral intuitions, it is likely incorrect. If it conforms to your moral intuitions, you can only say that it conforms to your moral intuitions.

This sounds about right to me.

I would want to see a comprehensive proof that the utility function used is, indeed, transitive.

I don't have much theoretical knowledge, but it seems like people trying to think about utility functions assume that given two possible outcomes, we can say which is "preferable" and that decision is inherently transitive.

It seems like attempts to make an explicit utility function are defined in terms of real numbers, so they would automatically fulfil the transitive condition, but are always horribly flawed. The idea of a non-total ordering on utility is interesting, but I don't know if people have tried it -- I think that in terms of making any decisions, you have to concentrate on a subset which is ordered.
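To sketch what a non-total ordering might look like (a toy construction of my own, not anything from the literature): score each outcome per person, and only call one outcome better when it is at least as good for everyone.

    # Outcomes as tuples of per-person utilities. One outcome dominates another
    # only if it's at least as good for everyone and strictly better for someone.
    def dominates(a, b):
        return (all(x >= y for x, y in zip(a, b))
                and any(x > y for x, y in zip(a, b)))

    a, b = (3, 1), (1, 3)  # one favours person 1, the other person 2
    print(dominates(a, b), dominates(b, a))  # False False: incomparable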

But wouldn't a nontransitive set of three outcomes present problems for most systems of morality? Are there not some times when you choose "which outcome is best"?

(no subject)

Date: 2012-09-04 06:01 pm (UTC)
From: [personal profile] vatine
"Preferable" is about as inherently transitive as "preferentially wins in dice". That, as it were, depends on the dice.

You can say "I prefer A to B", "I prefer B to C" and still logically say "I prefer C to A" (since, at the very root, we're dealing with uncertainty and there are many cases where probabilities are non-transitive).

So, yes, there are cases where utilitarianism works, without strict transitivity guarantees, but if you're trying to build a sound, universal, framework, that transitivity needs to be guaranteed (or stated as an axiom).

As far as "does it pose a problem", of course it does. Humans are famously bad about reasoning about non-transitive systems. There may also be no "best" outcome (but, at least, there's almost always a "not the worst" choice).

(no subject)

Date: 2012-09-05 12:14 pm (UTC)
From: [personal profile] jack
at the very root, we're dealing with uncertainty and there are many cases where probabilities are non-transitive

I think I know what you mean, but I'm scared if I try to verbalise it I'll get it a bit wrong and lead to a massive misunderstanding. Can you give an example of what sort of intransitivity you're thinking of? (I assume I understand the statistics, but want to know which things are non-transitive that you think would be/wouldn't be in a moral system.)

As far as "does it pose a problem", of course it does

Maybe that was too euphemistic, I didn't mean "present a problem" as in "difficult", I meant I didn't see how it was compatible with a moral system at all.

I assumed (you may tell me I'm wrong?) a moral system is supposed to tell you which choice to make when you have a choice to make. So I'd expect answers like "do this" or "the real world effects are too complicated to say" or "all the options suck so much there's no good answer".

But non-transitive choices just seem like they would produce contradictions like "do both X and not X" or something, which doesn't feel like "an imperfect moral system" but "a big pile of words that don't mean anything".

I think I've misunderstood what you're trying to say, but I'm not sure what you are trying to say?

(no subject)

Date: 2012-09-06 03:34 pm (UTC)
From: [personal profile] vatine
The classic case of intransitivity is three cleverly marked dice, such that A beats B 5/9 of the time, B beats C 5/9 and C beats A 5/9, making for a nice little circle of intransitivity.


  • die A has sides: 2, 2, 4, 4, 9, 9
  • die B has sides: 1, 1, 6, 6, 8, 8
  • die C has sides: 3, 3, 5, 5, 7, 7
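The cycle is easy to verify by brute force; a quick sketch:

    from itertools import product
    from fractions import Fraction

    dice = {"A": [2, 2, 4, 4, 9, 9],
            "B": [1, 1, 6, 6, 8, 8],
            "C": [3, 3, 5, 5, 7, 7]}

    def p_win(x, y):
        """Probability that die x rolls strictly higher than die y."""
        wins = sum(a > b for a, b in product(dice[x], dice[y]))
        return Fraction(wins, 36)

    for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
        print(x, "beats", y, "with probability", p_win(x, y))  # 5/9 each time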


I assumed (you may tell me I'm wrong?) a moral system is supposed to tell you which choice to make when you have a choice to make. So I'd expect answers like "do this" or "the real world effects are too complicated to say" or "all the options suck so much there's no good answer".

Utilitarianism, classically, only allows you to choose between two actions (usually "do X" or "do not do X"). In many (but not all) real situations, you have more than two possible actions ("do X", "do Y", "do Z"). If you blindly pair-try these, you may well end up making a choice that depends heavily on the (probably essentially random) first elimination you did in your analysis, whereas the right answer would have been "this is too complex" (possibly implying "find an action D that is superior to all your intransitive choices"). But doing that would require actually engaging with the problem of non-transitivity.
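You can watch that order-dependence happen with a toy cyclic "beats" relation like the dice above; a sketch (the elimination procedure is my own naive rendering of pair-trying):

    # beats[x] is the option x defeats head-to-head: A beats B, B beats C, C beats A.
    beats = {"A": "B", "B": "C", "C": "A"}

    def eliminate(first, second, third):
        # Naive pairwise tournament: compare two options, keep the winner,
        # then compare the winner against the remaining option.
        winner = first if beats[first] == second else second
        return winner if beats[winner] == third else third

    print(eliminate("A", "B", "C"))  # C
    print(eliminate("B", "C", "A"))  # A
    print(eliminate("C", "A", "B"))  # B: the "winner" is pure ordering luck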

And therein lies the problem. Utilitarianism, as it stands, requires transitivity to be meaningful, but neither states axiomatically that transitivity exists nor considers even proving it. And without that, it is, essentially, a pile of words that often (but not always) happen to actually give guidance.

So, yes, I think you understood me correctly, even if I was unintentionally abstruse (I sometimes forget that most people have not indefinitely suspended a study towards a masters in philosophy on essentially these grounds).

(no subject)

Date: 2012-09-06 03:42 pm (UTC)
From: [personal profile] jack
I sometimes forget that most people have not indefinitely suspended a study towards a masters in philosophy on essentially these grounds

:) LOL.

Thank you. Will come back and reply properly later.

(no subject)

Date: 2012-09-06 04:02 pm (UTC)
From: [personal profile] jack
Utilitarianism, classically, only allows you to choose between two actions

Can you explain more? I understood the basic idea of utilitarianism to be "calculate the utility of each outcome, then choose the one with the greatest utility". (Where "utility" is some real number representing how "good" or "desirable" an outcome is.)

That's not workable in practice because you can never actually calculate utilities, except when comparing similar things (e.g. two people dying being worse than one person dying, assuming "dying is bad").

But it seems to work equally well for multiple actions: choose the one out of three that has the highest utility.
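In other words, on this reading the rule is just an argmax; a sketch, with invented utility numbers:

    # Naive utilitarian choice rule: score each action's outcome on one
    # real-valued scale and pick the maximum. The numbers are invented.
    utility = {"do X": 4.0, "do Y": 7.5, "do Z": 2.0}
    print(max(utility, key=utility.get))  # "do Y"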

Do you think utilitarianism says something else? Or is that not related to what you were saying?

(no subject)

Date: 2012-09-07 11:56 am (UTC)
From: [personal profile] vatine
Early utilitarian systems looked only at "(total) utility before" and "(total) utility after", then picked the action that resulted in the maximal utility after (for some utility metric).
The next evolution was to use more refined utility metrics. Even further sophistication only looks at the utility delta, ignoring any utility changes unrelated to the (very narrow) action(s) under inspection, specifically because it's hard to do whole-system inspections.
But that lands you right in the transitivity issue. And we're back to where we started.

As for "choose the one that has the highest utility", in the dice example, the dice all have the same sum, so they represent (essentially) three actions with the same utility and you can only choose one by pair-wise comparison.

(no subject)

Date: 2012-09-06 04:12 pm (UTC)
From: [personal profile] jack
The classic case of intransitivity is three cleverly marked dice

That was exactly the example I was thinking of. But I'm not sure how it relates to "prefer"?

I can think of intransitivity in human preferences in cases where it looks (to me) somewhat irrational. E.g. psychology experiments are full of cases where someone is given a choice between A and B and usually chooses B, but given a choice between A, B and C usually chooses A. The assumption seemed to be that if people had perfect information, or a determination to choose based on which they'd actually enjoy most later, they'd have a consistent preferred order of A, B and C, but that using the heuristics and habits we usually use as shortcuts in decision making resulted in a "false" preference.

Are you thinking of things like that?

Or your comment about probability sounds like you think apparent paradoxes like the non-transitive dice directly lead to non-transitive choices in preference however rational or irrational we are -- are you saying that?

Do you think the dice can translate into a non-transitive preference, or was that just an example of non-transitivity generally?

(no subject)

Date: 2012-09-07 11:48 am (UTC)
From: [personal profile] vatine
The non-transitive dice are purely an example of an intransitive relation, that one can (with some fiddling) play around with in one's hand. I once had an example of a situation with three (possibly artificially restricted) choices, where you had a pair-wise "better than", but still had an intransitive closure. Unfortunately, the details are lost in the haze of time.

I think it's hard to get a total order, much easier to pick "the better of two", but that still leaves you open to this type of problem. I also believe that there are multiple situations where there simply is no best course of action (or more than one that is equally good, in the end).

There are even more interesting problems with (some) utilitarian systems, when you start integrating changes in utility between multiple actors.

(no subject)

Date: 2012-09-02 09:56 pm (UTC)
From: [personal profile] lavendersparkle
I always find myself rather perplexed by people's strong reaction to Singer. I really don't see that much moral difference between a third trimester abortion and killing a newborn. They're halachically very different, but I don't expect that to hold much weight with most people. I don't feel moved to dismiss Singer's position simply because he holds what appears merely to be a more consistent and less hypocritical position than the majority consensus in the society I live in. (I say majority consensus based upon the fact that it is legal to kill disabled foetuses up until birth, that the vast majority of foetuses diagnosed with Down's Syndrome (92%) or Spina Bifida are killed before birth, and that the care of pregnant women is designed to facilitate this.)

Why would I be shocked by Singer's position when thousands of foetuses are being killed for being disabled every year in the UK alone?

(no subject)

Date: 2012-09-04 10:17 pm (UTC)
From: [personal profile] lavendersparkle
That's not what I was saying at all. My position is that I live in a society in which the majority opinion is that aborting viable disabled foetuses is OK, which I regard as pretty morally equivalent to infanticide of disabled infants; therefore I'm not going to dismiss someone's views as worthless because they advocate killing disabled babies, because if I did I'd have to not engage with the views of the majority of the population.

At least Singer puts forward his views in a clear logical manner which is relatively easy to engage with and refute, rather than just declaring that anyone who disagrees with them is a heartless misogynist. The reason there are fewer than half as many people with Down's Syndrome alive in this country as there should be isn't because of people like Singer, who make a flawed but clear logical argument; it's because of people who dress their hatred of disabled people up as kindness and refuse to even allow mention of the possibility that it's anything else. I think his prejudices are probably a lot less harmful than those of the majority of the population.

So, that's why I'm perplexed when people react so strongly to his views. To me they don't seem that different to those of the majority. I worry that people are just arguing that he's crossed a line as a way of reassuring themselves that their position is on the correct side of the line.

(no subject)

Date: 2012-09-03 11:48 am (UTC)
From: [personal profile] jack
I'm reasonably happy that if someone advocates killing children then their moral position is pretty obviously worthless to me.

My impression of what he said is like that of a mathematician who's discovered a "paradox". For the purposes of people interested in the truth, you can completely ignore it and anyone who believes it and you'll be right, but from the point of view of mathematicians finding the truth, it's interesting to dissect the argument and see (a) what it does say (b) if the flaw is interestingly relevant to other arguments (c) if they can work with the author to a mutual understanding.

(no subject)

Date: 2012-09-04 06:05 pm (UTC)
From: [personal profile] vatine
If I remember Singer correctly (again, I probably don't, it's been 15+ years and I find utilitarianism annoying on many levels), he used to (at least) say "humans are not special, being alive is, therefore you should minimise undue stress to all animals (human or not); also any line of reasoning that deals with sapience WILL have boundary issues, so do not do that".

(no subject)

Date: 2012-09-05 10:37 am (UTC)
From: [personal profile] vatine
I would have to go back and read Singer again for a more reasoned response to that. 15+ years of distance have not left me with enough detailed memory to say exactly what he says and what has been attributed to him. Unfortunately, I do remember being somewhat annoyed with Singer when I was reading him for class (hey, a utilitarian; of COURSE I was annoyed with him).

Some clarification

Date: 2012-09-07 02:08 pm (UTC)
From: (Anonymous)
I feel like I ought to clarify a couple of things. I think the basic motivation for utilitarianism is a commitment to the thought that all facts should be available from any particular perspective. I think it is difficult to see why that should be true, why it is the case that access to a fact should not depend on a particular perspective. My guess is that the motivation for the position stems from a sort of incoherent positivism. I think this is the idea that ultimately the world must be fully describable by the natural sciences. It's a sort of anti-philosophical position. Philosophical problems are reduced to sorting out the relations between our ideas or to scientific questions. When it comes to morality the basic question is: how should I live? The utilitarian thinks that she has an argument to show that the only valid consideration is some measure of utility. That sorts out the relations among ideas. We can then appeal to the science to work out what the right thing to do is. Science will tell us what is going to maximise utility.

I think it's false because it's just not the case that moral reasoning commits us to the thought that what we are trying to do is maximise some measure of utility. It is immoral because it leads to horrific conclusions. My slightly badly formulated argument was my best effort at reconstructing the reasoning that leads people to think that we ought to be maximising some measure of utility. But it shouldn't be surprising, according to me, that the line of reasoning goes wrong.

What someone who is a moral agent has to do is learn to respond to moral reasons. There is no particular reason why moral reasons should be available from any particular perspective. It looks like what one has to do is learn to be a moral agent, and then one will be able to recognise and respond to moral reasons. That sometimes gets talked about as developing a faculty of moral intuition. It is a slightly confusing term, because we also talk about our intuitive responses to moral problems. But I guess the basic point is that I see no reason to think that moral reasons should be available to non-moral agents. I think utilitarians are driven by a desire to make moral reasons the sort of things that are available independent of one's perspective.

With respect to killing disabled people, Singer doesn't think you can just kill disabled people. Singer thinks that the relevant utility is desire satisfaction. He sees nothing morally relevant about the status of being a person. The only morally relevant consideration is how many desires get satisfied. He thinks that this gives women the right to terminate pregnancies when bringing up the child is going to result in fewer desires being satisfied. Pretty horrific, but the challenge is to show what is wrong with the argument.

I don't think that's a lot clearer.

YAB
