Posted by: Becoming Gaia | Jun 8, 2010

Is Sacrifice Necessary for Morality?


Mark Sloan (MS) has offered a number of comments promoting the view that sacrifice is a necessary part of morality.  Here is why I believe that he is wrong.

Assume that a choice between two options is available to an entity.  Option A is expected to have a minor benefit to the entity and a major benefit to the community.  Option B is expected to require a moderate sacrifice from the entity and provide a minor benefit to the community.  Which is more moral and why?

If sacrifice is required for morality, then option A is clearly not moral despite being better for the community than B.  For me, this immediately raises the question — “So what IS the point of morality?”

In my world, the point/reason/goal/purpose/telos of morality is to benefit the community.  The more an act benefits the community, the more moral it is.  The more an act unnecessarily harms the community, the more immoral it is.

If an entity buys into the ideas that a) morality is GOOD and to be desired and b) that morality *requires* sacrifice, then that entity is going to choose option B even though *EVERYONE* would have been better served by option A.  To me, that seems to be a “very bad thing”™.

MS is correct when he points out that

Axelrod’s book The Evolution of Cooperation and the work in the literature since then is clear that the kind of cooperation of interest in game theory requires two critical elements: 1) sacrificing one’s self interests in the short term in order to gain the benefits of cooperation, and 2) punishing poor cooperators.

but sacrificing short-term interests to gain better long-term benefits is not truly a sacrifice in the longer-term view (in the sense that most people use the term) — it is an intelligent, far-sighted trade-off.  Further, the *STUDY* of cooperation is necessarily deconstructionist and must include short-term sacrifice in order to avoid the overwhelming effects of short-term self-interest — but that doesn’t mean that cooperation itself (and thus morality) REQUIRES sacrifice.
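The trade-off being described here is exactly the one Axelrod studied with the iterated Prisoner's Dilemma. Below is a minimal sketch of that game (the payoff values and strategy code are my own illustration, using the standard textbook payoffs, not anything from the post): cooperating costs a little in any single round, but over many rounds mutual cooperators out-earn defectors, which is why the "sacrifice" is really a far-sighted trade-off.

```python
# Minimal iterated Prisoner's Dilemma sketch (standard payoffs, my own
# illustration). "C" = cooperate, "D" = defect.

# (my_payoff, opponent_payoff) keyed by (my_move, their_move)
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # the short-term "sacrifice" of cooperating
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection
}

def tit_for_tat(history):
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if not history else history[-1][1]

def always_defect(history):
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    score_a = score_b = 0
    hist_a, hist_b = [], []  # each entry: (my_move, their_move)
    for _ in range(rounds):
        move_a, move_b = strategy_a(hist_a), strategy_b(hist_b)
        pa, pb = PAYOFFS[(move_a, move_b)]
        score_a += pa
        score_b += pb
        hist_a.append((move_a, move_b))
        hist_b.append((move_b, move_a))
    return score_a, score_b

# Two far-sighted cooperators each earn 300 over 100 rounds; a defector
# exploits tit-for-tat once, then both are stuck with mutual defection.
print(play(tit_for_tat, tit_for_tat))    # (300, 300)
print(play(always_defect, tit_for_tat))  # (104, 99)
```

The point the numbers make: the cooperator's per-round "loss" relative to defecting is dwarfed by the long-run payoff of sustained cooperation, so calling it a sacrifice only makes sense in the deconstructed, single-round view.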

The reason why I harp on this is because most people equate self-interest as being the opposite of morality — which is emphatically not true.  As pointed out in yesterday’s post, it is very, very rare that enlightened (or long-term) self-interest deviates from morality — and even a truly selfish paperclip-maximizer will *appear* moral until it believes that it can’t be stopped.  Self-interest is moral because helping yourself means both a reduced requirement for the community to help you and a more effective you to help the community — and you are the best judge of what help you need.  Or, to twist it around a little bit, I would like to personally make the sacrifice of asking you not to sacrifice so that you are better able to benefit the community (which will then benefit me).


Responses

  1. Becoming Gaia, let’s look at your example: “Option A is expected to have a minor benefit to the entity and a major benefit to the community. Option B is expected to require a moderate sacrifice from the entity and provide a minor benefit to the community. Which is more moral and why?”

    I can argue, using the normal methods of science, that Option B is in fact moral and Option A is morally neutral. The morally neutral Option A is prudent and perhaps even socially admirable. But morally admirable acts are in a very different category from socially admirable acts.

    How do I know this? Because I can support the argument that it is a provisionally ‘true’ fact, teased out using the normal methods of science, which reveals what moral behavior ‘IS’ (not to be confused with what moral behavior ‘OUGHT’ to be, which is a completely different topic).

    The subtitle of your website is “What the Science of Morality Can Tell Us About The Future & How To Fulfill Our Deepest Desires”.

    I have seen nothing here that shows the normal methods of science lead to the conclusion that: “The more an act benefits the community, the more moral it is. The more an act unnecessarily harms the community, the more immoral it is.”

    I disagree with: “If an entity buys into the ideas that a) morality is GOOD and to be desired and b) that morality *requires* sacrifice, then that entity is going to choose option B even though *EVERYONE* would have been better served by option A.”

    Kinds of acts that are good include fulfilling your own needs and preferences, helping other people fulfill their needs and preferences when doing so also fulfills yours, and finally, helping other people fulfill their needs and preferences when doing so sacrifices yours. I can argue that science shows us that only the third kind of ‘good’ act is properly called moral (according to what science tells us moral behavior ‘IS’).

    • MS> I can argue, using the normal methods of science, that Option B is in fact moral and Option A is morally neutral. The morally neutral Option A is prudent and perhaps even socially admirable. But morally admirable acts are in a very different category from socially admirable acts.

      No, I don’t believe that you can argue this “using the normal methods of science”. My definition of morality is simpler than your definition of morality. By Ockham’s razor, it is incumbent upon you to prove why my definition is not sufficient and your definition solves (at least some of) its insufficiencies.

      Other than definitionally, *why* are morally admirable acts distinct from socially admirable acts? Is there a useful purpose to highlighting this distinction?

      If you can answer these questions, then you are “using the normal methods of science”. But, until then, the burden of proof is on you.

      • Becoming Gaia, the normal methods of science are to first propose a hypothesis about the underlying principles of an aspect of the natural world such as moral behaviors. For example, your hypothesis about what moral behavior ‘IS’ might be: “The more an act benefits the community, the more moral it is. The more an act unnecessarily harms the community, the more immoral it is.”

        Then this hypothesis could be evaluated for how well it meets normal criteria for scientific utility. If your hypothesis meets these criteria for scientific utility notably better than all other hypotheses and is not contradicted by any known facts, then it can be called provisionally ‘true’ as a matter of science.

        Those criteria could be: 1) explanatory power for the diversity and particular commonalities and contradictions of cultural moral standards; 2) explanatory power for why the purposes, origins, and nature of moral behavior have historically been so difficult to understand and why people appear to give inconsistent answers about moral choices in “trolley problems”; 3) predictive power for intuitive moral judgments; 4) universality based on evolution’s exploitation of aspects of physical reality; 5) simplicity; and 6) utility for understanding moral behavior as part of science and for providing insights into how to increase the benefits of behaving morally as defined by the theory.

        I have evaluated all alternative hypotheses that I am aware of and none are remotely competitive with the following hypothesis “Virtually all moral behaviors are behaviors that increase, on average, the benefits of cooperation for the group and are unselfish at least in the short term.” Its explanatory powers are such that I have been unable to find cultural moral standards or puzzles about morality that it, backed up by game theory, cannot explain.

        Unfortunately I have not been able to get a paper published in a journal. As a retired aerospace engineer I may not have an adequate background to ever be able to produce a publishable paper. But in the meantime, I have found it beneficial to my presentation and appreciation of common misunderstandings (even among moral philosophers) to comment and solicit comments on websites receptive to the idea that science can tell us useful things about moral behavior.

        Of course, even if your hypothesis turns out to not be what science concludes moral behaviors ‘ARE’, you are still logically free to propose it as what moral behavior ‘OUGHT’ to be. But then it is not science any more.

      • I agree with everything you said.

        My point remains that you have not provided a reason/distinction why adding the second half of your definition – “and are unselfish at least in the short term” – produces better science than just the first half of your definition by itself.

        I understand the definitional distinction but definitions are not science. Why is the distinction between your socially admirable and morally admirable important? What is its effect upon the real world? I believe that it merely clouds the issue and should therefore be discarded by Ockham’s razor. What exactly is your argument against that other than “my argument is science and yours is not”?

      • And as a personal suggestion regarding being able to produce a publishable paper . . . . Journals have *very* high standards (as well as various biases against unknowns and “dabblers”). Conferences and the related published Proceedings from them have much lower standards and fewer biases. Find yourself a conference and submit a paper there. From the quality of your arguments and the knowledge behind them, I’d be very surprised if you didn’t succeed within a try or three — and then you’ll be better equipped to tackle a journal.

    • MS> I have seen nothing here that shows the normal methods of science lead to the conclusion that: “The more an act benefits the community, the more moral it is. The more an act unnecessarily harms the community, the more immoral it is.”

      Wow. That’s a bit harsh. I certainly haven’t filled in all the pieces but the general flow of the argument should be visible.

      1. I have proposed the hypothesis that “The goal of morality is to maximize the goal fulfillment of the community”. My statement “The more an act benefits the community, the more moral it is. The more an act unnecessarily harms the community, the more immoral it is.” is simply a restatement of the hypothesis, *not* a conclusion (and it was even prefaced with “in my world” ;-).
      2. The scientific question, therefore, is “Does this hypothesis provide a better model than any competing alternative?”

      We seem to be agreed (for the most part) on *how* my hypothesis operates. You seem to have no disagreement on how it would classify any given case (merely whether that classification is incorrect or correct). You seem to have no problem with my missing moral cases (false negatives) but do have a large problem with what you believe to be false positives. I contend therefore that our disagreement is solely about the test cases and has absolutely nothing to do with my hypothesis.

      If I changed my hypothesis to “The goal of social admirability is to maximize the goal fulfillment of the community” (and its restatement to “The more an act benefits the community, the more socially admirable it is. The more an act unnecessarily harms the community, the more socially repugnant it is.”), would you still have an objection? If not, then I will again contend that our difference is definitional as opposed to anything “scientific”.

      • I may have posted out of sequence here; the thread seems a little confused.

        The modification you suggested means that your hypothesis is no longer talking about just moral behavior (in my opinion). I would have to consider whether we know facts about what is socially admirable that would contradict it (be counter-examples). At least some people would argue that the social admiration of some rock stars, outlaws, and strongman dictators is an example of social admiration of behavior that harms their communities.

        My best effort at explaining what is required for a hypothesis to be provisionally ‘true’ as a matter of science is in my earlier reply dated June 8, 2010 at 8:04 pm.

      • Oomph. I was only proposing renaming my definition to “social admirability”, not changing it to a definition of “social admirability” that included rock stars, etc. (who I emphatically do not see as admirable — maybe adulated and even admired by idiots, but not admirable by a rational, long-sighted person ;-).

        Please read what I wrote as an exact new definition and call it anything you want that means exactly that definition and then evaluate whether it qualifies as “science”.

  2. You are drawing a distinction between “helping other people fulfill their needs and preferences when doing so also fulfills yours” and “helping other people fulfill their needs and preferences when doing so sacrifices yours” and arguing that “science shows us that only the third kind of ‘good’ act is properly called moral”. I can easily agree that there is a distinction between the two kinds of ‘good’ acts but . . . .

    How does science show that “only the third kind of ‘good’ act is properly called moral” (or, alternatively, how does science show that “helping other people fulfill their needs and preferences when doing so also fulfills yours” is *not* moral)?

    • My best effort at explaining how science reveals that distinction, and what is required for a hypothesis to be provisionally ‘true’ as a matter of science, is in my earlier reply dated June 8, 2010 at 8:04 pm.

  3. I saw your earlier reply and agreed with it.

    The problem is that I still don’t see where you have provided any specific example in which the second half of your definition is necessary to fulfill any purpose (other than matching the entirety of your definition).

    • Again, I am a little out of sequence here. I think I answer this question in my June 8, 2010 at 9:22 pm post which includes: “The reason my hypothesis produces better science about what morality ‘IS’ is that without the unselfish aspect, the hypothesis does a poor job of meeting the scientific utility criteria and is contradicted by common cultural moral standards and moral intuitions.”

      For instance, a version of the Golden Rule is “Do unto others as you would have them do unto you”. Game theory shows this is an excellent heuristic (when coupled with our ever present biological urges to punish poor cooperators) for initiating direct and indirect reciprocity (cooperation). But the Golden Rule also implies unselfish behavior is required (since there is no consideration given for eventual rewards), which would make it a counter-example to any hypothesis about moral behavior that did not include unselfishness at least in the short term.
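The two halves of this claim can be made concrete with a small game-theory sketch (strategy names and payoff values are my own illustration, not from the comment): an *unconditional* Golden-Rule cooperator is exploitable, but a cooperator who also punishes defection sustains reciprocity, and either way the opening cooperative move is a short-term sacrifice with no guaranteed reward.

```python
# Sketch of the comment's point: the Golden Rule works as a heuristic for
# initiating cooperation when coupled with punishment of poor cooperators.
# Payoffs are illustrative single-player values per round.

PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def golden_rule(opp_moves):
    return "C"  # do unto others, with no regard for eventual rewards

def punishing_cooperator(opp_moves):
    # Golden Rule plus the urge to punish: cooperate unless just defected on.
    return "D" if opp_moves and opp_moves[-1] == "D" else "C"

def exploiter(opp_moves):
    return "D"

def total(me, them, rounds=50):
    """My total score when playing `them` for the given number of rounds."""
    mine, theirs, score = [], [], 0
    for _ in range(rounds):
        a, b = me(theirs), them(mine)
        score += PAYOFF[(a, b)]
        mine.append(a)
        theirs.append(b)
    return score

print(total(golden_rule, exploiter))            # exploited every round: 0
print(total(punishing_cooperator, exploiter))   # loses once, then punishes: 49
print(total(punishing_cooperator, punishing_cooperator))  # cooperation: 150
```

The first result is why punishment is needed at all; the second and third show that the unselfish opening move pays off only against fellow cooperators, which is the short-term-unselfishness requirement the commenter is pointing at.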

      • MS > the hypothesis does a poor job of meeting the scientific utility criteria and is contradicted by common cultural moral standards and moral intuitions.

        Could you provide some particulars and details? I’m not seeing it yet. What “common cultural standards and moral intuitions” do you believe that it contradicts? I am not aware of any such contradictions. If you provide specific examples, we could hash through them. Note: The Golden Rule is a great specific example but I don’t see how my hypothesis isn’t in complete alignment with it.

        MS > But the Golden Rule also implies unselfish behavior is required

        Huh? Once again, I just don’t see this at all. When I apply the Golden Rule I certainly pay attention to long-term considerations — especially when you are talking about children (for example, not allowing a child to make themselves sick on candy).

  4. Becoming Gaia, in response to my above reply dated June 8, 2010 at 8:04 pm you said:

    “I agree with everything you said.” (Yahoo!)

    “My point remains that you have not provided a reason/distinction why adding the second half of your definition – “and are unselfish at least in the short term” – produces better science than just the first half of your definition by itself.”

    (The reason my hypothesis produces better science about what morality ‘IS’ is that without the unselfish aspect, the hypothesis does a poor job of meeting the scientific utility criteria and is contradicted by common cultural moral standards and moral intuitions.)

    “I understand the definitional distinction but definitions are not science. Why is the distinction between your socially admirable and morally admirable important? What is its effect upon the real world? I believe that it merely clouds the issue and should therefore be discarded by Ockham’s razor. What exactly is your argument against that other than “my argument is science and yours is not”?”

    So far, I have only talked about what science tells us moral behavior ‘IS’.

    What I have not talked about is why anyone might care what science has to say about morality. A reasonable person might say “So what? There is nothing in science that obligates me to accept the burdens of any definition of morality. I am free to practice whatever morality I, and my group, expect will best meet our needs and preferences. That is, we are free to make a rational choice about what morality we will practice.” This is exactly right.

    There are good reasons I am so interested in what science tells us moral behavior ‘IS’. My interest is based on arguments that this specific definition of morality will, in fact, better meet groups’ and individuals’ needs and preferences, on average, than any other existing definition of morality. This prediction is based largely on 1) the definition of moral behavior being acts that create benefits (acting morally should not be viewed as a burden) and 2) the innate consistency of my definition with our moral intuitions in all circumstances and from all viewpoints. Except for having to make rational choices about who is included in the group that receives the benefits of cooperation, this definition should nail Rawls’ reflective equilibrium criteria better than any other definition of morality.

    So in addition to being the definition of morality according to science, I also can argue it is the definition of morality that is more likely to meet the needs and preferences of groups and individuals than any other existing definition.

    If a definition of morality can ever be devised that, on average, better meets the needs and preferences of groups and individuals, then I would suggest adopting and practicing that new one. My definition of morality based in science would then become an idea of perhaps academic interest but no practical importance.

    • Three points:

      1. You say “acting morally should not be viewed as a burden”. I say that requiring a sacrifice is a burden.
      2. You say “the innate consistency of my definition with our moral intuitions in all circumstances and from all viewpoints.” I ask where the sacrifice is in all the trolley problems.
      3. You say “we are free to make a rational choice about what morality we will practice”. I say that there is only one truly rational choice — in that *every* other choice is suboptimal when compared to it.

      • 1. The unselfishness I am claiming is a necessary component of every moral act is only required to be in the short term. In environments where there are benefits from cooperation that are in excess of those available to non-cooperators, people who act morally can expect, on average, to benefit from their moral behavior and the cooperation it initiates and maintains. Therefore, on average, moral behavior is a benefit, not a burden.

        2. Trolley problems typically require choosing either to act to kill one person in order to save many or, if you choose not to act, to allow many people to die. Every choice available in the trolley problems I know of is an unselfish choice. Each requires accepting the guilt for either actively killing one person or doing nothing and allowing many to die.

        3. Yes, we are free to make a rational choice about what morality we will adopt and practice. I also agree that there are suboptimal choices. But a rational choice is a choice that a group or individual expects will best meet needs and preferences. A group or individual can be unaware of all alternatives, or be mistaken about which choice will actually best meet needs and preferences. But their choice is just suboptimal, not irrational.

      • Interesting answer/now we’re getting somewhere.

        Let me attempt to rephrase your #2.

        First by counter-example. I don’t know what the right choice is, I don’t care to work it out, and I’m not going to feel guilty about it. I can choose to not make a choice and pretend that that isn’t a choice in and of itself. Or, if forced to make a choice, I can simply flip a coin and let the result be my choice.

        It seems to me that your argument is that a) making a choice and owning that choice is a sacrifice and b) morality requires making and owning a choice; therefore c) morality requires a sacrifice.

        This is obviously strictly logically true; however, it is also misleading/confusing since people generally don’t consider making a choice a sacrifice (despite the fact that they *are* sacrificing the other option(s) for, at least, the current moment, if not forever).

        The problem with this definition is that it also makes my shopkeeper’s choice a moral choice by saying that he chose it over the other (sacrificed) alternatives because of the benefit to the community.

        Also, to forestall another possible argument, please note that I would never feel guilty over a real life trolley problem. I would feel regret for those who died but it would be exactly the same regret I would feel if I had stood next to the person who made the decision.

        So, I would like to strengthen my argument by saying that the ONLY sacrifice that I suffer in a trolley problem is by owning my choice (or, alternatively phrased, by taking responsibility).

        I would (and am seriously considering precisely how to) add owning/taking responsibility for a choice to the requirements for a choice/action to be moral. Does this answer your objections regarding the necessity of unselfishness and/or sacrificing or am I still missing something? And, if I am, could you please continue to phrase your arguments in terms of the trolley problem?

  5. Becoming Gaia, this is in response to your June 9, 2010 at 12:20 am post.

    The Golden Rule admonishes us to do to others as we would have them do to us. If we would want others to help us when we need help, then, it tells us, we must unselfishly help others when they need help without regard for whether our unselfishness will ever be reciprocated – whether or not the others can be expected to ever help us achieve our goals. Therefore, unselfish behavior is required by the Golden Rule. If a proposed definition of morality does not include a requirement for at least short term unselfishness, then it is contradicted by the Golden Rule.

    Your proposal is contradicted by a common moral intuition: The overwhelming majority of people believe it is morally admirable for a soldier to risk injury and death in time of war while lawfully killing enemy soldiers. How can killing someone else maximize the ability of both to meet their goals?

    Your proposal lacks explanatory power for contradictory moral standards such as: Slavery is moral/not moral, Women are morally obligated to be/not to be submissive to men, Religious groups believe it is morally required/not required that men be circumcised or to not trim their beards, ‘Pagan’ moral virtues emphasize leadership; this is the opposite of ‘Christian’ moral virtues that emphasize meekness.

    Your proposal is not consistent with the rest of science (evolution and game theory) concerning the emergence of the biological components of our moral intuitions such as empathy, shame, guilt, pleasure when being generous, and a willingness to risk injury and death to defend family and friends.

    The overall point I am trying to make is that a proposed definition of morality should only be called part of science or consistent with science if that definition is shown to better meet normal criteria for scientific utility than any other known alternative.

    • MS > The overwhelming majority of people believe it is morally admirable for a soldier to risk injury and death in time of war while lawfully killing enemy soldiers. How can killing someone else maximize the ability of both to meet their goals?

      Killing someone cannot maximize the ability of both to meet their goals.

      Unfortunately, in a kill or be killed situation, there is no option which avoids killing someone. Therefore, you go to the next level of evaluation.

      Loyalty vastly increases the goal satisfaction within your group. Your being loyal to your group is therefore moral.

      The other soldier that is trying to kill you is equally moral.

      Or maybe, if you desert to Canada, you’re being more loyal. But actually, no, your desertion probably isn’t saving any lives (unless enough people do it); it is merely changing which lives are lost, and it makes your life easier because YOU don’t get put into the situation of kill or be killed. In fact, arguably, you are being less moral because you are abandoning your group with no other effect that gets counted by morality.

      Of course, the truly immoral ones are the ones who let it get to the point of war (or who are prosecuting the war).

      My algorithm seems to exactly handle this case.

      • I’ll rephrase my assertion: “The overwhelming majority of people believe it is morally admirable for a soldier to risk injury and death in time of war while ACTIVELY SEEKING OUT enemy soldiers to be attacked and possibly killed.”

        That is, it does not have to be a kill or be killed situation to be morally admirable. Finally, yes, the leadership who started the war almost certainly acted immorally. It still seems to me your proposal is contradicted by this case as well as the unselfishness requirement integral to the Golden Rule.

      • Your rephrasing doesn’t seem to me to change anything. Now the soldier is merely ACTIVELY pursuing the task to which he has been set as opposed to lollygagging about and doing it half-heartedly. Seems to me that by my proposal, people would find that more admirable/moral because he is ACTIVELY pursuing the sub-sub-goal of authority (once an action is moral or immoral, ACTIVELY pursuing it can only make it more so).

    • MS > Your proposal lacks explanatory power for contradictory moral standards such as: Slavery is moral/not moral, Women are morally obligated to be/not to be submissive to men, Religious groups believe it is morally required/not required that men be circumcised or to not trim their beards, ‘Pagan’ moral virtues emphasize leadership; this is the opposite of ‘Christian’ moral virtues that emphasize meekness.

      Actually my proposal has already clearly and explicitly addressed the reason for contradictory moral standards in a fashion that very few others have. Go back and reread The Science of Morality, Part II: Universal Subgoals. Contradictory moral standards arise when entities demote my proposed goal of morality and promote one of the subgoals above it.

      I think that you’re missing the *immense* explanatory power of what I’m proposing. Try applying each of the “universal” subgoals to any moral question you can come up with. If only one applies, it should supply the moral answer that you are expecting. If more than one applies, you should be able to find entities and cultures for each for which it is the true “moral” choice and justification/reason.

      Slavery and submissive women are conflicts between the two subgoals of gain/preserve resources and gain/preserve cooperation on one side and the single sub-goal of freedom on the other. Religious groups are formed solely to preserve the goal of honor and obey G*D and his dictates. Pagan “moral” virtues emphasize self-improvement. Christian moral virtues emphasize submission to authority (a sub-goal of goal preservation). My proposal clearly explains all of these things without recourse to anything other than the proposed universal subgoals which were themselves directly derived from my proposed goal.

      • Becoming Gaia said: “Contradictory moral standards arise when entities demote my proposed goal of morality and promote one of the subgoals above it.”

        I went back and read The Science of Morality, Part II: Universal Subgoals. I read it as a list of goals that people might have. Everyone already knows people have goals (and roughly what they are) and that people can sometimes choose, either individually or in groups, to pursue those goals by adopting moral standards that they expect will be useful for achieving these goals. I don’t see that the list explains anything or provides any useful insights.

        On the other hand, my hypothesis, backed up by the mathematical rigor of game theory, provides useful insights into the relative effectiveness of existing moral standards in fulfilling the needs and preferences of groups and how those moral standards (such as rules for punishing immoral acts which are very tricky to get right) might be changed to best increase that effectiveness, the rewards of moral behavior, and therefore the incidence of moral behavior.

        Finally, I’ll repeat that your proposal is not consistent with the rest of science (evolution and game theory) concerning the emergence of the biological components of our moral intuitions such as empathy, shame, guilt, pleasure when being generous, and a willingness to risk injury and death to defend family and friends.

        We may be at the point in our discussion where we will just have to agree to disagree. That is certainly not unusual in discussions about morality. In fact, it may be the norm.

        In any event, I appreciate your comments and the opportunity to comment on your posts.

      • I’m content to agree to disagree at this point.

        I’m clearly not expressing my point well enough if you believe that my proposal “is not consistent with the rest of science (evolution and game theory)” because that is precisely what I believe that it is based upon (of course, it could also be that I’m badly wrong — but then you should eventually be able to point out where I’m wrong once I’ve successfully conveyed what I think I’m arguing ;-).

        I dearly hope that you’ll continue reading and commenting because when/if you have an AHA! moment and see what I’m striving to convey, you’ll be an invaluable resource in improving my attempts at communication. Though, actually, it would be even more correct to say that comments are always *really* helpful even without such an AHA! moment — so please keep on keeping on.

        = = = = = = = =

        An important point of Universal Subgoals that I’m apparently not conveying at all well enough is that these specific subgoals (and no others) are logically derived merely from the fact of having goals (without the requirement of knowing any goal content). Yes, “Everyone already knows people have goals (and roughly what they are)” BUT this proposal is an explanation of WHY they have the top-level/moral goals that they have (and why they *are* so similar AND also why morality differs from culture to culture i.e. when the history/environment of the culture leads to differing orders of importance of the subgoals).

        With regard to “the emergence of the biological components of our moral intuitions”, I’m trying to get at that with the post The Science of Morality, Part III: The Evolution of Morality, Part 1 and today’s post on A Gentle Introduction to the Telos of Morality, Part I. The problem is that just like the simple statement “Corn is life”, there is an incredible amount of conceptual information marshalled behind my so-called “simple” proposal.

  6. You seem to oscillate between defining morality as “total attainment of goals” and “benefit of the community”. You should at least perhaps say something to the effect of “total utilitarianism and average utilitarianism are equivalent, because due to the Holy Light of Cooperation, you can’t have one without the other”; for example, you cannot increase average utility by killing off poorer agents. Same goes for self-sacrifice.

    What is your take on the http://plato.stanford.edu/entries/repugnant-conclusion/ ?

