Posted by: Becoming Gaia | May 25, 2010

Challenge: Disprove this “Definition of Morality”


I declare that the (unknown but correct) goal of morality is to satisfy the maximum number of goals for all goal-setting entities (satisfaction as judged by the individual goal-setters themselves).

It is clear, concise, and objective, and deriving the consequences of this declaration leads directly to our current intuitive beliefs about morality.

The specific goals have absolutely no relevance except insofar as they affect other goals. Murdering or maiming another is obviously bad (i.e. bad by definition) because it pretty much guarantees that the other entity’s goals will go unsatisfied.

I challenge anyone to present a moral issue whose current state of play is not correctly analyzed by extending this goal (note that I am not saying solved — since the total ramifications of many acts are crucially dependent upon circumstances and unknown — but that this definition gives us a much more concrete handle on it).
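
Read literally, the declaration is a maximization rule over a simple count. Here is a minimal sketch in Python, purely as an illustration; the reporting interface (each entity judging each of its own goals true or false) and all names are assumptions of mine, not part of the definition itself:

```python
# Minimal sketch: morality as maximizing the count of satisfied goals,
# with satisfaction judged by each goal-setter themselves.
# The boolean reporting interface and all names are illustrative assumptions.

def satisfied_goal_count(entities: dict[str, list[bool]]) -> int:
    """Total number of satisfied goals across all goal-setting entities."""
    return sum(sum(judgments) for judgments in entities.values())

# Comparing two possible outcomes: the definition prefers the higher count.
outcome_a = {"alice": [True, False], "bob": [True, True]}    # 3 goals met
outcome_b = {"alice": [True, True],  "bob": [False, False]}  # 2 goals met
print(satisfied_goal_count(outcome_a) > satisfied_goal_count(outcome_b))  # True
```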

(edited 20100526T13:53 to add a tag of utilitarianism due to comments)


Responses

  1. It has some merit, but also similar weaknesses to Sam’s argument.
    1. People are often not the best judge of their own satisfaction (hence Sam’s insistence on scanning people’s brains).
    2. Should we maximise the sheer number of goals satisfied, or should we place some weighting on each goal? If so, is the weighting per person or overall? If overall, then how do we judge between two separate people’s goals when their reporting of ‘satisfaction’ is subjective?
    3. Sometimes you can have knowledge that contradicts people’s goals but that you ‘know’ is in their long-term interests, e.g. children being forced to go to school; an intervention for an alcoholic or drug addict; restraining a temporarily suicidal person; etc.

    As for current issues not correctly analyzed by this idea: healthcare springs to mind. Gun control. Separation of Church and State in the US.

    Basically anytime the needs (or rights) of the few outweigh the needs of the many.

    A good attempt, but since morality is subjective it will necessarily come into conflict with itself when applied to more than one person. In fact, it often conflicts within one person.

    I would suggest having a look at the trolley problem to see how our own morality fails to be logical or consistent (anadder.com/putting-a-price-on-human-life, where I go through hoops to find the ‘right’ answer).

  2. 1. People are often not the best judge of their own “global” satisfaction, but I’m not looking for that. I’m looking for them to judge whether a *specific* goal has been met.
    2. Goals should be weighted in importance by the setting entity. We should not judge between two separate people’s goals. Goal satisfaction should be measured across individuals as the summation of each individual’s weighted percentage of goals fulfilled (see the sketch after this list).
    3. This is why I am drawing a distinction between goals and preferences. Children do not have a goal of not going to school. They don’t currently want to go, but they will acknowledge the reasons why it is in their best interest (as measured by fulfilling their other goals). An addict doesn’t want an intervention, but when he is clear-headed he again recognizes that it is in his best interest.
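
    A minimal sketch of the aggregation rule in point 2, assuming each goal carries a weight assigned by its setter and a fulfilled/unfulfilled judgment made by that same setter; the names (`Goal`, `person_score`, `total_satisfaction`) are illustrative, not an established implementation:

    ```python
    from dataclasses import dataclass

    @dataclass
    class Goal:
        weight: float     # importance, assigned by the goal-setter themselves
        fulfilled: bool   # satisfaction, judged by the goal-setter themselves

    def person_score(goals: list[Goal]) -> float:
        """Weighted percentage of one person's goals fulfilled, in [0, 1]."""
        total = sum(g.weight for g in goals)
        if total == 0:
            return 0.0
        return sum(g.weight for g in goals if g.fulfilled) / total

    def total_satisfaction(people: dict[str, list[Goal]]) -> float:
        """Summation across individuals; no cross-person judging of goals."""
        return sum(person_score(goals) for goals in people.values())

    # Alice fulfills her heavily weighted goal; Bob fulfills neither of his.
    people = {
        "alice": [Goal(weight=3.0, fulfilled=True), Goal(weight=1.0, fulfilled=False)],
        "bob":   [Goal(weight=1.0, fulfilled=False), Goal(weight=1.0, fulfilled=False)],
    }
    print(total_satisfaction(people))  # 0.75
    ```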

  3. My favorite example of where the needs (or rights) of the few outweigh the needs of the many is the question of whether it is moral to kidnap a person off the street to serve as an involuntary organ donor to save five people. The short-sighted analysis says that you are sacrificing one person to save five and that you should do it. The reality is that once such a practice is considered acceptable, people will have to defend against it. This, in turn, leads to arms races, etc., which end up costing far more than the five lives saved.

    I contend that in all the cases where it appears that the needs (or rights) of the few outweigh the needs of the many, it is merely an illusion due to not understanding the implications, ramifications, and consequences of the entire situation.

  4. I could be wrong, but this seems to be only an extension of utilitarianism. Instead of the greatest happiness for the greatest number of people, we have the greatest goal-related satisfaction for the greatest number of people.

    I notice you say that murder is bad by definition because it guarantees that their goals will go unsatisfied. But why must their goals be related in any way to their death? That is to say, one person’s goal might be to create a great work of art, which they succeed in doing, after which they are killed. From their point of view, murder is pretty much the same as dying by any other cause.

    • @David Michael – Yes, this is a variation on utilitarianism as explained in a subsequent post.

      Regarding your second point, it is important to remember that human beings have *many* goals and decide upon new ones all the time. Any living human being will have goals and death will wipe them out.

      Also, be sure to read my most recent post for more clarification.

  5. Sorry, but your explanations of why interventions or children’s education are important are not in any way defined, described, or hinted at by your initial statement:

    “I declare that the (unknown but correct) goal of morality is to satisfy the maximum number of goals for all goal-setting entities (satisfaction as judged by the individual goal-setters themselves).”

    So, I accept your challenge and ask you to tell me how, in any way, shape or form, your initial assertion deals with an alcoholic who is on a self-destructive spiral but doesn’t want help.

    I may be playing Devil’s Advocate here, but it’s important that people see any holes in their logic, or are able to explain to those who think they see holes why they aren’t holes.

    • @keddaw – Please continue to play Devil’s Advocate and keep me honest. Does my most recent post on Universal Sub-Goals start making it clear how the initial statement can be linked to the specific examples that you cite?

  6. First visit. Interesting concept. What you have is the core of an algorithm for calculating a moral outcome, in which the outcome is “satisfaction of X” and the answer to “satisfaction” is either “true” or “false.” It wouldn’t matter much whether “goals,” “priorities,” “happiness,” “prosperity” or practically any other X were the relevant operand, as long as “satisfaction of X” can be evaluated as true or false. That’s my first impression (sketched below).
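
    For instance, a minimal sketch of that impression, with “satisfaction of X” as a pluggable true/false predicate (the names and sample data below are hypothetical, not from the post):

    ```python
    from typing import Callable, Iterable, TypeVar

    T = TypeVar("T")

    def outcome_score(xs: Iterable[T], satisfied: Callable[[T], bool]) -> int:
        """Count how many X's the true/false 'satisfaction' test accepts."""
        return sum(1 for x in xs if satisfied(x))

    # Any operand works: goals, priorities, happiness, prosperity, ...
    goals = [("finish novel", True), ("learn piano", False), ("run marathon", True)]
    print(outcome_score(goals, lambda g: g[1]))  # 2
    ```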

    More reading! LOL.

  7. […] Note:  I still have not had any takers for the previous challenge to “Disprove This Definition of Morality“. […]

  8. Becoming Gaia,

    “I declare that the (unknown but correct) goal of morality is to satisfy the maximum number of goals for all goal-setting entities (satisfaction as judged by the individual goal-setters themselves). …. I challenge anyone to present a moral issue whose current state of play is not correctly analyzed by extending this goal.”

    Setting aside that I do not know what you mean by “correctly analyzed” (my understanding is that how to correctly determine what is moral remains an unresolved question in moral philosophy), I will venture two contradictions to your definition.

    First, a soldier jumps on a live grenade to protect his friends. The likely outcome is that this soldier dies, but his friends are spared injury. Further, due to the nature of anti-personnel grenades, if this soldier had not jumped on the grenade, everyone would probably have been injured, perhaps seriously, but none killed.

    Jumping on the grenade would be immoral by your definition since the soldier’s death prevents him from fulfilling his goals for the rest of his life without necessarily increasing the ability of his friends to fulfill their goals. But I claim it is a fact that the action of the soldier is morally praiseworthy (though not prudent) and your proposed definition is contradicted.

    Second, Bill Gates starts a company called Microsoft and sells software for PCs that enables people around the world to communicate and cooperate in ways never before possible, and, by that communication and cooperation, more personal goals are fulfilled by goal-setting entities than if Bill Gates had spent his life writing poetry.

    Bill Gates’ business success meets your definition’s requirement for increased goal fulfillment, but I claim it is not morally praiseworthy (though socially admirable), and your definition is contradicted. Bill Gates became morally praiseworthy as a public person when he began to give away his fortune, not when he was making it.

    There is a necessary characteristic of almost all moral intuitions and cultural moral standards that your definition lacks. That characteristic is unselfishness at least in the short term. If you add a necessary requirement of unselfishness at least in the short term, you will be closer, but, in my opinion, still not quite there.

  9. I have read a couple of the new posts and then jumped back to reading from the beginning. Your proposition has an obvious dynamical caveat: it promotes entities setting goals of creating new entities (the more the “better”) which share at least the goal of creating new entities. The evolutionary pressure is on high-output mechanisms for creation of new entities.

    Are you participating in LessWrong? That community is not single-minded, and I think it is the best community around to discuss these issues. Or are you aware of, or attempting to build, a “counter-balancing” community?

    • I agree there is a “universal” sub-sub-goal of “creating new entities that share your goals” (a.k.a. reproduction). The constraint on this is that it invariably takes resources whose consumption blocks others’ access to them.

      It is also not at all clear that “The evolutionary pressure is on high-output mechanisms for creation of new entities.” A short-sighted “logical” view certainly leans that way but reality seems not to. How do you explain the drop in the birth rate in the first world nations (who presumably have the best access to resources)?

      I participate in LessWrong now and again. Unfortunately, the focus of interest is radically different (rationality as opposed to morality) and I frequently find the culture suboptimal for making useful progress.

      • “How do you explain the drop in the birth rate in the first world nations (who presumably have the best access to resources)?”

        I don’t know, actually. There are some “obvious” answers, such as:

        (a) having to care for children is detrimental to the achievement of other goals over a 10-to-20-year horizon

        (b) having children is more difficult due to changes (for the worse) in the global culture

        (c) having children is more difficult due to changes (for the better) in the average awareness of the goal of well-being for one’s children

      • But more to the point, there is not enough traction for the evolutionary (i.e. “Darwinian”) dynamics to kick in. The “obvious” reasons are universal enough (plus there’s the nature/nurture divide) that the implication “I have many siblings” ==> “I want to have many children” does not hold. No “evolutionary traction” = no “evolutionary pressure”.

  10. Oh, and there is the case of low-birth-rate animals that do not invest all that much in their offspring, like the bird Douglas Adams was talking about in one of his speeches. These animals also have a low death rate, so that overall the population is stable; the strategy is argued to have arisen by group selection. (Of course this doesn’t apply to the human case.)

