Posted by: Becoming Gaia | May 27, 2010

The Science of Morality, Part II: Universal Subgoals


In Part I, I made the claim that “It is self-evident that maximizing goal-fulfillment is the proper goal of morality because the consequences of that assumption directly lead to behaviors that mirror the current understanding of moral behavior.”  I should have noted that self-evident does not mean obvious.

Russell Blackford commented, “I actually doubt that there’s a single goal or anything that we can call a ‘proper’ goal”.  My response is: “Well, yes, that is rather the point of what I am saying.  If there were a single proper goal, I suspect that we would have discovered it already.  My point is that the ‘proper’ goal of morality is maximizing all goals, without exception (except those exceptions that necessarily follow from the goal of maximizing).”

But Kedaw’s question, “How can you decide a child’s goals?”, rather surprised me while also providing an excellent lead-in to this part.  My answer is both (huh?) “You should not decide another entity’s goals even if you could” and (the lead-in to this part) “You don’t need to decide them”.

The mere fact that the child has goals (or, in Hume’s terms, desires) is, by itself, sufficient to answer any moral question regarding that child (except, of course, those that specifically relate to the child’s own goals or those of specific other people).  The reason is that there exist a number of universal sub-goals which increase the probability of goal fulfillment for all primary goals (except in the specific circumstances, like lack of time, where pursuing them directly conflicts with the primary goal).  If you wish, you could treat these sub-goals as the “proper” goals of morality, but then you would have the enlarged problem of explaining why each of them is a proper sub-goal AND the more important problem of how to choose or mediate between them when two or more of them conflict (which they frequently do, leading to the most common “moral” dilemmas).

So, without further ado, let me introduce (my current list of) the core “universal” sub-goals:

  • self-preservation
  • goal-preservation
  • goal-evaluation correctness
  • rationality/efficiency
  • self-improvement
  • gain/preserve resources
  • gain/preserve knowledge
  • gain/preserve cooperation
  • gain/preserve freedom

Those things that we consider morally “wrong” all block or violate one or more of the above.  On the other hand, an action that blocks or violates one or more of the above may still be morally “right” if it is the available action that blocks the fewest universal sub-goals (and/or goals) for everyone involved.

Deciding the child’s goals, as in Kedaw’s question, would be morally wrong because it would violate the goal-preservation of the child.  Allowing the child to regularly make itself sick on candy (without some very good “moral” reason) violates the self-preservation of the child (or, at the very least, the child’s efficiency and resources).  Wire-heading and early-life suicide both violate many of these sub-goals for both the subject and everyone related to them.  End-of-life assisted suicide may well violate fewer (or preserve more) of these sub-goals than the alternative.  Abortion you can debate until you’re blue in the face, but it’s going to come down to the specific circumstances of individual cases because there are so many goals on both sides that you’ll never disentangle them under current societal conditions.
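
To make the tallying in the examples above concrete, here is a minimal Python sketch of the rule: score each candidate action by the number of universal sub-goals it blocks, summed over everyone affected, and prefer the action with the lowest score.  The encoding of actions as dictionaries, the equal weighting of every sub-goal, and the particular violation sets in the candy example are illustrative assumptions only; nothing above fixes them.

    # A minimal sketch of "prefer the action that blocks the fewest
    # universal sub-goals for everyone involved". All data below is
    # hypothetical; real cases would need weights, not a bare count.

    UNIVERSAL_SUBGOALS = {
        "self-preservation", "goal-preservation", "goal-evaluation correctness",
        "rationality/efficiency", "self-improvement", "gain/preserve resources",
        "gain/preserve knowledge", "gain/preserve cooperation",
        "gain/preserve freedom",
    }

    def violations(action):
        # Count every universal sub-goal the action blocks, summed
        # across all affected entities.
        return sum(len(blocked & UNIVERSAL_SUBGOALS)
                   for blocked in action["blocked_per_entity"].values())

    def least_wrong(actions):
        # The rule above: the least wrong action blocks the fewest sub-goals.
        return min(actions, key=violations)

    # The candy example, encoded on the assumptions stated in the text.
    allow_candy = {"name": "allow unlimited candy",
                   "blocked_per_entity": {"child": {"self-preservation",
                                                    "rationality/efficiency",
                                                    "gain/preserve resources"}}}
    limit_candy = {"name": "limit candy",
                   "blocked_per_entity": {"child": {"gain/preserve freedom"}}}

    print(least_wrong([allow_candy, limit_candy])["name"])  # -> limit candy

Note that a bare count treats every sub-goal (and every person) as equally weighty; the moment real weights are needed, the mediation problem described earlier reappears.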

It’s important to note that the definition of each word and term in the list is its simplest and most common meaning.  There is very little ambiguity here, and any that remains should be resolved by reference to the goal of maximizing the satisfaction of all goals.

People may wonder at the absence of some cherished sub-goals from this list (fairness, for example).  I believe that this is the minimum set of sub-goals necessary to derive any others.  In addition, many other sub-goals that are perceived as critically important are equally difficult to define (fairness, again) unless one defines them directly by derivation from the “proper” goal and the universal sub-goals.  This will be the subject of several future posts.

Another way of looking at (re-framing) these sub-goals is as inalienable rights.  Life (self-preservation), liberty (gain/preserve freedom), and the pursuit of happiness (all nine) should only be abrogated to prevent a much larger violation (and only when viewed from the largest perspective — such as the one that gives the correct answer of WRONG to the involuntary organ donor question).


Responses

  1. You have an outlook on life that is at odds with some societies, and you have no way of determining why you are right and they are wrong without recourse to your sub-goals, which is a piece of circular reasoning.

    * self-preservation
    Some people are temporarily suicidal and some are genuinely, rationally suicidal. How can we (or they) determine which is which?
    * goal-preservation
    * goal-evaluation correctness
    How is this one judged? Against which metric can this be measured?
    * rationality/efficiency
    * self-improvement
    “I ain’t for none of your fancy book-learnin’.”  Some people are willfully ignorant and have no goal of self-improvement.  Are they wrong?
    * gain/preserve resources
    This one irks me most, as it is a cultural value more than anything.  Many tribes, especially hunter-gatherer ones, have no interest in gaining or preserving resources.
    * gain/preserve knowledge
    I refer you to my last two comments; join them up and see why this one is not certain.
    * gain/preserve cooperation
    * gain/preserve freedom
    Not all people are freedom lovers. Most humans would actually suffer miserably if given total freedom (not me, I’m libertarian-lite). A lot of people like to live under the yoke of an authoritarian oppressor, as long as the oppression is minimal, e.g. most religions.

    I am still confused as to how you get around the child issue.  The child wants candy until it’s sick; you say that comes up against its desire for self-preservation.  Well, who are you to decide, and how can you tell the child hasn’t balanced all its goals and decided the short-term benefits of candy outweigh the later effects of being sick?

    Incidentally, your last paragraph puts you in my boat on the trolley problem: you’d never sacrifice someone’s life to save several others…

  2. I have an issue to bring up similar to keddaw’s.  To be universal, these goals would have to be present in every culture at every time and place in history.  But in several cultures, even the notion of “self” is different from the Western notion.  One of those native tribes (I can’t remember which) reportedly had no word for “I” and practically no notion of the individual, independent of social milieu.  One very clear point here is that the scheme works quite well from an individual’s perspective (you state that the goal is defined by the person), but the universalizability of even “freedom” as a goal can be questioned with counter-examples: in Greek times, “free” meant only “not a slave” and was not the moral concept it is today.

    The task you have set yourself is to show, “in the wild,” so to speak, goals that have demonstrably existed (scientifically speaking) in all human societies.  If any goal can be scientifically proven to occur in humans everywhere, you have a fact.  If it is a specifically moral goal, you have a moral fact.  The problem is, there are societies (for instance) in which head-hunting is a societal goal around which the people coalesce.  It orders their existence, informs their rituals, bonds them, and has nothing to do with war!  It’s a social practice for them the same way bar mitzvah is in Judaism.  Is it morally reprehensible?  Not for them.  Guess what?  According to the study, these people do not express anger in their society; they actually don’t know what it is (as we define it).  Now are they reprehensible?  The point here is not to answer that, but to ask how the algorithm treats it.  How simple is the argument that places the head-hunting value into your frame of goals?  Does it spit out “head-hunting is evil”?  And if it does, and we universally ban head-hunting, what then?  I would bet we would destroy that society by taking away one of its basic organizing principles (such as it is).

    If you can find a way around all that AND it’s a moral science (which means it works every time, is predictable across cultures, etc.), you’ve got something.

    By now you know my own position on this: prescription cannot be made into a science. There are too many possible “moral systems” to allow the specificity we want for a predictable “plug in your needs, spit out your prescription” approach to ethics.

    Nonetheless there probably are facts about ethics that are universal. They just aren’t judgments themselves. I’m waiting for someone to show me a moral judgment necessitated by a physical fact. That would be “ought” from “is.”

    • Just to follow on from GTChristie’s comments, I ‘know’ my personally-defined moral system is better by any number of metrics than anyone else’s moral system, but what it is not better for is getting into heaven.  I don’t believe in heaven, so it doesn’t affect me, but for those who do, it informs each of their moral positions and affects how they want others to act.  It also means it is almost impossible for me to convince them that my moral position is better, because their ‘goal’ is to get to heaven and my ‘better’ morality virtually precludes that.

      Having written that, I’m not sure what it has to do with anything, but I thought I’d get it off my chest anyway 🙂

  3. […] Kedaw and GTChristie have objections based upon the existence of societies/cultures that appear not to […]

  4. I’ve been intrigued by how your “Thou shalt not decide another entity’s goals” fits into your framework, related to my previous comment on creating new persons.  As I start to grasp it, it seems to be at the heart of the sub-goals.  I feel that, for the teleological redemption from our “original sin”, you need to expand the definition of morality to include long-term-ness:

    “to satisfy the maximum number of goals for all goal-setting entities (satisfaction as judged by the individual goal-setters themselves) over the possible world-histories.”

    Oh well, this is so natural that I don’t know why I read “instant goal satisfaction” into your original definition even for a while.  But it doesn’t resolve the way your approach hinges on the subtle philosophical issue of agenthood: does(n’t) your maxim promote a focus on the sustainable proliferation of agents?  Is that what is actually good?  Doesn’t the goal-count adversely increase on an agent-split, or adversely fall on an agent-merge (which is a likely effect of increasing cooperation)?  Or is it in the nature of reality that goal-count is tied to agent-complexity, which cannot be increased by split operations and would not be decreased by merge operations?

    I’ve probably answered myself.

  5. […] Self-interest – If a machine is given a goal (*any* goal), there are a number of “universal” sub-goals that will enhance the probability of that goal (/any goal) being achieved as long as it is not in direct opposition to the desired goal.  Any rational entity who “discovers” these sub-goals will attempt to pursue them as a strategy for advancing their original goal.  Any entity that does not discover and pursue these “universal” sub-goals is likely to be dramatically less effective than an entity who does unless the goal is very short-term and simple.  The longer-term and more complex a goal is, the more important the universal sub-goals become.  The “universal” sub-goals of “Self-preservation” and “Self-improvement” are definitely examples of “Self-interest” that almost always improve the probability of a goal being fulfilled.  Further, the “universal” sub-goals of “Gain/preserve access to resources” and “Gain/preserve freedom” frequently *APPEAR* not only to be “self-interested” but actually *ARE* “selfish” EVEN IF they are pursued solely for the purpose of achieving the goal because they pursue that goal to the exclusion of all else.  Once you admit a goal, efficient pursuit of it is going to require “self-interest”.  A longer, more complete version of this is posted at https://becominggaia.wordpress.com/2010/05/27/the-science-of-morality-part-ii-universal-subgoals/. […]

