Posted by: Becoming Gaia | Jun 29, 2010

Reboot: Defining Morality


My purpose in writing this blog is to build out my ideas on the science of morality and to learn how best to present them.  This being the case, I will periodically go back, evaluate what I perceive to be errors in my approach, and attempt to correct them.  One such error was that I started too far into the middle of the subject without proper grounding, such as a simple definition of what I mean by morality.

= = = = = = = = = = = =

What Is Morality?

Most people have an intuitive understanding of what they believe morality to be — but it is an understanding that has been built from the bottom up from examples of moral and immoral acts, consequences, and feelings.  It is an understanding that has been profoundly influenced by their society, their circumstances, and their genetics.  And it is an understanding that is emotionally charged and only created by the conscious mind as an explanation after the fact.

Everyone understands that morality has to do with what is right and wrong and what one “ought” to do.  But relatively few people have an understanding that is any deeper than their emotive responses, a reasonably small number of simple rules, and some assumptions and rationalizations that were thrown together by their conscious mind to make it all into a seemingly comprehensible and integrated whole.  This means that most debates on morality quickly reach the point of arguing that a given action is right or wrong simply because either a) “it just is” or b) “because G*d says so” — with no way of resolving the issue.

Before we can hope to reconcile radically different moralities or correctly extend our own rules of right and wrong to new and/or difficult circumstances, we must understand more clearly what it is that morality is.  We must understand why it is what it is.  And, eventually, we must understand why we, as individuals, should be moral.  We are coming to an age where our mind children, intelligent machines, will far surpass where we are today.  If we cannot teach them right from wrong and why they should care, we are in a tremendous amount of trouble.

There are two different ways in which to go about defining and understanding morality. The first is to merely describe what it is.  Unfortunately, morality very much resembles the proverbial elephant presented to the blind men.  Currently, morality is clearly culture-dependent and varies tremendously from place to place (and even among people from the same physical location).

The second method is to describe what it does (or what its function is).  This is far more effective because it makes it much easier to explain why morality exists and why it is what it is, and it gives us a yardstick for improvement.  Further, it provides a context for the value judgments of right or wrong and specifies a “goal” which determines the actions that we “ought” to take in order to fulfill it.  Trying to define right and wrong or good and bad (or evil) in the abstract is a hopeless task.  It is only in the context of some task or goal, either a positive achievement or the avoidance of a negative result, that such evaluations can be made.

The very simple functional definition that I will start with is “Morality is that which maximizes the probability of cooperation”.  This definition has several advantages. Since it defines morality not by what it *is* but by what it *does*, it provides a context that makes it possible to start answering questions like those above. It doesn’t make any arguable assumptions like the existence of an absolute truth or an omnipotent being (or the non-existence of either). And, finally, it is entirely in line with the latest expert opinions such as:

Moral systems are interlocking sets of values, virtues, norms, practices, identities, institutions, technologies, and evolved psychological mechanisms that work together to suppress or regulate selfishness and make cooperative social life possible.
— Jonathan Haidt, Handbook of Social Psychology, 5th ed. (2010)

Thus, and quite simply, the science of morality is just the science of maximizing cooperation or working together.  It is no longer a philosophical question because it is grounded in reality.  It is not a calculus because it isn’t and can’t be entirely known — but it should enable an engineering effort to improve our lives and increase the probability of a safe future.   Just as the steam engine is a human invention that is constrained by the universal laws of physics with better and poorer implementations based upon circumstance, so too is ethics constrained by the universal scientific facts of what does and does not work. And there is a lot of improvement that can be made over the situation present today.
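To make “the probability of cooperation” a little more concrete, here is a rough toy sketch in Python (the strategies, the noise level, and the round count are purely illustrative choices of mine rather than part of the definition itself): an iterated prisoner’s dilemma in which the fraction of mutually cooperative rounds is a measurable quantity that different rules of interaction raise or lower.

```python
import random

# Toy illustration: the "probability of cooperation" measured as the fraction
# of rounds in which both agents cooperate in an iterated prisoner's dilemma.
# All strategy names and parameters below are illustrative assumptions.

def tit_for_tat(my_history, their_history):
    # Cooperate first, then copy whatever the partner did last round.
    return their_history[-1] if their_history else "C"

def grim_trigger(my_history, their_history):
    # Cooperate until the partner defects once, then defect forever.
    return "D" if "D" in their_history else "C"

def always_defect(my_history, their_history):
    return "D"

def cooperation_rate(strategy_a, strategy_b, rounds=1000, noise=0.05):
    """Fraction of rounds ending in mutual cooperation, with a small chance
    that an intended move is flipped (modelling mistakes and misunderstanding)."""
    hist_a, hist_b, mutual = [], [], 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        if random.random() < noise:
            move_a = "D" if move_a == "C" else "C"
        if random.random() < noise:
            move_b = "D" if move_b == "C" else "C"
        hist_a.append(move_a)
        hist_b.append(move_b)
        if move_a == "C" and move_b == "C":
            mutual += 1
    return mutual / rounds

if __name__ == "__main__":
    random.seed(0)
    pairings = [
        ("tit-for-tat vs tit-for-tat", tit_for_tat, tit_for_tat),
        ("grim trigger vs grim trigger", grim_trigger, grim_trigger),
        ("tit-for-tat vs always-defect", tit_for_tat, always_defect),
    ]
    for name, a, b in pairings:
        print(f"{name}: cooperation rate = {cooperation_rate(a, b):.2f}")
```

Run as written, the more forgiving rule sustains a noticeably higher share of mutually cooperative rounds than the unforgiving one once mistakes are possible, which is exactly the kind of better-versus-poorer comparison the steam engine analogy invites.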


Responses

  1. I agree almost entirely with your concluding paragraph (and much of the preceding) and I completely approve of the direction you appear to be going with this thinking.

    However, to me it seems your view remains burdened with Ptolemaic epicycles, and we should (in the normative sense of the word) act in the direction of increasing coherence over an increasing context of meaning-making, progressing in a manner somewhat like Ptolemy -> Copernicus -> Kepler -> Einstein -> Unknown…

    Despite critically limited bandwidth in this forum, I’ll attempt to highlight a few areas for not merely refinement, but refinement over an increasing context. [You’ll notice that I mention “context” repeatedly; it is crucial to the growth beyond the reductionist paradigm that I see as necessary to our proximal progress.]

    (1) The topic at hand is not so much morality, on which, as you point out, everyone has a point of view, but metaethics, a theory of how we come to moral agreement (now and future.)

    (2) A robust metaethical theory must apply not only to human persons, but to agents of all forms and environments of interaction.

    (3) Your statement “Everyone understands that morality has to do with what is right and wrong…” is somewhat jarring to me. Yes, most would agree with this, but I would rephrase it somewhat differently, in order to eliminate a few of those “epicycles”: Everyone has a sense of “morality”, the relative rightness of possible actions, distinguished on a scale between proximate good and bigger-picture right-in-principle. [As you point out earlier, the basis of such a distinction is a complex and only partially perceived result of our physical nature (due to our biological inheritance and our cultural experience) and our particular environment of interaction.]

    (4) You say “…eventually, we must understand why we, as individuals, should be moral” and you seem to assert a hard “should” here, whereas I would suggest the more nuanced view that this “should” emerges not from any particular morality to be discovered, but from the logical imperative to align our intentions with “what works.” [And since we are the evolutionary result of a long chain of “what works” embedded agents within, rather than assessing from without, we naturally avoid the fallacy of deriving ought from is, as well as the Naturalistic Fallacy.]

    (5) You say “The first is to merely describe what it is” and you refer to the proverbial blind men and the elephant. I agree that our views of “morality” suffer from the limitations of individual perception and perspective, but you seem to imply that there could be an objective view of this elephant, while I would argue that the only workable approach is to improve our science of combining these various observations, which are not only different on their face but different in their depth, thus requiring a hierarchical approach to the combining.

    (6) You mention “goals” which I would suggest are another “epicycle” inherent to our (especially Western) way of thinking, and at the root of much fruitless philosophizing in not only ethics but also the field of artificial agents. Goal-achieving is always only a special case of values-promoting, legitimate to the extent the future context can be effectively specified. Of course, for any agent operating within a complex, changing environment of inherent uncertainty (applicable to pretty much all questions of moral interest) we act to null the difference between our perceived environment and our present-but-evolving values, lather, rinse, repeat, with “wisdom” corresponding roughly to the depth and coherence of our models of our selves and our environment of interaction.

    (7) You move on to an operational approach to “morality” of which I approve. You say it essentially comes down to “cooperation” which is not so much wrong as it is incomplete. It’s easy to come up with examples of cooperation that we would agree are evil, and I would suggest that the difference is related to an entropic arrow: Actions which promote increasing cooperation over INCREASING context of meaning-making will be seen as increasingly “right.” Actions which promote increasing cooperation over DECREASING context of meaning-making (such as with various cults, religions, Luddites, terrorist organizations) will be seen as increasingly evil.

    I would then suggest that it’s not so much about “cooperation” as it is about synergy, going all the way back to the cosmic tendency to maximize free energy rate density.

    In short, actions will be increasingly assessed as increasingly right or “moral” to the extent they are assessed as promoting an increasing context of increasingly coherent, hierarchical but fine-grained, present but evolving values via instrumental methods assessed as increasingly effective, in principle, over increasing scope of interaction.

    Unfortunately, as I said, we lack the necessary bandwidth for a proper unpacking of this metaethical Arrow of Morality.

    • >> your view remains burdened with Ptolemaic epicycles

      😀 I like that (and will have to remember it for future use).

      If you could help me remove them, I’d be greatly appreciative.

      >> we should (in the normative sense of the word) act in the direction of increasing coherence over an increasing context of meaning-making

      I can parse that and vaguely see where you think you’re going but I think that you need to unpack meaning-making. I also prefer the term integrity over coherence unless you have a reason why you believe that the latter is better. Maybe if you could ground this statement a bit more?

      >> growth beyond the reductionist paradigm

      is necessary but care must be taken not to lose people before we even start.

      • Re: “increasing coherence over increasing context of meaning-making”

        Note that I’m speaking in terms of a system that expresses behavior “meaningful” within the context of its environment of evolutionary adaptation, because its values (not to be confused with preferences) are shaped by the regularities of its (ancestors’) interaction with that environment. Put a fish on land, or a human in vacuum, and there will be little that is meaningful about its behavior.

        By “coherence” I mean the strength of association (mutual information; the standard formula is sketched at the end of this comment) between the system’s (hierarchically organized) components and its interactions with its environment. Incoherence has a cost, and natural selection will tend to correct elements of the structure to the extent they do not contribute (synergistically) to its integrity within a competitive (or even merely dissipative) environment.

        Note that in all cases, what persists is what tends to work; meaning follows.
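
        For anyone who wants the textbook definition behind that use of “mutual information” (standard information theory, nothing specific to the metaethics being sketched here): writing the system’s state as X and the state of its environment of interaction as Y,

        I(X;Y) = \sum_{x,y} p(x,y) \log \frac{p(x,y)}{p(x)\, p(y)}

        This quantity is zero when the system’s state tells you nothing about its environment and grows as the association (the “coherence” above) strengthens.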

      • How about congruence instead of coherence?

        congruence = the quality or state of agreeing or corresponding.

        coherence = logical interconnection; overall sense or understandability.

        Most people will “get” congruence much faster than coherence.

        Is a fish out of water incongruous or incoherent? (Is a fish in water coherent? Am I coherent? ;-))

        = = = = = = = = = =

        You also seem to be saying:

        coherence/congruence = what works = morality
        meaning = knowledge of coherence/congruence/what works

        I have no disagreement with this EXCEPT that it expands the meaning of morality far beyond the point where it is useful.

      • Oh. I forgot

        context = cluster of coherence.

    • >> (1) The topic at hand is not so much morality, on which, as you point out, everyone has a point of view, but metaethics, a theory of how we come to moral agreement (now and future.)

      I disagree. Metaethics is merely the engineering of morality and as such, it is a subtopic of the topic of morality. Furthermore, if we attempt such engineering without fully understanding and agreeing upon the fundamentals of what we are engineering (i.e. morality), we are doomed to spectacular, likely bloody failure.

      >> (2) A robust metaethical theory must apply not only to human persons, but to agents of all forms and environments of interaction.

      Absolutely. That is where I personally started from (intelligent machines) and where I am eventually headed back to, once sufficient groundwork is laid.

      • You say “Metaethics is merely the engineering of morality…” This relates to my point about goal-achieving being only a special case of values-promoting.

        Engineering is about the application of known methods to produce specified outcomes. For over twenty years I managed Technical Support and Field Service for leading-edge scientific instruments, and let me tell you, there’s a night/day difference between the engineering of the product and getting/maintaining/recovering customer satisfaction, and the crucial difference is the ability to effectively specify the context of the problem.

        We can agree at the metaethical level that increasing “morality” entails increasing cooperation (or something more fundamental that drives intentional agents toward cooperation), but we’ll never be able to nail down the specifics of that “morality” because within an increasingly complex world we will be challenged to discover and develop increasingly complex solutions at all levels. It’s an approach, not a target. We can never guarantee success; we can only become better at avoiding failure.

      • “Go West, Young Man” is a direction and a goal without a specific target. “Get better” is a goal without a specific target. Methinks the philosopher doth protest too much.

        Arguments like this could be called “wrong” in a moral sense since they normally lead away from congruence (i.e. they don’t work too well in terms of achieving your desired ends/goals) ;-).

    • >> (4) You say “…eventually, we must understand why we, as individuals, should be moral” and you seem to assert a hard “should” here, whereas I would suggest the more nuanced view that this “should” emerges not from any particular morality to be discovered, but from the logical imperative to align our intentions with “what works.” [And since we are the evolutionary result of a long chain of “what works” embedded agents within, rather than assessing from without, we naturally avoid the fallacy of deriving ought from is, as well as the Naturalistic Fallacy.]

      Somehow I need to convey that “the logical imperative to align our intentions with *what works*” is exactly what the moral “should” is.

      I also contend (as Hume intended) that there is no fallacy in deriving ought from is, so long as a desire is present. If a desire (or goal) is present, what you *ought* to do *is* what is most likely to lead to the fulfillment of that desire.

      The concept that morality is a “thing apart” is exactly the same error as Dualism. Your believing that my “hard *should*” (which apparently you believe is morality-based) is different from your logical imperative is an indication that you haven’t internalized my point that they are *exactly the same thing*. Morality is no more than what works — limited to the sphere of cooperation.

      • We mostly agree here, and the difference I tried to raise is subtle.

        You say that there is no fallacy in deriving ought from is as long as there is desire present. I said that there is no fallacy in deriving ought from is since we are agents embedded within an environment that, to a significant extent, shaped our values. The difference between your statement and mine is that I was speaking of adaptive systems’ values and you were speaking of desires (which requires a desirer.)

        You say “Morality is no more than what works…” and I’m trying to point out that “what works” (within the open-ended domain of human affairs) can only be approached but never specified.

      • Hmmm. Are you arguing against a desirer?

        Whether you want to consider desire an emergent property or a useful illusion, I don’t think that it is useful to deny it.

        Perfection can only be approached. What works is a giant swath that can be easily maintained — and constantly improved.

        Again, “approached but never specified/satisfied” is a “bad” argument since it generally defeats your purpose.

      • Also, when you say “we naturally avoid the fallacy of deriving ought from is”, I generally interpret such a statement as meaning “deriving ought from is is a fallacy”.

        I’d also be curious what you see as the distinction between values and desires?

        From dictionary.com:
        Value
        11. Ethics. any object or quality desirable as a means or as an end in itself.

    • >> you seem to imply that there could be an objective view of this elephant

      If you’re implying that there is no objective view of an object like an ordinary elephant, then I strongly disagree with such a refusal to ground in the real world.

      If, however, you believe that there is an objective view of an elephant, then let me out-and-out state that there is an objective view of morality. Its appearance varies according to circumstances but it is always “that which maximizes the probability of cooperation” (by *my* definition ;-))

      • If you’re familiar with the concept of the umwelt, then you might agree with me that most certainly there can be no such thing as an objective view of an “elephant.” Just because you and I share 99.99999… percent of our perception and cognition in common and would agree in virtually all cases about what is and what is not an elephant does not make it objective.

        Likewise, there can be no objective morality, but to the extent that agents share values in common, within a meaningful environment of interaction, then most certainly they can traverse the tree of subjective “truth” down its increasingly probable branches toward a root grounded in the mists of “reality” and somewhere along the way they will find agreement. If not on the value of Britney Spears, then perhaps on the value of music, or entertainment, or the electronics that supports its dissemination, or on the physics that supports electronic technology, or…

        However, if one were dealing with, say, an agent whose meaningful environment of interaction is a virtual library in cyberspace, for whom primate values including even breathing and gravity are merely remote abstractions, then one would have to go a long way to find a branch of common values.

      • Now you’re trying to be difficult . . . . 😉

        Topmost branch of common values -> cooperation is good

        (And thank you for not insisting on the umwelt view. I hate people who believe that their refusal to allow any sort of consensual grounding is a valid argument against anything that they care to disagree with.)

    • >> Goal-achieving is always only a special case of values-promoting, legitimate to the extent the future context can be effectively specified.

      Could you give another case of values-promoting to clarify what you mean?

      >> Of course, for any agent operating within a complex, changing environment of inherent uncertainty (applicable to pretty much all questions of moral interest) we act to null the difference between our perceived environment and our present-but-evolving values, lather, rinse, repeat, with “wisdom” corresponding roughly to the depth and coherence of our models of our selves and our environment of interaction.

      because here you seem to backtrack and say that the “special” case is the general case.

      • You said “Could you give another case of values-promoting to clarify what you mean?”

        Consider the case of a tribe living on one side of a huge chasm, with a similar tribe on the other side. Each side values cultural and technological exchange. What to do? The values are a given–they always are. But a goal is legitimate only to the extent that the future context can be specified.

        The goal is obviously a cable-suspension bridge, you say? But you would be forgiven for not knowing that the other tribe are incorrigible cannibals, or perhaps your side carries a devastatingly fatal disease, or…? The challenges ahead of us will be far more complex, and unpredictable.

        [So after numerous failures, and many deaths, the tribes developed 3D telepresence and greatly enjoyed and profited from their virtual meetings.]

        I don’t see why you would say I seemed to backtrack.

      • A goal is a goal. There is no “legitimacy” involved in goals.

        An entity may pursue a goal that may turn out to have tremendously unhappy effects but that does not reduce its legitimacy as a goal.

        Further, the goal that was specified was cultural and technological exchange. The cable-suspension bridge was an approach. I’ll even buy the term sub-goal *but* the bridge was NOT the ultimate goal nor properly desired for its own sake.

    • >> I would then suggest that it’s not so much about “cooperation” as it is about synergy

      And I would then argue that cooperation is the best method of increasing synergy and the most accessible point to do so. 😀

      Remember that you need to play to an audience here or all your efforts will go for naught. 😉

  2. >> Everyone has a sense of “morality”, the relative rightness of possible actions, distinguished on a scale between proximate good and bigger-picture right-in-principle.

    OMG 😀 Couldn’t you just have said that your scale runs from “what is good for *me* right now” (proximate good) to “what is good for the world as a whole” (bigger-picture right-in-principle — except that right-in-principle can be wrong in a specific instance)? Or how about a scale from short-sighted selfishness to near-omniscient charity.

    I do need to revise that statement though. It is ugly.

  3. >> You say it essentially comes down to “cooperation” which is not so much wrong as it is incomplete. It’s easy to come up with examples of cooperation that we would agree are evil

    Only because they are instances where the effect of a short-term/small-scale cooperation reduces the total long-term/large-scale cooperation.

    >> Actions which promote increasing cooperation over INCREASING context of meaning-making will be seen as increasingly “right.” Actions which promote increasing cooperation over DECREASING context of meaning-making (such as with various cults, religions, Luddites, terrorist organizations) will be seen as increasingly evil.

    I understand what you mean by “context of meaning-making” but would suggest that the term is *SO* abstruse and forward-thinking as to render your communications virtually unintelligible to the vast majority of the audience you would like to reach. Why not use a standard term like “Singer’s circles of morality”?

    • Laconic I am, and therefore abstruse.

      Because context can not be conveyed, but must be constructed, I am painfully aware that the thinking I would most like to test and refine is extremely difficult to share. I have to earn a living and metaethical philosophizing is only a hobby (although I can’t imagine a more important one for our times). So it seems I must be content for now with scattering a few seeds of thought, that they might grow within a few fertile minds.

      • I have to earn a living as well — but I am using this blog to attempt to construct the concepts that I am trying to convey.

        Arguably, it is unlikely that your thinking is anywhere near as coherent and complete as it would be if you were forced to define it well enough to convey it.

        I’m forcing myself to learn to be less laconic. Why be content with your current lot? Why not take up the challenge and become a guest blogger?

  4. “I understand what you mean by “context of meaning-making” but would suggest that the term is *SO* abstruse and forward-thinking as to render your communications virtually unintelligible to the vast majority of the audience you would like to reach. Why not use a standard term like “Singer’s circles of morality”?”

    His expanding circle has the agent in the center acting as an individual, in accordance with its perception of an increasing space of values recognized as shared with others.

    My thinking is more general, systematic and evolutionary in that the expanding context of meaning-making effectively IS the agent.

    The increased context can consist of Person-A plus its observations of the consequences of its previous actions, or just as naturally of Person-A along with Persons B, C, D, E & F sharing observations and acting as a team. In any case, acting to promote present but evolving values, the agent system will tend toward increasing coherence due to selection for “what works” at whatever scale(s) of agency.

  5. >> My thinking is more general, systematic and evolutionary in that the expanding context of meaning-making effectively IS the agent.

    Yes. But I would AGAIN “suggest that the term is *SO* abstruse and forward-thinking as to render your communications virtually unintelligible to the vast majority of the audience you would like to reach.”

    And you could convey the same understanding with Singer’s expanding circles by pointing out that each level (family, tribe, nation, etc.) can be considered an agent/entity in itself.

    Reframing like this has the dual advantages of both making your thoughts more comprehensible to others and forcing you to think more about them (and possibly learn more about them as you suddenly have an analogy to extend).

  6. Is it a mildly normative ethics statement (a “golden commandment”), or a metaethics ground-making for a deconstructionist descriptive ethics program?

    The question is slightly rhetorical — I have a general feeling of what you might be after; I don’t need to be convinced, just playing devil’s advocate. I think your earlier posts point more towards a “conciliatory normative” stance, and then your metaethics task is to derive that from an ought-is bridging assumption involving the concept of goals. But then your current post invokes metaethical rhetoric that smells like “proof by assumption of thesis”.

    I’ll read the above discussion later, so you can point me to it if appropriate.

