Posted by: Becoming Gaia | Jun 1, 2010

The “TRUE” Kantian Categorical Imperative (The Science of Morality, Part IV)

In his Groundwork of the Metaphysics of Morals, Immanuel Kant claimed that morality can be summed up in one ultimate commandment of reason (imperative) from which all duties and obligations derive. He defined an imperative as any proposition that declares a certain action (or inaction) to be necessary; a hypothetical imperative as one which compels action under given circumstances/hypotheses (e.g., if I wish to quench my thirst, I must drink something); and a categorical imperative as an absolute, unconditional requirement that asserts its authority in all circumstances, both required and justified as an end in itself. It is best known in its first formulation: “Act only according to that maxim whereby you can at the same time will that it should become a universal law.”

Perusing the list of “universal” sub-goals, one of them immediately leaps out as suitable to be that one ultimate commandment (it is also the one whose negation this blog is most criticized for not arguing against more strenuously). “Cooperate!”, in the sense of “Maximize your participation in cooperation”, is a maxim that should become a universal law. It correctly handles the edge case that an entity shouldn’t be expected to participate in its own demise, and avoiding its opposite, selfishness, is what a number of comment authors appear to be basing their morality upon. Further, since maximizing cooperation is itself a goal, all of the other “universal” sub-goals immediately come back into play.

This blog started this series focused upon the goals of “fulfilling everyone’s goals” and “fulfilling my goals” because they answer the questions of “Why should I be moral?” and “Why should I cooperate?”  I would greatly appreciate any comments on whether (and if so, how) it would be more effective to start with cooperation instead.  I have started with cooperation in other venues but quickly found that the other “universal” sub-goals are still needed to solve many moral conundrums and that I was not given enough rope (time and attention) to delve into and prove the basis for the “universal” sub-goals after revealing the “key” to be cooperation.

I have not focused on selfishness because one of its opposites (self-sacrifice) is frequently also the opposite of cooperation.  Also, many people (such as Kedaw and Sloan below) lump self-interest in with selfishness as something that negates morality.

This blog contends that a) morality is based upon the goal “Maximize goal fulfillment for everybody”, b) self-interest is based upon the goal “Maximize goal fulfillment for me”, and c) selfishness is furthering self-interest at the expense of morality.  Kedaw has the slightly different take that morality is limited to “acts that deserve praise” — meaning acts that further the goal of fulfillment for everybody (cooperation) at the expense of self-interest.  I believe that this is incorrect because it discourages self-interest in cases where it is more than justified and may induce individuals to self-sacrifice inappropriately even when the results are lessened cooperation and goal fulfillment for everyone (e.g. those who exert control through guilt after “self-sacrifice” or those who get off on self-negation).  Further, I would expect that saying that morality is restricted to acts that “are not self-evidently in your self-interest” will quickly lead to hiding one’s own self-interest, which will lead first to less knowledge about motivations, then to less information about trade-offs, and finally to fewer goals fulfilled for everyone.

Mark Sloan has similar views on acts that are “morally praiseworthy.”  He cites the example of a soldier throwing himself on a grenade, killing himself but sparing his friends from injury (but not death).  He acknowledges that this action eliminated any chance of the soldier fulfilling any more goals, while his friends wouldn’t necessarily have been injured badly enough to cause any goal-fulfillment problems (he even acknowledges that the action was “not prudent”), but he still believes that it was morally praiseworthy.

My question is “Why would any action that decreases goal fulfillment be worthy of any praise?”  If the soldier believed that he was saving others’ lives and preventing injuries at the cost of his own life (i.e. causing a net increase of goal fulfillment) but was mistaken, then I would call his intentions moral and praiseworthy (but the results of his action remain imprudent and effectively immoral).

I could agree with the distinction of “morally praiseworthy” if it meant furthering morality at the expense of self-interest, provided that morality actually is furthered (unlike in the soldier example).  Sloan’s second example, in which Bill Gates was only “socially admirable” while heading Microsoft and did not achieve the status of “morally praiseworthy” until he started to give his fortune away, is fine as long as “socially admirable” includes “moral” in its connotations.  (As a side note: Steven Pinker began an excellent article on The Moral Instinct – which I highly recommend – by asking “Which of the following people would you say is the most admirable: Mother Teresa, Bill Gates or Norman Borlaug?”)

Tomorrow I will continue on the subject of cooperation with an explanation of The Distinction Between Enlightened Self-Interest & Morality and several of my answers to my last challenge When Is It Rational To Be “Immoral” (so leave your answers in the comments before then).



  1. I would greatly appreciate any comments on whether (and if so, how) it would be more effective to start with cooperation instead.

    That is actually promising. Scientific experiments have begun to show that the cultural part(s) of the brain, the sites most associated with cognitive (linguistic) thought and high-order executive (planning) functions, are the sites most active when presented with moral dilemmas such as the trolley problem (roughly, “should I sacrifice X to save Y?”). The research is beginning to point to a (yet unproven) conclusion: though “morality” appears to be culturally constructed (i.e., more nurture than nature), it is “debated” in an executive part of the brain. The responses this part of the brain produces (such as hesitancy to harm another human) point to “cooperation” as a primary function (which could also be called either a “motivation” or a “goal” if viewed just so). Viewed evolutionarily, the cultural part of the brain is THE part no other animal has — it is our primary adaptation. Given that, it may be argued, if argued carefully enough, that we are evolved for cooperation. The separation of the species H. sapiens from other apes and even older hominids may consist exactly in this singular and exceptional cultural form of cooperation, which far transcends the “social behavior” of our closest relatives. If we can be said to be evolved for cooperation (by careful argument from physical facts about the brain), then “cooperate!” very well might constitute a natural (albeit culturally programmable) categorical imperative, contradiction of which would require “moral reasoning” to resolve.

    And there is the outline of a naturalist and simultaneously cultural view of ethics.

    This is the position I am working on in my moral theory. My cultural theory of ethics has always been able to explain much of our cultural variation in moral systems, but it could not explain why all cultures, in all times and places, have ethics — until this latest neuropsych research came into view.

    Now whether that can be taken as a starting point and overlaid with a conventional ethics such as your hierarchy of values remains to be seen. But it is a possibility.

    I would say “go there.” Even if it’s wrong, it’s right — that’s science!

  2. P.S. That’s philosophy! too.

  3. I fully intend to go there. 😉

    If you’re interested and you drop me a message with your e-mail, I’ll send you a couple of papers that I’ve presented that show that I actually started there (but now I’m trying it from the other side).

  4. Have you read Nonzero by Robert Wright? He has a website supporting the book.

    Until that book, I resisted every effort to “reduce” ethics to “game theory” (an idea once supported by Carl Sagan).

    Wright does not reduce ethics to game theory, but demonstrates how a concept from game theory does reveal some interesting approaches to human ethics — primarily because it reveals how evolution is a natural example of a “survival game” and our human evolution has advanced this game by selecting for cooperation. As I read it, this is explanatory of many features of ethics (such as the illusion of a “moral sense”) without presupposing any particular ethics. That non-supposition is necessary for any account of moral systems because empirically, specific moral systems differ over time, place and culture and any pre-supposition would contradict the empirical fact of variation.

    In meta-ethics it’s very tricky to maintain compatibility with known science without turning science into the arbiter of all morality (which most cultures would question). Wright’s approach is scientifically sound without being culturally abrasive or arrogating “moral authority” to science.

    If you were to point out one truly factual empirical observation about ethics — a properly meta-ethical fact that does not presume any moral content, but informs what ethics is — what would it be?

    I do not publish my email address, but your comment box requires me to enter it, so I would imagine you can see that. Send your papers.

    You will find, over time, that although I personally consider philosophy far beyond science in its reach and value and thus tend to defend it from those who mistreat it (especially by various means of disrespect), my own criticisms of meta-ethical philosophy are quite radical — so radical that everyone I know of, from approximately Hammurabi and Akhnaton onward, must give up a little bit of flesh to agree with it. But resistance is futile. LOL.

    • Yes, I have read (and have recommended to others 😉) Nonzero.

      Hmmm. The “illusion” of a moral sense? There’s pretty clear empirical evidence of a moral “sense”.

      A truly factual empirical observation about ethics that does not presume any moral content . . . . hmmm . . . . OK, I can come up with three but I suspect that you’ll claim that at least one is moral content.
      1. ethics is all about goals
      2. ethics is all about cooperation
      3. ethics is all about selfishness

      You’re welcome to your pound of flesh . . . . as long as you can earn it via constructive criticism (and it’s worth noting that *I* generally don’t resist . . . . 😉

  5. Becoming Gaia,

    I look forward to your discussion of cooperation as a moral act. I hope you will distinguish between the morality of cooperation initiated and maintained by commerce, which I argue is morally neutral, and the morality of cooperation initiated and maintained by unselfish behaviors (unselfish at least in the short term), which I argue describes the basis of virtually all moral intuitions and cultural moral standards.

    I do not understand what kind of moral statements you are proposing when you previously said:

    “2) It is self-evident that maximizing goal-fulfillment is the proper goal of morality because the consequences of that assumption directly lead to behaviors that mirror the current understanding of moral behavior. 3) It is simple to aggregate goal-fulfillment over different individuals.”

    Whose “current” understanding? It seems to mirror only what I would call liberal, secular, Western cultural morality (which I share), but I would not call this the “current” moral understanding of a significant number of people.

    Statement 2) is not at all self-evident to me even after reading your variation of Sam Harris’ argument.

    Is this a claim to have uncovered the underlying principles about what moral behavior ‘is’ as a matter of science? If so, what is the science that supports that claim?

    Second, it is NOT simple “to aggregate goal-fulfillment over different individuals”. A major problem in moral philosophy is finding a source of justificatory force for an individual to accept the burdens of acting morally when they expect doing so will be against their enlightened self-interest. Certainly people can commit to such a definition based on their expectation that doing so is likely to be, on average, in their enlightened self-interest. But why should they accept its burdens in cases when they have reason to expect, or even feel certain, that in a particular case doing so will not be in their enlightened self-interest?

    • Suppose that I give half of my (non-existent) fortune to charity. Am I being moral?

      Suppose that I gave it because my brain says it’s the right thing to do. Am I being moral?

      Suppose that I gave it because I get a warm fuzzy feeling and feel awesome about myself. Am I being moral?

      Suppose that I gave it solely because it will earn the respect of my friends and shower me with a bunch of other benefits. Am I suddenly not being moral?

      From the outside, how do you distinguish between these cases?

      Are you even sure that you can distinguish them from the inside (if you think you can, then look up Trivers)?

      • It doesn’t matter if we cannot always distinguish whether an act is unselfish (at least in the short term, while also increasing the benefits of cooperation in the group) and therefore cannot conclude whether the act is moral. There is no logical problem with the morality of a specific act being indeterminate.

        Also, it does not matter if the agent in question is even capable of understanding what unselfishness is. Based on the behavior of other social mammals, our pre-cultural but social ancestors were almost certainly motivated by their biologically determined moral intuitions to cooperate in groups to obtain benefits like group protection, increased efficiency in hunting and child care, and the ability to defend food resources and territory.

      • If the agent in question is not capable of understanding unselfishness, is it capable of understanding morality? “Biologically determined moral intuitions” (what I and others normally call by the shorter name, the “moral sense”) operate by making the “correct” actions feel good and “incorrect” actions feel bad. They evolved according to the correlation between the individual action and survival. Since cooperation is majorly pro-survival, actions that promoted it were majorly selected for. There are, however, BDMIs, like the contact principle, which are clearly *not* moral upon analysis. The point of a “science of morality” is not to exactly replicate our intuitions but to create a rigorous framework (model) that hopefully will diverge from our intuitions where doing so will serve us “better”.

    • Whose current understanding? The rational scientific one.

      Religious peoples’ moral thinking is driven by the goals of authority (doing G*d’s will) and purity.

      Conservative moral thinking is driven by the goals of conserving/preserving and authority.

      Look at Haidt’s and Hauser’s work. There are lots of different moral systems but they are all driven by circumstances and goals.

      • I am fine with the current understanding referring to the rational scientific one. However, the rational scientific understanding seems to me to be demonstrably very different from what you are proposing.

      • MS > However, the rational scientific understanding seems to me to be demonstrably very different from what you are proposing.

        Could you draw me a few distinctions? I don’t see this as being the case.

    • I don’t understand your argument that it is NOT simple “to aggregate goal-fulfillment over different individuals”.

      Have each individual write down their goals and allocate importance to each. Normalize each individual’s importance weights to sum to 100%.

      Have each individual indicate percentage fulfillment for each goal. The sum over an individual’s goals of percentage fulfillment times importance gives that individual’s normalized total goal fulfillment.

      How is a normalized number not simple to aggregate over different individuals?

      You seemed to diverge into some sort of moral discussion. This is just a very simple, clear algorithm.
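The algorithm described above can be sketched in a few lines. This is a hypothetical illustration only; the function names and sample numbers are mine, not from the discussion:

```python
# Sketch of the aggregation algorithm described above (illustrative only).
# Each individual lists goals as (importance, fulfillment) pairs; importance
# weights are normalized so they sum to 100%, then each goal's fulfillment
# (a fraction 0..1) is weighted by its normalized importance and summed.

def normalized_fulfillment(goals):
    """Return one individual's normalized total goal fulfillment (0..1)."""
    total_importance = sum(imp for imp, _ in goals)
    return sum((imp / total_importance) * ful for imp, ful in goals)

def aggregate(individuals):
    """Average the normalized scores over all individuals."""
    scores = [normalized_fulfillment(goals) for goals in individuals]
    return sum(scores) / len(scores)

# Hypothetical sample data: goals as (importance, fulfillment) pairs.
alice = [(60, 0.5), (40, 1.0)]  # normalized score: 0.6*0.5 + 0.4*1.0 = 0.70
bob = [(10, 0.0), (90, 0.9)]    # normalized score: 0.1*0.0 + 0.9*0.9 = 0.81
print(aggregate([alice, bob]))
```

Because each individual’s score is normalized to the same 0–1 scale before averaging, no one individual’s raw importance numbers can dominate the aggregate, which is the point of the normalization step.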

      • The problem is first to “allocate importance to each goal” (how, by majority rule?) and second how to balance satisfying a relatively unimportant goal for many people at the cost of a very important goal (like staying alive) for a few, or just one person. This is an old problem in moral philosophy. If you can convince people you can actually solve it using rational arguments, then many people will think you have done a great thing.

      • “Allocate importance to each goal” is done by each individual for themselves.

        There are two principles involved in balancing the satisfaction of a relatively unimportant goal for many people at the cost of a very important goal (like staying alive) for a few, or just one, person. The first is the solution to the involuntary organ donor problem — you realize that the total sum of negative consequences is greater when you allow people to be snatched off the street and sacrificed (everyone must then live in fear). The second is to invoke Rawls’ “veil of ignorance” (i.e. you don’t know which side of the moral argument you personally are on, so you act in such a manner as would be acceptable to you in either case).

  6. Becoming Gaia,

    A reason you, or anyone, should start with cooperation instead is as follows: A specific kind of cooperation, cooperation initiated and maintained by unselfishness, is what science for the last 40 years or so has been indicating moral behavior ‘is’. This science prominently includes game theory which shows that virtually all cultural moral standards, even contradictory ones, can be classified as one of the known, and fully mathematically defined, strategies for increasing the benefits of cooperation within a group.

    That is, virtually all moral intuitions and cultural moral standards are heuristics for motivating unselfishness for increasing the benefits of cooperation within a group (family, band, tribe, religion, race, nation, all intelligent beings, or even all beings that can feel pain). This is a statement that can be shown to be provisionally ‘true’, ‘false’, or indeterminate as a matter of science.

    • Mark Sloan said: A specific kind of cooperation, cooperation initiated and maintained by unselfishness, is what science for the last 40 years or so has been indicating moral behavior ‘is’. This science prominently includes game theory which shows that virtually all cultural moral standards, even contradictory ones, can be classified as one of the known, and fully mathematically defined, strategies for increasing the benefits of cooperation within a group.

      My understanding is that game theory, starting with Axelrod, says that cooperation is a strategy of self-interest. Could you please cite some references from game theory that insist that cooperation must be “initiated and maintained by unselfishness”?

      I also wish that more people had your view that cooperation “is what science for the last 40 years or so has been indicating moral behavior ‘is’.” That’s certainly my view as well — but I find it a very hard sell to most audiences (thus this blog).

  7. Axelrod’s book The Evolution of Cooperation and the work in the literature since then are clear that the kind of cooperation of interest in game theory requires two critical elements: 1) sacrificing one’s self-interest in the short term in order to gain the benefits of cooperation, and 2) punishing poor cooperators.

    For a more modern reference see Nowak, Martin A. (2006). Five Rules for the Evolution of Cooperation. Science, 314: 1560–1563. It is an excellent paper. Last time I checked, Nowak had a PDF version posted on his website.

    Nowak is head of the Program for Evolutionary Dynamics at Harvard University and well respected.
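Both critical elements (short-term sacrifice to enable cooperation, and punishment of poor cooperators) are visible in the tit-for-tat strategy that won Axelrod’s iterated prisoner’s dilemma tournaments. A minimal sketch, using the standard tournament payoffs (the function names and round count are illustrative):

```python
# Iterated prisoner's dilemma with the standard Axelrod payoffs:
# mutual cooperation pays 3 each, mutual defection 1 each, and a
# defector exploiting a cooperator gets 5 while the cooperator gets 0.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(my_history, their_history):
    # Cooperate first (a short-term sacrifice), then mirror the opponent's
    # last move, which punishes poor cooperators by defecting right back.
    return their_history[-1] if their_history else 'C'

def always_defect(my_history, their_history):
    return 'D'  # a "poor cooperator"

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strategy_a(hist_a, hist_b)
        b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(a, b)]
        hist_a.append(a)
        hist_b.append(b)
        score_a += pay_a
        score_b += pay_b
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): sustained cooperation
print(play(tit_for_tat, always_defect))  # (9, 14): exploited once, then punished
```

Tit-for-tat’s success in these tournaments is the usual basis for the claim that cooperation can emerge from self-interest; whether its opening cooperative move counts as “unselfishness” is exactly the point under dispute in this thread.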

    • Thank you for the reference. Good references are always *greatly* appreciated.

  8. I think it is OK to start from goals: people who don’t consider goals primary, I believe, also don’t consider cooperation primary (since it is a relation); it’s a chicken-and-egg thing. You just need to call goals “affirmation of life”; then cooperation becomes love and you’ve just won over the Christians. (“Look, we are not praising all goals, we are only praising goals that affirm life.”)

    I’m pondering your notion of “(im)moral intent”. Kant is, I guess, a deontologist, as in “only the intentions can be good or bad”, and you seem to be a consequentialist, as in “only the actions are good or bad, judging only by their fruits” — but not to the bone.
