Posted by: Becoming Gaia | Jun 7, 2010

The Distinction Between Enlightened Self-Interest & Morality


My second challenge invited takers to present a detailed and specific scenario where it is rational to be immoral (i.e., where enlightened self-interest diverges from morality, or where selfishness is not ultimately stupidity).  Since no one took me up on it, I’ll just have to do it myself.  😉

Under our goal-based approach to morality: self-interest is pursuing your own goals, morality is forwarding the goals of the community, and selfishness is pursuing your own goals to the detriment of the community’s.  Enlightened self-interest, then, is realizing that:

  1. Visibly acting to the detriment of the community directly provokes community behavior that is detrimental to your own goals.
  2. In the vast majority of cases, the cost of trying to act unnoticeably against the interests of the community is significantly higher than the cost of simply ensuring that the community profits from all your actions (not to mention the cost of getting caught; a toy expected-value sketch follows this list).
  3. It is very likely that volunteers will step forward to help when a visibly moral person needs (or even could benefit from) assistance in fulfilling their goals.
  4. Adding to the strength of the community directly feeds back into the support and infrastructure that the individual can leverage, freeing them to stop worrying about basic needs and pursue advanced goals.
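
To make point 2 concrete, here is a minimal expected-value sketch in Python. Every number in it is an illustrative assumption, not a claim from this post; the point is only that hiding costs and detection risk compound against covert defection.

    # Toy comparison of open cooperation vs. covert defection.
    # All values below are made-up illustrative assumptions.
    gain_honest   = 10.0   # payoff from openly benefiting the community
    gain_covert   = 13.0   # payoff from covert defection, if never caught
    p_caught      = 0.30   # chance the defection is eventually noticed
    cost_caught   = 25.0   # reputation/retaliation cost when it is
    cost_covering = 2.0    # ongoing cost of keeping the defection hidden

    ev_honest = gain_honest
    ev_covert = gain_covert - cost_covering - p_caught * cost_caught

    print(ev_honest, ev_covert)   # 10.0 vs. 3.5: covert defection must clear a high bar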

There are basically two real cases where selfishness is more rational than morality, and both are terminal cases.  The first is when the individual’s goals and the community’s goals are so directly in conflict that it is not possible for both to be achieved.  The most cited example is the so-called “paperclip maximizer” (the Friendly AI proponents’ nightmare of a super-intelligent artificial intelligence whose sole goal is to fill the universe with paperclips).  Such an entity will behave and appear moral and beneficent right up until the point where our actions can no longer meaningfully affect the timing of our death and subsequent conversion into paperclips.  Given the AI’s goal, this is the epitome of rationality.

Unfortunately, this also means that a truly rational sociopathic human can exist.  As long as the desire to hurt or kill outweighs the sum of all other goals (including survival), there is no true conflict between rationality and immorality.

The second “real” case is when the individual is both a) unlikely to interact with the community in the future and b) unlikely to have its reputation follow it.  In the game-theoretic example of the iterated Prisoner’s Dilemma, optimistic tit-for-tat is only optimal as long as interactions are repeated and expected to continue.  If an entity knows when the last interaction will take place (particularly when the other party lacks that information), the “rational” move is generally to take advantage of the other party in that final round.  In terms of human realities, though, it is becoming less and less likely that consequences and a reputation can be avoided as civilization grows ever more tightly knit.
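
Here is a minimal sketch of that last-round effect in the iterated Prisoner’s Dilemma (the payoff values are the standard textbook ones, chosen purely for illustration): a player who knows the horizon picks up exactly a one-round edge over tit-for-tat.

    # Iterated Prisoner's Dilemma: tit-for-tat vs. a defector who knows the horizon.
    PAYOFF = {  # (my_move, their_move) -> my payoff
        ("C", "C"): 3, ("C", "D"): 0,
        ("D", "C"): 5, ("D", "D"): 1,
    }

    def tit_for_tat(history):
        """Cooperate first, then copy the opponent's previous move."""
        return history[-1][1] if history else "C"

    def endgame_defector(history, rounds):
        """Cooperate until the known final round, then defect."""
        return "D" if len(history) == rounds - 1 else "C"

    def play(rounds=10):
        hist_a, hist_b = [], []   # each entry: (my_move, their_move)
        score_a = score_b = 0
        for _ in range(rounds):
            a = tit_for_tat(hist_a)
            b = endgame_defector(hist_b, rounds)
            score_a += PAYOFF[(a, b)]
            score_b += PAYOFF[(b, a)]
            hist_a.append((a, b))
            hist_b.append((b, a))
        return score_a, score_b

    print(play())   # (27, 32): knowing when the game ends buys a one-round gain

The edge is worth exactly one round, which is why the defector also needs the other party not to know the horizon, and needs no reputation to follow it afterwards.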

In addition, there are two basically nonsensical cases where short-sightedness and/or the ability to believe two mutually contradictory things at the same time leads to the belief (but not the reality) that enlightened self-interest diverges from morality.  These are the cases of a super-powerful or a super-stealthy entity theoretically being able to always avoid the consequences of its actions.  In reality, these are short-sighted rationalizations rather than accurate assessments.  There is no way to ensure that you will remain (or even that you currently are) beyond punishment, and it is entirely in a community’s self-interest to ensure that immorality is punished in direct proportion to the threat (which will be high if an entity believes it is beyond consequences).

Fortunately, both of the “real” cases are highly unlikely for most human beings, and those for whom enlightened self-interest truly differs from morality are exceedingly rare.  For the most part, it should be possible to assume that any deviation from morality is also a deviation from one’s own self-interest (also known by its technical term: stupidity).


Responses

  1. Wouldn’t you like to invoke virtue ethics for the “little prisoner’s dilemmas of the big anonymous city”? That is, the small sure gains of not being generous toward others are outweighed by the loss due to what such behavior means for who the agent is, i.e., for the development of the person: the psychological cost of being hypocritical, etc.

    For the case of mathematically neat agents, I can’t help but think about the counterfactual mugging scenario introduced by Vladimir Nesov at LessWrong (http://lesswrong.com/lw/3l/counterfactual_mugging/). Do you think it is related to virtue ethics?

  2. Here are some cases:
    Where there exists a monopoly position, the rational thing to do is to maximise profits (at the cost of reduced output), whereas the moral thing to do is to provide as much as is sustainable for the lowest cost (a worked example follows at the end of this comment).

    When lending money to the community, it is rational to over-react to a non-payer (immorally beat someone up or take their house) in order to cause fear among others, which will decrease the total number of late payments and non-payments.
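
    To make the monopoly case concrete, here is a minimal sketch with a linear demand curve and constant marginal cost; all of the numbers are illustrative assumptions, not figures from this thread.

      # Monopoly vs. perfect competition with demand P = a - b*Q and marginal cost c.
      a, b, c = 100.0, 1.0, 20.0       # demand intercept, demand slope, marginal cost

      # The monopolist maximises (P - c)*Q = (a - b*Q - c)*Q;
      # setting the derivative a - 2*b*Q - c to zero gives:
      q_monopoly = (a - c) / (2 * b)   # 40 units
      p_monopoly = a - b * q_monopoly  # price 60

      # Perfect competition drives price down to marginal cost:
      q_competitive = (a - c) / b      # 80 units
      p_competitive = c                # price 20

      print(q_monopoly, p_monopoly, q_competitive, p_competitive)

    With linear demand the monopolist rationally sells exactly half the competitive (socially optimal) quantity, at a far higher price; that output restriction is the immorality claimed above.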

  3. >> When there exists a monopoly position . . .

    I disagree. Bleeding people dry as you suggest will have serious negative effects down the road. What you suggest is emphatically *not* rational if you’re at all far-sighted.

    >> When lending money . . . .

    Again, I disagree. Sometimes your plan works. Sometimes the cops step in and put you in jail.

    – – – – – – – – – –

    My biggest problem with most so-called “rational” arguments is that they are generally very, very short-sighted and actually really dumb in the long term.

    It seems as if the rationalists’ absolute insistence on surety and perfection leads to an absolute myopia, since you can’t get 100% assurance beyond a very small number of steps. I wish I knew how to express that better. Any suggestions?

    • Ignore the cop/beating-up angle and take it as a bank following through on its legal right to repossess a home. This is often immoral and done as a signal to the rest of the community.

      As for monopoly… please read any economics textbook and tell me oil companies, banks and many other industries are not acting rationally yet against the public interest (i.e. immorally).

  4. >> As for monopoly… please read any economics textbook . . . .

    Please don’t be insulting. Once again, it’s a matter of short-sighted rationality vs. the long-term interests of the entity in question. Look at the state of the world and the corporatocracy today. How long do you think that state of affairs is going to last? The world certainly isn’t perfect, but time does wound all heels . . . eventually (provided that they don’t die first 😉 )

    It seems as if rationality of this sort is merely a cover to protect and defend immorality.

  5. I’m sorry if you feel insulted, but profit maximisation really is the rational and logical thing for people/companies to do in the vast majority of situations.

    The current state of corporatocracy aside, even in a perfect world we would still have companies producing at profit-maximising levels (and prices) rather than at the socially optimal levels. Unless you think the entire field of economics is too myopic (an entirely possible observation), or that the corporate structure is so fluid that the people making decisions at companies are far too interested in the (very) short-term performance of their companies; now that I’d 100% agree with.

    The rational level of production, even if your views are correct, is still the long-term profit-maximising level, which avoids price gouging and customer retaliation but is still [almost always] a long way below the optimal level that perfect competition would produce.

    I also happen to think a bank’s immoral decision to repossess due to a late payment, as a signaling tool to avoid other people’s late payments, is a rational and effective decision as well as immoral in most cases.

    • >>> I’m sorry if you feel insulted, but profit maximisation really is the rational and logical thing for people/companies to do in the vast majority of situations.

      >>> The current state of corporatocracy aside, even in a perfect world we would still have companies producing at profit maximising levels (and prices) rather than at the socially optimal levels.

      There are different forces at play that counteract the morality of companies. Keep in mind that, under this blog’s perspective, you need to be intelligent to act morally.

      One effect is the evolutionary competitive dynamics. Here, the companies with the higher momentary market-share gains outrun the others, so what is optimized is myopic and therefore immoral. This should be counterbalanced by transparency and (ultimately) wise choices by the customers.

      Another effect is that monopoly positions are not constrained by the effectiveness of the company. This case is very similar to the problems of state government. A monopoly company or a government is an organization that interacts with other organizations (on a more abstract level) and is just a bunch of acting individuals (on the concrete level). It is in the common interest of the parties to put pressure on the organization to act far-sightedly. Yet it may be in the interest of the parties that seek power to support myopic actions, to overturn the structure of power (or to keep it if they feel it is threatened). Unwise agents prefer a negative-sum game over a positive-sum game. Evolutionary “Darwinian” dynamics is immoral because it favors individual (i.e. myopic) selection over group (i.e. far-sighted) selection. But as the playing field grows in complexity, stability and the ability to expand to new niches, the “acting together” and “diversifying” associated with morality wins.

      Ultimately, there is an individual behind the desk. He or she should be moral. You are moral by wisely answering “what do I really want = why do I want what I want”.

      >>> I also happen to think a bank’s immoral decision to repossess due to a late payment, as a signaling tool to avoid other people’s late payments, is a rational and effective decision as well as immoral in most cases.

      It is not bluntly immoral if the client knows what he is getting himself into when he makes the deal. Life is not all about having it easy.

      I have another case. The leader of an organization is good at increasing the organization’s monetary efficiency (both gaining funds and cutting costs) but makes working for the organization more miserable. The moral choice for an employee is to leave once the misery outgrows the furthering of the goals he shares with the organization, but that choice is very difficult to make (it threatens both his living and any contribution outside of the organization). Here society “should” have tools to deal with that.

      • >> Evolutionary “Darwinian” dynamics is immoral, because it favors individual (i.e. myopic) selection over group (i.e. far-sighted) selection. But as the playing field grows in complexity, stability and the ability to expand to new niches, the “acting together” and “diversifying” associated with morality wins.

        You’re contradicting yourself here ;-). If evolutionary dynamics were truly immoral, how would morality win in the end? What is the additional force that counteracts the evolutionary dynamics (and was it not, itself, developed by those same dynamics)?

        This is yet another example of someone claiming something due to “short-term” rationality. Evolution really only favors “immoral acts” when survival and reproduction are immediately and permanently at stake — yet people insist on claiming and behaving as if it applies in *all* cases. This is simply not true.

      • >>> You’re contradicting yourself here 😉 . If evolutionary dynamics were truly immoral, how would morality win in the end? What is the additional force that counteracts the evolutionary dynamics (and was it not, itself, developed by those same dynamics)?

        Let me first say that I think the evolutionary “Darwinian” dynamics is not the be-all and end-all.

        Then, to address your question briefly and somewhat metaphorically: there are three forces of evolutionary dynamics (ED): (1) accommodation to the environment, (2) competition over homologous positions, and (3) diversification, the potential to expand into new niches. None of them is immoral in itself, but (2) is the most problematic. (2) is the domain of justice; it is only moral when it is subordinated to (1) and (3), “let the best one win”. In the context of ED, one might say that it is moral to be subordinate to the proliferation of the more encompassing ensemble rather than of its part, but that the evolutionary “Darwinian” dynamics is immoral because (2) becomes independent of the others once an entity (a unit of selection) meets some basic level of (1): the intra-ensemble ED coefficient can easily exceed the inter-ensemble ED coefficient, decreasing inter-ensemble “fitness” through an ED-in-the-small process. There is too much room for “inefficiency”. But ED-in-the-large will shape the fitness landscapes for (1) to make it more ensemble-aligned.
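
        As a minimal sketch of that intra- vs. inter-ensemble tension, here is a toy multilevel-selection simulation in Python. Every parameter is an illustrative assumption; whether cooperation rises or collapses depends entirely on the relative strength of the two selection levels.

          # Toy multilevel selection: within each group ("ensemble"), defectors
          # out-reproduce cooperators; between groups, more cooperative groups
          # reproduce more. All parameters are made up for illustration.
          import random

          GROUPS, SIZE, ROUNDS = 20, 25, 200
          B, C = 0.5, 0.2   # group benefit per cooperator share, private cost

          def step(groups):
              new = []
              for g in groups:
                  coop = sum(g) / len(g)
                  # Intra-ensemble ED: defectors enjoy the group benefit
                  # without paying the private cost of cooperating.
                  w = [1 + B * coop - (C if x else 0) for x in g]
                  new.append(random.choices(g, weights=w, k=len(g)))
              # Inter-ensemble ED: group fitness grows with total cooperation.
              fitness = [1 + B * sum(g) for g in new]
              return random.choices(new, weights=fitness, k=len(new))

          groups = [[random.random() < 0.5 for _ in range(SIZE)]
                    for _ in range(GROUPS)]
          for _ in range(ROUNDS):
              groups = step(groups)
          print(sum(map(sum, groups)) / (GROUPS * SIZE))  # final cooperation share

        Strengthen the group-level weights and “ED-in-the-large” dominates the final print; weaken them and the within-group advantage of defectors (“ED-in-the-small”) takes over instead.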

      • I understand your point but feel that it is artificially generated by your distinguishing between 1 and 2, while I consider 2 to be a proper subset of 1. And, as you say in your last sentence, ED-in-the-large will shape 1 to make it more moral. Thus, I’ll argue that you’re making my point that the “blind”, amoral process of evolution predictably leads to morality (which is contrary to what many people argue).

  6. >> I’m sorry if you feel insulted, but profit maximisation really is the rational and logical thing for people/companies to do in the vast majority of situations.

    😉 I didn’t *feel* insulted, but comments like “please read an economics textbook” avoid an honest intellectual debate by saying “you are ignorant” without even making an attempt at defending your point.

    Legally, a “for profit” corporation whose shareholders are more interested in making a profit than in the corporation’s products or services themselves (a state which is true of virtually any corporation that is majority publicly held) must have a primary goal of making a profit. Indeed, legally, if told to by the shareholders, a corporation MUST act immorally (as long as the actions are not illegal), just as, if told to by the shareholders, a company must act irrationally.

    Further, what companies *really* attempt to do is maximize profit over a certain time period. If that time period is too short, they enrage the public and their customers and quickly die. On the other hand, if the time period is too long, they have to pay attention to pesky things like the environment, the oil running out, etc., and that reduces their short-term profit. Not paying attention to these things is NOT rational if you’re interested in long-term profit maximization (or even the survival of the company, or the earth). Unfortunately, since stockholders know that they can bail at any time and refuse to believe that their choices, mediated through the company, truly have world-altering implications, they legally compel the company to act in ways that appear rational when only the short term is considered but are obviously irrational in the long term.

    In a nutshell, this is merely another example of the “Tragedy of the Commons” with the entire earth being the commons.

    More tragically, what we have done is created a raft of semi-controlled entities whose primary goal is inherently sociopathic — because we are too short-sightedly “rational” to recognize the long-term effects and true “irrationality” of the situation we are creating.

  7. >> Unless you think the entire field of economics is too myopic, which is an entirely possible observation, or that the corporate structure is so fluid that the people making decisions at companies are far too interested in the (very) short term performance of their companies – now that I’d 100% agree with.

    Then we’re in 100% agreement (so why do I need to read an economics textbook? ;-))

    >> The rational level of production, even if your views are correct, is still at the long-term profit maximising level, which avoids the price gouging and customer retaliation, but still is [almost always] a long way below the optimal level that perfect competition would produce.

    You’re conflating at least two points here; however, if society were structured in such a way that it was expensive/unprofitable to be immoral, then companies would strive to be moral if only because then the “short-term rationality” would agree with the “long-term rationality.”

    Excessive profits are to companies like cigarettes are to a smoker or drugs are to an addict. They think they need them in the short-term but they will kill them in the long-term.

    >> I also happen to think a bank’s immoral decision to repossess due to a late payment, as a signaling tool to avoid other people’s late payments, is a rational and effective decision as well as immoral in most cases.

    I would argue that a bank’s decision to repossess, if done in accordance with a contract that the buyer knowingly entered into without fraud or coercion, is entirely moral. People need not only to suffer the consequences of poor/greedy decisions but to realize that the consequences were their own fault. That is the only thing that makes poor/greedy decisions irrational in the short term as well as the long term. If a bank can’t repossess (if only due to your idea of morality), then it becomes short-term rational (at least) to fraudulently enter into a contract with the bank that you know you can’t keep (which is a VERY bad idea).

    If the bank pulled a bait-and-switch (which they frequently do, with hidden rate hikes after low initial rates) that was successfully hidden from the buyer, then the bank should be punished. Unfortunately, what the bank is normally doing is revealing the information after the buyer has been reeled in but before the contract is signed, and letting “human nature” do its dirty work. Again, this is an immoral act (and irrational in the longest term since, if nothing else, it calls a bunch of regulations down upon your head if society is smart), but it is “short-term rational”.

    Starting to notice a theme here? Short-term rationality = long-term irrationality = immorality. All morality *really* is, is long-term rationality. Nothing more, but nothing less.

  8. A quick point before my meeting:

    Evolution (or virtually any non-human process) is amoral, not immoral. Huge difference. (Although some animals have a basic morality, it is difficult to call them full moral agents.)

    • Agreed, evolution is amoral. It does, however, “spontaneously” give rise to probable emergent behaviors like “intelligence” and “morality”.

      I don’t believe that many (if any) animals have sufficient reasoning capability to be even partial moral agents. Read de Waal and Tomasello for interesting perspectives on this issue.

      • Any particular reference? (I’ve found some de Waal on how the human is a social animal, but it takes two to be selfish anyway.)

      • Primates and Philosophers: How Morality Evolved (Princeton Science Library) [Hardcover] by Frans de Waal and
        The Age of Empathy: Nature’s Lessons for a Kinder Society by Frans de Waal (Hardcover – Sept. 22, 2009)

      • I’ve heard somewhere that even humans were not conscious until after Homer. But I gather you refer to forms of (rational) empathy?

      • The Origin of Consciousness in the Breakdown of the Bicameral Mind by Julian Jaynes is what you’re referring to. His views are an interesting lifetime of conversation all by themselves.

      • A short review of M. Tomasello “The Cultural Origins of Human Cognition”: http://www.2think.org/humancognition.shtml

      • I’ve only read his
        Why We Cooperate (Boston Review Books) by Michael Tomasello (Hardcover – Oct. 30, 2009)
        Origins of Human Communication (Bradford Books) by Michael Tomasello (Hardcover – Sept. 30, 2008) and
        Constructing a Language: A Usage-Based Theory of Language Acquisition by Michael Tomasello (Paperback – Mar. 31, 2005)

        I think that he’s getting better as he goes along . . . . (so I’m disinclined to go back to his 2001 book that you recommended ;-))

      • (I only stumbled upon the review, I haven’t read it or anything; I’ve been too quick to report.)

    • I agree, my point is meant to be that the “blind”, amoral process of evolution predictably leads to morality (which is contrary to what many people argue).

    • Yes, and genes don’t want anything.

      • (I’m sorry if I insult anyone’s intelligence with this explanation after my attempt at being somewhat ironic, but if someone uses the metaphorical language of genes wanting their proliferation, as Richard Dawkins does, then evolutionary processes can be considered moral or immoral by an extension of the metaphor.)

      • Want is an interesting term. At what point does a hard-coded locomotion towards (or away from) a certain sensory gradient turn into a desire to go somewhere? Does an amoeba have wants? A planaria? A sponge? A fish? A crocodile? A cat? A human? An AI?

      • I’ll say that (as a trivial case) light doesn’t want to go straight; it simply cancels out along all other trajectories. 😉 Genes don’t want to proliferate either; they simply do or perish. A reactive agent like an amoeba does not have wants. A BDI agent like a cat has wants-I. A reflective agent like a modern human has wants-II. A bicameral man has wants in the spectrum between I and II, to the degree that he can argue with both the god and the animal in him. The wants of an AI don’t need to recapitulate human phylogeny, but are placed on the spectrum depending on both the grounding of the AI and the deliberative-reflective “freedom” of the architecture. A “want” is an explanatory tool, as we discussed re. goals (the goal of a non-reactive pursuit, and, by metonymy perhaps, that pursuit, or, by “modus ponens”, its actions: I want to be strong; if I want to be strong I should exercise; therefore I want to exercise). Using the term “pleasure” so broadly that it’s mostly metaphorical: a reactive agent has pleasures and pursues immediate pleasures, but cannot postpone pleasure; a BDI agent can want (want-I) to feel pleasure and can postpone a pleasure due to a conflicting want; and only a reflective agent can want (want-II) to not feel pleasure (due to being influenced by Schopenhauer, say).
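
        A minimal code sketch of that reactive / want-I / want-II ladder (the class and method names are my own illustrative assumptions, not a standard agent API):

          from dataclasses import dataclass

          @dataclass
          class Stimulus:
              name: str
              pleasure: float

          class ReactiveAgent:
              """Pursues immediate pleasure: pure stimulus-response, no postponing."""
              def act(self, stimuli):
                  return max(stimuli, key=lambda s: s.pleasure)

          class BDIAgent(ReactiveAgent):
              """Want-I: standing desires that can veto (postpone) a pleasure."""
              def __init__(self, vetoes):
                  self.vetoes = set(vetoes)   # stimuli a conflicting want rules out
              def act(self, stimuli):
                  allowed = [s for s in stimuli if s.name not in self.vetoes]
                  return max(allowed or stimuli, key=lambda s: s.pleasure)

          class ReflectiveAgent(BDIAgent):
              """Want-II: can revise its own wants ('why do I want what I want?'),
              e.g. come to want not to feel a pleasure at all."""
              def reconsider(self, reject):
                  self.vetoes |= set(reject)

          cake, gym = Stimulus("cake", 0.9), Stimulus("gym", 0.4)
          print(ReactiveAgent().act([cake, gym]).name)      # cake
          print(BDIAgent({"cake"}).act([cake, gym]).name)   # gym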

      • (http://en.wikipedia.org/wiki/Schopenhauer is an interesting guy, but you need some Nietzsche as an antidote.)

  9. They technically ‘want’ nothing; they simply obey the natural laws, as we do. However, we have an internal feedback loop that internalises the external laws and provides us with a (fake) sense of the ability to decide what we want.

    • You want to say that you didn’t have a real sensation of the decision to say it? 😉 How do you define “not fake”?

  10. I’ve tried to collect teasers for Nietzsche, but the beginning of http://plato.stanford.edu/entries/nietzsche-moral-political/ is so nicely abstracted that I can just recommend it. (I imagine that your opinion would be that Nietzsche exaggerates to the point of being wrong, but is not wrong-headed. Do you have any comments?)

    • I’ve been too quick to argue that Nietzsche is not wrong-headed as far as “our” considerations go. (I’m sorry, being tired…)

      • OK, an argument might go, after citing “We have expended so much labor on learning that external things are not as they appear to us to be — very well! the case is the same with the inner world! Moral actions are in reality “something other than that” — more we cannot say: and all actions are essentially unknown.” (http://en.wikipedia.org/wiki/The_Dawn_%28book%29), that we are now coming into a position of knowledge here.

    • Nietzsche is childish… I’ve extracted some teasers here: http://lukstafi.blogspot.com/2010/06/nietzsche-teasers.html

      • Nietzsche annoys me for the same reason that Yudkowsky always annoys me. He is a dart-thrower with no coherent theory of his own who will change his position to support his argument of the day while also not being above using the same rhetorical tricks that he accuses others of.

        He is interesting to read, to mine for ideas and to challenge your own, but ultimately there is no real substance there (though there are a LOT of good quotes, which eventually cancel each other out 😉 )

  11. “I agree, my point is meant to be that the “blind”, amoral process of evolution predictably leads to morality”

    I would suggest you seriously think about this statement. I read it a couple of times and have come to one of two conclusions about it:

    1. There is an ‘objective morality*’ that is best for reproduction and/or species survival and evolution by natural selection will always tend towards it for that reason.
    2. Evolution shapes our morality so whatever we consider moral is entirely relative to our evolutionary history and a different evolutionary path would have led to a different morality.

    I can’t see 1 as being realistic since so many species seem to do pretty well while being exceedingly immoral to other species, including humans, and I happen to believe that torturing animals for fun is immoral.

    Equally, I think that 2 is also false since the existence of higher brain functions allowing reasoning means that we can step completely outside our evolutionary morality and come up with a more intellectual reaction than disgust (which is the primary driver of morality in the brain**).

    * This can either be the proper objective morality or objectively the best morality for a species to reproduce and survive long term. Don’t care which since I disagree with this point anyway 🙂

    ** This is why so many cultures have been, and still are, against consensual adult incest – even when they cannot have children, or why so many people are against equal rights for homosexuals – the thought of the act disgusts them.

    • I *am* arguing for 1. There is an ‘objective morality*’ that is best for reproduction and/or species survival and evolution by natural selection will always tend towards it for that reason.

      The argument “so many species seem to do pretty well while being exceedingly immoral to other species, including humans” merely reinforces that the world is nowhere near a final stage equilibrium. Humans have become VASTLY more ethical over the past three thousand years (as a whole, not just the scholars who really haven’t changed much at all).

      2 is certainly true if you replace evolution with culture and is what explains the divergence of morality across cultures. But I also argue that all cultures are converging upon the ‘objective morality’ in 1.

      *I normally consider it objectively the best morality for a species to reproduce and survive long term, since “proper objective morality” gives no reference for “proper”.

      • I find it problematic. It might be argued that “evolution-in-the-large” needs too many resources to be entrusted with converging to M, because evolution-in-the-small can kill us all (or leave us in a similarly boring meandering) within a relevant pool of reference (e.g. the solar system). Moreover, the evolutionary process leaves the track of biology just as things get “morally interesting” and becomes an economic, cultural evolution instead. I’d say that one can postulate an objective morality in the claimed sense of “best for interesting life expansion”, but claiming that “evolution by natural selection will always tend towards it for that reason” stretches the meaning of “natural selection”: reasoning is the acting “force” (the selection may well be over multiple earths).

      • You jumped from “tend towards it” to “converging to M”. I made no statements about it successfully converging (particularly before other forces take over). Your strawman is indeed problematic but it’s not what I’m arguing. 🙂

  12. Incidentally, since no-one has ever suitably defined morality, outside of it being an individual thing, people could easily be talking at cross purposes…

    • A morality is a code of conduct for a person (an agent): a function that says whether a behavior is OK in a context (a universal function, i.e. not dependent on further parameters), meaning that the agent only picks actions that are OK according to that morality, as long as he holds that morality.
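
      Rendering that definition literally in code (the type names are my own illustrative assumptions, nothing standard):

        # A morality is a universal predicate over (behavior, context);
        # an agent holding it only picks behaviors the predicate approves.
        from typing import Callable, NamedTuple

        class Behavior(NamedTuple):
            description: str

        class Context(NamedTuple):
            description: str

        Morality = Callable[[Behavior, Context], bool]

        def permissible(options: list[Behavior], ctx: Context,
                        m: Morality) -> list[Behavior]:
            """The actions an agent holding morality m may pick in ctx."""
            return [b for b in options if m(b, ctx)]

      The “universal” clause is exactly the absence of extra parameters in the Morality signature.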

    • I agree with keddaw. Part of the reason why I haven’t made a post is that I’m starting back at the beginning with a solid definition of morality. Look for it, and a bunch more, on Sunday.

