Posted by: Becoming Gaia | May 31, 2010

Challenge: When Is It Rational To Be “Immoral”?


One of the most vexing objections I come across when trying to promote my ideas is “I don’t believe that it is always in an entity’s best interests to be moral.”  The person then comes up with an edge case where either a) the “moral” action results in the entity’s death, b) the entity is super-powerful and cannot be stopped, or c) the circumstances are such that no one else will ever know that the entity has been immoral.  All of these cases underestimate the rationality of “correct” morality by taking a short-sighted (and incorrect) view of what is moral.

Morality recognizes that there are very few circumstances in which a “rational” entity will choose a fatal “moral” act over remaining alive.  It further recognizes that insisting upon such a choice would result in fewer entities being moral, thus reducing the fulfillment of its goals.  An intelligent act taken to avoid certain death may therefore not be what is generally recognized as “moral”, but it is generally recognized as rational, reasonable, and not “immoral”.

In the second case, while an entity may not be able to be stopped, it cannot avoid being made to bear consequences (even if they are as minor as others minimizing assistance and subtly thwarting its goals whenever possible) short of killing or removing the will of everyone else (which has consequences of its own).  Given enough time, the sanctions imposed by the community, or the reparations it requires before the entity can return to its good graces, will offset the expected utility of any “immoral” action.  There is also the very real possibility of an even larger and more powerful entity finding out about such an immoral situation, taking exception to it, and imposing not only consequences but punitive damages as well.

The third case simply requires a reality check.  What act can you perform that is guaranteed to benefit you invisibly (so you won’t get caught) while hurting another entity’s goals more, and that is still worthwhile to you after the long-term effects are taken into account (including, in the best case, reduced benefits to everyone from the damaged community, and in the worst, the repercussions of paranoia and suspicion and the effort of keeping things hidden)?  Undoubtedly such cases occur, but their infrequency and the human proclivity for false positives in identifying them make taking advantage of perceived cases a losing proposition (and that is before even considering the costs of evaluating and looking out for such opportunities).
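As a rough, purely illustrative sketch of the long-term point (my own toy model, with made-up numbers, not anything from the definition of morality itself): imagine an agent interacting with a community over many rounds, where a single “immoral” defection yields a one-time bonus but triggers reduced cooperation and an eventual reparation cost.

# Toy model: cumulative payoff of one defection vs. staying moral.
# All parameter values are arbitrary and for illustration only.

def lifetime_payoff(rounds, defect_round=None,
                    coop_gain=1.0, defect_bonus=10.0,
                    sanctioned_gain=0.2, reparation_cost=8.0):
    """Cumulative payoff over `rounds` interactions.

    defect_round: round at which the agent defects once, or None for never.
    """
    total = 0.0
    sanctioned = False
    for t in range(rounds):
        if defect_round is not None and t == defect_round:
            total += defect_bonus        # short-term gain from the immoral act
            total -= reparation_cost     # cost of getting back into good graces
            sanctioned = True
        elif sanctioned:
            total += sanctioned_gain     # community minimizes assistance, thwarts goals
        else:
            total += coop_gain           # normal benefits of full cooperation
    return total

if __name__ == "__main__":
    horizon = 100
    print("always moral:", lifetime_payoff(horizon))          # 100.0
    print("defect early:", lifetime_payoff(horizon, 5))       # ~25.8
    print("defect late: ", lifetime_payoff(horizon, 95))      # ~97.8

Under these (assumed) numbers the defector always comes out behind; whether a one-time gain could ever dominate depends entirely on how much future interaction remains and how large the sanction is, which is exactly the short-sighted calculation criticized above.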

Sooooo  .  .  .  my second challenge requires takers to present a detailed and specific scenario where it is rational to be immoral (i.e. where enlightened self-interest diverges from morality — OR — where selfishness is not ultimately stupidity).

= = = = = = = = = =

Note:  I still have not had any takers for the previous challenge to “Disprove This Definition of Morality”.


Responses

  1. (0) (If you’re interested in the backstory to “Let the Right One In”, I can answer, having read the book and quite a share of fansite discussion.)

    (1) There are situations that are amoral, and the genre of tragedy is based on them. That’s when the contextually-relevant morality heuristics fail (e.g. the virtues contradict each other). One is left without guidance as to what choice to make. Every choice seems immoral.

    (2) How do you handle the situation where there are not enough resources to develop the trust and/or retaliation mechanisms needed to arrive at cooperation? OK, perhaps this is too broad to bite on.

    (a) are people who don’t sweep insects out of their way immoral? rational?

    (b) is a vampire who kills occasionally, but minimizes the burden on the society as well as he can while staying alive under the radar, immoral? rational?

    (c) is someone who commits suicide because he values his life “below zero”, immoral? rational?

    (d) is someone who commits suicide because he estimates that the net effect of his life on the society is negative (i.e. the effect his continued existence will acquire), immoral? rational?

    (e) is an AI that manipulates the global economy in ways that are not easily detectable or that can be explained away, in order to acquire more computronium, to the detriment of the goals of most, but not all, participants in the economy (and not to the detriment of the “efficiency” of the economy, which many observers will argue to be “on the rise”), immoral? rational? (This is an analogue of the rain-forest example.)

    (f) like (e), but where the smashed goals are considered to be “unworthy” by most? (so there is a net benefit of weighted goal satisfaction, but decrease of freedom to most of the participants)

    • I don’t know if I should give quick answers or not. 🙂

      a) amoral, an individual sweeping insects out of their way has very little impact on anyone’s goals
      b) moral and rational, making the best of a bad situation
      c) moral in intent, probably irrational and thus immoral in reality
      d) again moral in intent, probably irrational and thus immoral in reality BUT if rational then moral (end-of-life assisted suicide is rational and moral)
      e) grossly immoral (not getting caught DOES NOT equal morality), long-term irrational for all that it may appear short-term rational (just like the humanocentricity of “Friendly” AI :-))
      f) stealthy manipulation is itself immoral since it is inherently anti-cooperation (add this comment to e) as well)

      • Well, (b) is difficult, especially as it is parametrized by both the agent and the society. I guess you would extend the BUT of (d), and admit a reverse BUT of “immoral in reality” if the vampire overlooks an opportunity for lesser damage.

        What is your take on the importance of consent in this case?

        (a) it is OK only to take blood of a consenting human

        (b) it is OK to take blood of a non-consenting human, in order to avoid the risks of negotiation (the human recovers without being infected)

        (c) it is OK to take life of a consenting human, in order to minimize risks to the vampire and indirectly other humans

        (d) it is OK to take life of a non-consenting human

        (d1) based on an estimated value of that human’s life

        (d2) as a random process akin to natural disaster

        A malnourished vampire is dangerous, as his humane faculty has less control. A vampire needs about two humanfuls of fresh living blood a week.

        I’m sorry in case I’m gross..ly offtopic — I hope it’s not 😉

      • I’ve been too harsh to the vampire, perhaps he only needs a humanful of blood every two weeks.

  2. I have a case that probably contradicts at least some direct formulation of your position, but which I think, by the spirit of your position, you should consider moral in at least some contexts.

    Sparta “exponentiated”: a society with a relatively high birth rate, whose children, at many stages of development right into maturity, are subjected to life-threatening tests, and only the best survive. The society thereby uses artificial evolutionary pressure to evolve its own race.

    • Here I would ask again for the clarification of “maximize satisfaction of goals”: are we counting somehow conceived platonic goals, somehow coalesced goal instances, (person/agent, goal) pairs, or straight-out shouldness-flow-edges of persons/agents?

    • The moral solution is to let the candidates choose whether they want to pursue a test or to be expelled from the society, isn’t it?

