My second challenge invited takers to present a detailed and specific scenario where it is rational to be immoral (i.e. where enlightened self-interest diverges from morality — OR — where selfishness is not ultimately stupidity). Since no one took me up on it, I’ll just have to do it myself. 😉
Taking our goal-based approach to morality: self-interest is pursuing your own goals, morality is forwarding the goals of the community, and selfishness is pursuing your own goals to the detriment of the community's goals. Enlightened self-interest, then, is realizing that:
- Visibly acting to the detriment of the community directly causes community behavior that is detrimental to your own goals.
- In the vast majority of cases, the cost of trying to act unnoticeably against the interests of the community is significantly higher than the cost of simply ensuring that the community profits from all your actions (not to mention the cost of getting caught).
- It is very likely that volunteers will step forward to help when a visibly moral person needs (or even could benefit from) assistance in fulfilling their goals.
- Adding to the strength of the community directly feeds back to the support and level of infrastructure that the individual can leverage to stop worrying about basic needs and pursue advanced goals.
There are basically two real cases where selfishness is more rational than morality, and they are both terminal cases. The first is when the individual's goals and the community's goals are so directly in conflict that it is not possible for both to be achieved. The most cited example of this is the so-called "paperclip maximizer" (the Friendly AI proponents' nightmare of a super-intelligent artificial intelligence whose sole goal is to fill the universe with paperclips). Such an entity will undoubtedly behave and appear moral and beneficent until our actions can have only minimal impact on the timing of our death and subsequent replacement with paperclips. Given the AI's goal, this is the epitome of rationality.
Unfortunately, this also means that a truly rational sociopathic human can exist. As long as the desire to hurt or kill overrides the sum of all other goals (including survival), there is no true conflict between rationality and immorality.
The second “real” case is when the individual is both a) unlikely to interact with the community in the future and b) unlikely to have its reputation follow it. In the game-theoretic example of the Prisoner’s Dilemma, optimistic tit-for-tat is only optimal as long as interactions are repeated and expected to continue. If an entity knows when the last interaction will take place (and particularly when the other party does not have such information), the optimal “rational” move is generally to take advantage of the other party. In terms of human realities, though, it is becoming less and less likely that consequences and a reputation can be avoided as civilization grows ever more tightly knit.
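The endgame logic described above can be sketched in a few lines of Python. The payoff values below are the conventional Prisoner's Dilemma numbers and the strategy names are illustrative assumptions, not anything from the argument itself; the point is simply that a player who knows which interaction is the last can profit by defecting on it against an unsuspecting tit-for-tat partner.

```python
# Illustrative sketch: the "known last round" effect in the iterated
# Prisoner's Dilemma. Payoffs are the standard T=5, R=3, P=1, S=0.

PAYOFFS = {  # (my_move, their_move) -> my payoff; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(their_history, round_no, total_rounds):
    """Cooperate first, then mirror the opponent's previous move."""
    return their_history[-1] if their_history else "C"

def endgame_defector(their_history, round_no, total_rounds):
    """Play tit-for-tat, but defect on the known final round."""
    if round_no == total_rounds - 1:
        return "D"
    return their_history[-1] if their_history else "C"

def play(strategy_a, strategy_b, rounds):
    """Run an iterated match and return (score_a, score_b)."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for r in range(rounds):
        move_a = strategy_a(hist_b, r, rounds)
        move_b = strategy_b(hist_a, r, rounds)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat, 10))       # (30, 30): mutual cooperation
print(play(endgame_defector, tit_for_tat, 10))  # (32, 27): the defector profits
```

Over ten rounds, two tit-for-tat players each earn 30; the endgame defector squeezes out 32 while its partner falls to 27. Of course, this advantage exists only because the partner cannot retaliate afterward, which is exactly the condition (no future interaction, no reputation) that the text identifies as rare in a tightly knit civilization.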
In addition, there are two basically nonsensical cases where short-sightedness and/or the ability to believe two mutually contradictory things at the same time lead to the belief (but not the reality) that enlightened self-interest diverges from morality. These are the cases of a super-powerful or a super-stealthy entity theoretically being able to always avoid the consequences of its actions. In reality, these are short-sighted rationalizations. There is no way to ensure that you will remain (or even that you currently are) beyond punishment, and it is entirely in a community's self-interest to ensure that immorality is punished in direct proportion to the threat (which will be high if an entity believes it is beyond consequences).
Fortunately, both of the "real" cases are highly unlikely for most human beings, and those for whom enlightened self-interest truly differs from morality are exceedingly rare. For the most part, it should be possible to assume that any deviation from morality is also a deviation from one's own self-interest (also known by its technical term — stupidity).