One of the most vexing objections that I come across when trying to promote my ideas is “I don’t believe that it is always in an entity’s best interests to be moral.” The person then comes up with an edge case where either a) the “moral” action results in the entity’s death, b) the entity is super-powerful and cannot be stopped, or c) the circumstances are such that no one else will ever know that the entity has been immoral. All of these cases underestimate the rationality of “correct” morality by taking a short-sighted (and incorrect) view of what is moral.
Morality recognizes that there are very few circumstances where a “rational” entity will choose a fatal “moral” act over remaining alive. It further recognizes that insisting upon such a choice will result in fewer entities being moral — thus reducing the fulfillment of morality’s own goals. An intelligent act to avoid certain death may therefore not be what is generally recognized as “moral”, but it is generally recognized as rational, reasonable, and not “immoral”.
In the second case, while an entity may not be able to be stopped, it can still be made to bear consequences (even if these are as minor as others minimizing assistance and subtly thwarting its goals whenever possible) — short of it killing everyone else or removing their will, actions which have consequences of their own. Given enough time, the sanctions or reparations that the community requires before restoring the entity to its good graces will offset the expected utility of any “immoral” action. There is also the very real possibility of an even larger and more powerful entity finding out about, and taking exception to, such an immoral situation and imposing not only consequences but punitive damages as well.
The third case simply requires a reality check. What act can you perform that is guaranteed to invisibly benefit you (so you won’t get caught) while hurting another entity’s goals, and that is still worthwhile to you after the long-term effects are taken into account — including, in the best case, reduced benefits to everyone from the damaged community, and in the worst, the repercussions of paranoia and suspicion and the effort of keeping things hidden? Undoubtedly such cases occur, but their infrequency — and the human proclivity for false positives in identifying them — makes taking advantage of perceived cases a losing proposition (and that’s even before considering the cost of evaluating them, watching for opportunities, etc.).
Sooooo . . . my second challenge requires takers to present a detailed and specific scenario where it is rational to be immoral (i.e. where enlightened self-interest diverges from morality — OR — where selfishness is not ultimately stupidity).
= = = = = = = = = =
Note: I still have not had any takers for the previous challenge to “Disprove This Definition of Morality”.