Mailbag posts happen when I get such great comments that the reply turns into something that I *really* want to post instead. Feedback really helps when one is trying to improve both ideas and the communication of those ideas – and commenters who successfully inspire mailbag posts are *really* appreciated. I also tend to read what they have published – which then frequently leads to me liking it and disseminating it (Hint, hint ;-)).
Callan (http://philosophergamer.blogspot.com/) commented:
“to suppress or regulate selfishness and make cooperative social life possible”
That still has ‘cooperative social life’ as a fairly ill-defined term. I mean, the Nazis had a ‘cooperative social life’ to a degree.
I’d suggest moving away from there being just one way of cooperative social life and toward multiple models – and, further, describing the model in a more board-game-like way: moment-to-moment instructions for what people are doing from this minute to the next.
Which I know is unattractive – it’s more attractive to go with the ambiguity of ‘cooperative social life’. That is because of (and I agree with this) your point ‘(and part of morality “likes”* it that way)’
I.e., the feeling that it needs to be more than just a board game… it needs to be, like, cooperatively social and… *insert more ambiguous terms*
Is it that “cooperative social life” is ill-defined, or that it is very broad? It is intentionally very broad (i.e. it is meant to encompass all of your multiple models while, hopefully, ruling out the negative cases). The Nazis only had a cooperative social life among themselves. They did not suppress or regulate their selfishness towards others. Indeed, they reveled in it and made it a point of pride.
The board game is a great analogy (which I think I’ll steal, thank you). You need rules (or restrictions) like “don’t be selfish”, things that promote cooperative social life and gain you “karma or altruism” points (giving to charity, saving the lives of children), and rules by which you can spend points to relax the rules (“yes, you can shoot that sniper to save the lives of the children he is shooting”). What many people don’t understand about ethics is that every rule has circumstances under which it “SHOULD” be broken. The ONE exception that proves the rule – and defines ethics (and really is Kant’s Categorical Imperative) – is MAKE COOPERATIVE SOCIAL LIFE POSSIBLE. That law is universal (and does not rule out altruistic punishment – indeed, simulations show that it *requires* it when necessary). Selfishness is best defined as those things that are contrary to a COOPERATIVE SOCIAL LIFE and that we only do because they benefit ourselves. But small selfish acts are not only allowed but actually promoted if they enable us to “pay” for them – and we actually do – with a greater amount of social benefit points (“yes, Bill Gates should be a member of the 1% because, damn, look at what he is doing with his money”).
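(As a purely illustrative aside, here is a minimal sketch of that bookkeeping as a toy program. The class name, point values, and example “acts” below are all hypothetical – invented just to show the mechanics of earning cooperation points and spending them to cover small selfish acts, not anything from an actual simulation.)

```python
# Toy sketch of the "moral board game" bookkeeping described above.
# MoralLedger and the point values are hypothetical illustrations only.

class MoralLedger:
    """Tracks an agent's banked 'cooperation points'."""

    def __init__(self):
        self.points = 0

    def cooperative_act(self, benefit_to_others):
        # Acts that promote cooperative social life earn points.
        self.points += benefit_to_others

    def selfish_act(self, cost_to_others):
        # A selfish act is only "allowed" if it can be covered by
        # previously earned cooperation points; otherwise the rules
        # of the game forbid it.
        if cost_to_others > self.points:
            return False
        self.points -= cost_to_others
        return True


ledger = MoralLedger()
ledger.cooperative_act(10)      # e.g., giving to charity
print(ledger.selfish_act(3))    # small selfishness, already paid for: True
print(ledger.selfish_act(100))  # large selfishness, not covered: False
```

The point of the sketch is only that the “pay for it first” rule is mechanically checkable – the interesting (and hard) part is deciding which acts earn points and how many, which is exactly where context comes in.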
So . . . what I think you are asking for with your multiple models is how to translate the topmost goal “to suppress or regulate selfishness and make cooperative social life possible” into a board game based upon the relevant environment and circumstances. I particularly like that concept/phrasing because it starts to move us back from the short-sighted reductionist distraction (red herrings) of ethical dilemmas (which are always caused either by insufficient information or by circumstances so contrived that they break the brittleness of our expert-system-like moral sense) towards the “Virtue Ethics” of the ancient Greeks.
The Consequentialists argue that consequences are what matter (which I agree with), except that it is impossible to calculate all the consequences – particularly when our self-deception (see Trivers, etc.) has evolved to enable our selfishness. The Deontologists argue that rules are what matter – which I also agree with, because rules are what produce the best consequences despite (or, more accurately, because of) our not being able to calculate them. That probably makes me a consequentialist – but I *really* hate being lumped with the short-sighted reductionists who can’t see their way out of the “moral dilemmas” caused by not being able to explain why you can/should kill one person to save five in the trolley problem but can’t/shouldn’t in the case of an involuntary organ donor and five dying patients (it’s because of the context that you stripped out, you . . . you . . . reductionists! ;-)).
Over the next few posts, I’m going to try to cover why context is king (and why context-sensitivity is *totally* independent of/orthogonal to “moral relativism”) and to explain how Haidt’s top-level definition can be grounded in our moral sense by extending my new board game analogy.