Posted by: Mark Waser | Apr 20, 2014

Mailbag 4: A Great New Analogy


Mailbag posts happen when I get such great comments that the reply turns into something that I *really* want to post instead. Feedback really helps when one is trying to improve both ideas and the communication of those ideas – and commenters who successfully inspire mailbag posts are *really* appreciated. I also tend to read what they have published – which then frequently leads to me liking it and disseminating it (Hint, hint ;-)).

Callan (http://philosophergamer.blogspot.com/) commented

“to suppress or regulate selfishness and make cooperative social life possible”

That still has ‘cooperative social life’ as a fairly ill-defined term. I mean, the Nazis had a ‘cooperative social life’ to a degree.

I’d suggest moving away from there being just one way of cooperative social life and instead having multiple models – and, further, describing each model in more of a boardgame-like way: moment-to-moment instructions for what people are doing from this minute to the next.

Which I know is unattractive – it’s more attractive to go with the ambiguity of ‘cooperative social life’. That is because of (and I agree with this) your point ‘(and part of morality “likes”* it that way)’

I.e., the feeling that it needs to be more than just a boardgame…it needs to be, like, cooperatively social and…*insert more ambiguous terms*

Is it that “cooperative social life” is ill-defined or that it is very broad? It is intentionally very broad (i.e. it is meant to encompass all of your multiple models while, hopefully, ruling out the negative cases). The Nazis only had a cooperative social life among themselves. They did not suppress or regulate their selfishness towards others. Indeed, they reveled in it and made it a point of pride.

The board game is a great analogy (which I think I’ll steal, thank you). You need to have rules (or restrictions) like “don’t be selfish”, things that promote cooperative social life and gain you “karma or altruism” points (giving to charity, saving the lives of children), and rules by which you can spend points to relax the rules (“yes, you can shoot that sniper to save the lives of the children he is shooting”). What many people don’t understand about ethics is that every rule has circumstances under which it “SHOULD” be broken. The ONE exception that proves the rule — and defines ethics (and really is Kant’s Categorical Imperative) — is MAKE COOPERATIVE SOCIAL LIFE POSSIBLE. That law is universal (and does not rule out altruistic punishment — indeed, simulations show that it *requires* it when necessary). Selfishness is best defined as those things which are contrary to a COOPERATIVE SOCIAL LIFE (that we only do because they benefit ourselves). But small selfish acts are not only allowed but actually promoted if they enable us to, and we actually do, “pay” for them with a greater amount of social benefit points (“yes, Bill Gates should be a member of the 1% because, damn, look at what he is doing with his money”).
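To make those mechanics concrete, here is a minimal sketch in Python. It is purely illustrative: the deed names, point values, and the “shoot the sniper” exception are invented for the example, and nothing here prescribes particular numbers or data structures. The only idea it encodes is the one above: good deeds bank “karma or altruism” points, and a rule may be relaxed only by spending points earned beforehand.

    # Illustrative sketch only: deed names and point values are made up.
    GOOD_DEEDS = {"give_to_charity": 5, "save_a_child": 50}   # deeds that earn points
    RULE_EXCEPTIONS = {"shoot_the_sniper": 40}                # cost to relax a rule

    class Player:
        def __init__(self):
            self.points = 0  # banked "karma or altruism" points

        def do_good(self, deed):
            """Promoting cooperative social life earns points."""
            self.points += GOOD_DEEDS[deed]

        def break_rule(self, exception):
            """A rule may be broken only if it is 'paid for' with a greater
            amount of previously earned social-benefit points."""
            cost = RULE_EXCEPTIONS[exception]
            if self.points < cost:
                raise PermissionError("not enough banked social benefit to justify this")
            self.points -= cost

So a player who has saved a child (50 points) can justify shooting the sniper (spending 40), while a player with no banked points cannot.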

So . . . what I think you are asking for with your multiple models is how to translate the topmost goal “to suppress or regulate selfishness and make cooperative social life possible” into a board game based upon the relevant environment and circumstances. I particularly like that concept/phrasing because it starts to move us back from the short-sighted reductionist distraction (red herring) of ethical dilemmas (which are always caused by either insufficient information or circumstances so contrived that they break our brittle, expert-system-like moral sense) towards the “Virtue Ethics” of the ancient Greeks.

The Consequentialists argue that consequences are what matters (which I agree with) except that it is impossible to calculate all the consequences – particularly when our self-deception (see Trivers, etc.) has evolved to enable our selfishness. The Deontologists argue that rules are what matters – which I also agree with, because rules are what produce the best consequences despite (or, more accurately, because of) our not being able to calculate them. That probably makes me a consequentialist – but I *really* hate being lumped with the short-sighted reductionists who can’t see their way out of the “moral dilemmas” caused by not being able to explain why you can/should kill one person to save five in the trolley problem but can’t/shouldn’t in the case of an involuntary organ donor and five dying patients (it’s because of the context that you stripped out, you . . . you . . . reductionists! ;-)).

Over the next few posts, I’m going to try to cover why context is king (and why context-sensitivity is *totally* independent of/orthogonal to “moral relativism”) and try to explain how Haidt’s top-level definition can be grounded to our moral sense by extending my new board game analogy.


Responses

  1. Hi Mark,

    The video game Fallout 3 had a karma mechanic – you could do so many good deeds that you became a messiah of the wasteland.

    It also meant you could, at that point, kill someone who was just innocently standing there…and remain a messiah of the wasteland. You’d simply banked so much karma.

    Which is to say, rules that allow some kind of point expenditure to bypass other rules easily become ‘holes’ in the game, and play can fall out of the game through them while still seemingly carrying the credibility of playing the game. What this means is that whatever your intent with the game, if you have a hole in the structure someone can subvert the intent of the game with it. Further, if other people thought your intent a credible one and joined the game, they might think someone who’s slipped through a hole is still playing the game ‘as intended’ when actually they aren’t – so those players are effectively being conned. You have to be really careful to avoid any holes that allow play to fall out into areas that are not at all your intention in making the boardgame.

    With reductionists, well, granted, they do shift goalposts by effectively changing the boardgame we’re playing when engaging the situation. You give a solution for one game and then they switch games and say ‘well, how does that solution work? It doesn’t!’. As you say, cutting out contexts – cutting out one game and replacing it with another.

    But that’s only partly fallacious of them. One might take their point that maybe in one boardgame shooting the sniper is alright, but how does that justify it – it’s just a boardgame?

    Yes, here I am undermining my own concept, but it’s because I want to be honest in delivering it and ensure it’s not taken as more than it is.

    The idea of a boardgame is that we cease to have people who on a whim command our lives (or to be more exact, on a whim they can command people who are both armed and loyal to them to brutalise us, so we obey first out of fear, then we obey out of habit). Instead of whim, we have rules.

    However, this is no better than the whimful tyrant if people are forced to play the game.

    So the key element of boardgames is that people only play if they wish to consent to it. It’s a radically different paradigm to the one governments work from.

    So in the end the sniper shooting is about as supported as the number of people who bought into your game (somewhat like voting). And not everyone will (probably).

    The key element is not so much justification as it is transparency and consent, rather than someone being put into power and at their whim doing as they wish. Currently we have laws, and laws sort of do that, but not very well – their wording is often ambiguous and can be subverted (i.e., they have holes in them), the laws aren’t well known and so lack transparency (apparently it’s good enough to say ‘ignorance of the law is no excuse’ with a straight face), and the laws are forced onto people whether they consent or not. Further, laws tend to be about what you can’t do – they offer no way of earning anything for yourself, you are left to your own devices, but they do get in the way of you figuring out how to earn anything for yourself, even as the rich tend to use their wealth to avoid such restrictions. It’s dreadful design that slowly erodes freedom without bothering to offer any compensation for that.

    I’d also get into ‘false positive’ boardgame policies – i.e., let’s take an extreme example of a false positive to highlight it – the sniper is trying to shoot some midgets who are threatening children amongst them. But from the outside maybe it looks like he’s just threatening children and it’s okay to kill him. BUT this is another kettle of fish that could go on for some time.

    I’m enthusiastic for the boardgame sort of model, and ensuring it’s not seen as more than it is might dampen enthusiasm. But I feel it’s important to be transparent about what it is and isn’t.

    • I agree strongly with your last point.

      The Fallout 3 karma problem could be solved by more interesting formulaic procedures, like:

      after murder of an innocent for no reason:
      new KARMA = (KARMA * 0.001) - 100
      if new KARMA < 0, punish appropriately and set KARMA = 0
      if new KARMA > 0, set KARMA = 0

      i.e. a true messiah gets away with it but is back to karma zero; everyone else is going to be punished according to how negative their karma went
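      For what it’s worth, here is a minimal runnable sketch of that adjustment in Python. The function name is made up, and the 100,000 threshold is just what falls out of the formula above: karma must be at least 100,000 for (KARMA * 0.001) - 100 to be non-negative. Nothing else is assumed.

      def karma_after_murder(karma: float) -> tuple[float, bool]:
          """Apply the 'murder of an innocent for no reason' rule.

          Returns the player's new karma (always reset to zero) and whether
          they should be punished. Only someone with at least 100,000 banked
          karma (a 'true messiah') escapes punishment, and even they lose
          everything they had banked.
          """
          adjusted = karma * 0.001 - 100
          punish = adjusted < 0   # almost everyone lands here
          return 0.0, punish      # karma is wiped either way

      For example, karma_after_murder(500) gives (0.0, True): punished. karma_after_murder(250000) gives (0.0, False): the messiah gets away with it, but is back to karma zero.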

      Your point about holes is effectively what Gödel’s incompleteness theorem and Rice’s theorem are all about. However, being anchored by a clearly defined purpose does ground the problem somewhat.

      All your points about willingly playing vs. being forced to play are good ones. That’s why the goal of the game should be “to make a cooperative social life possible”. People who don’t want to play make cooperation impossible as much as they can. In order to win, you need to woo people into playing the game enthusiastically, not force them. We’ve allowed our creations (governments, corporations, the 1%) to get too big and powerful to easily prevent them from being selfish (over-optimizing themselves and their existence by squeezing us and forcing us to do things). Justification is just a ploy to keep the unhappy cooperating.

      One of the rules that we desperately need to have is allowable distributions of power. Laws against monopolies are just the faintest beginnings of those — but we need them yesterday.

      The false positives are scope problems. You’ll always have them and always attempt to resolve them by trying to determine who was aware of what when. Imperfect knowledge is a fact of life that all systems are going to have to deal with.

