Posted by: Mark Waser | Oct 24, 2010

Absolute Morality


Assume a community

  • with the primary goal of reducing conflict, friction, and lack of coordination between its members (and between members and non-members)
  • composed of all entities willing to take on the top-level goal of not defecting from that community.

You can assume a secondary goal of assisting with members’ goals if you wish (or not).

Assume that the community is reasonably effective at achieving its primary goal (and getting more effective all the time).

Assume that the community practices altruistic punishment upon non-members and, even more severely, upon defectors.

Under what conditions would an entity *NOT* want to join the community?  Remember that reducing interference and avoiding punishment are instrumental goals (universal sub-goals) of every other goal.

It seems as if there are but two types of edge cases:

  • an entity with an imperative goal that it must protect and fulfill, and that unavoidably conflicts with the goals of the majority of entities (e.g. paper-clipping the universe), so that it will be subject to interference and punishment regardless of what it does
  • a selfish, short-sighted entity that believes the community will be so ineffective at imposing costs upon it that it sees no reason to subordinate its desires to anything else

The first entity should, and will, be opposed by every rational entity that does not share the problematic goal.

The second entity should, and will, be opposed by every truly rational being with sufficiently broad vision, low time-discounting, and high wisdom.
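To make the time-discounting point concrete, here is a minimal sketch in Python.  The payoff numbers (a one-time windfall of 10 for defecting, an ongoing punishment cost of 1, a steady cooperation payoff of 1) and the function names are purely illustrative assumptions, not anything derived from the argument above; the only point is that the discount factor alone can flip which strategy looks better.

```python
# Toy illustration: a defector grabs a one-time gain and then pays an ongoing
# punishment cost; a cooperator earns a steady, modest payoff from reduced
# friction and community help.  All payoff numbers are made up for illustration.

def discounted_value(payoffs, discount):
    """Discounted sum of an infinite stream: the leading entries of `payoffs`
    occur once each; the final entry then repeats forever."""
    total = 0.0
    factor = 1.0
    for payoff in payoffs[:-1]:
        total += factor * payoff
        factor *= discount
    # The repeating tail is a geometric series: payoff * factor / (1 - discount).
    total += payoffs[-1] * factor / (1.0 - discount)
    return total

def defect_value(discount):
    # One-time windfall of 10, then altruistic punishment of -1 per step forever.
    return discounted_value([10.0, -1.0], discount)

def cooperate_value(discount):
    # Steady payoff of +1 per step from reduced interference and assistance.
    return discounted_value([1.0], discount)

for discount in (0.1, 0.5, 0.9, 0.99):
    d, c = defect_value(discount), cooperate_value(discount)
    better = "defect" if d > c else "cooperate"
    print(f"discount={discount:<5}  defect={d:8.2f}  cooperate={c:8.2f}  -> {better}")
```

With these made-up numbers, an entity that heavily discounts the future (discount factor 0.1 or 0.5) computes that defection pays, while one with sufficiently broad vision (0.9 or 0.99) computes the opposite; that is exactly the difference between the second entity and a truly rational being.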

In both cases, opposition need not take the form of open confrontation; it can be limited to actions that are untraceable.  In both cases, the entity will be impeded at every step, can show no weakness, will get no help, and will never know when an even more powerful entity might show up to end its efforts forever.  Ever notice how the nasty, powerful guy never gets an unexpected break, and how many things he expected unexpectedly fall through?

Is anyone going to be stupid enough to create a type 1 entity and yet smart enough to make it capable enough to be relatively unstoppable?

Is it truly possible to be long-sighted enough to be dangerous *to everyone* without being long-sighted enough to realize the benefits of *NOT* being dangerous?

The key to morality is simply NOT defecting from the community of moral entities.  It *IS* just that simple and obvious.  (And if you don't think so, please comment and show me where I'm wrong.)
