Posted by: Mark Waser | Dec 31, 2011

The Three “Big” Questions of AI Morality


A recent post made me realize how much discussion time and effort is “wasted” on poorly focused debate over whether machine morality is even possible (as opposed to “spent productively” – according to my preferences – on specifying what machine “morality” might be and how it might be implemented).  Much of this “problem” is due to definitional differences: each person’s position is logically and factually correct given their own definitions, and incorrect or even nonsensical given someone else’s.

<Note: this post has been edited twice, once to replace my rephrasing of Matt Mahoney with phrasing that he prefers, and once to remove the specific views of an individual who wishes not to be referenced.>

For example, Matt Mahoney argues that “morality is a set of opinions about right and wrong.  Obviously morality exists because opinions exist.  However there is no “absolute” morality as a law of nature.  Nothing is intrinsically right or wrong, although of course many people feel this way to justify their opinions.”   My reply to that is “OK, so let’s create our machines so that it is a shared opinion”.

Others have argued that there is a distinction between something that is simply efficient and thus gives rise to cooperative mechanisms, and something that is morality.  This raises the questions “What is the line/distinction between cooperative mechanisms and morality?” and “Why can’t we ensure that machines will be on the morality side of the line (if there is indeed a valid distinction)?”

Two frequently cited distinctions are the necessity of emotions (which many argue machines cannot and will not have) and the presence of biological constraints that machines (and even future humans) can and most likely will evade.  In this case, I can either ask, “Of what value is morality if it is impossible for advanced entities?” or simply reply, “OK, so let’s create our machines so that they follow some cooperative mechanism that always produces the same answers as morality would, EVEN IF somehow these cooperative mechanisms aren’t actually morality”.

One way to begin to address these definitional issues is with the first “big” question.  As Eray Ozkural’s post phrased it:
1. Is it possible that a technological civilization could evolve without any concept of morality whatsoever?

This is answered tautologically by noted expert Jonathan Haidt’s “functional definition” of morality: “Moral systems are interlocking sets of values, virtues, norms, practices, identities, institutions, technologies, and evolved psychological mechanisms that work together to suppress or regulate selfishness and make cooperative social life possible.”  By this definition, cooperative life (civilization) is not possible unless you have some form of a moral system (if only in the simplest form of shared values).

Answering yes to the first big question because you dispute the definition of morality simply leads to another form of the question:

1.  Is it possible that a technological civilization could evolve without interlocking sets of values, virtues, norms, practices, identities, institutions, technologies, and evolved psychological mechanisms that work together to suppress or regulate selfishness and make cooperative social life possible?

If you still answered yes, then humanity has a huge problem and this discussion is NOT the answer to AI safety.  If you answered no, then assume that we are talking about a system of interlocking sets called M* instead of morality, and let us proceed.  M* can be Matt Mahoney’s shared set of opinions or something on the other side of the morality line – but as long as it leads to AI safety, specifying and implementing such a system has to be a MAJOR priority (one that shouldn’t be derailed by definitional arguments).

The remaining two “big” questions follow and will be the subject of (many ;-)) future posts.

2.  Can we specify and implement an M* system that is stable, self-correcting AND INCLUSIVE in the face of greater and greater scope, power and diversity?

3.  How is current human morality different from M*, and how might we better our lives by learning from M* (ESPECIALLY if you agree with the viewpoint that future humans may not be subject to human morality — much less machines and corporations)?


Responses

  1. Actually my position is that morality is a set of opinions about right and wrong. Obviously morality exists because opinions exist. However there is no “absolute” morality as a law of nature. Nothing is intrinsically right or wrong, although of course many people feel this way to justify their opinions. Your opinion may vary.

  2. Morality, at the brain level, is simply a reaction to a situation. Every conceivable society would have opinions about situations and acts, and so would have preferences among them.

    That is all morality is.

    Society could easily deify selfishness and thrive, as long as the incentives and rewards were such that people cooperated enough to progress.

