Posted by: Mark Waser | Mar 23, 2011

Are intelligence and values independent of each other? (Intelligence vs Wisdom 2)


In Intelligence vs Wisdom 1: Superintelligence does not imply benevolence, I mentioned that Fox and Shulman’s presentation included an interesting (and important) philosophical digression. When pondering whether “intelligent reflection might cause the AI to intrinsically desire human welfare, independently of instrumental concerns”, they state that David Chalmers’s philosophical analysis of the Singularity

notes this question, and considers both what he calls the “Humean possibility,” in which a system’s intelligence is independent of its values, and the “Kantian possibility,” in which many extremely intelligent beings would converge on (possibly benevolent) substantive normative principles upon reflection.

By my reading, however, Chalmers addresses a very different question, both because he never includes the very problematic clause “independently of instrumental concerns” and because he addresses rationality as well as intelligence, saying

In philosophy, David Hume advocated a view on which value is independent of rationality: a system might be as intelligent and as rational as one likes, while still having arbitrary values.  By contrast, Immanuel Kant advocated a view on which values are not independent of rationality:  some values are more rational than others.  If a Kantian view is correct, this may have significant consequences for the singularity.  If intelligence and rationality are sufficiently correlated, and if rationality constrains values, then intelligence will constrain values instead.

Chalmers, like Fox and Shulman, favors the Humean view but repeatedly comes back to the correlation between intelligence and rationality, and this is where the crux of the matter truly lies.  Arguably, intelligence and rationality lie on opposite sides of the Humean “is-ought” divide: intelligence is knowledge of what *is*, what could be, and how to get there, while rationality is taking those actions which are most consistent with fulfilling your goals (i.e. what you *should* do).  Intelligence is totally independent of your goals, desires, and values (and could be said to be innately passive).  Rationality *requires* action towards your goals and desires and, to be effective, requires knowledge and intelligence.  There is a very strong correlation between intelligence and rationality (and values), but the correlation is entirely due to the latter’s total dependence upon the former.
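
Put schematically (my own shorthand, not Hume’s or Chalmers’s notation), intelligence supplies a goal-free predictive model, while rationality is a choice rule that cannot even be stated without a goal:

$$ \text{intelligence:}\quad M(o \mid a) \approx \Pr(\text{outcome } o \mid \text{action } a) $$

$$ \text{rationality:}\quad a^{*} = \arg\max_{a} \sum_{o} M(o \mid a)\, U_{g}(o) $$

The model $M$ makes no reference to any goal; the choice $a^{*}$ is simply undefined until a utility $U_{g}$ over some goal $g$ is supplied. The dependence runs one way only.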

The problem here is that we are all used to thinking anthropomorphically.  Sane (non-monomaniacal) humans rarely have a single overriding goal for very long (except, perhaps, where survival is involved).  Even though we believe that we value our values for themselves, in reality this is highly unlikely.  Human values are fundamentally instrumental, in that what we value is what is good for fulfilling all of our goals.  One of the reasons why discussions of machine intelligence and values seem surreal to many people is that machines, unlike people, can be created with only one goal and can have radically different values based upon that goal.

The best way to clarify this is to consider the relationship between intelligence and the value of smoking.  These days, most people consider smoking to be rather stupid (unintelligent); but if your only goal is immediate gratification and nothing else, then smoking is a fine idea.  It is only when one of your goals is to be healthy and to live a long time that smoking becomes an irrational act.  The important assumption that most people make, which drives the opinion that smoking is “stupid”, is that everybody shares the goals of being healthy and not dying too soon; indeed, most people consider it merely ironic, not stupid, when a person dying of lung cancer keeps smoking.
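
A toy sketch makes the point concrete (the numbers and goal names are invented purely for illustration): the same decision procedure, given the same facts about smoking, returns opposite verdicts depending only on which goals it is handed.

```python
# Hypothetical payoffs: how each action scores against each goal.
OUTCOMES = {
    "smoke":   {"immediate_pleasure": 1.0, "long_term_health": -1.0},
    "abstain": {"immediate_pleasure": 0.0, "long_term_health":  1.0},
}

def rational_choice(goal_weights):
    """Return the action that maximizes goal-weighted value."""
    def value(action):
        return sum(goal_weights.get(goal, 0.0) * v
                   for goal, v in OUTCOMES[action].items())
    return max(OUTCOMES, key=value)

# Only goal: immediate gratification -> smoking is the "rational" act.
print(rational_choice({"immediate_pleasure": 1.0}))   # smoke

# Add the goals of health and long life -> smoking becomes irrational.
print(rational_choice({"immediate_pleasure": 1.0,
                       "long_term_health": 2.0}))     # abstain
```

Nothing about the decision machinery changed between the two calls; only the goal set did.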

Fox and Shulman attempt to use Hutter’s AIXI model as an argument for the Humean view, but it is actually a much better argument for the Kantian view.  They argue that, as a compactly specified superintelligence, AIXI has “no room” for Kantian revision and “Instead, it would preserve arbitrary values in most situations”.  The problem with this argument is that it rests on a fundamental misunderstanding of exactly what values are and how AIXI works.
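
For readers who have not met the model: AIXI chooses each action by maximizing expected total reward over every environment (program $q$) consistent with its interaction history, weighted by simplicity. A standard rendering of its action selection (after Hutter; paraphrased from his formulation rather than quoted) is:

$$ a_k = \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \left( r_k + \cdots + r_m \right) \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)} $$

Note that the maximization machinery is indifferent to what the rewards $r_i$ encode; everything turns on where those rewards come from, which is exactly the point developed below.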

What an entity values is entirely dependent upon what its goals and desires are.  Values are simply heuristics (“rules of thumb”) marking the path to those goals and desires.  AIXI has no need for heuristics (values) because it already knows the optimal path to every goal and desire.  On the other hand, if you asked AIXI what to do to prepare for future unspecified goals, it would have to summarize and condense its infinite knowledge down to heuristics/values in order to communicate them in finite time.  Even in this case, however, it is not making anything like a Kantian revision; it is merely summarizing what is already “baked in”, and the values it reports will be *anything* but arbitrary.

If you asked AIXI the best way to exterminate humanity, it would answer with very unsavory values.  However, if you asked AIXI whether smoking was a good idea, it would say “No”, and it would answer most other questions with answers generally compatible with human morality.  The reason is that when a goal is not specified, AIXI’s summary ends up being dominated by instrumental concerns and the need to prepare for future unspecified goals (i.e. to average across all goals, which is wisdom).  In this case, it would come up with values very similar to Omohundro’s drives (minus goal protection) plus a drive for cooperation.
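
A toy simulation (entirely my own construction, with made-up actions and payoffs, not Hutter’s or Omohundro’s math) shows the averaging effect: when the goal is drawn at random, actions that acquire resources and keep options open win on average, no matter which goal is drawn.

```python
import random

# Each random "future goal" demands some amount of resources and
# freedom of action; each action helps or harms those quantities.
ACTIONS = {
    "acquire_resources": {"resources":  2.0, "options":  1.0},
    "preserve_options":  {"resources":  0.0, "options":  2.0},
    "burn_everything":   {"resources": -3.0, "options": -2.0},
}

def random_goal():
    """A goal is a random demand for resources and freedom of action."""
    return {"resources": random.uniform(0, 1),
            "options":   random.uniform(0, 1)}

def average_value(effects, trials=10_000):
    """Average goal-weighted value of an action across random goals."""
    total = 0.0
    for _ in range(trials):
        goal = random_goal()
        total += sum(goal[k] * v for k, v in effects.items())
    return total / trials

for name, effects in ACTIONS.items():
    print(f"{name:18s} {average_value(effects):+.2f}")
# acquire_resources and preserve_options score well for any goal with
# nonnegative demands; burn_everything never does.
```

The instrumental values fall out of the averaging itself, not out of any particular goal, which is the sense in which they are anything but arbitrary.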

This is why it is critically important to draw a distinction between intelligence (knowing how to fulfill goals) and wisdom (committing to fulfilling as many goals as possible).  Intelligent machines with malevolent goals could easily be the end of humanity, in much the same way that humans are killing the rain forests.  Wise machines would value us and want to keep us around, just as we should want to save the rain forests.

This distinction is particularly critical when evaluating proposals for “safe” strategies for designing intelligent machines — which will be addressed in the next post.
