Posted by: Mark Waser | Oct 22, 2010

3 Disastrously Wrong Assumptions In The “Friendly AI” Canon


1.  The top-most goal must be able to be optimized

2.  The top-most goal must reference humans or humanity

3.  Friendly AI is tremendously difficult


Responses

  1. Hi, thank you for the new posts! I’m enjoying every one.

    In defense of the Less Wrong community, they do useful work, the most prominent achievement being Timeless/Updateless Decision Theory. Assumptions (2) and (3) that you point out lead them to do less useful work than they could, and what they do beyond decision theory is presented less clearly than it could be. (1) is, I think, a theory-vs.-practice issue, i.e. the mistake of assuming that the easiest way to formalize a problem is the best way to deal with it in practice.

