Posted by: Mark Waser | Dec 11, 2012

An Early New Year’s Resolution


After nearly a year of writing only for conferences (and Facebook), I’ve decided that I really need to get (back?) into the habit of writing more regularly.

In an attempt to kick up both the quantity and the quality, I’ve decided to try to publish in an e-zine about once a week on average, as well as to post here twice weekly.

My first two articles are already up on Transhumanity.net:

Coming up (hopefully soon):

  • Backward Induction, Part 2
  • Reclaiming the Ultimate
  • Coherent Extrapolated Volition: The Next Generation

By invitation, I’ll also be submitting extended versions of two of my conference presentations to journals over the next two months:

And, finally, I hope to attend and present at:

Hopefully, I’ll be adding more journals (and maybe a conference or two) as time goes on.  I’ll be sure to publicize everything and provide pointers here (speaking of which, I’ve finally updated My Papers to include my paper and presentation from Biologically Inspired Cognitive Architectures 2012 in Palermo, Sicily, in November).

Posted by: Mark Waser | Jan 25, 2012


There is much less “information processing” than it is assumed by the “life-as-information” or “life-as-computation” metaphor that has dominated biology for the last 50 years.  Constructions at all levels, from protein molecules, through cells, tissues, individual organisms, up to social institutions and culture represent embodied knowledge that has been accumulating and retained in evolution by natural selection.  Triggering of predetermined responses, and, indeed, selection from them, seems to be a more appropriate description than information processing.

Ladislav Kováč, Life, chemistry, and cognition
Posted by: Mark Waser | Jan 22, 2012

Value is Simple and Robust


Over at Facing the Singularity, Luke Muehlhauser (LukeProg) continues Eliezer Yudkowsky’s theme that Value is Fragile with Value is Complex and Fragile.  I completely agree with his last three paragraphs.

Since we’ve never decoded an entire human value system, we don’t know what values to give an AI. We don’t know what wish to make. If we create superhuman AI tomorrow, we can only give it a disastrously incomplete value system, and then it will go on to do things we don’t want, because it will be doing what we wished for instead of what we wanted.

Right now, we only know how to build AIs that optimize for something other than what we want. We only know how to build dangerous AIs. Worse, we’re learning how to make AIs safe much more slowly than we’re learning to how to make AIs powerful, because we’re devoting more resources to the problems of AI capability than we are to the problems of AI safety.

The clock is ticking. AI is coming. And we are not ready.

. . . except for the first clause of the first sentence.

Decoding “an entire human value system” is a red herring — a rabbit-hole whose exploration could last well past our demise.  Humans want what they want based upon genetics and environment, history and circumstances.  Name any desire and you could probably easily find either a human that desires it or a set of conditions that would make any rational human desire it.

Worse, value is *entirely* derivative.  “Good” and “bad”, “ought” and “ought not” depend solely upon goals and circumstances.  Humans started as primitive animals with the sole evolutionary pseudo-goals of survive and reproduce; developed broad rational sub-goals like self-improvement and efficiency and narrow context-sensitive sub-goals (like cherishing infants vs. exposing them or going all-out for change vs. only moving ahead cautiously); and have undergone a great deal of goal/sub-goal inversion to create the morass that we call “human morality”.

So how can I argue that value is simple and robust?  As follows . . . .

Value is the set of things that increase the probability of fulfillment of your goal(s).  The details of what has value vary with circumstances and can be tremendously complex to the extent of being entirely indeterminate — but value *always* remains the set of things that increase the probability of fulfillment of your goals.  It’s a simple concept that never changes (what could possibly be more robust?).
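
As a toy illustration of that definition (every name in it is invented for the example), “having value” relative to a goal just means “raising the probability that the goal is fulfilled”:

```python
# A toy sketch of the definition above (the names Goal, has_value and stay_fed are
# invented for this example): something "has value" relative to a goal exactly when
# it raises the probability of fulfilling that goal.

from typing import Callable, FrozenSet

Goal = Callable[[FrozenSet[str]], float]  # maps a world-state to P(goal is fulfilled)

def has_value(thing: str, world: FrozenSet[str], goal: Goal) -> bool:
    """True iff adding `thing` to the world raises the probability of goal fulfillment."""
    return goal(world | {thing}) > goal(world)

# Example: a toy goal whose fulfillment probability rises with the food available.
def stay_fed(world: FrozenSet[str]) -> float:
    return min(1.0, 0.1 + 0.3 * sum(1 for item in world if item.startswith("food:")))

print(has_value("food:bread", frozenset({"shelter"}), stay_fed))  # True
print(has_value("rock", frozenset({"shelter"}), stay_fed))        # False
```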

Now, it might seem that I’ve merely pushed the problem back one level.  Instead of worrying about decoding an entire human value system, we now have to decode the human goal system (Yudkowsky’s Coherent Extrapolated Volition).  But, as I indicated earlier, there is no possible single human goal system because we have evolved numerous context-sensitive goals that frequently conflict with each other, and many individuals have even promoted many of them to their top-most goal.  Thus, it seems that the only conclusion that CEV could converge to is “we want what we want”.

Except . . . . everyone pursuing their own goals, stumbling over and blocking each other, not taking advantage of trade and efficiencies of scale — is clearly inefficient and less than rational.  So we “obviously” also “want” something like the ability to work together to minimize conflicts and inefficiencies and maximize trade and efficiencies of scale.  Which then requires a common goal or mission statement.  Something generic that we all can buy into like:

Maximize the goal/desire fulfillment of all entities as judged/evaluated by the number and diversity of both goals/desires and entities.

We want what we want and this community goal is probably as specific as we can get.  But what is amazing is what you get if you take this goal as an axiom and follow it through to its logical conclusions.

“Bad” goals like murder are correctly assessed by a simple utilitarian calculation on the number and diversity of goals and entities, which yields a +1 for the murderer’s goals but a very large negative number for the victim’s goals, not to mention a decrease in the potential for diversity.  Ethical debates like abortion come down to circumstances.  And simple-minded utilitarian conundrums like “Why isn’t it proper to kidnap a random involuntary organ donor off the street to save five people?” are answered by viewing the larger picture and seeing that allowing such a thing would waste a tremendous amount of resources (that could have been used to fulfill goals) in self-defense arms races.
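
Here is an equally toy sketch of that kind of assessment (again, an illustration only; the scoring of “number and diversity of goals and entities” below is an assumption I’m making for the example, not a worked-out metric):

```python
# A toy sketch of the assessment above (illustrative only: the names and the scoring
# of "number and diversity of goals and entities" are assumptions made for the
# example, not a worked-out metric).

from dataclasses import dataclass, field

@dataclass
class Entity:
    name: str
    goals: set = field(default_factory=set)

def diversity_score(entities):
    """Count the number and diversity of entities and their goals in a world-state."""
    distinct_goals = set().union(*(e.goals for e in entities)) if entities else set()
    return len(entities) + len(distinct_goals) + sum(len(e.goals) for e in entities)

def assess(goals_fulfilled, world_before, world_after):
    """Goals the action fulfills, plus (usually minus) its effect on the community."""
    return goals_fulfilled + diversity_score(world_after) - diversity_score(world_before)

before = [Entity("murderer", {"kill B"}), Entity("B", {"live", "paint", "travel"})]
after = [Entity("murderer", {"kill B"})]  # the victim and every goal they held are gone

print(assess(1, before, after))  # +1 for the murderer, a much larger negative overall
```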

Even if the above goal does lead to an AI that doesn’t itself value everything that we value, the AI will still care about our values as much as we care about its values.  Indeed, the optimal trading partner is one who believes that our trash is treasure and whose trash (or output) is treasured by us.  Instead of trying to create an AI that only cares about what we care about, we merely need to create an AI that cares that we care.

Creating an AI as a committed member of the above community creates the balanced safety of John Rawls’ Original Position (with a close variant of the Golden Rule).  The AI is not going to try to alter or spoof our utility functions because it knows that it does not want us to alter or spoof its utility function.  The AI is also not going to over-optimize the world with a single goal because it is going to want to promote *every* goal so that its own goals are fulfilled.

A community-biased AI will be a safe AI, as defined by Yudkowsky and Muehlhauser, because it “shares all your judgment criteria” (because it is fulfilling YOUR goals — so that you will help fulfill its goals) and “you can just say ‘I wish for you to do what I should wish for‘” (and it will do what you mean, rather than merely rules-lawyering what you said — because that would only result in you and all your friends “helping” it to NOT fulfill its goals).  Rather than insisting upon an AI with a set of goals that we have no hope of defining in this lifetime, we should give the AI the exact same simple over-riding goal that we all should follow — be a true member of the community.

Social psychologist Jonathan Haidt contends that the function of morality is “to suppress or regulate selfishness and make cooperative social life possible”.  This is *precisely* what we need to be safe in the presence of AI.  Yet, the “Friendly AI” contingent is unwilling to extend the same safety from humans to the AI.  This enormous selfishness must be suppressed (and soon) or those insisting upon it may doom us all.

Posted by: Mark Waser | Jan 20, 2012


Atheists are routinely asked how people will know not to rape and murder without religion telling them not to do it, especially a religion that backs up the orders with threats of hell.  Believers, listen to me carefully when I say this: When you use this argument, you terrify atheists.  We hear you saying that the only thing standing between you and Ted Bundy is a flimsy belief in a supernatural being made up by pre-literate people trying to figure out where the rain came from.  This is not very reassuring if you’re trying to argue from a position of moral superiority.
Sydni Moser

Posted by: Mark Waser | Jan 2, 2012

Tech Support Hell


UPDATE:  I put a link to this post in the response portion of their “Dear Valued Linksys Customer, Thank you for contacting Linksys Technical Support. ” e-mail.  I wonder if anyone will actually click on it?  I’ll post any responses, although they could just as easily post them in the comments themselves (imagine that — a website with self-service).

All I needed was a driver (to replace a corrupted file) which isn’t available via self-service from their website (for some unfathomable reason).

[2012-01-02 08:29:28] Please wait… Your number in the queue: 1
[2012-01-02 08:29:28] A representative will be joining you shortly.
[2012-01-02 08:29:31] Shehbaz (71592) has joined this session.
[2012-01-02 08:29:41] Shehbaz (71592): Thank you for contacting Cisco live chat for Linksys products. My name is Shehbaz (Badge ID: 71592) How may I assist you today ?
[2012-01-02 08:29:45] Mark Waser: Hi, I’m getting res_dll not found error

<pause>

[2012-01-02 08:31:15] Mark Waser: Hello?
[2012-01-02 08:31:31] Shehbaz (71592): I will certainly help you. But may I ask you a few questions before we proceed? Which country are you located in? May I have your e-mail address and phone number (in case our chat session gets disconnected). Model and Serial Number of the Linksys product.

<Note: all of this information was on a form that I had to fill out before chat was initiated>

[2012-01-02 08:32:54] Mark Waser: USA, mwaser@no.spam, 703-###-####, WRT54GS ver. 6, CGN91F696078

<pause>

[2012-01-02 08:35:56] Mark Waser: bump
[2012-01-02 08:36:40] Shehbaz (71592): Thank you for the information !
[2012-01-02 08:36:54] Shehbaz (71592): Are you running the setup CD to install the router ?
[2012-01-02 08:37:56] Mark Waser: The router has been installed and operational for years. I have changed nothing that I am aware of on the PC (of course, with Windows updates, you never know . . . )

<pause>

[2012-01-02 08:40:10] Shehbaz (71592): Well as the router is working fine as it is already installed the .dll error is related to the system file of Windows. I will suggest you to contact Microsoft related to this error.
[2012-01-02 08:40:11] Shehbaz (71592): According to the serial number (CGN91F696078) provided by you the product is listed as being out of warranty and the warranty on the product was good till Oct 2009 and is no longer eligible for chat support.
[2012-01-02 08:41:31] Shehbaz (71592): Is there any other Cisco product that you need help with?
[2012-01-02 08:41:59] Mark Waser: Wow! <sarcasm>Excellent customer support</sarcasm> Does this mean that I should switch to a new company because I don’t buy a new router often enough for you to help me?
[2012-01-02 08:43:03] Shehbaz (71592): Well this error is not related to the Linksys. It is a Windows operating system related error and we do not have any experties on it. You will have to contact Microsoft for this.

<Yes, let’s blame someone else for the problem . . . except . . . .>

[2012-01-02 08:43:43] Mark Waser: Where can I get a copy of res_dll.dll which is a file that LinkSys originally installed
[2012-01-02 08:45:02] Shehbaz (71592): The file might have gone courroupt. You can simply uninstall the linksys software and reinstall it.

<Oooh, over 15 minutes in but I might be succeeding . . . >

[2012-01-02 08:45:55] Mark Waser: Where on your site can I locate the installation software?
[2012-01-02 08:46:32] Shehbaz (71592): I will provide you the link
[2012-01-02 08:47:06] Shehbaz (71592): Please click on the link below and save it to the desktop location.
[2012-01-02 08:47:09] Shehbaz (71592): http://homedownloads.cisco.com/downloads/WRT54GS_SetupWizard.zip
[2012-01-02 08:49:47] Shehbaz (71592): You can reinstall this software and check if the issue gets resolved. As the device is out of warranty It cannot be supported on chat more than this.

<Shehbaz has now provided me with a link to software to setup the router, NOT to a driver.  I suspect that the driver is included in this software but the first thing it wants is for me to have my computer directly attached to the Internet without the router — which isn’t going to happen without a substantial amount of equipment movement.  Why can’t I just have the driver?  Why isn’t it just available on their website?>

[2012-01-02 08:49:56] Shehbaz (71592): Also please make a note of our phone number where you can connect with our phone support staff where they can offer you technical assistance through various fee-based support options. And the number would be – 1-866-978-1315.
[2012-01-02 08:51:35] Mark Waser: Awesome. Thank you very much. And might I suggest that immediately going to the out of warranty spiel is going to lose Cisco far more money by driving customers away than supporting them
[2012-01-02 08:55:48] Shehbaz (71592): Well I have informed you right at the beginning that the device is out of warrant and cannot be supported on chat but I did provided you the setup wizard and a possible way to resolve this. As per the company policies I am bound to give you the options for phone support or the article to troubleshoot on your own but as the article is not available I provided you the Setup wizard and also the phone support number.
[2012-01-02 08:55:56] Shehbaz (71592): Thank you for giving us an opportunity to serve you through Live Chat Support. For your records a transcript of this chat session will be e-mailed to you. Feel free to contact us if you require further assistance.  Thank you for choosing Cisco and have a great day!

Yeah, a good start to a great day . . . . wasting nearly half an hour NOT being helped with something that *should* have been self-service from the website.  Let me help you with a little bad publicity . . . .

Posted by: Mark Waser | Dec 31, 2011

The Three “Big” Questions of AI Morality


A recent post caused me to realize how much discussion time and effort is “wasted” on poorly focused debate of whether or not machine morality is even possible (as opposed to “spent productively” – according to my preferences – on specification of what machine “morality” might be and how it might be implemented).  Much of this “problem” is due to definitional differences where each person’s position is logically and factually correct given their definitions and incorrect or even nonsensical given someone else’s definitions.

<Note: this post has been edited twice, once to replace my rephrasing of Matt Mahoney with phrasing that he prefers, and once to remove the specific views of an individual who wishes not to be referenced.>

For example, Matt Mahoney argues that “morality is a set of opinions about right and wrong.  Obviously morality exists because opinions exist.  However there is no “absolute” morality as a law of nature.  Nothing is intrinsically right or wrong, although of course many people feel this way to justify their opinions.”   My reply to that is “OK, so let’s create our machines so that it is a shared opinion”.

Others have argued that there is a distinction between when something is simply efficient and thus gives rise to cooperative mechanisms and when something is morality.  This raises the questions “What is the line/distinction between cooperative mechanisms and morality?” and “Why can’t we ensure that machines will be on the morality side of the line (if there is indeed a valid distinction)?”

Two frequently cited distinctions are the necessity of emotions (which many argue that machines cannot and will not have) and the presence of biological constraints that machines (and even future humans) can and most likely will evade.  In this case, I can either ask, “Of what value is morality if it is impossible for advanced entities?” or simply reply “OK, so let’s create our machines so that they follow some cooperative mechanism that always produces the same answers as morality would EVEN IF somehow these cooperative mechanisms aren’t actually morality”.

One way to begin to address these definitional issues is with the first “big” question.  As Eray Ozkural’s post phrased it:
1. Is it possible that a technological civilization could evolve without any concept of morality whatsoever?

This is answered tautologically by noted expert Jonathan Haidt’s “functional definition” of morality that “Moral systems are interlocking sets of values, virtues, norms, practices, identities, institutions, technologies, and evolved psychological mechanisms that work together to suppress or regulate selfishness and make cooperative social life possible.”  By this definition, cooperative life (civilization) is not possible unless you have some form of a moral system (if only in the simplest form of shared values).

Answering yes to the first big question because you’re disputing the definition of morality simply leads to another form of the question.

1.  Is it possible that a technological civilization could evolve without interlocking sets of values, virtues, norms, practices, identities, institutions, technologies, and evolved psychological mechanisms that work together to suppress or regulate selfishness and make cooperative social life possible?

If you still answered yes, then humanity has a huge problem and this discussion is NOT the answer to AI safety.  If you answered no, then assume that we are talking about a system of interlocking sets called M* instead of morality and let us proceed.  M* can be Matt Mahoney’s shared opinion or on the other side of the morality line – but as long as it leads to AI safety, specifying and implementing such a system has to be a MAJOR priority (that shouldn’t be derailed by definitional arguments).

The remaining two “big” questions follow and will be the subject of (many ;-)) future posts.

2.  Can we specify and implement an M* system that is stable, self-correcting AND INCLUSIVE in the face of greater and greater scope, power and diversity?

3.  How is current human morality different from M* and how might we better our lives by learning from M* (ESPECIALLY if you agree with the viewpoint that future humans may not be subject to human morality — much less machines and corporations)?

Posted by: Mark Waser | Dec 20, 2011

A Rationalist Fable


For James Andrix

Once upon a time, there were three orphan sisters, “Connie”, “Libby”, and “Ulla”.  As they were sisters, it was not at all surprising that they had certain traits in common.  For example, all three of them were highly intelligent (and thus, had a large primary term for rationality in their utility function) and all three had a great fondness for strawberry ice cream rather than chocolate or vanilla.

One day, their friend, the wizard “CM”, approached Connie and said “I know that you have a fondness for strawberry ice cream rather than chocolate or vanilla, but would you mind terribly if I cast a spell to change this fondness to chocolate with absolutely no other effect?”  Now, Connie was the oldest and responsible for her sisters and her life experiences related to change and its effects caused her to fully realize the costs and dangers of change.  As a result, she had developed a bias against change and a large utility function term to protect against it.  So, she said “Yes, I *do* mind.  Please don’t cast that spell.”

But, CM was adamant.  He pointed out that, as far as she knew, there was no clear reason for not making the change.  But Connie replied that her priors were indeed a clear reason for not making *any* change that wasn’t justified by her utility function.  So CM said “Fine, then how about I give you a choice . . . either you allow me to cast the taste-change spell –OR– I’m going to turn you into a newt”.  Now, since Connie’s largest utility function term was rationality and the expected cost of being turned into a newt was far greater than the expected cost of such a small change to her utility function, she rationally chose to allow CM to alter her tastes because she realized that the rationality term outweighed the protection against change term.  Of course, she also felt violated by CM’s ultimatum and stomped off to sulk.

CM then approached Libby and said “I know that you have a fondness for strawberry ice cream rather than chocolate or vanilla, but would you mind terribly if I cast a spell to change this fondness to chocolate with absolutely no other effect?”  Now, Libby had been mostly sheltered from the world of detrimental changes by Connie and her life experiences related to change and its effects caused her to fully realize the true benefits and opportunities of change and recognize how adamantly resisting change can frequently lead to a sub-optimal result.  As a result, she had developed a bias for change and a utility function term to embrace it.  So, she said “Silly wizard, of course I don’t mind.  Please do cast the spell.”  And so, the spell was cast and Libby went off to plan the acquisition of chocolate ice cream so she could experience her brand new fondness.

Finally, CM approached Ulla and said “I know that you have a fondness for strawberry ice cream rather than chocolate or vanilla, but would you mind terribly if I cast a spell to change this fondness to chocolate with absolutely no other effect?”  Now, Ulla had grown up in an environment shaped by both Connie and Libby and her life experiences related to change and its effects caused her to have a fully balanced view which neither promoted change for its own sake nor protected against it without additional reasons.  So, she said “I don’t know whether I mind or not.  I’m inclined to allow you to cast the spell because you’re my friend and you apparently have some reason for the request but I’d prefer to know that reason before I give you my final answer.”  But CM said “I’m quite sure that you would be much happier not knowing the reason and allowing me to cast the spell.”  So Ulla said “Okay, you’re my friend and I trust you and your judgment so I don’t mind if you cast the spell.”  So CM cast the spell and Ulla wandered off wondering why CM thought that she would be happier with the change.

That night, CM threw a giant surprise party for Connie, Libby, and Ulla.  Unfortunately, it was a huge disaster.  Connie was still angry at CM for insisting on casting the spell on her and started storming out.  CM, who had ordered strawberry ice cream for the party but ended up with chocolate due to a shipping error, tried to apologize and offered (begged to be allowed, actually) to reverse the spell.  Sadly, Connie’s “protection against change” utility function term nixed the *change* back and she therefore refused and continued to be angry with CM.  CM was very sad because he hadn’t realized the magnitude of Connie’s “protection against change” term and had predicted that she would have forgiven him and been happy with him once she learned all the details that CM was aware of.

Even though Libby and Ulla tried many times to intercede and repair the friendship between CM and Connie, Connie forever after refused to speak with CM because the negative utility of her “protection against change” term’s interaction with a potential CM friendship was larger than the linked positive utility from her “need for friends” term.

Liberal “Libby” Rationalist, Ultimate “Ulla” Rationalist and Change Monster “CM” the Wizard lived mostly happily ever after (except for CM’s regrets that he hadn’t handled “The Strawberry Situation” better).  However, Conservative “Connie” Rationalist continued to shrink her social (and moral) circle whenever a friend appeared to threaten her “protection from change” term and therefore missed many of the joys and advantages of community life (while, of course, avoiding many of the sorrows and disadvantages).

The moral(s) of the story . . . . (are left to the reader)

Posted by: Mark Waser | Dec 12, 2011

Goal Statement


In order to design and build ‘the real future’, we need systems, strategies and teams of people that can respond to constantly changing contexts. — Rachel Armstrong, point 19 in Beyond Sustainability, 25 points (Intro/1, 2-5, 6-11, 12-16, 17-20, 21-25)

Posted by: Mark Waser | Nov 26, 2011

Free Will Reference List


I’ve updated my list of “Free Will” Resources to include the recent flurry of activity in journals, the press, the blogosphere, and YouTube.

Posted by: Mark Waser | Oct 10, 2011

Who says good science can’t also be fun?


WHEN ZOMBIES ATTACK!: Mathematical Modelling of an Outbreak of Zombie Infection
Philip Munz, Ioan Hudea, Joe Imad, Robert J. Smith
Chapter 4 (pp. 133-150) in
Infectious Disease Modelling Research Progress (ISBN 978-1-60741-347-9)
Editors: J.M. Tchuenche and C. Chiyaka
© 2009 Nova Science Publishers, Inc.

From the paper:

This is, perhaps unsurprisingly, the first mathematical analysis of an outbreak of zombie infection. While the scenarios considered are obviously not realistic, it is nevertheless instructive to develop mathematical models for an unusual outbreak. This demonstrates the flexibility of mathematical modelling and shows how modelling can respond to a wide variety of challenges in ‘biology’.

The key difference between the models presented here and other models of infectious disease is that the dead can come back to life. Clearly, this is an unlikely scenario if taken literally, but possible real-life applications may include allegiance to political parties, or diseases with a dormant infection.
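
For the curious, here is a rough numerical sketch of the kind of SZR-style compartmental model the paper works with (a simplified form of the basic model; every parameter value below is made up for illustration):

```python
# A rough sketch of an SZR-style compartmental model in the spirit of the paper:
# susceptibles (S) become zombies (Z) through contact, humans can destroy zombies,
# and the removed (R) can come back as zombies. The equations are a simplified form
# of the paper's basic model and every parameter value here is made up.

def szr_step(S, Z, R, dt=0.01, beta=0.0095, alpha=0.005, zeta=0.0001, delta=0.0001):
    dS = -beta * S * Z - delta * S                 # bitten, or dying of natural causes
    dZ = beta * S * Z + zeta * R - alpha * S * Z   # new zombies + resurrections - destroyed
    dR = delta * S + alpha * S * Z - zeta * R      # natural deaths + destroyed zombies
    return S + dS * dt, Z + dZ * dt, R + dR * dt

S, Z, R = 500.0, 1.0, 0.0
for _ in range(1000):                              # simple Euler integration
    S, Z, R = szr_step(S, Z, R)
print(round(S), round(Z), round(R))                # without intervention, the zombies tend to win
```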
