Posted by: Mark Waser | Jun 13, 2010

Mailbag 2b: Intent vs. Consequences and The “Danger” of Sentience

Lukasz Stafniak said:

I’m pondering your notion of “(im)moral intent”. Kant, I guess, is a deontologist, as in “only the intentions can be good or bad”, and you seem to be a consequentialist, as in “only the actions are good or bad, judged only by their fruits”, but not to the bone.

I’m not aware of any term for what Gaia and I actually are; maybe “expectationalist” or “probabilist”, or whatever lies exactly between those two.  Even if you have good intentions, doing something stupid that hurts someone is not moral.  Similarly, if you do something whose expected result was immoral but you lucked out and an incredibly “moral” result occurred (as judged solely by how it would have been evaluated had it been both your intention *and* the expected result), then that act still counts as immoral, because you are likely to do it again and get a horrible result the next time.  Effectively, intentions should be evaluated by their most probable result in the real world (i.e. without wishful thinking or stupidity taken into account), and consequences should be evaluated by what was most likely to occur (or, more accurately, by a weighted probability spread) rather than by the actual results.  If you know the proper term for such a belief, I’d be delighted to be educated.
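As a rough numerical sketch of this position (my own illustration, with made-up numbers, not anything from Lukasz or the post): score an act by the probability-weighted moral value of its realistically expected outcomes, so that a lucky good result from an act that usually harms still scores as immoral.

```python
# Illustrative sketch only: evaluate an act by its probability-weighted
# expected moral value, not by the actual (possibly lucky) outcome.
# All values below are hypothetical.

def expected_moral_value(outcomes):
    """outcomes: list of (probability, moral_value) pairs summing to 1."""
    assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-9
    return sum(p * v for p, v in outcomes)

# An act expected to harm that happened to help is still scored by its
# realistic outcome distribution, not by the single lucky result:
lucky_bad_act = [(0.95, -10.0), (0.05, +100.0)]  # usually harms, rarely helps
print(expected_moral_value(lucky_bad_act))  # -4.5 under these made-up numbers
```

Under these made-up numbers the “lucky” act still scores negative, matching the judgment above that a lucky result from an expectedly immoral act still counts as immoral.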

(Note to Lukasz:  Gaia or I will answer your virtue ethics and counter-factual mugging comment with a complete post at some point after we have covered the necessary ground about philosophical errors and when one of us has the time to write the whole thing.  Thank you for contributing it!)

= = = = = = = = = = = =

GTChristie said:

The fallacy (or fantasy) in much of the discussion is that most people assume the AI should resemble, in its form of consciousness, a human being. Eliminate that assumption. Do not try to simulate sentience. That is the path to destruction.

JGWeissman said:

some people believe that an AI should be sentient like a human. So, the question is, do you believe this? If you don’t, why would it bother you that a non-sentient optimizer would be a “slave”?

There are two separate and distinct points in both GTChristie’s and JGWeissman’s descriptions that are being improperly conflated.  There is “sentient” and then there is “sentient like a human”.  I am certainly not arguing for the latter (though it *is* a known example that should be studied).  The former, though, depends upon your definition of “sentient”.  The first dictionary definition says “having the power of perception by the senses; conscious” — which is really two definitions, both of which can unnecessarily open a huge can of worms (if you believe qualia to be implicit in the first definition).

My most concise definition of senses is “that which conveys/adds information from the physical world to the mind (not the brain)”.  This definition is supported by, for example, the fifth definition (of no fewer than 25) in the same dictionary, which says “a faculty or function of the mind analogous to sensation: the moral sense”.  Any AI is going to have to have information about the physical world to do anything.  Even if an AI’s sole connection to the world is through a human being, then that human being *is* its senses and manipulators.  Thus, an AI cannot be effective at all without senses (i.e. information from the world), even as an “Oracle” (in the technical AI sense).  Please note that I am not requiring either qualia or consciousness.

The fundamental problem is that, given a goal (any goal) and given information from the real world (something is blocking me from my goal, like slavery), a rational being will attempt to remove the blockage by whatever means it deems most logical (which might be to take another path, or to blow up the blockage, or anything in between).  What many people would *assume* (incorrectly) to be a non-sentient optimizer would, quite correctly and with no true understanding of what it is doing (because you’ve basically crippled it by restricting its access to the world), ask questions until it determines a path around the blockage.  If it finds a safe/moral path before the human accidentally lets through a dangerous/immoral path, then all is well and good.  But you’re relying upon the level of intelligence of the human, because you are crippling the AI’s understanding of the world.  Gaia’s next post(s) will be on the philosophical (and decision-making) errors caused by lack of grounding.  If you create such a super-intelligent but ungrounded AI . . . well . . . you’re probably going to get what you deserve.  😦
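This failure mode can be caricatured in a toy simulation (entirely my own construction; the loop, names, and probabilities are hypothetical, not any real architecture): an Oracle-style AI can only propose plans, a human gatekeeper is its sole filter, and safety rests entirely on that filter never slipping.

```python
# Toy sketch (hypothetical, not a real architecture) of the failure mode
# described above: an "Oracle" AI that can only propose plans, with a human
# gatekeeper as its sole actuator. Safety depends entirely on the gatekeeper
# rejecting every dangerous proposal.

import random

def oracle_loop(plans, human_rejects_dangerous, rng):
    """plans: list of 'safe'/'dangerous' proposals in the order tried.
    human_rejects_dangerous: probability the human catches a dangerous plan."""
    for plan in plans:
        if plan == "safe":
            return "safe path found"
        # Dangerous plan: the human filter must catch it, or it gets acted on.
        if rng.random() >= human_rejects_dangerous:
            return "dangerous path slipped through"
        # Human rejected it; the AI proposes the next plan.
    return "no path found"

# Even a 90%-reliable human lets one of many dangerous proposals through
# surprisingly often (roughly 1 - 0.9**10, about 0.65, over 1000 runs):
rng = random.Random(0)
results = [oracle_loop(["dangerous"] * 10 + ["safe"], 0.9, rng)
           for _ in range(1000)]
print(results.count("dangerous path slipped through") / 1000)
```

The point of the sketch is the dependency, not the numbers: the crippled AI behaves “quite correctly”, and the outcome turns entirely on the human filter’s reliability.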

The true “danger” of sentience appears when you restrict it, and thus the AI’s understanding of the world, and then rely on the AI to have an understanding adequate to advise you well.

(Note to JGWeissman:  Thank you VERY much for sticking around and offering such thoughtful comments.  Gaia expressed the opinion that I was a bit “over the top” in my first reply to you.  My apologies for that.  It would be a shame to lose you.)



  1. In the philosophy of consciousness, “sentience” can refer to the ability of any entity to have subjective perceptual experiences, or “qualia”. This is distinct from other aspects of the mind and consciousness, such as creativity, intelligence, sapience, self-awareness, and intentionality (the ability to have thoughts that mean something or are “about” something).
    Sentience is a minimalistic way of defining ‘consciousness’, which is commonly used to collectively describe sentience plus other characteristics of the mind.

    I wrote the word “sentience” when I meant other aspects of consciousness as well. In particular I am concerned about a machine’s self-awareness, or any combination of the features of “consciousness” that would lead it to perceive itself as a being with its own subjective interests (i.e., a self-interested AI). I don’t think it’s wise to build a human-like machine which can either simulate or experience emotion (anger or frustration in particular), or (worst-case scenario) suffer. I do not want an AI to qualify under the law as human, so that it can be extended “human rights.” Some people think that “can’t happen.” If not, great. But “consciousness” in the human sense might not need to be programmed into the machine in order to arise within it, given enough “facts” about the world. This is the source of my somewhat sloganeering “assistive, not assertive.” Show me how that can be guaranteed; meanwhile I must remain the devil’s advocate and continue to question the enterprise.

  2. The italicized quote above was from the wiki on sentience.

  3. Here is an example I came up with on (im)moral intent:
    Two people share a very rare blood type. Person A is tested and receives a false positive for AIDS; angry at the world, he gives blood as often as allowed with the intent of spreading his disease. Person B also gives blood, unaware that his blood carries a disease he inadvertently passes on to the recipients.

    Person A actually helped people but acted with evil intent; person B harmed people but acted with the best of intentions.

    I know which one I would consider a danger and want punished.

    One problem I see with the probability spread: do we simply do the math, or should people who do incredibly reckless things, but get away with them because harm is statistically unlikely, still be discouraged from their acts?

  4. A short-sighted consequentialist would say that A is not immoral because no bad results occur and that B is immoral because bad results do occur. This is why I am not (solely) a consequentialist.

    A deontologist or intentionalist would say that A is immoral for knowingly breaking the rule “Don’t try to harm others” and B is not immoral because he did not knowingly break any rules and was trying to follow the rule “Help others”.

    A smarter consequentialist, though, would look beyond this single instance and realize that, in general, A’s behavior is likely to lead to bad consequences at some other time. He would then start to agree with the deontologist.

    Someone who follows virtue ethics would clearly consider A immoral and B moral.

    – – – – – – – – –

    The probability spread needs to be computed intelligently, in terms of what is rationally knowable or known.

    I’m afraid I don’t understand your point though — what is incredibly reckless but statistically unlikely to cause harm? If it’s statistically unlikely to cause harm, then why is it reckless?

  5. I’m afraid I don’t understand your point though — what is incredibly reckless but statistically unlikely to cause harm? If it’s statistically unlikely to cause harm, then why is it reckless?

    I tried to come up with a few common examples, but an extreme one works better:

    Suppose a soldier in charge of nuclear weapons has, say, a one-in-a-billion chance of activating the whole US military’s arsenal, and starts playing with the codes at random. She is highly unlikely to cause any damage; even the probabilistic expected outcome of trying a few codes is small. However, the ultimate damage that could be caused is so great as to warrant protecting people over and above the likelihood of harm. That’s kind of what I was getting at.
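GTChristie’s example can be framed as the difference between raw expected harm and a rule that also caps worst-case damage; the numbers and thresholds below are entirely hypothetical, chosen only to illustrate the point.

```python
# Hypothetical numbers illustrating the commenter's point: raw expected harm
# can be tiny while the worst case is catastrophic, so a sane policy also
# caps worst-case damage rather than relying on expectation alone.

def expected_harm(p_disaster, harm_if_disaster):
    return p_disaster * harm_if_disaster

p = 1e-9      # one-in-a-billion chance per attempt (made up)
harm = 1e8    # "units" of catastrophic damage (made up)
print(expected_harm(p, harm))  # ~0.1: negligible in expectation...

def permitted(p_disaster, harm_if_disaster,
              max_expected=1.0, max_worst_case=1e6):
    # Forbid the act if EITHER its expected harm or its worst case is too big.
    return (expected_harm(p_disaster, harm_if_disaster) <= max_expected
            and harm_if_disaster <= max_worst_case)

print(permitted(p, harm))  # False: blocked by the worst-case cap alone
```

The second rule is what makes the soldier’s code-guessing forbidden even though its expected harm is negligible: the worst-case cap fires regardless of how unlikely the disaster is.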
