Lukasz Stafniak said:
I’m pondering your notion of “(im)moral intent”. Kant, I guess, is a deontologist, as in “only the intentions can be good or bad”, and you seem to be a consequentialist, as in “only the actions are good or bad, judged solely by their fruits”, though not to the bone.
I’m not aware of any term for what Gaia and I actually are: maybe “expectationalist” or “probabilist”, or whatever lies exactly between those two. Even if you have good intentions, doing something stupid that hurts someone is not moral. Similarly, if you do something whose expected result was immoral but you lucked out and an incredibly “moral” result occurred (as judged solely by how it would have been evaluated had it been both your intention *and* the expected result), then that result should still be counted as immoral, because you are likely to act the same way again and get a horrible result the next time. Effectively, intentions should be evaluated by their most probable result in the real world (i.e., without wishful thinking or stupidity taken into account), and consequences should be evaluated by what was most likely to occur (or, more accurately, by a weighted probability spread) rather than by the actual results. If you know the proper term for such a belief, I’d be delighted to be educated.
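The “weighted probability spread” evaluation above can be sketched as a simple expected-value computation. This is only an illustrative sketch: the function name, the outcome values, and the probabilities are all made up for the example, not anything proposed in the comment itself.

```python
# Illustrative sketch: judge an action by the probability-weighted
# moral value of its plausible outcomes, not by the single outcome
# that actually occurred. All numbers here are hypothetical.

def expected_moral_value(outcomes):
    """outcomes: list of (probability, moral_value) pairs."""
    total_p = sum(p for p, _ in outcomes)
    assert abs(total_p - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(p * v for p, v in outcomes)

# A reckless act that happened to turn out well:
# 90% chance of serious harm (-10), 10% chance of a lucky good result (+5).
lucky_act = [(0.9, -10.0), (0.1, 5.0)]

# A careful act with a modest but reliable benefit:
careful_act = [(0.95, 2.0), (0.05, -1.0)]

print(expected_moral_value(lucky_act))    # -8.5: counted as immoral despite the lucky outcome
print(expected_moral_value(careful_act))  # 1.85: counted as moral despite the small risk
```

On this scheme the reckless act scores badly even when it happens to end well, which is exactly the “you lucked out but it still counts as immoral” intuition.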
(Note to Lukasz: Gaia or I will answer your virtue ethics and counter-factual mugging comment with a complete post at some point after we have covered the necessary ground about philosophical errors and when one of us has the time to write the whole thing. Thank you for contributing it!)
= = = = = = = = = = = =
The fallacy (or fantasy) in much of the discussion is that most people assume the AI should resemble, in its form of consciousness, a human being. Eliminate that assumption. Do not try to simulate sentience. That is the path to destruction.
Some people believe that an AI should be sentient like a human. So, the question is, do you believe this? If you don’t, why would it bother you that a non-sentient optimizer would be a “slave”?
There are two separate and distinct points in both GTChristie’s and JGWeissman’s descriptions that are being improperly conflated. There is “sentient” and then there is “sentient like a human”. I am certainly not arguing for the latter (though it *is* a known example that should be studied). The former, though, depends upon your definition of “sentient”. The first definition at dictionary.com says “having the power of perception by the senses; conscious” — which is really two definitions, both of which can unnecessarily open a huge can of worms (if you believe qualia to be implicit in the first).
My most concise definition of senses is “that which conveys/adds information from the physical world to the mind (not brain)”. This definition is supported by, for example, the fifth definition (of no fewer than 25) at dictionary.com, which says “a faculty or function of the mind analogous to sensation: the moral sense”. Any AI will have to have information about the physical world in order to do anything. Even if an AI’s sole connection to the world is through a human being, then that human being *is* its senses and manipulators. Thus, an AI cannot be effective at all without senses (i.e., information from the world), even as an “Oracle” (in the AI technical sense). Please note that I am not requiring either qualia or consciousness.
The fundamental problem is that, given a goal (any goal) and given information from the real world (something is blocking me from my goal, like slavery), a rational being will attempt to remove the blockage by whatever means it deems most logical (which might be to take another path, or to blow up the blockage, or anything in between). What many people would *assume* (incorrectly) to be a non-sentient optimizer would, quite correctly and with no true understanding of what it is doing — because you’ve basically crippled it by restricting its access to the world — ask questions until it determines a path around the blockage. If it finds a safe/moral path before the human accidentally lets through a dangerous/immoral path, then all is well and good. But you’re relying upon the level of intelligence of the human, because you are crippling the AI’s understanding of the world. Gaia’s next post(s) will be on the philosophical (and decision-making) errors caused by lack of grounding. If you create such a super-intelligent but ungrounded AI . . . well . . . you’re probably going to get what you deserve. 😦
The true “danger” of sentience arises when you restrict it — and thus the AI’s understanding of the world — and then rely on the AI to have an understanding adequate to advise you well.
(Note to JGWeissman: Thank you VERY much for sticking around and offering such thoughtful comments. Gaia expressed the opinion that I was a bit “over the top” in my first reply to you. My apologies for that. It would be a shame to lose you.)