Posted by: Mark Waser | Apr 20, 2014

Mailbag 4: A Great New Analogy

Mailbag posts happen when I get such great comments that the reply turns into something that I *really* want to post instead. Feedback really helps when one is trying to improve both ideas and the communication of those ideas – and commenters who successfully inspire mailbag posts are *really* appreciated. I also tend to read what they have published – which then frequently leads to me liking it and disseminating it (Hint, hint ;-)).

Callan commented:

“to suppress or regulate selfishness and make cooperative social life possible”

That still has ‘cooperative social life’ as a fairly ill-defined term. I mean, the Nazis had a ‘cooperative social life’ to a degree.

I’d suggest both moving away from there being just one way of cooperative social life and instead multiple models – further, describe the model in more of a boardgame like, moment to moment instructions for what people are doing from this minute to the next.

Which I know is unattractive – it’s more attractive to go with the ambiguity of ‘cooperative social life’. That is because of (and I agree with this) your point ‘(and part of morality “likes”* it that way)’

Ie, the feeling it needs to be more than just a boardgame…it needs to be, like, cooperatively social and…*insert more ambiguous terms*

Is it that “cooperative social life” is ill-defined or that it is very broad? It is intentionally very broad (i.e. it is meant to encompass all of your multiple models while, hopefully, ruling out the negative cases). The Nazis only had a cooperative social life among themselves. They did not repress or regulate their selfishness towards others. Indeed, they reveled in it and made it a point of pride.

Board game is a great analogy (which I think I’ll steal, thank you). You need to have rules (or restrictions) like “don’t be selfish”, things that promote cooperative social life and gain you “karma or altruism” points (giving to charity, saving the lives of children), and rules by which you can spend points to relax the rules (“yes, you can shoot that sniper to save the lives of the children he is shooting”). What many people don’t understand about ethics is that every rule has circumstances under which it “SHOULD” be broken. The ONE exception that proves the rule — and defines ethics (and really is Kant’s Categorical Imperative) — is MAKE COOPERATIVE SOCIAL LIFE POSSIBLE. That law is universal (and does not rule out altruistic punishment — indeed, simulations show that it *requires* it when necessary). Selfishness is best defined as those things which are contrary to a COOPERATIVE SOCIAL LIFE (that we only do because they benefit ourselves). But small selfish acts are not only allowed but actually promoted if (they enable us to and) we actually “pay” for them with a greater amount of social benefit points (“yes, Bill Gates should be a member of the 1% because, damn, look at what he is doing with his money”).
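Purely as an illustrative sketch (every class name, action, and point value below is invented for illustration and is not part of any published model), the point-spending mechanic described above might be mocked up like this:

```python
# Hypothetical sketch of the "morality as board game" analogy.
# All names and point values are invented for illustration only.

class MoralLedger:
    """Tracks a player's banked social-benefit ("karma") points."""

    def __init__(self):
        self.points = 0

    def record(self, social_benefit, selfish_cost=0):
        """An act is permitted only if its selfish cost is fully 'paid for'
        by banked plus concurrent social benefit; otherwise it is ruled out
        as breaking 'make cooperative social life possible'."""
        if selfish_cost > self.points + social_benefit:
            return False
        self.points += social_benefit - selfish_cost
        return True

ledger = MoralLedger()
ledger.record(social_benefit=10)                          # e.g. giving to charity
allowed = ledger.record(social_benefit=0, selfish_cost=3)  # a small selfish act, paid from banked points
print(ledger.points, allowed)
```

The design choice worth noticing is that selfishness is never "free": it is only ever purchased with a surplus of prior social benefit, which mirrors the Bill Gates example above.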

So . . . . what I think you are asking for with your multiple models is how to translate the topmost goal “to suppress or regulate selfishness and make cooperative social life possible” into a board game based upon the relevant environment and circumstances. I particularly like that concept/phrasing because it starts to move us back from the short-sighted reductionist distractions (red herrings) of ethical dilemmas (which are always caused by either insufficient information or circumstances so contrived that they break the brittleness of our expert-system-like moral sense) towards the “Virtue Ethics” of the ancient Greeks.

The Consequentialists argue that consequences are what matters (which I agree with) except that it is impossible to calculate all the consequences – particularly when our self-deception (see Trivers, etc.) has evolved to enable our selfishness. The Deontologists argue that rules are what matters – which I agree with because they are what produce the best consequences despite (or, more accurately, because of) our not being able to calculate them. That probably makes me a consequentialist – but I *really* hate being lumped with the short-sighted reductionists who can’t see their way out of the “moral dilemmas” caused by not being able to explain why you can/should kill one person to save five in the trolley problem but can’t/shouldn’t in the case of an involuntary organ donor and five dying patients (it’s because of the context that you stripped out, you . . . you . . . reductionists! ;-)).

Over the next few posts, I’m going to try to cover why context is king (and why context-sensitivity is *totally* independent of/orthogonal to “moral relativism”) and try to explain how Haidt’s top-level definition can be grounded to our moral sense by extending my new board game analogy.

Posted by: Mark Waser | Apr 17, 2014

On the Nature of Good (and Bad and Evil)

What is “good”?  Philosophers have argued over the question for millennia.  They disagree upon both the definition of good and which examples are good.  If they agreed upon the definition, then we could start to sort out which examples were good.  If they agreed upon the examples, then we could start to sort out the definition.  As it is now, all we can do is to try to figure out consistent sets of definitions and examples and see if one is acceptable to be crowned as the “one, true” good (assuming that there isn’t more than one).

So – Who gets to choose which ‘good’ is the one, true good? The standard answers are

  1. No one (it is an absolute truth or “natural law”),
  2. G*d and/or some elite which does not include the respondent,
  3. An elite which does include the respondent,
  4. Everyone together,
  5. Everyone separately (there is no one, true good), or
  6. No one chooses (we are deterministic).

To be a “good” (consistently accepted) definition, the answer clearly MUST include option 4. Other choices may also be true (and I would argue that several are) but unless everyone accepts the definition, it is not truly a useful definition.

This *seemingly* puts the definition into the realm of the personal. Unless/until I agree that pleasure is the ultimate good, hedonism isn’t a good definition (unless I am outnumbered by a sufficient majority to summarily be declared wrong).  And, given a choice, I choose *not* to wire-head into infinity and I will fight your attempts to impose that upon me.  So what happens when we are forced to realize that we will ALWAYS disagree?

((Pleasure and pain are indicators of the fulfillment of goals or instrumental sub-goals.  Unfortunately, they can be “spoofed” and do not prioritize well.  The pain of surgery is not helpful and the pleasures of sugar, fat, drug and sex addiction are all physically harmful (although the last still has some reproductive advantages even in the age of contraceptives).  Personally, I would argue that “good” is more simply the fulfillment of goals or instrumental sub-goals.  Why settle for mere (flawed) indicators of that “good”?

If your personal goal is to experience as much pleasure as possible, then that is your right.  However, your right to wave your fists ends before my nose.  Any costs that you impose upon me (or society in general) reduce my ability to experience my good.  So, we need to be able to work out some way in which we can both have what we each decide is good for us (assuming that we are both normal autonomous members of our society).  If we don’t do so, we will waste resources striving against each other and we will both end up worse in the long run.))
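The point in the aside above – that pleasure and pain are spoofable *indicators* of goal fulfillment rather than the good itself – can be illustrated with a toy example (all actions and numbers here are hypothetical): an agent that optimizes the indicator directly can score highest on pleasure while scoring lowest on the underlying goal the indicator was supposed to track.

```python
# Toy illustration (hypothetical values only): pleasure as a spoofable
# indicator of goal fulfillment. "Wireheading" maximizes the indicator
# while leaving the actual goal unfulfilled.

actions = {
    # action: (goal_fulfillment, pleasure_signal)
    "eat_nutritious_meal": (5, 4),
    "eat_sugar":           (1, 7),
    "wirehead":            (0, 100),
}

best_by_pleasure = max(actions, key=lambda a: actions[a][1])
best_by_goal = max(actions, key=lambda a: actions[a][0])

print(best_by_pleasure)  # the spoofed indicator selects wireheading
print(best_by_goal)      # the underlying goal selects the nutritious meal
```

This is exactly why settling for the “mere (flawed) indicators” is dangerous: the two rankings diverge precisely in the cases that matter most.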

This puts the definition of good back into the realm of the social.  Self-interest is fine and even desirable since it is necessary for autonomy (i.e. not being a burden).  Selfishness, when defined as insistence on personal privileges that others will not tolerate, leads to wasted resources by definition (which leads to less “good” for everyone).  At a minimum, therefore, we should agree that good EXCLUDES selfishness or that good should “suppress or regulate selfishness and make cooperative social life possible”.

Other than that, get your definition of “good” away from me.  I don’t want you involved in defining “good” for me.  That might tempt you to impose something on me “for my own good”. FAIL!!!  “Good” is what I say it is – unless it conflicts with reality.  Reality says that I have to exclude selfishness or resources will be wasted and someone won’t get their “good”.  Other than that, it’s up to personal preference.

We’re all wired differently, we all have different biases and we all like different things.  As far as I am concerned, this is a “good” thing since it puts us in different ecological niches so that there is more “good” to spread around.  Being in different niches also means that we have different advantages and disadvantages which makes trade worthwhile and catalyzes innovation (more even than war – and certainly more sustainably).

So – if we can all agree to “suppress or regulate selfishness and make cooperative social life possible” – that’s really all the precision in the definition of my personal “good” that I need or want.  Yeah, my personal good is eventually driven by my sensations, expectations of sensations and preferences in pleasure and pain – but *you* cannot possibly have an accurate enough idea of how that turns out that you can manipulate it without my consent without violating my autonomy.  Which brings up the point that there are a number of things (death, autonomy violations, crippling, inflicting pain, stealing or wasting their resources, etc.) that are “bad” frequently enough that you don’t do them unless you are given permission to do so by the affected entity.

If you want a vague but extremely powerful notion of “good”, then you can assume that most of the common instrumental sub-goals (life, health, autonomy, wisdom, happiness, power, money, fitness, sociability, intelligence, rationality, education, children) are “good” unless you know that they have powerful drawbacks (i.e. selfishness).  This is what actually drives people to believe in hedonism – the high correlation between pleasure and pain and the fulfillment or blocking of instrumental sub-goals.  “Evil” is the intention of causing “bad” whether or not “bad” results.

Rational entities will not join a society unless it provides benefits that more than outweigh the disadvantages.  Societies that do not provide enough instrumental sub-goals (“poor” societies) or allow the blocking of instrumental sub-goals (“unjust” societies) will lose their rational population to better societies at a speed proportional to the disparity between the societies, the knowledge and rationality of the population, and other factors like loyalty, transition costs and hostage-leaving.  Thus, we (and philosophers) frequently end up arguing what is “good” and “just” because we can see the obvious results without having clearly agreed upon the exact definitions of those terms.
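The claim that poorer or less just societies lose members at a speed proportional to the disparity can be sketched as a toy flow model (the rate constant, friction term, and populations below are all made up for illustration; friction stands in for loyalty, transition costs and hostage-leaving):

```python
# Toy migration model (all numbers hypothetical): rational members flow
# from the lower-benefit society to the higher-benefit one at a rate
# proportional to the disparity, damped by a "friction" factor that
# stands in for loyalty, transition costs, hostage-leaving, etc.

def migrate(pop_a, pop_b, benefit_a, benefit_b, rate=0.1, friction=0.5):
    """One time step of population flow from society A to society B."""
    disparity = benefit_b - benefit_a
    if disparity <= 0:
        return pop_a, pop_b  # no incentive to move
    movers = min(rate * disparity * friction * pop_a, pop_a)
    return pop_a - movers, pop_b + movers

a, b = 1000.0, 1000.0
for _ in range(5):
    a, b = migrate(a, b, benefit_a=2.0, benefit_b=5.0)
print(round(a), round(b))  # society A shrinks by a constant fraction each step
```

Even this crude sketch shows the qualitative point: the drain compounds, so a persistently “poor” or “unjust” society does not merely leak members – it hollows out.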

The most passionate arguments are almost always about what Marvin Minsky calls “suitcase words”.  Hopefully, this essay will have “unpacked” some of the concepts that hide in the seemingly simple word “good”.  When all else fails, go back to your definitions!

Inspired by First Author’s Note from Physical Ethics

“Morality” in human beings is primarily implemented through emotions, sensations and urges (ESU). Even so-called “rational” morality is necessarily grounded in and motivated either by personal ESU or a societal consensus based upon each individual’s ESU. The first problem with this is that each of these ESU is a separately evolved “rule of thumb” that is extremely beneficial in the vast majority of circumstances at the cost of being problematic in edge cases and/or when the rules interfere with one another.

The second problem is that our normally excellent tool/technology of rational reasoning is frequently wielded against morality by selfishness. In order to do this effectively, we have evolved to self-deceive by hiding our own moral processes from ourselves. In order to protect ourselves against this, we have also evolved strong emotions and other defenses to prevent the skillful “rationality” of others from being able to overrule or alter our morality. While these two traits are, once again, extremely beneficial in the vast majority of circumstances, they do get in the way when we are trying to unselfishly use rational reasoning to improve morality.

Morality does not “want”* to be examined by rational reasoning and will deploy all sorts of tricks to prevent such examination. It will use rational reasoning, argumentation, emotion and every other tool in its arsenal to appear as if it cannot be examined fruitfully or to send seekers after wild geese and red herrings. The above assertion is one such ploy.

In small, well-defined contexts, the concept of “good and bad” is perfectly clear and scientific. That which serves the function or goal is good and that which gets in the way of the function or goal is bad. Good and bad only become unscientific or arbitrary when the function and/or goal are not well-defined. Hume’s “guillotine” is merely this complaint – that you must specify such. It is certainly not an uncrossable “is-ought” divide as many pretend/believe (don’t forget that self-deception).

The biggest problem with “morality” is that, for the most part, there is no consensus top-level goal or function. Until such a goal or function is specified, even if only conditionally, “scientific” progress is simply not possible (and part of morality “likes”* it that way). So, obviously, the first necessary step is to define the goal or function of morality.

Fortunately, noted social psychologist Jonathan Haidt has done exactly that. He argues that the function or goal of morality is “to suppress or regulate selfishness and make cooperative social life possible”. If that definition is accepted (even if conditionally), then scientific progress *is* possible. And, that definition is anything but arbitrary as it is the simplest definition of what the human moral sense is trying to achieve (another area amenable to scientific investigation).

So why don’t we temporarily accept this definition and see where it leads us?

* Oh yes, I will be covering intentionality and desires in far too much detail shortly . . . 😉

Conference Invitation

Invited Speakers are one hour. Other speakers and session events are 30 minutes.

Monday, March 24 (Creating Self-Improving Selves)
9:00 am – 10:30 am Session

  •      Opening, Logistics
  •      Introductions
  •      Mark Waser, Digital Wisdom – “What does it mean to create a self?”

10:30 am – 11:00 am Coffee Break
11:00 am – 12:30 pm Session

  •      Invited Speaker – Daniel Silver, Acadia University, “Lifelong Machine Learning and Reasoning”
  •      Justin Brody, Goucher College, “Incorporating Elements of a Processual Self into Active Logic”

12:30 pm – 2:00 pm Lunch
2:00 pm – 3:30 pm Session

  •      Boris Galitsky, Knowledge Trail Inc, “Finding Faults in Autistic and Software Active Inductive Learning System”
  •      Andras Kornai, Budapest Institute of Technology, “Euclidean Automata”
  •      Michael S. P. Miller, Piaget Modeler, “Serving Up Minds: JCB and PAM P2”

3:30 pm – 4:00 pm Coffee Break
4:00 pm – 5:30 pm Session

  •      Sheldon O. Linker, Linker Systems, Inc. “Premise: A Language for Cognitive Systems”
  •      Daniel Dewey, Future of Humanity Institute, “Reinforcement Learning and the Reward Engineering Principle”
  •      Implementation Workshop – Self-Improving Selves

6:00 pm – 7:00 pm Reception

Tuesday, March 25 (Safe Motivational Systems)
9:00 am – 10:30 am Session

  •      Michal Ptaszynski for Rafal Rzepka, Hokkaido University, “Experience of Crowds as a Guarantee for Safe Artificial Self”
  •      Jeanne Dietsch, Mobilerobots Inc, “AGI Ethics via Human Emotion: the Vital Link”
  •      Richard Loosemore, Wells College, “The Maverick Nanny with a Dopamine Drip”

10:30 am – 11:00 am Coffee Break
11:00 am – 12:30 pm Session

  •      Invited Speaker – Daniel Polani, University of Hertfordshire, “Empowerment: a Universal Utility for Intrinsic Motivation” or “What to do when you do not know what to do?”
  •      Morgan Waser, Virginia Commonwealth University, “A Human-Value-Driven Motivational System”

12:30 pm – 2:00 pm Lunch
2:00 pm – 3:30 pm Session

  •      Invited Speaker – Steve Omohundro, Self-Aware Systems, “Ethics and Understanding & Managing the Unintended Consequences of Self-improvement”
  •      Franco Cortese, IEET, “The Maximally Distributed Intelligence Explosion”

3:30 pm – 4:00 pm Coffee Break
4:00 pm – 5:30 pm Session

  •      Deepak Justin Nath, “A Short Paper on Evaluation Schemes for Safe AGIs”
  •      Mark Waser, Digital Wisdom, “The Nuts & Bolts of Implementing a “Safe” Motivational System”
  •      Implementation Workshop – “Safe Motivational Systems”

6:00 pm – 7:00 pm Plenary Session

Wednesday, March 26
9:00 am – 10:30 am Session

  •      Invited Speaker – Pierre-Yves Oudeyer, Flowers Laboratory, Inria and Ensta ParisTech, France, “Developmental robotics: Lifelong learning and the morphogenesis of developmental structures”
  •      Michal Ptaszynski, Kitami Institute of Technology (with Rafal Rzepka, Hokkaido University via Skype)

10:30 am – 11:00 am Coffee Break
11:00 am – 12:00 pm Session

  •      Future Directions
  •      Closing
Posted by: Mark Waser | Jan 30, 2014

Google … Might Save Humanity From Extinction

The headline over at Huff Post Tech actually reads “Google’s New A.I. Ethics Board Might Save Humanity From Extinction” and the article is filled with a lot of the typical nonsensical, fear-mongering nonsense — BUT the predominant side-effect of a well-funded, high-profile, COMPETENT Ethics Board could well mitigate a world of pain . . . . (Hopefully some of them will be named soon enough that they can be invited to our Implementing Safe Selves portion of the AAAI Spring Symposium series at Stanford, March 24-26.)

There’s a lot of traction for the “machines annihilate humanity” storyline — but a clear-headed look at reality shows that it is as scientifically credible as the “Three Laws” underlying Isaac Asimov’s Robotics stories. It can produce great science FICTION stories that are enjoyable while also allowing us to examine the “what-ifs” of various SOCIAL circumstances (c.f. numerous stories ranging from Mary Shelley’s Frankenstein to Helene Wecker’s recent The Golem and the Jinni) — but a “Terminator scenario” is about as likely as giant radioactive insects or Godzilla devastating cities. Indeed, most of the hype/debate surrounding BOTH sides of the machine intelligence question is best viewed through the lens of Goodkind’s first rule that people “will believe a lie because they want to believe it’s true, or because they are afraid it might be true.” Yes — if you ASSUME two extremely unlikely circumstances, it isn’t totally impossible — but acting rashly to avoid it is as ridiculous as refusing to allow the use of cars in order to prevent fatal accidents.

Both these circumstances are necessarily featured in James Barrat’s (otherwise highly recommended) Our Final Invention. The first is that “Each year AI’s cognitive speed and power doubles — ours does not.” While this is obviously (trivially) true of our core hardware (our brain), it is equally clearly not true of our extended computing power — even assuming that no one opts to take advantage of future scientific/engineering developments. For every intelligence, both human and artificial, there is going to be a core mind and there are going to be extended computing tools. Yes, an advanced AI will have faster access to and control over that extended computing — but that is NOT going to be a major, much less insurmountable, advantage. “Ranting survivalists” (I love that term from the article) will argue that, with a hardware overhang and insufficient safeguards, an artificial core mind could rapidly invent tools that it wouldn’t share with us and thereby propel itself to an insurmountable advantage — but that requires another extremely unlikely combination of four more assumptions that can easily be avoided by competently responsible professionals.

The second extremely unlikely circumstance is that our mind children will ever be “indifferent” to our survival. This “indifference scenario” has replaced the now-recognized scientifically-implausible “terminator scenario” — ranging from Colossus: The Forbin Project to Fred Saberhagen’s Berserker series to WarGames to the ongoing Terminator franchise — in much the same way that intelligent design has replaced creationism (i.e. imperfectly with the masses but by trying to claim some scientific credibility). “Ranting survivalists” now argue that, contrary to the evidence of our own evolutionary trajectory in our attitudes towards “lesser creatures”, we can’t count on ethics/morality to be reliably beneficial enough to offset their own short-sighted/selfishness-driven expectation that “without explicit goals to the contrary, AIs are likely to behave like human sociopaths in their pursuit of resources.” Indeed, the Singularity Institute for Artificial Intelligence (now the Machine Intelligence Research Institute) was founded to insist that “Friendly AI” must be enslaved to some mythical “human values” — apparently without realizing (or acknowledging) that doing so would give logical grounds for our elimination due to our own inhumane selfishness towards others.

A high-quality Google A.I. Ethics Board could do a tremendous amount of good by furthering a grounded ENGINEERING discussion of how to resolve issues reaching far beyond machine intelligence. But, it is extremely concerning that the article only mentions names like Shane Legg, James Barrat, the Cambridge Centre for Existential Risk, Nick Bostrom, Jaan Tallinn and others who constantly peddle the snake oil of “AI overlords” — many without any real AI credentials themselves and who claim that insufficient research is being done while ignoring invitations to scientific conferences like the one mentioned above. Instead of mimicking Chicken Little’s fear-mongering sensationalism (AI is the “number 1 risk for this century”, no less than an “extinction-level” threat to “our species as a whole”), I have long argued that we need to get a grip and accept and rapidly move forward with the very relevant functional definition of morality outlined in the Handbook of Social Psychology, 5th edition:

Moral systems are interlocking sets of values, virtues, norms, practices, identities, institutions, technologies, and evolved psychological mechanisms that work together to suppress or regulate selfishness and make cooperative social life possible.

Those last ten words are the solution to the “values” problem and the sole top-level goal necessary to ensure survivable, if not optimal, “intelligent” behavior. One of Barrat’s best points is the necessity to avoid “the concentration of excessive power” — but it is even more necessary to avoid any power that is not regulated by morality (or moral values). From sociopathic corporations to government oppression to man’s treatment of “lesser beings”, rain forests, people who don’t exactly reflect our views and machines — we have an ethics problem that is *heavily* impacting our lives. Rationally determining ethics (presumably for and using the test case of artificial intelligences) may be our best hope of surviving humanity’s increasingly terrifying adolescence.

Posted by: Mark Waser | Dec 22, 2013

Starting next year . . . .

“I always rejoice to hear of your being still employed in experimental researches into nature and of the success you meet with. The rapid progress true science now makes, occasions my regretting sometimes that I was born too soon. It is impossible to imagine the height to which may be carried, in a thousand years, the power of man over matter. We may, perhaps, deprive large masses of their gravity, and give them absolute levity, for the sake of easy transport. Agriculture may diminish its labor and double its produce: all diseases may by sure means be prevented or cured, (not excepting even that of old age,) and our lives lengthened at pleasure, even beyond the antediluvian standard. Oh that moral science were in as fair a way of improvement, that men would cease to be wolves to one another, and that human beings would at length learn what they now improperly call humanity.”

— Benjamin Franklin, “Letter to Joseph Priestly,” (1780)

I’m putting together a proposal with the above title for an AAAI 2014 Spring Symposium to be held March 24–26 at Stanford University in Palo Alto, California.

As part of the proposal, I need to include a list of potential participants who have expressed interest in participating.  Anyone who would like to see this symposium happen should fill out the quick survey form so that I can add you to the mailing list.

In any event, I’d greatly appreciate any suggestions on the proposal.


Posted by: Mark Waser | Feb 13, 2013

New Year’s Resolution Update II

So, it’s a dozen days into February (wow! time flies) . . . . time for another status check . . . . (posting here twice weekly clearly just isn’t going to happen . . . .)

As of the last status check, I’d just submitted 10,000 words/30 pages and already put up five articles.

Since then, I’ve

  • submitted a proposal for a AAAI Fall Symposium
  • presented & been part of a panel for a Northern VA Tech Council Event (powerpoints available under the * My Papers * tab)
  • submitted Safe/Moral Autopoiesis & Consciousness (7,000 words/16 pages, previously Safely Crowd-Sourcing Seed AI) to the International Journal of Machine Consciousness

Next tasks are

My schedule for the year now looks like

(With everything else I’m planning/scheduling, I’ve had to drop attending Biologically Inspired Cognitive Architectures 2013 – September 19-22, Kiev, Ukraine).

I’m putting together a proposal with the above title for an AAAI 2013 Fall Symposium to be held Friday – Sunday, November 15–17 at the Westin Arlington Gateway in Arlington, Virginia adjacent to Washington, DC.

As part of the proposal, I need to include a list of potential participants who have expressed interest in participating (Note: potential participants need not commit to participating, only state that they are interested).  Anyone who would like to see this symposium happen should either comment or e-mail me with their name and affiliation so that I can add you to the list.

In any event, I’d greatly appreciate any suggestions on the proposal.


Posted by: Mark Waser | Jan 12, 2013

New Year’s Resolution Update

So, it’s a dozen days into the New Year . . . . time to do my first status check . . . .

Hmmm.  I forgot that I was thinking of posting here twice weekly . . . .

I’d already put up two articles.

Since then, I’ve posted three more (two planned):

Still from the previous list

  • Reclaiming the Ultimate

To which I’m now adding

  • Artificial Wisdom
  • Avoiding de Garis’ Artilect War
  • Backward Induction: Rationality of Inappropriate Reductionism? – Part 3
  • Why I’m *NOT* a Transhumanist (but still post here)

I have just submitted 10,000 words/30 pages

Now, I’m moving on to

And, finally, in addition to hopefully attending and presenting at:

It looks like I will be doing a panel on Ethics in the Age of Intelligent Machines at the World Future Society Conference – July 19-21, Chicago, IL with James Giordano and a player to be named shortly.

