My purpose in writing this blog is to build out my ideas on the science of morality and to learn how best to present them. Because of this, I will periodically go back, evaluate what I perceive to be errors in my approach, and attempt to correct them. One such error was that I started too far into the middle of the subject with no proper grounding, such as a simple definition of what I mean by morality.
= = = = = = = = = = = =
What Is Morality?
Most people have an intuitive understanding of what they believe morality to be, but it is an understanding built from the bottom up out of examples of moral and immoral acts, consequences, and feelings. It is an understanding profoundly influenced by their society, their circumstances, and their genetics. And it is an understanding that is emotionally charged and created by the conscious mind only as an explanation after the fact.
Everyone understands that morality has to do with what is right and wrong and what one “ought” to do. But relatively few people have an understanding that goes any deeper than their emotive responses, a small number of simple rules, and some assumptions and rationalizations thrown together by their conscious mind to make it all into a seemingly comprehensible and integrated whole. This means that most debates on morality quickly reach the point of arguing that a given action is right or wrong simply because a) “it just is” or b) “because G*d says so”, with no way of resolving the issue.
Before we can hope to reconcile radically different moralities or correctly extend our own rules of right and wrong to new and/or difficult circumstances, we must understand more clearly what morality is. We must understand why it is what it is. And, eventually, we must understand why we, as individuals, should be moral. We are approaching an age in which our mind children, intelligent machines, will far surpass where we are today. If we cannot teach them right from wrong, and why they should care, we are in a tremendous amount of trouble.
There are two different ways to go about defining and understanding morality. The first is simply to describe what morality is. Unfortunately, morality very much resembles the proverbial elephant presented to the blind men: currently, it is clearly culture-dependent and varies tremendously from place to place (and even among people in the same physical location).
The second method is to describe what morality does, that is, what its function is. This is far more effective because it makes it much easier to explain why morality exists and why it is what it is, and it gives us a yardstick for improvement. Further, it provides a context for the value judgments of right and wrong and specifies a “goal” that determines the actions we “ought” to take in order to fulfill it. Trying to define right and wrong, or good and bad (or evil), in the abstract is a hopeless task. It is only in the context of some task or goal, whether a positive achievement or the avoidance of a negative result, that such evaluations can be made.
The very simple functional definition that I will start with is “Morality is that which maximizes the probability of cooperation”. This definition has several advantages. Since it defines morality not by what it *is* but by what it *does*, it provides a context that makes it possible to start answering questions like those above. It doesn’t make any arguable assumptions like the existence of an absolute truth or an omnipotent being (or the non-existence of either). And, finally, it is entirely in line with the latest expert opinions such as:
Moral systems are interlocking sets of values, virtues, norms, practices, identities, institutions, technologies, and evolved psychological mechanisms that work together to suppress or regulate selfishness and make cooperative social life possible.
— Jonathan Haidt, Handbook of Social Psychology, 5th ed. (2010)
Thus, quite simply, the science of morality is just the science of maximizing cooperation, of working together. It is no longer a purely philosophical question because it is grounded in reality. It is not a calculus, because it is not and cannot be entirely known, but it should enable an engineering effort to improve our lives and increase the probability of a safe future. Just as the steam engine is a human invention constrained by the universal laws of physics, with better and poorer implementations depending on circumstance, so too is ethics constrained by the universal scientific facts of what does and does not work. And there is a great deal of improvement to be made over the situation we have today.
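As a purely illustrative aside (a toy model of my own choosing, not part of the argument above), game theory already gives one concrete sense in which “maximizing cooperation” can be studied scientifically: the iterated Prisoner's Dilemma. In the sketch below, with the conventional payoff values, a simple reciprocal rule (“tit for tat”) sustains mutual cooperation and prospers, while unconditional defection leaves both players impoverished.

```python
# Toy iterated Prisoner's Dilemma: reciprocity vs. unconditional defection.
# Payoffs are the conventional values; strategies and round count are
# illustrative choices, not claims about any particular moral theory.

PAYOFF = {  # (my move, their move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def play(strategy_a, strategy_b, rounds=100):
    """Total payoffs for two strategies over repeated play.

    A strategy is a function of the opponent's previous move
    (None on the first round) returning "C" (cooperate) or "D" (defect).
    """
    score_a = score_b = 0
    last_a = last_b = None
    for _ in range(rounds):
        move_a, move_b = strategy_a(last_b), strategy_b(last_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        last_a, last_b = move_a, move_b
    return score_a, score_b

def tit_for_tat(prev):
    # Cooperate first, then mirror the opponent's last move.
    return "C" if prev in (None, "C") else "D"

def always_defect(prev):
    return "D"

# Two reciprocators prosper together; two defectors impoverish each other.
print(play(tit_for_tat, tit_for_tat))      # -> (300, 300)
print(play(always_defect, always_defect))  # -> (100, 100)
```

None of this settles any moral question, of course, but it shows that “what maximizes the probability of cooperation” is the kind of claim that can be modeled, tested, and engineered rather than merely asserted.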