“Morality” in human beings is primarily implemented through emotions, sensations and urges (ESU). Even so-called “rational” morality is necessarily grounded in and motivated either by personal ESU or by a societal consensus built upon each individual’s ESU. The first problem with this is that each of these ESU is a separately evolved “rule of thumb” that is extremely beneficial in the vast majority of circumstances at the cost of being problematic in edge cases and/or when interfering with the others.
The second problem is that our normally excellent tool/technology of rational reasoning is frequently wielded against morality by selfishness. To do this effectively, we have evolved to self-deceive by hiding our own moral processes from ourselves. To protect ourselves against this, we have also evolved strong emotions and other defenses that prevent the skillful “rationality” of others from overruling or altering our morality. While these two traits are, once again, extremely beneficial in the vast majority of circumstances, they do get in the way when we are trying to unselfishly use rational reasoning to improve morality.
Morality does not “want”* to be examined by rational reasoning and will deploy all sorts of tricks to prevent such examination. It will use rational reasoning, argumentation, emotion, and every other tool in its arsenal to appear as if it cannot be examined fruitfully, or to send seekers off after wild geese and red herrings. The above assertion is one such ploy.
In small, well-defined contexts, the concept of “good and bad” is perfectly clear and scientific. That which serves the function or goal is good, and that which gets in the way of the function or goal is bad. Good and bad only become unscientific or arbitrary when the function and/or goal is not well-defined. Hume’s “guillotine” is merely this complaint: that you must specify the function or goal. It is certainly not the uncrossable “is-ought” divide that many pretend/believe it to be (don’t forget that self-deception).
The biggest problem with “morality” is that, for the most part, there is no consensus top-level goal or function. Until such a goal or function is specified, even if only conditionally, “scientific” progress is simply not possible (and part of morality “likes”* it that way). So, obviously, the first necessary step is to define the goal or function of morality.
Fortunately, noted social psychologist Jonathan Haidt has done exactly that. He argues that the function or goal of morality is “to suppress or regulate selfishness and make cooperative social life possible”. If that definition is accepted (even if only conditionally), then scientific progress *is* possible. And that definition is anything but arbitrary, as it is the simplest description of what the human moral sense is trying to achieve (itself another area amenable to scientific investigation).
So why don’t we temporarily accept this definition and see where it leads us?
* Oh yes, I will be covering intentionality and desires in far too much detail shortly . . . 😉