Posted by: Becoming Gaia | May 23, 2010

The Science of Morality, Part I: You *CAN* Derive ‘Ought’ From ‘Is’


A good summary of the current state of play can be found in Russell Blackford’s blog entry describing how, following his recent TED talk, Sam Harris has engaged Sean Carroll (and others) in an interesting dialogue on “Science and Morality” and whether or not “Science can answer moral questions”.

Skipping the obvious question of “If the scientific method can’t help us answer moral questions, what method should we use?”, I would like to focus this post on answering Sean Carroll’s defense of the common interpretation of David Hume’s Is-Ought problem.

It seems to me that Carroll and Harris have gotten off track with their discussion of “well-being”.  I agree entirely with Carroll’s argument that “there is no single definition of well-being”, but would argue that this is irrelevant.  There is a single, simple definition of another concept that can be used to define and derive morality via the scientific method.

Carroll starts with two assumptions:

  1. Human beings seek to maximize something we choose to call “well-being” or “utility” or “happiness” or “flourishing” or something else.  The amount of well-being in a single person is a function of what is happening in that person’s brain, or at least in their body as a whole.
  2. That function can in principle be empirically measured. The total amount of well-being is a function of what happens in all of the human brains in the world, which again can in principle be measured.  The job of morality is to specify what that function is, measure it, and derive conditions in the world under which it is maximized.

I propose modifying them very slightly:

  1. Human beings seek to maximize something I choose to call “goal-fulfillment”.  They can choose to pursue goals ranging from experiencing pleasure to saving the world (i.e. “well-being”, “utility”, “happiness”, “flourishing”, and the rest can all be subsumed under goal-fulfillment).  The amount of goal-fulfillment in a single person is a function of what actually occurs vs. what they wanted and how badly they wanted it.
  2. If you know what a person wanted, how badly they wanted it, and the current reality, you have an empirical measurement of current goal-fulfillment.  Similarly, the total amount of goal-fulfillment is the same function taken over all goal-seeking entities.  The job of morality is to derive the conditions in the world under which it is maximized (a rough sketch of such a measurement follows this list).
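
To make the measurement claim in point 2 concrete, here is a minimal sketch of one way “goal-fulfillment” could be operationalized.  The 0-to-1 scales, the weighting by intensity, and every name below are illustrative assumptions on my part, not a fixed proposal:

```python
# A rough, hypothetical operationalization of "goal-fulfillment" (illustrative only).
from dataclasses import dataclass

@dataclass
class Goal:
    description: str
    intensity: float   # how badly the entity wants it (assumed 0..1 scale)
    satisfied: float   # how far current reality fulfills it (assumed 0..1 scale)

def goal_fulfillment(goals):
    """Fulfillment for one entity: satisfaction weighted by how badly each goal was wanted."""
    return sum(g.intensity * g.satisfied for g in goals)

def total_fulfillment(entities):
    """The same function aggregated over every goal-seeking entity (here, a plain unweighted sum)."""
    return sum(goal_fulfillment(goals) for goals in entities.values())

# Example: two entities, each with their own goals and intensities.
world = {
    "alice": [Goal("finish thesis", intensity=0.9, satisfied=0.5),
              Goal("eat well", intensity=0.4, satisfied=1.0)],
    "bob":   [Goal("save the wetlands", intensity=0.8, satisfied=0.2)],
}
print(total_fulfillment(world))  # the quantity morality would seek to maximize
```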

These simple modifications, however, provide answers to all three of his arguments:

  1. There is no single definition of well-being
  2. It’s not self-evident that maximizing well-being, however defined, is the proper goal of morality
  3. There’s no simple way to aggregate well-being over different individuals.

In simplest form, my replies are as follows:

  1. There is a single definition of goal-fulfillment.
  2. It is self-evident that maximizing goal-fulfillment is the proper goal of morality because the consequences of that assumption directly lead to behaviors that mirror the current understanding of moral behavior.
  3. It is simple to aggregate goal-fulfillment over different individuals.  There is admittedly contention over the specific aggregation algorithm (mostly over whether different individuals should receive preferential weighting), but the evaluation itself is conceptually simple.  That contention, too, can be answered by following through the consequences of the assumption above (a few candidate aggregation rules are sketched after this list).
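
To make reply 3 concrete, here is a brief illustration of a few of the candidate aggregation rules people actually argue over.  The rule names, weights, and numbers are hypothetical examples of mine, not a claim about which rule is correct:

```python
# Illustrative only: three simple ways to aggregate per-entity goal-fulfillment.
# Which of these is the "morally" correct rule is exactly the open contention noted above.
def aggregate_total(scores):
    """Unweighted sum over individuals (classical-utilitarian flavour)."""
    return sum(scores.values())

def aggregate_weighted(scores, weights):
    """Preferential weighting: some individuals' fulfillment counts for more."""
    return sum(weights.get(name, 1.0) * s for name, s in scores.items())

def aggregate_maximin(scores):
    """Judge a state of the world by its worst-off goal-seeker (Rawlsian flavour)."""
    return min(scores.values())

# Hypothetical per-entity fulfillment scores.
scores = {"alice": 0.85, "bob": 0.16}
print(aggregate_total(scores))                    # 0.85 + 0.16
print(aggregate_weighted(scores, {"bob": 2.0}))   # 1.0*0.85 + 2.0*0.16
print(aggregate_maximin(scores))                  # min(0.85, 0.16)
```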

I realize that my contention that taking goal-fulfillment as the proper goal of morality leads directly to behaviors mirroring the current understanding of moral behavior is, as yet, unproven, but I wish to stop here to receive feedback on whether the argument is solid so far.



Responses

  1. This is preference utilitarianism – and sorry, it’s not at all obvious that that’s the “proper” goal of morality.

    I think it would be closer to say that the proper goal of morality is to get us to act in a way that is likely to avoid disaster. Morality sets some constraints on how we act that are not too burdensome for creatures like us – if they were, they would be unacceptable to us and impractical to enforce – but which make it likely that we’ll get along without too many disasters occurring if we follow them. E.g., we compete with each other in all sorts of ways, but within constraints that also allow us to live in peace and cooperate.

    But that’s still an approximation. I actually doubt that there’s a single goal or anything that we can call a “proper” goal. There are just various goals that we can ascribe. Fortunately, we are enough alike that we might end up with rough agreement about what we want from a moral system (e.g. that it is more likely to avert disasters than create them). If we have that sort of rough agreement on what we want, we might also be able to get some data on what sort of moral system is most likely to deliver it.

  2. [...] Utilitarianism” vs. “Preference Utilitarianism” Russell Blackford has helpfully commented that he believes my position to be preference [...]

  3. [...] Science of Morality, Part II: Universal Subgoals In Part I, I made the claim that “It is self-evident that maximizing goal-fulfillment is the proper [...]


Leave a Reply

Fill in your details below or click an icon to log in:

WordPress.com Logo

You are commenting using your WordPress.com account. Log Out / Change )

Twitter picture

You are commenting using your Twitter account. Log Out / Change )

Facebook photo

You are commenting using your Facebook account. Log Out / Change )

Google+ photo

You are commenting using your Google+ account. Log Out / Change )

Connecting to %s

Categories

Follow

Get every new post delivered to your Inbox.

Join 52 other followers

%d bloggers like this: