Why Moral Relativism is False

But not for any reasons that would comfort a Metaphysician.

This post is only a rough draft.  I would appreciate as much feedback as possible.

Did I ask too much of moral philosophy? This is the question I’ve been asking myself.  Maybe I was so angry and disgusted with its failure to provide moral claims that could be logically demonstrated to, and somehow “binding” on, all rational agents that I gave up on moral philosophy without appreciating what it can do.

My argument is that if anyone has ever changed their moral beliefs (not their meta-ethical beliefs, but their moral beliefs) by reading a work of philosophy, then moral relativism is false.  If anyone has ever read Rawls and thought, “I guess we should do more to help the disadvantaged”, or read Nozick and thought, “Huh, maybe there is something to this libertarianism thing”, then moral relativism is false.

This may seem like a strange claim.  But by “moral relativism” I don’t so much mean a philosophical doctrine as an attitude.  The moral relativist says:

“You have your values, and I have mine.  There’s no use having a debate about them.  You’re not going to convince me that my values are wrong with an abstract argument the way you might convince me that the Earth orbits the sun with an abstract argument.  Don’t you know that no one has provided rational foundations for morality? Trying to convince me with an argument on this subject is as silly as trying to convince me that I should change my favorite ice cream flavor with a rational argument.  Have a nice day.”

(By the way, I will also argue that it is not silly to try to use a rational argument to get someone to change their favorite ice cream flavor).

This form of moral relativism, I will argue, relies on a crude belief-desire theory of decision-making that is empirically false.

Belief-desire theory is straightforward: people have desires that they act to satisfy, and beliefs about the world.  All reasons for actions are combinations of such beliefs and desires.  I desire to stay healthy; I believe that I need to eat nutritious food to stay healthy; therefore, I have a reason to eat nutritious food.  All decision-making consists of causal reasoning about how to best fulfill a given set of desires.

I have two objections to belief-desire theory:

1. Human deliberation and decision-making include a lot more than just causal and instrumental reasoning.  Other important forms of deliberation include imagining, weighing, evaluating, and comparing.

Imagine if we really were crude belief-desire machines.  No one could ever be persuaded, converted to a cause, or change their mind after receiving advice — unless it was about strictly factual issues.  We would just accept other people’s preferences as given, and never try to implore people to consider “how would you like it if someone did that to you?”, or “what if everyone did that?” The concepts of acquired taste, moral education, Lakoffian framing, and Heideggerian “dwelling poetically” would be nonsensical.  If I were to write a book about this, I would flesh out a detailed psychological account of human decision-making.  Since I’m not, I’ll just posit that our deliberations involve much more than causal reasoning.

2. Many of our “desires” are commitments to moral ideals that are at least partially defined through such a deliberative process, and do not exist antecedent to such deliberations.

For example, imagine a soldier who is motivated by a certain conception of personal honor.  This conception of honor involves loyalty to country and fellow soldiers, not killing civilians, and following orders efficiently.  This soldier is then faced with a situation they have never encountered before — maybe they need to torture someone to get information, or disobey a direct order in order to avoid killing innocent people.  In this situation, what does it mean to act honorably? There might be no clear answer.  The soldier will have to engage in a process of deliberation that might include exercising empathy, looking for inspiration from heroic historical figures who faced similar situations, or applying abstract tests like the categorical imperative to their actions.  The point is that the soldier’s desires don’t fully exist until they engage in this internal debate — and if this debate can happen within one person, why not among many persons?

So those are my criticisms of belief-desire psychology.  Given these criticisms, I claim that moral philosophy can be valuable.

In my humbler version of moral philosophy, philosophy would be conceived as a form of persuasive dialogue, rather than a truth-seeking enterprise.  Philosophy would articulate and develop the standards and reasons why we consider actions right or wrong, while acknowledging that these reasons are ultimately rooted in the contingencies of human psychology, history, culture, and various final vocabularies.  This wouldn’t help us stigmatize the amoralist as irrational, but I don’t think that project is really all that valuable.  Convincing the Hutus that genocide violates some requirement of rational consistency probably wouldn’t have saved any Tutsis.  The real issue is how to create consensus and resolve conflict among reasonable people who have differing opinions but agree on certain broad parameters of debate.  For example, how much should we value freedom when it conflicts with equality, or privacy when it conflicts with security?

One important implication of my argument is that pragmatism is not an excuse to avoid moral debates.  Moral debates aren’t (or don’t have to be) silly, irresolvable excursions into metaphysics.  Instead, they are the way we figure out, however imperfectly, what to do about the important and inevitable conflicts and problems in human social life.

I now want to consider two objections to my argument:

1. The biggest problem, as far as I can tell, is the inevitability of distinction-drawing/intuition-contriving.  Take Peter Singer’s argument in The Life You Can Save.  He gives the thought experiment of seeing a baby drowning in a pool — surely, he says, you have a moral obligation to save the baby, even if it means ruining your pair of shoes.  From there, he argues that you have an obligation to give away all of your income above subsistence levels to help suffering people in third world countries.  How? Because, he says, geographic proximity is morally arbitrary.  Valuing the life of a baby right next to you more than a baby in Africa is irrational.  The problem with this argument is that someone can say that geographic proximity does matter, for the same reason that the suffering of the baby in the original thought experiment matters — because we have an intuition that it does.  I don’t see how Singer can get out of this criticism.  This is a big problem: anytime someone wants to convince you of a moral claim, you can always draw a new distinction or contrive another intuition that allows you to answer it.  As of yet I don’t have a solution to this problem.

2. The other criticism is that if philosophy can’t prove anything, why not just replace it with literature? If you want to convince people that we should do something about poverty, give them Dickens instead of Rawls.  I have three responses:

-This is ultimately an empirical question.  Some people might be persuaded by literature, others by philosophy.
-Literature and philosophy don’t have to be substitutes.  They might be complements.
-Literature focuses on particular individuals, and doesn’t allow you to step back and analyze and evaluate a complex system composed of millions or billions of people.  There are important ethical issues (involving climate change, economic growth, health policy, etc.) that are just too large for literature to be useful.

Thoughts?


One Response to Why Moral Relativism is False

  1. Ross says:

    I largely agree.

    A few comments which may or may not be useful:

    1. I think belief-desire psychology may be consistent with your argument. Why not view literature and philosophy as two different means of changing desires?

    2. I’m not sure if the model of moral deliberation you describe is best described as “philosophy.” Most moral conundrums strike me as political issues rather than philosophical ones; for example, I would be more comfortable with political scientists deciding whether torture is a good idea than philosophers. To the extent that philosophers are involved in debates about torture, I think it should probably be Foucault-type social philosophers (i.e. the mindset which allows for torture allows for other bad things) rather than traditional moral philosophers (“waterboarding is, like, totally fucked up. I mean, COME ON.”)

    3. I think some people might consider this post a straw man. I tend to label myself a moral relativist but I don’t have any real disagreement with the moral model you describe. I think a lot of people who label themselves moral relativists (but not all) would agree.

    4. One good example of the phenomenon you describe: I currently don’t have any moral qualms about eating meat. My limited exposure to criticisms of the meat industry, though, leads me to believe that if I read a book or two on why that shit is fucked up I would probably become a vegetarian.

    Here’s a question: do you think that in any sense –

    A. I SHOULD read those books?
    B. I have an OBLIGATION to read those books?
