Clifford’s Law

Phil Hansten | June 27, 2013

"Leucocephalus" Phil Hansten

“Leucocephalus” Phil Hansten

Some people find Clifford’s Law disturbing and counterintuitive. Its inventor, William Clifford, doesn’t care what you think. Clifford’s Law, for example, leads to the startling conclusion that the 2003 invasion of Iraq would have been a terrible decision even if we had found huge stockpiles of weapons of mass destruction, fully functional and ready for use. This sounds like nonsense, but is it?

Of course, William K. Clifford (1845-1879) weighs in on the issue from the safety of the nineteenth century, but his argument is, in my opinion, impeccable (I’ll explain in a minute). William Clifford was a gifted British mathematician who is perhaps better known today for a philosophical essay he published in 1877 entitled “The Ethics of Belief” (available on the Internet) in which he explored the conditions under which beliefs are justified, and when it is necessary to act on those beliefs.

In his essay, Clifford proposes a thought experiment in which a ship owner has a vessel that is about to embark across the ocean with a group of emigrants. The ship owner knows that—given the questionable condition of the ship—it should probably be inspected and repaired before sailing. But this would be expensive and time-consuming, so the ship owner gradually stifles his doubts, and convinces himself that the ship is seaworthy. So he orders the ship to sail, and as Clifford remarks, “…he got his insurance money when she went down in mid-ocean and told no tales.”

At this point Clifford asks the easy question of whether the ship owner acted ethically, to which only sociopaths and hedge-fund managers would answer in the affirmative. But then Clifford asks us a much thornier question: What if the ship had safely crossed the ocean, not only this time but many more times as well? Would that let the ship owner off the hook? “Not one jot,” says Clifford. “When an action is once done, it is right or wrong for ever; no accidental failure of its good or evil fruits can possibly alter that.” This is “Clifford’s Law” (a term I made up, by the way).

Clifford recognized that we humans are results-oriented: we are more interested in how something turns out than in how the decision was made. But bad decisions can turn out well, and good decisions can turn out poorly. For Clifford, the way to assess a decision is to consider the care with which the decision was made. Namely, did the decider use the best available evidence, and did he or she consider that evidence rationally and objectively? These factors are what make the decision “right or wrong forever.” What happens afterward is irrelevant in determining whether or not it was an ethical decision.

Just like the ship owner, the people in power made the decision to invade Iraq based on what they wanted to be the case rather than what the evidence actually showed. Even if, by some strange combination of unlikely and unforeseen events, the invasion of Iraq had turned out well—hard to imagine but not impossible—the invasion still would have been wrong. Clifford is saying (or would say if he were still alive) that even if they had found weapons of mass destruction, the invasion would still have been a bad decision, because the best evidence clearly suggested otherwise. The decision was “wrong forever” on the day it was made, no matter what the outcome.

Occasionally, one of my students complains about Clifford’s Law. They study drug interactions, and Clifford’s Law implies that even though a particular drug interaction may cause serious harm in only roughly 5% of the patients who receive the combination, if they ignore that interaction in dozens of patients, they are just as wrong for the many patients who escape harm as they are for the one who has a serious reaction. “No harm, no foul” may be legally exculpatory, but it does not let you off the ethical hook.

Clifford’s Law applies to almost any decision, large or small, provided that the decision affects other people. With climate change, for example, there is overwhelming evidence that we need to act decisively and promptly. If we do nothing about climate change, but through an “accidental failure of evil fruits” there are no serious consequences, we are no less wrong than we would be if our inaction resulted in a worldwide catastrophe. The outcome is irrelevant. We long ago reached the threshold for decisive action, and our failure to act is “wrong forever” no matter how it turns out in the long run. So we don’t have to wait decades to find out who is right or wrong… we know already, and it is the climate change deniers.

Some powerful people have exercised their predatory self-interest to prevent substantive action on climate change. If they continue to succeed and no catastrophe occurs—not likely but possible—the victorious bleating of the deniers will, of course, be unbearable. But given the stakes for humanity, the cacophony would be music to my ears because it would mean that we avoided disaster. A more likely outcome, unfortunately, is continued lack of action followed by worldwide tragedy. Unlike the tragedies of old, however, there will be no deus ex machina to save us… we will be on our own.

4 Comments

  1. Climate change denial has been going on for about a quarter of a century now. I have a specific marker: as a young man I questioned my thoughts on the issue the day after watching a BBC programme. It’s totally related to death denial, make no mistake.

    • I think you are right about the relationship between death denial and climate change denial. It seems to me that it has to be something deep and visceral (like death denial) for people to so completely disregard the overwhelming scientific evidence on climate change. I’m sure there are also ancillary motives that contribute, and some people just believe what they are told on certain cable TV stations.

      • I dunno who said this, but hat tip to Corey Anton for quoting it:

        “People prefer problems they just can’t solve to the solutions they just don’t like”

        Ain’t that the truth.

  2. Good posting! I’m going to be nerdy and pick at the edges, however. In the section I teach on social work ethics, we consider Clifford’s Law (thanks for the background – I was a bit shaky on it, and I think some of my students probably assume it refers to Clifford the Big Red Dog…) We test it in the context of “informed consent.” In this perspective, the thing that makes the shipping magnate’s actions unethical is that he is assuming all of the benefits but sharing none of the risks (like so many “financial advisers,” he gets paid either way). Had he stated clearly to his passengers what the problems with the vessel were, and then given them a choice—1. Sail “as is” for $X/passage, or 2. Have all of the problems fixed before sailing, at $X+/passage—then the prospective passengers could decide for themselves how much, and in what form, they wanted to absorb the risks. One might also imagine forms of this decision in which the families of passengers would share in the insurance money if the ship went down, etc., which might affect the balance of a decision between 1 and 2 (they might logically assume more risk if they know their families are provided for in case of disaster), but the moral/ethical issues remain pretty much the same. The immorality of the shipper’s action lies not in the decision itself (sailing or not sailing) but in the fact that he used “insider information” to profit at the expense of those from whom he was withholding that information.
    PS: Early in this posting you posit a clear equivalence between sociopaths and hedge fund managers. I might quibble with that (HFMs’ business model is often largely finding and exploiting information imbalances exactly like the one outlined here), but that is a discussion all its own!
