I’ve read a few critiques of consequentialism recently, and am starting to get pissed off. Not because I harbor an affinity for any particular brand of consequentialist morality, but because I believe we don’t have any other options.

What are morals, anyway?

As far as I can ascertain, morality is a construct of sentience. No morality detector exists. The universe just doesn’t care. Every single moral statement I have ever encountered has arisen from the mind of a human being. Moreover, the fact that it is possible to find two people who disagree on the morality of almost any action strongly suggests that if there is some moral code outside our own heads, we’re remarkably bad at listening to it.

I’d like to make one other observation, which is that it is rarely impossible to commit an immoral act. When it comes to horrifyingly evil deeds, if you can dream it, you can do it. We’ve neglected, insulted, demeaned, beaten, burned, quartered, impaled, and liquefied each other so many times that the experimental record clearly states: outside of physical constraints, the universe does not pick sides. You may remove your safety goggles now.

What do morals do? Well, we classify people as moral or immoral. We also classify actions taken by people as moral or immoral. I’d like to argue that it is completely impossible to ascertain an individual’s moral character without knowledge of the morality of their actions (including thoughts), and indeed, any carefully examined evidence as to the morality of a person will turn out to be founded purely on that basis. Moreover, if an evaluation of one’s character turns out to be significantly decorrelated with their actions, we’ll be forced to make awkward judgments wherein horrible people sometimes accomplish more good than decent people, and vice versa. This is not only useless, but also confusing.

If you know of a method for reliably ascertaining the moral status of a person which is not equivalent to some function operating purely on their actions, please let me know.

As for actions: there are some completely black-and-white moral systems, in which an event is either moral or immoral. These systems encounter significant difficulty in dealing with complex actions, such as running over little Suzie and Hitler with the same steamroller.

Most people I’ve encountered operate under the assumption that acts have varying degrees of “goodness”. A good proportion of those, if carefully questioned, would assert that goodness is a scalar quantity, not just a partial ordering. (If you agree, for example, that saving two lives is roughly twice as good as saving one life, you’re probably one of those.) Nobody I’ve met (with the exception of myself, on painkillers) has proposed a multivariate moral framework without a scalar norm.

Within these constraints, then, let a moral system be a function which maps every action to a scalar value–say, normalized in the range [-1,1]. Positive values correspond with good actions, and negative values with evil.
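
To make the formalism concrete, here’s a minimal sketch in Python. Everything in it is my own illustration; the point is only the shape of the function.

```python
# A moral system, in this formalism: any function from an action to a
# score in [-1, 1]. The Action alias and both examples are illustrative.

Action = str  # a stand-in for however you care to describe an action

def indifferent_universe(action: Action) -> float:
    """The universe's own moral function: it does not care."""
    return 0.0

def naive_pacifism(action: Action) -> float:
    """A toy example: all violence is equally evil, everything else neutral."""
    return -1.0 if "violence" in action else 0.0
```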

Deontological moral systems, in this language, select values based on the conformance of the action with all the rules for the system (“Don’t eat your sister’s lunch”, “Tell the truth whenever possible”, etc.) The ways in which these rules interact are entirely unconstrained; rules may be superseded by others, or count for more or less, depending on the action.

Consequentialist moral functions operate purely on the state of the universe.

Equivalence

As the state of the universe includes its history, a consequentialist framework can, in full generality, distinguish not only between outcomes (“Suzie is dead”), but also between any actions ever taken (“And Nefertiti was inadvertently responsible”).

Because deontological rules can take into account the universe state in addition to the action itself (“One may lie when a life is at stake”), they are free to distinguish between any configuration of the universe.

This means that, in general, deontological models are formally equivalent to consequentialist ones. For any deontological model, we can by careful construction of a consequentialist function come up with identical moral values for all possible situations–and vice versa.
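
Here’s a toy version of that construction, assuming (as above) that a situation carries its full history; the names and the sample rule are mine:

```python
from typing import NamedTuple

class Situation(NamedTuple):
    action: str   # what the agent did
    history: str  # everything else about the universe, past included

def deontological(s: Situation) -> float:
    """Score the action against rules, consulting context as needed."""
    if s.action == "lie":
        return 0.5 if "life at stake" in s.history else -0.5
    return 0.0

def consequentialist(s: Situation) -> float:
    """Score the resulting universe state. Because that state records its
    own history (including who did what), it determines the action too, so
    a function of the state alone can reproduce any rule-based score."""
    return deontological(s)

s = Situation("lie", "life at stake")
assert deontological(s) == consequentialist(s)
```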

When faced with functionally identical models, we may only decide between them on ideological grounds: parsimony, tradition, beauty, efficiency, and so forth. It is the first of these criteria which I find the most compelling.

Parsimony

In a nutshell, parsimony means simplicity. A model is maximally parsimonious when it requires the fewest entities (or equivalently, calculations).

f(x) = (|x|)^2
g(x) = x^2

Which is preferable? Both produce identical results for real x. Yet f(x) is more complicated. It has introduced an absolute value operation which has no effect, but retains the illusion of meaning.

Consider two simple moral functions for the Ticking Time Bomb scenario, where you must torture one person to disarm a bomb and save five.

d(x) = +1/5 for saving a life. -1/9 for torturing a man, unless doing so to save human life.
c(x) = (+2/9 for each life extended) + (-1/9 for a man experiencing torture).

d(x) is rule-based. c(x) is consequentialist. The two are functionally identical over the actions available in this scenario, but note that the deontological expression is more complex. We have introduced a qualifying clause to allow the deontological system to account for a situation in which a simpler rule system could produce non-optimal results.

Now consider a more diverse action space. Perhaps it is possible to save two out of five by throwing a bucket of adorable puppies onto the tracks. We can amend the consequentialist model by introducing a single new term, say, -1/9 per puppy flattened. The deontological model, by contrast, requires the introduction of more complex rules: animal cruelty is bad except when it’s used to save human lives. Torture is OK when animals’ lives are at stake, and so forth.
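
As a sketch, here are both models in code, using the weights above; the Outcome encoding and its field names are my own invention:

```python
from typing import NamedTuple

class Outcome(NamedTuple):
    lives_saved: int = 0
    people_tortured: int = 0
    puppies_flattened: int = 0
    torture_saved_lives: bool = False  # context the rule system consults

def c(o: Outcome) -> float:
    """Consequentialist: one weight per component of the outcome state.
    The puppy clause cost a single new term."""
    return (2/9 * o.lives_saved
            - 1/9 * o.people_tortured
            - 1/9 * o.puppies_flattened)

def d(o: Outcome) -> float:
    """Deontological: each rule needs its own exception clauses, and
    every new kind of action breeds more of them."""
    score = 1/5 * o.lives_saved
    if o.people_tortured and not o.torture_saved_lives:
        score -= 1/9 * o.people_tortured
    score -= 1/9 * o.puppies_flattened  # ...except to save lives, except...
    return score

# The Ticking Time Bomb: torture one man, save five. Both yield 1.
bomb = Outcome(lives_saved=5, people_tortured=1, torture_saved_lives=True)
assert abs(c(bomb) - 1.0) < 1e-9 and abs(d(bomb) - 1.0) < 1e-9
```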

Now imagine a global moral function: one which maps every possible action to a moral value. Consider, briefly, the number of exceptions and special cases required to formulate a complete rule-based system which yields “reasonable” results. Consider what would drive your creation of those rules: on what basis do you decide how the exceptions work? What principles allow you to decide? Would you rely on tradition? The Bible? What would you do, if presented with a situation which had no exceptions codified? Blindly follow the closest rules possible?

I contend that consequentialist frameworks can satisfy the “reasonability constraint” with fewer exceptions and special cases by careful weighting of various components of the outcome states, and moreover, that those weights are what underlie our moral rules in the first place.

So what do you believe, anyway?

My personal moral function is the sum of the correspondence of the objectives of every living being with their subjective experience, integrated over their lifetimes, nonlinearly weighted by sentience.

This function is maximized for an individual when they are self-actualized, living full lives, in concordance with their personal objectives. Eudaimonia might be a good word. It is nonlinear; if any individual lacks basic needs or self-determination, it is strongly negative. It also considers the full lifecycle: incurring temporary unpleasantness to ensure long-term fulfillment is acceptable. It also gives the strongest consideration to human beings and similar sentience, and less to the well-being of cats, individual insects, etc.
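
Read literally, that amounts to something like the following; the formalization is my own, with w(s_i) the nonlinear weighting on the sentience of being i, and c_i(t) the correspondence between that being’s objectives and its subjective experience at time t:

```latex
% My own formalization of the function described above; the symbols
% are illustrative, not the author's notation.
M = \sum_{i \in \text{beings}} w(s_i)
    \int_{\text{birth}_i}^{\text{death}_i} c_i(t) \, dt
```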

A function like this leads to surprising differences with traditional deontological systems. Gay marriage, for example, is preferable because it greatly increases the fulfillment of some couples while imposing minimal constraints and unhappiness on others. Imprisonment and murder are morally correct where doing so has significant positive consequences, such as the prevention of damage or death. Assisted suicide is acceptable when the individual’s continued life would produce great unhappiness in themselves and those around them–greater than the loss their death would induce. Forced population constraints are acceptable to prevent resource exhaustion and the associated wars or famines in the future. Lying is correct to prevent greater injustice, balanced against the risks of a cultural norm of distrust.

Naturally, it’s impossible to fully evaluate this function; many of the terms are undefined or impossible to measure. But to a large extent our actions are local, and I have found it to be a usable heuristic in approximation. Moreover, its terms are subject to inquiry: one can ask the parties involved what they really want, whether they are happy now, and how things could change. Framing the problem in terms of a desired world state allows for certain creative escapes from rule gridlock.

Where were we going with this, again?

Oh, right. Probabilistic consequentialism. There’s a classic critique of consequentialist morality:

Johnny Dangerously speeds on the highway, but hits nobody and causes no accidents or ill feelings. Consequentialism states that since no harm occurred (and indeed Johnny saved time), Johnny’s decision to speed was moral.

Which brings us full circle. Consequentialism, as mentioned above, operates on the full state of the universe. If you are a strict determinist, there is only one moral value for the universe we’re in, and we’re stuck with it. Johnny D’s decision to speed is irrelevant, because he couldn’t have hit anybody if he tried. (Strictly speaking, he couldn’t have tried any more or less either.) All “decisions” are equally moral, if they can even be said to exist. This is a natural consequence of determinism!

If, on the other hand, you believe we have the ability to change the universe, then decision-making allows us to choose between various moral outcomes. Johnny’s decision to speed selected a universe with different consequences. Of course, Johnny didn’t know which universe he was choosing. Any number of events were possible; some involving accidents, some not. If you want to assign a moral value to his decision, simply integrate over all possible universes, weighted by their probability of occurring. Since some of those universes include fatal accidents, the overall integral for the speeding decision is lower, and the decision less moral.
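
Discretized, that integral is just an expected value. A sketch, with the outcome probabilities and moral values invented out of thin air:

```python
def expected_moral_value(outcomes: list[tuple[float, float]]) -> float:
    """outcomes: (probability, moral value of that universe) pairs."""
    assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-9
    return sum(p * v for p, v in outcomes)

# Johnny's decision to speed, under made-up numbers:
speeding = [(0.99, 0.01),   # arrives a little early; nothing happens
            (0.01, -1.0)]   # causes a fatal accident
not_speeding = [(1.0, 0.0)] # nothing notable happens

print(expected_moral_value(speeding))      # about -0.0001: lower, less moral
print(expected_moral_value(not_speeding))  # 0.0
```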

We know the universe is fundamentally probabilistic, which means that in a very real sense, these sorts of integrals over possible universes must be taken into account in every situation. For example, Schrödinger’s cat experiment involves a cat in a half-and-half superposition of dead and alive. Placing the cat in a box like that is half as bad as killing it outright! Luckily for us, quantum superpositions don’t often remain coherent in ethical dilemmas.

The same method may be used to evaluate subjective moralities. If we know that an individual has a 50% likelihood of having built up an immunity to Iocane powder, feeding that powder to him is not as bad (subjectively speaking) as feeding it to him while considering immunity inconceivable. Our predictive ability then influences the morality of our actions! And wouldn’t you agree that brandishing a knife in a darkened kitchen is worse than waving it about when you can see?
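
The same expected_moral_value sketch from above covers the subjective case; just feed it the agent’s own probabilities (again, invented numbers):

```python
# Reuses expected_moral_value from the speeding sketch above.
# Feeding him the Iocane powder, scored under two states of knowledge:
knows_immunity_is_likely = [(0.5, -1.0),  # he dies
                            (0.5,  0.0)]  # he's immune; no harm done
considers_immunity_inconceivable = [(1.0, -1.0)]

print(expected_moral_value(knows_immunity_is_likely))          # -0.5
print(expected_moral_value(considers_immunity_inconceivable))  # -1.0
```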

Consequentialism. It works.
