Ethics

Buddhist philosophers like to talk about “two truths”, where one level of truth is conventional and provisional and the other level is absolute.

I find that distinction useful when thinking about utilitarianism.

There are many formulations of utilitarianism in ethics, but all of them have an optimization principle built in: maximize something such as pleasure or happiness, or alternatively, minimize something else such as pain or suffering. Both can be subsumed under a general principle: act in a manner that maximizes utility (which is mathematically equivalent to acting in a manner that minimizes negative utility).
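
To make that equivalence explicit (my notation, not anything from the utilitarian literature): the action that maximizes a utility function is exactly the action that minimizes its negation,

$$
\operatorname*{arg\,max}_{a \in A} U(a) \;=\; \operatorname*{arg\,min}_{a \in A} \big(-U(a)\big)
$$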

All other things being equal, once we agree on a course of action, we would want to pursue it in a manner that maximizes utility. For example, all other things being equal, if I am hungry and craving chicken soup, I eat chicken soup.

Unfortunately, all other things aren't equal. My pleasure in eating chicken soup causes some pain to the chicken that the soup comes from. Is my pleasure greater than the chicken's pain? I would think not, for I can probably find pleasure and nourishment in eating something else, but killing the chicken is an irreversible process. There's an alternative to eating chicken soup but no alternative to being alive.

In other words, utilitarianism helps us reason in contingent circumstances but is unlikely to deliver ultimate grounds for comparison. There's an obvious rejoinder: are there any ultimate grounds for moral judgment?

One answer lies in the historical trajectory of ethics, which mostly passes through religious communities. If one is a member of a religious or moral community that holds certain things sacred (e.g., "do not kill"), then that tradition delivers ultimate grounds: if I am a member of that tradition, I maximize utility subject to the constraint that I do not kill.
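
As a minimal sketch of what "maximize utility subject to a constraint" could mean in practice, here is a toy Python example; the actions, utility numbers, and the kills_a_being flag are all invented for illustration:

```python
# Toy sketch of "maximize utility subject to a sacred constraint".
# The actions, utilities, and the kills_a_being flag are made up for
# illustration; nothing here comes from the utilitarian literature.

actions = [
    {"name": "chicken soup",  "utility": 9, "kills_a_being": True},
    {"name": "lentil soup",   "utility": 7, "kills_a_being": False},
    {"name": "skip the meal", "utility": 2, "kills_a_being": False},
]

# Step 1: the sacred value acts as a hard filter, not a term in the sum.
permissible = [a for a in actions if not a["kills_a_being"]]

# Step 2: utilitarian optimization runs only over what remains.
best = max(permissible, key=lambda a: a["utility"])
print(best["name"])  # -> lentil soup
```

The design point is that the sacred value works as a hard filter rather than as just another term in the utility calculation: no amount of pleasure from the soup can buy back a violated constraint.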

Utilitarianism itself can't tell me where those ultimate principles come from. A bigger worry might be that there's no ultimate ground outside of a religious community, that all ethical principles are grounded in community-specific ideals that can't be defended as absolute truths.

If I think that eating chicken is bad because I am a vegan, and you think eating chicken is central to living a religious life in your community, we have no grounds for agreement. That is, until lab-grown meat becomes a good enough replacement for slaughtered meat, at which point our differences become those of taste rather than morals.

There's a deeper point lurking in the background: the tension between exploiting uncertainty and ensuring certainty. When it comes to sacred values, we want to ensure certainty: killing is bad, with exceptions only in extreme situations such as capital punishment (I don't agree with that one) or war (which I can imagine is unavoidable, especially if it's foisted on you).

Certainty-oriented reasoning doesn't do well with optimization and can't be modeled probabilistically. In contrast, uncertain reasoning is eminently suited to optimization and can be modeled probabilistically. If privacy is a sacred value, we don't want systems implementing it to be probabilistically secure; we want them to be provably secure. In contrast, if I am trying to sell a widget, I want to learn how to maximize my sales percentage, and probabilistic prediction is a perfectly sound way to reason.
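
Here is a hedged sketch of the widget-seller's side of that contrast, with made-up prices and conversion rates; "expected revenue" stands in for the sales percentage being maximized:

```python
# Sketch of the widget-seller's probabilistic reasoning. The pitches,
# prices, and conversion rates are invented numbers, as if estimated
# from past sales data.

pitches = {
    "discount": {"price": 8.0,  "p_sale": 0.30},
    "premium":  {"price": 15.0, "p_sale": 0.12},
    "bundle":   {"price": 20.0, "p_sale": 0.10},
}

def expected_revenue(pitch):
    # Optimize an expectation: price weighted by estimated probability.
    return pitch["price"] * pitch["p_sale"]

best = max(pitches, key=lambda k: expected_revenue(pitches[k]))
print(best, expected_revenue(pitches[best]))  # -> discount 2.4
```

Being wrong here just costs a little revenue; a privacy mechanism that fails with the same probability has failed absolutely, which is why the sacred-value case demands proof rather than estimation.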

Flourishing requires both forms of reasoning. Certainty is crucial as a bedrock – that's why human rights or animal rights are the right principles to adopt in foundational documents such as constitutions. At the same time, certainty is expensive and sucks up resources that could be used to benefit more beings.

Even if every tree is sacred, it might be better to water the forest from the air every day than to water a hundred trees by hand daily and kill the forest: in a forest of tens of thousands of trees, hand-watering a hundred a day means each tree gets watered only about once a year on average.

#Ethics