Alethic Utilitarianism
Like many other things on this blog, this is a little mental exercise, an experiment in thought. I'm not necessarily saying that I advocate the following ideas - in fact, I generally don't. But I thought it would be interesting to think through an idea that occurred to me - an idea for a moral framework, or normative ethical theory, as philosophers would call it. I'm not saying this is the best moral framework; in fact, I think it probably isn't. But so far as I know, no one else has put forward this framework - if someone has, I'm not aware of it - so I thought it would be interesting to explore.
I call this system "alethic utilitarianism". As regular readers of this blog may know, I'm not a utilitarian of any kind. But this is a form of utilitarianism distinct from all the other forms out there: act utilitarianism, preference utilitarianism, and so on. The "alethic" part is just a fancy Greek-derived way of referring to truth (from aletheia), and truth is the essence of this idea. Most utilitarians want to maximize the amount of pleasure in the world, or minimize the amount of pain, or minimize the amount of suffering (which is somehow distinguished from pain), or maximize the amount of happiness (which is distinguished from pleasure), and so on, in myriad forms of increasing sophistication and complexity. Alethic utilitarianism measures success quite differently: the goal is to maximize the amount of truth in the world.
What is truth? Of course that is a question that philosophers and non-philosophers have discussed for millennia, but for our purposes here I will be using a fairly simple (if not simplistic) definition. First of all, when I say "truth" I do not mean "reality". I'm not sure what it would even mean to increase or decrease the amount of reality. When I talk about "truth" here, what I mean is the conscious representation of reality - or, if you like, the representation of reality for consciousness. What I'm getting at is that, for an alethic utilitarian, the goal of all morality is to represent as much of reality as accurately as possible to consciousness - to know more, to learn more, to be aware of more, more clearly, more accurately, more precisely, more honestly. The ideal is for our representation of reality to be absolutely identical with reality itself. That may be unattainable, but it is the goal towards which we will ever strive.
I think this way of conceiving of utilitarianism avoids some of the pitfalls of other forms. For instance, there is the "antinatalist" argument: the South African philosopher David Benatar has argued that negative utilitarianism in the vein of Karl Popper (the idea that the goal of ethics should be the minimization of suffering in the world) leads logically and inevitably to the conclusion that humanity should become extinct - he advocates the voluntary decision not to have children, not mass murder. After all, there will be no suffering if no one exists to suffer. No life, no problem. (Apparently, Benatar thinks his argument holds even if you don't accept utilitarianism, but his statements on the matter make me think he has not worked through other ethical frameworks very thoroughly or systematically.)
In a way, Benatar is right: negative utilitarianism does lead inevitably to antinatalism. But this just shows the intellectual bankruptcy of negative utilitarianism; in this sense, it is essentially nihilistic. Alethic utilitarianism does not suffer from this problem, because its orientation is almost the inverse of negative utilitarianism's. If there are no conscious entities, then there is no conscious representation of reality - no truth - and so, for the alethic utilitarian, universal extinction would be the ultimate calamity, not the ultimate triumph. If we want to maximize truth, we need conscious beings.
There are other pitfalls of utilitarianism that alethic utilitarianism avoids. Two of them are discussed by the philosopher Robert Nozick. First, consider the Experience Machine: a machine Nozick imagines that could plug into your brain and feed you nothing but pleasant experiences. (The concept was parodied in Woody Allen's "Sleeper" as the "Orgasmatron" - which shares its name with a Motörhead song.) Even if, in reality, you were starving, sick, and wreaking destruction on other people, you would experience only joy. Would you plug yourself in? Should you? According to classical Benthamite utilitarianism, with its "hedonic calculus," you should - everyone should - since the goal of all morality is the maximization of pleasure. But again, this only demonstrates the inadequacy of classical utilitarianism.
Similarly, there is the Utility Monster: a being Nozick imagines who derives so much more pleasure from every resource than anyone else does that, by the logic of classical utilitarianism, the rest of us ought to sacrifice everything to it. Here too, the conclusion indicts the theory rather than recommending the sacrifice.
For a classical utilitarian, the goal is for conscious beings to experience the maximum pleasure. For a negative utilitarian, along the lines of Karl Popper, the goal should be for conscious beings to experience the minimum suffering. For an alethic utilitarian, the goal is neither of the above. Rather, the goal is for conscious beings to experience things exactly as they are, or as accurately as possible. The goal is not maximizing happiness, or minimizing suffering. The goal is maximizing consciousness.
A possible corollary, which we might call the Principle of Experience Maximization: if a conscious being is experiencing pleasure, an alethic utilitarian would want that being to experience the pleasure fully and clearly; if a conscious being is experiencing pain, an alethic utilitarian would want that being to experience the pain fully and clearly.