Against Lexical Suffering Focused Utilitarianism
Or, Against 'Negative Utilitarianism With Extra Steps'
Famously, Utilitarianism commits you to counterintuitive, and sometimes icky-feeling, results — as even the most hardline, bullet-biting Utilitarian would admit. In fact, one of the central projects for Utilitarian philosophers is figuring out how to stop the darn thing from making you agree to increasingly unappealing versions of the repugnant conclusion1.
Just so that we can all be on the same page2, as a refresher, Utilitarian theories all share the following four aspects:
Consequentialism — the view that one ought always to promote overall value.
Welfarism — the view that the value of an outcome is wholly determined by the well-being of the individuals in it.
Impartiality — the view that a given quantity of well-being is equally valuable no matter whose well-being it is.
Aggregationism — the view that the value of an outcome is given by the sum value of the lives it contains.
Because of this, especially aggregationism and welfarism, Utilitarianism sometimes commits itself to preferring outcomes with very large amounts of suffering, so long as there is enough total positive wellbeing present as well. A large negative number plus an even larger positive number is still positive, and Utilitarianism says that if the resulting number is positive then it’s morally okay. This is sometimes referred to as the Very Repugnant Conclusion.
However, as some have argued3, we might think that some instances of suffering are so bad they cannot be worth any reward, no matter how great — at least if you are a self-interested individual. This is an intuitive perspective — it’s very hard for me to imagine what 1,000 years of bliss looks like, but easy to understand, and empathize with, how horrific being tortured, or slowly dying, might be. Just as there’s no number of zeros you could add to my checking account that would ever make me trade away a loved one, it’s almost inconceivable that I’d willingly accept extreme suffering, no matter what you offered me in return.
But, seeing as you’re no doubt someone obsessed with ethics, you might then extrapolate from this insight. You might think, ‘well, it would be weird if my unwillingness to suffer didn’t imply anything about my broader ethics.’ If individuals in their own lives could be offered “trades” that should in theory increase their total wellbeing — that is, joy-minus-suffering — which they would nevertheless refuse to make, this should inform how we think about our ‘ethical calculus’.
Given I’m so often surrounded by Utilitarians, we’ll explore this in the context of that ethical theory. How do we combine Utilitarianism — a view famously founded on the idea of making trade-offs — with the idea that some things can have too great a cost to pay?
I will refer to the resulting family of ethical theories as Lexical4 Suffering Focused Utilitarianism5.
What is Lexical Suffering Focused Utilitarianism?
A lexical ethical system is one where some ethical principles are categorically more important than others. While happiness might be morally important, a lexical ethics might say ‘before we concern ourselves with maximizing happiness, we must first ensure society is truly just.’ Such a view holds that ensuring justice always takes precedence over all other considerations.
To combine this with the definition of Utilitarianism, and to put it in plainer English: Lexical Suffering Focused Utilitarianism is the view that some suffering is so bad it can never be ‘offset’, full stop, while other suffering can be offset by positive wellbeing.
The idea of offsetting (in Utilitarianism) follows from the aggregationism principle — imagine I offer you $1,000, but you have to agree to being punched (fairly hard). All the fun, enjoyable things you can buy with the $1,000 “offset” the suffering of being punched. While it’s still bad to be punched, and this suffering isn’t erased or anything, it is still preferable, ethically speaking (if you’re a utilitarian), to take this trade6.
As another example, you could imagine forgetting someone’s birthday, which obviously makes them sad — we’ll say it’s 10 sadness7 — but you then write an especially thoughtful card to them, which brings them 10 happiness. Writing the card “offsets” the sadness you caused by forgetting their birthday: 10 happiness - 10 sadness = 0.
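To put that bookkeeping in symbols (a minimal sketch, where h and s stand for each person’s happiness and suffering in the same made-up units as above):

$$V \;=\; \sum_i \left(h_i - s_i\right), \qquad V_{\text{birthday}} = 10 - 10 = 0.$$

An action ‘offsets’ a harm just when it brings this sum back up to at least where it would have been had the harm never occurred.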
Again, this isn’t to say that the suffering ‘doesn’t happen’ — it was still bad, and caused bad things, to forget someone’s birthday. But in the end, everything worked out okay. Offsetting in ethics is just a word for the idea that there are some actions you could take to ‘make up for’ a bad thing, or ‘let everything work out okay’, though of course what those actions are depends on the specific bad thing that happened. In a deontological or virtue-ethical system, this might only apply to specific situations — you can’t offset someone else forgetting their friend’s birthday; only the person who forgot can make things right. Utilitarian theories are aggregative, so all that matters is the end result. This can make it ethical for you to make trades where you allow one person to suffer so that some other people are brought more total joy — even if these people are totally unrelated8.
Lexical Suffering Focused Utilitarianism says there is some sadness, or some pain, so bad that nothing could ever be worth enduring it. There is no amount of happiness that can ever offset some kinds of suffering. We would never trade someone’s extreme suffering for the joy of others, just as we would never make that trade ourselves9.
Glossary:
Because this is abstract analytic philosophy, it can be confusing to talk about, and often it is easiest to rely on a little bit of math — or at least some acronyms. I assure you, I was not very good at math when I studied it in college (which is why I only have a minor), so you don’t need to be either.
LSFU - Lexical Suffering Focused Utilitarianism
ES - Extreme Suffering
BNES - Barely Not Extreme Suffering
δS - Delta S; the difference in suffering between ES and BNES, that is (ES - BNES) = δS
The Mugging Objection:
Or, the super very repugnant conclusion
Under standard Utilitarianism, value is taken to be ‘welfare’, and all welfare is aggregatable, and therefore comparable. One way to think about a lexical welfarism is that it creates different kinds of wellbeing — which I’ll call Extreme Suffering and Non-Extreme Suffering (or joy). For simplicity’s sake we’ll assume there are only two kinds of lexical wellbeing — extreme and non-extreme — but you could obviously have a view where there are many types10.
Because this is a lexical view, it will always be preferable to prevent Extreme Suffering — it’s just a categorically different thing.
As we’ll see, this is actually a specific formulation of LSFU, which I’ll call Intensity Concerned LSFU. This is the view that some types of suffering, specifically Extreme Suffering, are categorically more important than other suffering, or pleasure, and that Extreme Suffering cannot be offset.
However, because this view is both lexical in nature and draws a distinction between kinds of suffering (between Extreme and Non-Extreme), it becomes susceptible to philosophical mugging11.
The mugging objection is as follows:
World A: a world with 1 billion people, each of whom experiences BNES.
World B: a world where one person experiences ES.
Under the LSFU view, World A is always preferable to World B. This is true no matter how many people are in World A, and you can continue to increase the population experiencing BNES — no matter how many people you add it will always be lexically more important to prevent an instance of Extreme Suffering12.
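If it helps to see the structure laid bare, here is a minimal Python sketch of that comparison. Representing a world as a pair of aggregates, and comparing those pairs lexicographically, is my own illustrative modeling choice, not something a proponent of the view is committed to:

```python
from typing import NamedTuple

class World(NamedTuple):
    es: float   # total Extreme Suffering (lexically prior)
    net: float  # ordinary welfare: joy minus Non-Extreme Suffering

def better(a: World, b: World) -> bool:
    """Intensity Concerned LSFU: minimize ES first; only then compare ordinary welfare."""
    if a.es != b.es:
        return a.es < b.es   # any difference in ES settles the question outright
    return a.net > b.net     # ordinary trade-offs only break ES ties

BNES = 9.9  # per-person suffering just shy of Extreme (made-up unit)

for population in (10**9, 10**12, 10**100):
    world_a = World(es=0.0, net=-BNES * population)  # everyone suffers BNES
    world_b = World(es=1.0, net=0.0)                 # a single instance of ES
    assert better(world_a, world_b)  # holds no matter how large the population gets
```

The assertion never fails because the comparison never looks at the second component when the first differs; that is exactly what ‘lexical’ means here.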
This means that the LSFU view will result in it being ethically preferable to have a world with substantially more total suffering, because the suffering is of a less important type. Extreme Suffering and Non-Extreme Suffering are just two different kinds of things, and because this is a lexical view, we must always prioritize Extreme Suffering over all other suffering.
This result is structurally similar to the Very Repugnant Conclusion, which, for those who have read Alexander 202313, will be unsurprising, as Repugnant-type Conclusions are an issue for any aggregative value system — at least if the values are comparable with each other. As we discussed earlier, the Very Repugnant Conclusion is an issue for standard Utilitarians, wherein it can be ethically permissible to allow many people to suffer, so long as there are enough happy people also in existence. While it does intuitively feel unfair to say that a world with people suffering greatly could be worth it, so long as enough people are just a little happy, what LSFU implies is radically worse. Intensity Concerned LSFU says that any number of people suffering can be worth it, so long as they are suffering just a bit less than Extreme Suffering. Unlike in Total Utilitarianism, this trade has no upside — it’s just more suffering, all around.
Sensitivity Objection:
Or, Negative Utilitarianism with extra steps
You might think, “Huh, it’s kinda strange to claim that a billion people each experiencing Barely Not Extreme Suffering is less bad than one person experiencing Extreme Suffering. Seems like if we’re aggregative then it should be the case that BNES + BNES = something at least as bad as, or worse than, ES.”
This is a good response — one of the appealing aspects of utilitarianism is that it aggregates. Sometimes in the real world we really ought to put the needs of the many above the needs of a few — because we value and care about all people equally, and so must take seriously how to do right by all people. As we saw, one way to approach the lexical suffering view would be to say Extreme Suffering is just a categorically different kind of suffering — it’s a torture so intense nothing could ever offset it. But surely a torture that lasts dozens and dozens of years could also rise to this level, even if no single moment of it is ‘the most extreme suffering we could imagine’?
We’ll call this view Threshold LSFU — some suffering can be so great it cannot be offset, but Non-Offsetable Suffering isn’t strictly about intensity, or ‘suffering-of-a-kind’ per se; it’s about the total amount. It’s about whether a certain threshold has been reached, which can happen through a particularly intense instance of suffering, or through a great number of people suffering. This meets the intuition that very extreme suffering cannot be offset, while also allowing us to say that two people experiencing Barely Not Extreme Suffering is at least as bad as one person experiencing Extreme Suffering.
However, this creates new issues, even if it avoids the mugging objection. Let's do some more abstract thought experiments — after all, who doesn’t love those?
World A: a world with X people suffering such that their combined total suffering adds up to BNES.
Now imagine we can choose to turn World A into one of the following:
World B: Nothing changes
World C: Cause one person to experience an additional δS of suffering, and cause 10X people to experience the happiest thing you can imagine
Under Threshold LSFU, we should prefer World B — we should do nothing. This is strange, because the change in total suffering between World A and World C is very small — by definition, it is as small as possible. Perhaps it is about as bad as stepping on a single Lego. It seems absurd to have a view that says it is more important to avoid stepping on a single Lego than to greatly improve the lives of millions or billions of people.
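Here is a quick sketch of that discontinuity in the same spirit. The threshold value, the numbers, and the use of negative infinity to model ‘can never be offset’ are all illustrative assumptions of mine, not part of anyone’s stated view:

```python
THRESHOLD = 100.0  # total suffering at which offsetting stops being allowed (made up)
DELTA_S = 1e-9     # δS: the smallest possible increment of suffering

def value(total_suffering: float, total_joy: float) -> float:
    """Threshold LSFU: ordinary aggregation below the threshold; once the
    threshold is reached, no amount of joy counts (modeled here as -infinity)."""
    if total_suffering >= THRESHOLD:
        return float("-inf")  # non-offsetable territory
    return total_joy - total_suffering

world_b = value(THRESHOLD - DELTA_S, 0.0)  # World B: leave World A untouched
world_c = value(THRESHOLD, 10**15)         # World C: +δS suffering, astronomical joy
assert world_b > world_c  # the Lego-sized difference outweighs any amount of joy
```

The function is perfectly ordinary arithmetic right up to the threshold and then jumps to negative infinity, so an arbitrarily small δS flips which world wins; that jump is the sensitivity the objection targets.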
Imagine those moments of sheer joy we sometimes get to experience with others. If offered the choice, would you decide to stop someone from stepping on a Lego instead of bringing that joy to your loved ones?
This makes the view extremely sensitive — very small differences in total suffering can radically change what is morally acceptable.
One of the reasons the LSFU family of views might be more plausible than typical Negative Utilitarianism — that is, the view that we should prioritize reducing suffering above all else — is that it preserves the standard Utilitarian ability to make trade-offs and ‘offset’ between joy and suffering, while also prioritizing the prevention of the worst suffering. However, if you take a threshold approach to determining ES — that is, enough instances of Non-Extreme Suffering can add up to, or even surpass, the least bad instance of Extreme Suffering — then you inevitably end up prioritizing minuscule amounts of suffering over any amount of joy.
While this is also a conclusion of standard Negative Utilitarianism, LSFU has to explain why a certain total amount of suffering is non-offsetable — lexically more important than all else — when anything below that total can be offset. If I can individually offset being poked with a paperclip — or even make it very worthwhile (imagine getting ice cream in exchange for being poked) — it seems extremely weird that if enough people get poked, the situation becomes one that can never be offset. Why is it that one person can make this positive trade, but if 100 people each make it, it suddenly becomes bad?
LSFU wants to have its cake and eat it too — it wants to say that creating joy can be worth a very small amount of pain, something Negative Utilitarianism can’t commit to — but instead it reaches the same conclusion with a few extra steps. Which is it? Is suffering categorically more important than joy, or are these things that can be compared and aggregated? Negative Utilitarianism can consistently say that suffering is always more important than wellbeing — but LSFU instead must explain why it is Total Utilitarianism up until a very specific point. After that point, it’s simply the same as Negative Utilitarianism.
Because of this, Threshold LSFU is also subject to the issue where the right thing to do can depend on things very far away in space or time. For example, if in the past there was enough suffering to reach the threshold point, Threshold LSFU no longer allows for trade-offs between suffering and pleasure. It’s weird to say ‘it can be good for one person to get poked with a paperclip in exchange for ice cream, but bad for 100 people to do that’, but even weirder to say ‘it can be good for one person to be poked with a paperclip in exchange for ice cream, but it’s bad if 10,000 years ago someone broke their leg’. Similarly, if there is life on other planets, we could pass the threshold point despite there being no causal relationship between the groups of people. ‘It can be good for one person to be poked with a paperclip in exchange for ice cream, but bad if three galaxies over someone broke their leg’.
Against Lexically Prioritizing Suffering
In truth, I haven’t been pedantic enough — many forms of Negative Utilitarianism are also lexical in nature, and suffering focused, or at least they can be thought of that way. They create a bundle of two things, suffering and joy, and then say that suffering is categorically more important. We cannot concern ourselves with increasing wellbeing until we have ensured there is no more suffering.
If you’re already a Negative Utilitarian then what I say next won’t be very convincing to you, and that’s okay — you know who you are. But you might be someone who hasn’t made up their mind yet on ethics, and Threshold LSFU might seem like it can handle more of our intuitions and values than standard Total Utilitarianism.
But as we’ve already seen, lexical views, by nature, must prioritize the higher values above all else. This can commit you to prioritizing small amounts of suffering over everything else, no matter the scales involved.
There is another issue, then: LSFU entails that we should almost always be prioritizing bringing about extinction. This is famously the most common objection to Negative Utilitarianism, and since Threshold LSFU collapses into that view, it applies here too.
There is always a chance that there might be Extreme Suffering in the future, either as an instance of Intense Suffering, or because the total surpasses our threshold. Or, even worse, we might already have passed the threshold point. Once that happens, there is no upside that can ever be worth the suffering that might happen, or already has. All that matters is minimizing suffering.
So extinction is preferable. If there is no life, there can be no suffering. The very point of these views is that there can be nothing more important than preventing pain. No matter what the future might hold, it can never be worth the cost of even a bit of suffering — at least according to the Negative Utilitarian. As we know, Threshold LSFU is more or less Negative Utilitarianism, but with a flaking coat of paint, and so will practically be committed to the same.
Conclusion
Lexical ethical theories will necessarily privilege certain values over everything else. Intuitively, we might think that by creating a Lexical Utilitarian theory we could avoid situations where it’s ethically preferable to create worlds with the worst types of suffering, so long as there is sufficient positive wellbeing also occurring. However, Lexical Suffering Focused Utilitarianism either commits to preferring extremely large amounts of total suffering, for no upside, or is so sensitive that very tiny instances of suffering can be more important to avoid than any amount of joy.
While I haven’t demonstrated it in this post, I believe that any variation of LSFU will be either Intensity Concerned or Threshold based14, and so will commit you to being mugged, or will suffer from a sensitivity issue that collapses the view into Negative Utilitarianism.
Therefore, either you must accept that sometimes worlds with very large amounts of total suffering are preferable to a world with radically less total suffering, or you must be committed to Negative Utilitarianism — and if you’re unwilling to accept either of these, then you must reject LSFU views.
Threshold LSFU is a view that tries to smuggle in all the unintuitive conclusions of Negative Utilitarianism behind a more palatable facade, and Intensity Concerned LSFU commits to even more untenable results than Negative Utilitarianism. We need our ethics to be both workable and honest, which precludes LSFU from being viable.
A not Shameless Plug:
If you’re interested, last year I was on the Consistently Candid podcast, where I discussed some, but not all, of this. I truthfully do not fully remember the contents of that conversation, as it has been slightly over a year, but we did — in part — discuss what I referred to as LSFU in this post. Unfortunately there is no financial gain, for anyone involved, if you go and listen to that podcast — which does, however, mean my plugging it here is a truly shameless endeavor.
As we all know, every ethical theory has to grapple with problems in population ethics, but allow me some stylistic flair, if you would.
The definitions I’m using are taken directly from chapter two of An Introduction to Utilitarianism, by Chappell, Meissner, and MacAskill
I am friends with Aaron, so it is appropriate for me to clarify that he tried to argue that in his post — though of course one might dispute his success. While there are no doubt much stronger essays I could have linked to that make this case, I have — with great bias — selected his to highlight here above any other.
To be clear, you could have non-utilitarian theories that are also ‘lexical suffering focused’, but as I’ve said I will not be considering those here
Assuming $1,000 can actually buy enough happiness, which I think it reasonably can
The values here are made up to be illustrative, and are not my endorsed view on how good or bad forgetting someone’s birthday is
A real world example of this might be choosing between two charities. A utilitarian will pick whichever charity can help the most, even though this might be helping different groups of people
You might, perhaps reasonably, criticize the idea that a selfish individual wouldn’t agree to endure extreme suffering in certain situations. I’ll simply assume this premise, since otherwise there’s nothing to explore here — but I’m not committing to it myself, and don’t necessarily endorse it
I’m also assuming that non-extreme suffering and all pleasure are in the same ‘bundle’ here, so you can aggregate and offset non-extreme suffering with pleasure. You could however have a view where suffering and pleasure are lexically different as well, which is one way of thinking about standard negative utilitarianism.
Which I am playfully naming after Pascal’s mugging, the idea being that certain philosophical principles might commit you to obviously bad and wrong conclusions. You’re being held up at metaphorical gunpoint by the logical implications of your own view.
Edit: I feel I didn’t make this clear enough in the original post, but this is a genuine philosophical mugging. If someone threatens to cause ES, there is no number of people you wouldn’t inflict BNES on. If the mugger said “1 million people” you’d gladly agree, so long as you could prevent one person from experiencing ES — and if they said “actually, you need to hurt an extra 10 people, or the deal is off” you’d keep agreeing, no matter how many extra people they named, so long as you were sure this would stop Extreme Suffering. There’s no point at which you’d ever refuse.
My senior thesis from college, which isn’t currently available online
I would of course be interested in reading an account of a LSFU view that is neither Threshold nor Intensity Concerned, as at the very least it would let me write a follow-up essay.



Thanks for writing this! I’ll try to cook up a proper response but here's the short sloppy version:
> "This means that the LSFU view will result in it being ethically preferable to have a world with substantially more total suffering..."
You’re begging the question here. Like, this assertion is largely just the actual debate - I think that the above quote isn’t true.
And I think that mistake falls out of the following: my argument is largely that the following is a *bad model* of the world:
> "LSFU - Lexical Suffering Focused Utilitarianism
> ES - Extreme Suffering
> BNES - Barely Not Extreme Suffering
> δS - Delta S; the difference in suffering between ES and BNES, that is (ES - BNES) = δS"
I claim that what you’re calling BNES actually feels extremely different from ES
You can start from ES and make the suffering monotonically less bad at step 1, 2, 3, … but not actually get to BNES, *sorta* (!) like how if you start from 1 you can subtract 1/3, then 1/9, then 1/27 and so on but never reach 0.
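(To spell that series out, the partial results are

$$1 - \sum_{k=1}^{n} \frac{1}{3^k} \;=\; \frac{1}{2} + \frac{1}{2 \cdot 3^n} \;\longrightarrow\; \frac{1}{2},$$

which decrease toward 1/2 and so never reach 0.)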
An alternative, very fuzzy, imperfect analogue is that you’re trying to, like, start from infinity and asking me to accept that if you keep subtracting integer amounts you’ll eventually hit some very large integer
Tbc these two math things are not perfect formal models of what I believe the correct models are, just a bit of intuition assistance
Also ofc this is not a complete or extremely robust argument, just part of the general idea.