Arguments for and against a focus on s-risks

by Tobias Baumann. First published in 2020.

Among the myriad ways to do good, should averting risks of astronomical suffering (s-risks) be our main priority? The case for a focus on s-risks rests on a combination of the following:

  1. Longtermism: We should focus on improving the long-term future, rather than trying to help those alive now or in the near future.
  2. Suffering focus: We should give priority to avoiding severe suffering or other large-scale harm, compared to other goals such as ensuring a flourishing future for humanity. (This can be justified on normative or empirical grounds.)
  3. Worst-case focus: The most effective way to reduce expected suffering in the long term is to focus on preventing particularly bad outcomes.

In the following, I will outline key arguments for and against each of these premises. Most of these arguments are not novel, and I will mostly refer the reader to existing work. The contribution of this article is to compile an even-handed overview of the ideas that underpin a focus on s-risks, as well as possible reasons to reject such a focus in favour of other priorities.

It is important to note that we should think about this in gradual rather than binary terms: the question is to what extent we endorse each of the three above-mentioned beliefs, and to what extent we prioritise s-risks as a result. Also, practical interventions sometimes render (parts of) this discussion moot: certain projects (e.g. improving the political system or reducing malevolence) are robustly good across many scenarios and many moral perspectives. Still, it is important to have conceptual clarity about our priorities.

Longtermism 

Longtermism is defined as “the view that the most important determinant of the value of our actions today is how those actions affect the very long-run future”.

The case for the importance of the long-term future (see e.g. 1, 2, 3) depends on two key points: 

  • We have no good reason to discount the interests of sentient beings merely because they live at a different time: their well-being matters just as much as the well-being of present-day individuals.1
  • The individuals who are alive today, and who will live in the coming decades, are (in expectation) vastly outnumbered by those who will live in the centuries, millennia, and ages to come.

Given a moral view that urges us to help others as effectively as possible, this strongly suggests that we should focus on how our actions can benefit those (vastly) more numerous future beings.
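To make the scale of the second point vivid, here is a minimal back-of-envelope sketch in Python. Every number in it (present population, future birth rate, time horizon) is an illustrative assumption of mine, not a figure from this article:

```python
# Back-of-envelope sketch of the "vastly outnumbered" claim.
# All parameters are illustrative assumptions, not estimates from the article.

present_population = 8e9     # roughly the number of people alive today
births_per_century = 10e9    # assumed number of future births per century
future_centuries = 10_000    # assumed Earth-bound future of about 1 million years

future_population = births_per_century * future_centuries
ratio = future_population / present_population

print(f"Expected future individuals: {future_population:.0e}")  # 1e+14
print(f"Future-to-present ratio: {ratio:,.0f}x")                # 12,500x
```

Even under these deliberately modest, Earth-bound assumptions, future individuals dominate; scenarios involving space colonisation would widen the gap by many further orders of magnitude.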

The main argument against a long-term focus is the difficulty of influencing long-term outcomes (in a predictably beneficial way) due to great uncertainty over what the future will look like (cf. cluelessness), coupled with a lack of reliable feedback loops. This uncertainty grows the further removed we are from the outcomes we seek to influence. The challenges are manifold:

  • Most of what we can hope to affect now can, and likely will, be changed by later decisions (bracketing the possibility of some kind of lock-in). 
  • For many problems, future decision-makers will arguably be in a much better position to find solutions than we are (see e.g. here), which suggests that short-term efforts are our comparative advantage.
  • If the future is long, or big, our influence will be diluted, because the outcome is shaped by the decisions of a vast number of individuals, as long as those individuals act autonomously. Indeed, it can be argued that this (prima facie) exactly cancels out the astronomical stakes, unless we are in an exceptional situation to shape the future.2

Yet it is hard to argue that these challenges render it entirely futile to try to improve the long-term future. A particularly robust strategy is simply to save and invest money (cf. patient philanthropy); even if we are currently not in a good position to shape the long-term future, this allows us to spend later on, as better opportunities arise.3 In general, a plausible middle ground is to focus on improving the state of civilisation one or two centuries from now, which hopefully translates to better outcomes in the very long term.
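As an illustration of the patient-philanthropy strategy, consider a simple compounding calculation; the fund size, return rate, and time horizon below are hypothetical assumptions, not recommendations from this article:

```python
# Minimal sketch of patient philanthropy: resources invested now can
# fund much larger efforts later, when better opportunities may exist.
# All parameters are hypothetical.

initial_fund = 1_000_000   # hypothetical fund today, in dollars
real_return = 0.03         # assumed 3% annual real (inflation-adjusted) return
years = 100                # assumed waiting period

future_fund = initial_fund * (1 + real_return) ** years
print(f"Fund after {years} years: ${future_fund:,.0f}")  # ~$19.2 million
```

Whether waiting actually beats spending now depends, of course, on how the cost-effectiveness of the best available opportunities changes over time.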

Suffering focus

Granting longtermism, the next question is to what degree we should focus on avoiding (severe) suffering or other harms, compared to other long-term priorities. Possible alternative goals include, but are not limited to: ensuring human survival, achieving a period of long reflection, increasing the probability of a utopian future, or improving the long-term future according to some notion of “compromise values” or “common-sense morality”.4

This is in part a normative question and in part an empirical question.5 

The normative part is about how much moral weight we give to reducing (severe, large-scale) suffering or other harms, compared to other moral goals, such as the creation of additional happy lives or the promotion of greater happiness for individuals who are already well-off. Proponents of suffering-focused ethics give priority to averting suffering (cf. 1, 2, 3, 4, 5), whereas others contend that achieving positive goods is of comparable urgency and can morally “outweigh” or “cancel out” (severe) suffering (see e.g. here and here).6

Given a certain view on the relative moral priority of reducing suffering, the empirical part of the question is about how effectively we can actually do so, compared to how readily we can achieve positive goods or other moral goals.7 Under an optimistic view of the future, one might expect little suffering to remain anyway, so only strongly suffering-focused moral perspectives would still prioritise reducing it. By contrast, if a large-scale moral catastrophe appears likely to occur in the future, then a much larger range of moral views will assign great priority to mitigating such an outcome.8

A common argument for optimism is that future technology will render it easier to achieve desired outcomes without causing suffering.9 Also, many people are trying to do good, whereas few people – with the exception of malevolent actors – deliberately want to cause harm, as opposed to just being indifferent. We also empirically observe that our world is getting better over time on many metrics.

On the other hand, this is only weak evidence, considering our great uncertainty about the future. Pessimism also becomes more plausible when considering nonhuman animals; the rapid increase in factory farming in recent decades10 can be viewed as a blueprint for future moral catastrophes involving artificial sentience. Given a sufficient degree of factual pessimism about how the future will go, even non-suffering-focused moral views can support the conclusion that mitigating large-scale harm is, in practice, the most effective way to do good.

All told, the suffering focus is at least a plausible moral and empirical view. While not all views consider avoiding suffering the top priority, it is uncontroversial that doing so is of great importance. (However, that is arguably also true for some other goals.) 

Worst-case focus

The combination of longtermism and suffering focus implies that we should focus on reducing future suffering. In addition, a focus on s-risks (at least implicitly) entails the belief, which I will call worst-case focus, that guarding against particularly bad outcomes is the most effective way to reduce future suffering in expectation. Suppose we consider a simple model with three distinct possible futures (adapted from Cause prioritization for downside-focused value systems):

  1. A future with no (severe) suffering whatsoever, thanks to advanced technology and improved moral standards. 
  2. A future that contains similar amounts of suffering as now: factory farms (or future equivalents thereof) continue to exist, and wild animal suffering is never tackled to a sufficient degree.
  3. A future that contains significant levels of suffering on an astronomical scale (i.e. an s-risk materialises).

The worst-case focus is the claim that the difference between 2. and 3. is much greater than the difference between 1. and 2., so that preventing 3. is most important (even when taking into account a lower probability). By contrast, a (complete) rejection of the worst-case focus would mean that we instead work on increasing the probability of a future that is entirely free of suffering (cf. the hedonistic imperative). A possible middle ground is to reduce suffering in futures that are moderately bad, but not extremely bad; or to focus on interventions that are robustly good across many scenarios.

The strongest argument for the worst-case focus is the observation that current levels of suffering are, while horrible, very small compared to what could plausibly happen in very bad futures. The scale of a future moral catastrophe would very likely dwarf anything we have seen so far, by many orders of magnitude.11 While concrete scenarios are necessarily speculative, it seems hard to argue that the probability of an s-risk is so tiny as to outweigh this difference.12 (See here for more details.)
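To see the structure of this argument in the three-futures model above, here is a small expected-value sketch; the probabilities and the 1000x severity factor are made-up numbers for illustration only:

```python
# Illustrative expected-value comparison for the three futures above.
# Probabilities and suffering levels are made-up numbers, chosen only
# to show the structure of the worst-case-focus argument.

p         = {"no_suffering": 0.50, "status_quo": 0.45, "s_risk": 0.05}
suffering = {"no_suffering": 0,    "status_quo": 1,    "s_risk": 1000}
# Units: "current levels of suffering, extended into the future";
# the s-risk outcome is assumed to be 1000x worse than the status quo.

expected = sum(p[k] * suffering[k] for k in p)
print(f"Expected future suffering: {expected:.2f}")  # 50.45

# Expected suffering averted by shifting one percentage point of probability:
shift_2_to_1 = 0.01 * (suffering["status_quo"] - suffering["no_suffering"])
shift_3_to_2 = 0.01 * (suffering["s_risk"] - suffering["status_quo"])
print(f"Shift 2 -> 1 averts {shift_2_to_1:.2f}; shift 3 -> 2 averts {shift_3_to_2:.2f}")
```

Under these assumptions, a percentage point of probability shifted away from the worst outcome averts roughly 1000 times as much expected suffering: the worst-case focus in quantitative form. A much smaller severity gap, or much lower tractability of influencing worst cases, would weaken the conclusion accordingly.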

That said, there is a fine line between taking expected values seriously and falling victim to Pascalian reasoning.13 It is not clear to what extent most future suffering lies in worst-case scenarios, as opposed to average futures (which may still contain significant amounts of suffering) or outcomes that fall somewhere in between. My analysis of this question supports a moderate, but not exclusive, focus on worst-case scenarios.14

Additional considerations

Cluster thinking suggests that we should consider many different reasons and arguments, even ones that are fairly weak in and of themselves. For instance, there are many intuitive reasons against longtermism, or at least strong versions thereof (which assert that long-term-focused work is orders of magnitude more important than short-term-focused work). Similar arguments could be made regarding the worst-case focus. A key theme is an aversion to speculative reasoning or overly “fanatical” conclusions. 

However, it’s not clear to me whether cluster thinking actually comes down against an s-risk focus (and its three underlying constituents), as it’s also possible to give a number of disparate reasons in favour of it:

  • Few people take expected-value reasoning seriously in moral matters. Likewise, few have internalised the idea that helping future sentient beings is just as important as helping those alive now. Given that, it is perhaps not surprising that the resulting conclusions are counterintuitive to many; but if we are highly confident in those underlying moral and decision-theoretic beliefs, we should arguably also follow them to their conclusions.
  • It is plausible that fewer people can productively work on s-risk reduction, as opposed to more short-term-focused or more concrete forms of altruism. This is because the former often requires relatively rare skills; for instance, research on macrostrategy requires highly abstract and interdisciplinary reasoning about the future and a strong drive to figure out how we can best reduce suffering. Therefore, those who are so inclined arguably have a comparative advantage in working on s-risks. (However, work on s-risks is not limited to such theoretical research.)

In the spirit of cluster thinking, we should also scrutinise possible biases that might distort our thinking:  

  • Preventing s-risks is not an inspiring vision, unlike the prospect of a utopian future or opportunities to help others in the here and now. Contemplating very bad futures can be rather depressing, so wishful thinking might cause people to dismiss s-risks.
  • Optimism bias may lead us to underestimate the likelihood of large-scale future moral catastrophes. However, there is also pessimism bias, and the overall direction of these biases is unclear.
  • Proportion dominance is the tendency to feel that it is more important to help 10 out of 10 individuals than 10 out of 100. This – and scope neglect – plausibly distorts our intuitions against the worst-case focus, as it (falsely) seems more worthwhile to go from 10 units of suffering to 0 than from 100 to 90. On the other hand, the illusion of control might bias us to overestimate our influence over such scenarios, and over the long-term future in general.
  • I think there is a tendency to be overly confident in a certain view of the future based on abstract far-mode reasoning that tells a plausible story. Belief digitisation may bias us to implicitly assign a probability close to 1 to certain assumptions we consider plausible, rather than appropriately “adding up” the uncertainty. 

Lastly, we should consider the relative neglectedness of a focus on s-risks (and its three underlying constituents). At first glance, it seems clear that s-risks are highly neglected, given how few people actively work on them.15 But evaluating neglectedness is often complicated, because we should also take future efforts into account. To the degree that we can expect future actors to work on preventing s-risks, the cause is less neglected than it first seems.16

Conclusion

All things considered, I think the case for each of the three underlying premises (longtermism, suffering focus, and worst-case focus) is plausible, but not unassailable. Or, to put it differently, I would endorse moderate versions of those premises but not strong ones. 

Consequently, I think the case for prioritising s-risks (again, to a moderate degree) is convincing. Further work on s-risks – see e.g. our Open Research Questions – therefore seems valuable, but not orders of magnitude more valuable than other work. In particular, given the abstract nature of reasoning about s-risks, we need to identify proxies or risk factors to arrive at more tangible interventions. 

But we should always remain epistemically humble, acknowledge that there are often strong arguments in both directions, and stay open to changing our minds if new evidence arises about the likelihood of s-risks or the (in)feasibility of long-term influence.

Acknowledgements

Thanks to Michael Aird and Magnus Vinding for valuable comments and suggestions.

  1. Note that one might still arrive at longtermist conclusions when morally discounting the future at a sufficiently small rate.
  2. We do have some grounds to think that contemporary people are plausibly far more influential than a model of “all generations are born equal” would suggest. This is because almost all agents will live in the future if a long or big future happens; so they are our successors, which means that we can influence them but they can’t influence us. Another way to put it is that our generation constitutes a population bottleneck relative to a future interstellar civilisation.
  3. Indeed, it may even be easier to build influence over longer timeframes, through a combination of financial investments and movement-building. By contrast, influencing what happens soon may be hard if rigid systems are already in place and we have little “run-up” to create change.
  4. One can argue that this entails a strong component of suffering reduction.
  5. Some strongly suffering-focused normative views (e.g. Epicurean views of wellbeing or lexical views) will render it an entirely normative question, as they do not accept any tradeoff involving (sufficiently severe) suffering.
  6. Many moderately suffering-focused ethics would concede that suffering can be outweighed in principle. The question, then, is how high that bar is (e.g. how much happiness is required). It is worth noting that an s-risk focus need not be predicated on strongly suffering-focused views, i.e. views according to which we should exclusively reduce suffering. For example, simply accepting the Asymmetry, or giving it the greatest credence among population-ethical views, could, if combined with longtermism, imply an s-risk focus.
  7. I prefer this framing over the more common “How much happiness and suffering does the future contain?” framing, because the latter primes us towards extinction risk reduction as a default intervention. For instance, from a classical utilitarian perspective, one would have to look at how much suffering we could reduce, versus how much happiness we can create, using marginal resources. That’s not the same question; in particular, relative tractability also matters.
  8. From a utilitarian perspective in particular, we can compare the E-ratio and N-ratio to arrive at an overall conclusion. See also here.
  9. See also here for a reply.
  10. It is quite possible that cultured meat technology will render factory farming obsolete within a century, but this is far from certain; and the fact that factory farming happened at all is still a bad sign for similar risks.
  11. Note that this does not even require speculative assumptions about interstellar space colonisation – even an Earth-based future could entail vast numbers of sentient beings.
  12. It could also be that s-risk reduction is far less tractable than reducing suffering on smaller scales.
  13. Under the many-worlds interpretation of quantum mechanics, or modal realism (to which one can argue we should grant non-negligible credence), some very bad outcomes will be actualised (perhaps with different “measure”). So it is not enough that we do okay in just one branch; we need to minimise the branches in which things go extremely badly. Thinking in these terms might help make worst-case risks “more real”, as our moral cognition seems to treat mere risks very differently compared to actual suffering.
  14. We do not need to focus on very extreme scenarios to justify the “s-risk” framing; the claim is just that the relevant class of scenarios is significantly worse in expectation than e.g. wild animal suffering on Earth (while still being sufficiently plausible).
  15. Some alternative priorities, such as helping wild animals or invertebrates, are also highly neglected. (Of course, s-risks can also be about wild animals or invertebrates, so those causes are not entirely distinct.)
  16. However, it can be argued that we need to ensure that the relevant future actors do in fact care about s-risks. And we should distinguish between outcomes that are bad for everyone (e.g., a Malthusian future) and those that are only bad for a minority of moral agents who care strongly about the suffering of animals or artificial sentience.