Summary
Discussions of s-risks often rest on a single-tailed picture, focused on how much suffering human civilization risks causing. But when we consider the bigger picture, including s-risks from alien civilizations, we see that human civilization’s expected impact on s-risks is in fact double-tailed. This likely has significant implications. For instance, it may mean that we should pursue interventions that are robust across both tails, and it tentatively suggests that, for a wide range of impartial value systems, it is safest to focus mostly on improving the quality of our future.
Introduction
What is the distribution of future expected suffering caused by human civilization?
If civilizations have the potential to cause large amounts of suffering (cf. the right tail in the figure below), we should also believe that they have the potential to prevent large amounts of suffering in expectation.
The figure above shows percentiles along the x-axis and how much suffering is created or reduced by human civilization along the y-axis. The green tail is suffering reduced by human civilization, while the red tail is suffering caused (in expectation). A substantial fraction of the left tail will amount to reductions of s-risks caused by extraterrestrial civilizations: preventing their red-tail scenarios.
The nature of the distribution
As the figure above suggests, the distribution is probably somewhat asymmetric, with more expected suffering caused than prevented. A key question is whether human civilization will be alone in our forward light cone. If so, the green tail becomes much less pronounced, though it does not disappear entirely, since there may be other, more speculative ways in which human civilization could reduce vast amounts of suffering, e.g. acausal trade, reducing universe generation, and unknown unknowns.
It does seem reasonably likely that we are currently alone in our corner of the universe, yet the relevant question is not whether we are currently alone, but whether we will remain alone in the future. And since we will not reach certainty on either question any time soon, the green tail should be of significant magnitude in expectation. The question, then, is the precise degree of asymmetry.
A simple model
As a rough quantitative model, suppose there are N civilizations causing a total amount of suffering S. Without factoring in additional information, we can treat human civilization as a random sample from this set of N civilizations. In expectation, human civilization then causes suffering on a scale of S/N. However, this expected value says little about the shape of the underlying distribution across civilizations. In particular, it may be skewed either way, or the variance may be very large compared to the expected value.
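As a minimal illustration of this point, the toy simulation below uses entirely made-up numbers and an arbitrary choice of lognormal tails; it is a sketch, not a claim about the actual distribution. It merely shows how the expected impact S/N of a randomly sampled civilization can be positive even while a sizable fraction of civilizations reduce suffering on net and the tails dwarf the mean.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100_000  # hypothetical number of civilizations

# Hypothetical per-civilization amounts of suffering caused and prevented.
# The lognormal tails and parameters are purely illustrative.
caused = rng.lognormal(mean=1.0, sigma=1.5, size=N)
prevented = rng.lognormal(mean=0.5, sigma=1.5, size=N)
net = caused - prevented  # net suffering caused by each civilization

S = net.sum()  # total net suffering across all N civilizations
print("expected impact of a random civilization, S/N:", S / N)
print("fraction of civilizations that reduce suffering on net:", (net < 0).mean())
print("5th and 95th percentiles of net impact:", np.percentile(net, [5, 95]))
```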
By analogy, consider human-caused animal suffering. In expectation, a random person may add to the amount of animal suffering (e.g. through meat consumption). But there are also many people whose existence reduces animal suffering (e.g. through animal advocacy), and presumably some who greatly increase animal suffering (e.g. sadistic people who enjoy causing suffering on factory farms or in slaughterhouses).
Of course, we do have additional information that may imply that human civilization is better or worse than a randomly sampled one. We could consider values, political dynamics, the frequency of severe conflict, or other factors that affect the likelihood of s-risks, and try to assess how humanity may be different from average. However, since we do not currently know much about what the “average civilization” looks like, it seems reasonable not to deviate much from the “agnostic prior”. Further research on this question may give us a better sense of where human civilization falls in this distribution.
Implications of a double-tailed distribution
If it is best to focus on tail risks, then the most effective strategy may be to focus on both tails. That is, it may be optimal to reduce expected suffering in the scenarios found toward the bookends of this distribution (though not necessarily only at the most extreme ends; it could well be optimal to focus on something like the lowest and highest 15 percent of the distribution respectively, cf. the arrows on the figure above).
This is not particularly intuitive — after all, what would it mean to (also) focus on the left tail? This seems a question worthy of further research.
Robust interventions
A plausible implication may be that we should seek actions that are robustly good across both tails. For example, reducing extinction risks and increasing the probability of human-driven space colonization may be favorable with respect to the left tail, as a way of preventing s-risks caused by extraterrestrial civilizations, yet generic extinction-risk reduction also seems likely to increase human-driven s-risks (of course, some interventions may reduce both extinction risks and human-driven s-risks).
Likewise, there will probably be strategies that are beneficial if we only consider human-driven s-risks, yet which turn out to be harmful, or at least suboptimal, when we also take the left tail into account.
In contrast, efforts such as improving human values and cooperation seem beneficial relative to both tails: they reduce the probability of human-driven s-risks, and they increase the probability that human-driven colonization, conditional on it happening, is significantly better than colonization by extraterrestrial civilizations.
Quality may matter most
If interstellar colonization is feasible, human-driven colonization may prevent ET colonization to a greater extent than one would naively think. As Lukas Finnveden writes:
If you accept the self-indication assumption, you should be almost certain that we’ll encounter other civilizations if we leave the galaxy. In this case, 95 % of the reachable universe will already be colonised when Earth-originating intelligence arrives, in expectation. Of the remaining 5 %, around 70 % would eventually be reached by other civilizations, while 30 % would have remained empty in our absence.
Similar conclusions are reached by Robin Hanson et al. in recent work on “grabby aliens”, which suggests that all space is likely to be colonized anyway (assuming that “grabby aliens” will emerge).
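To spell out the arithmetic suggested by the quoted estimates (which are themselves speculative): the share of the reachable universe that only Earth-originating intelligence would ever colonize is roughly 5% × 30% = 1.5%, since 95% is expected to be colonized before we arrive and around 70% of the remaining 5% would eventually be reached by other civilizations anyway. On these numbers, roughly 98.5% of the reachable universe ends up colonized whether or not humans colonize.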
Thus, if, on these (speculative) assumptions, the total amount of space colonized is almost the same whether or not humans colonize, then, from an impartial moral perspective, the overall quality of the colonizing civilizations would seem the most important factor to consider, and plausibly also the safest thing to prioritize. This may hold for a variety of value systems, including classical utilitarianism. After all, if we do not know whether the expected quality of a colonization wave stemming from our own civilization is better or worse than the average quality of colonization waves stemming from other civilizations, it would seem imprudent to insistently push for colonization by our own civilization; it seems better to instead work to improve its trajectory.