Risk factors for s-risks

by Tobias Baumann. First published in 2019. Updated in 2022.

Suppose you want to evaluate how a given intervention would affect s-risks. This is made difficult by the multitude of possible s-risks and by our great uncertainty about the future. Similar to past and contemporary forms of suffering, future suffering will likely result from a variety of issues rather than any single cause. We therefore need to consider measures for s-risk reduction that are comparatively easy to assess, which will then simplify our discussion of specific interventions.

In this post, I will introduce several risk factors for s-risks. These risk factors are not s-risks in and of themselves, but they significantly increase either the probability or the severity of a very bad outcome. The concept is used frequently in medicine, too. For instance, an unbalanced diet or a lack of exercise is not an adverse health outcome in itself, but each is a risk factor for a plethora of medical problems, from heart disease to depression.

This framework allows us to give sound advice for a healthy lifestyle without the need to analyse specific diseases. And the resulting conclusions are robust even though the health trajectory of any given individual is highly uncertain.

By analogy, we might not need to know all effects that a given action will have on specific s-risks. If we can identify reliable risk factors, we will be able to derive robust and effective interventions for reducing a broad range of s-risks.

Advanced technology and space colonisation

The simplest risk factor for s-risks is the capacity of human civilisation to create large amounts of suffering in the first place. Many s-risks are only possible in the context of powerful new technologies that give rise to both unprecedented opportunities and unprecedented risks. In particular, advanced AI could, owing to its unprecedented power, give rise to serious s-risks.

As with the concept of medical risk factors, this does not mean that the emergence of such advanced technologies would necessarily cause an s-risk to materialise.1 The point is merely that advanced technologies would equip humans with immense power and thereby exacerbate the potential scope of worst-case outcomes.

We should also distinguish between more and less worrisome forms of technological progress. The relevant aspect is the effect that new technologies could have on the overall scale of human civilisation and on the number of (potentially miserable) sentient beings. In particular, some new technologies might make it easier to create large amounts of suffering.

A concrete example of such a technology is the ability to create sentient artificial entities. As I have discussed elsewhere, this might result in the exploitation of large numbers of sentient beings, due to our likely insufficient level of moral consideration for artificial minds.

Another key factor is large-scale space colonisation.2 Due to advanced AI or other technological breakthroughs, it might become technically and economically viable to expand throughout the universe. This expansion could potentially multiply the total population size of both human and nonhuman beings, resulting in a truly astronomical scope of our civilisation. And without sufficient moral and political progress, this could multiply the amount of suffering entailed by our civilisation. (If the universe contains or will contain alien civilisations, then human expansion into space could also potentially reduce s-risks.)

Thus, space colonisation, even with the best of intentions, poses significant risks. The potential scale of future civilisation is mind-boggling. Astronomers estimate that there are 100-400 billion stars in our galaxy (the Milky Way) alone, and 100-200 billion galaxies in the observable universe. A future moral catastrophe on a galactic or intergalactic scale could therefore exceed Earth-based suffering by many orders of magnitude. By contrast, the amount of suffering is limited if we never expand into space.3
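
To make "many orders of magnitude" concrete, here is a rough back-of-the-envelope calculation using the low-end estimates above; the assumption that each settled star system could host an Earth-sized population is purely illustrative:

$$\underbrace{10^{11}}_{\text{stars per galaxy}} \times \underbrace{10^{11}}_{\text{galaxies}} = 10^{22}\ \text{star systems}$$

On these assumptions, a galaxy-spanning civilisation could exceed an Earth-bound one in scale by a factor of roughly $10^{11}$, and an intergalactic one by up to $10^{22}$. Even a tiny fraction of such a civilisation living in misery would then dwarf all Earth-based suffering.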

Some authors have further argued that space colonisation will by default result in catastrophic outcomes. But this is highly uncertain, and such a pessimistic view is not necessary to establish that space colonisation is a risk factor for s-risks.

Is the large-scale colonisation of space a realistic prospect? Evidence on the feasibility of space colonisation is scarce, but preliminary reviews suggest that the obstacles, from microgravity to travel across cosmic distances, are massive yet probably not insurmountable. It also remains unclear what the motivation to colonise other planets would be — considering that other planets are usually extremely inhospitable places when compared to Earth, and we are currently far from running out of available land.

On the other hand, we face great uncertainty about what may or may not happen in the future, especially on long timescales. So we also cannot rule out the possibility that humanity will colonise space. And the scope of a galactic or intergalactic moral catastrophe could be so vast that an expected value framework suggests that we should take the possibility seriously, even if we do not consider large-scale space colonisation to be the most likely scenario.
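
Schematically, the expected-value reasoning can be written as follows, where the placeholder numbers are purely illustrative rather than estimates from the literature:

$$\mathbb{E}[\text{suffering}] = p \times S$$

Here $p$ is the probability that large-scale space colonisation happens and goes badly, and $S$ is the scale of the resulting suffering. If $S$ could exceed any Earth-bound catastrophe by a factor of up to $10^{22}$, the product $p \times S$ remains enormous even for a small $p$ such as $0.1\%$, which is why the scenario deserves attention despite its uncertainty.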

Lack of adequate s-risk prevention

Human civilisation can likely mitigate most forms of suffering, given sufficient motivation and political will to do so.4 Therefore, s-risks are far more likely to occur if nobody works to prevent them.

Even a limited degree of moral concern could go a long way towards mitigating s-risks. For instance, if only a small number of people care about preventing an s-risk, they can still try to find a compromise with others to implement low-cost measures that can prevent worst-case outcomes. Such low-cost compromises are likely to be possible for many s-risks. It therefore seems plausible that we should be most worried about futures in which there is little or no effort to prevent s-risks, and that we should address that apparent bottleneck.

What could cause such a lack of effort to prevent s-risks? The simplest reason is sheer indifference, especially for s-risks that affect those without any political representation or power. Future decision-makers might be aware of s-risks and be able to avert them, but they might not care enough about the suffering that their decisions cause (or fail to prevent). In particular, a narrow moral circle could result in a disregard of s-risks that affect nonhuman animals or artificial sentience.

Even if there is concern for s-risks, it is possible that the resulting efforts are misguided or ineffective. This could happen for many reasons. For instance, the idea of preventing s-risks or reducing suffering might become associated with controversial political ideas and factions, which could in turn cause a backlash that thwarts progress towards preventing s-risks.

It is also possible that the relevant actors will want to avert an s-risk, but doing so may be impossible due to ineffective political institutions or cooperation problems. Or the relevant actors might lack the foresight to anticipate and address potential s-risks at an early stage — and at a later point, it might be impossible to change course. (Of course, this depends heavily on the specific s-risk in question.)

Conflict and hostility

S-risks are more likely if there is a high degree of hostility between future actors, with little or no common ground. It is, of course, not problematic per se if people endorse different perspectives or opinions. However, such divergences can constitute a risk factor for s-risks when combined with a lack of understanding of other perspectives, or intolerance and hostility towards others.

Conflicts can be problematic for several reasons. First, powerful factions or individuals might ride roughshod over the moral concerns of others. This is likely to impede efforts to prevent s-risks (the lack of which is a risk factor, as per the previous section). A future that entails large-scale adversarial dynamics or ruthless competition would likely leave little room for prudent reflection on s-risks or for mutually beneficial compromises. Negative outcomes would be significantly more likely in this case, compared to a future where successful coordination makes it possible to implement countermeasures against potential risks.

Second, hostile relations always carry a risk of escalating conflicts and even outright war between competing factions. It stands to reason that this increases the risk of worst-case outcomes. In particular, some actors might want to intentionally harm others out of hatred, sadism, or vengeance for (real or alleged) harm caused by others (as discussed in Chapter 2). Conflicts and wars tend to exacerbate our worst impulses.

A related risk factor for s-risks is insufficient security against bad actors. Human civilisation contains many different actors, including some malevolent ones. Such bad actors are usually reined in by norms and laws that prohibit harmful acts, yet this might become difficult in some future scenarios. For instance, in the context of powerful autonomous AI agents or space colonisation, it might become harder or even impossible to stop rogue actors from causing harm on a massive scale.5

This is related to the future evolution of the offence-defence balance. Military applications of future technological advances could change this balance in a way that makes s-risks more likely. A common concern is that strong offensive capabilities would enable a safe first strike, undermining global stability. Yet when it comes to s-risks, it is perhaps even more dangerous to tip the balance in favour of strong defence, since bad actors could no longer be deterred from harmful acts if they enjoyed strong defensive advantages.6

Malevolent actors

Cruel dictators like Hitler and Stalin were responsible for many of the worst atrocities in human history. But how can we operationalise this notion of “cruel” or “malevolent” actors? A frequently used concept is the “Dark Tetrad”, which consists of the following four personality traits:7

  • Psychopathy is characterised by persistent antisocial behaviour, impaired empathy, callousness, and impulsiveness.
  • Narcissism involves an inflated sense of one’s importance and abilities, an excessive need for admiration, and an obsession with achieving fame or power.
  • Machiavellianism is characterised by manipulating and deceiving others to further one’s own interests, indifference to common norms, and ruthless pursuit of power or wealth.
  • Sadism is the tendency to derive pleasure from inflicting suffering and pain on others.

Individuals with malevolent traits can pose serious risks if they rise to positions of power. And they often do — after all, the hallmarks of malevolence include strategic ruthlessness and a lust for power. These traits are often an advantage in the struggle for power, especially in fiercely competitive systems.

Malevolent individuals in power can cause a variety of negative outcomes. The aspects that are most relevant to s-risks include an erosion of interpersonal trust and coordination, an increased risk of escalating conflicts and war, and an increased likelihood of reckless behaviour in high-stakes situations. For more details on the concept of malevolence, the risks posed by malevolent individuals in power, and possible interventions to reduce the influence of malevolent actors, see Reducing Long-Term Risks From Malevolent Actors.

A concrete pathway to an s-risk is the formation of a global totalitarian regime under a malevolent leader, which could potentially result in a permanent lock-in of ruthless values and power structures. Historical examples of totalitarian regimes (e.g., Nazi Germany or Stalinist Russia) were temporary and localised, but a stable global dictatorship may become possible in the future.

Risks from malevolent actors are exacerbated if those actors have access to advanced technology, such as powerful AI. In the worst case, this might enable a cruel individual in a position of power to create suffering on an unprecedented scale.

How risk factors interact

It would be misguided to view each risk factor as independent. Instead, there are numerous connections and complex interactions between the factors I outlined. For instance, polarisation and conflict can increase the likelihood that a malevolent individual rises to power. A dictatorship under a malevolent leader would, in turn, likely impede efforts to prevent s-risks. Advanced technology could potentially multiply the harm caused by malevolent individuals — and so on.

Conversely, the presence of a single risk factor can, at least to some extent, be mitigated by otherwise favourable circumstances. Advanced technological capabilities are much less worrisome if there are adequate efforts to mitigate s-risks. Likewise, without advanced technological capabilities or space colonisation, the suffering caused by a malevolent dictator would at least be limited to Earth.

It therefore seems plausible that most expected s-risks occur in worlds where several risk factors coincide. The risk might even scale in a superlinear way. This would mean that if two risk factors materialise, the likelihood of an s-risk is more than twice as high as in a future where only a single risk factor materialises.8
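
Schematically (this formalisation is my own gloss, not a result from the literature), superlinear scaling means that for two risk factors $F_1$ and $F_2$, where $P(\text{s-risk} \mid \cdot)$ denotes the probability of an s-risk:

$$P(\text{s-risk} \mid F_1 \wedge F_2) > 2 \cdot P(\text{s-risk} \mid F_1 \wedge \neg F_2)$$

and symmetrically for $F_2$. In other words, the risk factors amplify one another rather than merely adding up, which further strengthens the case for addressing several of them at once.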

  1. Also, some s-risks would still be possible even if humanity were to halt all technological progress. This includes natural s-risks or potential s-risks caused by alien civilisations. And without advanced technology, we may be unable to do anything about these s-risks.
  2. This term is meant to refer to large-scale settlement across a vast number of planets or even galaxies, not activities like the mere exploration of space or isolated outposts for purposes such as asteroid mining.
  3. It is worth noting, though, that the number of beings on Earth could in theory also be very large (albeit still not as large as in a spacefaring civilisation).
  4. However, some s-risks might be hard to prevent even if we collectively want to.
  5. In the case of powerful autonomous AI, existing laws and institutions may not be directly applicable, and it is not obvious what the replacement could be. In the case of space colonisation, large cosmic distances might constitute an obstacle to effective enforcement of laws or norms — although this is, of course, highly speculative. The point is merely that a breakdown of the rule of law would make s-risks much more likely.
  6. How could strong defensive capabilities come about? One plausible scenario is intergalactic space colonisation with multiple loci of power. It might then be difficult to enforce large-scale prohibitions against harmful acts due to astronomical distances between galaxies or superclusters.
  7. Note also that the “dark traits” are positively correlated with each other, which is why it makes sense to combine them into a single “Dark Factor”. This has also been contrasted with a “Light Triad” of beneficial traits.
  8. A similar pattern can be observed when it comes to risk factors in a medical context (which inspired this framework). A single medical risk factor (like age, obesity, or high blood pressure) is (in many cases) not yet catastrophic, but the combination of several risk factors often is.