Pascal famously argued that belief in God is rational because of the expected value of belief. If God exists, the rewards of believing—eternal salvation—are infinite, while if God does not exist, the costs of believing are finite. Even a minuscule probability of an infinite reward seemingly swamps any worldly cost.
That is the core of Pascal’s Wager. But then there is Pascal’s Mugging. A stranger approaches and insists that if Pascal hands over a mere 100 dollars, they will pay him back tomorrow with an infinite amount of money. It may seem absurd to accept this deal, but as long as the probability that the stranger is telling the truth is above zero, the expected financial gain of handing over the money is still greater than that of keeping it. Yet accepting seems irrational!
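To make the arithmetic explicit, here is a minimal sketch in Python. The probability and the stake are illustrative numbers, not anything specified by the original thought experiment:

```python
# Expected value of accepting the mugger's offer, with
# illustrative numbers for the probability and the stake.
p_truthful = 1e-15        # one-in-a-quadrillion chance the stranger is honest
stake = 100               # dollars handed over today
payout = float("inf")     # the promised infinite repayment

ev_accept = p_truthful * payout - stake   # infinite, for ANY p > 0
ev_refuse = 0.0                           # baseline: keep the money

print(ev_accept > ev_refuse)  # True: expected value says to pay the mugger
```

However small `p_truthful` is made, its product with an infinite payout stays infinite, and that is exactly the lever the mugger pulls.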
One way to avoid being mugged is to discount extremely small probabilities. The idea is that if some event is exceedingly unlikely, we can safely ignore it, treating it as if it were impossible. Under this rule, Pascal can arguably refuse the mugger without renouncing his wager.
This move may seem plausible, yet it raises the issue of where to draw the line. What probability is too small to care about: one in a million? a billion? a trillion? Any choice feels somewhat arbitrary. But ignoring the utterly unlikely may seem like the only sane way to decide how to act without abandoning expected value reasoning altogether.
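A sketch of what such a cutoff rule might look like; the threshold value of 1e-9 is an assumption chosen purely for illustration, and it is precisely the arbitrary parameter the paragraph above worries about:

```python
def discounted_ev(probability, payoff, cost, threshold=1e-9):
    """Expected value, except that probabilities below `threshold`
    are rounded down to zero, i.e. treated as impossible."""
    if probability < threshold:
        return -cost                  # the event is ignored entirely
    return probability * payoff - cost

# With the threshold in place, the mugger's offer is a plain loss...
print(discounted_ev(1e-15, float("inf"), 100))   # -100
# ...but only because of where we happened to draw the line.
print(discounted_ev(1e-8, float("inf"), 100))    # inf
```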

S-Risks
Enter suffering risks, or s-risks. These are risks of future scenarios involving astronomical amounts of suffering—far beyond anything that has occurred so far in history (that we know of). Picture factory farming expanded to feed a population at cosmic scale; authoritarian systems locking oppressive structures in place; misaligned AIs trapping countless beings in misery. Or, more speculatively, trillions of digital minds trapped in “suffering subroutines,” unable to advocate for themselves.
Taking S-Risks Seriously
One might be tempted to write s-risks off. The scenarios that could produce astronomical suffering are speculative and unprecedented, and some sound like outright science fiction. Yet even if we think that many, or all, paths to astronomical suffering are highly unlikely, we should not dismiss the category as a whole. There are several reasons to believe that the probability of an s-risk, even if low, is not so low as to fall below any reasonable bar for neglect.
- Disjunctive Risks
The first is that s-risks are disjunctive. Catastrophic suffering could arise through many independent pathways. S-risk scenarios, that is, do not all stand or fall together. If we discovered tomorrow that digital minds are impossible, this would not directly address risks of industrialized animal exploitation or authoritarian social control. Even if each pathway is highly improbable on its own, the combined chance that at least one of them materializes is significantly higher. In other words, there are many ways to lose, and to win (to avoid every s-risk) we must win in each of them.
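The arithmetic behind this is straightforward. A short sketch, with purely made-up pathway probabilities and the (strong) simplifying assumption that the pathways are independent:

```python
import math

# Purely illustrative, made-up probabilities for five independent pathways.
pathway_probs = [0.001, 0.002, 0.0005, 0.003, 0.001]

# P(at least one pathway materializes) = 1 - P(none of them does).
p_none = math.prod(1 - p for p in pathway_probs)
p_at_least_one = 1 - p_none

print(round(p_at_least_one, 4))  # ~0.0075, 2.5x the largest single pathway
```

Even with these toy numbers, the combined risk is well above any individual pathway, and every further pathway added to the list pushes it higher.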
- Familiar Mechanisms
Second, while s-risks would be unprecedented in scale, the underlying mechanisms are not new. Tobias Baumann distinguishes three broad types of s-risks: agential, where suffering is intentionally inflicted; incidental, where suffering emerges as a byproduct of pursuing other goals; and natural, where suffering arises entirely independently of agents.
These categories reflect the same forces that have produced enormous suffering throughout history. Humans have deliberately caused vast harm, from tyrants to terrorists. Misaligned incentives have enabled enormous unintended suffering, most notably in factory farming and in the collateral brutality of war, driven not exactly by malice but by a combination of expedience and indifference. And nature itself—well, it is red in tooth and claw, as they say. Over eons it has generated immense suffering, from starvation and predation to all manner of disease.
Sadly, mass suffering is nothing new. What would be new is its occurrence on an even greater scale. And that may not be so unlikely, if the past and present mechanisms of mass suffering keep churning, and if they expand to ensnare ever more beings. Two especially worrying variables are technology and population size.
- Technology
Technological progress continually widens the horizon of possibility, for both good and ill. A century ago, humanity lacked not only the tools but even the imagination to produce suffering on a cosmic scale. Today we may be edging toward the ability to create artificial minds, simulate consciousness, reengineer biology, and extend civilization beyond Earth.
Each new capability multiplies not only our potential to solve problems but also our power to impose suffering—deliberately or by neglect. The 20th century revealed this duality vividly: industrialized warfare—driven by trains, electricity, mechanized weapons, and nuclear fission—produced horrors that no previous generation could have conceived. Our ancestors could not have pictured skin seared by radiation or entire cities turned to ash in seconds.
The same logic governs the future. Technological progress does not merely amplify familiar harms; it invents new pathways to misery. We may create new sentient beings whose suffering goes unrecognized or unprotected. We may design intricate systems that, without malice, institutionalize harm on massive scales. And because we cannot foresee every innovation ahead, there are Black Swan “unknown unknown” s-risk pathways—routes to astronomical suffering that remain invisible because they depend on technologies not yet born.
- Population
Finally, the future could be unimaginably large. If technological progress creates new pathways for suffering and makes them more efficient, then population growth provides the substrate for that suffering to unfold. Should civilization persist for millennia, expand to other planets, or develop artificial minds that run far faster and in far greater numbers than biological ones, the total population of sentient beings could far exceed today’s figures.
The stakes scale with that growth. A conflict, governance failure, or technological misuse could harm far more individuals simply because there are far more individuals to harm. Even a disaster that touches only a tiny fraction of such a population could still constitute astronomical suffering.
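A back-of-the-envelope sketch makes the point; every figure here is invented purely for illustration:

```python
# Invented, order-of-magnitude figures, purely for illustration.
future_population = 1e18    # hypothetical count of sentient beings (e.g. digital minds)
affected_fraction = 1e-6    # a "tiny" one-in-a-million slice of that population

affected = future_population * affected_fraction
print(f"{affected:.0e}")    # 1e+12: a trillion beings, more than a
                            # hundred times today's entire human population
```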
Conclusion
Just because some scenarios sound strange does not mean they are far-fetched. The future has always appeared bizarre from the vantage point of the past. The seeds of s-risks are already visible today: mass factory farming, wild-animal suffering, the acceleration of AI research with little regard for non-human welfare, and the growing interest in space colonization. We cannot calculate the precise probability of an s-risk. But given the above considerations, it seems unreasonable to treat it as low enough to neglect.
Taking s-risks seriously does not mean assuming they are inevitable, or that they eclipse every other moral concern. It means granting them proportional moral weight—a place in our planning, institutions, and research agendas. That demands developing governance frameworks for advanced technologies and building institutions resilient against harmful value lock-in, to prevent malign norms or systems from entrenching themselves permanently. Above all, it requires moral foresight—the ability to recognize that our descendants will live amid the consequences of today’s choices, inheriting not only our progress but also our moral blind spots.
