Should Suffering-Reducers Focus on AI?

If we want to reduce suffering over the long term, should AI be a top priority? As capabilities advance and talk of AGI and the singularity fills the airwaves, some argue we should focus heavily on AI. Others are skeptical. In their view, steering AI may be just one important cause among many—and prioritizing it risks neglecting more reliable ways of reducing suffering. Who is right?

Pro

The strongest case for focusing on AI begins with the claim that it could be a genuine hinge-of-history technology: not just another tool, but something with imminent, world-shaping potential. If we reach human-level or superhuman systems, the paradigm of “smart” humans controlling “dumb” tools will be over. At that point all bets are off, but it seems reasonable to expect tremendous impacts. [...] 

The Case for Being Nonpartisan

Modern politics often turns moral issues into tribal contests. Crucial debates about what we should value and how we should act quickly devolve from truth-seeking to status-defending, as ideas and causes become signals of identity and allegiance.

If we hope to reduce suffering, this pattern is dangerous. When a cause gets caught up in a political culture war, it invites backlash and shrinks the coalition of people willing to help. For that reason, there is great value in being nonpartisan in our efforts to reduce suffering. By nonpartisan, I don’t mean unprincipled, centrist, or reluctant to take strong stances. Rather, I mean not identifying with a pre-defined political tribe, like the Red or Blue (or Grey) tribe. Resisting the tribal logic of modern politics is difficult but important: it helps to keep the movement to reduce suffering healthy—with norms of cooperation and open inquiry—and focused. [...] 

Reducing Suffering: An Institutional Approach

There are many different approaches we can take to reduce suffering. One that seems especially promising is to target society’s institutions and better equip them to reduce suffering.

Why institutions? First, they have disproportionate power. Legal systems, markets, regulatory frameworks, research norms, media ecosystems, and international agreements do not merely express social values; they shape them to a large extent. Institutions structure incentives, constrain behavior, allocate authority, and determine which problems receive sustained attention. [...] 

S-risk impact distribution is double-tailed

Summary

Discussions about s-risks often rest on a single-tailed picture, focused on how much suffering human civilization risks causing. But when we consider the bigger picture, including s-risks from alien civilizations, we see that human civilization’s expected impact on s-risks is in fact double-tailed: our civilization could also prevent large amounts of suffering, for example by reducing s-risks posed by other civilizations. This likely has significant implications. For instance, it might mean that we should pursue interventions that are robust across both tails, and it tentatively suggests that, for a wide range of impartial value systems, it is safest to focus mostly on improving the quality of our future.

Introduction

What is the distribution of future expected suffering caused by human civilization? [...] 

On fat-tailed distributions and s-risks

Summary

It is sometimes suggested that since the severity of many kinds of moral catastrophes (e.g. wars and natural disasters) falls along a power-law distribution, efforts to reduce suffering should focus on “a few rare scenarios where things go very wrong”. While this argument appears plausible on its face, it is less compelling than it first seems. Specifically, a fat-tailed distribution need not imply that a single or even a few sources of suffering account for most future suffering in expectation, let alone that we should mostly prioritize just one or a few sources of suffering.
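The point can be illustrated with a quick simulation (a hypothetical sketch, not anything from the post itself): even when severities follow a power law, how much of the total the largest few events contribute depends heavily on the tail exponent. With a classic Pareto distribution, a tail exponent near 1 makes the top 1% of events dominate the total, while an exponent of 3 — still a power law — leaves them with only a modest share.

```python
import numpy as np

def top_share(alpha, n=100_000, top_frac=0.01, seed=0):
    """Fraction of total severity contributed by the largest `top_frac`
    of events, with severities drawn from a Pareto(alpha) distribution."""
    rng = np.random.default_rng(seed)
    # Generator.pareto samples a Lomax distribution; adding 1 gives the
    # classic Pareto with minimum severity 1 and tail exponent `alpha`.
    severities = rng.pareto(alpha, size=n) + 1.0
    k = int(n * top_frac)
    top = np.sort(severities)[-k:]  # the k most severe events
    return top.sum() / severities.sum()

heavy = top_share(alpha=1.1)  # very fat tail: top 1% dominates the total
mild = top_share(alpha=3.0)   # still a power law, but far less concentrated
print(f"alpha=1.1: top 1% of events holds {heavy:.0%} of total severity")
print(f"alpha=3.0: top 1% of events holds {mild:.0%} of total severity")
```

So “the severities are power-law distributed” does not by itself settle how concentrated expected suffering is; that depends on the exponent, and on how many distinct sources contribute to the tail.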

Introduction

In his post Is most expected suffering due to worst-case outcomes?, Tobias Baumann explores how skewed the distribution of future sources of suffering might be. His conclusion, in short, is that worst-case outcomes may well dominate, but that it is unclear to what degree we should expect future suffering to be concentrated in worst-case outcomes. [...] 

Longtermism and animal advocacy

There is a common tendency among effective altruists to think of animal advocacy as having little value for improving the long-term future. Similarly, animal advocates often assume that longtermism has little relevance to their work. Yet both views seem misguided: sufficient concern for nonhuman sentient beings is a key determinant of how well the long-term future will go.

In this post, I will discuss whether animal advocacy – or, more generally, expanding the moral circle – should be a priority for longtermists, and outline the implications of a longtermist perspective for animal advocacy. My starting point is a moral view that rejects speciesism and gives equal weight to the interests and well-being of future individuals. [...] 
