Should Suffering-Reducers Focus on AI?

If we want to reduce suffering over the long term, should AI be a top priority? As capabilities advance and talk of AGI and the singularity fills the airwaves, some argue we should focus heavily on AI. Others are skeptical. In their view, steering AI may be just one important cause among many—and prioritizing it risks neglecting more reliable ways of reducing suffering. Who is right?

Pro

The strongest case for focusing on AI begins with the claim that it could be a genuine hinge-of-history technology: not just another tool, but something with near-term, world-shaping potential. If we reach human-level or superhuman systems, the paradigm of “smart” humans controlling “dumb” tools will be over. If that happens, all bets are off, but it seems reasonable to expect tremendous impacts.

Of particular concern is the possibility that advanced AI will enable a dangerous form of value lock-in, allowing harmful values to dominate the future. There are multiple pathways to this. One involves humans (states, firms, or coalitions) using AI as a force multiplier to entrench their dominance. Another involves humans losing control altogether, whether to an AI singleton or within a multipolar world of competing AI systems, which could be just as dangerous.

Even without value lock-in, AI could enable futures with vast suffering. There are many routes to this, from catastrophic misalignment to AI-enabled conflict, the expansion of systems like factory farming, and the creation of artificial minds capable of suffering.

Though the probabilities of such outcomes are uncertain, the scale of the potential downside is large enough that some argue we should take these risks extremely seriously and work to prevent them now. Since there is now a meaningful chance to directly steer AI away from these outcomes—through technical safety or governance—we should jump at it while we still can.
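To make the shape of this reasoning concrete, here is a toy expected-value comparison in Python. Every number in it is invented purely for illustration (the probabilities, the suffering scales, and the comparison cause are all assumptions, not estimates), and it deliberately ignores tractability and neglectedness.

    # Toy expected-value comparison. All numbers are invented for
    # illustration; none are real estimates.
    p_ai_catastrophe = 0.01     # assumed 1% chance of a vast-suffering AI outcome
    scale_ai = 1e12             # assumed suffering units at stake in that outcome

    p_mundane_success = 0.90    # assumed success rate of a well-understood intervention
    scale_mundane = 1e6         # assumed suffering units that intervention averts

    expected_ai = p_ai_catastrophe * scale_ai             # 1e10
    expected_mundane = p_mundane_success * scale_mundane  # 9e5

    print(f"Expected suffering at stake (AI outcome):   {expected_ai:.1e}")
    print(f"Expected suffering averted (mundane cause): {expected_mundane:.1e}")

Even at a low probability, the sheer scale can dominate the comparison; of course, the force of such a calculation depends entirely on its inputs, which is precisely where the skeptics below push back.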

This kind of argument has been around for some time, but it has taken on new urgency as AI capabilities have advanced in recent years. Two trends stand out. First, we see increasing generality: AI models can handle a wide range of cognitive tasks and improve as compute and data scale.

Second, we see growing agentic ability, with systems increasingly able to carry out longer sequences of actions across tools and environments. Work by METR suggests that the length of tasks these systems can complete has been increasing at an exponential rate. This does not mean superintelligence is imminent, but it does point to a clear and measurable trend toward much more powerful systems.
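To give a rough sense of what sustained exponential growth would imply, here is a minimal extrapolation sketch. The roughly seven-month doubling time is METR’s approximate headline figure; the one-hour starting task length, the horizons, and the assumption that the trend continues at all are illustrative assumptions, not predictions.

    # Illustrative extrapolation of an exponential task-length trend.
    # Assumes a ~7-month doubling time (METR's approximate figure) and
    # a made-up 1-hour starting point; treat the output as a what-if,
    # not a forecast.

    def extrapolate_task_length(start_minutes: float,
                                doubling_months: float,
                                months_ahead: float) -> float:
        """Project task length under sustained exponential growth."""
        return start_minutes * 2 ** (months_ahead / doubling_months)

    for months in (0, 12, 24, 48):
        minutes = extrapolate_task_length(60, 7, months)
        print(f"+{months:2d} months: tasks of roughly {minutes / 60:,.1f} hours")

Under these assumptions, one-hour tasks become multi-day tasks within a few years, which is the kind of trajectory that motivates the pro side’s sense of urgency.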

Con

One path to skepticism is doubt about the technology itself. Recent systems are impressive, but it does not follow that they are on a direct path to AGI or superintelligence. Current models still show serious weaknesses, and many experts question whether the current LLM paradigm and today’s training methods will get us there.

More fundamentally, some question whether intelligence—understood as the ability to accomplish a wide range of real-world goals—is the kind of thing that can be rapidly scaled in isolation, as the AI “FOOM” narrative suggests. It is tempting to extrapolate from a few years of striking progress, but some circumspection is in order. AI has gone through hype cycles before, and there are reasons to be skeptical about explosive advances given technical, economic, and physical constraints.

Additionally, some argue that AI is best understood as a “normal technology,” one that diffuses gradually through society. Historically, even highly transformative technologies like electricity and the internet took decades to spread and reshape the economy. So far, we have seen widespread experimentation with AI but limited evidence of economy-wide transformation. If this pattern holds, AI’s impact may arrive more slowly and present more familiar risks. That would not necessarily make it wrong to work on AI, but it would weaken the case for urgency.

Shaping AI may also be relatively intractable. Technological development is often driven by deep economic and strategic pressures, and it is not clear that a small group of altruists can make a meaningful difference. With AI already the focus of intense attention, it may also not be particularly neglected, which could further reduce the marginal impact of additional effort compared to working on other neglected causes.

Finally, even if one grants that AI could become extremely important, it remains unclear what we should actually do if we aim to reduce suffering. There is deep uncertainty about the future of AI, alongside core disagreements on everything from timelines and takeoff speeds to alignment difficulty and the overall risk landscape.

Governance proposals often sound promising, but they face real tradeoffs around feasibility, enforcement, and unintended consequences. Even advancing technical alignment is fraught, as it may increase risks of misuse or (somewhat paradoxically) perverse instantiation. Slowing or “pausing” development is another popular idea, but it too raises difficult questions.

Conclusion

There are serious considerations on both sides. On one hand, AI could matter enormously for the long-run distribution of suffering; on the other, there is deep uncertainty about whether, when, and how those impacts will materialize, and whether AI outcomes are best influenced through direct or indirect means. Wherever one stands on these questions, there is a strong case for staying epistemically humble and continuing to learn from those we disagree with, rather than isolating into camps.

If we are not sold on one side, perhaps a reasonable middle ground is to take AI seriously and devote attention to it without letting it dominate all of our actions and thinking. Given uncertainty, it may make sense to favor more general work such as community- and field-building around suffering-focused AI governance, which may be robust to many different trajectories. It may also make sense to prioritize research, especially in highly neglected subareas like artificial suffering, cooperative AI, or worst-case AI safety. That said, the pro side is right that things in AI are moving fast, so it seems valuable to be ready to act more directly if necessary.