The Importance of the Far Future

[This article was initially published on the website of Sentience Politics. It does not necessarily reflect the current thinking of the Center for Reducing Suffering.]

Most charities that work to relieve suffering focus on individuals who currently exist, like humans struggling in poverty or dealing with disease, or companion animals in need of shelter. Some help individuals who will exist in the near future, like the animals who will be farmed in the next year if our consumption of animal-based foods is not curtailed. Others help those who will be alive a little further on: organizations focused on climate change solutions, for instance, generally work for the benefit of humans in future generations. While we may be instinctively compelled to help those whose suffering we can immediately see or clearly visualize, disregard for individuals who have not been born yet, but whom our actions will still affect, seems as misguided as disregard for individuals who live far away from us. Everyone within our reach should have our consideration, whether or not they are alive at the same time as us.

The individuals who are alive today, and who will live in the coming decades, are vastly outnumbered by those who will live in the centuries, millennia, and ages to come. Our impact on the distant future is less predictable than our shorter-term impacts, but because so many more individuals are at stake, it could be orders of magnitude more significant.

The importance of emerging technologies

If technological developments continue to progress as rapidly as they have in the last few centuries, future generations will have powerful technologies that may be as incomprehensible to us now as cell phones would have been to people who lived before the telephone was invented.

When we talk about the suffering that may happen in the far future, we are largely talking about suffering caused by irresponsible uses of technology. For instance, if we develop the technology to terraform other planets, we may spread earth-like wildlife to those worlds with no concern for the suffering the animals will endure. If we colonize other worlds before achieving significant social and political progress here on Earth, we could multiply the problems and suffering we already have here; establishing an appropriate framework for space governance beforehand would help guard against this.

We could develop ways to force even more sentient beings into servitude than we do today in sweatshops, trafficking rings, and factory farms. Advancements in artificial intelligence could eventually result in the development of sentient digital minds (as opposed to biological minds), and the possibility of producing vast numbers of these entities, paired with our likely difficulty in empathizing with them, could lead us to exploit them on a massive scale. This scenario would be similar to our current exploitation of nonhuman animals, though its scale could be substantially greater.

Or, humans could develop an artificial intelligence that surpasses our own intellectual abilities. If such an artificial intelligence develops the wrong values, it could exponentially increase the amount of suffering in the world. For instance, if such an AI were to disregard others in the pursuit of its goals, it could exploit animals or humans in ways that cause them immense suffering, or even create its own suffering subroutines to help it achieve its goals. If it colonized other planets, it could multiply this suffering many times over.

Such scenarios may sound unlikely, and some may think this is valid grounds for dismissal. But what sounded like science fiction a few decades ago may fast become reality. Aerospace company SpaceX has already announced plans to establish a colony on Mars as early as 2024, and recent advances in deep learning have given machines the ability to learn certain tasks in ways loosely analogous to human learning, bringing AI several steps closer to human-level intelligence than it was just two years ago.

Predicting future technological developments is challenging, but analyzing possible scenarios can provide valuable insight to help us prevent risks of astronomical suffering (“s-risks”). As history has shown, emerging technologies can endow humanity with unprecedented power to create positive change, but they also pose unprecedented risks. We should keep these ideas in mind even if we think a particular risk of astronomical suffering sounds unlikely. Ensuring that powerful technologies are used responsibly will be crucial to avoiding even worse developments than chemical warfare, factory farms, or nuclear weapons, and will require those in control of such technologies to care about the suffering of everyone they can impact.

How can we reduce risks of astronomical suffering?

Even without precise predictions about the technological, social, and political landscape of the future, there are interventions that can help prevent particularly terrible outcomes.

One such intervention is to develop and implement precautionary measures. For instance, companies working on AI could develop agreements about how to proceed if an artificial intelligence appeared to demonstrate some degree of sentience. Independent bodies could oversee AI development to help ensure that AI is developed with goals which are more likely to result in good outcomes than bad ones. Nations with weapons of mass destruction already have agreements limiting their proliferation, and we could forge similar agreements regarding powerful new technologies like superintelligent AI, to encourage their safe, rather than hasty, development.

We can also show and spread concern for suffering irrespective of who is experiencing it (no matter their location, time, species, sex, substrate, or any other characteristic independent of their suffering) to reduce the chance that such disregard causes suffering on a massive scale in the future. Today, this kind of disregard victimizes many human beings, and many more nonhuman beings, and discriminatory exploitation, violence, and indifference have persisted in varying degrees throughout the history of human civilization. It would be unwise to assume such inequity will simply disappear from society without a concerted effort to remove it. Increased concern for the suffering of all sentient beings would increase the probability that future generations will use their power responsibly, even if we don’t know the exact nature of that power. Specifically, successfully promoting antispeciesism, the active reduction of speciesism, should lead us to prevent the suffering that many animals will otherwise experience in the future. We could also work to prevent future digital suffering by spreading concern for digital sentience, and by engaging now with the ethical questions it would raise, so that we are prepared to take the interests of digital minds seriously if they emerge.

Near-term and medium-term interventions may also indirectly influence the far future, to varying degrees. Work aimed specifically at helping farmed animals may diminish speciesism and thereby increase our concern for wild animals or other sentient entities. The achievement of basic rights to life, bodily integrity, and mental integrity for particular nonhuman animals would set legal and social precedents that help us expand protections to other animals, ultimately helping many more.

Finally, we can research these and other strategies for preventing suffering in the far future.

Conclusion

Future individuals matter no less than individuals who already exist, and will likely outnumber existing individuals by several orders of magnitude. Setting our sights further than the next year, decade, or even the next few generations could therefore be more impactful than focusing on shorter-term goals, although our longer-term impact is less certain.

Emerging technologies may equip humans with immense power, but we cannot assume such power will be used responsibly. To reduce the risk of potentially astronomical future suffering, we should take care both with the technologies that could change our trajectory and with the evolution of our social values.

