Open Research Questions

The Center for Reducing Suffering aims to answer a simple question: How can we best use our limited resources to alleviate as much suffering as possible? 

This page outlines research avenues that we consider important to our mission. CRS’ work focuses on three main pillars:

  • Developing and refining suffering-focused moral views
  • Gaining further clarity on our macrostrategic situation
  • Identifying high-priority interventions

In all cases, we primarily list relatively broad research themes. Within each theme, the first step is to do further work to articulate fruitful research questions. Also, this is not meant to be exhaustive – it is quite likely that we have overlooked crucial questions.

If you would like to help research these topics, feel free to get in touch or apply to work with us.

Suffering-focused ethics

Suffering-focused ethics (SFE) refers to a broad class of moral views that place primary importance on the prevention of suffering.

Overview

A starting point is to compile an overview of various suffering-focused views that have been defended in the philosophical literature.

  • How can these views be categorised so that the relevant options and choice points are easier to understand, particularly for those not deeply familiar with the literature?
  • What are the most common questions and misconceptions related to suffering-focused ethics? (This could result in writing an FAQ.)
  • What are some of the main implications of suffering-focused ethics?

Ethics as being “about problems”

There is a common intuition that ethics is primarily about avoiding problematic states (such as suffering). In this framework, a problem cannot be solved, undone, removed, or outweighed by the addition of other goods (such as happiness).

  • What are plausible ways to incorporate this intuition in a complete and coherent moral view? 

Addressing objections

Many objections to suffering-focused ethics, such as the pinprick argument or the world destruction argument, are directed at negative utilitarianism in particular. 

  • How might one reply to these and other objections leveled against negative utilitarianism?
  • How relevant are these objections to other suffering-focused views?
  • How can concern for suffering be combined with other values — such as fairness considerations, respecting individuals’ consent, or deontological side-constraints — to avoid counterintuitive implications? See e.g. the pluralist suffering-focused views of Clark Wolf and Jamie Mayerfeld.

Lexicality and related ideas

Value lexicality, in its simplest form, refers to the idea that some amount of a given value entity has greater (dis)value than any amount of some other value entity. An example of a lexical view is that certain forms of especially severe suffering are worse than any amount of mild pain or discomfort. (For more refined notions of lexicality, see here and here.)
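As a rough illustrative formalisation (our own notation, not taken from the sources linked above): let $d(n, X)$ denote the disvalue of $n$ units of a value entity $X$. Simple lexicality of severe suffering $S$ over mild discomfort $M$ can then be stated as

$$\exists n \; \forall m: \quad d(n, S) > d(m, M),$$

i.e. there is some amount of severe suffering whose disvalue exceeds that of any amount of mild discomfort.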

  • How could one respond to sequence arguments against lexicality and similar ideas?
  • Can the concept of psychological bearability or unbearability help provide a plausible account of value lexicality? How does this relate to the intensity, duration, or other aspects of an experience? 
  • Is there a difference in how plausible value lexicality is at the intrapersonal level versus the interpersonal level?
  • Even if there is no value lexicality in theory, it seems plausible that, on many views about the badness of extreme suffering, one should in practice focus on reducing extreme suffering rather than mild suffering (given the expected prevalence of extreme suffering). Which views about acceptable moral tradeoffs between suffering and happiness would imply such a focus in practice?

Population ethics

Population ethics is the study of ethical problems arising when our actions affect who is born and how many people are born in the future. 

  • What are the best arguments against the Asymmetry in population ethics, and what might be the most plausible replies to these arguments?
  • A number of consequentialist views, such as classical utilitarianism and some versions of prioritarianism, imply the Very Repugnant Conclusion in population ethics. What arguments can be made for and against this conclusion? 
  • Can suffering-focused population ethics be a viable solution to the well-known problems and impossibility theorems of the field? What problems arise in a suffering-focused account, and how could a proponent respond?

Minimalist axiologies

Minimalist axiologies are a class of axiologies whose central conception of independent value is of the kind that says “the less of this, the better”. In other words, their fundamental standard of value is only about the avoidance of something, and not about the maximisation of something else.

Minimalist axiologies may be formulated in terms of avoiding cravings (tranquilism seen as a welfarist monism; certain Buddhist axiologies); disturbances (Epicureanism); pain or suffering (Schopenhauer; Richard Ryder); frustrated preferences (antifrustrationism); or unmet needs (some interpretations of care ethics).

  • What are the advantages and disadvantages of these different formulations? 
  • According to Christoph Fehige’s antifrustrationism, a frustrated preference is bad, but the existence of a satisfied preference is not better than if the preference didn’t exist in the first place. Several authors have objected to antifrustrationism. How could a proponent of antifrustrationism respond? (More.)
  • Develop and assess contemporary Epicurean-inspired ideas about pleasure, happiness, good and value. 

Descriptive ethics and potential biases

A different perspective is that of descriptive (rather than normative) ethics. Descriptive ethics is the study of people’s beliefs about morality. We are interested in finding out more about existing views related to suffering-focused ethics.

  • What fraction of people hold suffering-focused views? What is the distribution of opinion on various thought experiments, like Omelas? What are the most significant correlates?
  • In discussions about the disvalue of the bad parts of life compared to the value of the good parts, a recurring idea concerns the tradeoffs people make or would be willing to make. A person might say, “I would accept 1 day of torture in exchange for living 10 extra happy years.” What, if anything, can be concluded from the actual or hypothetical tradeoffs people make?

In a similar vein, one could explore possible biases for or against suffering-focused views.

  • One might argue that many people would give far more priority to suffering if only they were more exposed to it, yet we tend to look away, as it is often unpleasant to consider (severe) suffering. Similarly, studies suggest that people make more sympathetic moral judgments when they experience pain. To what extent is it true that mere (lack of) exposure or attention is a key factor in whether people prioritize the reduction of suffering, as opposed to deeper value differences?
  • Conversely, what are possible reasons why we ourselves might be biased in favor of suffering-focused views?
  • How can suffering-focused ethics avoid becoming an accidentally harmful intellectual monoculture?

Macrostrategy

In a complex world, any attempt to do good requires careful thought about the long-term impact of our actions. Research on macrostrategy aims to improve our understanding of the “big picture”; that is, the condition that we find ourselves in when zooming out from immediate issues and instead considering the entire course of history in the long run. A better grasp of crucial considerations is necessary to identify the most important levers for reducing suffering.

Long-term focus

Due to the potentially vast numbers of individuals that may exist in the future, it is plausible that improving the long-term future should be our top priority. This view is called longtermism.

  • How can we have lasting and far-reaching impact in a constantly changing world – in particular, given significant value drift?
  • Considering the complexity of the dynamics that will shape our future (“cluelessness”), how can we guarantee, or at least be somewhat confident, that our influence is positive? 
  • What are ways to move the future in a direction that is good from many or all plausible perspectives?
  • All things considered, how strong is the case for focusing on the long-term future? (See e.g. here and here.)

The big picture

It would be easier to influence the long-term future if civilisation eventually reached a steady state, also referred to as “lock-in” or “goal preservation”.

  • How plausible is this assumption? (More.)
  • What might such a steady state look like? Assuming that it will eventually happen, when is it most likely to happen? 
  • Are concepts such as “lock-in” or “steady state” even well-defined; and if not, how can they be clarified?
  • What does that imply for our ability to shape the future?

When trying to shape the long-term future, we are particularly interested in “pivotal events” that would have an extraordinary and lasting impact (e.g. by leading to a steady state), such that the event would easily be recognised as the most important development in the course of history.

  • What future technologies could be so transformative that their invention would qualify as a pivotal event? Candidates include advanced artificial intelligence (more on this below), whole brain emulations, and genetic enhancement.
  • Would a pivotal event likely be related to a powerful technology, or is it more likely to be about social change? Possible scenarios include (but are not limited to) a world government, the emergence of a unified global culture, or the rise of a global totalitarian system.
  • How likely is it that we will see explosive economic growth and innovation in the future, akin to the industrial revolution? If so, when would this be likely to happen? How likely is a slowdown or stagnation? (See e.g. here and here.)

Another perspective is to ask whether we are at the hinge of history.

  • How does our ability to influence the long-term future compare to that of future individuals? Is our time uniquely influential, or do we expect better opportunities to arise in the future? (See here for more questions in this area.)
  • What does that imply for patient philanthropists?

Transformative artificial intelligence

The idea that we should aim to shape transformative artificial intelligence (AI) is among the most well-known crucial considerations of effective altruism.

  • What are the most likely scenarios in terms of what future AI will look like?
    • Should we expect a single, superintelligent agent, or is transformative AI more likely to consist of a large number of distributed, task-specific systems (see e.g. Eric Drexler’s model of comprehensive AI services)?
  • When should we expect transformative AI to be developed, if at all?
    • How long would the transition to an age of advanced AI take (“hard takeoff” vs “soft takeoff”)? Will progress be gradual or discontinuous? (More.)
    • What are the right concepts to measure this?
  • All things considered, to what degree should altruists focus on shaping transformative artificial intelligence? (Cf. here, here, and here.)

Future suffering and s-risks

In line with suffering-focused ethics, our primary interest is to avert future suffering. Therefore, we are not only thinking about how to influence the long-term future in general, but also about how large amounts of suffering might arise in the future, and how that can be avoided.

  • What are the largest sources of future suffering (in expectation)?
  • Is most (expected) suffering incidental, that is, arising as a side effect in the pursuit of other goals? Or should we be most concerned about scenarios where powerful agents deliberately cause harm?
  • To what degree is the distribution of future suffering tail-heavy? Should we expect that most future suffering is due to worst-case outcomes?
  • To what extent can we predict future suffering from our current vantage point? What fraction of (expected) future suffering is due to unknown unknowns?
    • What are possible ways to reduce suffering from (unknown) unknown sources?
  • How can we facilitate a sound research project focused on suffering reduction?

We are particularly interested in preventing scenarios that result in exceptionally large amounts of intense suffering, also called s-risks. The following questions are first steps towards a better understanding:

  • Which moral positions and arguments can ground a priority on s-risk reduction? (See also Chapter 14 in Magnus Vinding’s book Suffering-Focused Ethics.)
  • What are the main arguments for and against a focus on s-risks? Which considerations have so far been overlooked or underappreciated?
  • How can we divide the space of all possible s-risks into useful sub-categories? (See here.)
  • To what degree should we worry about human civilisation causing an s-risk, compared to failing to prevent s-risks caused by extraterrestrial civilisations? (See here.)
  • What are some of the risk factors that make s-risks more likely? How can we mitigate these risk factors? Is it better to focus on specific interventions or broad risk factors?
  • What are useful proxies that can make the highly abstract endeavour of s-risk reduction more tangible and specific? Candidates may include better values and greater cooperation among powerful agents.
  • How can we best ensure that our efforts to reduce s-risks do not inadvertently increase s-risks?
  • Which particular s-risks are most important to reduce in expectation?
  • All things considered, how likely is a severe s-risk?

Incidental s-risks

Incidental s-risks arise when the most efficient way to achieve a certain goal unintentionally creates a lot of suffering in the process. The agent or agents that cause the s-risk are either ignorant of or indifferent to that suffering, or they would prefer a suffering-free alternative in theory, but aren’t willing to bear the necessary costs in practice.

  • What are plausible concrete scenarios for how such s-risks might arise?
  • What are other plausible levers to reduce incidental s-risks? How do they compare? (More.)
  • What are the greatest incidental s-risks?

Agential s-risks

Agential s-risks involve agents that actively and intentionally want to cause harm. One plausible mechanism is that, as part of an escalating conflict, agents might threaten to bring about outcomes that are particularly bad for the other side.

  • How can one best prevent worst-case outcomes resulting from the execution of such threats? (See also: Research priorities for preventing threats)
  • Surrogate goals have been suggested as one potential solution. What are the key barriers to implementing successful surrogate goals?
  • Improving cooperation and reducing the risk of severe future conflicts seems robustly beneficial. Are there practical opportunities to achieve this, or is it too intractable?
  • What are other possible reasons why agents might want to cause harm (e.g., sadism or hatred), and how plausible is it that such motives will lead to an s-risk scenario?

Interventions

In addition to research on ethics and macrostrategy, we are interested in applied work to learn more about high-priority interventions. The following is a selection of plausible priority areas.

Moral advocacy

One of the most straightforward levers to reduce suffering is to increase the degree of moral concern for all sentient beings. This approach is sometimes discussed under the term “moral circle expansion” (MCE). 

  • What are the main arguments for and against moral advocacy?
  • How can we best increase concern for suffering and motivate people to prevent it in cost-effective ways? How can we entrench concern for suffering at the level of our institutions and make its reduction a collective priority? 
  • In general, what are the best ways to achieve social change? For instance, should we focus on broad public opinion, or try to influence specific groups and individuals?
  • How could moral circle expansion backfire? (More.)
  • What aspects of moral circle expansion are most important? Should we focus on particularly neglected sentient beings, such as wild animals or invertebrates?
  • Is it plausible that we should advocate for concern for artificial minds at this point? (Cf. here, here, and here.)

We are especially interested in the intersection of moral advocacy and longtermism (and s-risks in particular). See Longtermism and animal advocacy.

  • How can we ensure that an improvement in values is lasting, rather than reverting to something worse?
  • What can be done to ensure the long-term stability of the relevant social movements (e.g. the animal advocacy movement)? What are the main risks that would jeopardise the long-term ability of such movements to achieve positive social change?
  • What could trigger a serious (and permanent) backlash against the animal advocacy movement (or the effective altruism movement, or the suffering reduction movement)? What reasonable steps should we take to prevent these movements from becoming too controversial?

Shaping powerful new technology

If advanced artificial intelligence (or another transformative technology) is developed in this century, then it stands to reason that shaping the development of such powerful technologies is a unique lever for influencing the long-term future. In particular, we are interested in AI safety work that is focused on reducing s-risks arising from conflict between advanced AI systems. (For more details on this, we refer to the research agenda of the Center on Long-Term Risk.)

  • How does AI safety research with a focus on s-risks differ from “conventional” AI safety?
  • Considering that such AIs will be far more capable than we are, what can be done at this point to reduce the risk of catastrophic bargaining failures?
  • Does an improved understanding of foundational issues in game theory and decision theory help to prevent worst-case outcomes, or could such insights backfire in unforeseen ways?
  • How can surrogate goals be implemented in AI systems? 

However, preventing s-risks from advanced AI is not entirely a technical issue. We are also interested in AI governance, i.e. the norms, institutions, and regulations that govern the development of AI.

  • What aspects of AI governance are most important from a suffering-focused perspective? 
  • How can we ensure adequate rule of law in contexts that involve advanced artificial intelligence? 
  • What can be done to advance a cooperative mindset and positive-sum thinking within the AI community and among other stakeholders? How can we avoid an AI “arms race”?

We are also interested in foundational questions to learn more about whether attempts to shape new technology are a high-priority intervention. 

  • How does the feasibility of changing technological developments compare to the feasibility of social change? (See e.g. here for a similar discussion in the context of animal advocacy.)
  • Transformative artificial intelligence has received the most attention, but what about other powerful technologies?
  • To what degree can we predict future technological developments?
  • The impact of technology is always mediated through economic, social, and political dynamics. Should we try to intervene on a technical level, or should we instead aim to influence other factors, such as the values, institutions, or culture that shape the development of a new technology? 

Reducing malevolence

We are interested in work on reducing risks from malevolent actors. This is because particularly “evil” individuals in positions of power are, as human history suggests, a plausible mechanism by which worst-case outcomes could come about.

  • In what contexts are malevolent actors particularly likely to negatively affect the long-term future, rather than merely causing short-term harm?
  • What can we do to increase public awareness of malevolence and of the risk of “pathocracy”?
  • What political norms, institutions, and systems are least likely to result in malevolent leaders? (See e.g. this comment.)
  • How tractable is reducing malevolence (in particular, without new technologies)?
  • Is it plausible that malevolent traits could arise in AI systems? If so, how can learning algorithms be adapted to mitigate this risk?
  • Might sortition be a promising way to reduce the influence of malevolent individuals?

Better politics

Given our vast uncertainty about the future, we should arguably focus on putting society in as good a position as possible to address the challenges of the future. One important dimension of this is a functional political system. We are therefore interested in efforts to avoid harmful political and social dynamics, to strengthen democratic governance, and to establish a more reasoned and thoughtful discourse on policy. (For more on this, see here.)

  • Can we identify institutional changes to our political system(s) that would constitute a clear improvement over the status quo?
  • Are efforts to improve politics a cost-effective intervention from a longtermist perspective (and in terms of s-risks in particular)? Or is this area too intractable, risky, and crowded? 
  • Is voting reform a high-priority intervention?
  • How can we ensure that the interests of all sentient beings are considered (to a greater extent) in political decisions?

Excessive political polarisation and tribalism make it harder to reach consensus or a fair compromise, and undermine trust in public institutions. Such dynamics also tend to exacerbate conflict, which is a risk factor for s-risks.

  • How can we increase the propensity of political decisions to focus on widely shared, cooperative aims, such as the reduction of suffering, rather than getting caught up in political conflict?
  • What are interventions that can robustly reduce polarisation in a lasting way, without the risk of backfiring in unexpected ways? 
  • Existing proposals include voting reform, public service broadcasting, deliberative citizens’ assemblies, and compulsory voting. Are any of these proposals tractable and effective ways to reduce polarisation?