Open Research Questions

A key question we seek to answer is: How can we best use our limited resources to alleviate as much suffering as possible? 

This page outlines research avenues that we consider important to our mission. The work of the Center for Reducing Suffering (CRS) focuses on three main pillars:

  • Advancing the state of knowledge on suffering-focused moral views
  • Gaining further clarity on our macrostrategic situation
  • Identifying high-priority interventions

In all cases, we primarily list relatively broad research themes. Within each theme, a standing task is to articulate specific research questions and to assess which of them are most fruitful and which can wait. This page is also not meant to be exhaustive.

If you would like to help research these topics, feel free to get in touch or apply to work with us.

Suffering-focused ethics

Suffering-focused ethics (SFE) refers to a broad class of moral views that place primary importance on the prevention of suffering.

Overview

  • How can suffering-focused views be categorised so that the relevant options and choice points are easier to understand, particularly for those not deeply familiar with the literature?
  • What are the most common questions and misconceptions related to suffering-focused ethics? (This could result in writing an FAQ.)

Addressing objections

Many objections to suffering-focused ethics, such as the pinprick argument or the world destruction argument, are directed at negative utilitarianism in particular. 

  • Can the objections be strengthened?
  • How might one reply to these and other objections leveled against negative utilitarianism?
  • Which objections are relevant to other suffering-focused views? Are those objections convincing, and why or why not?
  • Which parts of the existing literature, and which new contributions, should be replied to or addressed? What should be said in response to them?
  • If someone thinks that a sole focus on reducing suffering has implausible implications, how can they combine a great concern for suffering with other moral considerations, such as fairness considerations, respecting individuals’ consent, or various deontological principles? See e.g. the pluralist suffering-focused views of Clark Wolf and Jamie Mayerfeld.

Lexicality and related ideas

Value lexicality, in its simplest form, refers to the idea that some amount of a given value entity has greater (dis)value than any amount of some other value entity. An example of a lexical view is that certain forms of especially severe suffering are worse than any amount of mild pain or discomfort. (For more refined notions of lexicality, see here, here, and here.)
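
As a rough illustration (a schematic sketch of our own; the symbols $S$, $D$, and $v$ are introduced here for convenience and are not drawn from the linked literature), the simple form of lexicality just described can be written as follows, where $S$ is a unit of especially severe suffering, $D$ is a unit of mild discomfort, and $v(\cdot)$ denotes disvalue:

$$\exists\, n > 0 \;\; \forall\, m > 0:\quad v(n \cdot S) \;>\; v(m \cdot D)$$

That is, there is some amount of severe suffering whose disvalue exceeds the disvalue of any amount of mild discomfort, no matter how large.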

  • Lexicality and similar ideas have been critiqued on the basis of sequence arguments. Responses to these critiques exist as well. Is further research on these critiques necessary, and if so, what are the key points to address?
  • Can a plausible account of value lexicality be grounded in concepts such as psychological bearability or unbearability, or what is or could be accepted by the victim? How do such ideas relate to the intensity, duration, or other aspects of an experience?
  • If value lexicality were implausible at the intrapersonal level, could it still be plausible at the interpersonal level?
  • Even if there is no value lexicality in theory, it seems plausible that, on many views about the badness of extreme suffering, one should in practice focus on reducing extreme suffering rather than mild suffering. Which moral views (of acceptable moral tradeoffs between suffering and happiness) would imply such a focus in practice?

Population ethics

Population ethics deals with ethical issues related to the number of beings and their welfare and identities.

  • What are the most convincing arguments supporting or opposing the Asymmetry in population ethics?
  • A number of consequentialist views, such as classical utilitarianism and some versions of prioritarianism, imply the Very Repugnant Conclusion in population ethics. What arguments can be made for and against this conclusion? 

Minimalist axiologies

Minimalist views of value (axiologies) are evaluative views that define betterness solely in terms of the absence or reduction of independent bads. For instance, they might roughly say, “the less suffering, violence, and violation, the better”. They reject the idea of weighing independent goods against these bads, as they deny that independent goods exist in the first place.
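
To illustrate (a schematic sketch of one simple aggregative formulation, using notation introduced here rather than drawn from any particular author): if $B(w)$ denotes the total amount of independent bads, such as suffering, violence, and violation, contained in outcome $w$, then a minimalist view of this kind ranks outcomes by

$$w_1 \succeq w_2 \iff B(w_1) \le B(w_2)$$

so that one outcome is at least as good as another exactly when it contains no more of what is independently bad; independent goods play no role in the comparison.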

Minimalist axiologies may be formulated in terms of, for example, avoiding cravings (tranquilism seen as a welfarist monism; certain Buddhist axiologies); disturbances (Epicureanism); pain or suffering (Schopenhauer; Richard Ryder); frustrated preferences (antifrustrationism); or unmet needs (some interpretations of care ethics).

  • What are the advantages and disadvantages of these different formulations? 
  • Develop and assess contemporary Epicurean-inspired ideas about pleasure, happiness, good, and value.

The existence of positive experiences

  • Are there any positive experiences and could there ever be any? (I.e., experiences that are positive in themselves. The question is not whether experiences can have positive effects.) 
  • For example, do and could any experiences have a positive hedonic quality or tone? 

Applied and domain-specific ethics

Relevant domains include the ethics of space colonisation and the ethics of new technologies. Deudney (2020, 9) writes that “nearly all space futurists are space expansionists. By this I mean that most space futurists are advocates of extensive human expansion into space.” In such domains, there seems to be a shortage of people with expertise in suffering-focused ethical traditions who write from the perspective of those traditions. 

  • Which important new or neglected points can a suffering-focused ethical perspective bring to the table in a given domain or area of applied ethics?

Descriptive ethics and potential biases

A different perspective is that of descriptive (rather than normative) ethics. Descriptive ethics is the study of people’s beliefs about morality. We are interested in finding out more about existing views related to suffering-focused ethics.

  • What fraction of people hold suffering-focused views? What is the distribution of opinion on various thought experiments, like Omelas? What are the most significant correlates?
  • In discussions about the disvalue of the bad parts of life compared to the value of the good parts, one recurring idea is the tradeoffs someone makes or would make. A person might say, “I would accept 1 day of torture in exchange for living 10 extra happy years.” What, if anything, can be concluded from the actual or hypothetical tradeoffs people make?

In a similar vein, one could explore possible biases for or against suffering-focused views.

  • One might argue that many people would give far more priority to suffering if only they were more exposed to it, yet we tend to look away, as it is often unpleasant to consider (severe) suffering. Similarly, studies suggest that people make more sympathetic moral judgments when they experience pain. To what extent is it true that mere (lack of) exposure or attention is a key factor in whether people prioritise the reduction of suffering, as opposed to deeper value differences?
  • Conversely, what are possible reasons why we ourselves might be biased in favour of suffering-focused views?

Macrostrategy

In a complex world, any attempt to do good requires careful thought about the long-term impact of our actions. Research on macrostrategy aims to improve our understanding of the “big picture”; that is, the condition that we find ourselves in when zooming out from immediate issues and instead considering the entire course of history in the long run. A better grasp of crucial considerations is necessary to identify the most important levers for reducing suffering.

Long-term focus

Due to the potentially vast numbers of individuals that may exist in the future, it is plausible that improving the long-term future should be our top priority.

  • How can we have lasting and far-reaching impact in a constantly changing world – in particular, given significant value drift?
  • Considering the complexity of the dynamics that will shape our future (“cluelessness”), how can we guarantee, or at least be somewhat confident, that our influence is positive? 
  • What are ways to move the future in a direction that is good from many or all plausible perspectives?
  • All things considered, how strong is the case for focusing on the long-term future? (See e.g. here and here.)

The big picture

It would be easier to influence the long-term future if civilisation eventually reaches a steady state, also referred to as “lock-in” or “goal preservation”. 

  • How plausible is this assumption? (More.)
  • What might such a steady state look like? Assuming that it will eventually happen, when is it most likely to happen? 
  • Are concepts such as “lock-in” or “steady state” even well-defined, and if not, how can they be clarified?
  • What does that imply for our ability to shape the future?

When trying to shape the long-term future, we are particularly interested in “pivotal events” that would have an extraordinary and lasting impact (e.g. by leading to a steady state), such that they would easily be recognised as the most important developments in the course of history.

  • What future technologies could be so transformative that their invention would qualify as a pivotal event? Candidates include advanced artificial intelligence (more on this below), whole brain emulations, and genetic enhancement.
  • Would a pivotal event likely be related to a powerful technology, or is it more likely to be about social change? Possible scenarios include (but are not limited to) a world government, the emergence of a unified global culture, or the rise of a global totalitarian system.
  • How likely is it that we will see explosive economic growth and innovation in the future, akin to the industrial revolution? If so, when would this be likely to happen? How likely is a slowdown or stagnation? (See e.g. here and here.)

Another perspective is to ask whether we are at the hinge of history.

  • How does our ability to influence the long-term future compare to that of future individuals? Is our time uniquely influential, or do we expect better opportunities to arise in the future? (See here for more questions in this area.)
  • What does that imply for patient philanthropists?

Transformative artificial intelligence

The idea that we should aim to shape transformative artificial intelligence (AI) is among the most well-known crucial considerations of effective altruism.

  • What are the most likely scenarios in terms of what future AI will look like?
    • Should we expect a single, superintelligent agent, or is transformative AI more likely to consist of a large number of distributed, task-specific systems (see e.g. Eric Drexler’s model of comprehensive AI services)?
  • When should we expect transformative AI to be developed, if at all?
    • How long would the transition to an age of advanced AI take (“hard takeoff” vs “soft takeoff”)? Will progress be gradual or discontinuous? (More.)
    • What are the right concepts to measure this?
  • All things considered, to what degree should altruists focus on shaping transformative artificial intelligence? (Cf. here, here, and here.)

Future suffering and s-risks

In line with suffering-focused ethics, our primary interest is to avert future suffering. Therefore, we are not only thinking about how to influence the long-term future in general, but also about how large amounts of suffering might arise in the future, and how that can be avoided.

  • What are the largest sources of future suffering (in expectation)?
  • Is most (expected) suffering incidental, that is, arising as a side effect in the pursuit of other goals? Or should we be most concerned about scenarios where powerful agents deliberately cause harm?
  • To what degree is the distribution of future suffering tail-heavy? Should we expect that most future suffering is due to worst-case outcomes?
  • To what extent can we predict future suffering from our current vantage point? What fraction of (expected) future suffering is due to unknown unknowns?
    • What are possible ways to reduce suffering from (unknown) unknown sources?
  • How can we facilitate a sound research project focused on suffering reduction?

We are particularly interested in preventing scenarios that result in exceptionally large amounts of intense suffering, also called s-risks. The following questions are first steps towards a better understanding:

  • Which moral positions and arguments can ground a priority on s-risk reduction? (See also Chapter 14 in Magnus Vinding’s book Suffering-Focused Ethics.)
  • What are the main arguments for and against a focus on s-risks? Which considerations have so far been overlooked or underappreciated?
  • How can we divide the space of all possible s-risks into useful sub-categories? (See here.)
  • To what degree should we worry about human civilisation causing an s-risk, compared to failing to prevent s-risks caused by extraterrestrial civilisations? (See here.)
  • What are some of the risk factors that make s-risks more likely? How can we mitigate these risk factors? Is it better to focus on specific interventions or broad risk factors?
  • What are useful proxies that can make the highly abstract endeavour of s-risk reduction more tangible and specific? Candidates may include better values and greater cooperation among powerful agents.
  • How can we best ensure that our efforts to reduce s-risks do not inadvertently increase s-risks?
  • Which particular s-risks are most important to reduce in expectation?
  • All things considered, how likely is a severe s-risk?

Incidental s-risks

Incidental s-risks arise when the most efficient way to achieve a certain goal unintentionally creates a lot of suffering in the process. The agent or agents that cause the s-risk are either ignorant of or indifferent to that suffering, or they would prefer a suffering-free alternative in theory, but aren’t willing to bear the necessary costs in practice.

  • What are plausible concrete scenarios for how such s-risks might arise?
  • What are plausible levers to reduce incidental s-risks? How do they compare? (More.)
  • What are the greatest incidental s-risks?

Agential s-risks

Agential s-risks involve agents that actively and intentionally want to cause harm. One plausible mechanism is that, as part of an escalating conflict, agents might threaten to bring about outcomes that are particularly bad for the other side.

  • How can one best prevent worst-case outcomes resulting from the execution of such threats? (See also: Research priorities for preventing threats)
  • Surrogate goals have been suggested as one potential solution. What are the key barriers to implementing successful surrogate goals?
  • Improving cooperation and reducing the risk of severe future conflicts seems robustly beneficial. Are there practical opportunities to achieve this, or is it too intractable?
  • What are other possible reasons why agents might want to cause harm (e.g., sadism or hatred), and how plausible is it that such motives will lead to an s-risk scenario?

Interventions

In addition to research on ethics and macrostrategy, we are interested in applied work to learn more about high-priority interventions. The following is a selection of plausible priority areas.

Increasing moral concern for the suffering of all sentient beings

One of the most straightforward levers to reduce suffering is to ensure a sufficient degree of moral concern for all sentient beings. This approach is sometimes discussed under the term “moral circle expansion” (MCE). 

  • How can we best increase concern for suffering and motivate people to prevent it in cost-effective ways? How can we entrench concern for suffering at the level of our institutions and make its reduction a collective priority? 
  • How could moral circle expansion backfire? (More.)
  • What aspects of moral circle expansion are most important? Should we focus on particularly neglected sentient beings, such as wild animals or invertebrates?
  • Is it plausible that we should advocate for concern for artificial minds at this point? (Cf. here, here, and here.)

We are especially interested in the long-term future, and s-risks in particular.

  • How can we ensure that an improvement in values is lasting, rather than reverting to something worse?
  • What can be done to ensure the long-term stability of the relevant social movements (e.g. the animal advocacy movement)? What are the main risks that would jeopardise the long-term ability of such movements to achieve positive social change?
  • What could trigger a serious (and permanent) backlash against the animal advocacy movement (or the suffering reduction movement)? What reasonable steps should we take to prevent these movements from becoming too controversial?

Shaping powerful new technology

If advanced artificial intelligence (or other transformative technologies) is developed in this century, then it stands to reason that shaping the development of such powerful technologies is a unique lever to influence the long-term future. In particular, we are interested in AI safety work that is focused on reducing s-risks arising from conflict between advanced AI systems. (For more details on this, we refer to the research agenda of the Center on Long-Term Risk.)

  • How does AI safety research with a focus on s-risks differ from “conventional” AI safety?
  • Considering that such AIs will be far more capable than we are, what can be done at this point to reduce the risk of catastrophic bargaining failures?
  • Does an improved understanding of foundational issues in game theory and decision theory help to prevent worst-case outcomes, or could such insights backfire in unforeseen ways?
  • How can surrogate goals be implemented in AI systems? 

However, preventing s-risks from advanced AI is not entirely a technical issue. We are also interested in AI governance, i.e. the norms, institutions, and regulations that govern the development of AI.

  • What aspects of AI governance are most important from a suffering-focused perspective? 
  • How can we ensure adequate rule of law in contexts that involve advanced artificial intelligence? 
  • What can be done to advance a cooperative mindset and positive-sum thinking within the AI community and among other stakeholders? How can we avoid an AI “arms race”?

We are also interested in foundational questions to learn more about whether attempts to shape new technology are a high-priority intervention. 

  • How does the feasibility of changing technological developments compare to the feasibility of social change? (See e.g. here for a similar discussion in the context of animal advocacy.)
  • Transformative artificial intelligence has received the most attention, but what about other powerful technologies?
  • To what degree can we predict future technological developments?
  • The impact of technology is always mediated through economic, social, and political dynamics. Should we try to intervene on a technical level, or should we instead aim to influence other factors, such as the values, institutions, or culture that shape the development of a new technology? 

Reducing malevolence

We are interested in work on reducing risks from malevolent actors. This is because, as human history suggests, particularly “evil” individuals in positions of power are a plausible mechanism by which worst-case outcomes could come about.

  • In what contexts are malevolent actors particularly likely to negatively affect the long-term future, rather than merely causing short-term harm?
  • What can we do to increase public awareness of malevolence and of the risk of “pathocracy”?
  • What political norms, institutions, and systems are least likely to result in malevolent leaders? (See e.g. this comment.)
  • How tractable is reducing malevolence (in particular, without new technologies)?
  • Is it plausible that malevolent traits could arise in AI systems? If so, how can learning algorithms be adapted to mitigate this risk?
  • Might sortition be a promising way to reduce the influence of malevolent individuals?

Better politics

Given our vast uncertainty about the future, we should arguably focus on putting society in as good a position as possible to address the challenges of the future. One important dimension of this is a functional political system. We are therefore interested in efforts to avoid harmful political and social dynamics, to strengthen democratic governance, and to establish a more reasoned and thoughtful discourse on policy. (For more on this, see here.)

  • Can we identify institutional changes to our political system(s) that would constitute a clear improvement over the status quo?
  • Are efforts to improve politics a cost-effective intervention from a long-term perspective (and in terms of s-risks in particular)? Or is this area too intractable, risky, and crowded? 
  • Is voting reform a high-priority intervention?
  • How can we ensure that the interests of all sentient beings are considered (to a greater extent) in political decisions?

Excessive political polarisation and tribalism make it harder to reach consensus or a fair compromise, and undermine trust in public institutions. Such dynamics also tend to exacerbate conflict, which is a risk factor for s-risks.

  • How can we increase the propensity of political decisions to focus on widely shared, cooperative aims, such as the reduction of suffering, rather than getting caught up in political conflict?
  • What are interventions that can robustly reduce polarisation in a lasting way, without the risk of backfiring in unexpected ways? 
  • Existing proposals include voting reform, public service broadcasting, deliberative citizens’ assemblies, and compulsory voting. Are any of these proposals tractable and effective ways to reduce polarisation?