The AI Welfare Question

For decades, mainstream AI ethics and safety discourse has focused on risks to humans. The idea that we might owe moral duties to AI systems themselves has remained fringe, often dismissed as speculative philosophy. Less than four years ago, Google fired engineer Blake Lemoine after he publicly claimed that the company’s chatbot had become sentient. Today, however, people are increasingly taking AI welfare seriously.

That is precisely what Robert Long, Jeff Sebo, and coauthors urge us to do in their 2024 paper “Taking AI Welfare Seriously.” Their central claim is straightforward: given substantial uncertainty about whether some AI systems will deserve moral consideration, we should treat AI welfare as a real policy and research issue now. [...] 

Read more

Should Suffering-Reducers Focus on AI?

If we want to reduce suffering over the long term, should AI be a top priority? As capabilities advance and talk of AGI and the singularity fills the airwaves, some argue we should focus heavily on AI. Others are skeptical. In their view, steering AI may be just one important cause among many—and prioritizing it risks neglecting more reliable ways of reducing suffering. Who is right?

Pro

The strongest case for focusing on AI begins with the claim that it could be a genuine hinge-of-history technology—not just another tool, but something with imminent world-shaping potential. If we reach human-level or superhuman systems, the paradigm of “smart” humans controlling “dumb” tools would end. If that happens, all bets are off, but it seems reasonable to expect tremendous impacts. [...] 

Read more