Basic Rights for AIs

The topic of AI welfare is fast becoming mainstream. As I wrote in my last post, there’s an emerging debate that has been drawing some strong reactions. Some resist even treating AI welfare as a legitimate concern. But there’s a perhaps more understandable form of resistance—not to taking AI welfare seriously in general, but to particular ways of doing so. Many may accept that we should grant some concern to AI welfare, yet hold that granting rights to current or near-future AI systems is a bridge too far.

(“AI systems” here should be understood broadly, not limited to particular architectures, such as current digital systems). [...] 

Read more

The AI Welfare Question

For decades, mainstream AI ethics and safety discourse has focused on risks to humans. The idea that we might owe moral duties to AI systems themselves has remained fringe, often dismissed as speculative philosophy. Less than four years ago, Google fired engineer Blake Lemoine after he publicly claimed that the company’s chatbot had become sentient. Today, however, people are increasingly taking AI welfare seriously.

That is precisely what Robert Long, Jeff Sebo, and coauthors urge us to do in their 2024 paper “Taking AI Welfare Seriously.” Their central claim is straightforward: given substantial uncertainty about whether some AI systems will deserve moral consideration, we should treat AI welfare as a real policy and research issue now. [...] 

Read more

Should Suffering-Reducers Focus on AI?

If we want to reduce suffering over the long term, should AI be a top priority? As capabilities advance and talk of AGI and the singularity fills the airwaves, some argue we should focus heavily on AI. Others are skeptical. In their view, steering AI may be just one important cause among many—and prioritizing it risks neglecting more reliable ways of reducing suffering. Who is right?

Pro

The strongest case for focusing on AI begins with the claim that it could be a genuine hinge-of-history technology—not just another tool, but something with imminent world-shaping potential. If we reach human-level or superhuman systems, the paradigm of “smart” humans controlling “dumb” tools would be over. If that happens, all bets are off, but it seems reasonable to expect tremendous impacts. [...] 

Read more

The Animal Gap in AI Governance

By Alistair Stewart

How can we steer AI development today and in the future to reduce animal suffering, rather than increase it? One place to look is AI governance: the set of norms, policies, laws and institutions whose purpose is to influence how AI is developed and used. [...] 

Read more