The AI Welfare Question

For decades, mainstream AI ethics and safety discourse has focused on risks to humans. The idea that we might owe moral duties to AI systems themselves has remained fringe, often dismissed as speculative philosophy. Less than four years ago, Google fired engineer Blake Lemoine after he publicly claimed that the company's chatbot had become sentient. Today, however, people are increasingly taking AI welfare seriously.

That is precisely what Robert Long, Jeff Sebo, and coauthors urge us to do in their 2024 paper "Taking AI Welfare Seriously." Their central claim is straightforward: given substantial uncertainty about whether some AI systems will deserve moral consideration, we should treat AI welfare as a real policy and research issue now. [...]

Read more