The topic of AI welfare is fast becoming mainstream. As I wrote in my last post, there’s an emerging debate that has been drawing strong reactions. Some resist even treating AI welfare as a legitimate concern. But there is a perhaps more understandable form of resistance: not to taking AI welfare seriously in general, but to particular ways of doing so. Many may accept that AI welfare deserves some concern, yet hold that granting rights to current or near-future AI systems is a bridge too far.
(“AI systems” here should be understood broadly, not limited to particular architectures such as current digital systems.)

AI Rights?
There are two main arguments for granting AI systems rights. One appeals to welfare and moral standing: if AI systems can be benefited or harmed in ways that matter for them, then they may deserve protections in their own right. The other is practical: even without clear moral status, granting limited rights could help shape incentives and maintain a stable legal order. As with granting legal personhood to corporations, this may help enable clearer, more predictable rules about responsibility and liability.
Both arguments have drawn significant resistance; several U.S. states have been moving to preemptively deny rights to AI systems. This resistance is not hard to understand. Joanna Bryson, who insists that “robots should be slaves,” articulates some core objections to granting AI systems rights. Among other concerns, she argues that doing so risks misdirecting moral attention away from humans and obscuring questions of accountability for AI’s impacts.
Granting broad rights to AI systems could also plausibly contribute to human disempowerment. Concerns about disempowerment or loss of control, whether gradual or abrupt, have been around for a while. Whether or not these concerns are overstated, granting AI systems rights could make disempowerment more likely, since it might limit the controls humans can exercise over AI.
Seeking a Middle Ground
When someone advocates rights for AI systems, a wide range of possibilities may be in view. Those rights could be expansive, encompassing economic and political powers such as owning property, entering contracts, or even voting. Or they could be minimal, limited to basic protections against harm or some form of legal standing.
But discussions often blur this distinction. A common mistake is to treat rights as all-or-nothing: either AI systems are mere tools with zero protections, or they are full participants in social and political life. This binary is misleading. Rights are not monolithic; they vary in scope, function, and strength. Our laws already reflect this. Children are protected against abuse but lack full legal autonomy. Non-human animals are in principle covered by anti-cruelty laws without being legal persons. These cases show that we can recognize targeted safeguards without granting comprehensive rights.
If some AI systems could plausibly be capable of suffering, we may not need to settle their full moral and legal status to justify basic safeguards against extreme harm. When considering whether it should be permissible to cause suffering to AI systems, perhaps we think (with Bentham) that the right question to ask is simply “can they suffer?”—not whether they are fundamentally like us.
A useful model is Article 5 of the Universal Declaration of Human Rights, which prohibits torture and cruel, inhuman, or degrading treatment. We might imagine something similar for AI systems: a basic protection against extreme harm. Granting this would not require treating AI systems as citizens or giving them autonomy, property rights, or political participation. It would simply be a minimal safeguard against the worst outcomes.
This approach may also be more politically tractable, sidestepping most concerns about disempowering or de-centering humans. Notably, some survey evidence suggests that a significant share of the public is open to the possibility that AI systems could be conscious and should be granted rights.
Conclusion
To be effective, these protections may need to be backed by concrete measures: welfare assessments, like those Anthropic has pioneered, and potentially audits of training regimes and restrictions on experiments or use cases that could plausibly cause suffering. This would likely demand more research into AI consciousness, though such research would need to be conducted responsibly and carefully to avoid inadvertently causing more suffering.
But the basic idea may still be sound as a kind of precautionary minimalism. We need not rush to declare AI systems legal persons and settle their full legal status, but neither should we proceed as if there is no chance we could owe them anything. Between these extremes lies a sensible path: build basic protections—as well as norms and institutions—to guard against possible artificial suffering.
