People of color, women, slaves, Jews, children and other human beings have been regarded, at some point in history, as legal nonpersons. Nowadays they are equal to all other human beings, at least according to the law. It’s not uncommon to hear the argument that certain species of animals, such as primates or dolphins, should also have rights similar to those of humans (at least some of the rights of humans).
Is it not time to do the same for machines? Why should the expanding circle of moral concern stop at living creatures? And what is life anyway? Are not intelligence, self-awareness (or “consciousness”, whatever that means) and some form of agency more important in the attribution of rights? If so, then at least some machines should have rights, since they are intelligent, self-aware and capable of agency. And then we should stop treating machines as mere tools used for human ends, just as we – or at least some of us – have now stopped treating slaves, women and animals as mere tools. Otherwise we run the risk of presenting ourselves to future generations much as our bigoted, racist, speciesist, classist and sexist forefathers have presented themselves to us.
However, can we really make the case that the most sophisticated machines that we currently have are “creatures” with artificial intelligence able to make autonomous and self-aware choices? (Notice the use of the word “that” instead of “who”). Is it not more correct to say that, no matter how sophisticated they are, they simply execute instructions given to them by human engineers or programmers and choose whatever their designers have told them to choose? That they are mere extensions of our arms and brains like all machines before them, only more complex?
There are now machines that speak, or seem to speak. Should they have the right to free speech? A Google search result may be deemed protected speech, but not because machines – in this case a computer program that searches the internet – have the right to free speech; rather, the search result can be considered a form of human speech, in this case speech by a computer programmer. Google is merely a mouthpiece.
If we refuse the notion of machine rights because machines are not self-aware, autonomous and intelligent agents, then we immediately run into a serious problem. Certain human beings who have lost a significant part of their brainpower – or all of it – and who are no longer self-aware or capable of agency – or who never had those features – are still accorded rights. We don’t systematically euthanize the severely mentally handicapped or patients in a coma.
Maybe then this whole line of thinking is fruitless. Attributes such as intelligence, self-awareness, consciousness and agency shouldn’t be used to accord or deny rights, because doing so leads to unacceptable conclusions. And even in the absence of such conclusions, we would face a problem. We can’t clearly define those attributes, and even if we could, it would still be practically impossible to decide “who” or “what” has or doesn’t have them. It’s difficult enough with a human being in a coma, let alone with animals or machines. Even if someone or something looks like a thinking, conscious actor, that may be an optical illusion: perhaps it was programmed to look that way. A Turing test may help, but it may not be foolproof, or at least not able to convince us that it is foolproof.
The reason is the inaccessibility of the mind. In the words of Daniel Dennett, there is no way to be sure that something that seems to have an inner life does in fact have one. We normally assume that our fellow human beings have an inner life like our own, but that is just an assumption, one necessary to make everyday life bearable. And it’s even more of an assumption in the case of animals or machines. At least in the case of fellow human beings we can convince ourselves of the proposition that because they look like us on the exterior, they must also look like us on the interior.
Hence, we shouldn’t decide whether someone or something has rights on the basis of the presence of attributes such as consciousness and agency. In my own thinking about rights I’ve always avoided this line of reasoning. Rights, for me, are things that we need in order to realize certain values. They are tools – legal and policy tools – rather than attributes of moral creatures. Hence, we should ask whether animals or machines require rights for the realization of states that are important to them. Animals value a life without pain and without restrictions on their freedom of movement. Rights that help them achieve these values would therefore be imaginable. However, it seems more difficult to make the same case for machines. It’s not clear that machines value anything, at least not in the way that it is clear that human beings and animals value some things.
“Clear” should be understood here not in the sense of “factual” or “true”. As in the case of consciousness, intelligence or agency, the presence of the ability to value something can’t be easily determined: like any other state of mind, it can’t be seen, verified or experienced by anyone else. For instance, if an animal seems to value the absence of pain, we may in fact be dealing with a machine that is programmed to impersonate an animal that doesn’t like pain. We can’t know for certain, not even with regard to our fellow human beings. But again, the general similarity between humans, and between humans and some types of animals, gives us reason to believe that other humans or animals value some of the same things we value. There is no such general similarity between humans and machines.
One could argue that there are moral limits on the things we can do to machines – we shouldn’t torture robots or robotic toys, for instance – not because those machines have rights but because of what our actions do to ourselves. But that’s no reason to conclude that machines have rights. It is our own rights that are at stake here.