Volume 2, Issue 1
1st Quarter, 2007

Artificial Intelligence as a Legal Person

David Calverley, Esq.


Image 11: Humans, persons, and property

You have to remember that the word "persona" actually derives from the Latin, where it referred to the mask worn by actors in early plays; only over time did "persona" come to refer to the human being wearing the mask. Originally it was the mask itself. Because there was never any basis for distinguishing the two until very recently, we have always conflated the ideas of human and person, and both have always stood in opposition to the idea of property.

Now, if we look at the person as a legal creation, again, it is a cluster concept: intentionality, autonomy, self-awareness, sentience. A variety of terms can be brought into it, but as we have seen, it need not be biological.

As Solum says, "Functional intentionality is probably enough." I would agree that autonomy seems more problematic as a knockout argument for legal personhood. The potential problem we run into here is, in fact, one of the questions that came up about slavery. It is a slightly different argument: can we create a machine and make it a slave? Can we treat it as property? And if we do treat it as property, at what point will it begin to make a moral claim?

In order to understand that issue we have to recognize that the law of slavery has always been based upon an argument of positive law. If you look at the early cases, Somerset's Case[1], an English case, ruled that a slave brought from the colonies to England could not continue to be held in slavery once he arrived in England. And the reason it was decided that way was that English law did not provide for slavery; English law did not allow it.

That rationale is identical to the rationale in the much-maligned Dred Scott[2] case in the U.S., where Chief Justice Taney said that Dred Scott, a black man from Missouri, was not a person under the U.S. Constitution insofar as that definition was developed by the drafters of the Constitution. On that basis alone he said that constitutional protection could not be extended to him. Taney held that you have to revert to state law in order to determine Dred Scott's status.

Now, there was no qualm about whether Dred Scott was human at all; it was clearly recognized that he was. The antislavery movement did arise very, very quickly, beginning with the Quakers. I question whether it necessarily had to be a top-down issue, because in many scenarios the slaves' plight was analogous in a lot of ways to that of the British working class at the time, and a lot of people could identify with that. Also, you have to remember that the practice of indentured servitude was well established; in fact, most of the American colonies were initially populated by indentured servants. What was most surprising to me was the speed at which that change took place in the society. There had to be something at work in the society that triggered that kind of immediate reaction and willingness to change a very significant economic factor.

In each of those cases, if the positive law had been different, it is likely that the outcome could have been, or would have been, different.

The problem comes in because of the moral dimension. When you add a moral dimension you get away from strictly positive law and into much more complex interplays between positive law, natural law, and the theoretical underpinnings of what a legislative body may properly bring into its determination. When I say legislative, I should clarify that to include both the legislature and, more properly, a court. Can a court bring moral issues, moral considerations, into its determinations to effect law?

We need to be very, very careful when we start laying out all of these different criteria, because we run the risk of conflating some very complex philosophical problems. But I would argue that by combining intentionality with autonomy we can probably save a nonbiological machine from being viewed purely as property: it is something more than property.

By applying a functional analysis we can probably get to a point where we could argue that the law should act on this entity. But we have to be aware that there are limitations, and we still need more care in defining what we mean by "person."

One of the things that I always hear about AI is that you aren't going to have the opportunity for gradualism, that the clock cycle is so much faster that change won't be able to happen gradually.

There was an article published in 1986 by the theologian Michael LaChat,[3] and his argument was that if the end point you want to reach is a being or an entity that is a functional human equivalent, arguably you implicate moral issues very early in the process, and you may have to start looking at whether you have come within, for example, some of the strictures of the Nuremberg Code[4] or the Belmont Report[5], which is the U.S. basis on which bioethics was developed. If your ultimate end point is to create a human equivalent, you may not be able to go very far down that road, simply because of the concern that you are experimenting much, much too early in the process.

There is an interesting point in the context of the Frankenstein story. The real key to the story, when you read Mary Shelley's[6] original and not the Mel Brooks movie, is that the creature asked Frankenstein to create him a wife, a creature as ugly as he, so that they could live out their lives together, as he had seen the family do that he spied on when he lived in the woodshed.

The point of it is, he was asking for something that is quintessentially human. He wanted a companion; he wanted not to be alone in the world. So if you take that argument, I can make a distinction between one AI and 10,000 AIs.



[1] Somerset's Case, 98 Eng. Rep. 499 (1772).

[2] Dred Scott v. Sandford, 60 U.S. (19 How.) 393 (1857), available at http://caselaw.lp.findlaw.com (accessed February 20, 2007).

[3] Michael LaChat, "Artificial Intelligence and Ethics: An Exercise in the Moral Imagination," AI Magazine, Summer 1986.

[4] Trials of War Criminals before the Nuremberg Military Tribunals under Control Council Law No. 10, Vol. 2, pp. 181-182 (Washington, D.C.: U.S. Government Printing Office, 1949).

[5] The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research (1979), available at http://ohsr.od.nih.gov/guidelines/Belmont.html.

[6] Mary Shelley, Frankenstein (1818; New York: Barnes & Noble Books, 1993).
