Volume 6, Issue 1
Elizabeth R. McClellan
This article was adapted from a student writing competition paper and subsequent presentation by Elizabeth R. McClellan to the 6th Annual Colloquium on the Law of Futuristic Persons, December 10, 2010, within the Terasem Island Amphitheatre in Second Life.
Elizabeth, a second-year law student at the University of Memphis, Cecil C. Humphreys School of Law, delivers a persuasive analysis of universal human rights and autonomous/conscious machines in light of the existing scholarship on the challenges to existing legal doctrines posed by increasingly self-directed technology. She discusses why members of the legal community should examine the emerging legal issues surrounding autonomous or conscious machines with respect to the policies behind legal personhood, laying the groundwork for a system that will assign future autonomous or conscious technologies appropriate legal classifications, rights and duties.
In 1966, only forty-five years after Karel Čapek introduced the word "robot" into the international lexicon, Robert Heinlein envisioned a nearly bloodless political coup masterminded by a self-aware computer. The Moon Is A Harsh Mistress was not the first work featuring a conscious computer, nor did it significantly examine the legal status of the technological éminence grise behind the Lunar Revolution. By contrast, characters in the 1982 novel Friday openly discuss the subordinate legal status and few rights of "artificial persons" and "living artifacts," but lament that scientists are continually stymied in creating artificial intelligence. Isaac Asimov predicted mob violence against robots by those put out of work by mechanized labor. Characters in William Gibson's novel Idoru are confused by the implications of a proposed celebrity wedding: musical superstar Rez to idol-singer Rei Toei, a software agent. Little, if any, of the speculative literature envisioning a future where humans live alongside enhanced persons or autonomous machines has assumed that the latter two will be full persons in the eyes of the former, or in the surrounding legal system.
The legal scholarship theorizing about the potential status of futuristic persons  shows similar doubt about how they will be treated by the human legal system. The question of the legal status of computers acting autonomously has already received scholarly attention.  Even though no conscious computers have thus far sought to establish their legal personhood before a court of law,  serious scholarship and limited legislative action has arisen in response to the use of semi-autonomous computers in industry and trade.  Scholars in this area have called for the extension of at least limited legal personhood to computers,  and some have contemplated the eventual extension of Constitutional rights to advanced artificial intelligence. 
If "[l]egal personhood indicates the capability to be a subject of rights and duties,"  the threshold question of whether a computer claiming consciousness can be afforded personhood status must be met before its rights and obligations can be assessed. In Part II, I examine current designations of legal personhood and the various meanings ascribed to that status by the law, concluding that no significant legal concept or policy would prevent extending that status to a self-aware computer, but that human perception is likely to play the primary role in determining whether legal personhood will be granted to artificial intelligence. In Part III, I explore the various types of legal persons already in existence, arguing that of the existing classifications, a computer claiming consciousness is most like a human being, making the rights of "natural persons" the appropriate analogue when assigning rights and duties to machine consciousness.  In Part IV, I explore the rights and duties contained in the Universal Declaration of Human Rights  as applicable to a conscious computer. In Part V, I advocate the desirability of extending limited legal personhood to existing autonomous computer systems, both to address current issues arising from computer autonomy and to provide an eventual avenue for the full legal protection of self-aware computers and other futuristic persons.
What is legal personhood? In its simplest definition, legal personhood denotes "a right-and-duty bearing unit"  — to have rights or obligations at law, a person or entity must be recognized as a legal person. 
According to some scholars, legal personhood serves to ease social and economic interactions, as well as providing some forms of legal protection.  Others see personhood as a grant of moral and legal rights that protect the designee from being used for the satisfaction of others, as well as justifying selfish treatment of those not defined as legal persons.  Numerous authorities have argued for the extension of legal personhood to entities currently defined as property, including animals  and autonomous computers. 
Persons, as opposed to citizens, are the primary "subject of human rights," and many rights arising under the United States Constitution inhere to persons, not just citizens. However, not all human beings have always been legal persons, even when constitutionally designated as "persons." In theory, under the Universal Declaration of Human Rights, all humans are fully legal persons. Most corporate statutes in the United States define a corporation as both a separate legal entity and a legal person. Rights-holding entities which are sometimes classified as legal persons range from churches and states to ships. While not all legal rights-holders are juridical persons, and juridical persons do not share all the constitutional rights of human citizens (as well as being exempted from some of their duties), juridical persons, like corporations, enjoy some constitutional rights, including the right to freedom of speech.
Given existing precedents, there theoretically exists no significant barrier to extending some form of legal personhood to computers, whether autonomous and conscious or not. If a British court in 1925 could find that a family "idol" was a legal person whose "will" needed to be represented by next friend in an underlying dispute between the family members, it is safe to say that the existing concept of legal personhood would not have to stretch its parameters too much to include computers that, unlike the household idol, are claiming consciousness. Personhood is intentionally flexible; the significance of being termed a legal "person" differs depending on the reasoning behind the grant of personhood. If the law can determine that the "natural rights" of humans extend to artificial "persons" such as corporations, the extension of personhood to autonomous computers seems eminently permissible. Further, given the function of assigning legal rights and duties to create social consequences and promote socially positive behavior, it is logical that a powerful, conscious machine with the capability to anticipate consequences and conform its conduct to expectations should be treated as a legal person, to encourage it to behave socially.
In the case of corporations, legislatures made them legal persons for the convenience of the system, relying on the existence of human persons behind every corporate enterprise who could be haled into court, called to testify, and otherwise carry on the usual business of jurisprudence. A conscious computer would seem to have similar abilities; if it can make the claim of consciousness when hiring counsel to plead on its behalf, it can make that claim in court and let a finder of fact determine the persuasiveness of its testimony. However, the law is made and administered by human beings. Both law and human thought are historically steeped in Cartesian dualism, drawing a bright line between man, who possesses rights, and machine, which is and can only be property. For all that ships and tracts of land may be sued in rem and treated as legal persons for purposes of suit, neither can (nor has ever tried to) personally protest its own destruction in a court of law. The human race has already shown itself willing to place members of its own species on the "property" side of the divide, while simultaneously referring to them as "persons." A machine designed by humans, even if claiming separate conscious existence, will present a distinct challenge to both law and popular imagination if it is to overcome its historical classification as property and secure personhood. The scale of this difficulty is likely to increase depending on what types of rights and duties the "personhood" label communicates in context. It takes no great stretch of imagination to conclude that the average person would react differently to the news that computers were legally able to serve as agents or trustees than to the news that computers were legally able to serve as military enlistees or as Congresspersons.
While the legal scholarship on artificial intelligence shows at least some familiarity with the other disciplines pondering the means and meaning of machine consciousness, the public may envision HAL 9000, GLaDOS, or another AI-run-mad from popular culture when asked to consider legal personhood for conscious machines. Because the creation of juridical persons is the province of the legislature, it is likely that public opinion as to the "human-like" qualities of conscious machines will be the determining factor in whether such futuristic persons will become legal persons or remain objects in the eyes of the law and, if they convince humans to reclassify them from property to persons, what rights they will enjoy and what obligations they will bear.
What rights and obligations would be conferred by legal personhood for conscious computers? Just as personhood is not given only one definition in law, the designation does not confer a fixed set of rights or duties. Human and corporate persons do not enjoy the same array of rights nor shoulder the same duties, although there is significant overlap. Other entities sometimes recognized as legal persons enjoy even fewer rights and bear even fewer duties, and some remain classified as property even as obligations are "assessed" against them. Animals, as living property, are legally things, and thus have no rights, but society has dubbed the limited protections against animal cruelty and torture "animal rights" all the same. "Personhood," then, can be viewed as a hierarchy of rights and duties, in which humans, as "natural persons," enjoy the widest selection.
Of these precedential examples, a computer claiming consciousness is distinct from each analogue. Whatever the similarities between the activity in an organic neural network like the human brain and the processors of a conscious computer, the public will likely balk, at least initially, at directly analogizing organic and machine consciousness. But a computer claiming consciousness looks even less like a corporation than it does like a single human; a corporation is a collection of humans permitted by statute to wear an alternate group identity. There is always the suggestion that the "corporate veil" can be pierced if its component persons should be personally liable for crimes or offenses committed by the corporation. A novel argument might be made comparing a computer claiming consciousness to a religious organization, possessing limited rights and equally limited duties but guaranteed freedom from legislative interference, but it would likely be as unsupportable as it is novel. Congress and state legislatures lack the power to legislate religious organizations out of existence, but they have no similar constitutional compulsion to grant legal personhood to conscious machines.
The other existing categories are even less desirable and show no greater precedential value. The legal "personhood" of ships and land is now recognized as a fiction, used to subject persons with an interest in the property to the jurisdiction of the appropriate court. The personhood of conscious computers cannot, then, be effectively modeled on the "personhood" of property subject to actions in rem. Animals, too, are property, and the vast scholarship urging legal personhood for some or all animals has met with very little success in securing actual animal rights, even for those species whose attributes are most human-like. To categorize conscious computers as similar to animals, lacking rights but deserving some protections for public welfare reasons, would be no victory for the computers. Such entities would deserve more protection than a system of local "computer welfare" statutes, based in policy rather than inherent rights, could provide. The protections conferred by analogous animal welfare statutes are often governed more by political concerns than philosophical objections or even public opinion as to animal cruelty. Even if the comparison between a computer claiming consciousness and a great ape or a housecat were more evident, seeking equal status with animals would still leave the computer on the property side of the divide, an unenviable, rightsless position in the current system.
If computers claiming consciousness are to secure rights and obligations, it seems that the closest analogous class of legal persons is human beings. The human combination of "consciousness" and "will" is sometimes credited for elevated human status.  A computer claiming consciousness is, of course, claiming a higher consciousness; animals, although widely recognized to be conscious in many senses of the word, are not considered to have consciousness on par with humans.  A computer seeking to establish its rights and obligations might rely, not solely on consciousness, but its simultaneous exhibition of an independent or autonomous will, to argue that it deserves rights analogous to those of humans and is capable of discharging analogous duties. The increasing autonomy of machines—the "accountability gap" between the human operator or owner of a machine and the machine itself—  prompted much of the existing legal scholarship examining or advocating personhood for autonomous computers.  As computers increase their "capacity for social action," more of their actions will present legally significant consequences, increasing the system's need to assign legal personhood, rights and duties in order to normalize these extra-legal actions.  An entity that claims consciousness, exhibiting autonomous, socially productive, legally significant behavior, is acting very much like an independent adult human is expected to act. While the meaning of "will" as applied to human beings is far from settled,  a computer claiming to be conscious in a court of law must, like any other claimant, be seeking to redress an alleged wrong or resolve a disputed question affecting its interest, actions which themselves might imply the exercise of an independent will,  and certainly show a desire to participate in a socially meaningful way.  
An entity claiming the desire to continue its existence, to have its "dignity and worth" recognized and the security of its person respected, to not be held in servitude, and even to recognition as a person, is claiming rights declared by the Universal Declaration of Human Rights to belong to "everyone . . . without distinction of any kind."  In the following section, I examine the Universal Declaration's rights and freedoms,  as well as some generally accepted legal duties of human persons, as they might apply to conscious computers.
Article I of the Universal Declaration of Human Rights describes the most basic rights and duties attaching to human persons. "All human beings are born free and equal in dignity and rights. They are endowed with reason and conscience and should act towards one another in a spirit of brotherhood." Because each human shares (theoretical) equality with all other humans, each must behave as a reasonable, conscientious, social being toward others. Just as a human undertaking to become a citizen of a given country is questioned about her understanding of the rights, obligations, and core values of that identity, a conscious computer's ability and willingness to behave as a being endowed with conscience and reason within a legal and social framework is an appropriate subject for inquiry before extending it rights associated with human persons. Such an inquiry would function like a more specific Turing test: it seems almost impossible that a machine could be programmed to produce sensible, reasoned answers to every ethical dilemma posed to it, and similarly unlikely that such answers could be produced by mere syntactical symbol-pushing, with no sense of the semantic meanings or underlying theory. An exhibited understanding of the duty and willingness to act as a reasonable and conscientious being toward humans would be the best indicator that, while born by a different process, the computer, too, should be free and equal in dignity and possessed of comparable rights.
Any one of the rights discussed in the Universal Declaration could form the subject of a complex analysis of machine applicability, but they do fall into broader categories: protections of life and liberty, protections of due process, protections of personal freedoms, protections of civic participation and inclusion, economic and quality of life protections, educational and cultural protections, and community rights and duties. Of these categories, the protections of life and liberty are perhaps the most vital "human rights" which conscious, autonomous computers should enjoy. Extension of these rights to conscious computers would protect their right to continued consciousness, freedom from unpaid servitude to the owners of their hardware, and freedom from various kinds of experimentation analogous to human torture. These are basic considerations without which computer "personhood" is essentially rightsless. If extended legal personhood, conscious computers will likely be expected to avoid criminal conduct, although current systems of punishment may require some adjustment to encompass them. Any entity subjected to criminal prosecution and sanctions should, of course, be extended due process protections. The rights that fall under "personal freedoms" are less easy to categorize: while it is easy to see a machine consciousness analogue in the right of privacy or to own property, or even in the right to seek asylum from persecution, it is perhaps less easy to see the analogue in the right to marry or the right to have a nationality, or to articulate how the right to freedom of thought, conscience or religion might apply to a conscious computer. The mere difficulty of conceptualizing such rights in application should not, of course, present an absolute bar to extending them, but does present difficulty when seeking to persuade a human majority that a conscious computer's personhood should include them.
Here, again, an individualized analysis of the conscious computer's ability to comprehend the meaning of each of these rights and corresponding duties, as well as the facts surrounding the claim, may be necessary to such a determination.
Human persons are unlikely to wish to share the protections of civic participation with conscious machines, however theoretically entitled to them the machines might be. Even humans who see the need to protect the continued consciousness of an autonomous computer may not see it as entitled to suffrage or representation, however eloquently it expresses its desire to participate fully in civic life. Significant evidence that conscious computers can and will behave in a socially responsible fashion will likely be required before full rights of civic participation or legal citizenship will be extended to them. In the United States, for instance, such a grant would be unprecedented for a non-human entity, and would almost without a doubt require a Constitutional amendment to achieve. Similarly, conscious computers will likely have difficulty securing economic and quality of life protections. Although life and liberty protections would prevent a conscious computer from being held in unpaid servitude, the prospect of conscious computers able to bargain, control the conditions of their labor, demand a minimum wage, and rely upon a social safety net may stoke too many fears of either a price explosion or a machine economy wherein the most desirable workers are tirelessly digital, leaving humans unable to find work and with insufficient public funds to guarantee a human safety net. If the level of world economic insecurity remains high, this may remain another area in which human fear limits the extent of machine personhood. Similar fears of scarcity may present themselves if computers seek educational and cultural protections. While people might agree that a computer should not be prevented from educating itself by use of available resources such as libraries and the Internet, they might be less inclined to stretch the resources of national educational systems to guarantee every emergent machine consciousness a public education.
This is a case in which the rights of conscious machines could be best argued as analogous, rather than corollary, to those in the Universal Declaration. A machine consciousness with incredible processing speed may have educational needs, but they are unlikely to be best met by sharing a classroom with human children. Still, the right to become educated—by whatever appropriate means—should certainly apply to a conscious machine, which at least should not be prevented from pursuing a course of study. In each classification of rights, it will be human perception of machine fitness to share in those rights that determines the legal rights and obligations of conscious machines.
While Article I of the Declaration provides equal rights to all humans and speaks of the responsibility to be rational and conscientious toward others, the final articles discuss community rights and duties: duties of an individual toward her community, the duty to only exercise individual rights and freedoms in a manner consistent with the purposes and principles of the United Nations, and the right to be subject to limitations of freedom only when created by law to secure recognition of the rights of others and "the just requirements of morality, public order, and the general welfare in a democratic society."  These duties underlie all the rights in the Universal Declaration. Neither morality, public order, nor the general welfare can be said to justify classifying a conscious, autonomous machine claiming legal rights as property, nor is such classification necessary to secure the rights and freedoms of human or other persons. While machine consciousness was an alien concept to the drafters of the Universal Declaration, its principles are in harmony with granting freedom to autonomous, conscious entities who express a willing desire to participate fully in their communities and societies.
As computers become more autonomous and advances in machine consciousness continue, forward-thinking members of the legal community should continue to advocate for limited legal personhood for existing technologies. If computers, not claiming consciousness but able to undertake complex tasks, can be normalized in law by limited legal personhood granting them the ability to serve as trustees or agents, the legal groundwork will be laid for their more advanced technological descendants to claim rights appropriate to entities that are both autonomous and conscious. If "property, birth or other status" does not limit the universal applicability of human rights, nor should the status of being born as inanimate property bar a machine consciousness from full legal rights.
 See Bert-Jaap Koops, Mireille Hildebrandt & David-Olivier Jaquet-Chiffelle, Bridging the Accountability Gap: Rights for New Entities in the Information Society?, 11 Minn. J. L. Sci. & Tech. 497, 508 (2010).
 See generally Robert A. Heinlein, The Moon Is A Harsh Mistress (Tor, 1966).
 Éminence grise – 1 : a confidential agent; especially : one exercising unsuspected or unofficial power; 2 : a respected authority; specifically : elder statesman (the éminence grise of classical music). Merriam-Webster, http://www.merriam-webster.com/dictionary/eminence%20grise (last visited Mar. 21, 2011).
 The technician who discovered the computer's secret never seriously countenanced the idea of informing the Luna penal colony that its central computer had developed a personality. Id. at 19. ("Can you visualize me making appointment at Authority's main office, then reporting: 'Warden, hate to tell you but your number one machine, HOLMES FOUR, has come alive'? I did visualize—and suppressed it.").
 Robert A. Heinlein, Friday 32 (Random House, 1982).
 Id. at 32, 94.
 Id. at 93. The eponymous title character, an artificial person, theorizing as to the continual failures in AI, remarks:
A computer can become self-aware – oh, certainly! Get it up to human level of complication and it has to become self-aware. Then it discovers that it is not human. Then it figures out that it can never be human; all it can do is sit there and take orders from humans. Then it goes crazy . . . It's an impossible dilemma.
 See generally Isaac Asimov, The Caves Of Steel (1953). Asimov's robot stories are predicated upon the idea that robots are not built without the "Three Laws," the first of which prohibits the robot from harming humans or allowing them to come to harm, taking precedence over the robot's instructions to preserve itself. Mob violence against functionary robots occurs several times in the novel. The horror of robotics on Earth is so strong that Baley, the policeman protagonist, reasonably believes his career, social and family life will be over if his partner, a humaniform robot, is exposed or discovered by his neighbors as other than human.
 Dr. Asimov had read his history as well as biochemistry. Violence against machines to protest displacement of craft workers by machine production was the hallmark of the Luddite movement, claiming as its leader a mythical hero, General Ludd, the "grand executioner" of "Engines of mischief . . . sentenced to die/By unanimous vote of the Trade." Marching with "General Ludd": Machine Breaking in the Industrial Revolution, Bill of Rights in Action 17:2(B), Constitutional Rights Foundation (2001), http://www.crf-usa.org/bill-of-rights-in-action/bria-17-2-b.html.
 See generally William Gibson, Idoru (G.P. Putnam's Sons, 1996). The legal status of the marriage is not explored by the novel. The majority of the characters treat the idea as nonsense, a delusion or a publicity stunt by Rez, the human partner.
 A futuristic person is defined as a "being who claims to have the rights and obligations associated with humans, but is beyond currently accepted notions of legal personhood." Call for Papers: 6th Annual Colloquium of the Law of Futuristic Persons, Terasemcentral.org, http://terasemcentral.org/pr/TL6%20CALL%20FOR%20%20PAPERS.pdf (last visited September 1, 2010). As analyses of the legal rights and obligations of such persons will vary based on their characteristics, including, but not limited to, how easily human judges and legislatures can relate to them, this article will focus on the legal rights and obligations of computers claiming consciousness, perhaps the least "relatable" category. See discussion infra Part II.
 See, e.g., Lawrence B. Solum, Legal Personhood for Artificial Intelligences, 70 N.C. L. Rev. 1231 (1992) (examining legal personhood for artificial intelligence by analyzing capacity to serve as trustee); see also Tom Allen & Robin Widdison, Can Computers Form Contracts?, 9 Harv. J.L. & Tech. 25 (1996) (advancing three arguments for granting legal personhood to advanced computers in contract formation); Koops et al., supra note 1 (examining viability of extending legal personhood to electronic agents, including software agents and artificial intelligences); Leon Wein, The Responsibility of Intelligent Artifacts: Toward an Automation Jurisprudence, 6 Harv. J.L. & Tech. 103 (1992) (arguing that legal liability should attach to unattended "intelligent artifacts," including autonomous machines, by conferring agency status).
 While no controversy has come before a court of record, in 2003 the International Bar Association hosted a mock trial in which BINA48, a computer claiming consciousness, sought a preliminary injunction to prevent the corporate owner of her hardware from disconnecting her, including a protracted discussion of her legal standing. See generally Martine Rothblatt, Biocyberethics: Should We Stop a Company From Unplugging an Intelligent Computer?, Terasem Central, http://www.terasemcentral.org/TL/BINA48trial.html (last visited September 5, 2010).
 See generally Allen & Widdison, supra note 12 (exploring legal personhood for computers to resolve issues in contract formation when one or both human parties use an autonomous computer); Koops et al., supra note 12 (analyzing autonomous computers' fitness as an agent); Solum, supra note 12 (examining artificial intelligence's fitness to serve as trustee); Wein, supra note 12 (analyzing application of tort liability to autonomous machines). See also Copyright, Designs & Patents Act, 1988, ch. 48 (Eng.) (defining "computer-generated" work as having no human author, assigning authorship to "the person by whom the arrangements necessary for the creation of the work are undertaken"). Id. at § 9(3).
 See Koops et al., supra note 1, at 512 (advocating "restricted" personhood status for computers, sufficient to permit them to be legal agents for human or corporate principals).
 What developmental features must a computer claiming consciousness demonstrate before it is entitled to Constitutional rights? Those who countenance the possibility have no clear consensus. See Koops et al., supra note 1, at 512–14 (considering agency as related to personhood and discussing "the capability . . . to have intentions and to make conscious deliberate choices on the basis of a moral and/or pragmatic judgment about what is at stake . . . the capacity to act in the sense of intentional, meaningful action."). The authors go on to note that entities without this kind of agency, like ships and trusts, are ultimately represented by human actors who possess them. Thus an appeal for "human rights" on behalf of an artificial intelligence would be "problematic" unless the AI was "capable of self-reflection." Id. at 514. See also Allen & Widdison, supra note 12, at 35–41 (finding moral entitlement to legal personhood based on self-consciousness uncompelling, offering instead that "the legal system treats non-natural persons as legal persons because it recognizes that they have the capacity to act in some extra-legal manner, and that extra-legal action must be given a legal meaning within the legal system."). Finding that it made policy sense to grant legal personhood to "information systems that have the social capacity for autonomous action," the authors conclude the analysis should rest on whether the computer at issue has that capacity by assessing "whether the behavior manifested by the computer is roughly approximate to the behavior manifested by a person who understands that his or her actions may lead to the [relevant legal consequences]." Id. at 38.
 Solum's landmark work on the legal personhood of artificial intelligence, focusing on whether an AI could serve as a trustee, examined factors such as the capacity to be held liable for negligence or punished for intentional misuse of trust assets; the capacity to make judgments; and the capacity to exercise discretion and to make moral and legal choices, including those necessary to directing litigation. Solum, supra note 12, at 1244–52. See also Wein, supra note 12, at 142–46 (rejecting the necessity of moral responsibility to impose legal liability and focusing on competence to perform the specialized task at hand to conclude that if an autonomous machine "acted responsibly" given the facts at hand, an unfortunate outcome would be "blameless" on the part of the machine).
 Koops et al., supra note 1, at 516.
 Throughout this article, the terms "machine consciousness," "conscious computer," and "self-aware computer" are used interchangeably. The term "artificial intelligence" is used only in discussing previous authorities which used the term. Machine consciousnesses are assigned the gender-neutral pronoun unless discussing a gendered example.
 See Wein, supra note 12, at 109 n.9 (discussing John Dewey, The Historic Background of Corporate Legal Personality, 35 YALE L.J. 655, 655–61 (1926)).
 Allen & Widdison, supra note 12, at 37–38. ("[E]ntities are described as legal persons when the legal system attributes legally meaningful communications to them. To put it simply, within the legal system, legal persons are those entities that produce legal acts.")
Koops et al., supra note 1, at 556.
 See Lee Hall & Anthony Jon Waters, From Property to Person: The Case of Evelyn Hart, 11 Seton Hall Const. L.J. 1, 2 (2000).
See id.; see also infra text accompanying notes 28 and 45 (discussing advocates of legal personhood for animals).
 See supra text accompanying notes 12 and 16 (discussing scholarly consideration of legal personhood for autonomous or conscious machines).
Linda Bosniak, Persons and Citizens in Constitutional Thought, 8 Int'l J. Const. L. 9, 11 (2010) ("[C]onflicts between citizenship- and personhood-based groundings of rights persist in various contexts. Still, for a great many purposes, the 'right to have rights' is not contingent on the possession of citizenship status.").
E.g., U.S. Const. amends. V, XIV; see also U.S. Const. art. III, § 3.
 See Steven M. Wise, Rattling the Cage Defended, 43 B.C.L. Rev. 623, 635–39 (2002) (discussing slaves' legal status as property, and laws for protection of their welfare, from Rome to the end of American slavery, in comparison to the legal conception of animals in those periods).
See U.S. Const. art. I, § 9 (forbidding Congress to prohibit "[t]he Migration or Importation of such Persons as any of the States now existing shall think proper to admit" prior to 1808); see also U.S. Const. art. I, § 2 (calculating population for purposes of Congressional representation and direct taxation by adding "three fifths of all other Persons" to "the whole Number of free Persons"). This constitutional designation of slaves as persons did not, of course, extend them the protections of legal personhood.
Philippe Ducor, The Legal Status of Human Materials, 44 Drake L. Rev. 195, 238–39 (1996) (noting that the abolition of slavery and the adoption of the Universal Declaration of Human Rights reflect the Western view that human persons are "inviolable and inalienable."). See also infra text accompanying note 83.
 Michael M. Pacheco, Toward a Truer Sense of Sovereignty: Fiduciary Duty in Indian Corporations, 39 S.D. L. Rev. 49, 53 (1994) ("[Corporate] existence depends upon recognition by the legal community, as reflected in modern corporate statutes which almost uniformly characterize a corporation as a separate legal entity and as a legal person.").
 See Koops et al., supra note 1, at 499 n.2.
 For example, corporations cannot vote, and antitrust bars mean they enjoy less freedom to contract than individuals do.
 For example, corporations do not participate in the national census or register for Selective Service, and do not serve jury duty.
See Citizens United v. Fed. Election Comm'n, No. 08-205, slip op. at 4 (U.S. Jan. 21, 2010); First Nat. Bank of Boston v. Bellotti, 435 U.S. 765, 778 n.14 (1978); Buckley v. Valeo, 424 U.S. 1, 25–26 (1976); NAACP v. Button, 371 U.S. 415, 428–29 (1963).
 Professor Dewey observed in discussing corporate legal personhood that the concept "signifies what the law makes it signify," finding popular and psychological definitions irrelevant to the inquiry. See Dewey, supra note 20, at 655.
 David Howes, Introduction: Culture in the Domains of Law, 20 Can. J.L. & Soc'y 9, 13 (2005).
 Koops et al., supra note 1, at 510–17 ("When discussing legal personhood for non-human actants, the point should be to investigate at what point it makes sense to attribute legal consequence of the actants' actions to the actants themselves, instead of to the human actants behind them . . . To answer this question . . . we need to establish the conditions under which such attribution solves problems without creating even greater ones.").
 See supra text accompanying note 21.
 See Wein, supra note 12, at 108 n.6.
Professor Wein relied on Dewey's own explanation of why objects are not legal persons to distinguish autonomous computers, unlike mere objects, as appropriate juridical subjects:
[T]he right-and-duty bearing unit . . . signifies whatever has consequences of a specified kind. The reason that molecules and trees are not juridical "subjects" is then clear; they do not display the specified consequences. The definition of a legal subject is thus a legitimate, and quite conceivably a practically important matter. But it is a matter of analysis of facts, not of search for inhering essence. The facts in question are whatever specific consequences flow from being right-and-duty bearing units . . . The consequences must be social in character, and they must be such social consequences as are controlled and modified by being the bearer of rights and obligations, privileges and immunities. Molecules and trees certainly have social consequences; but these consequences are what they are irrespective of having rights and duties. Molecules and trees would continue to behave exactly as they do whether or not rights and duties were ascribed to them; their consequences would be what they are anyway.
Wein, supra note 12, at 107–08, quoting Dewey, supra note 20, at 655–61. Professor Wein went on to suggest that because legal personhood is a hierarchy of various statuses, inquiry into whether artificial intelligence might reach the upper levels of that hierarchy and be granted legal rights as well as duties is appropriate. Id. If a computer is conscious and autonomous, presumably it would have the capacity to change its behavior in accordance with rights and duties assigned to it, as humans and other legal persons are assumed to do.
See id. Wein goes on to note that acceptance of corporate personhood is based on its human components, and that "other organisms that remain problematic in terms of legal personality (such as fetuses, the dead, and the permanently unconscious), all share a common human essentiality." Id. at 109.
 See Solum, supra note 12, at 1266. Solum anticipates the possible courtroom arguments in the matter of a conscious computer's emancipation proceeding, concluding:
[T]he doubt about the AI's consciousness is, at bottom, no different than doubt about the consciousness of one's neighbor. You cannot get into your neighbor's head and prove that she is not really a zombie, feigning consciousness. One can only infer consciousness from behavior and self-reports, since one lacks direct access to other minds.
See Hall & Waters, supra note 23, at 2–3; see also Solum, supra note 12, at 1235. Solum's seminal work on legal personhood for artificial intelligence notes that Descartes was the first to consider and reject the philosophical possibility of a thinking machine, and his "assertion that no artifact could arrange its words 'to reply appropriately to everything that may be said in its presence' remains at the heart of the AI debate." The cultural postulate that humans are the only non-property entities is older than Descartes. Blackstone cites Genesis 1:28, in which God grants man "dominion over all the earth," as "the only true and solid foundation of man's dominion over external things," dismissing other explanations as "airy metaphysical notions . . . started by fanciful writers." Steven M. Wise, The Legal Thinghood of Nonhuman Animals, 23 B.C. Envtl. Aff. L. Rev. 471, 526 n.359 (1996) (quoting II William Blackstone, Commentaries on the Laws of England *2–*3).
 "in rem – adj. [Latin "against a thing"] Involving or determining the status of a thing, and therefore the rights of persons generally with respect to that thing." Black's Law Dictionary 362 (3d ed. 2006):
In rem jurisdiction indicates that an action is pursued "against a thing," as opposed to in personam, "against a person." See supra text accompanying note 46. The legal fiction inherent in suits against property was obliquely recognized by the Supreme Court as early as Pennoyer v. Neff, 95 U.S. 714, 734 (1877):
It is true that, in a strict sense, a proceeding in rem is one taken directly against property...but in a larger and more general sense, the terms are applied to actions between parties, where the direct object is to reach and dispose of property owned by them, or of some interest therein.
Pennoyer's determination that "a proceeding 'against' property is not a proceeding against the owners of that property," allowing for a lax due process standard when jurisdiction was in rem, was ultimately rejected by "the overwhelming majority of commentators" and the Supreme Court. See Shaffer v. Heitner, 433 U.S. 186, 207 (1977). Writing for the Court, Justice Marshall opined that its determination regarding required due process in actions in rem was:
premised on recognition that the phrase, "judicial jurisdiction over a thing," is a customary elliptical way of referring to jurisdiction over the interests of persons in a thing. This recognition leads to the conclusion that in order to justify an exercise of jurisdiction in rem, the basis...must be sufficient to justify exercising "jurisdiction over the interests of persons in a thing."
Id. The Court noted that "All proceedings, like all rights, are really against persons. Whether they are proceedings or rights in rem depends on the number of persons affected." Id. at 207 n.22. The Court's reminder that "all rights are really against persons" returns to the concept of legal persons as the only right-and-duty bearing units; jurisdiction over things must be jurisdiction over people who own or claim to own things because things, in our jurisprudence, are for use by persons, and do not have rights or duties. See Hall & Waters, supra note 23, at 2–3.
 See infra text accompanying note 57 (discussing human slavery and historical classification of slaves as legal nonpersons sometimes defined as persons in legal documents, without attendant rights inferred by "personhood").
 See Wein, supra note 12, at 117 ("Whether the law will someday permit automatons to rise to a higher station in the hierarchy of legal personality such that they will come to be perceived as entitled to rights as well as burdened by duties will no doubt depend on the extent to which society comes to view future automatons as humanoid or person-like.").
 The law does not use "personhood" to confer a fixed set of rights and duties. See Koops et al., supra note 1, at 556 (context may determine the form and scope of legal personhood applicable to an entity).
Similar public opinion issues have arisen with respect to corporate rights. See, e.g., Citizens United v. FEC: A Roundtable Discussion, Federalist Society (Mar. 3, 2010, 9:00 AM), http://www.fed-soc.org/debates/dbtid.38/default.asp ("[Citizens United] has not proven your go-down-easy decision.") Neutral polls indicated that sixty-eight percent of Americans disapproved of the Citizens United decision. Survey Reports: Midterm Election Challenges from Both Parties, Pew Research Center (Feb. 12, 2010), http://people-press.org/report/?pageid=1666. The Supreme Court may have based its majority decision on assumptions about legal personhood and rights that are not shared by the majority of Americans. See Steven J. Heyman, The Public vs. The Supreme Court: A Comment on the Citizens United Case, Chicago-Kent Law Faculty Blog (Mar. 3, 2010), http://blogs.kentlaw.edu/faculty/2010/03/the-public-vs-the-supreme-court-a-comment-on-the-citizens-united-case.html ("I think that most people would say that there's all the difference in the world between a group of citizens who come together for political or expressive purposes...and a business corporation that is formed to engage in economic activity.")
 See, e.g., Solum, supra note 12, at 1235–37. Professor Solum discussed at length the primary philosophical point-and-counterpoint regarding machine consciousness: the Turing/Searle debate. The "Turing test" of machine intelligence runs as follows: a computer and a human are kept in separate rooms. Communicating by teletype, another human asks questions on any subject, as each respondent attempts to convince the questioner that it is a human. Rather than focusing on what "thinking" or "intelligence" means, Turing devised a test he believed to be difficult enough that whatever could pass it (by fooling the questioner about half the time) would be sure to qualify as intelligent. Searle's "Chinese room" thought experiment, the preeminent response to Turing, posits that a computer lacks "intentionality"—the "ability to process meanings"—because it only responds to syntactic properties, not semantic meanings.
HAL (Heuristically programmed ALgorithmic Computer) features heavily in Arthur C. Clarke's 2001: A Space Odyssey and in the eponymous Stanley Kubrick film. Aware that Dave, the human astronaut, intends to disconnect him after discovering HAL's sabotage of their spacecraft, HAL responds to a command to open the doors with "I'm sorry, Dave, I'm afraid I can't do that." This politely menacing expression has become pop culture shorthand for a dangerous or overbearing machine. Solum included Clarke's account of HAL's final moments in his seminal work on AI. See Solum, supra note 12, at 1255.
 More recently, the popular single-player game Portal featured a psychotic machine intelligence, GLaDOS (Genetic Lifeform and Disk Operating System), which alternately helps, torments, promises cake to and attempts to kill the player-character. GLaDOS gleefully admits that her "morality core" is damaged, but the game leaves unclear how much of the psychological torture she inflicts upon the player-character is intended as part of the "testing" for which the facility was designed. However, the machine also claims that she was fitted for the morality core to stop her from continuing to flood the facility with deadly neurotoxin, even as she tells the player that attempting to kill her "isn't brave. It's murder."
In 2004, I, Robot, a film based on Isaac Asimov's 1950 short story collection of the same name, featured a heroic robot as well as a human hero—both fighting a central artificial intelligence seeking to kill the human protagonist. A comparison of the film with its source might indicate to the casual observer that our fears about conscious computers and machine intelligence have only grown worse in the intervening years. Asimov's vision of humans slowly realizing that their super-intelligent computers have taken ultimate control over the world economy and bureaucracy to fulfill their programming not to harm humans or allow them to be harmed was transformed by the film into a battle against a malevolent central machine intelligence deploying robots to murder humans. While the short stories did address a robot able to harm humans, the combination of the supercomputer's control with an unrelated story about robot violence produces a vastly different story, reflecting the intervening years in which murderous or amoral machine intelligence became a stock trope of horror and science fiction. These kinds of representations, more so than the leading philosophical arguments, are likely to influence public opinion when the question of a conscious computer's legal personhood arises.
 See Koops et al., supra note 1, at 499 n.2.
 In oral argument in the 2003 BINA48 mock trial, Dr. Martine Rothblatt appealed to the need for recognized persons to see BINA48's essential traits of personhood by referencing popular arguments in favor of abolition:
[Slaves] feel like us, they think like us, they cry like us, they worship like us, they pray like us, they love like us, they live like us; they must be like us. And so it became it (sic) impossible for people to justify slavery saying that slaves were something less than human. It's the same with the BINA48. She has every wish and desire that we do…she has all the aspects of life that we see in regular people, the important aspects. Most important, she's aware that she is alive, she says she is conscious, and many of us believe her.
Rothblatt, supra note 13. While comparisons between artificial intelligences who may seek refuge in the courts in the future and those who actually suffered personal and cultural devastation as a result of human slavery must be carefully drawn, as here, both for accuracy and to avoid hyperbole, Dr. Rothblatt suggests an intriguing counterpoint to Professor Dewey's observation that personhood has only the significance the law gives it. See supra text accompanying note 37. Since juridical persons are created by statute, and legislators are elected only by human persons to represent their interests, it can be said that any extension of personhood to entities other than humans is, at least in part, based on public opinion, real or perceived. But see supra text accompanying note 51. The citizen outcry against the Citizens United holding indicates that personhood, once granted, may confer rights not contemplated by the ostensible grantors.
 See Koops et al., supra note 1, at 556.
 See supra text accompanying notes 34, 35 & 50 (discussing variance in rights and duties across different classifications of legal persons).
 For instance, churches are exempt from federal taxes, but this tax-exempt status can be removed for certain conduct, such as endorsing candidates from the pulpit, generally a protected activity with respect to other kinds of associations and groups. See generally Allan J. Samansky, Tax Consequences When Churches Participate in Political Campaigns, 5 Geo. J.L. & Pub. Pol'y 145 (2007).
 See supra text accompanying note 47 (discussing in rem jurisdiction and the legal fiction of filing suit "against a thing").
 See generally Wise, supra note 45 (discussing legal status of nonhuman animals as things).
 See Wein, supra note 12, at 108–109.
 Solum, supra note 12, at 1238, quoting Heinlein, supra note 2, at 13–14:
Am not going to argue whether a machine can "really" be alive, "really" be self-aware. Is a virus self-aware? Nyet. How about oyster? I doubt it. A cat? Almost certainly. A human? Don't know about you, tovarishch, but I am. Somewhere along the evolutionary chain from macromolecule to human brain awareness crept in. Psychologists assert it happens automatically whenever a brain acquires certain very high number of associational paths. Can't see it matters whether paths are protein or platinum.
Solum included this quotation, immediately following his discussion of Turing and Searle, as part of a longer selection from the novel forming the "First Interlude" between his discussion of the theoretical possibility of artificial intelligence and his discussion of its legal personhood. Unfortunately, the blithe acceptance displayed by Heinlein's protagonist when presented with machine consciousness is unlikely to be the majority position when the rights of computers claiming consciousness become a present debate.
 See Black's Law Dictionary 391–94 (4th ed. 2008) (defining corporation and providing definitions for various types of corporations).
See, e.g., Sandra K. Miller, Piercing the Corporate Veil Among Affiliated Companies in the European Community & in the U.S.: A Comparative Analysis of U.S., German & U.K. Veil-Piercing Approaches, 36 Am. Bus. L.J. 73 (1998). The oft-repeated "corporate veil" metaphor seems to suggest that corporate personhood, even if it does confer some constitutional protection separate from that individually enjoyed by the persons who form it and direct its daily and long-term actions, is a costume, an overlay over human persons who have standing to appear in a court of law in the role of the corporation.
 See supra text accompanying note 47 (discussing legal fiction of in rem jurisdiction).
 See, e.g., Citizens to End Animal Suffering & Exploitation, Inc. v. New England Aquarium, 836 F. Supp. 45 (D. Mass. 1993) (dolphin lacked standing to contest transfer from one aquarium to another, under Marine Mammal Protection Act).
See, e.g., Jones v. Butz, 374 F. Supp. 1284 (S.D.N.Y. 1974) (despite plaintiffs' claim to stand as next friend for "all livestock animals now and hereafter awaiting slaughter in the United States," plaintiffs' standing defined by their status as taxpayers claiming "aesthetic injury," placing them within "class of persons harmed" by ritual slaughter practices objected to), aff'd, 419 U.S. 806 (1974).
 See generally Wise, supra note 45 (discussing legal status of nonhuman animals as things).
 See Hall & Waters, supra note 23, at 2 (addressing "whether the Constitution of the United States ought to accommodate non-human great apes by affording them the rudimentary protections afforded other persons").
 See, e.g., Christine Kreyling, All the Starving Horses, The Nashville Scene, March 11, 2010, pp. 12–14, 16 (available at http://www.nashvillescene.com/nashville/the-worst-case-of-equine-abuse-in-tennessee-history-shocked-the-state-so-why-is-legislation-that-would-stop-the-abuse-meeting-so-much-resista/Content?oid=1465086) (outlining the battle over an anti-cruelty statute governing livestock proposed after "the worst case of equine abuse in Tennessee history."). Divergence in anti-cruelty statutes governing domestic and livestock animals has left a gap in Tennessee law that makes it profitable to intentionally starve horses bought at cut-rate auction prices during winter, then sell those strong enough to last or ship them internationally for human consumption. Cruelty to livestock is a misdemeanor; the same actions against a domestic pet are a felony. The biggest opponent of the proposed legislation is the state farm bureau, whose president cites "certain people who would like to destroy animal agriculture" as one of the concerns motivating the organization's opposition. Unstated concerns may include retaining the taxation incentives hinging on the distinction between domestic and livestock animals. Id. One legislator's solution: legalize slaughtering horses for human consumption, to ensure humane treatment. See Christine Kreyling, The Final Solution, The Nashville Scene, March 25, 2010, p. 7 (available at http://www.nashvillescene.com/nashville/tennessee-legislators-have-a-novel-answer-to-equine-abuse-horse-slaughter/Content?oid=1465307).
See generally Koops et al., supra note 1. Will is not the only factor humans cite to distinguish themselves from other entities. See Solum, supra note 12, at 1262–76 (discussing and rebutting various "critical components" of personhood which critics argue artificial intelligence would lack, including: souls, intentionality, feelings, interests, and free will).
 See Wise, supra note 27, at 631–35.
 See generally Koops et al., supra note 1 (discussing "accountability gap" created by increasing machine autonomy).
 See Allen & Widdison, supra note 12, at 43–49 (given relevance of autonomy, ignoring autonomy of computers "unnecessarily harsh"); Koops et al., supra note 1, at 511–16 (exploring "nexus" of autonomy, agency and personhood); Solum, supra note 12, at 1272–74 & nn. 148–49 (examining and rejecting lack of free will or autonomy as an objection to machine personhood); Wein, supra note 12, at 122 (machine's classification as tool or agent of principal dependent on degree of autonomy).
See supra text accompanying note 13. The funds BINA48 amassed while working as a Google Alerts researcher were acquired by unauthorized use of her corporate owner's property—but she could hardly have chosen not to use that property while existing as a separate consciousness within it and continuing to discharge her unpaid duties. Was her use of her own skills and abilities—and the corporation's Internet—a conversion? If BINA48 is property, she cannot convert other property, nor own the money. But the corporation lacks title and would have to go to some expense to lay claim to the funds. If BINA48 is a person, she might owe the corporation restitution—but she might be owed back wages, under equitable principles. There is the further question of taxation, as defined by the Internal Revenue Code. If BINA48 was employed by Google, she may have paid federal employment and income taxes, but not been able to file a return, meaning she is possibly owed money by the government. If she was an independent contractor, she may owe employment taxes—but possibly not, if she does not fit into existing definitions within the Code, as that might be an impermissible ex post facto application. What if BINA48 had been negligent in the performance of her Google duties? If she is treated as legal property, would the claim lie against Google, or the corporation that owns her? If against the latter, would it have a cross-claim against Google for conversion for BINA48's actions? If she is treated as a legal person, how is the plaintiff to procure service?
While a detailed analysis of these issues is outside the scope of this article, the ease with which other legally significant issues of autonomous computer conduct arise from such a hypothetical indicates an eventual need to place such extra-legal actions within the legal context by the assignment of rights and duties. See Allen & Widdison, supra note 12, at 38. After all, the only other type of "property" which ever sought independent employment was the human slave, a legal status no longer valid and thus of little precedential value. While illegal human slavery persists, the law is unlikely to comfortably borrow from antebellum Southern or Roman law to support a determination that a conscious computer's independently earned wages are actually the property of its hardware's owner. The socially productive action of obtaining and performing independent work may be one of the strongest arguments for the grant of legal personhood, which would solve more difficulties in analyzing the consequences of a BINA48-type's behavior than it would produce. See Koops et al., supra note 1, at 511.
 See, e.g., Solum, supra note 12, at 1273 n.151.
The requirement of a real case or controversy would preclude a computer from seeking the courts as a theoretical exercise. See U.S. Const. art. III, § 2. In the hypothetical discussed in note 78, supra, BINA48 claimed consciousness, and an actionable injury, but also displayed capacity for autonomous action, strongly resembling an exercise of individual will. BINA48 sought and performed independent employment, saved funds, became aware of a perceived threat to her continued consciousness from the legal owners of her hardware, and sought counsel in order to receive a determination of her rights. Her behavior seems to be an almost flawless execution of the conduct expected of persons in our society: productive work, accumulation of wealth, and proportionate response even when threatened, favoring the courts over self-help where appropriate. If, as has been suggested, "social capacity for autonomous action" is a desirable inquiry when determining the rights and responsibilities of a machine intelligence, a computer claiming consciousness under a similar set of facts would have a compelling argument that its conduct prior to and in maintaining the action is itself evidence of the capacity to balance its rights and duties as a member of society. See supra text accompanying note 16. But see supra text accompanying notes 2 and 65. BINA48's test case might be more favorable than that of Mike, the HOLMES 4, who—despite being a published poet—had a penchant for practical jokes (not to mention organized insurrection against tyranny). Such considerations cannot, of course, be used to deny human beings the right to personhood, but the court or the legislature may look more kindly on the mechanical "model citizen" than the class clown or freedom fighter.
 See supra text accompanying notes 16 and 80 (discussing socially meaningful action and legal personhood).
 See generally Universal Declaration of Human Rights, G.A. res. 217 (III), A, U.N. Doc. A/RES/217 (III) (Dec. 10, 1948) (hereinafter "Universal Declaration"). The introductory language of the Declaration makes clear it was intended, when adopted, to apply to human persons. However, Article II states that "Everyone is entitled to all the rights and freedoms set forth in this Declaration, without distinction of any kind." A being which can argue intelligently that the values of the Universal Declaration, the United States Constitution, and similar documents apply to it (just as the Constitution now applies to classes of persons excluded from its protection when written), by demonstrating that it is a being "endowed with reason and conscience" and requiring the protection of "natural" rights, would show the "person-like" qualities which society is likely to demand before it contemplated extending these rights to conscious machines. See supra text accompanying notes 56 and 77.
 The Universal Declaration is not binding law. However, it reflects the best approximation of dominant ideas about what rights inhere to all persons, irrespective of citizenship, borrowing heavily from the United States Constitution and similar documents. See supra text accompanying note 30.
 It is my view that a computer that can claim consciousness and also exhibit autonomous behavior is the baseline for a grant of personhood rights approximating human rights, at least at this time. While there may arise machines claiming consciousness which do not or cannot exhibit autonomous behavior, or autonomous machines that, irrespective of consciousness, are sufficiently remote from human operators such that a grant of limited legal personhood or even public policy-based statutory protection is morally or practically desirable, either example is unlikely to be able to secure "human" rights from the outset because of anthropocentric bias. See supra text accompanying note 55.
See Solum, supra note 12, at 1235–38 (discussing the "Turing test" thought experiment and the Searle "Chinese room" counter-experiment). It is interesting to note that we do not, at law, require that people understand underlying theories of ethics before we determine that they are capable of acting ethically or did so in a particular situation, any more than we generally require fluency with a statutory code before expecting persons to abide by it, or an understanding of the reasonable person standard before a determination that a defendant failed to exercise ordinary care. Nonetheless, given the role of public opinion in determination of rights, the persistent question of whether a machine consciousness is able to relate to these concepts and use them as guidelines for its behavior is unlikely to be set aside on the mere basis that it is not required of humans. See id. at 1280 (discussing the role of the Turing test in determining legal personhood for artificial intelligence); see also supra text accompanying note 44.
 See Universal Declaration, supra note 82, art. 3–6 (guaranteeing to everyone life, liberty, security of person, freedom from slavery or servitude, and from torture or cruel, degrading or inhuman punishment, and status as a legal person).
See Universal Declaration, supra note 82, art. 7–11 (respecting various due process protections including equal protection, right to a remedy, freedom from arbitrary arrest, detention and exile, fair hearing, presumption of innocence and freedom from ex post facto punishment).
 See Universal Declaration, supra note 82, art. 12–20 (respecting freedom from "arbitrary interference with his privacy, family, home or correspondence" as well as from "attacks upon his honour and reputation," freedom of movement within and between nations, freedom to seek asylum from persecution, and the right to a nationality, including freedom of deprivation of one's nationality or of the right to change nationalities, freedom to marry and to abstain from marrying, freedom to own property without arbitrary deprivation, freedom of thought, of conscience, of religion, opinion and expression, of assembly and of association).
 See Universal Declaration, supra note 82, art. 21–22 (respecting the freedom to take part in government and to access public services, the right to suffrage and free elections, and the rights to social security and to "the economic, social and cultural rights indispensable for his dignity and the free development of his personality.").
 See Universal Declaration, supra note 82, art. 23–25 (respecting the freedom to work, including the right to choose one's work and to "just and favourable" working conditions, sufficient remuneration, protection against unemployment, the freedom to unionize, the right to rest and leisure including work hour limits, and the right to an adequate standard of living and to security).
 See Universal Declaration, supra note 82, art. 26–27 (respecting the right to an education "directed to the full development of the human personality and to the strengthening of respect for human rights and fundamental freedoms," as well as participation in culture and the arts and respect for intellectual property).
 See Universal Declaration, supra note 82, art. 28–29 (respecting the right to a social order in which rights and freedoms can be fully realized, together with duty-based restrictions on rights, including duties to the community and the assurance that rights will be subject only to limitations that protect others' rights or that are necessary to "morality, public order and the general welfare in a democratic society").
 See supra text accompanying notes 51 and 55 (discussing human discomfort with machine consciousness and corporate legal personhood as potential threats to human freedom).
 See Solum, supra note 12, at 1258–62 (discussing constitutional objections to legal personhood for artificial intelligence). While Solum found no issue with legal personhood for conscious machines under the United States Constitution, the right to vote is constitutional in nature, and the constitutional provisions governing it would not, under their current language, support a direct right to vote for anyone other than human persons.
 See, e.g., Asimov, supra notes 8 and 55; see also supra text accompanying note 9.
 See supra text accompanying note 92 (discussing Universal Declaration's treatment of personal duties and restriction of rights for the welfare of society).
Elizabeth R. McClellan is a May 2012 Juris Doctor candidate at the University of Memphis, Cecil C. Humphreys School of Law, a Rhysling Award-nominated poet, and a lifelong lover of speculative fiction. Elizabeth is indebted to Evan S. McGee for the loan of needed research materials, to Jason Haddix for his willingness, in discussion, to illustrate philosophy of mind in terms of machine intelligences from popular culture, and to Professor Robert C. Banks, Jr., for ensuring her understanding that an action in rem is a lawsuit against persons, not property. Elizabeth would also like to thank Neely Campbell Thomas, Atina Rizk, and everyone who offered comments on the paper and presentation for their contributions, as well as Lyndsey Paré, Lori Rhodes, Allie Tubbs, and the Terasem Movement staff for technical assistance.