In July 2022, Google fired a software developer after he leaked information about a chatbot under development at the company. The developer had raised concerns that the chatbot displayed characteristics suggesting sentience, and he wanted to create a framework for deciding what to do when faced with potentially sentient Artificial Intelligence (AI). For decades, AI researchers have debated whether AI can reach a point of thinking on its own and whether such sentient AI should be treated as a person. Google’s employment spat revived this debate in a very public way.
Meanwhile, in a tangential area of emerging technology, inroads are being made toward recognizing autonomous entities that enjoy the legal fiction of personhood. In July 2021, Wyoming approved a law that “clarified” the ability of “algorithmically managed” businesses to form legally recognizable LLCs. In August 2022, Tennessee followed suit, confirming that “decentralized organizations” operated solely by “smart contracts” could form a legally recognizable business entity.
Often, these two areas of development—self-aware AI and decentralized autonomous organizations—are considered wholly separate topics, with little reason to connect the two. In technical terms, that intuition is probably right: most decentralized autonomous organizations operate through very passive smart contracts (quite the opposite of sentient AI). Nevertheless, each of these areas of emerging technology considers—and in some cases legally enables—the possibility of providing legal personhood to software that enjoys very little human oversight or control. In that respect, the two areas of research should talk to each other.
Considering AI systems within the specific context of the corporation offers an opportunity to explore various approaches to AI personhood within an existing legal framework—namely, corporate personhood.
In particular, using autonomous corporations as a case study in personhood reveals that building a comprehensive legal approach to artificial rights—rights enjoyed by artificial “people,” whether entities, machines, or both—requires consideration of the varied contexts (both social and technical) in which artificial people exist. Both corporations and AI systems are artifacts in the sense that both are technologies to which the law can, and sometimes does, attach certain legal fictions. Artifacts—technologies—do not exist in a vacuum; rather, they exist and act within a specific social context. Indeed, artifacts are usually designed, built, and deployed in specific contexts with specific goals in mind.
Essentially, viewing autonomous corporations as systems composed of two artifacts—an AI system and a corporation—reveals that applying corporate personhood theory to the traditional debates about the scope and nature of AI personhood may shed new light on the policy choices and values-rights trade-offs inherent in bestowing personhood on any artificial artifact. To explore this nexus between AI systems and corporations, I developed a framework for evaluating when recognizing a measure of legal personhood might be appropriate, and what scope such personhood should encompass. First, a survey of the current uses of AI systems in the corporate context suggests a range of approaches. Some corporations—which I refer to as Traditional Plus—use AI systems as tools to operate their business more efficiently. Other corporations—which I call Distributed Business Entities—use AI systems to reduce certain aspects of organizational bloat, coordinate operations, and incentivize workers. Lastly, a rare few organizations—which, in keeping with terminology used by other scholars, I call Autonomous Entities—are almost fully autonomous in their operations, management, and ownership structures.
Given this diversity, it makes sense that legal personhood might be granted on a spectrum. In the case of Traditional Plus corporations, for example, the simple use of AI systems as tools to run a business does not (and should not) alter the corporation’s status under existing approaches to corporate personhood. Currently, corporations enjoy only the aspects of personhood given to them by law, but not the full spectrum of rights enjoyed by natural persons. This “restricted personhood” might arguably be less than what Distributed Business Entities should receive. Distributed Business Entities enable the participation of individual natural persons in the management and operation of a business in ways that the hierarchy in Traditional Plus corporations does not. As such, we might want the natural persons directing the activities of Distributed Business Entities to enjoy as much of their own personhood as possible—a type of “full personhood.” Lastly, Autonomous Entities, which may not be controlled or even owned by natural persons at all, represent a purely artificial artifact that might only reasonably be granted a form of “limited personhood” in order to protect the natural people with whom the Autonomous Entity might interact. In other words, policy choices and values might require that Autonomous Entities can be sued and can contract for insurance, but otherwise receive the benefit of no other rights under the law—limited personhood would be a way to protect humans from the Autonomous Entity, but nothing more.
Ultimately, this exploration of autonomous corporate personhood makes clear that when designing legal personhood there is no one theory of personhood to rule them all. Rather, law and policy may need to embrace a spectrum of legal personhood that varies based on the social context in which the thing we want to make an artificial person exists and acts. Finding an artificial artifact’s place on the spectrum requires digging deep into the technology and the context in which it is used. Meanwhile, and almost counterintuitively, crafting personhood constructs along the spectrum requires focusing on function rather than the details of a specific technical implementation. Only by focusing on both the details of the technology and the function of the technology and of personhood will a cohesive theory of personhood for AI systems—whether in a corporation or not—emerge.
By Professor Carla L. Reyes, Assistant Professor of Law at SMU Dedman School of Law and Chair of the Texas Work Group on Blockchain Matters.
The ECGI does not, consistent with its constitutional purpose, have a view or opinion. If you wish to respond to this article, you can submit a blog article or 'letter to the editor' by clicking here.