Introduction

Artificial intelligence has been compared to fire, oil, and electricity for its potential to take humankind to the next level of development. In legal studies, artificial intelligence has been compared to nature, animals, children, idols, and money. (See, e.g., Gordon 2021; Gunkel & Wales 2021; Kurki 2019; Solaiman 2017; Beck 2016; Solum 1992.) Nevertheless, the analogy between AI and companies is perhaps the most widely used. The analogy between AI and corporations also works at the level of fire, oil, and electricity: corporate personhood is a legal invention that added significant value to society during the Roman period, the Middle Ages, and the colonial era. Corporate persons continue to create added value in today’s market economy. (See, e.g., Micklethwait and Wooldridge 2003; Berle 1952; Dodd 1948; Savigny 1884, pp. 86−88.)

While the previous discussion on AI legal personhood has been extensive, this article contributes by utilizing the hybrid model of corporate legal personhood. It applies three distinct models of corporate legal personhood simultaneously: the real entity, aggregate entity, and artificial entity models. (Raskulla 2022, p. 324.) The simultaneous application of several models has also been envisioned by Chatman (2018), with the significant difference that Chatman applies only the real entity and artificial entity models. The article returns to Chatman’s model and its application in Chapter 6.

Many studies argue that artificial intelligence, particularly AI with moral autonomy, challenges pre-existing models, rendering them useless or risky, and suggest that new models of legal personhood are required. (See, e.g., Novelli et al. 2022, p. 202; Laukyte 2020, p. 445; Chen and Burgess 2019, p. 76. See also Mocanu 2021; Kurki 2019.) While the suggestion is warranted, this article studies whether a less radical approach could be adopted. It examines whether the hybrid model of legal personhood could be applied to AI and whether it would offer a new perspective on the challenges we face.

Chapter 2 of the article introduces the key concepts: artificial intelligence and legal personhood. Chapter 3 introduces the theoretical framework by outlining three competing models of corporate legal personhood and discussing the hybrid model. Chapter 4 then aims to resolve whether AI legal personhood is best described by the real entity model, the aggregate entity model, the artificial entity model, or the hybrid model. The objective is not to answer the question of whether AI should be provided with legal personhood but whether it could be; these are two very different questions. Nevertheless, Chapter 5 reviews the discussion on the possible risks and advantages of AI legal personhood and considers whether the hybrid model could be useful in resolving any of the issues recognised. Finally, Chapter 6 discusses the article’s contribution, and Chapter 7 concludes by summarizing key outcomes and formulating questions for future research.

On artificial intelligence and legal persons

This chapter provides working definitions for the article’s key concepts: artificial intelligence and legal personhood. The task is far from simple. As Laukyte (2021, p. 446) has described, we do not have a definition for AI because “we have yet to find a consensus on what AI is”. Concerning legal personhood, Banteka (2021, p. 550) has recognised that “its meaning is far from controversial”. Nevertheless, this chapter aims to provide some general definitions applied in this article.

Firstly, the term “artificial intelligence” has a variety of meanings that have also shifted over time. (Singh Grewal 2014, p. 9.) It is therefore no wonder that it is often substituted with other terms. For example, the Institute of Electrical and Electronics Engineers (IEEE), which develops industry standards for these technologies, refers to “Artificial Intelligence Systems”. (IEEE 2021.) However, even the IEEE has changed its terminology multiple times. (See IEEE 2021, 2017, 2016.) Other alternative terms include, for instance, “non-biological intelligence” (Jaynes 2020, p. 344) and “autonomous artificial agents” (Chopra & White 2011, pp. 27, 28). Nevertheless, as the objective of this article is to connect with a discussion that commonly utilizes the term “artificial intelligence”, this term will be used here with the following specification: for this article, AI is not considered to have consciousness, moral autonomy or intrinsic value that would necessitate full personhood. (See, e.g., Gordon 2020, 2022; Martinez and Winter 2021; Jowitt 2021; Chen and Burgess 2019; Jaynes 2020.) Thus, this article does not cover such entities, even if they were possible at “this stage of technical evolution”. (Capurro 2012, p. 485. See also Gordon 2020, 2022; Anderson and Anderson 2011, p. 1.)

Of course, a possible issue is that we may not know whether an artificial entity has reached consciousness. Nevertheless, this article works from two crude (and possibly false) assumptions:

(a) Those who know the artificially intelligent system can also estimate whether it functions on the level of consciousness, even if it successfully imitates consciousness. (See, e.g., List 2021, p. 1219; Penrose 1990, pp. 9, 10. See also Shevlin 2021; Proudfoot 2011; Dennett 1988.)

(b) The legal personhood model here applies to systems we can confidently say do not have consciousness. Hence, the scope of the article excludes systems such as artificial general intelligence (AGI), strong AI, artificial consciousness, and spontaneous intelligence. (See, e.g., Banteka 2021, pp. 542−548; Chen and Burgess 2019, p. 75.) Instead, it covers systems with clearly defined operations, societal roles, and corresponding legal rights and duties. (See, e.g., Novelli et al. 2022, pp. 217, 218; Bartneck et al. 2020, pp. 13, 14; Laukyte 2021, p. 453.) These systems can utilize machine learning at different levels of autonomy and can be adaptive. However, such autonomy is limited in both breadth and depth. Within these limits, the article aims at technology neutrality.

Another key concept is “legal personhood”. Legal persons are “non-human entities that are granted certain rights and duties by law”. (Chesterman 2020, p. 822.) Not everyone considers the possession of duties a necessary condition for legal personhood. (E.g., Kurki 2019, pp. 62−71.) Nevertheless, it is a widespread approach. (See, e.g., Laukyte 2021, p. 447; Chesterman 2020, p. 822; Bryson et al. 2017, pp. 280, 281; Chipman et al. 1997, p. 19; Solum 1992, p. 1239; Dewey 1926, p. 661.) It is also a practical approach for this article, whose purpose is to consider whether AI could have both legal rights and legal duties.

Who can be a legal person is a broad debate with various approaches, and comprehensive overviews are provided in previous literature. (See, e.g., Novelli et al. 2022, pp. 202−208; Gordon 2021, pp. 464−466; Chopra and White 2011, pp. 153−162; Kurki 2019, pp. 121−125.) This article adopts the legalistic position that anything can be a legal person. (See, e.g., Naffine 2003, pp. 346 and 351; Novelli et al. 2022, p. 207; Dewey 1926, p. 655.) However, while anything can be a legal person, there might be strong pragmatic reasons not to confer legal personhood on non-human entities, such as trees or rocks. The legal personhood of non-human entities is based on philosophical functionalism. (Bryson et al. 2017, pp. 278 and 282.) Legal persons can be created for different purposes and on different grounds, in which case the legal personhood of the entity originates from the added value it creates, such as the economic advantages created by corporate entities. (Smith 1928, pp. 288, 289; Novelli et al. 2022, pp. 208 and 212; Bryson et al. 2017, p. 277. See also Novelli 2021; Zevenbergen et al. 2018; Bertolini and Episcopo 2022.)

The question of whether conferring autonomous legal personhood on AIs creates sufficient added value is briefly discussed in Chapter 5.

Hybrid model of corporate legal personhood

When discussing corporate legal personhood, three models are generally referred to. This chapter briefly outlines these models, as more extensive overviews have been provided elsewhere. (See, e.g., Banteka 2021, pp. 551−557; Chesterman 2020, pp. 822−824; Laukyte 2020, pp. 447−449; Chatman 2018, pp. 818−825; Solaiman 2017, pp. 162−166. Cf. Kurki 2019, p. 156.)

The first model is the artificial entity model, also referred to as fiction theory, dependent theory, and concession theory. While these theories have different origins, they all describe one key idea: the legal personhood of an artificial entity is devised for a purpose. The transference of “jural capacities” creates artificial juridical persons. (Savigny 1884, p. 2.) Ultimately, it is the State that recognises the legal capacities of artificial entities. (See, e.g., Donyets-Kedar 2017; Berle 1952.)

The second is the aggregate entity model, also called contract theory or symbolist theory. (See Chesterman 2020, p. 823; Donyets-Kedar 2017, pp. 65, 66.) This model perceives companies as aggregates of their members. Some aggregate theorists extend this membership to employees, customers, or local communities, while others include only shareholders or other such constituencies. (Phillips 1994, pp. 1066 and 1091.) The aggregate entity theory captures the nature of corporations as group agents.

The final model considered here is the real entity model, also referred to as realist theory or independent person theory. (See, e.g., Solaiman 2017; Machen 1911a, 1911b.) In the real entity model, the company “is an entity distinct from the sum of the members that compose it, and […] that this entity is a person”. (Machen 1911a, p. 258.) The existence of a company as an entity is an objective fact, even without legal recognition. This model holds corporations to be “objectively real entities”. (Chesterman 2020, p. 823.) The law merely gives legal effect to the existence of this entity. (Machen 1911a, p. 261.) In the real entity model, companies have ambitions, interests, intentions, and a sense of morality and duty (Phillips 1994, pp. 1097, 1098; Machen 1911b, p. 348). Objectively, companies are as real as natural persons (Machen 1911a, p. 261).

Each model can be stretched to accommodate opposing views, which has made it difficult to determine which model should be applied. (See, e.g., Chatman 2018, p. 860; Donyets-Kedar 2017, pp. 76, 77; Phillips 1994, p. 1063; Dewey 1926, p. 655.) However, rather than forcing a particular set of legal powers onto a particular theoretical model, the hybrid model utilizes all models simultaneously. (Raskulla 2022, p. 56.) Furthermore, the hybrid model recognises that corporations as legal entities have key characteristics of all three models:

(a) Corporations are artificial entities because their recognition as autonomous legal persons depends on fulfilling requirements described by law. The law also describes their societal function of doing business. Correspondingly, companies can be stripped of their legal powers and personhood if they cease doing business or violate the law.

(b) Corporations are aggregate entities because a strong contractual element exists in establishing a corporate person and defining its objectives and legal powers.

(c) When doing business, corporations are as real as natural persons. They may acquire rights, make contracts, or be a party to a legal proceeding within the limits of legal powers corresponding to their societal function. (Raskulla 2022, pp. 135−140.)

The simultaneous application of the three models creates a hybrid model of corporate legal personhood. It accommodates the fact that corporations as legal persons can simultaneously “be people” and “never be people” (Chatman 2018, p. 814) or be “a subject of ownership” and “an object of ownership” (Kurki 2019, p. 106). Because of this, the hybrid model corresponds better with the legal reality of corporate legal personhood than any single model alone. (Raskulla 2022, p. 144.) For example, it accommodates the fact that different corporate models have different levels of autonomy. (See, e.g., Hansmann et al. 2006, p. 1337.) The hybrid model also dissolves the dichotomies between the different models. It has been suggested that the hybrid model could apply to other organizations, including States, or work as an analytical framework to conceptualize the legal personhood of AI. (Raskulla 2022, p. 327.) The following chapter aims to do just that.

Artificial intelligence as a hybrid legal person

This chapter aims to study whether the hybrid model of corporate legal personhood applies to AI legal persons. The objective is to resolve whether AI legal personhood is best described by the (1) real entity model, (2) aggregate entity model, (3) artificial entity model, or (4) hybrid model. The examination also highlights some key characteristics of legal persons: autonomy, adaptability, and artificiality.

(1) AI as a real entity

A key feature of real entities is legal autonomy. Hence, it could be argued that entities operating with significant autonomy are eligible for legal personhood. Artificial intelligence is autonomous by definition: artificially intelligent systems have the potential to operate autonomously and to interact with their surroundings (whether physical or digital), impacting their environment. (See, e.g., Novelli et al. 2022, p. 197; Laukyte 2020, pp. 447, 448; Singh Grewal 2014, p. 9; Chopra & White 2011, pp. 9, 10 and 187, 188.)

AI’s potential for de facto autonomy is far greater than that of corporate entities, as it may not need a human intermediary to interact. For example, we can conceive of a company run and maintained by AI without human intervention after the system is operational, but not vice versa. (See Reyes 2021.) Once provided with the necessary data, algorithms, system architecture, and connections to external systems and devices, AI may formulate and execute decisions. It may also adapt autonomously by learning from previous interactions. (Floridi & Sanders 2004, p. 358.) Increasing AI autonomy is the core reason for the current legal debate. (See Chapter 5 below.)

Due to these properties, it can be argued that AI can be a real entity from a legal perspective. At the least, it has a much stronger operational aptitude for being a real entity than corporations, which can be reduced to names on a registry. However, AIs are not real entities, for even though AI can operate autonomously, “whatever they are or do is ultimately determined by choices of others, namely their designers, producers and users”. (Novelli et al. 2022, p. 200.)

(2) AI as an aggregate entity

Corporations and other entities are described as group entities or agents. (See, e.g., Kurki 2019, pp. 158, 159; Dyschkant 2015, pp. 2084, 2085; Savigny 1884, p. 181. See also Laukyte 2021; List 2021.) Artificial intelligence is not necessarily perceived as an aggregate entity due to its high level of autonomy. (See, e.g., Reyes 2021, p. 1497.) However, there are also key parallels: neither a corporation nor an AI has a representational stance, a motivational stance, or a capacity to act according to those stances without human actors somewhere in the chain. (List 2021, p. 1219.) While AI might appear to be a real entity, it is nevertheless the outcome of the decisions and actions of natural persons.

The aggregate property of AI systems is essential when considering accountability. For example, insurance systems can be used to establish legal liability for an AI system and secure swift compensation for damages. (See, e.g., Novelli 2022; Solum 1992.) However, the aggregate entity model recognises the underlying responsibility of the associated individuals (e.g., manufacturers, programmers, users) to design, operate, and maintain the systems carefully.

(3) AI as an artificial entity

Corporations and AIs are considered here to be non-natural entities without intrinsic value. Therefore, there is no ontological reason for them to have legal personhood. (See Bertolini & Episcopo 2022.) However, corporations and other legal persons may have State-recognised legal capacities. This capacity is conferred on artificial legal persons because they serve a particular societal function and create added value. For example, corporations of the 1800s built the North American railway network, and today corporations enhance economic efficiency by participating in the modern market system. Similarly, conferring AI with legal personhood can be expected to create some societal added value.

Just as corporate founders and legislators define a corporation’s purpose (Chatman 2018, p. 853), the purpose of an AI is recognised by the State according to the technical properties provided by the human actors. The associated natural persons know the technical peculiarities of the AI necessary to determine its operational, and corresponding legal, level of autonomy, and hence provide vital information for legislators; the State’s task is to ensure that AIs as legal persons create added value and that the risks are controlled. (See, e.g., Novelli et al. 2022, pp. 201 and 203; Beck 2016, p. 48.) However, to control the risks adequately, it is necessary to recognise some liabilities for the associated human actors, that is, to recognise the nature of AI as an aggregate entity.

(4) AI as a hybrid entity

According to the hybrid model, a legal person may simultaneously have the attributes of a real entity, an aggregate entity, and an artificial entity. Based on the preceding analysis, in which artificial intelligence was recognised to hold key characteristics of each of the three models, it is reasonable to claim that no single model alone can illustrate the complex nature of AI. Instead, a hybrid model is required when considering AI legal personhood. (Table 1.)

Table 1 Applicability of hybrid model of legal personhood to various AI systems with different levels of adaptability, autonomy and risks

As the legal powers of AI systems are determined by their technical properties, and systems can vary, one size does not fit all. (Chen & Burgess 2019, p. 77.) However, the hybrid model is flexible. It recognises that legal persons, whether corporations or AIs, may have varying degrees of autonomy, group agency, and connection with private and public interests. Nevertheless, compared to corporations, AIs have greater potential for legal personhood: the legal autonomy of an AI can be proportional to its real autonomy, as determined by its technical properties. It could therefore also be argued that without autonomy, an AI system does not fulfil the conditions for legal autonomy but should be seen as an aggregate entity. However, while a high level of real autonomy can be seen as both a justification and a prerequisite for legal autonomy, real autonomy does not necessitate corresponding legal autonomy. On the contrary, high-risk and highly autonomous systems may be too autonomous for legal personhood, particularly when combined with high adaptability.

Too autonomous for legal personhood?

This article has considered whether AI could be provided with legal personhood by studying the analogy between corporations and AI and applying the hybrid model of legal personhood. The outcome is positive. However, the key question is whether the creation of AI legal personhood creates added value. It is necessary to highlight the difference between the added value created by AI and the added value created by conferring legal personhood on AI. This chapter provides a brief overview of this discussion and considers whether the hybrid model could be useful in resolving any of the issues recognised.

The approach adopted here evaluates added value in terms of costs and benefits regarding (a) material interests and (b) moral rights and obligations, while (c) prioritizing moral rights over material interests. (Bryson et al. 2017, p. 283. See also Laukyte 2021, p. 453.) Hence, while Saudi Arabia may have gained added value in the form of commercial value by granting legal personhood to the robot Sophia, the decision has been criticized for undermining human value. (See Parviainen & Coeckelbergh 2021; Pagallo 2018a; Yampolskiy 2018.)

Arguments both for and against AI legal personhood have been presented widely in previous literature:

Con: The key arguments against AI legal personhood hold that it does not create added value: legal personhood is an unnecessary and overstated measure, or the risks of AI legal autonomy outweigh the potential benefits. (See, e.g., Chesterman 2020, pp. 825−827 and 844. See also Avila Negri 2021; Jowitt 2021; Zech 2021; Wendehorst 2020; Pagallo 2018b; Bryson et al. 2017; Chopra and White 2011.)

Pro: The arguments for AI legal personhood are also multiple and often refer to bridging the accountability gap: in many cases, the “responsibility cannot be traced back to any particular person”. (Novelli 2021, p. 1. See also Novelli et al. 2022, pp. 200−202; Banteka 2021, p. 540.) Therefore, conferring AI with legal duties and corresponding liabilities could and should secure more efficient compensation for victims when proper mechanisms exist. (See, e.g., Bertolini & Episcopo 2022; Lai 2021; Erdélyi & Erdélyi 2021; Giuffrida 2019; Sellwood 2017; Beck 2016; Solum 1992.)

Of course, resolving the question of AI legal personhood requires policy choices, the solving of coordination problems, and overall careful regulation. (Novelli et al. 2022, p. 202; List 2021, p. 1232.) As Novelli et al. (2022, p. 201) described, “different sets of rights and obligation will be required depending on the technical peculiarities of AI systems, and maybe the areas in which they are used”. The more autonomous an artificial system is, the higher the risks.

It is precisely the functional autonomy of AI that could prevent the conferral of legal autonomy. AI systems such as artificial general intelligence (AGI) may be too autonomous to be legal persons compared to corporations. (See List 2021, p. 1213.) To avoid the risks, AI should have only “specific rights and obligations” that “apply only to particular contexts”. (Beck 2016, p. 480.) Indeed, it could be argued that a condition of legal personhood is either that the AI is not really an AI but smart software with strictly defined operations, or that it is a weak or narrow AI that performs autonomous operations and adapts only within a narrow, well-defined area of tasks. (Bartneck et al. 2020, pp. 13, 14.)

The hybrid model does not provide straightforward solutions to whether AI should be provided with legal personhood. Nevertheless, the key variables in the discussion are the properties of artificial intelligence systems, such as the level of autonomy in different AI models. The hybrid model therefore provides an analytical framework capable of grasping AI’s many characteristics and translating them into corresponding legal powers.

Discussion

This article has briefly overviewed the hybrid model and its applicability to artificial intelligence. Further research is required to recognise its implications. Nonetheless, the hybrid model provides a new perspective on the legal personhood of both corporations and AI, with the potential to recognise the complex nature of artificial legal entities and dissolve the dichotomies between the three legal models of personhood. However, as the introductory chapter recognised, this is not the first hybrid model proposed.

Carliss Chatman (2018) introduced a hybrid model of corporate legal personhood. However, Chatman’s model has one significant difference from the hybrid model applied here: it excludes the aggregate entity model. Based on a critical empirical analysis of U.S. court cases, Chatman concludes that the application of the aggregate entity model has resulted in the overextension of the Constitutional rights of the associated natural persons, such as religious freedoms, to the corporation itself. (See Chatman 2018. See also Avila Negri 2021; Banteka 2021; Reyes 2021; Laukyte 2021.)

Chatman (2018, p. 858) states that the model “properly apportions rights to the corporation itself while giving adequate deference to state power”. Such a hybrid model recognises corporations’ “many facets”. (Id., p. 860.) However, the aggregate entity model is an invaluable facet of the hybrid model, particularly in bridging accountability gaps. (See, e.g., List 2021, pp. 1221−1232.) Its exclusion is unwarranted here, considering the differences between legal systems: firstly, even though the European Union has bypassed the doctrinal issue of corporate legal personhood (Fleischer 2010, p. 1704), the legal powers of corporations are clearly limited to those necessitated by their societal function. (See Raskulla 2022.) Secondly, in European legal systems, the problems raised by legal persons are likely to be resolved through interpretation and legislation, not interpretation alone.

However, other risks related to AI legal personhood expose us to “repeating the same problems” (Avila Negri 2021, p. 7). These include the transference of risks to third parties. (See, e.g., Chesterman 2020, p. 825.) It could also lead to the creation of “too big” legal persons. (Laukyte 2020, p. 451.) Chapter 5 concluded that while the hybrid model does not solve such problems, it provides an analytical framework within which such issues can be constructively discussed. Indeed, the question of AI legal personhood may help resolve the remaining vagueness in corporate legal personhood. (See, e.g., Raskulla 2022, pp. 328, 329.) Finally, the focus should be on fixing the problems related to corporate legal personhood rather than avoiding them. Even if AI legal personhood is not politically feasible (see, e.g., Novelli et al. 2022, p. 201; see also European Commission 2021), the ongoing debate could nevertheless help develop a more robust modelling of existing legal persons.

Conclusion

At the beginning of the 1800s, companies were allowed only by special provision to serve a public purpose. The relationship between corporations as private legal persons and public interests is less direct today than before. Nevertheless, it is not lost. Furthermore, artificial intelligence is now considered the new vessel to take humankind into the future. One of the issues raised has been the legal autonomy of AI.

This article joined the debate on AI legal personhood. It contributed to the discussion by applying a hybrid model of corporate legal personhood to artificial intelligence. The high de facto autonomy of some AI systems could allow higher de jure autonomy, corresponding to the real entity model. Nevertheless, natural persons have designed and produced the AI. The systems are therefore outcomes of decisions made by natural persons, and AI has the characteristics of an aggregate entity. However, the final decision to confer legal personhood on AI lies with the State. Furthermore, legal personhood is conditional on societal added value. Therefore, while the significant autonomy of AI systems is their most value-adding quality, it could also mean that AI is too autonomous for legal personhood because of the related risks.

The hybrid model of legal personhood offers no easy solutions. Nevertheless, as an analytical framework, it captures the complex nature of legal persons as autonomous, aggregate, and artificial entities. The hybrid model dissolves the dichotomies between these different models. It also recognises that legal persons might have different levels of autonomy. Moreover, while AI legal personhood might not be an option in the foreseeable future, the model could nevertheless be useful for conceptualizing the challenges of today and anticipating the challenges of tomorrow. However, more research is required on the applicability of the hybrid model of legal personhood before practical recommendations can be rendered.