Self-Driving Corporations?

John Armour, Horst Eidenmüller

November 05 2019

What are the implications of artificial intelligence (AI) for corporate law? In a recent essay, we consider the trajectory of AI’s evolution, analyze the effects of its application on business practice, and investigate the impact of these developments on corporate law. Overall, we claim that the increasing use of AI in corporations implies a shift from viewing the enterprise as primarily private and facilitative, towards a more public, and regulatory, conception of the law governing corporate activity.

The standard account of corporate law is functional in its orientation; that is, it is premised on a social-scientific analysis of what actually happens in a business firm. The starting point for our inquiry is therefore to ask how AI will affect the activities of firms. Today’s AI is not going to replace humans in the C-suite. It is dominated by machine learning applications that assist and augment human decision-making. However, these applications raise multiple challenges for business organization, the management of which we collectively term ‘data governance’.

The impact of today’s AI on corporate law is coming to be felt along two margins. First, we expect a reduction across many standard dimensions of internal agency and coordination costs. Second, the oversight challenges—and liability risks—at the top of the firm will rise significantly. The new agency costs will be increasingly ‘strategic’ in their reach—that is, they will have potentially far-reaching consequences for corporate performance. Identifying where to monitor these costs, and how best to do so, will be a progressively complex and important task. For corporate law, this means that the duties of directors, who are ultimately responsible for oversight of firms’ performance, will increasingly come to recognize the significance of data governance for corporate success.

Tomorrow’s AI may permit humans to be replaced even at the apex of corporate decision-making. This is likely to happen first in what we call ‘self-driving subsidiaries’ performing very limited corporate functions. Replacing humans on corporate boards with machines implies a fundamental shift in focus: from controlling internal costs to designing appropriate strategies for controlling ‘algorithmic failure’, ie unlawful acts by an algorithm with potentially severe negative effects (physical or financial harm) on external third parties.

We discuss corporate goal-setting, which in the medium term is likely to become the center of gravity for debate on AI and corporate law. This debate will only intensify as technical progress moves toward the possibility of fully self-driving corporations, and we outline potential regulatory strategies for their control. Fully self-driving corporations might be subject to an ex ante assessment of their controlling algorithms as well as to strict liability for algorithmic failure, combined with compulsory corporate liability insurance. As a regulatory alternative, we consider unlimited pro rata liability of shareholders for corporate torts.

The potential for regulatory competition weakens lawmakers’ ability to respond, and so even though the self-driving corporation is not yet a reality, we believe the regulatory issues deserve attention well before tomorrow’s AI becomes today’s.