
Key Finding

AI can help boards overcome governance challenges, but a new study warns that it may undermine accountability, urging a "human-in-the-loop" approach to keep directors legally responsible for AI-influenced decisions.

Abstract

The use of AI introduces entirely new considerations for corporate governance as well as novel dimensions to existing issues. In this paper, we consider how AI affects the core responsibilities of the board as a monitoring organ in large public companies. The paper outlines the changing power dynamics of the corporation and its governance, as well as the so-called institutional failures (constraints of time, information, and board composition) that have been thought to impede the board's effective operation as a monitor in a human-centric governance framework. The paper then explores the different types of AI systems being deployed in corporate settings to either augment or automate decisions, and how these may help address institutional failures by providing efficiencies. However, the paper notes that some of these efficiencies may run contrary to established legal accountability mechanisms, using the Australian legal framework as an example. The paper concludes that while the board may use AI to help address institutional failures, individual directors remain responsible for the ultimate decisions made by the board. To ensure that directors comply with their legal obligations, we need to implement a 'human-in-the-loop' approach to corporate governance that ensures that the right people are involved in overseeing decisions that involve AI ('right-human-in-the-loop'), as well as guidance on how board decisions should be made when they involve AI ('human-in-the-loop-right').

