Institutional Failures, AI and Corporate Governance

The challenges that corporate boards face are growing as industry becomes increasingly complex and the demands on directors intensify. From a legal perspective, the board of directors performs various duties, but conventional wisdom suggests that the boards of large, complex, public corporations function primarily as organs that monitor the management of their corporations. As part of this duty, boards monitor internal operations, external compliance, and the behaviour of other key decision-makers within the corporation (i.e. managers and employees). They must perform these functions in the interests of shareholders and, increasingly, other key stakeholders. However, as Eisenberg identified in the 1970s, a number of institutional failures inhibit the effectiveness of the board as a corporate monitor. Constraints on directors' time, on the provision of adequate information, and on board composition mean that boards often struggle to carry out their monitoring function. In a more modern context, Gilson and Gordon argue that well-meaning directors are ‘thinly informed, under-resourced, and boundedly motivated’.

The rise of technology promises to address these shortcomings. Artificial intelligence (AI), in particular, has been presented as a technology that could help ease the constraints relating to time, information, and board composition. It can do so by automating routine processes, by providing cognitive insights, and through cognitive engagement. However, despite its many promises, AI also inevitably complicates the corporate governance puzzle, as it poses significant accountability issues and imposes other costs when addressing institutional failures. To provide greater clarity, our article, The Board Monitoring Function: Artificial Intelligence in the Era of Heightened Accountability, explores the emergence of AI (emphasising three developments: large language models, generative AI, and agentic AI), analyses its potential benefits and challenges in the boardroom, and evaluates the operation of existing legal and governance accountability measures where AI is deployed in the corporate setting, including in the boardroom. The article then sets out a number of proposed reform options for our corporate law and corporate governance frameworks to address the deployment of AI in the corporate context.

Refining the ‘Human-in-the-Loop’ Approach 

The use of AI in the boardroom must be consistent with fiduciary and conduct duties. In Australia, for example, directors are bound by statutory and general law duties of care and diligence, as well as a number of duties grounded in loyalty. Using AI in the boardroom is unlikely to relieve directors of these duties and may instead impose new pressures on directors seeking to discharge them, because AI has the potential to operate as a ‘black box’ technology. Statutory safe harbour provisions, and provisions relating to reliance and delegation, operate in a human-centric environment and do not contemplate the use of AI.

Our article analyses these issues, drawing on several court decisions and scholarship in the field. We note that directors must remain actively involved, critically assess AI insights, ask difficult questions, and ultimately exercise independent judgment to satisfy their legal duties when exercising monitoring functions. The emphasis on accountability is particularly important as courts consider whether to attribute liability directly to a corporation, either through the systems adopted within the corporation or through group decision-making. This raises new considerations for corporate accountability in the era of AI.

The emphasis on retaining human involvement in decision-making is consistent with the legislative and governance framework in Australia and also resonates with current Human-in-the-Loop scholarship. However, we believe this concept needs to be adapted in two ways: the Right-Human-in-the-Loop and the Human-in-the-Loop-Right. To ensure the Right-Human-in-the-Loop, we argue that legal and governance settings should ensure that AI expertise is present in the boardroom (whether through the appointment of directors with technological expertise or through reliance on expert advice regarding the operation of particular AI systems), as well as expertise relating to the ultimate decisions in which AI features. The allocation of responsibilities related to AI systems should be incorporated into the responsibility-mapping exercise of an accountability regime for executives, managers and directors.

To provide a Human-in-the-Loop-Right approach, on the other hand, we must ensure that directors' decisions that rely on AI are auditable, explainable, and ultimately consistent with legal and ethical obligations. We argue that the audit requirement is particularly important, although controversial, given corporations' reluctance to provide access to the algorithms they use. Such measures should be combined with a legal obligation on directors to declare in their reporting that AI has been used in the boardroom and in the corporation more broadly, and to commit to reviewing and verifying the accuracy of information derived from AI. Further, external AI auditors should conduct regular and ad hoc audits to ensure compliance with legal and governance obligations. This, in turn, would allow for proper regulatory oversight of AI used in the corporate context, with the relevant regulator and auditors working with corporations and their management teams to ensure the appropriate use of AI.

This proposed governance framework, as a whole, aims to balance the efficiency of AI with the irreplaceable value of human judgment and legal accountability.

Embracing AI with Accountability

Corporate governance is at a crossroads in the age of AI. Boards must resist the temptation to adopt AI for the sake of innovation without adequately considering accountability. Instead, they should take a refined human-in-the-loop approach: cultivating AI literacy, establishing transparent oversight processes, and maintaining, and indeed heightening, their accountability regimes.

The ultimate goal of integrating emerging technologies like AI is not to replace directors with algorithms but to equip directors with better tools so they can perform their monitoring functions more effectively, act faster, and govern smarter. Corporate governance and the law can help achieve this goal as long as we ensure that the accountability regime adequately embraces a human-centric approach.

____________

Steve Kourabas is an Associate Professor at Monash University.

Cheng-Yun Tsang is an Associate Professor at Monash University Faculty of Law (Monash Law).

__________

This blog is based on a paper presented at the Asian Corporate Law Forum (ACLF). Visit the event page to explore more conference-related blogs.

The ECGI does not, consistent with its constitutional purpose, have a view or opinion. If you wish to respond to this article, you can submit a blog article or 'letter to the editor' by clicking here.

This article features in the ECGI blog collection Board of Directors
