

Interview with Katja Langenbucher on AI Board Decision-Making and Corporate Governance
This interview is based on Katja Langenbucher's paper, "Ownership and Trust – A Corporate Law Framework for Board Decision-Making in the Age of AI", which explores how boards can responsibly integrate AI into their decision-making processes.
🏆 The paper has been awarded the 2025 Cleary Gottlieb Law Prize for the Best Paper in the ECGI Law Working Paper Series in 2024.
________________________________
AI & Board Decision-Making:
- What key safeguards should boards have in place when using AI as a decision-support tool?
A board must address two key questions: Which AI to use, and what should it do? The first question involves “AI due diligence”. Most firms will not build their own AI but will purchase a (narrow or foundation) model from a commercial provider and fine-tune it with the firm’s own data. While selecting a reputable model provider is important, the latter step is core: the board is in a unique position to evaluate how well-structured, reliable, relevant, and unbiased the firm’s data is. The second question requires the board to define the AI’s intended purpose. In the paper, I give examples such as predicting the share price for an IPO or moving from brick-and-mortar bank branches to online banking. Each use case presents distinct challenges. To determine what AI can and cannot do for them, board members must engage in dialogue with technical experts who can explain the potential limitations and risks of the AI’s contribution.
The Role of Explainability in AI Governance:
- Your paper challenges the idea that black-box AI should be excluded from board decision-making. How important is AI explainability for board decision-making? Should boards be required to understand AI decision processes, or is outcome reliability more relevant?
Black-box AI can seem intimidating. However, it is important to remember that managing risk and uncertainty is a core responsibility of the board, which routinely evaluates known unknowns. Upon closer examination, much depends on the context of the board’s decision. Does a model such as AlphaFold inspire the board to invest in a new drug? Understanding each step of the model might be less important than assessing the sunk costs should its prediction be wrong. The situation is different if the board asks how a judge will decide a complex legal matter across various jurisdictions. Methods of explainable AI, such as LIME, SHAP, or DiCE, are valuable because they help the board gauge the reliability of outcomes. However, any AI prediction is a function of both model and data. Explainability tools may ultimately matter less than the quality of the data and an understanding of the limits of AI’s legal reasoning capabilities.
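For readers unfamiliar with these tools, the sketch below shows what such an explanation looks like in practice, using the open-source shap library. The model and data are toy placeholders invented for illustration, not an example from the paper: a prediction is decomposed into per-feature contributions, which is the kind of output a board could use to probe whether a surprising prediction rests on sensible inputs or on artifacts in the data.

```python
# Minimal sketch: attributing a model's prediction to individual input
# features with the open-source "shap" library. The model and data are
# illustrative toys, not a real board-decision use case.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Toy "firm data": three hypothetical features a predictive model might use.
X = rng.normal(size=(500, 3))
y = 2 * X[:, 0] - X[:, 1] + rng.normal(scale=0.1, size=500)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes Shapley values: each prediction is decomposed
# into per-feature contributions, indicating why the model predicted
# what it did for a given input.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])
print(shap_values)  # per-feature contribution to the first prediction
```

A board would not run such code itself, of course; the point is that these attributions turn a black-box output into something directors can interrogate.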
Ownership vs. Trust in AI-Augmented Boards:
- You propose a framework where judicial review varies depending on the balance of ownership and trust. How can boards ensure they "own" decisions while leveraging AI’s predictive capabilities?
What I call ownership implies that a board may not abdicate its authority. Stories you might find online about boards deferring their decision-making to the suggestions of a quirky robot-AI fall into this category. More importantly, ownership requires boards to take accountability for their decision-making vis-à-vis shareholders. To truly own a decision that incorporates an unexpected or surprising AI prediction, the board must understand the differences between human cognition and the inner workings of an AI. AIs are “induction machines” that recognize recurring patterns in historical data. Humans are conceptual thinkers: they build hypotheses and seek to establish causality. Good strategic board decisions combine both approaches. The AI provides input, serves as a sounding board, or challenges groupthink among board members. Humans leverage theory-driven reasoning to identify and scrutinize novel business opportunities. Ultimately, this combination may be a unique, firm-specific strength that produces a lasting competitive edge.
Regulatory Implications & Global Variations:
- Do you see significant differences in how the U.S., the EU, and other jurisdictions regulate AI in corporate governance? How might emerging AI regulations shape future board responsibilities?
There are significant differences in global approaches to regulating AI. President Trump’s 2025 Executive Order emphasizes fostering AI development with minimal government interference, except for sectoral regulation in areas such as finance or health. Some U.S. states have introduced their own AI laws. China adopts a closely monitored, centralized approach featuring mandatory ethics committees, algorithm accountability, and data privacy laws. The EU has passed the AI Act, which applies a risk-based approach across all sectors, tailoring compliance requirements to the risk level of each AI use case. The UK is in the process of designing laws that combine elements of both principle- and rule-based approaches. While AI in corporate governance has not yet been the direct focus of regulation, firms now face a complex array of global laws. For boards, this results in an unwelcome increase in compliance responsibilities. At the same time, regulatory competition between jurisdictions offers opportunities to select the most business-friendly environment.
Looking Ahead: AI & the Future of Corporate Law
- As AI becomes more integrated into corporate strategy, what key legal and governance challenges do you anticipate in the next five years?
I distinguish between two types of board decisions. Type 1 decisions structure the use of AI by officers and employees; Type 2 decisions concern the integration of AI into core board decision-making. Type 1 governance is primarily about if and how to use an AI. The if question identifies a value proposition: does the firm have suitable data and employee skill sets, and can it cope with error costs? The how question concerns the choice between execution-only, fully autonomous, or human-cooperating AI. Legal challenges include oversight duties related to employment, data privacy, and securities law. For Type 2 decisions, I anticipate two focus areas. First, especially in regulated industries, firms and supervisors will discuss AI’s role in risk management. Second, across industries, boards will consider the business judgment rule’s requirement to be reasonably informed. Shareholders might be just as critical of a board’s overreliance on AI as of a board that fails to use AI.
_______________
The ECGI does not, consistent with its constitutional purpose, have a view or opinion. If you wish to respond to this article, you can submit a blog article or 'letter to the editor' by clicking here.