
Are boards ready for AI?


Issue № 2 | April 2025

Welcome back!

This month we are looking at the future of boards and AI. Artificial intelligence is no longer a futuristic dream tucked away in research labs. It’s already here, moving into boardrooms at remarkable speed. The trend is undeniable, and companies — along with directors — are increasingly aware of it.

In a survey conducted by Deloitte in 2024, corporate leaders were asked to select their three main priorities for the next 12 months: 43% selected “increasing the use of AI across the organization”—making it the number one priority.

It is no wonder that regulators are also paying attention. Take the new EU AI Act. It does not directly assign duties to boards — no explicit language about directors or fiduciary responsibilities. Yet, by mandating compliance frameworks around risk assessment, it necessitates board-level engagement.

Investors are alert too. Some of the largest U.S. pension funds recently announced they would oppose the re-election of directors at companies that failed to adequately oversee AI-related risks. In her featured interview for this newsletter (see below), Prof. Katja Langenbucher points out that ‘Shareholders might be just as critical of a board’s overreliance on AI as of a board that fails to use AI.’
 
The Question of Ownership

When it comes to AI and boardrooms, one immediate challenge is responsibility. If AI helps directors to make decisions — or worse, makes them independently — who owns these decisions?

Katja Langenbucher's paper — yesterday awarded the 2025 ECGI Law Prize (Best paper in the Law Working Paper series) — offers a useful framework. She proposes a matrix based on two variables, "ownership" and "trust", which helps identify the appropriate level of judicial scrutiny when boards use AI to inform their decisions.

Directors are expected to "own" their decisions, meaning that they cannot abdicate their authority. In return, corporate law places “trust” in directors to make business judgments, free from judicial second-guessing. 

When both ownership and trust are low — for example, when boards rely heavily on AI and the decisions don’t concern business judgment — courts, she argues, should apply more intensive judicial review.
 
The Myth of Perfect Tech

It's tempting to think that AI will fix what humans get wrong. That old corporate governance issues — bias, agency problems, information asymmetries — might just fade away under the cool logic of machines. But Profs. Enriques and Zetzsche warn against this optimism. They call it the “tech nirvana fallacy”: the false hope that technology will solve all corporate governance problems.

Certainly, if well deployed, technology can increase the transparency of board decisions, speed up processes and enhance the accuracy of decision-making. But despite these advantages, AI could actually make governance harder.

Heavy reliance on AI could increase the risk of paralysis: machines may identify a large number of red flags to be tackled and goals to be prioritised. More data isn’t always helpful. Boards may become overwhelmed, pressured by the weight of heightened expectations and the burden of deeper oversight.

Tech is not perfect — not yet, at least. Apart from its now-famous struggle to generate realistic hands, AI may replicate or even amplify misalignments between directors and shareholders if not trained carefully. Like human directors, machines can pursue their own objectives — or more likely, reflect the biases of those who designed them.

Not to mention that excessive use of AI may increase cybersecurity threats, making companies — and boards — more vulnerable to hackers’ attacks and the reputational damage that can follow. There is evidence that markets punish companies that fail to manage cybersecurity properly.

Beyond Mere Cosmetic Solutions

Several companies – including Boeing, eBay and Dell Technologies – have announced the appointment of a Chief AI Officer (or CAIO). Should this role be extended to the board? Some boards may think that bringing in an “AI expert” solves the governance problem. It certainly looks good in the annual report and might reassure investors in the short term.

However, research by Profs. Shapira and Nili suggests we should be more sceptical. Specialist directors (such as AI experts) often serve a symbolic rather than structural purpose. Their presence may distort board dynamics and needlessly enlarge the board without improving oversight.
 
Where This Leaves Us

The more we look at it, the clearer it becomes: most boards are likely not ready, at least not fully. AI has its place, and deep familiarity with its capabilities is increasingly necessary. Boards must adapt to fit a world where machines no longer quietly execute in the background but play a visible role in strategic choices.

Directors must remain vigilant, able to detect AI-related risks and mitigate errors. Ultimately, they must retain ownership and accountability over decisions, regardless of how intelligent the systems become. 
 
Learn More

For a deeper exploration of how boards can navigate AI governance challenges, read our exclusive interview with Prof. Katja Langenbucher (available below). She shares valuable and practical insights on black-box AI, board accountability and regulatory developments in this disruptive field.
 
Ciao for now,
~ Riccardo


Riccardo Rao is a PhD candidate in business law at the Universities of Udine and Trieste, Italy. His PhD research focuses on benefit corporations, with a comparative analysis across Europe and North America.

✉️ Please feel free to get in touch, share your thoughts and let us know how we're doing, email future@ecgi.org.

Featured Interview

with Katja C. Langenbucher


This interview is based on Prof. Langenbucher's paper "Ownership and Trust – A Corporate Law Framework for Board Decision-Making in the Age of AI", which explores how boards can responsibly integrate AI into their decision-making processes.

🏆 The paper has been named the winner of the 2025 ECGI Law Series Prize.

__________________________

AI & Board Decision-Making

 - What key safeguards should boards have in place when using AI as a decision-support tool?

A board must address two key questions: Which AI should it use, and what should the AI do? The first question involves “AI due diligence”. Most firms will not build their own AI but purchase a (narrow or foundation) model from a commercial provider, fine-tuning it with the firm’s own data. While selecting a reputable model provider is important, the latter step is core: the board is in a unique position to evaluate how well structured, reliable, relevant and unbiased the firm’s data is.

The second question requires the board to define the AI’s intended purpose. In the paper, I give examples such as predicting the share price for an IPO or moving from brick-and-mortar bank branches to online banking. Each use case presents distinct challenges. To determine what AI can and cannot do for them, board members must engage in dialogue with technical experts who can explain the potential limitations and risks of the AI’s contribution.


The Role of Explainability in AI Governance

 - Your paper challenges the idea that black-box AI should be excluded from board decision-making. How important is AI explainability for board decision-making? Should boards be required to understand AI decision processes, or is outcome reliability more relevant?

Black-box AI can seem intimidating. However, it is important to remember that managing risk and uncertainty is a core responsibility of the board. Boards routinely evaluate known unknowns. Upon closer examination, much depends on the context of the board decision. Does a model such as AlphaFold inspire the board to invest in a new drug? Understanding each step might be less important than assessing sunk costs should the model’s prediction be wrong. This is different if the board asks how a judge will decide a complex legal matter in various jurisdictions. Methods of explainable AI, such as LIME, SHAP or DiCE, are valuable as they help the board gauge the reliability of outcomes. However, any AI prediction is a function of both model and data. Explainability tools might ultimately be less important than the quality of the data and an understanding of the limitations of AI’s legal reasoning capabilities.


Ownership vs. Trust in AI-Augmented Boards

 - You propose a framework where judicial review varies depending on the balance of ownership and trust. How can boards ensure they "own" decisions while leveraging AI’s predictive capabilities?

What I call ownership implies that a board may not abdicate its authority. Stories you might find online about boards deferring their decision-making to the suggestions of a quirky robot-AI fall into this category. More importantly, ownership requires boards to take on accountability for their decision-making vis-à-vis shareholders. To truly own a decision that incorporates an unexpected or surprising AI prediction, the board must understand the differences between human cognition and the inner workings of an AI. AIs are “induction machines” that recognize recurring patterns in historical data. Humans are conceptual thinkers. They build hypotheses and seek to establish causality. Good strategic board decisions are a combination of both approaches. The AI provides input, serves as a sounding board or challenges group-think among board members. Humans leverage theory-driven reasoning to identify and scrutinize novel business opportunities. Ultimately, it may be a unique, firm-specific strength that produces a lasting competitive edge.


Regulatory Implications & Global Variations

 - Do you see significant differences in how U.S., EU, and other jurisdictions regulate AI in corporate governance? How might emerging AI regulations shape future board responsibilities?

There are significant differences in global approaches to regulating AI. President Trump’s 2025 Executive Order emphasizes fostering AI development with minimal government interference, except for sectoral regulation in areas such as finance or health. Some U.S. states have introduced their own AI laws. China adopts a closely monitored, centralized approach featuring mandatory ethics committees, algorithm accountability, and data privacy laws. The EU has passed the AI Act, which applies a risk-based approach across all sectors, tailoring compliance requirements to the risk levels of AI use cases. The UK is in the process of designing laws that combine elements of both principle-based and rule-based approaches. While AI in corporate governance has not yet been the direct focus of regulation, firms now face a complex array of global laws. For boards, this results in an unwelcome increase in compliance responsibilities. At the same time, regulatory competition between jurisdictions offers opportunities to select the most business-friendly environment.


Looking Ahead: AI & the Future of Corporate Law

 - As AI becomes more integrated into corporate strategy, what key legal and governance challenges do you anticipate in the next five years?

I distinguish between two types of board decisions. Type 1 decisions structure the use of AI by officers and employees; type 2 decisions concern the integration of AI into core board decision-making. Type 1 governance is primarily about if and how to use an AI. The if question identifies a value proposition: does the firm have suitable data and employee skillsets, and can it cope with error costs? The how question concerns the choice between execution-only, fully autonomous or human-cooperating AI. Legal challenges include oversight duties related to employment, data privacy, and securities law. For type 2 decisions, I anticipate two focus areas: especially in regulated industries, firms and supervisors will discuss AI’s role in risk management; across industries, boards will consider the business judgment rule’s requirement to be reasonably informed. Shareholders might be just as critical of a board’s overreliance on AI as of a board that fails to use AI.
