Boards and AI: Why the Governance Gap Is Structural, Not Technical
Artificial intelligence is no longer an emerging issue for boards; it is a present one. Across sectors, AI is shaping capital allocation, risk management, workforce strategy, and competitive positioning. Yet while directors increasingly acknowledge its importance, many boards remain uneasy about their ability to govern it effectively. The problem, as recent evidence suggests, is not primarily a lack of technological sophistication. It is a governance problem.
A new regional survey of board members across Latin America and the Caribbean highlights this tension clearly. Drawing on responses from 263 directors serving on more than 700 boards, the study finds that while technology and AI are widely recognised as strategically relevant, boards consistently rate their preparedness to oversee these issues well below their overall governance effectiveness. This gap is revealing. It suggests that the challenge lies less in understanding what AI is, and more in how boards are structured to engage with risks and opportunities that cut across strategy, culture, ethics, and organisational design.
Boards have encountered unfamiliar risks before—from complex financial instruments to cybersecurity and climate exposure. In hindsight, governance failures in these areas rarely stemmed from a complete absence of information. More often, they reflected fragmented oversight, unclear ownership, and decision-making processes ill-suited to issues that evolve quickly and span multiple committees. AI fits squarely into this pattern.
Survey respondents report that AI-related topics most often reach the board through defensive channels: cybersecurity incidents, regulatory compliance, or risk reviews. By contrast, far fewer boards regularly engage with AI as a driver of business model transformation or long-term value creation. Even when AI appears on the agenda, discussions tend to lack depth. On a ten-point scale, directors rate the strategic depth of board deliberations on technology and AI at just under six. That is not ignorance; it is structural thinness.
One reason is dispersion. Responsibility for AI oversight is often split across risk, audit, technology, or cybersecurity committees. In principle, this is sensible. In practice, the absence of strong coordination mechanisms makes it difficult to assemble a coherent picture of materiality, trade-offs, and escalation. Issues surface late, in narrow form, and often after strategic options have already been constrained. Oversight becomes reactive rather than anticipatory.
Training alone does not resolve this. While directors who have received AI-related training report higher confidence, the overall effect is modest. Only a third of respondents report having received any training on the fundamentals or applications of AI, and even among those who have, perceived preparedness remains moderate. This reinforces a crucial point: familiarity with tools does not substitute for governance capability. What boards struggle with is not how AI works, but how to ask the right questions, challenge management assumptions, and integrate technological choices into fiduciary judgment.
The survey also reveals a quieter but equally significant shift. Directors themselves are increasingly using generative AI tools—often to summarise board materials or prepare questions. There is nothing inherently problematic about this. Used well, such tools can enhance efficiency and broaden access to information. But efficiency should not be confused with judgment. When individual directors rely on opaque or unverified outputs outside collective deliberation, boards risk fragmenting their shared factual base. Confidentiality, record integrity, and the provenance of analysis become governance issues in their own right.
Interestingly, the data puncture some common myths. AI use among directors does not vary meaningfully by age or gender. What matters instead is disposition: curiosity, willingness to experiment, and commitment to ongoing learning. Governance gaps, in other words, are shaped as much by institutional norms as by individual capability.
Where boards appear more confident, one feature stands out: institutionalisation. Boards with dedicated structures—regular agenda time, specialised committees, external expertise, or clear escalation triggers—report significantly deeper discussions and greater strategic engagement. Institutional design matters. When AI governance is treated as episodic or ad hoc, it remains defensive. When it is embedded into board routines, it becomes deliberative.
This has important fiduciary implications. A board that limits its engagement with AI to compliance and control risks falling short of its duties of anticipation and care. AI is not merely another operational risk. It reshapes incentives, reallocates decision rights, and alters how organisations learn and adapt. Governing it requires boards to move beyond reassurance toward judgment.
The proliferation of AI governance frameworks reflects genuine demand, but it also carries a risk. Piecemeal guidance—separating ethics from strategy, or innovation from risk—can reinforce the very fragmentation boards are struggling with. AI’s defining characteristic is that it cuts across all of these domains simultaneously.
The central question, then, is not whether boards understand AI in technical detail. It is whether their governance structures are capable of identifying material risks and opportunities early, integrating perspectives across committees, and sustaining informed collective judgment. Without that, boards may remain formally compliant yet substantively unprepared.
________________
Bruno Sbardellini Cossi is the Corporate Governance Director at IDB Invest.
The ECGI does not, consistent with its constitutional purpose, have a view or opinion. If you wish to respond to this article, you can submit a blog article or 'letter to the editor' via the ECGI blog.