
AI and the Quest for Governance – Why Waiting Is No Longer an Option

11.05.2025

Thoughts for Boards and Corporate Leaders


The Inflection Point

Every generation of corporate leadership encounters a defining moment – a point where technology, economics, and society converge to redefine what it means to govern responsibly.

For boards and senior executives today, that moment is artificial intelligence.

AI is not merely another wave of digital transformation or another tool to improve efficiency. It is a foundational shift in how organizations think, decide, and act. It promises immense value, but it also carries risks that challenge traditional governance. It can create outcomes no one intended, decisions no one fully understands, and responsibilities that no one yet knows how to assign.

And yet, for all the promise and peril, the uncomfortable truth is this – most boards and leadership teams are not ready.

Not because they don’t care, but because the rules of engagement have changed faster than the structures that keep organizations accountable.

Boards are used to governing within frameworks – financial controls, cybersecurity standards, data privacy regimes, ESG reporting. Each of those evolved over decades. AI is moving at a speed that compresses those decades into months.

As one director recently said to me, “We don’t even know what questions to ask yet.”

That’s the essence of this moment – not ignorance, but uncertainty in the face of acceleration.


The World That Changed Overnight

It wasn’t that long ago that artificial intelligence was confined to the back offices of data science teams – a predictive model here, an optimization algorithm there. Quietly and methodically, AI powered recommendations, scheduling, and fraud detection.

Then came generative and agentic AI – systems that no longer just predict, but create and act. They write, code, design, negotiate, and increasingly, they make decisions. They no longer just process data; they participate in business processes.

Today, AI drafts marketing campaigns, screens job applicants, handles customer inquiries, recommends credit terms, analyzes legal contracts, and assists physicians. It’s moving into logistics, procurement, and even strategic planning.

If you’re a director, that means AI is already influencing – directly or indirectly – your company’s strategy, your brand, your workforce, and your exposure.

The diffusion of AI has been breathtakingly fast. Adoption cycles that once took years now happen in weeks. A single software update can propagate across the enterprise overnight. A model update at a third-party vendor can change outcomes for thousands of customers before management even knows it happened.

This velocity has outpaced traditional governance. The quarterly rhythm of board oversight was designed for predictable systems – not self-learning ones.


When Risk Becomes Invisible

One of the hardest things about AI is that its failures are rarely obvious until they become catastrophic. For example:

  • A generative model that hallucinates a false product claim.
  • An HR algorithm that quietly filters out older candidates.
  • A chatbot that offers harmful advice to a vulnerable customer.
  • A risk model that “learns” to cut corners in ways no one programmed.

These aren’t science fiction scenarios; they are already happening. And when they do, the fallout extends far beyond the IT department.

Regulators ask questions about accountability. Customers lose trust. Employees question the integrity of leadership. Shareholders demand to know who was responsible.

And, as always, those questions land at the board’s door.

But AI is unlike any prior governance challenge. The systems evolve continuously. Their behavior is probabilistic, not deterministic. Their creators often don’t fully understand how they reach conclusions.

When boards ask, “Can management assure us this system is safe?” the honest answer – though rarely spoken aloud – is often, “We don’t know.”


The Governance Gap

Boards have faced transformative technologies before – the internet, cybersecurity, data privacy, ESG – and each time, they adapted. Committees were formed. Policies were written. New experts were invited to the table.

But AI introduces something different – unpredictability.

Cybersecurity breaches traditionally follow patterns. Privacy risks can be traced to policies and controls. But AI introduces emergent behavior – actions and outputs that arise from complex interactions among data, code, and context. A small tweak in data can create a large, unintended outcome.

Moreover, AI governance doesn’t fit neatly into existing committees. Is it technology? Risk? Cyber? Audit? Ethics? Strategy? The answer, of course, is yes – it’s all of them. Which often means the risk ends up owned by no one.

And this is where boards must evolve. Governance in the AI era is not about new committees; it’s about new fluency. Directors don’t need to become AI engineers, but they must become AI literate – able to discern when enthusiasm outpaces ethics, when efficiency undermines accountability, when innovation veers toward irresponsibility.


The Anatomy of Failure

When I advise leaders on AI readiness, I often start with a simple exercise – imagine an AI failure in your organization. What does it look like? Who finds out first? How fast does it reach the board?

In nearly every session, silence follows.

That silence is telling. It reveals how invisible AI has become in organizational processes – everywhere, but accountable nowhere.

The failure modes are familiar but amplified:

  • A regulatory shock when a new AI law requires disclosures you can’t produce.
  • A reputational crisis when an AI-generated decision goes viral for the wrong reasons.
  • An operational cascade when an algorithm’s error propagates across thousands of transactions before human detection.
  • A vendor failure when a third-party model you licensed introduces bias or liability you can’t see.
  • A workforce backlash when employees discover that their roles are being quietly automated without explanation or fairness.

These are not distant possibilities – they are happening now in finance, retail, healthcare, law, and manufacturing.

And yet, each of these scenarios is preventable – not by predicting every possible outcome, but by establishing disciplined governance.


What Leadership Looks Like in the AI Era

Leadership in the age of AI doesn’t begin with mastery of algorithms; it begins with mastery of questions.

The most effective boards I’ve seen don’t ask, “What AI tools are we using?” They ask, “What decisions are we delegating to AI, and how do we know they’re aligned with our values and obligations?”

They don’t ask, “Are we using AI ethically?” They ask, “How would we explain this AI decision to a regulator, a journalist, or our own employees?”

They don’t ask, “How fast can we adopt?” They ask, “How best can we govern?”

That’s what stewardship looks like.

Because in every era of technological change, the hardest leadership challenge is not the technology itself – it’s the human tendency to treat technology as destiny.

AI may change everything, but it does not absolve boards of their duty to think critically, question deeply, and act prudently.


The Invisible Human Cost

There’s another dimension boards can’t ignore – the people.

AI doesn’t just transform processes; it transforms work. It automates tasks, alters skill requirements, and, in some cases, displaces roles.

Employees are watching closely. They are asking whether leadership is acting with integrity and transparency. They want to know whether automation means opportunity or obsolescence.

Boards that fail to anticipate this human dimension risk eroding trust internally as well as externally.

The most forward-thinking directors I’ve worked with now ask – “What is our workforce transition plan? How are we reskilling employees for an AI-augmented future? How are we communicating our intentions clearly and honestly?”

These are not perfunctory questions – they are strategic. Because culture and trust, once lost, are far harder to rebuild than any model or process.


Redefining Fiduciary Duty

AI governance isn’t just a matter of ethics – it’s a matter of duty.

Boards are bound by fiduciary obligations of care, loyalty, and good faith. That means understanding the material risks that affect the enterprise and ensuring management has systems in place to manage them.

Today, AI is a material risk.

It touches compliance, data privacy, cybersecurity, intellectual property, employment, consumer interactions, and even securities disclosure. It shapes the information that informs decisions – and sometimes, the decisions themselves.

That means the board’s duty of care must evolve. Ignorance of AI risk will not be an excuse. Regulators are already signaling that they expect boards to exercise active oversight of AI strategy, risk, and ethics – just as they do with financial reporting or cyber resilience.

In that sense, AI governance is not optional; it is an extension of corporate duty.


The Leadership Imperative

What distinguishes successful leaders in the AI era is not perfection – it’s posture.

Passive boards wait for clarity. Active boards create it.

They don’t wait for regulators to tell them what to do; they anticipate and shape standards. They don’t ask for guarantees from management; they ask for evidence. They don’t delegate AI to a subcommittee and move on; they embed it into strategic and ethical oversight.

In this moment, leadership looks less like bold proclamations and more like disciplined questions.

It looks like a board chair saying, “Before we scale this model, show me the risk assessment.”

It looks like a director asking, “Have we tested how this AI handles edge cases?”

It looks like a CEO telling the market, “We’re innovating responsibly – and here’s how we know.”

Those moments, small and deliberate, are what build durable organizations.


A Call to Stewardship

The AI era challenges us not only to lead differently, but to think differently about what leadership means.

For decades, corporate governance has been built around predictability – financial statements, audits, controls. AI introduces uncertainty as a feature, not a flaw. That requires humility, curiosity, and a willingness to govern what we cannot fully predict.

But uncertainty is not an excuse for paralysis. It is an invitation to stewardship.

Boards that embrace this challenge – that invest in understanding, governance, and accountability – will not only protect their organizations; they will position them to lead with credibility and trust in a world where those are the scarcest commodities of all.


A Closing Thought

There will come a moment – maybe next quarter, maybe next year – when an AI-driven decision inside your organization produces a surprising, even uncomfortable, outcome.

When that happens, the question that will define your legacy as a board will not be, “Why did it happen?” but “Were we prepared?”

Preparedness, in this context, means more than technical readiness. It means intellectual honesty, ethical clarity, and a governance framework strong enough to adapt as fast as the technology it oversees.

The future of AI is not being written in code. It’s being written in boardrooms and in C-suites – by the leaders who decide how, when, and why this technology will be used.

That’s where stewardship begins. That’s where your leadership is needed most.

Because in the age of AI, leadership isn’t about adopting the next technology – it’s about ensuring that humanity, responsibility, and governance never fall behind it.