
10 Questions Every Board Must Answer About GenAI in 2026

92% of companies plan to increase AI investment. Only 1% have an operating model that captures value at scale. These are the 10 questions that separate the two groups.

AI has moved to the top of the board agenda. But the questions have changed. Directors are no longer asking "What is our AI strategy?" They are asking "Where is the return?" and "Who is in control?" This diagnostic covers both — value capture and governance — because you cannot have one without the other.

01

Where is AI creating measurable value — and where are we stuck in pilots?

Board Test

Can you name 3 AI initiatives with proven, CFO-validated ROI? Or are you funding experiments?

Only 1% of companies describe themselves as "mature" on AI deployment. 90% are running experiments. The gap is not technology — it is the operating model. Organizations stuck in "pilot purgatory" are spending without capturing value. The CEO question is not "what AI should we buy?" but "why aren't our pilots scaling?"

McKinsey 2025: 92% of companies plan to increase AI investments, but only 1% call themselves mature. 66% of leaders are dissatisfied with AI progress. (McKinsey Superagency, BCG From Potential to Profit)

02

Do we have an AI operating model — or are decisions happening ad hoc?

Board Test

Who decides which AI use cases to pursue, fund, scale, or kill? Is that written down?

Most organizations have no formal structure for AI decisions. IT picks tools, business units run experiments, nobody governs the portfolio. AI without an operating model produces scattered pilots, duplicated spend, and no accountability. The operating model — not the algorithm — is the competitive moat.

AI pilots stall because enterprises treat execution as a human responsibility even when the decision is repeatable, low-risk, and time-sensitive. Intelligence without authority cannot compound. (RT Insights / MIT Sloan, 2025)

03

Are we redesigning processes — or just adding tools to broken workflows?

Board Test

Can you show measurable cycle-time reduction or cost savings — or just demos and anecdotes?

AI accelerates whatever it touches — including waste. Organizations that add AI to existing processes get marginal gains. Those that redesign workflows — shorter decision cycles, eliminated handoffs, automated reports — capture structural advantage. The ROI is in process redesign, not tool addition.

BCG: Winners deploy GenAI in everyday tasks (10-20% productivity), reshape critical functions (30-50% efficiency), and invent new business models. Only 10% achieve the third level. (BCG From Potential to Profit, 2024)

04

Are we building capability — or renting it?

Board Test

If your AI provider changes terms tomorrow, what do you still own and control?

Models are commoditizing. Competitive advantage comes from your operating model, data discipline, and process DNA — not which vendor you chose. Organizations that build internal capability retain strategic optionality. Those that rent everything face vendor lock-in and hollow competency.

55% of enterprises have deployed 100+ AI models, yet few have a concrete strategy linking those deployments to business outcomes. AI has surpassed cybersecurity as companies' top investment area. (Enterprise AI Maturity Index, McKinsey CIO Survey, 2025)

05

What happens to our competitive position if competitors achieve AI maturity first?

Board Test

Have you scenario-planned the cost of being 18 months late on AI operating model maturity?

AI maturity compounds. Early adopters build data advantages, process advantages, and talent advantages that are difficult to replicate. 88% of agentic AI early adopters already see ROI, compared to 74% across all organizations. The cost of delay is not just missed savings — it is competitive displacement.

93% of leaders believe that those who successfully scale AI agents in the next 12 months will gain an edge over industry peers. 82% of early adopters have deployed 10+ agents vs. 39% across all orgs. (Capgemini, Google Cloud ROI of AI, 2025)

06

Do we know where AI is already used — including shadow AI?

Board Test

Can you produce a real inventory — including tools employees adopted without IT — within 48 hours?

In 69% of cases, AI is brought into organizations by individuals, not IT. ChatGPT for analysis, AI for email drafting, copilots for code. None of it appears in your technology registry. This is not a future risk — it is your current exposure. You cannot govern what you cannot see.

In 69% of cases, AI is brought by individuals. Digital skills score: 42/100. Less than 20% of AI users can detect hallucinations. (AI4SP Digital Skills Compass, 2025)

07

Who is accountable for each AI system — by name?

Board Test

If bias or data leakage occurs in an AI system tomorrow, who answers to the board?

Most organizations cannot answer this question. Not the vendor. Not the data team. Not "the business unit." Silence is risk. Named accountability per AI system is the foundation of any operating model. Without it, you have technology — not governance.

Singapore and WEF frameworks call for revising policies to make humans accountable for operational deployment and risk management of agentic AI. (DWT Governance Framework Analysis, 2025)

08

What authority are we delegating to AI systems?

Board Test

Which outputs are informational, which are decision-making, and which execute without human review?

AI is moving from copilots to autonomous agents. The question is not whether to delegate authority — it is how much, to which systems, with what guardrails. High-impact, irreversible decisions require human judgment. Routine, reversible decisions do not. Most organizations have not made this distinction explicit.

The goal is calibrated authority, not blanket autonomy. Decision design matters more than algorithmic accuracy. (MIT research, cited in RT Insights 2025). Trust in autonomous agents dropped from 43% to 27% in 12 months. (Capgemini, 2025)
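The distinction above — routine, reversible decisions can be delegated; high-impact, irreversible ones cannot — is ultimately a decision rule, and making it explicit is what "calibrated authority" means in practice. A minimal sketch, assuming three hypothetical decision attributes (`impact`, `reversible`, `time_sensitive`); the tier names mirror the Assist → Recommend → Execute → Autonomous ladder, and the thresholds are illustrative, not prescriptive:

```python
# Illustrative only: maps a decision profile to the maximum authority
# an AI system may hold. The attribute names and thresholds are
# assumptions for this sketch, not a standard taxonomy.

def autonomy_tier(impact: str, reversible: bool, time_sensitive: bool) -> str:
    """Return the highest autonomy tier appropriate for a decision."""
    if impact == "high" and not reversible:
        return "assist"        # human decides; AI only informs
    if impact == "high":
        return "recommend"     # AI proposes; human approves
    if reversible and time_sensitive:
        return "autonomous"    # AI acts; humans audit after the fact
    return "execute"           # AI acts within pre-approved guardrails

# A routine, reversible, time-sensitive decision can be fully delegated:
print(autonomy_tier("low", reversible=True, time_sensitive=True))  # autonomous
```

The point of writing the rule down is not the rule itself — any board can choose different thresholds — but that an explicit, auditable mapping exists before agents are deployed, rather than being improvised system by system.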

09

Can we satisfy regulators and clients on AI governance?

Board Test

Can you explain each AI system's risk category and reconstruct any AI-influenced decision?

The EU AI Act fully applies to high-risk systems from August 2, 2026. Morocco's CNDP enforces data protection for AI systems processing personal data. But regulation is only part of the picture — your European clients will start requiring AI governance documentation. The organizations that build this proactively win trust. Those that scramble reactively pay a premium.

EU AI Act penalties: up to €35M or 7% of global turnover. The Act applies to third-country operators whose AI output is used in the EU. (EU AI Act, Article 2). All AI processing personal data of Moroccan citizens must comply with Loi 09-08. (CNDP)

10

Does the board have a dashboard for AI value and risk?

Board Test

What metrics does the board review quarterly — value capture, pilot-to-production rate, coverage, incidents?

Boards that govern AI without structured metrics are governing blind. The dashboard should track both sides: value realization (ROI, pilot-to-production rate, cost savings) AND governance health (coverage, incidents, audit readiness). Most boards have neither. That is the gap this pack is designed to close.

Set up a strategic AI council involving business leaders, CHRO, CDO, and CIO. This council should coordinate investments and implement value-tracking based on KPIs tied to business outcomes. (KPMG Agentic AI Advantage, 2025)

Downloadable Template

AI Strategy & Governance Board Pack (2026)

An 8-page diagnostic — not a report, a mirror. Pages 1–3 cover value and strategy. Pages 4–8 cover governance and oversight. Most teams complete 3–4 pages with confidence. The rest reveals where the gaps are.

01

AI Value Capture Scorecard — initiatives, investment, actual vs. expected return

02

AI Operating Model Canvas — structure, roles, decision flow

03

AI System Inventory — including shadow AI discovery prompts

04

Regulatory Applicability Check — EU AI Act, GDPR, Loi 09-08, client requirements

05

Autonomy Tiers — Assist → Recommend → Execute → Autonomous, with controls

06

Decision Rights Matrix — who proposes, approves, overrides, audits

07

Governance & Value KPIs — value realization, coverage, incidents, readiness

08

90-Day Sprint Plan — Discover → Design → Operationalize

Five mistakes CEOs make with AI governance

01

Treating AI as a tool acquisition problem

Buying platforms does not create capability. Without governance, decision rights, and accountability, tools remain expensive experiments.

02

Running pilots without executive mandate

Uncoordinated pilots produce demos, not impact. AI needs board-level sponsorship and a structured intake pipeline.

03

Confusing innovation risk with governance risk

The real exposure comes when AI enters production without oversight. Innovation risk is a board presentation. Governance risk is a regulator at the door.

04

Underestimating shadow AI

In 69% of cases, AI enters organizations through individuals. The risk is not future adoption — it is unmanaged experimentation happening right now.

05

Delegating AI entirely to IT

AI affects legal, workforce, compliance, and strategy. It requires board-level oversight — not a technology committee.

Ready to govern AI with clarity?

Start with a 15-minute diagnostic call. We will assess where you stand on these 10 questions and identify the 3 highest-priority gaps.