The 70/14 Paradox: Why governance structures don't equal governance readiness

The boardroom scene is familiar. The agenda reaches AI governance. The chair asks for an update.
"Committee established," the governance lead confirms. "Policy approved," adds the chief risk officer. "Principles published," reports the chief executive.
Three ticks. The board nods. AI governance: done.
Then a non-executive director asks: "Which AI systems are we actually using, and how are they performing?"
Silence.
The CTO ventures: "Three main systems: a customer service chatbot, predictive analytics, fraud detection."
Reality, discovered months later: 47 AI-enabled systems operating across the organisation. HR screening tools making hiring decisions. Finance software forecasting budgets. Supply chain applications optimising logistics.
The CTO wasn't lying. The CTO didn't know.
This is the 70/14 paradox, and it's everywhere.
The governance gap
ChangeSchool's analysis of six authoritative board playbooks reveals a troubling pattern: most organisations have built impressive governance structures, whilst operational readiness remains dangerously low.
Sedgwick's 2026 Global Risk Report, a survey of 300 Fortune 500 senior leaders, is stark: 70% report having AI risk committees, 67% report progress on AI infrastructure, and 41% have dedicated AI governance teams. Yet only 14% say they are fully ready for AI deployment.
McKinsey, citing NACD, deepens the concern: fewer than 25% have board-approved AI policies with clear scaling rules. Only 15% receive AI-related metrics. Meanwhile, 66% of directors report limited AI knowledge, and nearly one in three say AI doesn't regularly appear on their board agenda.
Yet 88% of organisations use AI in at least one function, and 67% expect to invest more in AI over the next three years (McKinsey).
The arithmetic doesn't work. Organisations are deploying AI faster than they're building the capability to govern it.
Dr Rumman Chowdhury, former Director of Machine Learning Ethics, Transparency, and Accountability (META) at Twitter and US Science Envoy for AI, notes: "Most organisations have principles. Few have practice."
This isn't a failure of intention; it's governance theatre. Boards have created visible oversight structures without the operational foundations to make them effective.
What's typically missing, even in organisations with formal governance?
Comprehensive AI inventories. Most organisations can't answer which AI systems are operating, where they are deployed, the decisions they influence, or the data they process. You can't govern what you can't see. (A minimal register entry is sketched after this list.)
Usage monitoring. Policy says "monitor for bias and drift", but systems to actually detect when models change behaviour or produce discriminatory outcomes don't exist. Which systems are staff using? How frequently? With what error rates?
Clear escalation protocols. When AI fails or creates harm, who's notified? How quickly? Who can pause the system? What investigation follows?
Measurable outcomes. Boards receive status updates ("piloting AI in three areas"), not outcome tracking ("AI improved margin by 3.2% with quality indicators holding").
These aren't exotic requirements; they're the foundations that convert governance structures into governance reality.
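To make these foundations concrete, here is a minimal sketch in Python of what one entry in an AI system register might look like, with a simple check for stale bias testing. The field names, risk tiers, and 90-day review window are illustrative assumptions, not a prescribed standard.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Illustrative entry in an AI system register. Field names, risk tiers,
# and the review window are assumptions for this sketch, not a standard.

@dataclass
class AISystemRecord:
    name: str                        # e.g. "CV screening tool"
    owner: str                       # accountable business owner
    vendor: Optional[str]            # supplier, if externally procured
    decisions_influenced: list[str]  # e.g. ["shortlisting candidates"]
    data_processed: list[str]        # e.g. ["CVs", "interview notes"]
    risk_tier: str                   # "low" | "medium" | "high"
    last_bias_test: Optional[date]   # None means never tested
    escalation_contact: str          # who is notified, and who can pause the system
    in_production: bool = True

def overdue_for_review(record: AISystemRecord, max_age_days: int = 90) -> bool:
    """Flag systems whose bias testing is missing or older than the review window."""
    if record.last_bias_test is None:
        return True
    return (date.today() - record.last_bias_test).days > max_age_days
```

Even a register this small would let the CTO answer the non-executive director's question: which systems are running, who owns them, and which decisions they influence.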
Nithya Das, General Manager for Governance and Chief Legal Officer at Diligent, frames the challenge: "The next era of compliance will be defined not by checklists, but by confidence." Confidence requires evidence that governance functions in operations, not just on paper.

Why 2026 is the inflexion point
Governance theatre might have been sustainable when AI was experimental. 2026 marks the end of that grace period.
Regulatory pressure intensifies. Colorado's SB 24-205 (the Colorado Artificial Intelligence Act, effective June 2026) requires developers and deployers of high-risk AI systems to use reasonable care to protect against algorithmic discrimination. Texas's Responsible AI Governance Act is active. The EU AI Act's phased enforcement includes fines up to €35 million or 7% of global turnover. The UK's FCA and PRA publish AI governance expectations, with the ICO enhancing guidance on AI and data protection.
Joe Knight, Senior Managing Director at FTI Consulting: "AI governance in 2026 is moving from high-level principles to enforceable rules. Documentation, risk classification, and model lifecycle controls are shifting from best practices to baseline requirements."
AI shifts from experimental to operational. The Sedgwick report shows 67% of Fortune 500 leaders reporting progress on AI infrastructure. That spending isn't funding more pilots; it's scaling AI into production systems that make consequential decisions. Experimental AI that fails is a learning opportunity. Production AI failures are regulatory incidents, customer harm, or safety threats.
Stakeholder scrutiny grows as customers demand transparency, employees raise concerns, investors incorporate AI governance into ESG assessments, civil society monitors bias, and media investigations cause reputational damage. Organisations with committees but no capabilities now face enforceable regulations, production deployments, and intense scrutiny with governance that looks solid on paper but can't explain what their AI systems are doing.
What the ready 14% do differently
The organisations reporting full readiness didn't get there through more elaborate committees or policies. They closed the operational gap through three key shifts.
1. They Embed Governance in Workflows
The Institute of Directors emphasises that AI governance must integrate into the daily work of developing, deploying, and monitoring AI, not exist as external oversight.
The ready 14% have governance gates built into deployment workflows. Before any AI moves from pilot to production, risk classification, bias testing, and data quality validation are required.
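As a minimal sketch of such a gate, assuming an illustrative three-tier risk model and check names (the real criteria would come from the organisation's own policy):

```python
# Illustrative pre-production governance gate. Risk tiers, check names,
# and the mapping between them are assumptions for this sketch.

REQUIRED_CHECKS = {
    "low":    {"risk_classification"},
    "medium": {"risk_classification", "bias_testing"},
    "high":   {"risk_classification", "bias_testing", "data_quality_validation"},
}

def gate_decision(risk_tier: str, completed_checks: set[str]) -> tuple[bool, list[str]]:
    """Return (approved, missing_checks) for a system requesting promotion to production."""
    missing = sorted(REQUIRED_CHECKS[risk_tier] - completed_checks)
    return (not missing, missing)

# Example: a medium-risk pilot that has only been risk-classified is held back.
approved, missing = gate_decision("medium", {"risk_classification"})
print(approved, missing)  # False ['bias_testing']
```

The point is less the code than its placement: the check runs inside the deployment workflow, so governance happens by default rather than on request.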
A financial services firm redesigned its deployment process around embedded governance, with clear low/medium/high-risk criteria. Result: 12 deployments in Q1 2025, four times the previous year's rate, delivered with stronger governance, not despite it.
2. They Measure Outcomes, Not Activity
Joe Knight's emphasis: shift from "Do we have an AI policy?" to "Can we demonstrate our policy is working?"
The 14% track tangible metrics: AI system performance, bias testing, incident frequency, response times, override rates, and human rejection of AI recommendations.
Governance theatre reports: "We have an AI ethics committee that meets quarterly and has approved responsible AI principles."
The ready 14% report: "Our governance covered 23 systems this quarter. Bias testing found issues in two; both were fixed before release. Performance degradation in one triggered escalation. No high-risk system is operating without governance review."
Activity versus outcomes. The difference is demonstrable effectiveness.
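As a sketch of how that style of reporting could be generated from an AI register rather than written by hand, assuming the register has been extended with quarterly findings (the field and metric names are illustrative):

```python
# Illustrative quarterly metrics derived from an AI system register.
# Field and metric names are assumptions for this sketch.

def quarterly_governance_metrics(records: list[dict]) -> dict:
    """Summarise coverage and findings rather than listing activities."""
    return {
        "systems_covered": len(records),
        "bias_issues_found": sum(r.get("bias_issues", 0) for r in records),
        "bias_issues_fixed_pre_release": sum(r.get("bias_issues_fixed", 0) for r in records),
        "incidents": sum(r.get("incidents", 0) for r in records),
        "high_risk_without_review": sum(
            1 for r in records if r.get("risk_tier") == "high" and not r.get("reviewed", False)
        ),
    }

example = [
    {"risk_tier": "high", "bias_issues": 1, "bias_issues_fixed": 1, "reviewed": True},
    {"risk_tier": "medium", "incidents": 1, "reviewed": True},
]
print(quarterly_governance_metrics(example))
# {'systems_covered': 2, 'bias_issues_found': 1, 'bias_issues_fixed_pre_release': 1,
#  'incidents': 1, 'high_risk_without_review': 0}
```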
3. They Invest in Capabilities, Not Just Frameworks
The 70% invested in visible governance: committee establishment, policy drafting, and the publication of principles. The ready 14% invested in capabilities:
People, processes, tools, and data foundations for AI risk assessment. Internal expertise, so the organisation isn't dependent on vendors to explain what its AI does. Workflow integration for governance gates. Discovery mechanisms that surface invisible AI. Monitoring systems that provide real-time visibility. Data quality and lineage documentation.
Some experts estimate that 90% of AI governance work is data management.
An illustration: a manufacturing company invested £150,000 in governance capability. Within six months it had a comprehensive AI inventory, systematic risk classifications, and bias testing covering its customer-facing systems. When a competitor faced £600,000 in regulatory penalties, the ROI became clear: against the £150,000 invested, avoiding that outcome alone represents roughly a 4x return.
From theatre to readiness: The essential shifts

For organisations recognising themselves in the 70%, the path forward requires four operational shifts:
Implement continuous audit, not annual review. Real-time inventory maintenance, not annual stocktaking. Quarterly risk reviews. Monitoring that provides ongoing visibility, not point-in-time snapshots.
Establish scaling rules and escalation triggers. McKinsey, citing NACD, found that fewer than 25% have defined scaling rules: the criteria for when pilots earn capital to scale. Without them, bespoke board debates delay deployment. With them, clear success criteria enable faster scaling.
Deploy outcome dashboards. Joe Knight's framework moves from qualitative assurance to quantitative demonstration. Dashboards showing system coverage, bias testing status, incident tracking, governance efficiency, and financial impact.
Recognise governance professionals' bridging role. Not creating more policies, but translating board requirements into operational processes, building monitoring systems that function, and measuring effectiveness. The governance professional's value: capability building, not compliance policing.
The choice
The governance structures from 2024-2025 were the necessary first steps. By 2026, those structures must evolve into systems. Regulatory enforcement (Colorado SB 24-205 in June, the EU AI Act, UK FCA/PRA expectations), increased investment (the 67% reporting infrastructure progress in Sedgwick's survey), and stakeholder demands are converging.
The 70% with structures but without readiness face a choice:
Continue governance theatre, hoping committees and policies suffice as scrutiny increases. This widens the gap between oversight and reality, risking regulatory action, competitive disadvantage, or stakeholder crisis.
Or join the 14% operationalising governance. Shift investment from governance design to implementation. Build capabilities such as inventory management, monitoring, escalation, outcome measurement, and workflows to make policies work. View governance as operational infrastructure, not just documentation.
The ready 14% discovered what others will learn: strong governance doesn't slow AI deployment; it accelerates it. Clear rules enable faster decisions than endless debates. Monitoring catches problems early. Stakeholder trust compounds into a competitive advantage.
For governance professionals, 2026 presents both challenges and opportunities. The challenge: bridging the 70/14 gap within your organisation. The opportunity: demonstrating governance's strategic value, not as the function that slows things down, but as the function that enables confident progress by building capabilities that make responsible AI scalable.
Are you governing AI, or governing your anxiety about AI?
ChangeSchool's research into AI governance best practices informs our executive education programmes for boards and leadership teams across engineering, manufacturing, and education sectors. Our discovery-based approach helps organisations move from governance structures to governance readiness.
About the Series: This is article 1 of a six-week series examining AI governance for boards.
About the Author: Viren Lall is Managing Director of ChangeSchool, an EFMD award-winning executive education delivery partner. ChangeSchool develops transformational AI capability for leaders and boards through discovery-based approaches that bridge academic rigour with operational reality.
Sources:
Sedgwick (2025). "2026 Global Risk Report: Forecasting Report." Survey of 300 senior leaders at Fortune 500 companies (CEO, COO, CFO, CHRO, CRO, EVPs, SVPs, VPs, and directors). Available at: www.sedgwick.com/global-risk-report/. Reported in: Estrada, S. (18 December 2025). "AI governance becomes a board mandate as operational reality lags." Fortune. https://fortune.com/2025/12/18/ai-governance-becomes-board-mandate-operational-reality-lags/
Institute of Directors (IoD) (2025). "AI Governance in the Boardroom." IoD Business Paper.
McKinsey & Company (2024-2025). AI governance research and board posture frameworks.
BCG (2025). Board governance and AI transformation research.
EY (2025). "Board of the Future Study."
KPMG Board Leadership Centre (October 2025). "Boardroom View on GenAI Adoption."
Australian Institute of Company Directors (AICD) (2025). "AI: Use by Directors and Boards."
Expert quotes from:
- Dr Rumman Chowdhury, US Science Envoy for AI, former Director of META at Twitter
- Nithya Das, General Manager for Governance & Chief Legal Officer, Diligent
- Joe Knight, Senior Managing Director, FTI Consulting

