
When AI Errors Cost Lives, Customers, Or Markets: The Board’s Guide to Risk-Proportionate Governance


Week 1 exposed the 70/14 paradox: structures without readiness. Week 2 revealed invisible AI operating ungoverned. Week 3 showed a transformation from observer boards to multiplier boards.


Massive reader response raised a critical question: are all AI risks equal?

Early February 2026 answered with brutal clarity.


Threats to existing business models

On February 3, Anthropic released industry-specific plugins for its Claude Cowork automation platform. Within hours, $285 billion in market value vanished across software, legal services, and financial technology stocks. Thomson Reuters lost $8.2 billion in a single session. The "SaaSpocalypse" revealed a fundamental governance challenge: Claude's entry into vertical segments had suddenly rendered existing business models obsolete.


This isn't theoretical. No amount of standard governance could have anticipated this threat, where AI from an upstream foundation model provider completely eroded incumbents' value-creation capabilities. As Serish Venkata Gandikota said in his post, "Intelligence had become a commodity."


The distinction between tolerable AI errors and catastrophic AI failures determines whether boards enable innovation or create unacceptable stakeholder harm.


The Risk Tier Framework

Not all AI systems require identical governance. Four risk tiers demand differentiated approaches, but one towers above the rest: business model redundancy. Patient harm generates settlements. Customer loss generates churn. Business model obsolescence generates permanent organisational death.


Tier 1: Manageable Errors (Back-office AI, process automation)

Where errors create inefficiency but remain recoverable. Internal document generation, meeting summaries, data analysis. Governance enables experimentation whilst maintaining oversight. Errors trigger process improvements, not stakeholder harm.


Tier 2: Relationship-Ending Errors (Customer-facing AI, brand-defining systems)

Poor experiences permanently destroy customer lifetime value. IVR systems that frustrate customers, chatbots providing false information, agentic AI driving users to competitors. The error cost isn't an immediate financial loss but a permanent relationship breakdown.


Tier 3: Catastrophic/Unacceptable Errors (Medical diagnosis, safety-critical systems)

Where false positives or missed detections create catastrophic outcomes. Patient harm, regulatory sanctions, litigation exposure. A single diagnostic error in sepsis prediction can mean patient death within hours. These systems demand heavy governance before deployment.


Professor Randal Peterson, London Business School, articulates why differentiation is a leadership imperative: "If leaders treat AI as a tool, they'll manage it tactically. If they treat it as a system that reshapes behaviour and power, they'll govern it strategically. That difference will matter more than the technology itself."


Tier 4: Permanent Disruption of Business Model

The February Anthropic event illustrates competitive disruption risk that transcends operational errors. When foundation model companies move up the value chain, they can eliminate entire market categories. While catastrophic errors create patient harm that boards can address through one-off settlements, business model redundancy creates permanent value destruction with no recovery path.


This isn't governance failure but market structure transformation boards must anticipate.


Case study: Anthropic Claude Cowork

Tier 1: Where Errors Are Manageable

Back-office AI operates differently. Errors in internal document generation or data analysis create inefficiency, not catastrophic outcomes. This doesn't mean ungoverned. Prashant Rathi's analysis of 50+ deployments revealed: "What people see is 20%. What they don't see (governance, compliance, monitoring) is 80% of what makes it sustainable."

Even manageable-risk AI requires clear ownership, data quality controls, approval workflows, continuous monitoring, and incident playbooks. The distinction: back-office AI governance can be lighter, faster, experimentation-friendly whilst maintaining controls. The board's role shifts from detailed approval to ensuring appropriate frameworks exist.


Tier 2: When Errors Destroy Customer Relationships

Customer-facing AI poses relationship risk rather than physical harm. Air Canada's chatbot (2024) provided false bereavement-discount information, creating liability and setting a legal precedent: companies own their AI's misinformation. Qualtrics' 2026 Consumer Experience Trends report found that 20% of consumers using AI customer service saw no benefit, with a failure rate four times the average, while concern about data misuse rose 8 points to 53%. The systematic governance failure is deploying AI without proper testing or monitoring. As Daniel Hulme notes: "When agents message customers and make decisions quickly, verification becomes a board-level issue."


Case study: Epic Sepsis Model


Tier 3: When Errors Are Catastrophic

Epic's Sepsis Model claimed roughly 80% accuracy in predicting sepsis across US hospitals, where sepsis is involved in one-third of hospital deaths. Independent validation by the University of Michigan (JAMA Internal Medicine, 2021) found an area under the curve of only 0.63: the model missed two-thirds of sepsis cases while generating alerts for 18% of all hospitalised patients. Clinicians would need to investigate 109 alerts to find one true case.


Hospitals deployed the model without independent validation, risking patient lives. When errors can lead to death, trusting vendor claims is itself a governance failure.


FDA draft guidance issued in January 2025 calls for performance monitoring, bias testing, and continuous surveillance – governance that matches the risk through independent validation, ongoing checks, and clear escalation protocols.


Case study: Customer service AI failures


Tier 4: Permanent Disruption of Business Model

The February 2026 Anthropic event wiped out $285 billion in market value when foundation model plugins commoditised legal, compliance, and financial services work overnight. Thomson Reuters dropped 18%, RELX 14%, Wolters Kluwer 13%. This exceeds catastrophic error: business viability itself collapses. As Serish noted, "Intelligence may be becoming a commodity. There isn't a protective moat left anymore based entirely on internal resources and capabilities." Open-source files on GitHub triggered the steepest software stock selloff since April 2025's tariff crisis, and markets now price in foundation model companies capturing application revenue directly. No internal governance can prevent an external provider from erasing your value proposition. Boards must evaluate whether their competitive position remains defensible as intelligence commoditises and providers move up the value chain.


The Differentiated Governance Imperative

Boards face a choice: govern all AI uniformly – constraining innovation where risk is low while under-protecting where it is high – or adopt risk-based governance that balances speed and safety.

Differentiated governance offers a competitive edge. Tier 1 favours experimentation and fast innovation. Tier 2 requires brand protection, real-time customer feedback, and human escalation paths. Tier 3 calls for independent validation, performance tracking against outcomes, and human review. Tier 4 demands continuous monitoring of upstream providers, planning for commoditisation, and strategic repositioning.


The Board’s Action Framework

Implementing risk-proportionate governance requires four systematic shifts:

First: Classify All AI Systems by Risk Tier

Audit existing AI deployments. Assess the potential harm from errors in each system. Place every AI application in the appropriate risk category. Review quarterly as systems evolve.


Second: Implement Differentiated Approval Processes

Tier 1: Technical team approval with governance review

Tier 2: Marketing and board approval with brand impact assessment

Tier 3: Executive and board approval with external validation

Tier 4: Board and strategy committee review with external competitive analysis


Third: Establish Risk-Appropriate Monitoring

Tier 1: Operational efficiency metrics

Tier 2: Customer satisfaction, lifetime value impact, brand sentiment

Tier 3: Patient safety outcomes, regulatory compliance

Tier 4: Competitive positioning, value chain vulnerability, foundation model provider roadmaps


Fourth: Document Risk Tolerance Explicitly

Define acceptable error rates per tier. Establish escalation triggers. Create response protocols for governance failures. Review quarterly.
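The four shifts above lend themselves to a simple register that a governance team could maintain. The sketch below is illustrative Python only: the tier numbering and requirements come from this article's lists, but the role names, data structure, and `missing_approvals` helper are hypothetical – a real approval workflow would live in a GRC platform, not a script.

```python
from dataclasses import dataclass, field

# Approval and monitoring requirements per tier, transcribed from the
# second and third shifts above; exact role names are illustrative.
TIER_REQUIREMENTS = {
    1: {"approvals": {"technical team", "governance review"},
        "monitoring": ["operational efficiency"]},
    2: {"approvals": {"marketing", "board"},
        "monitoring": ["customer satisfaction", "lifetime value", "brand sentiment"]},
    3: {"approvals": {"executive", "board", "external validation"},
        "monitoring": ["safety outcomes", "regulatory compliance"]},
    4: {"approvals": {"board", "strategy committee", "external competitive analysis"},
        "monitoring": ["competitive positioning", "value chain vulnerability",
                       "provider roadmaps"]},
}

@dataclass
class AISystem:
    name: str
    tier: int                        # 1 = manageable ... 4 = business model risk
    approvals: set = field(default_factory=set)

def missing_approvals(system: AISystem) -> set:
    """Return the sign-offs still outstanding for this system's tier."""
    return TIER_REQUIREMENTS[system.tier]["approvals"] - system.approvals

# Example: a customer-facing chatbot (Tier 2) approved only by marketing
chatbot = AISystem("support-chatbot", tier=2, approvals={"marketing"})
print(missing_approvals(chatbot))    # board sign-off still outstanding
```

Even this toy version makes the quarterly review concrete: re-run the check whenever a system's tier changes, and escalate anything with outstanding approvals.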


This framework doesn't slow AI deployment. It accelerates appropriate adoption whilst preventing unacceptable harm.


The Choice

The Anthropic disruption catalysed thinking about differentiated risk. But the lesson extends beyond market volatility. Boards face AI systems creating fundamentally different consequences when they fail.


A sepsis algorithm missing two-thirds of cases creates patient harm boards cannot accept. A customer service chatbot swearing at frustrated customers destroys relationships boards spent years building. A back-office document generator requiring revision cycles creates manageable inefficiency.


Risk-proportionate governance recognises these distinctions. It enables back-office experimentation whilst demanding heavy oversight for medical deployments. It protects customer relationships whilst accelerating internal automation. Most critically, it recognises business model disruption as the ultimate risk transcending operational failures.


The ready 14% implemented these frameworks. Their governance enables both speed (in low-risk areas) and safety (in high-stakes domains). They discovered differentiated governance multiplies organisational capability by applying appropriate oversight intensity to each risk tier.


For other boards, the framework is clear: classify AI by consequence severity, govern proportionately to stakeholder harm, and measure outcomes, not activity. The Anthropic event reminded everyone that AI reshapes competitive dynamics rapidly. Risk-proportionate governance ensures boards respond with both agility and accountability.


Not all AI risks are equal. Boards governing them identically will either constrain innovation everywhere or fail to protect stakeholders anywhere. The differentiation imperative isn't optional. It's a fiduciary duty.


What are the reflection questions for your board?

 

About The Series

This is a Week 4 special of our AI Governance series. The article emerged from massive reader response and recent market events revealing the critical importance of risk differentiation. Next article: we return to our planned series with an examination of governance measurement frameworks.

About the Author: Viren Lall is Managing Director of ChangeSchool, an EFMD award-winning executive education delivery partner. ChangeSchool develops transformational AI capability for leaders and boards through discovery-based approaches that bridge academic rigour with operational reality.



 

Sources


  1. Bloomberg (2026). “Anthropic AI Tool Sparks Selloff From Software to Broader Market.” February 3, 2026.

  2. Wall Street Journal (2026). “The Week Anthropic Tanked the Market and Pulled Ahead of Its Rivals.” February 2026.

  3. FDA (2025). “Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations.” Draft Guidance, January 2025.

  4. Wong, A. et al. (2021). “External Validation of a Widely Implemented Proprietary Sepsis Prediction Model in Hospitalized Patients.” JAMA Internal Medicine, 181(8):1065-1070.

  5. Qualtrics XM Institute (2025). “2026 Consumer Experience Trends Report.” Survey of 20,000+ consumers, Q3 2025.

  6. China State Administration for Market Regulation (2024). Consumer complaints data on intelligent customer service.

  7. Bipartisan Policy Center (2024). “FDA Oversight: Understanding the Regulation of Health AI Tools.” December 2024.


 Are you governing AI, or governing your anxiety about AI?


ChangeSchool's research into AI governance best practices informs our executive education programmes for boards and leadership teams across engineering, manufacturing, and education sectors. Our discovery-based approach helps organisations move from governance structures to governance readiness.







