Case · 2024
Multiple Enterprises
Deloitte survey of 2,773 director-to-C-suite respondents on GenAI deployment
Maturity stage
Scaling
Use-case type
Multiple
Function
Multiple
Company size
Enterprise
Evidence
IT most mature function; <30% of experiments will be fully scaled
ROI / outcome figure
78% plan increased AI spending
What RAPID would have flagged
Failure mode: Measurement — Inability to track AI outcomes, unclear attribution, or missing baseline metrics that prevent learning and justification
Dimensions a pre-deployment RAPID assessment would have surfaced
- Measurement Maturity (low: scored below 50%)
Mitigations the framework recommends
- Define success metrics and baselines before deployment (DeLone & McLean IS Success Model)
- Build real-time measurement dashboards tracking AI-specific KPIs
- Isolate AI contribution through A/B testing or controlled rollouts
- Establish quarterly ROI review cadence with executive stakeholders
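The third mitigation, isolating AI contribution through controlled rollouts, can be sketched as a simple two-proportion comparison between a control group (no AI) and a treatment group (AI-assisted). The function below is a minimal illustration, not part of the RAPID framework itself; the metric names and the example counts are hypothetical.

```python
import math

def ab_lift(control_conv, control_n, treat_conv, treat_n):
    """Estimate the AI treatment's lift over control on a conversion-style
    KPI, with a two-proportion z-test for statistical significance."""
    p_c = control_conv / control_n          # baseline conversion rate
    p_t = treat_conv / treat_n              # AI-assisted conversion rate
    lift = (p_t - p_c) / p_c                # relative improvement vs baseline
    # pooled rate and standard error under the null of no difference
    p_pool = (control_conv + treat_conv) / (control_n + treat_n)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / control_n + 1 / treat_n))
    z = (p_t - p_c) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return lift, z, p_value

# Hypothetical rollout: 1,000 cases per arm, 12% vs 15% conversion
lift, z, p = ab_lift(control_conv=120, control_n=1000,
                     treat_conv=150, treat_n=1000)
print(f"lift={lift:.1%} z={z:.2f} p={p:.4f}")
```

Without a pre-deployment baseline like `control_conv`, the lift term is undefined, which is exactly the measurement failure this case flags.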
Dimensions this case illuminates
Strategic Alignment
Are GenAI investments tied to measurable business objectives?
More cases from Multiple Enterprises
Multiple Fortune 500 · CASE-003 · Failure
30% of GenAI projects abandoned after POC by end of 2025
Multiple Fortune 500 · CASE-004 · Failure
95% of companies report GenAI pilots falling short
UW academic study (Mistral AI, Salesforce, Contextual AI open-source LLMs) · CASE-012 · Failure
White-associated names preferred in 85% of comparisons vs 9% for Black-associated names; demonstrates that off-the-shelf LLMs carry resume-screening bias
Apply this to your team
Take the RAPID assessment to see whether your organisation is exposed to the same failure modes as this case, or already has the discipline that would have avoided them.