Plain-English definitions for the framework, the theory, and the dataset.
Entries cover RAPID itself, the academic models it draws on, the metrics surfaced on results pages, and the source-tier conventions used in /sources. Every entry has a stable anchor (e.g. /glossary#tam) so other pages can deep-link.
RAPID
A five-dimension maturity framework for GenAI investment - Readiness, Alignment, Portfolio, Impact, Diffusion - that scores an organisation against a 44-case enterprise dataset.
Each letter maps to a foundational management-research model: Readiness draws on UTAUT, Alignment on TAM, Portfolio on Markowitz, Impact on DeLone & McLean, Diffusion on Rogers and Kotter.
Maturity levels
Buckets that summarise an overall RAPID score: Nascent (0-25%), Developing (26-50%), Established (51-75%), Advanced (76-100%).
Levels are coarse so cross-organisation comparison is meaningful at scale; per-dimension percentages give the precision needed for a roadmap.
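In code, the bucketing is a straight threshold lookup. A minimal sketch - the function name and the exact handling of boundary values are assumptions; the glossary gives only the four ranges:

```python
def maturity_level(score_pct: float) -> str:
    """Bucket an overall RAPID score (0-100%) into a maturity level.

    Thresholds follow the glossary ranges: Nascent 0-25, Developing
    26-50, Established 51-75, Advanced 76-100.
    """
    if score_pct <= 25:
        return "Nascent"
    if score_pct <= 50:
        return "Developing"
    if score_pct <= 75:
        return "Established"
    return "Advanced"
```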
Adoption Audit
A five-element checklist that predicts deployment success: business objectives, performance tracking, change management, iterative deployment, and a named executive sponsor.
Organisations with four or more elements see ~82% success in the dataset; zero to two drops to 23%.
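The audit reduces to counting which of the five elements are present and reading off the observed band. A sketch under those numbers - the element keys are illustrative, and the band for exactly three elements is not quantified in the glossary:

```python
# The five audit elements named in the checklist (keys are illustrative).
AUDIT_ELEMENTS = (
    "business_objectives",
    "performance_tracking",
    "change_management",
    "iterative_deployment",
    "executive_sponsor",
)

def audit_success_band(present: set) -> str:
    """Map a set of present audit elements to the dataset's observed band.

    Only the two quoted bands (~82% at four-plus elements, ~23% at
    zero to two) are given; exactly three falls between them.
    """
    n = sum(1 for e in AUDIT_ELEMENTS if e in present)
    if n >= 4:
        return "~82% success"
    if n <= 2:
        return "~23% success"
    return "between bands (not quantified)"
```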
§ 02
Theory.
Technology Acceptance Model (TAM)
Davis (1989). Predicts technology adoption from perceived usefulness and perceived ease of use. Strongest predictor of whether a tool gets used at all.
IS Success Model
DeLone & McLean (2003). Six interdependent dimensions of IS success: information quality, system quality, service quality, use, user satisfaction, and net benefits.
Productivity paradox
Brynjolfsson & Hitt (1993). IT productivity gains require organisational change and arrive with a 3-5 year lag. AI productivity figures show the same pattern - per-task minute savings do not aggregate automatically into firm-level productivity.
A specific case or academic citation that supports a claim in the framework. The Task Mapper labels every quantitative claim as either anchored or illustrative.
Case ID
Stable identifier (CASE-001 through CASE-045) for each entry in the 44-case dataset. Used in citations and cross-references throughout the site.
Failure mode
One of six categories where GenAI deployments tend to break down: data, technical, organisational, strategic, measurement, economic.
Each failure mode is mapped to specific cases in the dataset (e.g. CASE-010 Amazon hiring tool is an organisational + data failure).
Use-case type
One of six GenAI deployment archetypes: Automation, Generation, Analysis, Augmentation, Chatbots, Decision Support. Each has a verified success-rate band derived from the dataset.
Design Science Research Methodology
Peffers et al. (2007). The development methodology behind RAPID: identify problem, define objectives, design artefact, demonstrate, evaluate, communicate.
§ 04
Metric.
Success rate
Share of measurable cases (success + failure + partial) in a cohort that achieved their stated objective. Excludes Too Early outcomes.
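As a formula: successes divided by the measurable total, with Too Early dropped from both numerator and denominator. A sketch - treating partial outcomes as not having achieved the objective is an assumption here:

```python
def success_rate(success: int, failure: int, partial: int) -> float:
    """Success rate over measurable cases only.

    Denominator = success + failure + partial, per the glossary;
    Too Early outcomes are simply never passed in. Counting partials
    in the denominator but not the numerator is an assumption.
    """
    measurable = success + failure + partial
    if measurable == 0:
        raise ValueError("no measurable cases in cohort")
    return success / measurable
```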
Percentile
Where an organisation's score sits relative to peers in the dataset. The 60th percentile means 60% of peers scored at or below this point.
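The "at or below" convention pins the calculation down to a count over the cohort. A sketch - the inclusive treatment of ties matches the wording above, but the site's exact tie-breaking is not specified:

```python
def percentile_rank(score: float, peer_scores: list) -> float:
    """Percentage of peers scoring at or below `score` (inclusive of ties)."""
    if not peer_scores:
        raise ValueError("empty cohort")
    at_or_below = sum(1 for s in peer_scores if s <= score)
    return 100.0 * at_or_below / len(peer_scores)
```

With peers scoring 10, 20, 30, 40, 50, a score of 30 sits at the 60th percentile: three of five peers are at or below it.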
ROI range
Industry-specific projected return on GenAI investment, derived from thesis Chapter 4. Tech 18-24%, Financial Services 15-22%, Healthcare 12-18% are the most-studied bands.
Time to ROI
Typical payback window for GenAI investments by industry. Ranges from 12-18 months (tech, retail) to 24-36 months (government, regulated sectors).
POC abandonment
Share of GenAI proofs-of-concept that never reach production. Gartner forecasts 30% by end of 2025; BCG finds only 22% of organisations advance past POC at all.
Deployment stage
Where a case sits on the maturity arc: Pilot/POC, Scaling, Production, Optimization, or Abandoned. The 44-case dataset is weighted toward Production and Optimization.
§ 05
Dataset.
44-case dataset
The curated, primary-source-linked corpus behind the framework. Each case has an organisation, period, industry, function, outcome, evidence statement, and source URL.
Maintained as a separate citable artefact so it can evolve independently of the website. See /sources for the searchable view.
Confidence interval
Width of the band around an industry benchmark, reflecting sample size. Industries with fewer than 10 cases (Healthcare, Manufacturing, Government) carry wider intervals.
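Why fewer cases mean wider bands: interval width shrinks with the square root of sample size. A sketch using a normal-approximation interval for a success-rate benchmark - the site's actual interval method is not specified in the glossary, so this is one standard choice:

```python
import math

def proportion_ci(successes: int, n: int, z: float = 1.96):
    """Approximate 95% CI for a success-rate benchmark.

    Half-width = z * sqrt(p * (1 - p) / n), so quadrupling the case
    count roughly halves the band. Clamped to [0, 1].
    """
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return (max(0.0, p - half), min(1.0, p + half))
```

For the same underlying rate, 9 cases produce roughly twice the band width of 36 cases, which is why Healthcare, Manufacturing, and Government benchmarks carry wider intervals.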
Blind spot
On the team-comparison view (/compare/team): a dimension where every participant scored below 40%. Likely the cheapest place for the team to improve together.
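The rule is a universal quantifier over participants, dimension by dimension. A sketch - the data shape (dimension name mapped to per-participant percentages) is an assumption; the 40% cut-off is the one stated above:

```python
def blind_spots(scores: dict, threshold: float = 40.0) -> list:
    """Dimensions where every participant scored below the threshold.

    `scores` maps dimension name -> list of per-participant percentages.
    Empty dimensions are skipped rather than flagged.
    """
    return [dim for dim, vals in scores.items()
            if vals and all(v < threshold for v in vals)]
```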
Want to cite the framework itself? Use the methodology page's Cite-this widget for BibTeX, APA, and MLA references. Each entry in /sources also has a Cite-this affordance for case-level references.