Glossary

Plain-English definitions for the framework, the theory, and the dataset.

Entries cover RAPID itself, the academic models it draws on, the metrics surfaced on results, and the source-tier conventions used in /sources. Every entry has a stable anchor (e.g. /glossary#tam) so other pages can deep-link.

Jump to
RAPID · Readiness (R) · Alignment (A) · Portfolio (P) · Impact (I) · Diffusion (D) · Maturity Level · Adoption Audit · Technology Acceptance Model (TAM) · UTAUT · Kotter 8-Step Change Model · DeLone & McLean IS Success · Diffusion of Innovations · IT Productivity Paradox · Tier 1 source · Tier 2 source · Anchor source · Case ID · Failure mode · Use-case type · Design Science Research Methodology · Success rate · Percentile · ROI range · Time to ROI · POC abandonment · Deployment stage · 44-case dataset · Confidence interval · Blind spot
§ 01

Framework.

RAPID

A five-dimension maturity framework for GenAI investment - Readiness, Alignment, Portfolio, Impact, Diffusion - that scores an organisation against a 44-case enterprise dataset.

Each letter maps to a foundational management-research model: Readiness draws on UTAUT, Alignment on TAM, Portfolio on Markowitz, Impact on DeLone & McLean, Diffusion on Rogers and Kotter.

Readiness (R)

Data and technical readiness: data infrastructure, MLOps maturity, model-evaluation discipline, and AI/ML talent depth.

Related: RAPID

Alignment (A)

Strategic alignment: how well GenAI investments are tied to measurable business objectives with named executive sponsorship.

Related: RAPID

Portfolio (P)

Portfolio balance: diversification across risk levels and time horizons; the discipline to kill or scale initiatives based on results.

Related: RAPID

Impact (I)

Measurement maturity: KPI definition, baseline establishment, and ROI review processes for GenAI initiatives.

Related: RAPID

Diffusion (D)

Organisational adoption: change management, user training, and cross-functional collaboration that turn a deployment into a behaviour change.

Related: RAPID

Maturity Level

Buckets that summarise an overall RAPID score: Nascent (0-25%), Developing (26-50%), Established (51-75%), Advanced (76-100%).

Levels are coarse so cross-organisation comparison is meaningful at scale; per-dimension percentages give the precision needed for a roadmap.
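
The bucket boundaries above can be expressed as a small lookup. A minimal sketch; `maturity_level` is a hypothetical helper name, not part of the framework's published tooling:

```python
def maturity_level(score_pct: float) -> str:
    """Map an overall RAPID score (0-100%) to its maturity bucket."""
    if not 0 <= score_pct <= 100:
        raise ValueError("score must be between 0 and 100")
    if score_pct <= 25:
        return "Nascent"
    if score_pct <= 50:
        return "Developing"
    if score_pct <= 75:
        return "Established"
    return "Advanced"
```

For example, an overall score of 60% lands in Established, while the per-dimension percentages show which of R, A, P, I, D is dragging the average down.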

Adoption Audit

A five-element checklist that predicts deployment success: business objectives, performance tracking, change management, iterative deployment, and a named executive sponsor.

Organisations with four or more elements in place see ~82% success in the dataset; those with zero to two drop to 23%.
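
Assuming each element is tracked as a simple flag, the banding could be sketched like this (the element names and the `audit_band` helper are illustrative, not the audit's actual schema):

```python
AUDIT_ELEMENTS = frozenset({
    "business_objectives",
    "performance_tracking",
    "change_management",
    "iterative_deployment",
    "executive_sponsor",
})

def audit_band(present: set) -> str:
    """Bucket an organisation by how many of the five elements it has."""
    count = len(present & AUDIT_ELEMENTS)
    if count >= 4:
        return "high"    # ~82% success in the dataset
    if count <= 2:
        return "low"     # ~23% success in the dataset
    return "middle"
```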

§ 02

Theory.

Technology Acceptance Model (TAM)

Davis (1989). Predicts technology adoption from perceived usefulness and perceived ease of use. Strongest predictor of whether a tool gets used at all.

UTAUT

Venkatesh et al. (2003). Unifies eight technology-adoption models. Performance expectancy, effort expectancy, social influence, and facilitating conditions explain ~70% of adoption variance.

Kotter 8-Step Change Model

Kotter (1996). Eight sequential steps for organisational change: urgency, coalition, vision, communication, empowerment, wins, consolidation, anchoring.

DeLone & McLean IS Success

DeLone & McLean (2003). Six interdependent dimensions of IS success: information quality, system quality, service quality, use, user satisfaction, and net benefits.

Related: Impact (I)

Diffusion of Innovations

Rogers (2003). Innovations spread through social systems via adoption categories: innovators, early adopters, early majority, late majority, laggards.

IT Productivity Paradox

Brynjolfsson & Hitt (1993). IT productivity gains require organisational change with a 3-5 year lag. AI productivity figures show the same pattern - per-task minute savings do not aggregate automatically into firm-level productivity.

Related: Impact (I)

§ 03

Methodology.

Tier 1 source

Peer-reviewed journals, NBER working papers, government findings, and investigative-journalism cases with documented evidence trails.

Tier 2 source

Business press, corporate disclosures, and analyst reports with primary-source URLs but lower formal-vetting depth than Tier 1.

Anchor source

A specific case or academic citation that supports a claim in the framework. The Task Mapper labels every quantitative claim as either anchored or illustrative.

Case ID

Stable identifier (CASE-001 through CASE-045) for each entry in the 44-case dataset. Used in citations and cross-references throughout the site.

Failure mode

One of six categories where GenAI deployments tend to break down: data, technical, organisational, strategic, measurement, economic.

Each failure mode is mapped to specific cases in the dataset (e.g. CASE-010 Amazon hiring tool is an organisational + data failure).

Use-case type

One of six GenAI deployment archetypes: Automation, Generation, Analysis, Augmentation, Chatbots, Decision Support. Each has a verified success-rate band derived from the dataset.

Design Science Research Methodology

Peffers et al. (2007). The development methodology behind RAPID: identify problem, define objectives, design artefact, demonstrate, evaluate, communicate.

§ 04

Metric.

Success rate

Share of a cohort's measurable cases (success + failure + partial) that achieved their stated objective. Too Early outcomes are excluded from the denominator.
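
A minimal sketch of that calculation, assuming outcomes are recorded as lowercase labels (the label strings are illustrative):

```python
MEASURABLE = {"success", "failure", "partial"}

def success_rate(outcomes: list) -> float:
    """Successes as a share of measurable cases; 'too_early' never counts."""
    measurable = [o for o in outcomes if o in MEASURABLE]
    if not measurable:
        raise ValueError("no measurable cases in cohort")
    return measurable.count("success") / len(measurable)
```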

Percentile

Where an organisation's score sits relative to peers in the dataset. The 60th percentile means 60% of peers scored at or below this point.
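
The "at or below" convention can be stated directly in code (a hypothetical helper, not the site's implementation):

```python
def percentile(score: float, peer_scores: list) -> float:
    """Share of peers scoring at or below `score`, as a percentage."""
    if not peer_scores:
        raise ValueError("need at least one peer score")
    at_or_below = sum(1 for s in peer_scores if s <= score)
    return 100 * at_or_below / len(peer_scores)
```

Against ten peers scoring 10 through 100, a score of 60 sits at the 60th percentile.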

ROI range

Industry-specific projected return on GenAI investment, derived from thesis Chapter 4. Tech 18-24%, Financial Services 15-22%, Healthcare 12-18% are the most-studied bands.

Time to ROI

Typical payback window for GenAI investments by industry. Ranges from 12-18 months (tech, retail) to 24-36 months (government, regulated sectors).

POC abandonment

Share of GenAI proofs-of-concept that never reach production. Gartner forecasts 30% by end of 2025; BCG finds only 22% of organisations advance past POC at all.

Deployment stage

Where a case sits on the maturity arc: Pilot/POC, Scaling, Production, Optimization, or Abandoned. The 44-case dataset is weighted toward Production and Optimization.

§ 05

Dataset.

44-case dataset

The curated, primary-source-linked corpus behind the framework. Each case has an organisation, period, industry, function, outcome, evidence statement, and source URL.

Maintained as a separate citable artefact so it can evolve independently of the website. See /sources for the searchable view.

Confidence interval

Width of the band around an industry benchmark, reflecting sample size. Industries with fewer than 10 cases (Healthcare, Manufacturing, Government) carry wider intervals.
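
The site does not state how its intervals are computed; as one standard choice, a normal-approximation interval for a success-rate estimate illustrates why small samples carry wider bands:

```python
import math

def proportion_ci(successes: int, n: int, z: float = 1.96) -> tuple:
    """Normal-approximation ~95% confidence interval for a proportion.

    Fewer cases (smaller n) means a wider band around the point estimate.
    """
    p = successes / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return (max(0.0, p - half_width), min(1.0, p + half_width))
```

With 5 successes out of 10 cases the band spans roughly ±31 points; at 20 out of 40 it narrows to roughly ±15.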

Blind spot

On the team-comparison view (/compare/team): a dimension where every participant scored below 40%. Likely the cheapest place for the team to improve together.

Want to cite the framework itself? Use the methodology page's Cite-this widget for BibTeX, APA, and MLA references. Each entry in /sources also has a Cite-this affordance for case-level references.