Show your work. Every score has a citation behind it.
RAPID is a five-dimension diagnostic for enterprise GenAI maturity, developed as a Georgetown McDonough capstone using Design Science Research methodology. Each dimension is grounded in established management and information-systems theory, then validated against 44 verified enterprise deployments. The headline finding: ROI gaps are organizational, not technical.
About the researcher.
Shah Baig
Capstone author · Georgetown McDonough MPTM 900
RAPID started as an attempt to answer a question I couldn't answer for my own work: how do you tell if an enterprise GenAI program is actually working? The framework, the assessment, the 44-case dataset, and the methodology PDF are all open.
Released under CC-BY 4.0. If it's useful, cite it; if it's wrong, write to me — corrections, missing cases, and construct critiques are the lifeblood of v3.0.
The RAPID framework.
Five dimensions, each anchored in peer-reviewed theory and calibrated against the case dataset. Click through to /sources for the full source map per dimension.
Dataset composition.
- Financial Services (26%)
- Retail/E-commerce (13%)
- Technology/Software (8%)
- Period: 2018-2025
- Top failure mode: Data Quality/Bias (43%)
- Sources: Gartner, MIT, NBER, HBS, Forrester, McKinsey, BCG, Deloitte, AWS, Google Cloud
Scoring methodology.
Fifteen Likert questions (three per dimension) on a 4-point scale, plus a five-element adoption audit. Per-dimension score = (sum of the three responses) ÷ 12 × 100. Overall score = mean of the five dimension scores. Scores are reported on a 0–100% scale.
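The arithmetic above can be sketched in a few lines. This is a minimal illustration, not the official assessment tool: the dimension names follow the RAPID acronym and the example responses are invented; only the formula (sum ÷ 12 × 100, then averaged across the five dimensions) comes from the methodology. Note that with responses scored 1-4, a per-dimension score spans 25-100 in practice.

```python
# Minimal sketch of RAPID scoring. Dimension labels and example
# responses are illustrative assumptions; the formula is from the
# methodology: per-dimension = sum of 3 responses / 12 * 100,
# overall = mean of the five dimension scores.

def dimension_score(responses):
    """Three Likert responses (1-4 each) -> percentage score."""
    if len(responses) != 3 or not all(1 <= r <= 4 for r in responses):
        raise ValueError("expected three responses on a 1-4 scale")
    return sum(responses) / 12 * 100

def overall_score(responses_by_dimension):
    """Per-dimension scores plus their simple average."""
    scores = {d: dimension_score(r) for d, r in responses_by_dimension.items()}
    return scores, sum(scores.values()) / len(scores)

# Example: a hypothetical mid-maturity organization.
answers = {
    "Readiness": [3, 2, 3],
    "Alignment": [4, 3, 3],
    "Portfolio": [2, 2, 3],
    "Impact":    [2, 3, 2],
    "Diffusion": [3, 3, 2],
}
per_dim, overall = overall_score(answers)
```

In this example the Alignment dimension scores 10/12 ≈ 83% and the overall score lands near 67%, i.e. a simple unweighted mean; the methodology does not weight dimensions differently.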
Theoretical grounding.
Each dimension is anchored in peer-reviewed theory. The models below inform construct definition and scoring criteria.
Key insight (Technology Acceptance Model): Strategic alignment with business needs drives perceived usefulness, which is the strongest predictor of adoption.
Key insight (IS Success Model): Measurement maturity requires tracking all six dimensions, not just technical performance metrics.
Key insight (Diffusion of Innovations): Relative advantage, compatibility, simplicity, trialability, and observability account for 49-87% of the variation in adoption rates.
Key insight (Kotter's 8-step model): Most change efforts fail at steps 1-3 (urgency, coalition, vision). GenAI deployments follow the same pattern.
Key insight (portfolio theory): Organizations that concentrate AI investment in a single use case face concentration risk. Diversifying the portfolio across risk profiles improves resilience.
Key insight (UTAUT): Technical readiness (facilitating conditions) is necessary but insufficient. Social influence and performance expectancy matter equally.
What this framework cannot do.
Methodological constraints and their mitigations. Click each item to expand.
Selected references — the spine.
Key works cited across the framework. The full bibliography lives in the methodology PDF and on the sources page.
AI value flows to complements: data, judgment, action, organizational design
IT productivity paradox; value requires organizational change with 3-5 year lag
Technology Acceptance Model (TAM) - foundation for Alignment dimension
IS Success Model (6 factors) - foundation for Impact/Measurement dimension
Design Science Research methodology underpinning the RAPID framework development
Null labor market effects despite LLM adoption - productivity gains require structural change
IT portfolio management theory - foundation for Portfolio dimension
8-step change model - foundation for Diffusion/Organizational Adoption dimension
Portfolio diversification theory - foundation for Portfolio Balance dimension
DSRM methodology followed in RAPID framework development
Diffusion categories (innovators to laggards); adoption timing theory for Diffusion dimension
Systems thinking and organizational learning - informs holistic RAPID approach
UTAUT model explains 70% of adoption variance - informs Alignment dimension
Digital maturity model framework - informs Readiness dimension design
High variable costs, limited direct revenue; suited to explicit vs. tacit knowledge tasks