Our Methodology

How we assess AI readiness, classify processes, and design agent architectures — grounded in established frameworks from NIST, the EU, OECD, and leading AI research institutions.

Contents
Scoring Framework · AI Classification · Maturity Model · Agent Architecture · Governance & RAI Framework · References · What's Original

6-Dimension Scoring Framework


Every process is scored across six dimensions (1-4 each) that together determine its AI transformation potential. These dimensions capture the fundamental factors behind whether a process can be automated, augmented, or should remain human-led.

DS (Data Structure): How structured and machine-readable are the inputs? Unstructured data (documents, audio) requires extraction agents. Structured data (APIs, databases) enables direct processing. Informed by NIST AI RMF MAP 1.1: "Characterize the AI system's data properties."

DL (Decision Logic): How rule-based vs. judgment-driven are the decisions? Rule-based processes can be automated; judgment-driven processes require a human in the loop. Aligned with OECD AI Principle 1.4: "Human oversight should be proportionate to the AI system's risk."

VF (Volume / Frequency): How often does this process run? High-volume processes generate more ROI from AI and more data for model training. Low-frequency processes may not justify AI investment.

ER (Exception Rate): How predictable is the process? Low exception rates enable autonomous AI execution; high exception rates require robust human escalation paths. Informed by NIST AI RMF MAP 2.2: "Identify known limitations of the AI system."

RS (Regulatory Sensitivity): How much regulatory or compliance risk does AI carry for this process? Directly mapped to EU AI Act risk tiers: RS=1 corresponds to high-risk AI systems requiring conformity assessment; RS=4 corresponds to minimal risk with standard obligations. Aligned with EU AI Act risk classification (Art. 6) and the NIST AI RMF GOVERN function.

SD (Strategic Differentiation): Is human involvement a competitive advantage? When human expertise is itself the value proposition (advisory, creative, relationship), AI assists but doesn't replace. When the process is a commodity, full automation is expected. Aligned with the OECD principle of preserving human agency and oversight.
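The six dimensions above can be captured as a small scoring record. This is an illustrative sketch, not our production code; in particular, computing Process Readiness as the plain sum of the six scores (range 6-24) is an assumption made for this example.

```python
from dataclasses import dataclass


@dataclass
class ProcessScore:
    """One process scored 1-4 on each of the six dimensions."""
    ds: int  # Data Structure
    dl: int  # Decision Logic
    vf: int  # Volume / Frequency
    er: int  # Exception Rate
    rs: int  # Regulatory Sensitivity
    sd: int  # Strategic Differentiation

    def __post_init__(self) -> None:
        # Every dimension is scored on the same 1-4 scale.
        for name, value in vars(self).items():
            if not 1 <= value <= 4:
                raise ValueError(f"{name} must be 1-4, got {value}")

    @property
    def readiness(self) -> int:
        # Assumption: Process Readiness is the sum of all six scores (6-24).
        return self.ds + self.dl + self.vf + self.er + self.rs + self.sd
```

A fully structured, high-volume process might score `ProcessScore(ds=4, dl=3, vf=3, er=2, rs=2, sd=3)`, giving a readiness of 17 under this assumption.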

3-Class AI Classification


Processes are classified into three AI classes based on scoring dimensions. Classification determines the appropriate level of AI autonomy and human oversight.

A (Agentic AI): AI executes multi-step workflows autonomously; a human monitors and intervenes on exceptions. EU AI Act: requires Art. 14 human oversight measures.

B (Augmented AI): AI drafts, recommends, and analyzes; a human reviews and approves every output before action. OECD: human-in-the-loop per Principle 1.4.

E (Human-Led + AI Assist): The human owns the decision; AI prepares inputs, drafts outputs, and automates the surrounding workflow. Human judgment is the core value proposition.

Classification rules (applied in order):
  • DL = 1 → E (human judgment required)
  • Process Readiness ≥ 13 with VF ≥ 3 and SD ≥ 3 → A (high automation potential)
  • DL = 4, DS = 4, VF ≥ 3 → A (fully deterministic)
  • Otherwise → B (AI augments the human)
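The classification rules translate directly into an ordered decision function. A minimal sketch, assuming Process Readiness is the sum of the six dimension scores (an assumption for illustration):

```python
def classify(ds: int, dl: int, vf: int, er: int, rs: int, sd: int) -> str:
    """Return AI class 'A', 'B', or 'E' from six 1-4 dimension scores.

    Rules are evaluated top to bottom; Process Readiness is assumed
    here to be the sum of all six dimension scores (range 6-24).
    """
    readiness = ds + dl + vf + er + rs + sd
    if dl == 1:                                    # human judgment required
        return "E"
    if readiness >= 13 and vf >= 3 and sd >= 3:    # high automation potential
        return "A"
    if dl == 4 and ds == 4 and vf >= 3:            # fully deterministic
        return "A"
    return "B"                                     # AI augments the human
```

For example, a judgment-heavy process (`dl=1`) classifies as E regardless of its other scores, while a fully deterministic, high-volume process (`dl=4, ds=4, vf=3`) classifies as A even when its strategic differentiation score is low.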

5-Level Agentic Maturity Model


Each process has a maturity progression from fully manual to fully agentic. The model defines what changes at each level — workflow, human role, technology, governance, and responsible AI requirements.

Level 0: Manual
Level 1: AI-Assisted
Level 2: AI-Augmented
Level 3: AI-Automated
Level 4: Agentic

NIST AI RMF alignment: Each maturity level maps to increasing engagement with the NIST AI RMF functions. Levels 1-2 primarily involve MAP (system characterization) and MEASURE (performance monitoring). Levels 3-4 require full implementation across all four functions: GOVERN (policies), MAP (risk identification), MEASURE (metrics), and MANAGE (response).

Responsible AI requirements scale with maturity: Higher autonomy levels require stronger fairness testing, transparency/explainability, accountability chains, and privacy protections — aligned with OECD AI Principles and the Singapore Model AI Governance Framework.
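The level-to-function mapping described above can be sketched as a lookup table. This is an illustrative encoding of our alignment, not NIST text; treating Level 0 (fully manual, no AI system) as engaging no functions is our assumption.

```python
# Maturity level -> NIST AI RMF functions engaged, per the alignment above.
# Level 0 is assumed to engage none, since no AI system exists yet.
NIST_FUNCTIONS_BY_LEVEL: dict[int, list[str]] = {
    0: [],                                       # Manual
    1: ["MAP", "MEASURE"],                       # AI-Assisted
    2: ["MAP", "MEASURE"],                       # AI-Augmented
    3: ["GOVERN", "MAP", "MEASURE", "MANAGE"],   # AI-Automated
    4: ["GOVERN", "MAP", "MEASURE", "MANAGE"],   # Agentic
}


def required_functions(level: int) -> list[str]:
    """NIST AI RMF functions a process at this maturity level must engage."""
    return NIST_FUNCTIONS_BY_LEVEL[level]
```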

Agent Architecture Blueprints


Agent architectures are computed dynamically from process scoring and maturity level. The patterns are informed by published research from AI platform providers.

Pattern selection is informed by Anthropic's "Building Effective Agents" guidance (December 2024) and OpenAI's Agents SDK architecture (March 2025). Both emphasize starting with the simplest effective pattern — our framework defaults to single agents before recommending multi-agent architectures.

Guardrail design follows the principle of separating safety checks from agent logic — a core recommendation from both Anthropic and OpenAI. Our compliance guard agents run in parallel with processing agents, aligned with OWASP AI Security guidelines for defense-in-depth.

Human oversight requirements in our blueprints are calibrated to EU AI Act Article 14 (human oversight for high-risk AI) and OECD Principle 1.4 (human agency and oversight).

Governance & Responsible AI

Our governance requirements at each maturity level are informed by established responsible AI frameworks. Higher AI autonomy requires stronger governance.

Principles Applied (per the OECD AI Recommendation)

Fairness: AI decisions are monitored for bias. Automated fairness testing is required at Level 3+.
Transparency: AI involvement is disclosed. Explainability is required for autonomous decisions per EU AI Act Art. 13.
Privacy: Data minimization is enforced. A DPIA is required for Level 4 autonomous systems processing personal data.
Accountability: A human is accountable for every AI decision. A governance board is required at Level 4 per the NIST GOVERN function.
Safety & Robustness: Fallback procedures are required at Level 3+. Adversarial testing and red-teaming are required at Level 4 per the NIST MEASURE function.
Human Oversight: Oversight is proportionate to autonomy. Level 2: review all outputs. Level 3: review exceptions. Level 4: strategic governance.
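The proportionate-oversight rule is simple enough to express as a routing function. A minimal sketch of the per-output review decision, assuming Levels 0-2 route everything to a human and Level 4 oversight happens at the governance layer rather than per output:

```python
def requires_human_review(level: int, is_exception: bool) -> bool:
    """Decide whether an AI output needs per-output human review.

    Levels 0-2: a human reviews every output.
    Level 3:    a human reviews exceptions only.
    Level 4:    no per-output review; oversight is strategic governance.
    """
    if level <= 2:
        return True
    if level == 3:
        return is_exception
    return False
```

In an agent pipeline, this check would sit between the processing agent and the action step, escalating flagged outputs to a human queue.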

Framework References

Our methodology is informed by the following established, publicly available frameworks. We reference these for alignment — our framework is original intellectual property.

NIST AI Risk Management Framework (AI RMF 1.0)
Published by the U.S. National Institute of Standards and Technology (January 2023). Public domain — U.S. government work.
Our alignment: Scoring dimensions map to MAP function. Maturity governance requirements map to GOVERN, MEASURE, and MANAGE functions.
EU Artificial Intelligence Act (2024)
European Union regulation establishing risk-based framework for AI systems. Public legislation.
Our alignment: RS dimension maps to EU risk tiers (High/Limited/Minimal). Human oversight requirements align with Art. 14. Transparency requirements align with Art. 13.
OECD Recommendation on AI (2019, updated 2024)
Adopted by 46+ countries. Establishes principles for trustworthy AI including human oversight, transparency, and fairness.
Our alignment: Human oversight calibration across maturity levels. Responsible AI principles at each level. Classification model reflects oversight proportionality.
World Economic Forum — AI Governance Frameworks
Published frameworks for responsible AI adoption across industries. Available under Creative Commons terms.
Our alignment: Cross-industry governance functions. Industry-specific AI adoption patterns.
Stanford HAI — AI Index Report
Annual research report from Stanford's Institute for Human-Centered AI. Freely published academic research.
Our alignment: Industry AI adoption trends informing process coverage and maturity benchmarks.
Anthropic — "Building Effective Agents" (2024)
Published guidance on agent design patterns: workflows vs. agents, prompt chaining, orchestrator-workers, evaluator-optimizer. Publicly available blog post.
Our alignment: Agent architecture patterns and complexity recommendations. "Start simple" principle.
OpenAI Agents SDK (2025)
Open-source framework for building multi-agent systems with handoffs, guardrails, and tools. Published under MIT license.
Our alignment: Agent implementation patterns. Guardrail architecture. Multi-agent handoff design.
OWASP AI Security Guidelines
Open-source security guidelines for AI systems from the Open Web Application Security Project.
Our alignment: Agent guardrail design. Security requirements at each maturity level. Adversarial testing recommendations.
Singapore Model AI Governance Framework (2020, updated 2024)
Published by IMDA/PDPC. Practical AI governance framework adopted by organizations globally. Freely available.
Our alignment: Governance requirements at each maturity level. Practical human oversight implementation.

What's Original to HypersightAI

While our methodology is informed by established frameworks, the following components are original intellectual property developed by HypersightAI:

  • 6-Dimension Scoring Framework — the specific dimensions, scoring scales, and classification rules
  • 3-Class AI Classification (A/B/E) — the classification model and derivation logic
  • 5-Level Agentic Maturity Model — the maturity levels and progression criteria
  • Process Library — 2,878 processes across 24 industries with descriptions, transformation views, and case studies
  • Agent Architecture Blueprints — dynamic blueprint computation from scoring and maturity
  • Process Taxonomy — the industry/sub-industry/L1/L2/L3 process hierarchy
  • Skills Marketplace — Claude Code skill files for agent design, building, and testing

This methodology does not use or reproduce content from proprietary frameworks including APQC, CMMI, Gartner, TOGAF, or ITIL. All process names, descriptions, and taxonomies are original.