1. The Governance Gap: Why Checklists Are Not Governance Intelligence
AI governance has been operationalized by most enterprises as a compliance exercise: a set of controls, organized in a spreadsheet or GRC platform, checked periodically by a risk team. The controls ask questions like "does a model card exist for this AI system?" or "has the training data been documented?" Checking the box requires manual effort; the answer reveals nothing about whether the governance posture is actually improving.
The fundamental limitation of checklist governance is that it is static and backward-looking. It tells you whether governance controls existed at the point of the last audit. It cannot tell you whether a code change made yesterday broke a governance assumption. It cannot tell you which of your 47 AI-assisted services will be affected by the new EDPB guidance published this morning. It cannot surface the pattern of mistakes your team made in the last 18 months before you commit the next one.
Governance Intelligence vs Governance Compliance

| Checklist Governance | GraQle Intelligence |
| --- | --- |
| ✘ Static control verification | ✔ Continuous graph traversal |
| ✘ Point-in-time audit | ✔ Real-time impact analysis |
| ✘ Manual update cycle | ✔ Auto-updated from git hooks |
| ✘ Separate from development workflow | ✔ Native in developer IDE |
| ✘ Cannot reason across systems | ✔ Multi-hop cross-system reasoning |
GraQle was built from a different premise: governance intelligence should be continuous, developer-native, and based on the actual state of the codebase — not on the state of a documentation artifact that was accurate when written and diverged from reality the next time the code changed.
2. GraQle Knowledge Graph: Codebase + Architecture + Compliance as One Graph
The GraQle knowledge graph is built from three source layers that are represented as a unified graph rather than separate data stores:
Layer 1: Codebase Graph
Every module, service, function, data model, API endpoint, dependency, and configuration in the codebase is indexed as a graph node. Relationships encode: calls, imports, depends-on, exposes, processes-data-of, and deploys-alongside. The codebase graph is kept current through git post-commit hooks that incrementally update the affected nodes after every commit — no full re-scan required for incremental changes.
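The incremental update described above can be sketched as follows. This is an illustrative assumption about how such a step could work, not GraQle's actual implementation; the function names and index shape are invented for the example.

```python
# Sketch of an incremental re-index step (function names and the index
# shape are illustrative assumptions, not GraQle internals).
def incremental_update(index, changed_files, parse_file):
    """Re-index only the files a commit touched; a post-commit hook could
    feed in the output of `git diff --name-only HEAD~1..HEAD`."""
    for path in changed_files:
        nodes = parse_file(path)       # returns None for a deleted file
        if nodes is None:
            index.pop(path, None)      # drop stale nodes
        else:
            index[path] = nodes        # replace only the affected nodes
    return index

index = {"a.py": ["node:a"], "b.py": ["node:b"]}
incremental_update(index, ["b.py"], lambda p: ["node:b", "node:b2"])
print(index)  # a.py untouched, only b.py re-indexed
```

The key property is that the cost of an update scales with the size of the commit, not the size of the repository.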
Layer 2: Architecture Graph
Infrastructure components — cloud services, deployment regions, data residency zones, network boundaries, authentication systems, logging and monitoring services — are added as nodes connected to the codebase graph. This layer enables architecture-level governance questions: "which services process data outside the EU?" or "which modules have no audit logging despite processing personal data?"
Layer 3: Compliance Requirements Graph
Applicable regulatory obligations — from the EU AI Act, GDPR, NIS2, and sector-specific regulations — are represented as obligation nodes connected to the codebase and architecture nodes they govern. This binding layer means that when a code change touches a node carrying a compliance obligation, GraQle can immediately surface which obligations are affected and whether the change satisfies or violates them.
The unified graph representation is what enables the multi-hop governance queries that differentiate GraQle from documentation-based tools. A question like "which of our AI systems are high-risk under EU AI Act Annex III and lack the required technical documentation?" requires traversing the compliance layer (Annex III obligations), the architecture layer (which systems are in scope), and the codebase layer (what documentation exists in the repository) simultaneously. GraQle's graph-of-agents reasoning does this in a single traversal.
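A multi-hop query of this kind can be sketched as a traversal over a small toy graph. The node names, relation labels, and query logic below are illustrative assumptions, not GraQle's internal schema.

```python
# Toy unified graph: compliance -> architecture -> codebase layers,
# encoded as (source, relation, target) triples (an illustrative schema).
edges = [
    ("annex_iii:biometric", "governs", "svc:face-match"),
    ("annex_iii:biometric", "governs", "svc:emotion-rec"),
    ("svc:face-match", "documents", "doc:model-card.md"),
]
risk = {"svc:face-match": "high", "svc:emotion-rec": "high"}

def high_risk_without_docs(edges, risk):
    """Multi-hop query: systems governed by an Annex III obligation that
    are classified high-risk and have no documentation node attached."""
    governed = [dst for _, rel, dst in edges if rel == "governs"]
    documented = {src for src, rel, _ in edges if rel == "documents"}
    return [s for s in governed
            if risk.get(s) == "high" and s not in documented]

print(high_risk_without_docs(edges, risk))  # ['svc:emotion-rec']
```

Because all three layers live in one graph, the query is a single traversal rather than three separate lookups joined by hand.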
3. 99.7% Accuracy on MultiGov-30: What the Benchmark Measures
MultiGov-30 is a benchmark of 30 multi-hop AI governance questions designed to test whether a governance tool can reason across codebase, architecture, and compliance simultaneously. The questions are calibrated to reflect real governance queries from enterprise risk and compliance teams — not simplified retrieval tasks.
Representative MultiGov-30 questions include:
- "Which services in our codebase process biometric data, and which of those lack the documentation required by EU AI Act Article 11?"
- "If we deprecate the shared authentication module, which downstream services would lose their documented human oversight capability?"
- "Which of our AI systems have been modified in the last 30 days in ways that could affect their EU AI Act risk classification?"
- "What is the full dependency chain from our customer-facing recommendation service to any data processing that occurs outside the EU?"
- "Which past architectural decisions have been identified as governance mistakes, and does the proposed change repeat any of those patterns?"
[Figure: MultiGov-30 accuracy comparison, GraQle vs competing approaches]
4. Graph-of-Agents Reasoning: Distributed Intelligence Across the Graph
Graph-of-agents is GraQle's core reasoning architecture. When a governance query is submitted, GraQle's query router identifies the set of graph nodes most relevant to the question — the anchor set. Each node in the anchor set is then instantiated as an autonomous reasoning agent with the node's local context: its properties, its outgoing relations, and its immediate neighbors.
These agents reason in parallel about their portion of the governance question, then exchange their intermediate findings with adjacent agents via the graph edges. Adjacent agents incorporate the incoming findings and update their reasoning. The process runs for a configurable number of reasoning rounds — typically 2–3 rounds suffice for most governance queries — and then a synthesis agent aggregates the distributed findings into a coherent answer with full source attribution.
The architectural advantage of graph-of-agents over single-agent reasoning is working memory efficiency. A governance question that touches 40 graph nodes would require loading 40 nodes worth of context into a single LLM context window with single-agent approaches — often exceeding practical context limits. Graph-of-agents distributes the context across 40 agents, each holding only its local context, and aggregates the results. This enables reasoning over governance graphs with hundreds of thousands of nodes without hitting context limits.
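The round-based exchange can be sketched as message passing over the anchor set. The dict-of-sets "findings" representation and node names below are assumptions for illustration, not GraQle internals.

```python
# Minimal sketch of graph-of-agents reasoning rounds: each anchor node is
# an agent holding only its local findings; information propagates one
# graph hop per round, then a synthesis step aggregates everything.
adjacency = {
    "billing_svc": ["auth_svc"],
    "auth_svc": ["billing_svc", "gdpr_art30"],
    "gdpr_art30": ["auth_svc"],
}
findings = {n: {f"{n}:local"} for n in adjacency}

def reason(adjacency, findings, rounds=2):
    for _ in range(rounds):
        # Every agent merges the findings of its graph neighbors.
        findings = {
            node: set(findings[node]).union(*(findings[n] for n in nbrs))
            for node, nbrs in adjacency.items()
        }
    # Synthesis agent: aggregate the distributed findings into one answer.
    return sorted(set().union(*findings.values()))

print(reason(adjacency, findings))
```

After two rounds, findings from `gdpr_art30` have reached `billing_svc` even though the two nodes are not directly connected, which is why 2–3 rounds suffice for most queries over well-connected anchor sets.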
5. Six MCP Tools: graq_context Through graq_lessons
GraQle exposes its intelligence through six MCP (Model Context Protocol) tools, each optimized for a specific governance use case. The tools are designed on a cost hierarchy — cheaper tools for lookups, expensive tools for reasoning — so that developers and governance teams use the minimum reasoning necessary for each query.
graq_context (~500 tokens)
Returns a focused 500-token summary of any service, module, or component in the knowledge graph — including its governance status, applicable obligations, and recent changes. Replaces 20,000–60,000-token brute-force file reads for understanding what a component does and what governs it.

graq_reason (~1,500–3,000 tokens)
Full graph-of-agents reasoning for multi-hop governance questions. Use when graq_context is insufficient: for questions that span multiple services, require compliance cross-referencing, or need architectural analysis. Returns the answer plus the full reasoning path through the graph.

graq_inspect (~300 tokens)
Returns graph statistics and structure for a specified scope: node counts, edge density, governance coverage percentage, and unresolved compliance gaps. Used for governance health dashboard queries and scope-level audits.

graq_preflight (~500–1,000 tokens)
Pre-change safety check that analyzes a proposed code or configuration change against the governance graph before the change is made. Returns a safety report: which governance obligations are affected, which compliance assumptions could be invalidated, and which past mistake patterns the change resembles.

graq_impact (~800–1,500 tokens)
Impact analysis for a specified change — answers "what breaks if I change X?" across the governance, compliance, and architectural dimensions. Traverses the dependency graph downstream from the changed node to identify all services, obligations, and governance controls that depend on the changed component.

graq_lessons (~400–800 tokens)
Surfaces past mistake patterns from the governance graph: architectural decisions that caused compliance gaps, technical choices that introduced governance blind spots, and change patterns that preceded incidents. Run it before making significant architectural changes to avoid repeating known mistakes.
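Over the wire, each tool invocation is a standard MCP JSON-RPC `tools/call` request. A hypothetical call to graq_impact might look like the following; the `target` argument name is an illustrative assumption, not a documented parameter.

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "graq_impact",
    "arguments": { "target": "services/auth-shared" }
  }
}
```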
6. Integration with Claude Code, Cursor, and VS Code
GraQle ships as an MCP server — a standards-based server that any MCP-compatible development tool can connect to. Setup requires adding the GraQle MCP server configuration to the IDE's MCP client config file (typically .mcp.json in the project root), which registers all six tools with the IDE's AI assistant.
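Assuming a locally installed GraQle MCP server (the command name and arguments below are placeholders, not documented values), a minimal .mcp.json entry could look like this:

```json
{
  "mcpServers": {
    "graqle": {
      "command": "graqle-mcp",
      "args": ["--project", "."]
    }
  }
}
```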
Integration is confirmed across three major development environments:
Claude Code
Full MCP tool registration; all six tools are available in conversation. graq_preflight and graq_lessons are particularly powerful in the Claude Code agentic workflow for pre-commit governance checks.
Cursor
Native MCP support. GraQle tools available in Cursor chat and Composer. graq_context and graq_impact integrate naturally with Cursor's codebase-aware chat workflow.
VS Code
Available via VS Code MCP extension and GitHub Copilot with MCP support. graq_inspect works well with VS Code's source control integration for pre-push governance validation.
After registering the MCP server, the developer experience is seamless: governance intelligence is available in natural language through the same AI assistant interface the developer already uses for code questions. No context-switching to a separate governance dashboard, no manual audit spreadsheet updates, no governance team bottleneck for routine impact analysis.
7. Use Cases: Impact Analysis, Lessons Surfacing, Pre-Change Safety
Three use cases account for the majority of enterprise GraQle deployments:
Impact Analysis Before Architectural Changes
Before refactoring a shared service, deprecating a module, or modifying a data pipeline, a developer runs graq_impact to understand the downstream governance consequences. A typical impact analysis for a mid-sized service returns: affected downstream services (average 12–18), compliance obligations that depend on the service's current behavior (average 3–7), and documentation that will need updating (average 2–4 documents). Without this analysis, developers regularly make changes that inadvertently break governance assumptions — discovered weeks later during an audit.
Lessons Learned Surfacing Before New Development
Before beginning a new AI feature, a developer runs graq_lessons to retrieve the pattern of past governance mistakes in similar features. The knowledge graph accumulates every governance gap, compliance violation, and architectural decision that was subsequently identified as problematic — and surfaces them when a new development pattern matches the pre-mistake pattern. This is institutional memory made queryable.
Pre-Change Safety Checks in CI/CD
GraQle integrates into CI/CD pipelines via the REST API. A pre-merge governance check runs graq_preflight on each pull request, returning a governance safety report. Pull requests that would violate compliance assumptions or break governance controls are flagged for review before merge — not discovered in the next quarterly audit.
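A pre-merge check of this shape could be wired into a pipeline roughly as follows. The endpoint path, payload fields, and response shape are assumptions for illustration; consult the GraQle API documentation for the actual interface.

```python
# Hypothetical CI gate around a preflight-style REST endpoint.
import json
import urllib.request

def fetch_preflight(base_url, change_summary, token):
    """POST the proposed change to the preflight endpoint, return the report."""
    req = urllib.request.Request(
        f"{base_url}/v1/preflight",
        data=json.dumps({"change": change_summary}).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def gate(report):
    """Turn a safety report into a merge decision for the pipeline."""
    violations = report.get("violations", [])
    return ("block", violations) if violations else ("allow", [])

print(gate({"violations": ["missing GDPR Art. 30 processing record"]}))
```

In a real pipeline, a "block" decision would fail the pull-request check so the governance team reviews the change before merge rather than after the next audit.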
8. Developer Experience: 500 Tokens vs 60K Brute-Force Reads
The token efficiency of graq_context vs brute-force file reading is not an abstract benchmark — it directly affects the developer experience of governance-aware development.
When a developer needs to understand the governance posture of an unfamiliar service, the brute-force approach — reading all relevant source files, configuration, documentation, and compliance records — consumes 20,000–60,000 tokens. At typical LLM speeds, this takes 15–45 seconds and exhausts a significant portion of the context window, leaving less room for the actual development task. The developer either performs this expensive read (and pays the time and cost) or skips it (and proceeds without governance context).
[Figure: context retrieval token cost, brute-force file read (20,000–60,000 tokens) vs graq_context (~500 tokens)]
This up-to-120x token efficiency (roughly 500 tokens versus 20,000–60,000) means that governance-aware context retrieval is no longer a trade-off. A developer can run graq_context on every service they touch without significantly affecting their development session's context budget. Governance becomes a routine part of the development workflow rather than an occasional expensive check.
9. Pricing and Deployment: SaaS vs Self-Hosted
GraQle is available in two deployment models designed for different organizational requirements:
SaaS
- Hosted in EU (Frankfurt region, AWS eu-central-1)
- Git repository scanning without storing raw source
- Automatic graph updates via GitHub/GitLab webhooks
- SOC 2 Type II certified infrastructure
- 99.9% SLA with 72h support response
- GDPR data processing agreement included
Pricing: per developer seat per month. Contact for current rates.
Self-Hosted
- Docker-based deployment on your infrastructure
- Air-gapped environment support
- No outbound data transmission required
- Minimum: 8GB RAM, 4 vCPU (up to 500K graph nodes)
- Annual license with source code access option
- Dedicated onboarding and integration support
Pricing: annual license. Contact for enterprise pricing.
Both deployment models support all six MCP tools and the REST API. The self-hosted model additionally supports custom OWL entity type extensions for organizations with specialized governance requirements beyond the standard codebase, architecture, and EU compliance layers.
10. Frequently Asked Questions
- What is GraQle and how does it differ from a standard AI governance checklist tool?
- What is the MultiGov-30 benchmark?
- How does the graph-of-agents reasoning model work in GraQle?
- How do the 6 MCP tools integrate with developer workflows?
- What is the difference between SaaS and self-hosted GraQle deployments?
Related AI Governance Guides
AI Governance Maturity Model
How to assess and advance your organization's AI governance maturity from ad-hoc to intelligence-driven
AI Governance Audit Checklist
The structured checklist for enterprise AI governance audits, mapped to EU AI Act obligations
AI Governance in Europe
The pillar guide to enterprise AI governance across the EU regulatory landscape
EU AI Act Compliance Guide
How GraQle integrates with TraceGov.ai for a complete EU AI Act compliance and governance stack
