1. Audit Framework Overview
The 47-control framework is organized across seven domains that map directly to EU AI Act obligation areas. The framework is designed to satisfy both internal audit requirements and the evidentiary expectations of market surveillance authorities conducting enforcement inspections under Article 74.
| Domain | Controls |
|---|---|
| Governance | 7 |
| Risk Management | 8 |
| Documentation | 8 |
| Transparency | 5 |
| Human Oversight | 6 |
| Data Governance | 7 |
| Logging & Monitoring | 6 |
| Total | 47 |
Each control is rated as Standard or Critical. Critical controls carry 1.5x weighting in the TRACE score calculation. Failure on a Critical control without a documented remediation plan constitutes a material compliance gap that should be escalated to the AI Risk Committee.
2. The 47 Audit Controls by Domain
Governance (7 controls)
AI governance policy formally approved and current (within 12 months)
Evidence required: Policy document with approval date, signatory, and version number
Chief AI Officer or equivalent designated with documented accountability
Evidence required: Role charter or appointment letter naming individual and responsibilities
AI Risk Committee constituted with documented membership and terms of reference
Evidence required: Committee charter, member list, and meeting schedule
AI system inventory maintained and current
Evidence required: Inventory register with last-updated date, classification, and owner for each system
Named system owner documented for each high-risk AI system
Evidence required: System register or governance matrix with named individuals
AI literacy training completed by relevant staff
Evidence required: Training completion records by role category
DPO consulted on all high-risk AI systems processing personal data
Evidence required: DPO consultation records or sign-off on system risk assessments
Risk Management (8 controls)
Risk management system established and operational for each high-risk AI system (Article 9)
Evidence required: Article 9 risk management system documentation demonstrating an iterative, ongoing process rather than a static document
Risk classification documented with rationale for each AI system in inventory
Evidence required: Classification register with decision rationale per system
Known and reasonably foreseeable risks identified for all high-risk AI systems
Evidence required: Risk register with identified risks, likelihood, and impact assessments
Risk mitigation measures implemented and tested
Evidence required: Mitigation implementation records and test results
Residual risk evaluation completed and within acceptable thresholds
Evidence required: Residual risk assessment with declared acceptable threshold and current measurement
Prohibited use case screening completed for all systems in inventory
Evidence required: Screening records per system with Article 5 mapping
Risk assessment updated after any significant change to AI system
Evidence required: Change log with corresponding risk assessment update records
Third-party AI vendor risk assessments completed (for deployers)
Evidence required: Vendor risk assessment records per AI system with scoring
Documentation (8 controls)
Technical documentation per Article 11 and Annex IV maintained for all high-risk AI systems
Evidence required: Complete Annex IV documentation package with currency date
Technical documentation updated to reflect current system state
Evidence required: Version history showing documentation updated within 30 days of any system change
EU Declaration of Conformity issued for all high-risk AI systems
Evidence required: Signed Declaration of Conformity per system
Quality management system documented and operational (Article 17)
Evidence required: QMS documentation demonstrating coverage of Articles 9–15
Conformity assessment records retained for 10 years
Evidence required: Retention schedule documentation and evidence of archiving
EU database registration current for all high-risk AI systems (Article 49)
Evidence required: EU database registration confirmation per system
Instructions for use prepared per Article 13 requirements
Evidence required: Current instructions for use document per high-risk AI system
Significant modification assessment documented for any system updates
Evidence required: Change assessment records with substantial modification determination
Transparency (5 controls)
Users informed when interacting with AI system (Article 50)
Evidence required: UI disclosure implementation or instructions for use transparency statement
AI-generated content labelled where required (Article 50)
Evidence required: Labelling implementation evidence or confirmation that system does not generate regulated content
Deployers notified of high-risk AI system status when supplying systems
Evidence required: Notification records or contractual clause confirming notification
GPAI model providers publish training data summary (Article 53)
Evidence required: Published summary URL or confirmation that system is not a GPAI model
Intended purpose clearly documented and communicated to deployers
Evidence required: Intended purpose statement in technical documentation and instructions for use
Human Oversight (6 controls)
Human oversight measures implemented per Article 14 for all high-risk AI systems
Evidence required: Oversight procedure documentation and demonstration of override capability
Staff with oversight responsibility trained and competency assessed
Evidence required: Training records and competency assessment results for oversight staff
Override and stop mechanisms tested and functional
Evidence required: Test records demonstrating override and stop capabilities function as documented
Deployer human oversight arrangements documented and verified
Evidence required: Deployer oversight procedure documentation with confirmation of implementation
Automation bias risks assessed and mitigated
Evidence required: Automation bias risk assessment and corresponding training or design mitigations
Escalation procedures for AI system outputs documented and tested
Evidence required: Escalation procedure documentation and test records
Data Governance (7 controls)
Training data quality criteria defined and documented per Article 10
Evidence required: Data quality specification with measurable criteria per system
Training data quality assessment completed and documented
Evidence required: Data quality assessment report with results against defined criteria
Bias detection testing completed on training and test data
Evidence required: Bias testing methodology and results per protected characteristic
Data lineage documented from source to training
Evidence required: Data lineage documentation per training dataset
GDPR DPIA completed for high-risk AI systems processing personal data
Evidence required: Completed DPIA document with DPO sign-off
Data retention and deletion procedures aligned with GDPR
Evidence required: Retention schedule and confirmed deletion procedures
Validation and test dataset independence confirmed
Evidence required: Dataset documentation confirming no overlap between training, validation, and test sets
Logging & Monitoring (6 controls)
Automatic logging capability operational per Article 12
Evidence required: Log sample demonstrating required fields are captured and timestamps are accurate
Log retention period meets the six-month minimum requirement (Article 19)
Evidence required: Log retention configuration documentation and confirmed retention period
Post-market monitoring system operational per Article 72
Evidence required: Monitoring system documentation with active data collection evidence
Incident classification criteria defined and staff trained
Evidence required: Incident classification procedure and training records
Serious incident reporting capability tested (Article 73 — 15-day deadline)
Evidence required: Incident reporting procedure with simulated or real reporting test evidence
Performance metrics monitored against declared accuracy thresholds
Evidence required: Performance monitoring dashboard or reports with current metrics vs declared thresholds
3. TRACE Score Calculation Methodology
The TRACE (Traceable Regulatory AI Compliance Evidence) score is a 0–100 quantified compliance health indicator. It converts audit control results into a single actionable number that boards, risk committees, and auditors can interpret without reading the full audit report.
Each control is evaluated as: PASS (evidence is present, current, and satisfies the control statement), PARTIAL (evidence exists but is incomplete or stale), or FAIL (no evidence or evidence demonstrates non-compliance). Scores: PASS = 1.0, PARTIAL = 0.5, FAIL = 0.0. Critical controls are weighted 1.5x.
| Domain | Controls | Weight in TRACE Score | Rationale |
|---|---|---|---|
| Risk Management | 8 | 20% | Primary EU AI Act obligation area; most enforcement focus |
| Documentation | 8 | 20% | Core to conformity assessment and market surveillance review |
| Logging & Monitoring | 6 | 18% | Post-market monitoring is a continuous obligation; high enforcement visibility |
| Human Oversight | 6 | 15% | Article 14 is a fundamental safety requirement for high-risk AI |
| Data Governance | 7 | 13% | Intersects EU AI Act and GDPR; important but partially covered by other frameworks |
| Governance | 7 | 10% | Enabling structure; important but indirect regulatory obligation |
| Transparency | 5 | 4% | Important but comparatively lower enforcement risk for most enterprises |
| TRACE Score | Rating | Interpretation |
|---|---|---|
| 80–100 | Audit-Ready | Strong evidence base; ready for market surveillance inspection |
| 60–79 | Significant Gaps | Material gaps requiring remediation before enforcement deadline |
| 0–59 | High Risk | High enforcement exposure; escalate to board and initiate remediation plan |
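To make the methodology concrete, the sketch below shows one way the calculation could be implemented. It is illustrative only: it assumes that each domain score is the weighted average of its control results (with Critical controls counted at 1.5x) and that the overall TRACE score is the sum of domain scores multiplied by the domain weights in the table above; the control results passed in are hypothetical.

```python
# Illustrative TRACE score calculation (not an official implementation).
# Assumption: domain score = weighted average of control results, with
# Critical controls weighted 1.5x; overall score = sum of domain scores
# multiplied by the domain weights from the table above.

RESULT_VALUES = {"PASS": 1.0, "PARTIAL": 0.5, "FAIL": 0.0}
CRITICAL_WEIGHT = 1.5

DOMAIN_WEIGHTS = {
    "Risk Management": 0.20,
    "Documentation": 0.20,
    "Logging & Monitoring": 0.18,
    "Human Oversight": 0.15,
    "Data Governance": 0.13,
    "Governance": 0.10,
    "Transparency": 0.04,
}

def domain_score(controls):
    """controls: list of (result, is_critical) tuples for one domain."""
    weighted_sum = 0.0
    weight_total = 0.0
    for result, is_critical in controls:
        weight = CRITICAL_WEIGHT if is_critical else 1.0
        weighted_sum += RESULT_VALUES[result] * weight
        weight_total += weight
    return weighted_sum / weight_total if weight_total else 0.0

def trace_score(results_by_domain):
    """results_by_domain: {domain name: [(result, is_critical), ...]}."""
    return round(100 * sum(
        DOMAIN_WEIGHTS[domain] * domain_score(controls)
        for domain, controls in results_by_domain.items()
    ), 1)

def rating(score):
    """Map a 0-100 TRACE score to the rating bands above."""
    if score >= 80:
        return "Audit-Ready"
    if score >= 60:
        return "Significant Gaps"
    return "High Risk"
```

Under these assumptions, a domain with three PASS results on Standard controls and one FAIL on a Critical control scores 3.0 / 4.5 ≈ 0.67 before its domain weight is applied.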
4. Audit Frequency Schedule
Audit frequency must be calibrated to control criticality and the pace of change in both the AI system and the regulatory environment. The following schedule applies to the 47 controls.
Continuous Monitoring (8 controls)
These controls require real-time or near-real-time monitoring. Any lapse — for example, a logging functionality failure or technical documentation that has not been updated following a system change — constitutes an immediate compliance gap.
Controls: RISK-01, RISK-07, DOC-01, DOC-02, DOC-08, LOG-01, LOG-03, LOG-06
Quarterly Review (11 controls)
Quarterly controls address dynamic aspects of compliance: risk register currency, bias testing, human oversight verification, and inventory accuracy. Quarterly reviews are a minimum — organizations with rapidly evolving AI systems should conduct these monthly.
Controls: GOV-04, RISK-02, RISK-03, RISK-04, RISK-05, RISK-06, OVS-01, OVS-03, DATA-03, LOG-02 (check), plus periodic documentation checks
Annual Audit (28 controls)
Annual controls cover governance structure, static documentation, and procedural controls. Annual does not mean low-priority — a failed annual control that is not remediated before a market surveillance inspection creates the same enforcement exposure as a failed continuous control.
Controls: GOV-01 through GOV-03, GOV-05 through GOV-07, DOC-03 through DOC-07, TRANS-01 through TRANS-05, OVS-02, OVS-04 through OVS-06, DATA-01, DATA-02, DATA-04 through DATA-07, LOG-02, LOG-04, LOG-05, RISK-08
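In practice, the schedule can be operationalized by tracking each control's last evaluation date against its review interval. The sketch below is a simplified illustration: the interval lengths (including treating "continuous" as a daily re-check) are assumptions, and only a small subset of the control IDs listed above is mapped.

```python
from datetime import date, timedelta

# Illustrative review intervals; modelling "continuous" as a daily re-check
# is an assumption, not a requirement of the framework.
REVIEW_INTERVALS = {
    "continuous": timedelta(days=1),
    "quarterly": timedelta(days=91),
    "annual": timedelta(days=365),
}

# Partial mapping using control IDs from the schedule above (illustrative subset).
CONTROL_FREQUENCY = {
    "RISK-01": "continuous",
    "LOG-01": "continuous",
    "GOV-04": "quarterly",
    "DATA-03": "quarterly",
    "GOV-01": "annual",
    "TRANS-01": "annual",
}

def overdue_controls(last_evaluated, today=None):
    """last_evaluated: {control_id: date of last evaluation}; returns overdue IDs."""
    today = today or date.today()
    overdue = []
    for control_id, last in last_evaluated.items():
        interval = REVIEW_INTERVALS[CONTROL_FREQUENCY[control_id]]
        if today - last > interval:
            overdue.append(control_id)
    return overdue

# Example: GOV-04 was last reviewed five months ago, so its quarterly review is overdue.
print(overdue_controls({"GOV-04": date(2025, 1, 10)}, today=date(2025, 6, 10)))
```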
5. GraQle Automated Evidence Collection
Audit preparation for 47 controls across 7 domains can require 4–6 weeks of manual effort if evidence is dispersed across document management systems, ticketing platforms, training systems, and monitoring dashboards. GraQle centralizes audit evidence collection in the governance knowledge graph, reducing preparation time to 1–2 weeks.
GraQle automates evidence collection for approximately 65% of the 47 controls:
- Documentation controls: GraQle tracks document versions, approval dates, and currency against regulatory change events. Controls DOC-01 through DOC-08 are evaluated automatically.
- Logging controls: Integration with log management systems enables automated verification that logging is operational and retention periods are met. Controls LOG-01, LOG-02, LOG-06 are continuously monitored.
- Risk management controls: GraQle's risk node structure tracks risk identification, mitigation status, and residual risk measurements, enabling automated evaluation of RISK-01 through RISK-07.
- Governance controls: Policy version tracking, committee membership records, and training completion integrations automate GOV-01 through GOV-06.
TRACE Score in GraQle
GraQle computes the TRACE score in real time as evidence is collected and controls are evaluated. The score is visible on the governance dashboard and exportable as a structured report for the AI Risk Committee and external auditors. When a control status changes — for example, a policy document expires or a bias test fails — the TRACE score updates immediately and triggers a notification to the named control owner.
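Conceptually, this event-driven update follows a simple pattern: a control status change triggers a score recalculation and a notification to the named control owner. The sketch below illustrates that pattern only; it is not GraQle's actual API, and the names, signatures, and notification mechanism are hypothetical placeholders.

```python
# Conceptual sketch of the event-driven update pattern described above.
# Not GraQle's actual API; all names and parameters are hypothetical.

from dataclasses import dataclass

@dataclass
class ControlStatusEvent:
    control_id: str    # e.g. "GOV-01"
    new_status: str    # "PASS", "PARTIAL", or "FAIL"
    owner_email: str   # named control owner to notify

def on_control_status_change(event, recalculate_trace, notify):
    """Recompute the TRACE score and alert the control owner when a status changes."""
    score = recalculate_trace()  # re-run the weighted calculation from Section 3
    notify(
        to=event.owner_email,
        subject=f"Control {event.control_id} changed to {event.new_status}",
        body=f"Updated TRACE score: {score}",
    )
    return score
```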
6. Frequently Asked Questions
What domains should an AI governance audit cover?
How is the TRACE score calculated?
How often should AI governance audits be conducted?
What evidence is required for an AI governance audit?
Can AI governance audits be automated?
Related AI Governance Guides
AI Governance Policy Template
8 mandatory policy sections with role assignments and review cycle requirements
AI Governance Maturity Model
Assess your organization's AI governance maturity across five dimensions
AI Governance in Europe: Complete Guide
The pillar guide to EU AI governance frameworks, maturity models, and regulatory alignment
