1. Deployer Obligations Under Articles 25–26
The EU AI Act creates a two-level obligation structure. Providers carry primary compliance obligations for the AI systems they develop. Deployers — organizations that use those systems — carry a secondary but substantial set of obligations that apply regardless of the provider's compliance posture.
Article 25: When Deployers Become Providers
Article 25 establishes three scenarios where deployers are deemed providers and must fulfill the full provider obligation set:
- Placing a high-risk AI system on the market or into service under the deployer's own name or trademark
- Making a substantial modification to a high-risk AI system obtained from a provider
- Using a general-purpose AI model to develop a high-risk AI application
Organizations that white-label vendor AI systems, add significant AI features to vendor platforms, or build high-risk applications on top of foundation models must treat themselves as providers for those systems and build the full compliance infrastructure.
Article 26: Core Deployer Obligations
Even where Article 25 does not apply, Article 26 imposes the following obligations on all deployers of high-risk AI systems:
Use high-risk AI systems in accordance with the instructions for use provided by the provider.
Assign human oversight to natural persons with the necessary competence, training, and authority.
Monitor AI system performance in the operational context and report concerns to the provider.
Retain logs generated by the high-risk AI system for at least 6 months (unless sector-specific law requires longer).
Conduct a GDPR DPIA before deploying high-risk AI systems that process personal data.
Report serious incidents to the provider and, for incidents occurring on EU territory, to the relevant market surveillance authority.
2. 5-Category Vendor Assessment Framework
The five assessment categories address the primary risk dimensions that determine whether a third-party AI vendor can be safely deployed in an EU AI Act context and whether your organization can fulfill its Article 26 obligations using the vendor's system.
| Category | Assessment Questions | Max Points |
|---|---|---|
| Regulatory Risk | 5 | 20 |
| Technical Risk | 5 | 20 |
| Data Risk | 5 | 20 |
| Operational Risk | 5 | 20 |
| Financial Risk | 5 | 20 |
| Total Vendor Risk Score | 25 | 100 |
3. Assessment Template: 25 Questions Across 5 Categories
Regulatory Risk (max 20 points)
Has the vendor provided a valid EU Declaration of Conformity for any high-risk AI systems?
Is the AI system CE marked and registered in the EU database where required?
Can the vendor provide technical documentation per Annex IV on request?
Has the vendor disclosed which Annex III category (if any) the system falls under?
Does the vendor have a documented post-market monitoring program?
Technical Risk (max 20 points)
Are accuracy, precision, recall, and reliability metrics disclosed with test methodology?
Has the system undergone adversarial robustness testing with results available?
Are cybersecurity testing results (penetration testing, vulnerability assessment) available?
Does the system have automatic logging capability meeting Article 12 requirements?
Are uptime SLAs quantified and backed by financial remedies?
Data Risk (max 20 points)
Has the vendor disclosed training data sources and data quality criteria?
Has bias testing been conducted across protected characteristics with results disclosed?
Does the vendor's data processing agreement satisfy GDPR Article 28 requirements?
Is data processing geographically restricted to the EU or contractually compliant with Chapter V GDPR transfers?
Are data retention and deletion obligations documented and enforceable?
Operational Risk (max 20 points)
Does the vendor have documented incident notification procedures with defined timelines?
Are change notification procedures documented — how far in advance will deployers be notified of material changes?
Does the vendor have a business continuity and disaster recovery plan with defined RTO/RPO?
Is dedicated compliance support available for EU AI Act-related queries?
Are exit provisions documented including data portability and transition assistance?
Financial Risk (max 20 points)
Has the vendor provided audited financial statements demonstrating financial stability?
Does the vendor maintain professional indemnity or technology errors & omissions insurance adequate for the deployment?
Are liability caps in the contract adequate to cover potential regulatory penalties (up to €35M or 7% global turnover)?
Are vendor funding concentration risks acceptable (not dependent on a single funding source)?
Does the contract include price stability provisions or caps on increases during the assessment period?
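The template's structure can be sketched as data. This is a minimal illustration, not part of the framework itself, assuming each question is worth up to 4 points so that a five-question category maxes out at 20 and the five categories sum to 100:

```python
# Hypothetical representation of the 25-question template.
# Assumption: each question is scored 0-4 points, so each
# five-question category has a maximum of 20 points.
POINTS_PER_QUESTION = 4

TEMPLATE = {          # questions per category
    "Regulatory Risk": 5,
    "Technical Risk": 5,
    "Data Risk": 5,
    "Operational Risk": 5,
    "Financial Risk": 5,
}

category_max = {c: n * POINTS_PER_QUESTION for c, n in TEMPLATE.items()}
total_max = sum(category_max.values())
print(category_max["Regulatory Risk"], total_max)  # 20 100
```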
4. Contractual Requirements for AI Vendors
Standard SaaS agreements are inadequate for EU AI Act-exposed deployments. The following clauses must be present in any vendor agreement for a high-risk AI system deployment.
1. Technical documentation access
Vendor must provide access to Annex IV technical documentation, conformity assessment records, and the EU Declaration of Conformity within 10 business days of written request. This right must survive contract termination for 10 years.
2. Incident notification
Vendor must notify deployer within 24 hours of becoming aware of any serious incident (as defined in Article 3(49)) involving the AI system, with full incident details within 72 hours. Vendor must cooperate with deployer's reporting obligations to market surveillance authorities.
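The 24-hour/72-hour timeline can be checked mechanically. A minimal sketch; the function and field names are illustrative, not from any standard clause library:

```python
from datetime import datetime, timedelta, timezone

def notification_deadlines(aware_at: datetime) -> dict:
    """Compute the contractual deadlines from the moment the vendor
    became aware of a serious incident: initial notice within 24 hours,
    full incident details within 72 hours."""
    return {
        "initial_notice_due": aware_at + timedelta(hours=24),
        "full_details_due": aware_at + timedelta(hours=72),
    }

d = notification_deadlines(datetime(2025, 3, 1, 9, 0, tzinfo=timezone.utc))
```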
3. Material change notification
Vendor must provide minimum 90 days' notice of any material change to the AI system, including model updates, training data changes, and changes to the intended purpose. Deployer must have the right to terminate without penalty if the change creates compliance obligations the deployer cannot fulfill.
4. Information provision
Vendor must provide, within 15 business days of written request, any information reasonably necessary for the deployer to fulfill its Article 26 obligations, including performance data from the deployer's use context if technically accessible.
5. Liability allocation
Contract must explicitly allocate liability for regulatory penalties and third-party claims arising from: (a) vendor's failure to provide accurate technical documentation; (b) vendor's failure to notify of incidents; (c) vendor's system not meeting declared performance specifications. Liability cap must be adequate relative to potential penalties (up to EUR 35M or 7% of global turnover).
6. Data processing agreement
A GDPR Article 28-compliant DPA must be in place before any personal data is processed. DPA must specify sub-processors, data transfer mechanisms, deletion timelines, and audit rights.
5. Red Flags in Vendor Responses
Vendor due diligence responses often reveal compliance gaps that marketing materials obscure. The following responses should be treated as red flags requiring escalation before procurement approval.
Cannot provide technical documentation
Any vendor claiming their AI system is not high-risk but unable to substantiate that classification with documented rationale should be treated with extreme caution. High-risk AI system providers have a legal obligation to provide technical documentation.
Refusal to accept incident notification obligations
Vendors that refuse contractual incident notification timelines typically do not have incident detection infrastructure that would enable timely notification. This is both a regulatory risk and an indicator of broader operational immaturity.
Accuracy claims without methodology disclosure
Accuracy, precision, or reliability claims that are not accompanied by test methodology, dataset description, and sample size are not verifiable. Vendors that cannot disclose their evaluation methodology may be reporting cherry-picked results.
No documented bias testing
For any AI system that produces outputs affecting individuals, the absence of documented bias testing across protected characteristics is a significant regulatory risk. This is not optional due diligence — Article 10 requires training data bias assessment.
GDPR Article 28 compliance cannot be confirmed
Any vendor unwilling to execute a GDPR-compliant DPA before accessing personal data is creating immediate GDPR exposure for the deployer, regardless of EU AI Act compliance.
Change notification measured in days, not weeks
A vendor that provides only 7–14 days' notice of model updates may make changes faster than the deployer can assess their compliance impact. Minimum 90 days for material changes is the appropriate standard for high-risk AI systems.
Liability cap below €1M for EU-facing deployments
EU AI Act penalties reach EUR 35M or 7% of global annual turnover for serious violations. A vendor liability cap of EUR 1M creates substantial unhedged regulatory exposure for the deployer.
6. Scoring Matrix: 0–100 Risk Score with Thresholds
Each of the 25 assessment questions is scored based on vendor evidence. The following scoring scale applies:
| Score | Meaning | Evidence Quality |
|---|---|---|
| 100% | Complete, verified evidence. Contractual obligations confirmed. | Documented, dated, verifiable artifacts provided |
| 75% | Substantial evidence with minor gaps | Evidence provided but not fully current or partially incomplete |
| 50% | Partial evidence or verbal commitment only | Vendor acknowledges requirement but cannot provide documentation |
| 25% | Minimal evidence, significant gaps | Vague or generic responses without substantiation |
| 0% | No evidence or red flag response | Vendor unable or unwilling to provide any evidence |
| Score | Risk Level | Interpretation | Decision |
|---|---|---|---|
| 75–100 | Low Risk | Proceed with standard contract review. Document rationale for approval. | Approve with standard monitoring |
| 50–74 | Moderate Risk | Material gaps identified. Define remediation requirements as contract conditions. | Conditional approval with remediation plan |
| 0–49 | High Risk | Significant regulatory exposure. Do not proceed without AI Risk Committee approval and documented risk acceptance. | Escalate or reject |
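The three thresholds can be expressed as a small classifier. A minimal sketch; the function name and return strings are illustrative, not part of the framework:

```python
def risk_tier(total_score: int) -> str:
    """Map a 0-100 vendor risk score to the decision thresholds above."""
    if not 0 <= total_score <= 100:
        raise ValueError("score must be in the range 0-100")
    if total_score >= 75:
        return "Low Risk: approve with standard monitoring"
    if total_score >= 50:
        return "Moderate Risk: conditional approval with remediation plan"
    return "High Risk: escalate or reject"

print(risk_tier(82))  # Low Risk: approve with standard monitoring
```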
7. FrictionMelt Friction Scoring for Vendor Adoption
Risk score and friction score are related but distinct. A vendor can score 85/100 on regulatory risk — technically compliant, good documentation, adequate contractual protections — but still present high friction barriers that create operational risk during deployment and ongoing operation.
FrictionMelt measures vendor adoption friction across five dimensions, each scored 0–20:
Integration Complexity (0–20)
API quality, SDK availability, connector ecosystem, documentation completeness, and sandbox availability. High scores indicate low friction — the vendor makes integration straightforward.
Compliance Readiness (0–20)
Quality of EU AI Act documentation package, responsiveness to compliance queries, dedicated compliance contacts, and track record of regulatory updates. High scores indicate a vendor actively supporting deployer compliance.
Change Frequency and Predictability (0–20)
How often the vendor modifies the system, advance notice provided, communication quality of change notifications, and ability to roll back if changes cause issues. High change frequency with short notice = high friction.
Support Responsiveness (0–20)
SLA quality, incident response track record, escalation path quality, and availability of senior technical support for compliance-critical issues.
Exit Barriers (0–20)
Data portability (formats, timelines, API export), contract lock-in provisions, transition assistance obligations, and clarity of data deletion procedures upon exit.
Combined Risk and Friction Decision Matrix
The most problematic vendor profile is high regulatory risk combined with high friction — a vendor that is both non-compliant and difficult to work with. The second most dangerous profile is low regulatory risk but high friction — a technically compliant vendor whose operational barriers make it impossible for the deployer to fulfill Article 26 obligations in practice.
FrictionMelt scores are particularly valuable for identifying vendors in the "compliant but unusable" quadrant — vendors who can pass paper-based due diligence but whose actual operational behavior will prevent the deployer from implementing required monitoring and incident reporting procedures.
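One way to sketch the combined decision matrix in code. This assumes the five 0–20 friction dimensions are summed to a 0–100 friction score (higher meaning lower friction, as with the risk score) and uses an illustrative cutoff of 75 on both axes; the quadrant labels are paraphrases, not official FrictionMelt terminology:

```python
def vendor_quadrant(risk_score: int, friction_score: int,
                    cutoff: int = 75) -> str:
    """Place a vendor in the combined risk/friction matrix.
    Higher scores are better on both axes (lower risk, lower friction)."""
    low_risk = risk_score >= cutoff
    low_friction = friction_score >= cutoff
    if low_risk and low_friction:
        return "proceed with standard review"
    if low_risk:
        return "compliant but high friction: verify Article 26 feasibility"
    if low_friction:
        return "remediate compliance gaps before deployment"
    return "highest concern: escalate or reject"

print(vendor_quadrant(85, 40))  # compliant but high friction: verify Article 26 feasibility
```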
8. Frequently Asked Questions
What are deployer obligations when using third-party AI under the EU AI Act?
What contractual clauses must AI vendor agreements include?
How should AI vendors be scored in a risk assessment?
What are the red flags to watch for in AI vendor due diligence?
What is FrictionMelt friction scoring for AI vendor adoption?
Related AI Risk Management Guides
AI Risk Assessment Framework
The pillar guide to EU AI Act risk assessment methodology and risk management systems
High-Risk AI Systems Classification Guide
How to determine if your AI system is high-risk under Annex III
AI Bias Detection and Mitigation in EU
Technical approaches to bias detection and Article 10 data governance compliance
