AI Risk Management

AI Vendor Risk Assessment Template: Evaluating Third-Party AI Under the EU AI Act

Every AI system your organization purchases from a vendor carries compliance risk that the EU AI Act explicitly places on the deployer. Article 26 obligations — human oversight, log retention, incident reporting — apply regardless of what is in your vendor's marketing materials. This template provides a structured 5-category, 100-point vendor assessment framework for EU AI Act-exposed procurement decisions.

Updated November 11, 2025

1. Deployer Obligations Under Articles 25–26

The EU AI Act creates a two-level obligation structure. Providers carry primary compliance obligations for the AI systems they develop. Deployers — organizations that use those systems — carry a secondary but substantial set of obligations that apply regardless of the provider's compliance posture.

Article 25: When Deployers Become Providers

Article 25 establishes three scenarios where deployers are deemed providers and must fulfill the full provider obligation set:

  • Placing a high-risk AI system on the market or into service under the deployer's own name or trademark
  • Making a substantial modification to a high-risk AI system obtained from a provider
  • Using a general-purpose AI model to develop a high-risk AI application

Organizations that white-label vendor AI systems, add significant AI features to vendor platforms, or build high-risk applications on top of foundation models must treat themselves as providers for those systems and build the full compliance infrastructure.

Article 26: Core Deployer Obligations

Even where Article 25 does not apply, Article 26 imposes the following obligations on all deployers of high-risk AI systems:

  • Art. 26(1): Use high-risk AI systems in accordance with the provider's instructions for use, supported by appropriate technical and organisational measures.
  • Art. 26(2): Assign human oversight to natural persons with the necessary competence, training, and authority.
  • Art. 26(5): Monitor AI system performance in the operational context, report concerns to the provider, and report serious incidents to the provider and to the market surveillance authority of the Member State where the incident occurred (per Article 73).
  • Art. 26(6): Retain logs generated by the high-risk AI system for at least 6 months (unless sector-specific law requires longer).
  • Art. 26(9): Conduct a GDPR DPIA before deploying high-risk AI systems that process personal data, using the information supplied by the provider under Article 13.
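Of these, the log-retention floor is the most mechanically checkable. A minimal sketch in Python, assuming a purge-review workflow; the `purgeable` helper and the 183-day approximation of six months are illustrative assumptions, not terms from the Act:

```python
from datetime import datetime, timedelta, timezone

# Illustrative approximation of the six-month minimum retention period.
# Sector-specific law may require longer, so treat this as a floor.
MIN_RETENTION = timedelta(days=183)

def purgeable(entry_time: datetime, now: datetime) -> bool:
    """True if a log entry has aged past the minimum retention window."""
    return now - entry_time >= MIN_RETENTION

now = datetime(2025, 11, 11, tzinfo=timezone.utc)
old = datetime(2025, 1, 1, tzinfo=timezone.utc)      # ~10 months old
recent = datetime(2025, 10, 1, tzinfo=timezone.utc)  # ~6 weeks old
```

A deletion job would then purge only entries for which `purgeable` returns True, keeping everything inside the six-month window.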

2. 5-Category Vendor Assessment Framework

The five assessment categories address the primary risk dimensions that determine whether a third-party AI vendor can be safely deployed in an EU AI Act context and whether your organization can fulfill its Article 26 obligations using the vendor's system.

  • Regulatory Risk: max 20 points, 5 assessment questions
  • Technical Risk: max 20 points, 5 assessment questions
  • Data Risk: max 20 points, 5 assessment questions
  • Operational Risk: max 20 points, 5 assessment questions
  • Financial Risk: max 20 points, 5 assessment questions

Total vendor risk score: 100 points

3. Assessment Template: 25 Questions Across 5 Categories

Regulatory Risk (max 20 points)

  • R1 (5 pts): Has the vendor provided a valid EU Declaration of Conformity for any high-risk AI systems?
  • R2 (4 pts): Is the AI system CE marked and registered in the EU database where required?
  • R3 (4 pts): Can the vendor provide technical documentation per Annex IV on request?
  • R4 (4 pts): Has the vendor disclosed which Annex III category (if any) the system falls under?
  • R5 (3 pts): Does the vendor have a documented post-market monitoring program?

Technical Risk (max 20 points)

  • T1 (5 pts): Are accuracy, precision, recall, and reliability metrics disclosed with test methodology?
  • T2 (4 pts): Has the system undergone adversarial robustness testing with results available?
  • T3 (4 pts): Are cybersecurity testing results (penetration testing, vulnerability assessment) available?
  • T4 (4 pts): Does the system have automatic logging capability meeting Article 12 requirements?
  • T5 (3 pts): Are uptime SLAs quantified and backed by financial remedies?

Data Risk (max 20 points)

  • D1 (5 pts): Has the vendor disclosed training data sources and data quality criteria?
  • D2 (5 pts): Has bias testing been conducted across protected characteristics with results disclosed?
  • D3 (4 pts): Does the vendor's data processing agreement satisfy GDPR Article 28 requirements?
  • D4 (3 pts): Is data processing geographically restricted to the EU or contractually compliant with Chapter V GDPR transfers?
  • D5 (3 pts): Are data retention and deletion obligations documented and enforceable?

Operational Risk (max 20 points)

  • O1 (5 pts): Does the vendor have documented incident notification procedures with defined timelines?
  • O2 (4 pts): Are change notification procedures documented — how far in advance will deployers be notified of material changes?
  • O3 (4 pts): Does the vendor have a business continuity and disaster recovery plan with defined RTO/RPO?
  • O4 (4 pts): Is dedicated compliance support available for EU AI Act-related queries?
  • O5 (3 pts): Are exit provisions documented, including data portability and transition assistance?

Financial Risk (max 20 points)

  • F1 (5 pts): Has the vendor provided audited financial statements demonstrating financial stability?
  • F2 (5 pts): Does the vendor maintain professional indemnity or technology errors & omissions insurance adequate for the deployment?
  • F3 (5 pts): Are liability caps in the contract adequate to cover potential regulatory penalties (up to €35M or 7% of global turnover)?
  • F4 (3 pts): Are vendor funding concentration risks acceptable (not dependent on a single funding source)?
  • F5 (2 pts): Does the contract include price stability provisions or caps on increases during the assessment period?
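The 25 question weights can be encoded directly, which makes the 20-points-per-category arithmetic checkable and gives a starting point for automating the assessment. A minimal sketch in Python; the `WEIGHTS` layout and `total_score` helper are illustrative, only the point values come from this template:

```python
# Question weights from the template (category -> {question: max points}).
# Each category sums to 20, for a 100-point total.
WEIGHTS = {
    "Regulatory":  {"R1": 5, "R2": 4, "R3": 4, "R4": 4, "R5": 3},
    "Technical":   {"T1": 5, "T2": 4, "T3": 4, "T4": 4, "T5": 3},
    "Data":        {"D1": 5, "D2": 5, "D3": 4, "D4": 3, "D5": 3},
    "Operational": {"O1": 5, "O2": 4, "O3": 4, "O4": 4, "O5": 3},
    "Financial":   {"F1": 5, "F2": 5, "F3": 5, "F4": 3, "F5": 2},
}

def total_score(awarded: dict) -> int:
    """Sum awarded points across all 25 questions.

    Missing questions score 0; each question is capped at its maximum.
    """
    return sum(
        min(awarded.get(q, 0), max_pts)
        for questions in WEIGHTS.values()
        for q, max_pts in questions.items()
    )
```

A vendor file then reduces to a `{question_id: points}` mapping, and the capping in `total_score` guards against data-entry errors exceeding a question's maximum.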

4. Contractual Requirements for AI Vendors

Standard SaaS agreements are inadequate for EU AI Act-exposed deployments. The following clauses must be present in any vendor agreement for a high-risk AI system deployment.

1. Technical documentation access

Vendor must provide access to Annex IV technical documentation, conformity assessment records, and the EU Declaration of Conformity within 10 business days of written request. This right must survive contract termination for 10 years.

2. Incident notification

Vendor must notify deployer within 24 hours of becoming aware of any serious incident (as defined in Article 3(49)) involving the AI system, with full incident details within 72 hours. Vendor must cooperate with deployer's reporting obligations to market surveillance authorities.

3. Material change notification

Vendor must provide minimum 90 days' notice of any material change to the AI system, including model updates, training data changes, and changes to the intended purpose. Deployer must have the right to terminate without penalty if the change creates compliance obligations the deployer cannot fulfill.

4. Information provision

Vendor must provide, within 15 business days of written request, any information reasonably necessary for the deployer to fulfill its Article 26 obligations, including performance data from the deployer's use context if technically accessible.

5. Liability allocation

Contract must explicitly allocate liability for regulatory penalties and third-party claims arising from: (a) vendor's failure to provide accurate technical documentation; (b) vendor's failure to notify of incidents; (c) vendor's system not meeting declared performance specifications. Liability cap must be adequate relative to potential penalties (up to EUR 35M or 7% of global turnover).

6. Data processing agreement

A GDPR Article 28-compliant DPA must be in place before any personal data is processed. DPA must specify sub-processors, data transfer mechanisms, deletion timelines, and audit rights.
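During contract review, the six clause requirements and their timelines are easier to track as data than as prose. A minimal sketch, assuming a checklist-style review; the clause keys and the `missing_clauses` helper are illustrative naming choices, not standard terms:

```python
# Required clauses and key timelines from this template (illustrative keys).
REQUIRED_CLAUSES = {
    "technical_documentation_access": "10 business days; survives termination for 10 years",
    "incident_notification": "24 hours initial notice, full details within 72 hours",
    "material_change_notification": "minimum 90 days' notice",
    "information_provision": "15 business days from written request",
    "liability_allocation": "cap adequate vs. EUR 35M / 7% global turnover exposure",
    "data_processing_agreement": "GDPR Art. 28 DPA before personal data is processed",
}

def missing_clauses(contract_clauses: set) -> set:
    """Return required clauses not found in a reviewed contract."""
    return set(REQUIRED_CLAUSES) - contract_clauses
```

Any non-empty result from `missing_clauses` would block procurement sign-off until the gaps are negotiated in.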

5. Red Flags in Vendor Responses

Vendor due diligence responses often reveal compliance gaps that marketing materials obscure. The following responses should be treated as red flags requiring escalation before procurement approval.

Cannot provide technical documentation

Any vendor claiming their AI system is not high-risk but unable to substantiate that classification with documented rationale should be treated with extreme caution. High-risk AI system providers have a legal obligation to provide technical documentation.

Refusal to accept incident notification obligations

Vendors that refuse contractual incident notification timelines typically do not have incident detection infrastructure that would enable timely notification. This is both a regulatory risk and an indicator of broader operational immaturity.

Accuracy claims without methodology disclosure

Accuracy, precision, or reliability claims that are not accompanied by test methodology, dataset description, and sample size are not verifiable. Vendors that cannot disclose their evaluation methodology may be reporting cherry-picked results.

No documented bias testing

For any AI system that produces outputs affecting individuals, the absence of documented bias testing across protected characteristics is a significant regulatory risk. This is not optional due diligence — Article 10 requires training data bias assessment.

GDPR Article 28 compliance cannot be confirmed

Any vendor unwilling to execute a GDPR-compliant DPA before accessing personal data is creating immediate GDPR exposure for the deployer, regardless of EU AI Act compliance.

Change notification measured in days, not weeks

A vendor that provides only 7–14 days' notice of model updates may make changes faster than the deployer can assess their compliance impact. Minimum 90 days for material changes is the appropriate standard for high-risk AI systems.

Liability cap below €1M for EU-facing deployments

EU AI Act penalties reach EUR 35M or 7% of global annual turnover for serious violations. A vendor liability cap of EUR 1M creates substantial unhedged regulatory exposure for the deployer.

6. Scoring Matrix: 0–100 Risk Score with Thresholds

Each of the 25 assessment questions is scored based on vendor evidence. The following scoring scale applies:

  • Full marks: complete, verified evidence; contractual obligations confirmed. Evidence quality: documented, dated, verifiable artifacts provided.
  • 75%: substantial evidence with minor gaps. Evidence quality: evidence provided but not fully current or partially incomplete.
  • 50%: partial evidence or verbal commitment only. Evidence quality: vendor acknowledges the requirement but cannot provide documentation.
  • 25%: minimal evidence, significant gaps. Evidence quality: vague or generic responses without substantiation.
  • 0: no evidence or red-flag response. Evidence quality: vendor unable or unwilling to provide any evidence.
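The evidence-quality scale maps each response to a fraction of the question's maximum points. A minimal sketch; the level names and the choice to return unrounded fractional points are assumptions, and an assessor may prefer to round:

```python
# Evidence-quality levels as fractions of a question's maximum points.
LEVELS = {
    "full": 1.0,         # complete, verified evidence
    "substantial": 0.75, # minor gaps
    "partial": 0.5,      # verbal commitment only
    "minimal": 0.25,     # significant gaps
    "none": 0.0,         # no evidence or red flag
}

def award(max_pts: int, level: str) -> float:
    """Points awarded for one question, given its evidence-quality level."""
    return max_pts * LEVELS[level]
```

For example, a 5-point question with substantial-but-dated evidence scores 3.75; whether to carry the fraction or round is a policy choice the assessor should fix up front.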

  • 75–100 (Low Risk): proceed with standard contract review and document the rationale for approval. Action: approve with standard monitoring.
  • 50–74 (Moderate Risk): material gaps identified; define remediation requirements as contract conditions. Action: conditional approval with remediation plan.
  • 0–49 (High Risk): significant regulatory exposure; do not proceed without AI Risk Committee approval and documented risk acceptance. Action: escalate or reject.
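Once a total score exists, the three thresholds can be applied mechanically. A minimal sketch; the function name and return strings are illustrative:

```python
def risk_tier(score: float) -> str:
    """Map a 0-100 vendor risk score to the template's three tiers."""
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    if score >= 75:
        return "Low Risk: approve with standard monitoring"
    if score >= 50:
        return "Moderate Risk: conditional approval with remediation plan"
    return "High Risk: escalate or reject"
```

The boundary cases matter in practice: 75 lands in Low Risk and 74 in Moderate, so a single point of evidence quality can change the approval path.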

7. FrictionMelt Friction Scoring for Vendor Adoption

Risk score and friction score are related but distinct. A vendor can score 85/100 on regulatory risk — technically compliant, good documentation, adequate contractual protections — but still present high friction barriers that create operational risk during deployment and ongoing operation.

FrictionMelt measures vendor adoption friction across five dimensions, each scored 0–20:

Integration Complexity

0–20

API quality, SDK availability, connector ecosystem, documentation completeness, and sandbox availability. High scores indicate low friction — the vendor makes integration straightforward.

Compliance Readiness

0–20

Quality of EU AI Act documentation package, responsiveness to compliance queries, dedicated compliance contacts, and track record of regulatory updates. High scores indicate a vendor actively supporting deployer compliance.

Change Frequency and Predictability

0–20

How often the vendor modifies the system, advance notice provided, communication quality of change notifications, and ability to roll back if changes cause issues. High change frequency with short notice = high friction.

Support Responsiveness

0–20

SLA quality, incident response track record, escalation path quality, and availability of senior technical support for compliance-critical issues.

Exit Barriers

0–20

Data portability (formats, timelines, API export), contract lock-in provisions, transition assistance obligations, and clarity of data deletion procedures upon exit.

Combined Risk and Friction Decision Matrix

The most problematic vendor profile is high regulatory risk combined with high friction — a vendor that is both non-compliant and difficult to work with. The second most dangerous profile is low regulatory risk but high friction — a technically compliant vendor whose operational barriers make it impossible for the deployer to fulfill Article 26 obligations in practice.

FrictionMelt scores are particularly valuable for identifying vendors in the "compliant but unusable" quadrant — vendors who can pass paper-based due diligence but whose actual operational behavior will prevent the deployer from implementing required monitoring and incident reporting procedures.
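Assuming both scores run 0–100 with higher meaning better (lower regulatory risk, lower friction), the combined matrix reduces to a quadrant lookup. A minimal sketch; the 75-point cut-offs and quadrant labels are illustrative defaults, not prescribed by the template:

```python
def vendor_quadrant(risk_score: float, friction_score: float,
                    risk_ok: float = 75, friction_ok: float = 75) -> str:
    """Place a vendor in the combined risk/friction decision matrix.

    Higher risk_score = lower regulatory risk; higher friction_score =
    lower adoption friction (both 0-100). Cut-offs are assumptions.
    """
    low_risk = risk_score >= risk_ok
    low_friction = friction_score >= friction_ok
    if low_risk and low_friction:
        return "compliant and usable"
    if low_risk:
        return "compliant but unusable"      # passes paper due diligence only
    if low_friction:
        return "usable but non-compliant"
    return "non-compliant and high friction"  # worst case: escalate or reject
```

The "compliant but unusable" branch is the quadrant the prose above warns about: a vendor that clears paper-based due diligence but whose operational behavior blocks Article 26 monitoring and incident reporting in practice.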

8. Frequently Asked Questions

What are deployer obligations when using third-party AI under the EU AI Act?
Article 26 establishes that deployers must: implement human oversight as specified in the instructions for use; monitor AI system performance in their operational context; maintain logs for at least 6 months; conduct a GDPR DPIA if personal data is processed; and report serious incidents to the provider and market surveillance authority. Deployers also become providers — taking on the full provider obligation set — if they substantially modify the system or deploy it under their own name (Article 25).
What contractual clauses must AI vendor agreements include?
Required clauses include: technical documentation access rights; incident notification obligations (24 hours for serious incidents); material change notification (minimum 90 days); information provision obligations for Article 26 compliance; liability allocation for regulatory penalties; and a GDPR Article 28-compliant DPA before any personal data is processed.
How should AI vendors be scored in a risk assessment?
Score vendors across five categories: Regulatory (EU AI Act compliance posture), Technical (system accuracy, security testing), Data (training data governance, bias testing), Operational (SLA terms, business continuity), and Financial (stability, insurance, liability). Each category is scored 0–20, producing a total out of 100. Vendors below 50 present high regulatory exposure for EU AI Act-exposed deployments.
What are the red flags to watch for in AI vendor due diligence?
Key red flags include: inability to provide technical documentation; refusal to accept contractual incident notification obligations; accuracy claims without methodology disclosure; no documented bias testing; GDPR DPA compliance that cannot be confirmed; material change notification well short of the 90-day standard; and vendor liability caps below €1M for EU-facing deployments.
What is FrictionMelt friction scoring for AI vendor adoption?
FrictionMelt scores vendor adoption friction across five dimensions: integration complexity, compliance readiness, change frequency and predictability, support responsiveness, and exit barriers. The friction score complements the risk score — a technically compliant vendor with high friction barriers can make it operationally impossible for the deployer to fulfill Article 26 obligations in practice.


Harish Kumar

Founder & CEO, Quantamix Solutions B.V.

18+ years in enterprise AI across Amazon Ring, Philips (200 GenAI Champions), ING Bank, Rabobank (€400B+ AUM), Deutsche Bank, and Reserve Bank of India. FRM, PMP, GCP certified. Patent holder (EP26162901.8). Published researcher (SSRN 6359818). Building traceable, auditable AI for regulated industries.