1. Regulatory Context
The EU AI Act (Regulation (EU) 2024/1689) employs a four-tier risk classification: unacceptable risk (prohibited), high risk (regulated), limited risk (transparency obligations), and minimal risk (no specific obligations). The prohibited tier is defined in Article 5 of the regulation and represents AI applications that the European legislature considers incompatible with EU values and fundamental rights.
The prohibition took effect on 2 February 2025 — just six months after the regulation's entry into force on 1 August 2024. This unusually short transition period reflects the legislative judgment that these practices pose such severe risks that immediate action was necessary. The Commission's impact assessment accompanying the proposal cited threats to human dignity, non-discrimination, privacy, and democratic governance.
Enforcement Is Active
As of March 2026, the prohibited practices ban has been in force for over 13 months. National market surveillance authorities in multiple member states have begun conducting audits. The penalties are the highest in the regulation: up to €35 million or 7% of worldwide annual turnover, whichever is greater. For context, GDPR's maximum fine is €20 million or 4% of turnover.
2. All 8 Prohibited Practices at a Glance
| # | Prohibited Practice | Article | Exceptions? | Example |
|---|---|---|---|---|
| 1 | Subliminal manipulation | Art. 5(1)(a) | No | Dark-pattern AI that uses subconscious audio/visual cues to influence purchasing decisions causing financial harm |
| 2 | Exploitation of vulnerabilities | Art. 5(1)(b) | No | AI targeting elderly users with high-interest financial products exploiting age-related cognitive decline |
| 3 | Social scoring | Art. 5(1)(c) | No | Government AI rating citizens based on social media activity to determine access to public services |
| 4 | Individual criminal risk assessment | Art. 5(1)(d) | Yes* | AI predicting likelihood of criminal behavior solely from personality traits or demographics |
| 5 | Untargeted facial recognition scraping | Art. 5(1)(e) | No | Building facial recognition databases by scraping images from social media or public CCTV |
| 6 | Emotion recognition in workplace/education | Art. 5(1)(f) | Yes** | AI monitoring employee facial expressions to detect engagement levels during meetings |
| 7 | Biometric categorization by sensitive attributes | Art. 5(1)(g) | No | AI inferring political opinions or sexual orientation from biometric data like gait analysis |
| 8 | Real-time remote biometric identification in public | Art. 5(1)(h) | Yes*** | Live facial recognition CCTV in city centers identifying every passerby |
* Permitted when augmenting human assessment based on objective, verifiable facts directly linked to criminal activity.
** Permitted for medical or safety purposes.
*** Permitted for specific law enforcement scenarios with prior judicial or independent administrative authorization.
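For a first-pass internal audit, the table above can be encoded as a simple lookup. The structure, keys, and tags below are illustrative rather than an official taxonomy, and the screening output is a starting point for legal review, not a determination:

```python
# Illustrative screening table for the eight Article 5 prohibitions.
# Keys, field names, and tags are hypothetical; legal review is still required.
PROHIBITED_PRACTICES = {
    "subliminal_manipulation":      {"article": "5(1)(a)", "exceptions": False},
    "vulnerability_exploitation":   {"article": "5(1)(b)", "exceptions": False},
    "social_scoring":               {"article": "5(1)(c)", "exceptions": False},
    "criminal_risk_assessment":     {"article": "5(1)(d)", "exceptions": True},
    "facial_recognition_scraping":  {"article": "5(1)(e)", "exceptions": False},
    "emotion_recognition_work_edu": {"article": "5(1)(f)", "exceptions": True},
    "biometric_categorization":     {"article": "5(1)(g)", "exceptions": False},
    "realtime_remote_biometric_id": {"article": "5(1)(h)", "exceptions": True},
}

def flag(system_tags):
    """Return the Article 5 categories that a system's tags intersect with."""
    return sorted(tag for tag in system_tags if tag in PROHIBITED_PRACTICES)
```

A flagged system then moves to exception analysis (Section 12, Step 3); an empty result does not by itself establish compliance.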
3. Subliminal Manipulation
Article 5(1)(a) prohibits AI systems that deploy subliminal techniques beyond a person's consciousness, or purposely manipulative or deceptive techniques, with the objective or effect of materially distorting a person's behavior in a manner that causes or is reasonably likely to cause that person or another person significant harm.
The key legal elements are: (1) the technique operates below conscious awareness or is purposely manipulative/deceptive, (2) it materially distorts behavior (not merely influences preferences), and (3) it causes or is reasonably likely to cause significant harm — physical, psychological, or financial.
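These cumulative elements can be read as a checklist. The sketch below is illustrative only; each boolean input stands in for a legal conclusion that itself requires human analysis:

```python
def art5_1a_engaged(subliminal: bool, manipulative_or_deceptive: bool,
                    materially_distorts: bool, significant_harm_likely: bool) -> bool:
    """Rough encoding of the three Article 5(1)(a) elements (not legal advice)."""
    # Element (1) is disjunctive; elements (1)-(3) are cumulative.
    technique = subliminal or manipulative_or_deceptive
    return technique and materially_distorts and significant_harm_likely
```

Note that mere influence on preferences without material distortion, or distortion without significant harm, does not engage the prohibition.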
Practical Examples
- AI-driven interfaces embedding subliminal visual or auditory signals to influence purchasing decisions without user awareness
- Recommendation engines using personalized psychological profiling to drive compulsive behavior (e.g., gambling addiction, excessive spending) through deliberate exploitation of cognitive biases
- AI chatbots designed to simulate emotional connection and then leverage that connection to extract financial commitments
Boundary Clarification
Standard advertising personalization and A/B testing are not prohibited under this article. The prohibition targets techniques that operate subliminally (below consciousness) or are purposely deceptive and cause significant harm. Persuasive design that operates transparently within conscious awareness generally falls outside scope, though it may raise other regulatory concerns under consumer protection law.
4. Exploitation of Vulnerabilities
Under Article 5(1)(b), this prohibition targets AI systems that exploit vulnerabilities of specific persons or groups due to their age, disability, or social or economic situation. The distortion must be material and the harm must be significant.
The regulation specifically names three vulnerability categories:
- Age — Includes both children and elderly individuals; examples include AI targeting children with manipulative content or elderly persons with deceptive financial products
- Disability — Physical, sensory, cognitive, or psychological disabilities that may reduce capacity to resist AI-driven manipulation
- Social or economic situation — Financial distress, social isolation, immigration status, housing instability, or other conditions of vulnerability
Research by the European Commission's Joint Research Centre (JRC, 2022) documented cases of AI-driven predatory lending platforms that specifically targeted users in financial distress with high-interest loan offers, using behavioral signals of financial vulnerability to time and personalize offers. Such systems fall squarely within this prohibition.
5. Social Scoring
Article 5(1)(c) prohibits AI systems that evaluate or classify natural persons or groups of persons over a certain period based on their social behavior or known, inferred, or predicted personal or personality characteristics, where the resulting social score leads to detrimental or unfavourable treatment in social contexts unrelated to the contexts in which the data was originally generated or collected, or treatment that is unjustified or disproportionate to the social behavior or its gravity.
Although the Commission's 2021 proposal limited this prohibition to public authorities, the final regulation covers both public and private actors (Recital 31). The Chinese Social Credit System is the most cited reference case. The ban covers government agencies directly operating such systems, private companies operating them on behalf of public authorities, and private deployment of general-purpose social scores.
Private Sector Scoring
Private-sector scoring systems carried out for a specific, lawful purpose (such as credit scoring by financial institutions, customer loyalty scores, or insurance risk assessments) generally fall outside the prohibition, because they do not produce a general-purpose social score applied in unrelated contexts. However, some of these systems qualify as high-risk AI under Annex III and face Article 9–15 requirements from August 2026. Creditworthiness assessment AI, for example, is expressly classified as high-risk.
6. Individual Criminal Risk Assessment
Article 5(1)(d) prohibits AI systems that assess the risk of a natural person committing a criminal offense, based solely on the profiling of that person or on assessing their personality traits and characteristics. This targets predictive policing tools that make individual-level predictions based on profiling rather than objective evidence.
The Exception
The prohibition does not apply to AI systems that augment human assessment based on objective, verifiable facts directly linked to criminal activity. For example, an AI system that analyzes evidence in an ongoing investigation (financial transaction patterns, communication metadata linked to a known criminal network) to support a human investigator's assessment is permitted. The key distinction is: profiling-only = prohibited; evidence-augmented human judgment = permitted.
Recital 42 of the regulation clarifies that crime-analytics tools used to identify patterns and correlations in large datasets to support investigations (without targeting specific individuals based on profiling) are outside the scope of this prohibition.
7. Untargeted Facial Recognition Scraping
Article 5(1)(e) prohibits the creation of facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage. This directly targets companies like Clearview AI, which built a facial recognition database of over 30 billion images scraped from public internet sources (Clearview AI marketing materials, 2023; multiple GDPR enforcement actions in France, Italy, Greece, and the UK, 2021–2023).
The prohibition has no exceptions. Any organization that has built or is building a facial recognition database through untargeted scraping — from social media platforms, publicly available photo galleries, CCTV feeds, or any other mass-collection mechanism — is in violation of the EU AI Act. The regulation does not grandfather existing databases: if the database exists and was built through untargeted scraping, its use within the EU is prohibited.
8. Emotion Recognition in Workplace & Education
Article 5(1)(f) prohibits the use of AI systems to infer emotions of natural persons in the areas of workplace and educational institutions. This targets AI tools that analyze facial expressions, voice patterns, body language, or physiological signals to determine emotional states of employees or students.
What Is Banned
- AI monitoring employee facial expressions during video calls to measure “engagement” or “attention”
- AI analyzing student webcam footage during online exams to detect “stress” or “deception”
- Wearable-based AI systems used by employers to track employee emotional states through physiological signals (heart rate variability, skin conductance)
- Voice analysis tools used in HR to assess candidate “enthusiasm” or “confidence” during interviews
Permitted Exceptions
Emotion recognition is permitted in workplace and educational settings when used for medical purposes (e.g., detecting pain in patients, monitoring mental health conditions with consent in a clinical context) or safety purposes (e.g., monitoring driver drowsiness in commercial transport, detecting fatigue in safety-critical industrial roles).
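The rule and its exceptions reduce to a small decision function. The context and purpose labels below are hypothetical shorthand; what counts as a "medical" or "safety" use in practice needs legal assessment:

```python
def emotion_recognition_in_scope(context: str, purpose: str) -> bool:
    """True if the Article 5(1)(f) prohibition applies (illustrative sketch)."""
    if context not in {"workplace", "education"}:
        return False  # other contexts fall outside this specific prohibition
    # Within workplace/education, only medical and safety purposes are exempt.
    return purpose not in {"medical", "safety"}
```

A system flagged as out of scope here may still face transparency obligations under the limited-risk tier.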
Guidance from the European Data Protection Board (Guidelines 05/2022) highlighted that emotion recognition technology produces unreliable results across demographic groups, with accuracy rates varying by 20–30% depending on ethnicity, gender, and cultural background. This scientific concern underpinned the legislative decision to prohibit these systems in workplace and educational contexts.
9. Biometric Categorization by Sensitive Attributes
Article 5(1)(g) prohibits AI systems that categorize natural persons individually based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation. The only carve-out is narrow: the prohibition does not cover labelling or filtering of lawfully acquired biometric datasets, or categorization of biometric data in the area of law enforcement.
The regulation targets the use of biometric data (facial features, voice patterns, gait analysis, fingerprints, iris patterns) as input for inferring sensitive personal attributes that fall under Article 9 of the GDPR (special categories of personal data). This includes both direct inference (“this person's facial features suggest ethnicity X”) and proxy-based inference (“this gait pattern correlates with religious practice Y”).
Note: Biometric categorization for non-sensitive attributes (e.g., estimating age range for content restriction purposes) is not prohibited but may fall under limited-risk transparency obligations.
10. Real-Time Remote Biometric Identification in Public Spaces
Article 5(1)(h) prohibits real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes. “Real-time” means the identification occurs without significant delay — the system identifies individuals as they pass through the monitored area. “Publicly accessible” includes streets, parks, transport hubs, shopping centers, and any space accessible to the general public.
Three Narrow Exceptions (Art. 5(2)–(3))
Real-time biometric identification is permitted only for law enforcement and only in three specific scenarios:
- Searching for specific victims — Abduction victims, victims of human trafficking, or missing persons, including missing children
- Preventing imminent terrorist threats — Where there is a specific, substantial, and imminent threat to the life or physical safety of natural persons, or a genuine and present or foreseeable threat of a terrorist attack
- Locating suspects of serious crimes — Suspects of criminal offenses carrying a maximum prison term of at least 4 years in the member state concerned (e.g., murder, human trafficking, sexual exploitation, organized crime, terrorism)
Strict Procedural Requirements
Even where exceptions apply, use requires prior authorization by a judicial authority or an independent administrative authority. In cases of duly justified urgency, use may begin without prior authorization, but authorization must be sought within 24 hours. If authorization is refused, all data and output must be deleted immediately. Member states may impose stricter rules than the minimum EU requirements.
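The urgency procedure above can be sketched as a status check. Field names are hypothetical and the returned strings are illustrative shorthand for the obligations described in the text:

```python
from datetime import datetime, timedelta
from typing import Optional

# Art. 5(3): in duly justified urgency, authorization must be requested
# at the latest within 24 hours of the start of use.
AUTH_DEADLINE = timedelta(hours=24)

def urgent_use_status(started_at: datetime,
                      auth_requested_at: Optional[datetime],
                      auth_granted: Optional[bool]) -> str:
    """Illustrative status check for the Article 5(3) urgency procedure."""
    if auth_requested_at is None or auth_requested_at - started_at > AUTH_DEADLINE:
        return "non-compliant: authorization not requested within 24 hours"
    if auth_granted is False:
        return "refused: stop use and delete all data and output immediately"
    if auth_granted is None:
        return "pending: authorization requested, decision outstanding"
    return "authorized"
```

Member states may layer stricter national rules on top of this minimum procedure, so the check above is a floor, not a ceiling.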
11. Penalties for Prohibited Practice Violations
Violations of the prohibited practices provisions carry the highest penalties under the EU AI Act:
| Organization Type | Fixed Amount | 7% of Turnover | Applicable Maximum |
|---|---|---|---|
| Large enterprise (€1B turnover) | €35,000,000 | €70,000,000 | €70M (percentage is higher) |
| Mid-size enterprise (€50M turnover) | €35,000,000 | €3,500,000 | €35M (fixed is higher) |
| SME (€5M turnover) | €35,000,000 | €350,000 | €350K (Art. 99(6): lower amount applies to SMEs) |
For comparison, GDPR's maximum fine is €20 million or 4% of global annual turnover. The GDPR Enforcement Tracker (2024 data) reports that the average large GDPR fine is approximately €35 million, indicating that regulators are not hesitant to impose substantial penalties. The AI Act's higher ceiling (€35M / 7%) signals that EU legislators consider prohibited AI practices at least as serious as GDPR violations.
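The ceiling described above (the higher of €35 million or 7% of worldwide turnover, with the lower of the two applying to SMEs under Article 99(6)) can be computed directly; a minimal sketch:

```python
def max_fine_eur(worldwide_turnover_eur: int, is_sme: bool = False) -> int:
    """Ceiling of a prohibited-practice fine under Article 99 (in euros).

    Sketch under the rules described above: EUR 35m or 7% of worldwide
    annual turnover, whichever is higher; for SMEs, whichever is lower.
    """
    fixed = 35_000_000
    percentage = worldwide_turnover_eur * 7 // 100  # integer euros
    return min(fixed, percentage) if is_sme else max(fixed, percentage)

print(max_fine_eur(1_000_000_000))            # 70000000 (percentage prong)
print(max_fine_eur(50_000_000))               # 35000000 (fixed prong)
print(max_fine_eur(5_000_000, is_sme=True))   # 350000 (SME lower-of rule)
```

This is the statutory maximum, not a prediction of actual fines, which authorities set case by case.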
For a complete breakdown of the penalty structure, see EU AI Act Penalties: What European Companies Risk for Non-Compliance.
12. What Companies Must Do Now
If your organization has not already audited its AI systems against the prohibited practices list, this should be treated as an urgent priority. The following steps provide a practical framework:
Step 1: Inventory All AI Systems
Create a comprehensive list of every AI system deployed, in development, or under procurement across your organization. Include third-party AI systems used under license or API.
Estimated: 1-2 weeks
Step 2: Screen Against Article 5 Categories
Evaluate each AI system against all 8 prohibited practice categories. Pay particular attention to systems involving biometric data, employee/student monitoring, and user behavior influence. Document the analysis for each system.
Estimated: 1-2 weeks
Step 3: Assess Exception Applicability
For systems that potentially fall within prohibited categories, assess whether any statutory exception applies. Engage legal counsel for borderline cases. Document the legal reasoning.
Estimated: 1 week
Step 4: Decommission or Modify
Systems that are prohibited with no applicable exception must be immediately decommissioned. Systems that can be modified to fall outside the prohibited scope (e.g., removing emotion recognition features from workplace monitoring) should be modified and re-assessed.
Estimated: 1-4 weeks
Step 5: Establish Ongoing Monitoring
Implement a process to screen new AI systems and procurements against prohibited practices before deployment. Integrate this check into your AI governance framework.
Estimated: Ongoing
13. Frequently Asked Questions
Are there any exceptions to the prohibited AI practices?
Yes, for three of the eight practices: individual criminal risk assessment (where AI augments human assessment based on objective, verifiable facts), emotion recognition (medical or safety purposes), and real-time remote biometric identification (three narrow law enforcement scenarios with prior authorization). The other prohibitions are absolute.
Who enforces the ban on prohibited AI practices?
National market surveillance authorities designated by each member state, with EU-level coordination through the European Artificial Intelligence Board and the Commission's AI Office.
What should we do if we discover we are using a prohibited system?
Decommission it immediately, or modify it so that it falls outside the prohibited scope and re-assess, and document the analysis. Continued use risks fines of up to €35 million or 7% of worldwide annual turnover.
Can we still use emotion recognition AI after the ban?
Not in workplace or educational settings, except for medical or safety purposes. Use in other contexts is not covered by this prohibition but may trigger transparency or high-risk obligations.
Related Articles
The Complete EU AI Act Compliance Guide
Pillar guide covering the full regulation: risk classifications, conformity assessments, and implementation roadmap
EU AI Act Penalties & Enforcement
The 3-tier penalty structure, enforcement mechanisms, and how fines are calculated
High-Risk AI Systems Classification
How to classify your AI system under Annex III and what requirements apply
