Risk Classification – Overview
Four Risk Tiers of the AI Act
The AI Act follows a risk-based approach. Obligations scale with the level of risk:
| Tier | Description | Obligations | BAUER GROUP Relevance |
|---|---|---|---|
| Unacceptable Risk | AI practices that threaten fundamental rights, safety or democratic values | Prohibited (Art. 5) | Screen all products against the prohibition catalogue |
| High Risk | AI systems in critical areas (Annex I/III) | Full compliance package (Art. 8–49) | Go/no-go decision per product |
| Limited Risk | AI systems that interact with natural persons or generate synthetic content | Transparency obligations (Art. 50) | Label chatbots and AI-generated content |
| Minimal Risk | All other AI systems | No specific obligations | No action needed (e.g. spam filters, AI-assisted games) |
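For an initial portfolio screening, the four tiers can be mirrored in a small lookup structure. The following Python sketch is illustrative only: `RiskTier` and `OBLIGATIONS` are our own shorthand for screening purposes, not terms defined by the Act.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers of the AI Act (simplified for screening)."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices, Art. 5
    HIGH = "high"                  # Annex I / Annex III systems, Art. 6
    LIMITED = "limited"            # transparency obligations, Art. 50
    MINIMAL = "minimal"            # no specific obligations

# Shorthand mapping of each tier to its core obligation (illustrative).
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "Prohibited (Art. 5) - remove from portfolio",
    RiskTier.HIGH: "Full compliance package (Art. 8-49)",
    RiskTier.LIMITED: "Transparency obligations (Art. 50)",
    RiskTier.MINIMAL: "No specific obligations",
}
```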
High-Risk Classification (Art. 6)
An AI system is high-risk if either of the following paths applies:
Path 1 – Art. 6(1) (Product Safety): The system is a safety component of a product (or is itself a product) covered by the EU harmonisation legislation listed in Annex I Section A, AND that product must undergo a third-party conformity assessment.
Path 2 – Art. 6(2) (Annex III): The system falls under one of the eight high-risk use-case areas listed in Annex III.
Exception (Art. 6(3))
An Annex III system is exceptionally not high-risk if it does not pose a significant risk of harm to health, safety or fundamental rights, in particular by not materially influencing the outcome of decision-making, AND it fulfils at least one of the conditions in Art. 6(3)(a)–(d):
- it performs only a narrow procedural task,
- it improves the result of a previously completed human activity,
- it detects decision-making patterns or deviations without replacing or influencing the prior human assessment, or
- it performs a purely preparatory task for an assessment under Annex III.
WARNING
If the AI system performs profiling of natural persons, it is always high-risk — the exception does not apply.
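Both classification paths, the Art. 6(3) exception and the profiling override combine into a small decision function. The sketch below assumes the legal assessment has already been made: the input flags (`is_safety_component`, `annex_iii_category`, `meets_art_6_3_condition`, etc.) are hypothetical names for its results, and the code does not replace that assessment.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AssessmentInput:
    """Results of the legal assessment for one AI system (hypothetical fields)."""
    is_safety_component: bool           # safety component of an Annex I Section A product?
    needs_third_party_conformity: bool  # product requires third-party conformity assessment?
    annex_iii_category: Optional[str]   # matching Annex III area, or None
    performs_profiling: bool            # profiling of natural persons?
    meets_art_6_3_condition: bool       # at least one Art. 6(3)(a)-(d) condition fulfilled?

def is_high_risk(a: AssessmentInput) -> bool:
    # Path 1 - Art. 6(1): safety component + third-party conformity assessment.
    if a.is_safety_component and a.needs_third_party_conformity:
        return True
    # Path 2 - Art. 6(2): listed in Annex III.
    if a.annex_iii_category is not None:
        # Profiling override: always high-risk, the exception never applies.
        if a.performs_profiling:
            return True
        # Exception - Art. 6(3): not high-risk if a derogation condition holds.
        return not a.meets_art_6_3_condition
    return False
```

For example, an Annex III recruitment-screening system that profiles applicants returns high-risk regardless of the derogation flags.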
Classification Obligation
Providers who consider an Annex III system not to be high-risk must document this assessment before placing the system on the market or putting it into service (Art. 6(4)) and must register the system in the EU database (Art. 49(2)).
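To keep the Art. 6(4) documentation auditable, the assessment can be captured as a structured record before market placement. A minimal sketch, assuming a simple internal schema; the field names and the `register_in_eu_database` stub are illustrative, not prescribed by the Act or any existing registration interface.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Article6_4Assessment:
    """Documented non-high-risk assessment per Art. 6(4) (illustrative schema)."""
    system_name: str
    annex_iii_category: str    # the Annex III area that was considered
    derogation_condition: str  # which Art. 6(3)(a)-(d) condition applies
    rationale: str             # why no significant risk of harm is posed
    assessed_on: date
    assessor: str

def register_in_eu_database(assessment: Article6_4Assessment) -> None:
    """Placeholder for the Art. 49(2) registration step (the real process is portal-based)."""
    print(f"Registering '{assessment.system_name}': {assessment.derogation_condition}")
```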