This document is under active development and has not been finalised.

Risk Classification – Overview

Four Risk Tiers of the AI Act

The AI Act follows a risk-based approach. Obligations scale with the level of risk:

| Tier | Description | Obligations | BAUER GROUP Relevance |
|------|-------------|-------------|------------------------|
| Unacceptable Risk | AI practices that threaten fundamental rights, safety or democratic values | Prohibited (Art. 5) | Screen all products against the prohibition catalogue |
| High Risk | AI systems in critical areas (Annex I/III) | Full compliance package (Art. 8–49) | Go/no-go decision per product |
| Limited Risk | AI systems that interact with natural persons | Transparency obligations (Art. 50) | Label chatbots and AI-generated content |
| Minimal Risk | All other AI systems | No specific obligations | Spam filters, AI-assisted games, etc. |

High-Risk Classification (Art. 6)

An AI system is high-risk if either of the following paths applies:

Path 1 – Art. 6(1) (Product Safety): The system is a safety component of a product falling under EU harmonisation legislation (Annex I Section A) AND requires a third-party conformity assessment.

Path 2 – Art. 6(2) (Annex III): The system falls under one of the eight high-risk categories of Annex III.

Exception (Art. 6(3))

An Annex III system is not high-risk if it:

  • Does not perform profiling of natural persons, AND
  • Performs only a narrow procedural task (preparatory, not decision-making), AND
  • Does not significantly influence or replace human decision-making

WARNING

If the AI system performs profiling of natural persons, it is always high-risk — the exception does not apply.
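The classification logic above (Path 1, Path 2, the Art. 6(3) exception, and the profiling override) can be sketched as a simple decision function. This is a minimal illustration only: the attribute names are hypothetical, the real Art. 6(3) test is more granular, and each condition requires a documented legal assessment.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    """Hypothetical screening attributes; not an official schema."""
    safety_component_annex_i: bool   # Path 1: safety component under Annex I Section A
    third_party_assessment: bool     # Path 1: third-party conformity assessment required
    annex_iii_category: bool         # Path 2: falls under an Annex III category
    performs_profiling: bool         # Art. 6(3) override: profiling of natural persons
    narrow_procedural_task: bool     # Art. 6(3): preparatory, not decision-making
    influences_human_decisions: bool # Art. 6(3): significantly influences/replaces decisions

def is_high_risk(s: AISystem) -> bool:
    # Path 1 - Art. 6(1): safety component AND third-party conformity assessment
    if s.safety_component_annex_i and s.third_party_assessment:
        return True
    # Path 2 - Art. 6(2): Annex III category
    if s.annex_iii_category:
        # Profiling override: an Annex III system that profiles natural
        # persons is always high-risk; the exception does not apply.
        if s.performs_profiling:
            return True
        # Art. 6(3) exception: narrow procedural task without significant
        # influence on human decision-making
        if s.narrow_procedural_task and not s.influences_human_decisions:
            return False
        return True
    return False
```

A screening run for a chatbot that merely drafts replies for human review would pass through the exception branch, while any profiling system falls straight into the high-risk tier.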

Classification Obligation

Providers that classify an Annex III system as non-high-risk must document this assessment before placing the system on the market (Art. 6(4)) and register the system in the EU database (Art. 49(2)).

See: Non-High-Risk Assessment Template

Documentation licensed under CC BY-NC 4.0 · Code licensed under MIT