# Human Oversight – Art. 14

## Requirement
High-risk AI systems must be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which they are in use.
## Oversight Requirements
Persons to whom human oversight is assigned must be enabled to:
- Understand the relevant capabilities and limitations of the system and properly monitor its operation
- Remain aware of the possible tendency to automatically rely or over-rely on the system's output (automation bias)
- Correctly interpret the output of the system
- Decide not to use the system or to disregard, override or reverse the output
- Intervene in the operation or interrupt the system via a stop mechanism that brings it to a halt in a safe state
## BAUER GROUP Implementation
| Measure | Implementation |
|---|---|
| Stop mechanism | Kill switch / deactivation of the AI component |
| Automation Bias prevention | Warning notice in UI: "AI-generated result — human review required" |
| Output transparency | Display confidence scores, explainability where technically feasible |
| Training | Deployer training materials delivered as part of the product documentation |