The EU AI Act is no longer a future regulation; it is active law with real penalties. If your AI product is used by anyone in the European Union, you are subject to its requirements regardless of where your company is based. Fines for the most serious violations reach €35 million or 7% of global annual turnover, whichever is higher, with lower tiers for lesser infringements. Here is what you need to know and do right now.
The Risk Classification System You Must Understand
- Unacceptable Risk (Banned): Social scoring systems, real-time remote biometric identification in publicly accessible spaces (with narrow law-enforcement exceptions), and AI that exploits psychological vulnerabilities
- High Risk: AI in hiring, credit scoring, medical devices, critical infrastructure, education assessment. Requires conformity assessments, transparency, human oversight
- Limited Risk: Chatbots must disclose that users are interacting with AI, and deepfake content must be labeled as artificially generated
- Minimal Risk: Most AI applications — spam filters, recommendation engines, AI in video games
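The four tiers above can be captured in a simple classification helper. This is an illustrative sketch: the use-case names and the mapping are hypothetical examples, and real classification requires legal analysis of the Act's annexes, not a lookup table.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # conformity assessment required
    LIMITED = "limited"            # disclosure obligations
    MINIMAL = "minimal"            # no new obligations

# Hypothetical mapping of internal use-case labels to tiers.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "hiring_screening": RiskTier.HIGH,
    "credit_scoring": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    # Default unknown systems to HIGH so they get reviewed, not ignored.
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)
```

Defaulting unknown systems to the high-risk tier is a deliberate fail-safe: an unclassified system should trigger review, never silently fall into the minimal bucket.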
High-Risk AI: What Compliance Actually Requires
- Risk management system — documented identification and mitigation throughout the AI lifecycle
- Data governance — training data must be relevant, representative, and documented
- Technical documentation — architecture, training approach, and performance metrics
- Transparency — deployers and affected users must be given clear information about the system's capabilities, limitations, and how it reaches decisions
- Human oversight — mechanisms to monitor, override, and stop the AI system
- CE marking and registration in the EU database before deployment
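Several of the requirements above, notably technical documentation and human oversight, depend on recording every decision the system makes. Here is a minimal audit-logging sketch; the class and field names are illustrative, and a production system would write to durable, tamper-evident storage rather than memory.

```python
import json
import time
import uuid
from dataclasses import dataclass, asdict

@dataclass
class DecisionRecord:
    record_id: str
    model_version: str
    inputs: dict
    output: float
    timestamp: float

class AuditLog:
    """Append-only decision log (in-memory sketch)."""

    def __init__(self):
        self.records = []

    def log(self, model_version: str, inputs: dict, output: float) -> DecisionRecord:
        # Record enough context to reconstruct any individual decision.
        rec = DecisionRecord(str(uuid.uuid4()), model_version, inputs, output, time.time())
        self.records.append(rec)
        return rec

    def export_json(self) -> str:
        # Export for auditors or the conformity assessment file.
        return json.dumps([asdict(r) for r in self.records])
```

Logging the model version alongside inputs and outputs matters: without it, you cannot tie a disputed decision back to the exact system that produced it.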
Case Study: Making a Fintech Credit AI Compliant
A client's AI-powered credit scoring system was a clear high-risk application under the Act. Our compliance implementation included bias testing across 12 demographic variables, an explainability layer using SHAP values, a human review queue for borderline cases, complete audit logging, and a conformity assessment package. Total implementation time: six weeks. The alternative: exposure to fines of up to €35 million.
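The bias-testing step can be sketched with a simple per-group selection-rate check. This is a minimal illustration, not the full methodology: the function names are hypothetical, and the 0.8 threshold is the common "four-fifths" rule of thumb from fair-lending practice, not a requirement stated in the Act itself.

```python
from collections import defaultdict

def selection_rates(outcomes, groups):
    """Approval rate per demographic group.
    outcomes: parallel list of 0/1 decisions; groups: group labels."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for outcome, group in zip(outcomes, groups):
        totals[group] += 1
        approvals[group] += outcome
    return {g: approvals[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group selection rate.
    Values below 0.8 fail the four-fifths rule of thumb."""
    return min(rates.values()) / max(rates.values())
```

Running this check across each demographic variable before deployment, and again on live decisions, turns "bias testing" from a one-off audit into a monitored metric.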
The Minimum You Should Do Today
- Classify all your AI systems against the four risk tiers
- Add AI disclosure to any user-facing chatbot or generative feature
- Start documenting your training data sources and model architecture
- Build a human override mechanism for any AI-driven decision affecting users
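The last item on the list, a human override mechanism, can be sketched as a thin wrapper around the model's output: clear cases are decided automatically, borderline cases are flagged for review, and a reviewer can always overturn the final outcome. The thresholds and field names here are illustrative assumptions, not values prescribed by the Act.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Decision:
    subject_id: str
    ai_outcome: str               # what the model proposed
    final_outcome: str            # what actually takes effect
    overridden_by: Optional[str] = None

def decide(subject_id: str, score: float,
           approve_at: float = 0.8, deny_below: float = 0.4) -> Decision:
    # Auto-decide clear cases; flag the borderline band for a human.
    if score >= approve_at:
        outcome = "approve"
    elif score < deny_below:
        outcome = "deny"
    else:
        outcome = "needs_review"
    return Decision(subject_id, outcome, outcome)

def override(decision: Decision, reviewer: str, new_outcome: str) -> Decision:
    # A named human reviewer can always overturn the AI outcome.
    decision.final_outcome = new_outcome
    decision.overridden_by = reviewer
    return decision
```

Keeping `ai_outcome` and `final_outcome` as separate fields is the key design choice: it preserves an auditable record of what the model proposed even after a human changes the result.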
Compliance is not optional, but it does not have to be a product blocker. Building compliance into the architecture from the start is faster than retrofitting it onto an existing system.
