EU AI Act: Practical Impact for Businesses
The EU AI Act is the world’s first comprehensive AI legislation. Since its entry into force in August 2024, the transition periods have been running: some obligations already apply, and others arrive in 2025 and 2026. For companies operating in or selling into the European Union, this is not a distant concern; it is a planning priority today.
Having advised organizations across manufacturing, financial services, and technology on their AI strategies, I consistently see the same pattern: companies that treated GDPR as a last-minute scramble paid dearly in effort and fines. The AI Act offers a chance to do it right the first time.
The Risk-Based Classification System
The AI Act does not regulate AI uniformly. Instead, it assigns obligations based on the risk level of each AI system. Understanding where your systems fall is the essential first step.
Unacceptable Risk (Prohibited)
These applications are banned outright, with prohibitions in effect since February 2025:
- Social scoring by public authorities
- Real-time remote biometric identification in publicly accessible spaces (with narrow law enforcement exceptions)
- Emotion recognition in workplaces and educational institutions
- Manipulation of vulnerable groups (e.g., exploiting age, disability)
- Predictive policing based solely on profiling
Practical implication: Audit your AI systems now. Some applications that seem innocuous — like an employee engagement tool that analyzes facial expressions during video calls — may fall squarely into the prohibited category.
High Risk
Strict requirements apply to AI systems used in:
- Biometric identification and categorization
- Critical infrastructure management (energy, water, transport)
- Education: Automated grading, admission decisions, proctoring
- Employment: CV screening, interview analysis, performance evaluation, termination decisions
- Financial services: Credit scoring, insurance risk assessment, fraud detection
- Access to essential services: Social benefits, emergency services
- Law enforcement and migration: Risk assessment tools, border control
- Justice and democracy: Legal research tools used in judicial decision-making
For high-risk systems, the AI Act mandates:
- A documented risk management system covering the entire lifecycle
- Data governance and quality management for training, validation, and test datasets
- Technical documentation sufficient for authorities to assess compliance
- Logging capabilities that enable traceability of system behavior (see the sketch below)
- Transparency — users must know they are interacting with AI and understand its limitations
- Human oversight mechanisms with the ability to override or shut down the system
- Accuracy, robustness, and cybersecurity standards appropriate to the risk level
For guidance on the cybersecurity dimension, see our detailed guide on LLM security in enterprise environments.
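To make the logging obligation concrete, here is a minimal sketch of structured decision logging in Python. The record schema, the `log_decision` helper, and the JSONL file sink are illustrative assumptions; the AI Act requires traceability but does not prescribe a format.

```python
import json
import logging
from datetime import datetime, timezone
from uuid import uuid4

# One append-only audit log per AI system keeps decisions traceable.
audit_log = logging.getLogger("ai_audit")
audit_log.setLevel(logging.INFO)
audit_log.addHandler(logging.FileHandler("ai_decisions.jsonl"))

def log_decision(system_id: str, model_version: str,
                 input_summary: dict, output: dict,
                 human_reviewer: str | None = None) -> str:
    """Write one traceable record per automated decision."""
    record = {
        "event_id": str(uuid4()),            # unique reference for audits
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,              # links to the AI system register
        "model_version": model_version,      # which model produced the output
        "input_summary": input_summary,      # summarize; avoid raw personal data
        "output": output,
        "human_reviewer": human_reviewer,    # filled when a human overrides
    }
    audit_log.info(json.dumps(record))
    return record["event_id"]

# Hypothetical usage for a credit-scoring decision:
event_id = log_decision("credit-scoring", "2.3.1",
                        {"applicant_ref": "A-1042"},
                        {"score": 640, "decision": "refer_to_human"})
```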
Limited Risk
Transparency obligations apply to:
- Chatbots: Must be clearly identified as AI (not impersonating humans)
- Deepfakes: AI-generated or manipulated content must be labeled
- Emotion recognition: Affected individuals must be informed
- AI-generated text: Text published to inform the public on matters of public interest must be labeled as AI-generated
Minimal Risk
The majority of AI applications — spam filters, recommendation engines, inventory optimization, AI in video games — fall here and face no specific obligations under the AI Act. However, voluntary codes of conduct are encouraged.
Industry-Specific Examples
Financial Services
A bank using AI for credit scoring operates a high-risk system. This means full documentation of training data sources, bias testing across protected characteristics (age, gender, ethnicity, nationality), human override capabilities for every automated decision, and regular accuracy audits. If the same bank uses AI for marketing email subject line optimization, that is minimal risk — no special obligations.
Manufacturing
A manufacturer using computer vision for quality inspection on the production line is generally minimal or limited risk. But if the same system is repurposed to monitor worker behavior or productivity, it crosses into the employment domain and becomes high-risk. The risk classification depends on the use case, not the technology.
Healthcare
AI-assisted diagnostic tools are regulated both under the AI Act (as high-risk) and under the EU Medical Device Regulation (MDR). Companies face dual compliance requirements. The documentation burden is significant but overlaps substantially between the two frameworks.
Human Resources
Any AI used in recruitment — resume screening, video interview analysis, psychometric testing — is high-risk. This applies whether you build the tool yourself or purchase it from a vendor. As a deployer, you carry compliance responsibilities even for third-party tools.
The Fine Structure: What Is at Stake
The AI Act has teeth. The fine structure is deliberately designed to make non-compliance more expensive than compliance:
| Violation | Maximum Fine |
|---|---|
| Prohibited AI practices | €35 million or 7% of global annual turnover |
| High-risk system obligations | €15 million or 3% of global annual turnover |
| Providing incorrect information to authorities | €7.5 million or 1.5% of global annual turnover |
For SMEs and startups, proportionally lower caps apply, but the fines are still material. And these are maximums — regulators will consider severity, duration, and cooperation when setting actual penalties.
The GDPR lesson: When GDPR launched, many companies assumed enforcement would be lenient. By 2025, cumulative GDPR fines exceeded €4.5 billion. The AI Act enforcement machinery is being built on GDPR foundations, with national AI supervisory authorities already designated in most member states. Expect AI Act enforcement to ramp up faster than GDPR enforcement did.
Timeline: What Is Due When
- February 2025: Prohibitions for unacceptable-risk systems in force (already active)
- August 2025: Rules for General Purpose AI models (including foundation model providers like OpenAI, Anthropic, Meta) take effect, plus governance provisions
- August 2026: Full application of high-risk system requirements
- August 2027: Extended deadline for high-risk AI systems embedded in products covered by existing EU product legislation (Annex I), and for general-purpose AI models already on the market before August 2025
Practical Compliance Steps: A 7-Step Framework
Step 1: AI System Inventory
Create a comprehensive register of every AI system in your organization. Include:
- Purchased SaaS products with AI features (many CRM, HR, and ERP tools now embed AI)
- Custom-built models and pipelines
- AI components embedded in third-party products
- Internal experiments and prototypes that may have reached informal production use
An AI readiness assessment can serve as the foundation for this inventory.
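A register does not require specialized software to get started; a structured record per system is enough. A minimal sketch of one possible record shape, where all field names are my own suggestions rather than anything the Act mandates:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in the organization-wide AI system register."""
    name: str
    owner: str                  # accountable business owner
    vendor: str | None          # None for systems built in-house
    purpose: str                # intended use, in plain language
    domains: list[str] = field(default_factory=list)  # e.g. ["employment"]
    status: str = "production"  # prototype | pilot | production | retired

# Hypothetical entries covering both a vendor tool and an in-house system:
register = [
    AISystemRecord("CV screening module", owner="HR", vendor="ExampleHR Inc.",
                   purpose="Ranks incoming applications", domains=["employment"]),
    AISystemRecord("Spam filter", owner="IT", vendor=None,
                   purpose="Filters inbound email"),
]
```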
Step 2: Risk Classification
Map each system to an AI Act risk category. When uncertain, classify conservatively — it is easier to downgrade a classification than to scramble for compliance after an authority challenge. Document your reasoning for each classification.
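That documented reasoning can live in code as easily as in a spreadsheet. A minimal triage sketch, assuming a simplified domain-to-category mapping; it is a first pass to route systems to legal review, not a substitute for it:

```python
from enum import Enum

class RiskCategory(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Simplified first-pass rules: real classification requires legal review
# against Annex III and the prohibition list, not a keyword match.
HIGH_RISK_DOMAINS = {"employment", "credit_scoring", "education",
                     "critical_infrastructure", "law_enforcement"}

def classify(domains: list[str], is_chatbot: bool = False) -> RiskCategory:
    """Conservative first-pass classification for triage purposes."""
    if any(d in HIGH_RISK_DOMAINS for d in domains):
        return RiskCategory.HIGH
    if is_chatbot:
        return RiskCategory.LIMITED  # transparency obligations apply
    return RiskCategory.MINIMAL
```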
Step 3: Gap Analysis for High-Risk Systems
For each high-risk system, assess your current state against all seven obligation areas. Create a gap register with clear ownership, timelines, and effort estimates.
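A hedged sketch of such a gap register as a plain CSV; the columns are illustrative, and many teams start exactly this simply before moving to a GRC tool:

```python
import csv
from datetime import date

# One row per identified gap; columns mirror the obligation areas above.
gaps = [
    {"system": "CV screening module", "obligation": "logging",
     "current_state": "no decision logs", "owner": "HR IT lead",
     "due": date(2026, 3, 31).isoformat(), "effort_days": 15},
]

with open("gap_register.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=gaps[0].keys())
    writer.writeheader()
    writer.writerows(gaps)
```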
Step 4: Establish AI Governance
- Appoint an AI Officer or assign AI Act responsibility to an existing compliance function
- Create an AI Risk Committee with representatives from legal, IT, data, and business
- Define a risk assessment process for new AI projects (before development begins)
- Establish documentation standards — templates for technical documentation, impact assessments, and monitoring plans
Step 5: Supplier Engagement
For AI systems you procure rather than build:
- Request conformity declarations and technical documentation from vendors
- Add AI Act compliance clauses to procurement contracts
- Establish the right to audit vendor AI practices
- Require incident notification agreements
Step 6: Technical Implementation
- Implement logging and traceability for all high-risk systems
- Build or procure bias testing and monitoring tools (see the bias-testing sketch after this list)
- Establish human override mechanisms with clear escalation paths
- Ensure cybersecurity measures meet the standard for the risk level
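To make the bias-testing item concrete: the four-fifths (disparate impact) ratio is a common first screen for selection-rate disparities across groups. A minimal sketch; note that the 0.8 threshold comes from US employment guidance rather than the AI Act, and a real audit needs more than one metric:

```python
from collections import defaultdict

def selection_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """decisions: (group_label, positive_outcome) pairs, one per person."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, positive in decisions:
        totals[group] += 1
        positives[group] += int(positive)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions: list[tuple[str, bool]]) -> float:
    """Lowest group selection rate divided by the highest; 1.0 is parity."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Illustrative check: flag if any group's rate falls below 80% of the best.
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
ratio = disparate_impact_ratio(sample)
print(f"disparate impact ratio: {ratio:.2f}", "FLAG" if ratio < 0.8 else "OK")
```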
Step 7: Ongoing Monitoring and Audit
Compliance is not a one-time project. Establish:
- Quarterly risk reassessments
- Annual compliance audits
- Continuous monitoring of system accuracy and fairness metrics (a minimal drift check follows this list)
- An incident response process specific to AI failures
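For the continuous-monitoring item, even a simple periodic comparison of live accuracy against the level documented at conformity assessment can catch degradation early. A minimal sketch; the tolerance value is an assumption to be calibrated to the system's risk level and documented in the monitoring plan:

```python
import statistics

def check_metric_drift(baseline: float, recent_values: list[float],
                       tolerance: float = 0.05) -> bool:
    """Flag if the recent average degraded beyond tolerance vs. baseline."""
    return (baseline - statistics.mean(recent_values)) > tolerance

baseline_accuracy = 0.91            # accuracy at conformity assessment time
last_quarter = [0.86, 0.85, 0.84]   # accuracy measured on labeled samples

if check_metric_drift(baseline_accuracy, last_quarter):
    print("ALERT: accuracy drift detected; trigger reassessment and audit")
```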
Building an AI Governance Framework
From working with organizations on AI strategy, I have found that effective AI governance has three pillars:
Pillar 1: Policy
Written policies covering acceptable AI use, data handling, model development standards, procurement requirements, and incident management. These should be living documents, updated as the regulatory landscape evolves.
Pillar 2: Process
Defined workflows for AI project approval, risk assessment, documentation, testing, deployment, and decommissioning. The approval gate should come before significant development investment, not after.
Pillar 3: Technology
Tooling that makes compliance efficient rather than burdensome: automated documentation generation, bias testing frameworks, model monitoring dashboards, and audit trail systems. Governance that relies entirely on manual processes will not scale.
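As a small illustration of this pillar, first-draft technical documentation can be rendered from the same metadata the AI system register already holds. A sketch with an invented template; a real documentation pipeline would pull model versions and test results automatically:

```python
# Illustrative template; the AI Act's Annex IV defines the required
# content of technical documentation, not this exact layout.
DOC_TEMPLATE = """Technical Documentation: {name}
Owner: {owner}
Intended purpose: {purpose}
Risk category: {risk}
Model version: {version}
Training data sources: {data_sources}
Known limitations: {limitations}
"""

def render_documentation(meta: dict) -> str:
    """Render a first-draft documentation page from register metadata."""
    return DOC_TEMPLATE.format(**meta)

print(render_documentation({
    "name": "CV screening module", "owner": "HR",
    "purpose": "Ranks incoming applications", "risk": "high",
    "version": "2.3.1", "data_sources": "historic hiring records (2019-2023)",
    "limitations": "not validated for roles outside the EU",
}))
```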
Common Misconceptions
“We only use ChatGPT, so the AI Act does not affect us.” Wrong. If you integrate GPT into customer-facing applications, transparency obligations apply at a minimum. If the application involves a high-risk domain (HR, finance, healthcare), full high-risk obligations apply to you as the deployer, regardless of who built the underlying model.
“As an SME, we are exempt.” Partially true. There are reduced documentation requirements and lower fine caps for SMEs. But there is no blanket exemption. High-risk obligations apply regardless of company size.
“This only affects AI developers.” Wrong. The AI Act distinguishes between providers (developers), deployers (users), importers, and distributors. Each role carries specific obligations. A company that deploys a third-party AI system for credit scoring is a deployer and must ensure ongoing compliance, human oversight, and incident reporting.
“Open-source models are exempt.” Mostly true for the open-source model providers themselves (with exceptions for high-risk and GPAI with systemic risk). But if you deploy an open-source model in a high-risk application, all deployer obligations apply fully.
Opportunities from Regulation
The AI Act is not purely a compliance burden. Smart organizations are already turning it into advantage:
- Customer Trust: “AI Act compliant” is becoming a selling point, especially in B2B SaaS and financial services.
- Quality Improvement: The required risk management and testing processes genuinely improve system reliability. Organizations that adopted GDPR-grade data practices early often found their data quality improved as a side effect.
- International Competitiveness: The EU is setting the global template. Companies that comply with the AI Act will find it easier to meet emerging regulations in the UK, Canada, Brazil, and other jurisdictions.
- Reduced Liability: Documented risk assessments and compliance efforts significantly strengthen your position in the event of an AI-related incident or lawsuit.
Conclusion
The EU AI Act requires preparation, not panic. The transition periods are generous enough for structured implementation, but tight enough that procrastination is risky. Start now with your AI system inventory and risk classification. Build governance structures that will scale. Engage your suppliers. And learn from GDPR: the organizations that started early spent less, achieved better outcomes, and largely avoided enforcement actions.
Need support with AI Act compliance? Contact us for an individual assessment of your AI systems and a tailored compliance roadmap.