EU AI Act: Practical Impact for Businesses

Published on June 1, 2025 by Christopher Wittlinger

The EU AI Act is the world’s first comprehensive AI legislation. Since it entered into force in August 2024, its transition periods have been running: some obligations already apply, and more arrive in 2025 and 2026. For companies operating in or selling into the European Union, this is not a distant concern — it is a present-tense planning priority.

Having advised organizations across manufacturing, financial services, and technology on their AI strategies, I consistently see the same pattern: companies that treated GDPR as a last-minute scramble paid dearly in effort and fines. The AI Act offers a chance to do it right the first time.

The Risk-Based Classification System

The AI Act does not regulate AI uniformly. Instead, it assigns obligations based on the risk level of each AI system. Understanding where your systems fall is the essential first step.

Unacceptable Risk (Prohibited)

These applications are banned outright, with prohibitions in effect since February 2025:

  1. Social scoring of individuals by public or private actors
  2. Manipulative or deceptive techniques that materially distort behavior and cause harm
  3. Exploitation of vulnerabilities due to age, disability, or social or economic situation
  4. Untargeted scraping of facial images from the internet or CCTV to build recognition databases
  5. Emotion recognition in workplaces and educational institutions (outside medical or safety uses)
  6. Biometric categorization to infer sensitive attributes such as race, political opinions, or sexual orientation
  7. Predictive policing based solely on profiling or personality traits
  8. Real-time remote biometric identification in publicly accessible spaces for law enforcement (narrow exceptions apply)

Practical implication: Audit your AI systems now. Some applications that seem innocuous — like an employee engagement tool that analyzes facial expressions during video calls — may fall squarely into the prohibited category.

High Risk

Strict requirements apply to AI systems used in:

  1. Biometric identification and categorization
  2. Management of critical infrastructure (energy, transport, water)
  3. Education and vocational training (e.g., exam scoring, admissions decisions)
  4. Employment and worker management (recruitment, promotion, monitoring)
  5. Access to essential private and public services, including credit scoring
  6. Law enforcement
  7. Migration, asylum, and border control
  8. Administration of justice and democratic processes

For high-risk systems, the AI Act mandates:

  1. A documented risk management system covering the entire lifecycle
  2. Data governance and quality management for training, validation, and test datasets
  3. Technical documentation sufficient for authorities to assess compliance
  4. Logging capabilities that enable traceability of system behavior
  5. Transparency — users must know they are interacting with AI and understand its limitations
  6. Human oversight mechanisms with the ability to override or shut down the system
  7. Accuracy, robustness, and cybersecurity standards appropriate to the risk level
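
Item 4, logging, is the most directly implementable of these obligations. Below is a minimal sketch of what a per-decision traceability record might contain; the field names and the JSON Lines format are illustrative assumptions, not something the Act prescribes:

```python
import json
import time
import uuid

def log_ai_decision(path, system_id, inputs_digest, output, operator=None):
    """Append one traceability record per automated decision (JSON Lines).

    inputs_digest should be a hash of the inputs, so the log supports
    traceability without duplicating personal data.
    """
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "system_id": system_id,
        "inputs_digest": inputs_digest,
        "output": output,
        "human_operator": operator,  # who was positioned to override the decision
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

Appending to a JSON Lines file keeps each record self-contained and easy to ship to a tamper-evident audit store later.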

For guidance on the cybersecurity dimension, see our detailed guide on LLM security in enterprise environments.

Limited Risk

Transparency obligations apply to:

  1. Chatbots and other systems that interact directly with people: users must be told they are dealing with AI
  2. AI-generated or manipulated content, including deepfakes, which must be labeled as such
  3. Emotion recognition and biometric categorization systems (where permitted): affected persons must be informed

Minimal Risk

The majority of AI applications — spam filters, recommendation engines, inventory optimization, AI in video games — fall here and face no specific obligations under the AI Act. However, voluntary codes of conduct are encouraged.

Industry-Specific Examples

Financial Services

A bank using AI for credit scoring operates a high-risk system. This means full documentation of training data sources, bias testing across protected characteristics (age, gender, ethnicity, nationality), human override capabilities for every automated decision, and regular accuracy audits. If the same bank uses AI for marketing email subject line optimization, that is minimal risk — no special obligations.
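
One simple form such bias testing can take is comparing approval rates across groups. The four-fifths threshold used below is a common screening heuristic borrowed from employment-testing practice, not a requirement of the AI Act itself:

```python
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> approval rate per group."""
    total, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += ok  # bool counts as 0/1
    return {g: approved[g] / total[g] for g in total}

def disparate_impact_ratio(rates):
    """Lowest group approval rate divided by the highest.

    Values well below 1.0 indicate disparity; 0.8 ("four-fifths")
    is a common screening threshold for a closer look.
    """
    return min(rates.values()) / max(rates.values())
```

A ratio below the threshold does not prove unlawful bias, but it flags the system for the deeper testing and documentation the Act requires.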

Manufacturing

A manufacturer using computer vision for quality inspection on the production line is generally minimal or limited risk. But if the same system is repurposed to monitor worker behavior or productivity, it crosses into the employment domain and becomes high-risk. The risk classification depends on the use case, not the technology.

Healthcare

AI-assisted diagnostic tools are regulated both under the AI Act (as high-risk) and under the EU Medical Device Regulation (MDR). Companies face dual compliance requirements. The documentation burden is significant but overlaps substantially between the two frameworks.

Human Resources

Any AI used in recruitment — resume screening, video interview analysis, psychometric testing — is high-risk. This applies whether you build the tool yourself or purchase it from a vendor. As a deployer, you carry compliance responsibilities even for third-party tools.

The Fine Structure: What Is at Stake

The AI Act has teeth. The fine structure is deliberately designed to make non-compliance more expensive than compliance:

Violation | Maximum Fine
Prohibited AI practices | €35 million or 7% of global annual turnover
High-risk system obligations | €15 million or 3% of global annual turnover
Providing incorrect information to authorities | €7.5 million or 1.5% of global annual turnover

For SMEs and start-ups, the cap is the lower of the two amounts rather than the higher, but the fines are still material. And these are maximums — regulators will consider severity, duration, and cooperation when setting actual penalties.

The GDPR lesson: When GDPR launched, many companies assumed enforcement would be lenient. By 2025, cumulative GDPR fines exceeded €4.5 billion. The AI Act's enforcement machinery is being built on GDPR foundations, with national AI supervisory authorities already designated in most member states. Expect enforcement to ramp up faster than it did under GDPR.

Timeline: What Is Due When

  1. February 2, 2025: prohibitions on unacceptable-risk practices and AI literacy obligations take effect
  2. August 2, 2025: obligations for general-purpose AI models, governance rules, and penalty provisions apply
  3. August 2, 2026: most remaining obligations apply, including those for high-risk systems listed in Annex III
  4. August 2, 2027: obligations for high-risk AI embedded in regulated products (Annex I) apply

Practical Compliance Steps: A 7-Step Framework

Step 1: AI System Inventory

Create a comprehensive register of every AI system in your organization. Include:

  1. System name, purpose, and accountable business owner
  2. Whether it was built in-house or procured, and from which vendor
  3. The data it consumes and the decisions or outputs it produces
  4. Who is affected by those outputs (employees, customers, applicants)
  5. Deployment status and geographic scope

An AI readiness assessment can serve as the foundation for this inventory.
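
As a starting point, each register entry can be modeled as a small structured record. The fields below are illustrative, not a prescribed schema:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AISystemEntry:
    """One row in the AI system inventory."""
    name: str
    purpose: str
    owner: str                    # accountable business owner, not just IT
    vendor: Optional[str] = None  # None for systems built in-house
    risk_category: str = "unclassified"
    affected_persons: list = field(default_factory=list)
```

Starting every entry as "unclassified" makes unassessed systems visible, which feeds directly into the risk classification step that follows.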

Step 2: Risk Classification

Map each system to an AI Act risk category. When uncertain, classify conservatively — it is easier to downgrade a classification than to scramble for compliance after an authority challenge. Document your reasoning for each classification.
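
The classify-conservatively rule can be encoded directly: anything not positively identified as minimal risk is treated as high risk until a human reviews it. The domain labels below are illustrative shorthand, not the Act's legal terms:

```python
# Domains the AI Act treats as high risk (shorthand labels)
HIGH_RISK_DOMAINS = {
    "employment", "credit", "education", "essential_services",
    "law_enforcement", "migration", "justice", "critical_infrastructure",
}
# Use cases the article cites as typically minimal risk
KNOWN_MINIMAL = {"spam_filter", "recommendation", "inventory_optimization", "games"}

def classify(domain: str) -> str:
    """Conservative risk-tier mapping: unknown domains default to high."""
    if domain in HIGH_RISK_DOMAINS:
        return "high"
    if domain in KNOWN_MINIMAL:
        return "minimal"
    return "high"  # conservative default until the use case is reviewed
```

A wellness-scoring tool, for instance, matches neither set and so lands in "high" pending review, which is exactly the conservative behavior described above.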

Step 3: Gap Analysis for High-Risk Systems

For each high-risk system, assess your current state against all seven obligation areas. Create a gap register with clear ownership, timelines, and effort estimates.

Step 4: Establish AI Governance

Assign clear ownership: a governance committee or responsible officer, defined decision rights for approving new AI use cases, and escalation paths for incidents. The three-pillar framework described later in this article provides a structure for this.

Step 5: Supplier Engagement

For AI systems you procure rather than build:

  1. Request the provider's technical documentation and EU declaration of conformity
  2. Add AI Act compliance obligations and incident notification duties to contracts
  3. Clarify the division of responsibilities between provider and deployer
  4. Verify that the provider supports the logging and human-oversight features you need

Step 6: Technical Implementation

Close the gaps identified in Step 3: implement logging and traceability, human-override mechanisms, bias testing, and documentation tooling for each high-risk system.

Step 7: Ongoing Monitoring and Audit

Compliance is not a one-time project. Establish:

  1. Continuous monitoring of model performance and drift
  2. An incident reporting process for serious malfunctions
  3. Periodic internal audits of high-risk systems
  4. A regulatory watch to track new guidance, standards, and delegated acts

Building an AI Governance Framework

From working with organizations on AI strategy, I have found that effective AI governance has three pillars:

Pillar 1: Policy

Written policies covering acceptable AI use, data handling, model development standards, procurement requirements, and incident management. These should be living documents, updated as the regulatory landscape evolves.

Pillar 2: Process

Defined workflows for AI project approval, risk assessment, documentation, testing, deployment, and decommissioning. The approval gate should come before significant development investment, not after.

Pillar 3: Technology

Tooling that makes compliance efficient rather than burdensome: automated documentation generation, bias testing frameworks, model monitoring dashboards, and audit trail systems. Governance that relies entirely on manual processes will not scale.

Common Misconceptions

“We only use ChatGPT, the AI Act does not affect us.” Wrong. If you integrate GPT into customer-facing applications, transparency obligations apply at minimum. If the application involves high-risk domains (HR, finance, healthcare), full high-risk obligations apply to you as the deployer — regardless of who built the underlying model.

“As an SME, we are exempt.” Partially true. There are reduced documentation requirements and lower fine caps for SMEs. But there is no blanket exemption. High-risk obligations apply regardless of company size.

“This only affects AI developers.” Wrong. The AI Act distinguishes between providers (developers), deployers (users), importers, and distributors. Each role carries specific obligations. A company that deploys a third-party AI system for credit scoring is a deployer and must ensure ongoing compliance, human oversight, and incident reporting.

“Open-source models are exempt.” Mostly true for the open-source model providers themselves (with exceptions for high-risk and GPAI with systemic risk). But if you deploy an open-source model in a high-risk application, all deployer obligations apply fully.

Opportunities from Regulation

The AI Act is not purely a compliance burden. Smart organizations are already turning it into advantage:

  1. Trust as a differentiator: documented, auditable AI is easier to sell to enterprise and public-sector buyers
  2. Better engineering discipline: the required data governance and testing improve model quality
  3. First-mover readiness: as other jurisdictions adopt similar rules, EU-compliant systems travel well

Conclusion

The EU AI Act requires preparation, not panic. The transition periods are generous enough for structured implementation, but tight enough that procrastination is risky. Start now with your AI system inventory and risk classification. Build governance structures that will scale. Engage your suppliers. And learn from GDPR — the organizations that started early spent less, achieved better outcomes, and largely avoided enforcement actions.

Need support with AI Act compliance? Contact us for an individual assessment of your AI systems and a tailored compliance roadmap.