The first two chapters of the EU AI Act took effect on Feb. 2, 2025. Those provisions cover prohibited AI practices and AI literacy. The remainder of the law’s provisions will roll out over time.
Below are some, but by no means all, elements that US enterprise AI leaders need to know.
The AI Act’s Global Scope
While the law’s largest implications will be for EU-based firms, those outside the European Union are affected as well. The Act applies to any provider placing an AI system on the EU market, regardless of where the provider is established or operates.
“The global reach of the EU AI Act, the significant penalties for non-compliance and the extensive requirements spanning the entire AI value chain ensure that organizations worldwide leveraging AI technologies must adhere to the regulation,” said Enza Iannopollo, Forrester principal analyst.
The law will have a significant impact on AI governance globally, Iannopollo added. “With these regulations, the EU has established the ‘de facto’ standard for trustworthy AI and AI risk management.”
Related Article: Collaborative Governance Is the Path to Globally Inclusive and Ethical AI
Understanding the EU AI Act’s Risk Framework
The core of the EU AI Act is its risk-based approach, according to Iannopollo. The higher the risk of the AI or general-purpose AI (GPAI) use case, the more requirements it must comply with and the stricter the enforcement of those requirements will be. As the risk decreases, so does the number and complexity of the requirements a company must follow.
Yingbo Ma, assistant professor of AI and machine learning (ML) at Purdue University Northwest, explained the four risk levels:
- Unacceptable Risk AI: Banned. Examples: Social scoring, manipulative AI and real-time biometric identification in public spaces
- High-Risk AI: Strictly Regulated. Examples: AI in healthcare, hiring, finance and law enforcement, requiring compliance with transparency, data governance and human oversight
- Limited-Risk AI: Transparency Required. Examples: Chatbots and deepfakes, where users must be made aware they are interacting with AI
- Minimal-Risk AI: Unregulated. Examples: Recommendation engines and video game AI
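To make the tiering concrete, here is a minimal sketch of how a compliance team might triage an internal AI inventory against these four levels. The use-case labels, the mapping and the default-to-high-risk behavior are all illustrative assumptions for a first-pass review, not a legal classification under the Act.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned"
    HIGH = "strictly regulated"
    LIMITED = "transparency required"
    MINIMAL = "unregulated"

# Hypothetical first-pass mapping of internal use-case labels to tiers.
# Real classification requires legal review against the Act's annexes.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "public_realtime_biometric_id": RiskTier.UNACCEPTABLE,
    "hiring_screening": RiskTier.HIGH,
    "credit_scoring": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "deepfake_generation": RiskTier.LIMITED,
    "product_recommendations": RiskTier.MINIMAL,
    "game_npc_behavior": RiskTier.MINIMAL,
}

def triage(use_case: str) -> RiskTier:
    """Return the assumed tier, defaulting to HIGH so that unknown
    use cases are escalated for review rather than waved through."""
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)

for case in ("hiring_screening", "customer_chatbot", "unknown_internal_tool"):
    print(f"{case}: {triage(case).value}")
```

Defaulting unclassified use cases to the high-risk tier reflects a conservative compliance posture: anything unknown gets escalated for review rather than waved through.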
Providers of AI systems have different obligations than deployers, but the compliance of each actor depends on the other, according to Iannopollo. Providers must supply deployers with detailed documentation about the system (e.g., capabilities, limitations, intended use) so deployers can deploy the system correctly and interpret its outcomes appropriately. This information must be as accurate as possible to allow deployers to assess risks and define effective mitigation strategies.
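As a sketch of what that hand-off could look like in practice, the snippet below models the provider-supplied documentation as a structured record that a deployer might validate before go-live. The field names and checks are illustrative assumptions built from the examples above (capabilities, limitations, intended use), not the Act’s formal documentation schema.

```python
from dataclasses import dataclass

@dataclass
class SystemDocumentation:
    """Illustrative record of what a provider hands to a deployer."""
    system_name: str
    intended_use: str
    capabilities: list[str]
    known_limitations: list[str]
    human_oversight_notes: str

def deployment_gaps(doc: SystemDocumentation) -> list[str]:
    """Flag gaps a deployer should resolve before relying on the system."""
    gaps = []
    if not doc.known_limitations:
        gaps.append("No limitations documented; risk assessment is impossible.")
    if not doc.human_oversight_notes:
        gaps.append("No guidance on human oversight.")
    return gaps

# Hypothetical hand-off for an illustrative hiring tool.
doc = SystemDocumentation(
    system_name="resume-screener-v2",
    intended_use="Rank applications for human recruiter review",
    capabilities=["CV parsing", "skills matching"],
    known_limitations=["Untested on non-English CVs"],
    human_oversight_notes="A recruiter must confirm every rejection",
)
print(deployment_gaps(doc))  # [] -> no blocking gaps found
```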
The Mandate on AI Literacy
The AI Act requires providers and deployers to take measures to ensure, to their best extent, that staff and other people dealing with AI systems on their behalf (including consultants) have a sufficient level of AI literacy.
For these persons, literacy training should take into account their technical knowledge, experience, education, training and the context in which the AI systems are used. It should also consider the people or groups on whom the AI systems are used.
This requirement applies to all AI systems and not just those considered high risk under the Act.
Plenty of AI certifications and courses already exist for organizations to draw on. Role- and industry-specific training is also available, such as courses for those in marketing, education, customer service, product management and more.
What Else Enterprises Need to Prepare For
Beyond risk levels and literacy, the EU AI Act introduces sweeping responsibilities that touch on foundational models, supply chains and the broader AI ecosystem. These additional considerations add layers of complexity, and of compliance work, for enterprises navigating the regulation.
New Standards for General-Purpose AI Systems
The Act takes great strides to address general-purpose AI system obligations for deployers and implementers, according to Jeff Le, managing principal at consultancy 100 Mile Strategies.
“An important step is the acknowledgment beyond EU transparency rules to put a higher threshold on foundational models that could lead to much higher risk and infect the entire software supply chain, which could have direct implications for consumers/users,” said Le.
Prepare for Audits, Assessments and More
Companies must pass audits, meet transparency standards and conduct assessments, making it harder for startups to compete with big tech, noted Amir Barsoum, InVitro Capital founder and managing partner.
The European Commission can fine providers of general-purpose AI models up to 3% of their total worldwide annual turnover for the preceding financial year or €15,000,000, whichever is higher.
These fines can be levied if the Commission finds that the provider intentionally or negligently:
- Infringed the relevant provisions of the Act
- Failed to comply with requests for documentation or supplied incorrect, incomplete or misleading information
- Failed to comply with measures requested under Article 93, which include implementing mitigation measures, restricting access to the model and more
- Failed to provide the Commission with access to the general-purpose AI model
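In arithmetic terms, the cap is the greater of the two amounts. A quick sketch, with hypothetical turnover figures:

```python
def max_gpai_fine(prior_year_turnover_eur: float) -> float:
    """Upper bound on a Commission fine for a GPAI model provider:
    the greater of 3% of prior-year worldwide turnover or EUR 15M."""
    return max(0.03 * prior_year_turnover_eur, 15_000_000)

# Hypothetical turnover figures for a large and a small provider.
print(max_gpai_fine(2_000_000_000))  # 3% of 2B = 60M, above the floor
print(max_gpai_fine(100_000_000))    # 3% of 100M = 3M, so the 15M floor applies
```

Because the greater figure governs, the €15,000,000 floor applies until prior-year turnover exceeds €500 million; above that, the 3% term dominates.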
Designing AI Systems for Explainability
The law requires transparency in AI decision-making, which should reduce black-box AI risks in areas like hiring, credit scoring and healthcare, according to Barsoum.
“Consumers across regions expect companies to be explicit about their use of any AI,” Iannopollo said. “The Act mandates that you clearly inform people exposed to an AI and GPAI system in a way that is easy to spot and understand and complies with accessibility requirements.”
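What “easy to spot” could mean at the application layer is open to interpretation; below is a minimal sketch that attaches a disclosure notice to every AI-generated reply. The wording and the response shape are assumptions for illustration, not language prescribed by the Act.

```python
AI_DISCLOSURE = (
    "You are interacting with an AI system. "
    "Responses are generated automatically."
)

def wrap_response(model_output: str) -> dict:
    """Attach the disclosure to every AI-generated reply so the notice
    appears at the point of interaction, not buried in a policy page."""
    return {"disclosure": AI_DISCLOSURE, "message": model_output}

print(wrap_response("Your application has been received."))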
Related Article: Cracking the AI Black Box: Can We Ever Truly Understand AI's Decisions?
Key Provisions of the EU AI Act Still to Come
The EU AI Act is being rolled out in stages. Other rules, such as those on GPAI models, apply as of August 2025. Requirements on high-risk use cases take effect 24 to 36 months after the law entered into force. Forrester pointed out that even though some requirements, such as those governing high-risk use cases, kick in at later dates, meeting them will demand substantial time and effort from enterprises.
“There [are] still meaningful questions as to potential unintended consequences and implementation challenges,” Le said. The timeline for full effect before August 2026, he added, could prove challenging for the industry, especially as the technology continues to advance.