The European Union implemented its AI Act last year and has since released guidelines to help companies comply while balancing AI innovation with safety. The latest of these, the AI Act Explorer, launched on July 18 as a comprehensive guide for companies navigating the regulations.
The AI Act introduces safeguards for advanced artificial intelligence models while aiming to cultivate a competitive, innovative ecosystem for AI companies, and it sorts models into distinct risk classifications. Henna Virkkunen, EU Commission Executive Vice President for Technological Sovereignty, Security and Democracy, told Reuters that the Commission's guidelines support the smooth and effective application of the AI Act.
Under EU law, artificial intelligence models are sorted into one of four risk levels: unacceptable risk, high risk, limited risk, and minimal risk. AI in the unacceptable risk category is banned in the EU; this category covers applications such as facial recognition systems and social scoring. The remaining categories are determined by a model's computational capacity or its intended functions.
The European Union defines AI models presenting systemic risks as those developed using "greater than 10²⁵ floating point operations (FLOPs)." Notable models currently falling under this classification include OpenAI's GPT-4 and o3, Google's Gemini 2.5 Pro, Anthropic's more recent Claude models, and xAI's Grok-3. The AI Act Explorer guidance arrived roughly two weeks ahead of the August 2 deadline, by which general-purpose AI models and those identified as posing systemic risks must comply with the Act's provisions.
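To make that threshold concrete, here is a minimal Python sketch of the compute test described above. The constant and function names are illustrative assumptions, not part of any official EU tooling:

```python
# Hypothetical illustration of the AI Act's systemic-risk compute threshold:
# a general-purpose model trained with more than 1e25 FLOPs is presumed
# to pose systemic risk. Names here are assumptions for illustration only.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def presumed_systemic_risk(training_flops: float) -> bool:
    """Return True if training compute exceeds the 10^25 FLOPs threshold."""
    return training_flops > SYSTEMIC_RISK_THRESHOLD_FLOPS

print(presumed_systemic_risk(2.1e25))  # True: above the threshold
print(presumed_systemic_risk(5.0e24))  # False: below the threshold
```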
What will the EU AI Act actually change?
Makers of AI models identified as posing systemic risks face specific obligations. They must conduct comprehensive model evaluations to identify potential systemic risks and document the adversarial testing performed to mitigate them. They must also report serious incidents to both EU and national authorities if they occur, and implement appropriate cybersecurity measures to guard against misuse or compromise of their AI systems. In short, the Act places the responsibility on AI companies to identify and prevent potential systemic risks at the source.
The AI Act Explorer is designed to give AI developers explicit guidance on which of the Act's provisions apply to their operations. Companies also have access to the EU's accompanying compliance checker, a tool for determining their precise obligations under the Act. Non-compliance can bring substantial financial penalties: fines range from €7.5 million ($8.7 million) or 1.5% of a company's global turnover up to €35 million or 7% of global turnover, depending on the severity of the violation.
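As a rough worked example of that penalty arithmetic, the sketch below computes both bands for a hypothetical company, assuming the fixed sum or the turnover percentage applies, whichever is higher, as in the Act's penalty provisions; the turnover figure is invented for illustration:

```python
# Illustrative fine arithmetic using the figures quoted above.
# Assumes the "whichever is higher" rule from the Act's penalty provisions;
# the company's turnover is a made-up example.

def fine_bands(global_turnover_eur: float) -> tuple[float, float]:
    """Return (lower-band, upper-band) maximum fines in euros."""
    lower = max(7_500_000, 0.015 * global_turnover_eur)   # €7.5M or 1.5%
    upper = max(35_000_000, 0.07 * global_turnover_eur)   # €35M or 7%
    return lower, upper

low, high = fine_bands(2_000_000_000)  # hypothetical €2B global turnover
print(f"Lower band: €{low:,.0f}")   # Lower band: €30,000,000
print(f"Upper band: €{high:,.0f}")  # Upper band: €140,000,000
```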
Critics of the AI Act have called its regulations inconsistent and argued that they inhibit innovation. On July 18, Joel Kaplan, Meta's Chief Global Affairs Officer, announced that the company would not sign the EU's Code of Practice for general-purpose AI models, a voluntary framework aligned with the AI Act. Kaplan wrote on LinkedIn that the Code introduces a number of legal uncertainties for model developers, along with measures that extend well beyond the scope of the AI Act. Earlier in July, chief executives from companies including Mistral AI, SAP, and Siemens issued a joint statement asking the EU to pause implementation of the regulations.
Proponents of the Act argue it will keep companies from prioritizing profit at the expense of consumer privacy and safety. Mistral and OpenAI have both committed to signing the Code of Practice, a voluntary mechanism that lets companies demonstrate alignment with the binding regulations. OpenAI recently launched ChatGPT agent, which can use a virtual computer to carry out multi-step tasks, including contacting people at small businesses.