EU AI Act Takes Effect For GPAI Providers August 2

Beginning August 2, 2025, providers of general-purpose artificial intelligence (GPAI) models in the European Union must comply with specific obligations under the EU AI Act, including maintaining up-to-date technical documentation and summaries of training data.

The EU AI Act is a comprehensive legislative framework designed to establish standards for the ethical and safe development and deployment of AI technologies. This regulation adopts a risk-based approach, categorizing AI systems based on their potential risks and impact on individuals and society within the European Union.

Although the specific requirements for GPAI model providers become enforceable on August 2, 2025, a one-year grace period applies: companies have until August 2, 2026 to achieve full compliance before penalties can be imposed. This grace period is intended to ease the transition to the new regulatory landscape.

Providers of GPAI models must be aware of and adhere to five key sets of rules that take effect on August 2, 2025, covering notified bodies, obligations specific to GPAI models, governance, confidentiality, and penalties.

The first set of rules pertains to Notified Bodies, as stipulated in Chapter III, Section 4 of the EU AI Act. Notified Bodies are designated organizations responsible for assessing whether specific products or services conform to applicable EU regulations. Providers of GPAI models used in high-risk AI systems must prepare to engage with these bodies for conformity assessments and understand the regulatory framework governing those evaluations.

The second set of rules, detailed in Chapter V of the Act, specifically addresses GPAI models. This section outlines the requirements for technical documentation, training data summaries, and transparency measures that GPAI model providers must implement.

The third set of rules, found in Chapter VII, concerns governance. This section defines the governance and enforcement architecture at both the EU and national levels. It mandates cooperation with the EU AI Office, the European AI Board, the Scientific Panel, and National Authorities in fulfilling compliance obligations, responding to oversight requests, and participating in risk monitoring and incident reporting processes.

The fourth set of rules, outlined in Article 78, focuses on confidentiality. All data requests made by authorities to GPAI model providers must be legally justified, securely handled, and subject to confidentiality protections, especially concerning intellectual property, trade secrets, and source code. This ensures the protection of sensitive business information during regulatory oversight.

The final set of rules, found in Articles 99 and 100, specifies penalties for non-compliance. These penalties are designed to ensure adherence to the AI Act’s provisions and can be substantial.

High-risk AI systems are defined as those that present a significant threat to health, safety, or fundamental rights. These systems are categorized into two main groups. First, those used as safety components of products governed by EU product safety laws. Second, those deployed in sensitive use cases, which include biometric identification, critical infrastructure management, education, employment and HR, and law enforcement.

GPAI models, which can be applied across many domains, are considered to pose “systemic risk” when the cumulative compute used to train them exceeds 10^25 floating-point operations (FLOPs), or when the European Commission designates them as such. Prominent examples of GPAI models that meet these criteria include OpenAI’s GPT-4, Meta’s Llama, and Google’s Gemini.
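For a rough sense of what the 10^25 FLOP threshold means in practice, total training compute is commonly estimated as about 6 × N × D, where N is the parameter count and D is the number of training tokens. The sketch below applies that approximation to two hypothetical model configurations; the model sizes are illustrative assumptions, not official designations.

```python
# Rough estimate of training compute against the EU AI Act's 10^25 FLOP
# systemic-risk threshold, using the common approximation C ~= 6 * N * D
# (N = parameters, D = training tokens). The model configurations below
# are illustrative assumptions, not official figures.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # cumulative training compute

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate cumulative training compute: ~6 FLOPs per parameter per token."""
    return 6 * n_params * n_tokens

examples = {
    "7B params, 2T tokens": (7e9, 2e12),        # ~8.4e22 FLOPs
    "400B params, 15T tokens": (4e11, 1.5e13),  # ~3.6e25 FLOPs
}

for name, (n, d) in examples.items():
    flops = estimated_training_flops(n, d)
    side = "above" if flops > SYSTEMIC_RISK_THRESHOLD_FLOPS else "below"
    print(f"{name}: ~{flops:.1e} FLOPs ({side} the 1e25 threshold)")
```

On this estimate, only very large training runs cross the threshold, which is why the designation currently applies to a small number of frontier models.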

All providers of GPAI models are required to maintain comprehensive technical documentation, a training data summary, a copyright compliance policy, guidance for downstream deployers, and transparency measures regarding capabilities, limitations, and intended use. This documentation serves to provide clarity and accountability in the development and deployment of AI systems.

Providers of GPAI models that pose systemic risk face additional requirements. They must conduct model evaluations, report incidents, implement risk mitigation strategies and cybersecurity safeguards, disclose energy usage, and carry out post-market monitoring. These measures aim to address the heightened risks associated with more powerful and widely used AI models.

Regarding penalties, providers of GPAI models may face fines of up to €35,000,000 or 7% of their total worldwide annual turnover, whichever is higher, for engaging in AI practices prohibited under Article 5. These practices include manipulating human behavior, social scoring, untargeted scraping of facial images to build recognition databases, and real-time remote biometric identification in public spaces.

Other breaches of regulatory obligations, such as those related to transparency, risk management, or deployment responsibilities, can result in fines of up to €15,000,000 or 3% of turnover. These penalties are designed to ensure adherence to the broader requirements of the AI Act.

Supplying misleading or incomplete information to authorities can lead to fines of up to €7,500,000 or 1% of turnover. This provision underscores the importance of accurate and transparent communication with regulatory bodies.

For small and medium-sized enterprises (SMEs) and startups, the lower of the fixed amount or percentage applies when calculating penalties. The severity of the breach, its impact, the provider’s cooperation, and whether the violation was intentional or negligent are all considered when determining the appropriate penalty.
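To make the tiering concrete, here is a minimal sketch of how the fine ceilings described above combine, including the SME rule that the lower cap applies. The function name and turnover figures are hypothetical, for illustration only.

```python
# Illustrative calculation of the fine ceilings summarized above.
# Tiers: prohibited practices (EUR 35M / 7%), other obligations (EUR 15M / 3%),
# misleading information to authorities (EUR 7.5M / 1%).
# Turnover figures are hypothetical.

FINE_TIERS = {
    "prohibited_practices": (35_000_000, 0.07),   # Article 5 violations
    "other_obligations": (15_000_000, 0.03),      # transparency, risk management, etc.
    "misleading_information": (7_500_000, 0.01),  # inaccurate info to authorities
}

def fine_ceiling(tier: str, annual_turnover_eur: float, is_sme: bool = False) -> float:
    """Higher of the fixed cap or turnover percentage; the lower of the two for SMEs."""
    fixed_cap, pct = FINE_TIERS[tier]
    turnover_cap = pct * annual_turnover_eur
    return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)

# Large provider, EUR 1bn turnover: 7% (EUR 70M) exceeds the EUR 35M fixed cap.
print(fine_ceiling("prohibited_practices", 1_000_000_000))              # 70000000.0
# SME, EUR 20M turnover: the lower cap applies (1% = EUR 200,000).
print(fine_ceiling("misleading_information", 20_000_000, is_sme=True))  # 200000.0
```

As the second example shows, the SME rule can reduce the ceiling substantially; the factors listed above (severity, impact, cooperation, intent) then determine where the actual fine falls below that ceiling.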

To facilitate compliance, the European Commission published the AI Code of Practice, a voluntary framework that tech companies can adopt to help implement and adhere to the AI Act. Companies such as Google, OpenAI, and Anthropic have committed to it, while Meta has publicly declined to sign, objecting to the legislation in its current form.

The Commission also plans to publish supplementary guidelines alongside the AI Code of Practice before August 2, 2025, clarifying which companies qualify as providers of general-purpose AI models and which models count as general-purpose AI models with systemic risk, giving companies further support in navigating the regulatory landscape.

The EU AI Act was officially published in the EU’s Official Journal on July 12, 2024, and took effect on August 1, 2024. However, the implementation of its various provisions is phased in over several years.

  • On February 2, 2025, certain AI systems deemed to pose unacceptable risk, such as those used for social scoring or real-time biometric surveillance in public, were banned. Additionally, companies that develop or use AI are required to ensure their staff have a sufficient level of AI literacy.
  • By August 2, 2026, GPAI models placed on the market after August 2, 2025, must be fully compliant with the EU AI Act. Furthermore, rules for certain listed high-risk AI systems begin to apply to systems placed on the market after this date, as well as to systems placed on the market earlier that have since undergone substantial modification.
  • By August 2, 2027, GPAI models placed on the market before August 2, 2025, must be brought into full compliance. High-risk systems used as safety components of products governed by EU product safety laws must also comply with stricter obligations from this date forward.
  • By August 2, 2030, AI systems used by public sector organizations that fall under the high-risk category must achieve full compliance with the EU AI Act.

Finally, by December 31, 2030, AI systems that are components of specific large-scale EU IT systems and were placed on the market before August 2, 2027, must be brought into compliance. This marks the final deadline for achieving widespread compliance across various sectors and applications.

Despite these phased implementation dates, a group representing Apple, Google, Meta, and other companies urged regulators to postpone the Act’s implementation by at least two years. This request was ultimately rejected by the EU, underscoring the commitment to the established timeline.

