
EU AI Act: Obligations of providers of general-purpose AI (GPAI)
Understanding the EU Commission’s guidelines on the scope of the obligations for general-purpose AI models

As artificial intelligence increasingly plays a central role in innovation, the European Union has taken an important regulatory step with the AI Act.
Among the most impactful provisions are those relating to general-purpose AI models (GPAI), particularly those that pose systemic risks. Chapter V of the Act—Articles 53, 54, and 55—sets out specific legal obligations for providers of these models. Understanding these requirements is critical for compliance, risk mitigation, and promoting transparency and trust in the use of AI across the EU.
This article aims to give an overview of the obligations imposed on GPAI model providers.
What are the obligations of general-purpose AI providers?
The obligations stipulated in Chapter V of the AI Act can be grouped as follows:
1. Documentation and Transparency – Article 53 (1) (a) and (b)
GPAI model providers must prepare detailed technical documentation that includes at least the elements listed in Annex XI. These include a general description of the GPAI model, covering its tasks, architecture and number of parameters. The documentation must also outline integration requirements, design specifications, training methodologies, key design choices (including their rationale and assumptions), and details on data sources, curation methods, and bias detection measures. Additionally, it should include information on computational resources used, training duration, and estimated or known energy consumption. Some of this information must be shared with the EU AI Office and/or provided upon request to national regulators.
Providers of GPAI models are also required to share information with downstream providers, i.e. providers of AI systems who want to integrate the model into their own AI system. Sharing this information should help downstream providers understand the model’s capabilities and limitations and enable them to fulfill their own obligations. Annex XII outlines the specific information to be provided, including integration requirements, design and training choices, data sources and processing methods, as well as bias detection measures. It must also include details on computational resources, training time, and estimated or known energy consumption.
GPAI models released under a free and open-source license that do not pose systemic risks are exempt from these detailed documentation requirements.
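The documentation elements named above can be tracked internally as a simple checklist. The sketch below is purely illustrative, assuming the (non-exhaustive) list of elements mentioned in this article; it is not the official or complete Annex XI list, and the helper function is a hypothetical internal tool, not anything prescribed by the AI Act.

```python
# Illustrative checklist of the Annex XI documentation elements named in
# this article -- NOT an official or exhaustive list from the AI Act.
ANNEX_XI_ELEMENTS = [
    "general description (tasks, architecture, number of parameters)",
    "integration requirements",
    "design specifications and training methodologies",
    "key design choices, rationale and assumptions",
    "data sources, curation methods and bias detection measures",
    "computational resources and training duration",
    "estimated or known energy consumption",
]

def missing_elements(documentation: dict) -> list[str]:
    """Return the checklist items not yet covered by a provider's draft
    technical documentation (keys map element -> section text)."""
    return [e for e in ANNEX_XI_ELEMENTS if not documentation.get(e)]

# Example: a draft that so far covers only the first two elements
draft = {e: "..." for e in ANNEX_XI_ELEMENTS[:2]}
print(len(missing_elements(draft)))  # prints 5
```

A gap report of this kind is one practical way to review documentation readiness before the AI Office or a national regulator requests it.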
2. Compliance with EU copyright law – Article 53 (1) (c)
Providers must implement policies to comply with EU law on copyright and related rights, including through state-of-the-art technologies to identify and respect content creators’ rights, such as any rights reservations that have been expressly declared.
3. Public disclosure of training data summary – Article 53 (1) (d)
Providers are required to publish a detailed summary of the content and data used to train the GPAI model, using a simple and effective template provided by the AI Office. The summary is intended to enhance transparency: it should be broadly comprehensive rather than technically detailed, making it accessible to parties with a legitimate interest in exercising their rights under Union law.
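To make the idea of a "broadly comprehensive rather than technically detailed" summary concrete, here is a minimal sketch of how a provider might organise such a summary internally. All field names and values are hypothetical illustrations; this is not the official AI Office template.

```python
# Hypothetical internal structure for drafting a public training-data
# summary -- field names are illustrative, not the AI Office template.
summary = {
    "model_name": "example-gpai-1",
    "data_sources": [
        {"type": "public web crawl", "period": "2020-2024"},
        {"type": "licensed text corpora", "period": "2023"},
    ],
    "curation": "deduplication and quality filtering",
    "copyright_policy": "opt-out reservations honoured",
}

def render_summary(s: dict) -> str:
    """Render a plain-text summary: broad, non-technical, readable."""
    lines = [f"Training data summary for {s['model_name']}"]
    for src in s["data_sources"]:
        lines.append(f"- {src['type']} ({src['period']})")
    lines.append(f"Curation: {s['curation']}")
    lines.append(f"Copyright policy: {s['copyright_policy']}")
    return "\n".join(lines)

print(render_summary(summary))
```

The actual published summary must follow the AI Office template once it applies; a structured internal draft like this simply makes it easier to keep the public summary consistent with the technical documentation.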
4. Cooperation with relevant AI authorities – Article 53 (3)
GPAI providers must cooperate with the European Commission and relevant national authorities by providing the necessary information and thereby enabling those authorities to supervise and enforce the use of AI within the European Union.
5. Systemic risk management – Article 55
If a GPAI model is considered to pose systemic risks, providers have additional obligations. These include notifying the AI Office, conducting model evaluations, assessing and mitigating systemic risks, reporting serious incidents, and ensuring robust cybersecurity measures.
6. Extra-territorial Scope and AI Act Representative – Article 54
Providers of GPAI models that do not have an establishment within the European Union but that make their models available or put them into service in the EU must appoint an authorized legal representative based in the EU.
This requirement reflects the extraterritorial scope of the AI Act — similar to the GDPR — meaning it applies not only to EU-based companies but also to foreign providers whose AI systems have an impact within the EU market. Making a model available in the EU, whether through online access, API integration, or distribution via third parties, can trigger this obligation, regardless of the provider’s physical location.
The representative serves as the primary point of contact for communications between the provider and EU national supervisory authorities and plays a crucial role in ensuring that the provider complies with the AI Act’s obligations. The representative’s responsibilities include handling documentation requests and cooperating with relevant authorities. This requirement applies regardless of company size and covers third-country providers whose GPAI models reach the EU market.
EU AI Act GPAI obligations: Prepare proactively
The EU AI Act places clear and increasing responsibilities on GPAI model providers, especially those whose models carry systemic risks. Compliance is not just a legal necessity but also a strategic one to ensure trust, transparency, and accountability.
Providers should proactively prepare for these obligations by aligning their documentation, risk management, and transparency practices with the requirements of the law, thereby avoiding penalties and supporting the responsible use of AI.
If you'd like support navigating your obligations under the EU AI Act, book a free consultation with one of our experts and see how Prighter can help.