
EU AI Act: Obligations of providers of general-purpose AI (GPAI)

Understanding the EU Commission’s guidelines on the scope of the obligations for general-purpose AI models

Andreas Maetzler
Katharina Jokic

As artificial intelligence increasingly plays a central role in innovation, the European Union has taken an important regulatory step with the AI Act.   

Among the most impactful provisions are those relating to general-purpose AI models (GPAI), particularly those that pose systemic risks. Chapter V of the Act—Articles 53, 54, and 55—sets out specific legal obligations for providers of these models. Understanding these requirements is critical for compliance, risk mitigation, and promoting transparency and trust in the use of AI across the EU.  

This article aims to give an overview of the obligations imposed on GPAI model providers. 

What are the obligations of general-purpose AI providers?  

The obligations stipulated in Chapter V of the AI Act can be grouped as follows: 

1. Documentation and Transparency – Article 53 (1) (a) and (b)  

GPAI model providers must prepare detailed technical documentation that includes at least the elements listed in Annex XI. These include a general description of the GPAI model, covering its tasks, architecture, and number of parameters. The documentation must also outline integration requirements, design specifications, training methodologies, key design choices (including their rationale and assumptions), and details on data sources, curation methods, and bias detection measures. Additionally, it should include information on the computational resources used, training duration, and estimated or known energy consumption. Some of this information must be shared with the EU AI Office and/or provided upon request to national regulators.  

Providers of GPAI models are also required to share information with downstream providers, i.e. providers of AI systems who want to integrate the model into their own AI system. Sharing this information should help downstream providers understand the model’s capabilities and limitations and enable them to fulfill their own obligations. Annex XII outlines the specific information to be provided, including integration requirements, design and training choices, data sources and processing methods, as well as bias detection measures. It must also include details on computational resources, training time, and estimated or known energy consumption. 

GPAI models released under a free and open-source license that do not pose systemic risks are exempt from these detailed documentation requirements. 

2. Compliance with EU copyright law – Article 53 (1) (c) 

Providers must implement policies to comply with EU law on copyright and related rights, for example by using state-of-the-art technologies to identify and respect content creators’ rights, including any reservations of rights explicitly expressed.  

3. Public disclosure of training data summary – Article 53 (1) (d)  

Providers are required to publish a detailed summary of the content and data used to train the GPAI model, using a simple and effective template provided by the AI Office. The summary is intended to enhance transparency and should be broadly comprehensive rather than technically detailed, making it accessible to parties with a legitimate interest in exercising their rights under Union law.  

4. Cooperation with relevant AI authorities – Article 53 (3) 

GPAI providers must cooperate with the European Commission and relevant national authorities by providing the necessary information, thereby enabling those authorities to regulate the use of AI within the European Union. 

5. Systemic risk management – Article 55 

If a GPAI model is considered to pose systemic risks, providers have additional obligations. These include notifying the AI Office, conducting model evaluations, assessing and mitigating systemic risks, reporting serious incidents, and ensuring robust cybersecurity measures.  

6. Extra-territorial Scope and AI Act Representative – Article 54 

Providers of GPAI models that do not have an establishment within the European Union but that make their models available or put them into service in the EU must appoint an authorized legal representative based in the EU.  

This requirement reflects the extraterritorial scope of the AI Act — similar to the GDPR — meaning it applies not only to EU-based companies but also to foreign providers whose AI systems have an impact within the EU market. Making a model available in the EU, whether through online access, API integration, or distribution via third parties, can trigger this obligation, regardless of the provider’s physical location. 

The representative serves as the primary point of contact for communications with EU national supervisory authorities and plays a crucial role in ensuring that the provider complies with the AI Act’s obligations. The representative’s responsibilities include handling documentation requests and cooperating with relevant authorities. This requirement applies regardless of company size and covers developers, deployers, and third-country suppliers whose AI systems reach the EU. 

EU AI Act GPAI obligations: Prepare proactively  

The EU AI Act places clear and increasing responsibilities on GPAI model providers, especially those whose models carry systemic risks. Compliance is not just a legal necessity but also a strategic one, ensuring trust, transparency, and accountability.

Providers should proactively prepare for these obligations by aligning their documentation, risk management, and transparency practices with the requirements of the law, thereby avoiding penalties and supporting the responsible use of AI. 

If you'd like support navigating your obligations under the EU AI Act, book a free consultation with one of our experts and see how Prighter can help.

About the Authors

Andreas Maetzler

Privacy Specialist

Dr. Andreas Mätzler, a specialist in data privacy, is a partner attorney at the IURO law firm, which forms Prighter's legal foundation.
He holds privacy certifications from various institutions and is active as a DPO (Data Protection Officer) in the banking, financial services, technology, and healthcare sectors. Prighter is built on his extensive legal knowledge and his hands-on experience delivering privacy projects.

Katharina Jokic

Privacy Specialist

Katharina is a privacy specialist based in Prighter's Vienna office.
After studying law at the University of Vienna, she obtained an LLM in Law and Technology from Tilburg University in the Netherlands. Through internships at international law firms and technology companies, she has been deeply involved in privacy and data protection. She is fluent in English, German, and Croatian.