Europe gathers global experts to draft ‘Code of Practice’ for AI

The European Union has launched the development of the first-ever Code of Practice for general-purpose AI under the EU AI Act.

The European Union is making strides toward shaping the future of artificial intelligence with the development of the first “General-Purpose AI Code of Practice” for AI models under its AI Act.

According to a Sept. 30 announcement, the initiative is spearheaded by the European AI Office and brings together hundreds of global experts from academia, industry and civil society to collaboratively draft a framework that will address key issues such as transparency, copyright, risk assessment and internal governance.

Nearly 1,000 participants shaping the EU’s AI future

The kick-off plenary, held online with nearly 1,000 participants, marked the beginning of a months-long process that will conclude with the final draft in April 2025.

The Code of Practice is set to become a cornerstone for applying the AI Act to general-purpose AI models like large language models (LLMs) and AI systems integrated across various sectors.

This session also introduced four working groups, led by distinguished industry chairs and vice-chairs, which will drive the development of the Code of Practice.

The chairs and vice-chairs include notable experts such as Nuria Oliver, an artificial intelligence researcher, and Alexander Peukert, a German copyright law specialist. The groups will focus on transparency and copyright, risk identification, technical risk mitigation and internal risk management.

According to the European AI Office, these working groups will meet between October 2024 and April 2025 to draft provisions, gather stakeholder input and refine the Code of Practice through ongoing consultation.

Related: Greece plans new $330M data center to boost AI expansion

Setting the stage for global AI governance

The EU’s AI Act, passed by the European Parliament in March 2024, is a landmark piece of legislation that seeks to regulate the technology across the bloc.

It establishes a risk-based approach to AI governance, categorizing systems into risk levels ranging from minimal to unacceptable and mandating specific compliance measures for each tier.

The act is especially relevant to general-purpose AI models because their broad applications and potential for significant societal impact often place them in the higher-risk categories outlined by the legislation.

However, some major AI companies, including Meta, have criticized the regulations as too restrictive, arguing that they could stifle innovation. In response, the EU’s collaborative approach to drafting the Code of Practice aims to balance safety and ethics with fostering innovation. 

The multi-stakeholder consultation has already garnered more than 430 submissions, which will help shape the drafting of the code.

The EU’s goal is for the finalized code, due in April 2025, to set a precedent for how general-purpose AI models can be responsibly developed, deployed and managed, with a strong emphasis on minimizing risks and maximizing societal benefits.

As the global AI landscape evolves rapidly, this effort will likely influence AI policies worldwide, especially as more countries look to the EU for guidance on regulating emerging technologies.

Magazine: Advanced AI system is already ‘self-aware’ — ASI Alliance founder