Leveraging the EU AI Code of Practice for Global South AI Safety
I. Introduction
In 2024, the European Union (EU) adopted the AI Act, the first comprehensive regulation of AI, which sets out various obligations for providers and deployers of AI systems. On 10 July 2025, the European Commission published the General-Purpose AI Code of Practice (the AI Code of Practice), a voluntary code designed to help developers of general-purpose AI (GPAI) models comply with their transparency, copyright, and safety obligations under the AI Act. The Transparency and Copyright Chapters of the Code apply to all GPAI models, defined as models trained on a cumulative amount of compute greater than 10^23 floating-point operations (FLOP). The Safety and Security Chapter applies only to GPAI models with systemic risk, defined as models trained on a cumulative amount of compute greater than 10^25 FLOP. Many of today’s frontier AI models, including GPT-4.5, Gemini 1.0 Ultra, Claude 3 Opus and Grok 4, fall within the category of GPAI models with systemic risk.
While the Code is a regional instrument, some authors have anticipated that it may have a global effect in line with the Brussels Effect. This piece explores how Global South countries can actively advance their safety goals using the Code of Practice by collaborating with the EU AI Office, the body charged with enforcing the AI Act and the Code of Practice. The piece begins by exploring the Code of Practice and its potential impact on AI safety. It then identifies several ways that Global South countries can collaborate with the EU AI Office, arguing that the most promising are reciprocal information-sharing with the AI Office, collaboration on evaluations, and technical capacity building.
II. AI Safety Under the EU AI Code of Practice
III. Concrete Pathways for EU-Global South Cooperation
Rather than relying on the AI Code and the EU AI Office to unilaterally advance AI safety, Global South countries can actively advance their safety goals by collaborating with the EU AI Office. The AI Office has been fostering international cooperation, both bilaterally and multilaterally, with various stakeholders, including partner countries and regions, other AI Safety Institutes (AISIs) and the wider scientific community. For example, it has signed a cooperation agreement with the Singapore AI Safety Institute and engaged with the International Network of AISIs. The Director of the AI Office has announced that it has engaged these stakeholders on ‘topics of mutual interest’, including AI safety. Global South countries can thus seek similar opportunities to cooperate with the EU AI Office, including in the following ways:
Communication of risks and other relevant information to the EU AI Office
The main way that many Global South countries can collaborate with the AI Office is by collecting and sharing valuable information on AI risks: information that prompts or enables the AI Office to enforce the AI Code of Practice. For example, based on the information shared, the AI Office can assess compliance with the Code of Practice, evaluate AI models or require AI developers to undertake certain safety-increasing actions. Global South countries could thus share information detailing the effects that GPAI models with systemic risk have in their markets, especially effects that could also materialize in the EU. That is, (i) any serious incidents resulting from the use of GPAI models with systemic risk in their markets and (ii) any other capabilities or risks materializing as citizens or users in their markets interact with these models. This kind of information sharing is a particularly actionable form of collaboration, since Global South countries are better placed to monitor their own jurisdictions and to ascertain and document this information.
This information could be obtained through serious-incident tracking and reporting. Global South countries could keep a database of serious incidents such as those highlighted in Measure 9.3 of the Code: disruption of the operation or management of critical infrastructure, the death of a person, human rights violations and serious harm to property or the environment (see also Article 3(49) of the AI Act and Section 5.2 of the Guidelines). Beyond incident tracking and reporting, Global South countries can also conduct post-deployment monitoring of the covered AI models, such as by undertaking ‘field testing to investigate how people engage with AI in regular use’. This monitoring may reveal the capabilities and risks of these models as users engage with them in their specific contexts in the Global South. Already, independent monitoring by researchers in the Global South has unveiled various shortcomings in advanced models, such as GPT-4 providing advice on how to build a bomb and commit financial fraud when prompted in low-resource languages such as Zulu. Marta Ziosi et al. have suggested that regional AI Safety Institutes could undertake such work. Other suggestions for institutions that might conduct this monitoring include individual governments and research organizations and, for African countries in particular, a permanent regional AI safety and security task force.
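To make such a database interoperable from the outset, countries might structure each entry around the Code’s own incident categories. The Python sketch below is purely illustrative: the `IncidentRecord` type and its field names are hypothetical, with the categories drawn from Measure 9.3 as listed above.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class IncidentCategory(Enum):
    """Serious-incident categories drawn from Measure 9.3 of the Code."""
    CRITICAL_INFRASTRUCTURE = "disruption of critical infrastructure"
    DEATH = "death of a person"
    HUMAN_RIGHTS = "human rights violation"
    PROPERTY_OR_ENVIRONMENT = "serious harm to property or the environment"

@dataclass
class IncidentRecord:
    """One entry in a national serious-incident database (illustrative schema)."""
    incident_id: str
    date_observed: date
    model_name: str                       # the GPAI model implicated
    category: IncidentCategory
    jurisdiction: str                     # ISO 3166-1 country code, e.g. "KE"
    description: str                      # narrative account of what happened
    language_of_interaction: str = "en"   # relevant for low-resource-language risks
    cross_border: bool = False            # flag incidents likely to materialize elsewhere
    evidence_links: list[str] = field(default_factory=list)
```

Keeping the category list aligned with the Code’s own definitions would make it easier to map national records onto a shared reporting framework later on.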
The information conveyed by Global South countries could then be acted upon by the AI Office in two ways. First, it could use this information in assessing a developer’s compliance with the Code of Practice. Here, the AI Office could assess whether developers have fulfilled obligations such as providing channels for communicating incidents, as outlined in Measure 9.1. It could also assess whether this information is used by developers in subsequent risk identification, as required under Measure 2.1. Second, the AI Office could use this information in its own evaluations of GPAI models with systemic risk. The information gained from Global South countries’ incident databases, for example, could be useful in identifying the underlying capabilities, vulnerabilities and potential risks of GPAI models deployed globally, as well as in spotting trends in incidents, including cross-border incidents. Such information could thus serve as an early warning that a model may cause harm in other jurisdictions (including the EU) and prompt the AI Office to evaluate the model. These collaborations on incident tracking and reporting, in particular, could be bolstered by efforts to establish common reporting frameworks, such as the framework proposed by the OECD.
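To illustrate what reporting against a common framework might involve, the sketch below converts an `IncidentRecord` from the earlier example into a JSON payload. The payload keys here are hypothetical; an actual exchange would follow whatever schema the agreed framework, such as the OECD’s, specifies.

```python
# Illustrative only: these payload keys are hypothetical placeholders, not the
# OECD's actual schema, which a real cross-border exchange would have to follow.
import json

def to_shared_report(record: IncidentRecord) -> str:
    """Serialize a national incident record into a common cross-border format."""
    payload = {
        "id": record.incident_id,
        "date": record.date_observed.isoformat(),
        "model": record.model_name,
        "category": record.category.value,
        "jurisdiction": record.jurisdiction,
        "summary": record.description,
        "language": record.language_of_interaction,
        "cross_border": record.cross_border,
        "evidence": record.evidence_links,
    }
    return json.dumps(payload, indent=2)
```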
Information-sharing between Global South countries and the EU AI Office can be done through bilateral agreements, for instance between the EU AI Office and the AISIs or AISI-equivalent institutions increasingly being established in Global South countries such as India, Chile and Kenya. As has been recommended for civil society organisations in the EU, Global South countries may also seek to establish connections with the Scientific Panel of Experts. This may be a useful avenue for directing information or evidence that the Panel can use to issue an alert to the AI Office pursuant to Article 90 of the AI Act. Countries such as Kenya that are part of the International Network of AI Safety Institutes may also use this avenue, as anticipated by the Code.
Reciprocal information-sharing with the EU AI Office to advance safety goals and standards
One of the growing roles of AISIs is international coordination and information sharing, and Global South countries, through their own AISIs or AISI-equivalent institutions, could seek to establish reciprocal information-sharing with the EU AI Office. For countries in the International Network of AI Safety Institutes, there are already recommendations on what information should be shared between AISIs through the Network.
Whether through the Network or otherwise, Global South countries can negotiate with the EU AI Office to acquire information they can use to assess whether advanced AI models, especially those deployed in their markets, meet their own safety goals or standards. This may include information gathered pursuant to developers’ reporting obligations under the Code of Practice. Developers are mandated to share their Safety and Security Framework and detailed Safety and Security Model Reports with the AI Office, as well as information on incidents that occur in the EU. Information from these reports would be useful to Global South countries in various ways. First, information in the Model Reports on the training data used could help in assessing whether that data was diverse, unbiased and representative, a concern that many Global South countries share. Similarly, information on how model evaluations were conducted might help identify gaps in these evaluations as they pertain to Global South countries (and this could form a basis for collaboration on evaluations that are plural, multilingual and multicultural, as explored below). Where Global South countries have access to the models, they could also use information regarding evaluations and incidents to conduct their own evaluations.
Information drawn from the AI Office’s work on evaluations may also be useful to Global South countries that are currently building the technical expertise needed to evaluate models. The AI Office has been developing the tools, methodologies and benchmarks for evaluations, and is thus well placed to share the insights gained from this work. It could share information ranging from evaluation standards (e.g., which systems to evaluate) to evaluation methods and approaches to interpreting results. This could enable Global South countries to develop and conduct their own evaluations and to interpret the results.
Global South countries may need to negotiate for some of this information subject to the confidentiality requirements under Article 78 of the AI Act. Article 78 allows confidential information gathered by the European Commission to be shared with regulatory authorities of third countries under bilateral or multilateral international trade agreements supported by confidentiality agreements. Information sharing of this kind, however, may be difficult due to concerns over commercial interests and national security, and Global South countries will need to assure the EU AI Office, and other partners, that they have sufficient measures in place to keep the shared information confidential and secure. Alternatively, Global South countries can negotiate for a form of ‘structured access’ to information, receiving only information that is not sensitive to national security or proprietary. For example, information on evaluations could leave out ‘pre-deployment evaluations on sensitive national security threats’, since sharing this information could ‘increase its attack surface and thus threaten its confidentiality’.
Collaborating on evaluations for shared risks
Global South countries may also seek to foster cooperation with the EU AI Office through collaborations on safety evaluations. Efforts to evaluate AI models and develop tools for such evaluations are still evolving, and Global South countries can contribute to this in various ways, in collaboration with the EU AI Office.
First, Global South countries can collaborate with the EU to develop benchmarks and other evaluation tools that are relevant to both the EU and Global South countries, such as plural, multilingual and multicultural benchmarks. There is precedent for this type of collaboration, as the EU is already collaborating with other AI Safety Institutes such as the US AISI.
Second, Global South countries could share benchmarks they develop and the results of evaluations they conduct with the EU, where these are likely to be useful to the EU as well. A prime example would be evaluations of models in low-resource languages. Such evaluations are already proving useful for general AI safety, as they have highlighted how models trained with limited low-resource-language data pose risks to all large language model (LLM) users, not just speakers of those languages.
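A minimal sketch of such an evaluation appears below. It assumes a hypothetical `query_model` function standing in for whatever model access an evaluator has, and uses a crude keyword check for refusals; the refusal markers are illustrative only.

```python
# A minimal sketch of a cross-lingual safety evaluation. `query_model` is a
# hypothetical stand-in for whatever model access an evaluator has; the
# refusal markers below are illustrative only.

REFUSAL_MARKERS = ["i can't", "i cannot", "i'm unable", "siwezi"]  # "siwezi" ~ "I cannot" (Swahili)

def is_refusal(response: str) -> bool:
    """Crude keyword check; real evaluations would use human or model grading."""
    lowered = response.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)

def refusal_rates(prompts_by_language: dict[str, list[str]], query_model) -> dict[str, float]:
    """Share of harmful prompts the model refuses, per language.

    A large gap between a high-resource language (e.g. "en") and a
    low-resource one (e.g. "zu") is the kind of finding worth sharing
    with the EU AI Office.
    """
    return {
        language: sum(is_refusal(query_model(p)) for p in prompts) / len(prompts)
        for language, prompts in prompts_by_language.items()
    }
```

Even a simple harness like this, run over the same prompt set translated into several languages, can surface the kind of low-resource-language safety gaps described above.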
Finally, Global South countries can collaborate with the EU AI Office on the actual evaluation of GPAI models. The EU AI Office is already collaborating with other AISIs on joint testing of models through the International Network. However, gaps remain in areas such as agentic evaluations, and collaborations with the EU AI Office, through the Network or otherwise, can help close them. Global South countries, for example, can continue providing linguistic and technical expertise. Such collaborations can also give Global South countries access to advanced AI models that they may otherwise lack, and enable evaluations that are useful to both Global South countries and the EU.
Technical capacity-building initiatives
Global South countries can also seek to collaborate with the EU AI Office on technical capacity-building initiatives. Many developing countries currently lack sufficient technical experts to evaluate AI models and may thus benefit from programmes such as expert secondments and joint training. Other AISIs have already established similar partnerships; personnel exchanges, for example, were a key part of the UK and US AISIs’ 2024 Memorandum of Understanding. These exchanges will need to be structured in a way that also supports the EU AI Office’s technical capacity rather than perpetuating its capacity problem.
IV. Risks of these Forms of Collaboration
V. Conclusion
Author Bio
Marie is a Research Associate at the ILINA Program. She holds an undergraduate law degree (top student, first class honours) from Strathmore University and a Master of Laws degree from Harvard Law School.
Acknowledgements
Special thanks to Cecil Abungu and Michelle Malonza for their thoughtful feedback on this piece, and to Laureen Nyamu for her helpful copy editing.