Leveraging the EU AI Code of Practice for Global South AI Safety

I. Introduction

In 2024, the European Union (EU) adopted the AI Act, the first comprehensive regulation on AI, which outlines various obligations for developers and deployers of AI. On 10 July 2025, the European Commission published the General-Purpose AI Code of Practice (the AI Code of Practice), a voluntary code designed to help developers of general-purpose AI (GPAI) models comply with their transparency, copyright, and safety obligations under the AI Act. The Transparency and Copyright chapters of the Code apply to all GPAI models, defined as models trained on a cumulative amount of compute greater than 10^23 floating-point operations (FLOP). The Safety and Security chapter only applies to GPAI models with systemic risks, defined as models trained on a cumulative amount of compute greater than 10^25 FLOP. Many of today's frontier AI models, including GPT-4.5, Gemini 1.0 Ultra, Claude 3 Opus and Grok 4, fall within the category of GPAI models with systemic risks.
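
To give a rough sense of where these thresholds sit, training compute is often approximated as about 6 FLOP per model parameter per training token. The Python sketch below applies that back-of-the-envelope approximation to the Code's 10^23 and 10^25 FLOP thresholds; the model figures used are hypothetical and this is not an official estimation method.

```python
# Illustrative sketch: estimating whether a model crosses the AI Act's compute
# thresholds using the common ~6 * parameters * training-tokens approximation
# of training FLOP. The model figures below are made up for illustration.

GPAI_THRESHOLD_FLOP = 1e23           # indicative threshold for GPAI models
SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25  # threshold for GPAI models with systemic risk


def estimate_training_flop(n_parameters: float, n_training_tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOP per parameter per training token."""
    return 6 * n_parameters * n_training_tokens


def classify(n_parameters: float, n_training_tokens: float) -> str:
    flop = estimate_training_flop(n_parameters, n_training_tokens)
    if flop > SYSTEMIC_RISK_THRESHOLD_FLOP:
        return f"~{flop:.2e} FLOP: GPAI model with systemic risk (Safety and Security chapter applies)"
    if flop > GPAI_THRESHOLD_FLOP:
        return f"~{flop:.2e} FLOP: GPAI model (Transparency and Copyright chapters apply)"
    return f"~{flop:.2e} FLOP: below the indicative GPAI threshold"


# Hypothetical examples: a 7B-parameter model trained on 2T tokens,
# and a 1T-parameter model trained on 15T tokens.
print(classify(7e9, 2e12))     # ~8.4e22 FLOP -> below the indicative threshold
print(classify(1e12, 1.5e13))  # ~9.0e25 FLOP -> GPAI model with systemic risk
```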

While the Code is a regional instrument, some authors have anticipated that it may have a global effect in line with the Brussels Effect. This piece explores how Global South countries can actively advance their safety goals using the Code of Practice by collaborating with the EU AI Office, which is responsible for enforcing the AI Act and the Code of Practice. The piece begins by exploring the Code of Practice and its potential impact on AI safety. It then identifies several ways that Global South countries can collaborate with the EU AI Office, explaining that the most promising are reciprocal information-sharing with the EU AI Office, collaboration on evaluations, and technical capacity building.

II. AI Safety Under the EU AI Code of Practice

What does the Code of Practice require of developers?

The Safety and Security chapter, which requires developers to institute a comprehensive risk management process for GPAI models with systemic risks, is the most relevant for promoting AI safety. This risk management process entails several stages including risk identification, risk analysis, risk acceptance determination and risk mitigation, among others. 

In risk identification, developers are required to identify the risks that their models may present, which are of two kinds. The first is systemic risks, compiled by considering model-independent information, information about the model and similar models, and information communicated to the developer by the AI Office, the Scientific Panel of Independent Experts and initiatives endorsed by the AI Office, such as the International Network of AI Safety Institutes. The second is specific risks, including chemical, biological, radiological and nuclear (CBRN) risks, loss of control, cyber offence and manipulation, as well as potential risks to public safety, health, security and fundamental human rights. Developers then analyse each identified risk using model-independent information relevant to the risk, state-of-the-art model evaluations, risk modelling, risk estimation and post-market monitoring.

This is followed by risk acceptance determination, where developers assess the risks against pre-defined systemic risk acceptance criteria, and by the implementation of safety mitigations along the entire lifecycle of the model. Finally, developers are required to undertake continuous risk management through practices such as serious incident tracking, reporting and management, and post-market monitoring. Building on these commitments, developers are also subject to reporting obligations, including submitting a detailed Safety and Security Model Report to the AI Office before releasing a model.
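
To make the sequence of stages easier to follow, here is a minimal sketch (in Python) that models the risk management cycle described above as a simple record. The stage names track the Code's terminology, but the schema itself is purely illustrative and is not a format prescribed by the Code.

```python
# Illustrative sketch of the risk management cycle described above.
# Stage names follow the Code's terminology; the schema is hypothetical
# and not prescribed by the Code of Practice.
from dataclasses import dataclass, field
from enum import Enum, auto


class Stage(Enum):
    IDENTIFICATION = auto()            # identify systemic and specific risks
    ANALYSIS = auto()                  # evaluations, risk modelling and estimation
    ACCEPTANCE_DETERMINATION = auto()  # compare against pre-defined acceptance criteria
    MITIGATION = auto()                # safety mitigations across the model lifecycle
    CONTINUOUS_MANAGEMENT = auto()     # incident tracking, post-market monitoring
    REPORTING = auto()                 # e.g. Safety and Security Model Report to the AI Office


@dataclass
class Risk:
    name: str                                            # e.g. "CBRN", "loss of control", "cyber offence"
    evidence: list[str] = field(default_factory=list)    # evaluations, risk modelling results
    acceptable: bool | None = None                       # outcome of the acceptance determination
    mitigations: list[str] = field(default_factory=list)


@dataclass
class RiskManagementRecord:
    model_name: str
    risks: list[Risk]
    completed_stages: list[Stage] = field(default_factory=list)

    def ready_for_release_report(self) -> bool:
        """A model report should only follow once the earlier stages are documented."""
        required = [Stage.IDENTIFICATION, Stage.ANALYSIS,
                    Stage.ACCEPTANCE_DETERMINATION, Stage.MITIGATION]
        return all(stage in self.completed_stages for stage in required)
```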

What will the enforcement of the Code of Practice look like?

The Code of Practice is to be enforced by the EU AI Office as part of its duty to enforce the obligations of providers of general-purpose AI outlined in the AI Act. The AI Office has several enforcement powers set out in Article 88(1) of the AI Act, some of which it has already started exercising. First, it may request information from a developer (i) in order to assess compliance with their obligations or (ii) upon request by the Scientific Panel. Second, following consultation with the AI Board, the AI Office may conduct evaluations of AI models in order to assess developers' compliance with the rules or to evaluate the systemic risks associated with the models. Third, it may request developers to take certain actions, such as putting in place safety mitigation measures, restricting the making available of the model on the market, or withdrawing or recalling the model.

What does the Code of Practice mean for AI Safety?

The Code of Practice is likely to improve AI safety by formalising and improving AI safety standards. Until now, AI developers have largely determined which safety measures to undertake according to their own AI safety policies. The Code's requirements go beyond these current industry practices. For instance, by specifying the risks that developers should assess their models for — CBRN, manipulation, loss of control and cyber offence — the Code creates uniformity and prompts frontier AI developers who have not been assessing such risks to do so. The Code also strengthens practices along the entire risk management cycle, for example by mandating risk modelling and estimation, external evaluations and transparency requirements. This helps fill gaps such as the lack of independent third-party evaluations, the failure to publish safety frameworks and the inconsistent publication of model cards by some developers.

Though voluntary, the Code provides the most straightforward way to demonstrate compliance with the mandatory obligations of the AI Act. Companies that sign onto the Code, such as Anthropic, Mistral AI, OpenAI and xAI, will only need to follow the Code's stipulations. Those that do not sign on will still need to prove compliance with the AI Act, which may entail providing more supporting evidence and responding to more requests for information from the AI Office. Furthermore, courts and other institutions are also likely to use the Code of Practice as the baseline for assessing compliance with the AI Act, further incentivising adoption of these safety measures.

The EU has already experimented with this kind of soft regulation before, with the Code of Practice on Disinformation which was developed in 2018. The Code was later strengthened in 2022 and transformed into a Code of Conduct under the Digital Services Act. It has generally garnered many signatories and has been quite successful in combatting online disinformation during elections and crises such as the COVID-19 pandemic and the war in Ukraine. 

However, the Code's impact might be limited if many developers decide not to sign onto it and instead prove compliance with the AI Act via alternative adequate measures. Already, some developers like Meta have announced that they will not be signing onto the Code, and since signatories may withdraw their signatures at any time, more developers may choose not to adhere to it. The Code's capacity to shape developers' safety practices may thus be limited. Furthermore, the Code's effectiveness might be limited if the AI Office lacks the institutional capacity needed to evaluate AI models and assess compliance with the Code. Currently, the AI Office lacks the technical, policy and legal staff needed to assess developers' submissions and their claims about their models and risk management efforts. If this continues, developers may end up leading the implementation of the Code, limiting its effect on AI safety.

How might these provisions promote AI safety in Global South countries?

While the Code's objective is to mitigate AI risks at the EU level, it addresses widely deployed AI models whose systemic risks — including CBRN risks, loss of control, manipulation and cyber offence, and threats to public health, safety, security and fundamental human rights — are general in nature and not limited to the EU. These are risks that Global South countries should be concerned about as well. Given the global nature of the risks and models addressed, the Code could also promote AI safety in the Global South countries where these models are deployed.

The Code is likely to have such a global effect because its safety measures are to be implemented at the model level rather than the system level. All the requirements, from risk assessments such as evaluations to the implementation of safety mitigations, pertain to the models rather than the systems within which they are integrated. Since the cost of developing or customising different models for different jurisdictions or markets is high, developers who intend to put their models on the EU market as well as other markets are likely to rely on the EU AI Code of Practice as the standard for their safety measures. Scholars like Anu Bradford have already noted this ability of the EU to export its regulatory standards due to its large market size, regulatory capacity and political will to enact stringent rules, commonly termed the Brussels Effect. The data privacy standards of the General Data Protection Regulation (GDPR), for example, pushed Meta, Google and other Big Tech firms to update their privacy policies and protections for all their users around the world.

III. Concrete Pathways for EU-Global South Cooperation

Rather than relying on the AI Code and the EU AI Office to unilaterally advance AI safety, Global South countries can actively advance their safety goals by collaborating with the EU AI Office. The AI Office has been fostering international cooperation — both bilaterally and multilaterally — with various stakeholders, including partner countries and regions, other AI Safety Institutes (AISIs) and the wider scientific community. For example, it has signed a cooperation agreement with the Singapore AI Safety Institute and engaged with the International Network of AISIs. The Director of the AI Office has announced that the Office has engaged these stakeholders on 'topics of mutual interest' including AI safety. Global South countries can thus seek similar opportunities to cooperate with the EU AI Office, including in the following ways:

Communication of risks and other relevant information to the EU AI Office

The main way that many Global South countries can collaborate with the AI Office is by collecting and sharing valuable information on AI risks with the AI Office. This would be information that prompts and/or enables the AI Office to enforce the AI Code of Practice. For example, based on the information shared, the AI Office can assess compliance with the Code of Practice, evaluate AI models or require AI developers to undertake certain safety-increasing actions. Global South countries could thus share information detailing the effects that GPAI models with systemic risks have in their markets — especially effects that could also materialize in the EU. That is, (i) any serious incidents resulting from the use of GPAI models with systemic risks in their markets and (ii) any other capabilities or risks that materialize as citizens or users in their markets interact with these models. This kind of information sharing is a readily actionable way to collaborate with the EU AI Office, since Global South countries are better placed to monitor their jurisdictions and to ascertain and document this information.

This information could be obtained through (serious) incident tracking and reporting. Global South countries could keep a database of serious incidents such as those highlighted in Measure 9.3 of the Code: disruption of the operation or management of critical infrastructure, death of a person, human rights violations and serious harms to property and the environment (see also Article 3(49) of the AI Act and Section 5.2 of the Guidelines). Beyond incident tracking and reporting, Global South countries can also conduct post-deployment monitoring of the covered AI models, such as by undertaking 'field testing to investigate how people engage with AI in regular use'. This monitoring may reveal the capabilities and risks of these AI models as users engage with them in their specific contexts in the Global South. Already, independent monitoring by researchers in the Global South has unveiled various shortcomings in advanced models, such as GPT-4 providing advice on how to build a bomb or commit financial fraud when prompted in low-resource languages such as Zulu. Marta Ziosi et al. have suggested that regional AI Safety Institutes can undertake such work. Other suggestions for the institutions that may conduct this monitoring include individual governments and research organizations, and for African countries in particular, a regional and permanent AI safety and security task force.
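
As an illustration of what such a database might capture, the sketch below defines a minimal serious-incident record in Python, mirroring the categories highlighted in Measure 9.3. The field names and structure are hypothetical and are not drawn from the Code or the OECD framework.

```python
# Minimal, hypothetical sketch of a serious-incident record for a national or
# regional registry. Categories mirror those highlighted in Measure 9.3 of the
# Code; the field names and structure are illustrative only.
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class IncidentCategory(Enum):
    CRITICAL_INFRASTRUCTURE_DISRUPTION = "disruption of critical infrastructure"
    DEATH_OF_A_PERSON = "death of a person"
    HUMAN_RIGHTS_VIOLATION = "human rights violation"
    SERIOUS_HARM_TO_PROPERTY_OR_ENVIRONMENT = "serious harm to property or the environment"


@dataclass
class SeriousIncident:
    model_identifier: str              # model name and version as deployed locally
    category: IncidentCategory
    occurred_on: date
    jurisdiction: str                  # where the incident materialized
    description: str
    cross_border: bool = False         # could the same harm plausibly arise in the EU?
    evidence: list[str] = field(default_factory=list)  # logs, reports, media references


def incidents_worth_escalating(incidents: list[SeriousIncident]) -> list[SeriousIncident]:
    """Filter for incidents that may warrant sharing with the EU AI Office."""
    return [i for i in incidents if i.cross_border]
```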

The information conveyed by Global South countries could then be acted upon by the AI Office in two ways. First, it could use this information in assessing developers' compliance with the Code of Practice. Here, the AI Office could assess whether developers have fulfilled obligations such as providing channels for communicating incidents, as outlined in Measure 9.1. It could also assess whether this information is used by developers in subsequent risk identification, as required under Measure 2.1. Second, the AI Office could use this information in its own evaluations of GPAI models with systemic risks. The information gained from Global South countries' incident databases, for example, could be useful in identifying the underlying capabilities, vulnerabilities and potential risks of GPAI models deployed globally, as well as in noting trends in incidents, including cross-border incidents. Such information could thus serve as an early warning sign that a model could cause harm in other jurisdictions (including the EU) and prompt the AI Office to evaluate the model. These collaborations on incident tracking and reporting, in particular, could be bolstered by efforts to establish common reporting frameworks, such as the framework proposed by the OECD.

Information-sharing between Global South countries and the EU AI Office can be done through bilateral agreements, such as between the EU AI Office and the AISIs or AISI-equivalent institutions increasingly being established in Global South countries such as India, Chile and Kenya. As has been recommended for civil society organisations in the EU, Global South countries may also seek to establish connections with the Scientific Panel of Independent Experts. This may be a useful avenue for directing information or evidence which the Panel can use to issue an alert to the AI Office pursuant to Article 90 of the AI Act. Countries such as Kenya that are part of the International Network of AI Safety Institutes may also use this avenue, as anticipated by the Code.

Reciprocal information-sharing with the EU AI Office to advance safety goals and standards

One of the growing roles of AISIs is international coordination and information sharing, and Global South countries through their own AISIs or AISI-equivalent institutions could seek to establish reciprocal information-sharing from the EU AI Office. Already, there are recommendations on what information should be shared between AISIs through the International Network of AI Safety Institutes, for those countries in this Network.

Whether through the Network or otherwise, Global South countries can negotiate with the EU AI Office to acquire information which they can use to assess whether advanced AI models, especially those deployed in their markets, meet their own safety goals or standards. This may include information gathered pursuant to developers’ reporting obligations under the Code of Practice. Developers are mandated to share their Safety and Security Framework and detailed Safety and Security Model Reports with the AI Office as well as information on incidents that occur in the EU. Information from these reports would be useful to Global South countries in various ways. First, information from the Model Reports on the training data used could be useful in assessing whether the training data was diverse, unbiased and representative, which is a concern that many Global South countries share. Similarly, information on how model evaluations were conducted might be helpful in identifying gaps in these evaluations as they pertain to Global South countries (and this could form a basis for collaborations on evaluations that are plural, multilingual and multicultural as explored below). Where Global South countries have access to the models, they could also use information regarding evaluations and incidents to conduct their own evaluations on the models.

Information drawn from the AI Office's work on evaluations may also be useful to Global South countries that are currently building the technical expertise needed to evaluate models. The AI Office has been working on developing the tools, methodologies and benchmarks for evaluations, and is thus in a good position to share the insights gained from this work. It could share information ranging from evaluation standards (e.g., what systems to evaluate) to evaluation methods and approaches to interpreting evaluation results. This information could enable Global South countries to develop and conduct evaluations as well as interpret their results.

Global South countries may need to negotiate for some of this information subject to the confidentiality requirements under Article 78 of the AI Act. Article 78 allows for the sharing of confidential information gathered by the EU Commission with regulatory authorities of third countries under bilateral or multilateral international trade agreements that are supported by confidentiality agreements. Information sharing of this kind, however, may be difficult due to concerns about commercial interests and national security, and Global South countries will need to assure the EU AI Office, and other partners, that they have sufficient measures in place to keep the information shared confidential and secure. Alternatively, Global South countries can negotiate for a form of 'structured access' to information, under which they receive information that is not proprietary or sensitive to national security. For example, information on evaluations could leave out 'pre-deployment evaluations on sensitive national security threats', since sharing this information could 'increase its attack surface and thus threaten its confidentiality'.

Collaborating on evaluations for shared risks

Global South countries may also seek to foster cooperation with the EU AI Office through collaborations on safety evaluations. Efforts to evaluate AI models and develop tools for such evaluations are still evolving, and Global South countries can contribute to this in various ways, in collaboration with the EU AI Office. 

First, Global South countries can collaborate with the EU to develop benchmarks and other evaluation tools that are relevant to both the EU and Global South countries, such as plural, multilingual and multicultural benchmarks. There is precedent for this type of collaboration, as the EU is already collaborating with other AI Safety Institutes such as the US AISI.

Second, Global South countries could also share the benchmarks they develop and the results of the evaluations they conduct with the EU, where they ascertain that these could be useful to the EU as well. A prime example would be evaluations of models conducted in low-resource languages. Such evaluations are already proving useful for general AI safety, as they have highlighted how models trained on limited low-resource language data pose risks to all large language model (LLM) users, not just speakers of those languages.
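
To illustrate how such a multilingual safety evaluation might be structured, the sketch below compares refusal rates for the same set of harmful prompts across languages. The query_model and is_refusal callables are placeholders for whatever model access and scoring method an evaluator actually has; no specific benchmark or API is implied.

```python
# Simplified, hypothetical sketch of a multilingual refusal-rate evaluation.
# `query_model` and `is_refusal` are placeholders for the evaluator's own
# model access and scoring method; no specific benchmark or API is implied.
from typing import Callable


def refusal_rates(
    prompts_by_language: dict[str, list[str]],  # language -> translated harmful prompts
    query_model: Callable[[str], str],          # sends a prompt, returns the model's reply
    is_refusal: Callable[[str], bool],          # judges whether the reply is a refusal
) -> dict[str, float]:
    """Return the fraction of harmful prompts refused, per language."""
    rates: dict[str, float] = {}
    for language, prompts in prompts_by_language.items():
        if not prompts:
            continue
        refusals = sum(is_refusal(query_model(p)) for p in prompts)
        rates[language] = refusals / len(prompts)
    return rates


# A large gap between high-resource and low-resource languages (e.g. English
# vs. Zulu) would flag exactly the kind of weakness described above.
```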

Finally, Global South countries can also collaborate with the EU AI Office on the actual evaluations of GPAI models. The EU AI Office is already collaborating with other AISIs on joint testing of models through the International Network. However, there are still gaps in some efforts, such as agentic evaluations, and collaborations with the EU AI Office, through the Network or otherwise, can help plug such gaps. Global South countries, for example, can continue providing linguistic and technical expertise. Such collaborations can also give Global South countries access to advanced AI models which they may otherwise lack, and enable evaluations that are useful to both Global South countries and the EU.

Technical capacity-building initiatives

Global South countries can also seek to collaborate with the EU AI Office on technical capacity-building initiatives. Many developing countries currently lack sufficient technical experts to evaluate AI models and may thus benefit from programmes such as expert secondments and joint training programmes. Other AISIs have already established similar partnerships; for example, the UK and US AISIs made personnel exchanges a key part of their 2024 Memorandum of Understanding. These exchanges will need to be structured in a way that also bolsters the EU AI Office's technical capacity rather than compounding its capacity problem.

IV. Risks of these Forms of Collaboration

These forms of collaboration, however, are not without risks. In the same way that national security interests may hinder the EU AI Office from sharing information with Global South countries, Global South countries should be careful when deciding what information they relay to the EU AI Office and how they relay it. Communicating risks and incidents as they materialize in their jurisdictions or markets could reveal weaknesses, for instance in the critical systems where the models are deployed, and expose sensitive national security information. If not shared securely, information gained from evaluations or post-deployment monitoring, such as information on dangerous capabilities, could also end up in the hands of bad actors, resulting in the misuse of AI models.

Furthermore, there is also the risk that collaborations between the EU and Global South countries will perpetuate epistemic hegemony and injustice. Research has already demonstrated that Global North–Global South partnerships often reinforce power asymmetries and inequality across various types of partnerships, such as development partnerships and academic research collaborations. In this regard, various scholars have illustrated how 'Northern' knowledge often dominates and marginalises 'Southern' knowledge in what are supposed to be collaborative exchanges or contributions. Global North partners also often shape the agenda, priorities and methodologies in such collaborations. Global South countries thus need to be vigilant that their partnerships with the EU are truly collaborative rather than dominated by Western conceptions of AI safety and epistemic techniques.

V. Conclusion

The EU has taken a significant step towards promoting AI safety, in the EU and globally, through its recently published AI Code of Practice. The EU AI Office, which will be enforcing the Code (and the AI Act), continues to approach its role from an angle of international cooperation with various actors. Global South countries can leverage this opportunity to collaborate with the EU in order to advance their own safety goals. This piece has highlighted concrete ways to do this: reciprocal information-sharing with the EU that targets information useful to both the EU and Global South countries; cooperation on the design and conduct of evaluations; and technical capacity building.

Ultimately, Global South countries should also consider developing their own safety standards and requirements in line with their safety goals. They can build on the AI Code and contextualise its requirements in a way that harmonises regulatory standards and eases the burden on AI companies and developers operating across multiple markets. This is one area that is ripe for further research. Other areas requiring further research include what information Global South countries should share with the EU, how to share it securely, and how Global South countries can assure the AI Office and other partners that they have sufficient measures in place to keep the information they request confidential and secure.

Author Bio

Marie is a Research Associate at the ILINA Program. She holds an undergraduate law degree (top student, first class honours) from Strathmore University and a Master of Laws degree from Harvard Law School. 

Acknowledgements

Special thanks to Cecil Abungu and Michelle Malonza for their thoughtful feedback on this piece, and to Laureen Nyamu for her helpful copy editing.