How Automation-First AI Adoption in the Global South Could Amplify AI Risk

March 2026

I. Introduction

In 2025, two of the world's leading AI developers, OpenAI and Anthropic, published major reports examining how their models are being used across the globe. OpenAI's working paper examined who uses ChatGPT and for what purposes, while Anthropic's Economic Index tracked how Claude is integrated into everyday work tasks. Taken together, these reports offer the first large-scale empirical picture of AI adoption patterns across countries at different stages of development. Both point to a striking divergence in how AI is being used. High-income countries increasingly use AI to augment human capabilities, that is, to learn, iterate, and collaborate with the model in completing a task. Lower-income countries, by contrast, disproportionately use AI to automate, delegating complete tasks with minimal iteration. The countries in this lower-income, automation-heavy category map closely onto what is commonly understood as the Global South. Anthropic's data shows that lower-income countries such as Indonesia, India, and Nigeria have among the lowest AI usage per capita, and that low-adoption countries are more likely to delegate complete tasks to the model than to engage it collaboratively. Reflecting that overlap, I use lower-income countries and the Global South interchangeably in this piece, and I will refer to the dominant pattern in these countries as automation-first adoption.

Anthropic offers two explanations for this automation-first pattern. First, highly capable AI models often anticipate user needs and produce high-quality outputs on the first attempt, meaning users may be more willing to trust the model with more complex, complete, and higher-stakes tasks. Second, early adopters across all countries initially gravitate towards automation before shifting to more collaborative use as they grow more comfortable and experienced. Only with increased proficiency do users adapt their workflows to engage with the model as a cognitive assistant.

These divergent adoption patterns carry significant implications for the Global South. OpenAI's data shows disproportionately high growth in AI use across low- and middle-income countries, even as overall usage remains higher in high-income nations. When this rapid growth is read alongside Anthropic's finding that automation is both increasingly common and more prevalent in lower-usage economies, automation-first interaction emerges as the dominant entry point for new users in the Global South.

In this piece, I argue that the automation-augmentation divide maps onto global income levels in ways that systematically shape risk exposure in the Global South. I first analyse how automation-first adoption restructures human-AI interaction, increasing susceptibility to overreliance, automation bias, and the erosion of critical skills. I then show how these interaction-level risks become systemic when AI is deployed in fragile institutional contexts and high-stakes sectors such as healthcare. Next, I examine how development narratives surrounding AI-driven growth can accelerate premature automation in the absence of adequate safeguards. I conclude by outlining the institutional and oversight mechanisms necessary to mitigate these risks.

II. Understanding the Divide and Why It Matters

The automation-augmentation divide reflects two distinct modes of human-AI interaction, with direct implications for user oversight and risk exposure. In automation-first adoption, tasks are delegated end-to-end, concentrating decision-making into fewer interaction points and reducing opportunities to scrutinise the model’s reasoning. Augmentation, by contrast, preserves human involvement through iterative collaboration where users ask the model to explain concepts, review partial outputs, and refine results throughout the process. This structural difference helps explain why automation-first adoption is particularly vulnerable to automation bias. The Center for Security and Emerging Technology (CSET) defines automation bias as the tendency for users to over-rely on automated systems, leading even otherwise knowledgeable users to make crucial or obvious errors by accepting outputs without critical evaluation. It can manifest as errors of omission, when automation failures go undetected because users become passive observers of system outputs, or errors of commission, when users treat automated recommendations as authoritative without independent verification. 

Automation-first adoption is especially prone to errors of commission. End-to-end task delegation leaves few checkpoints where users might scrutinise individual components of the output or cross-reference claims against other sources. Plausible but incorrect outputs are therefore likely to be accepted wholesale, particularly when users lack the domain expertise or resources to conduct thorough post-hoc validation. Errors of omission manifest differently in automation-first contexts. When users expect AI to handle an entire task end-to-end, they may not recognise when the system has failed to address critical aspects of the request or has produced incomplete outputs. The psychological expectation of task completion can mask gaps in the AI's response. Users operating under time constraints or with limited familiarity with the task domain may not possess the foundational knowledge necessary to identify what the system has failed to produce.

Augmentation introduces structural protections against both kinds of error. Its collaborative, iterative process includes multiple review points, allowing users to examine partial outputs, provide feedback, refine results, and detect both fabricated information and incomplete responses before final decisions are made. However, this does not eliminate error risks entirely. Errors of commission can still occur when users, despite maintaining a collaborative posture, gradually develop excessive trust in the system's reliability, and previous positive interactions may reduce vigilance over time. Errors of omission remain possible when users focus heavily on refining outputs without questioning whether critical elements are missing entirely.

These risks intensify with AI agents capable of multi-step autonomous action. When such systems act on erroneous premises, errors can cascade before human intervention occurs. In automation-dominant environments, misplaced trust can then transform from a knowledge problem into an action problem, with potentially systemic, delayed, and difficult-to-reverse consequences.

The differential risk profiles of automation-first and augmentation-focused users carry implications for the Global South, given Anthropic's finding that lower-income countries exhibit more automation-focused usage. When automation-heavy adoption coincides with higher rates of undetected errors, the productivity gains AI promises may be offset by decisions based on flawed information.

What is the Likelihood that Automation-First Adoption Continues in the Global South?

Anthropic's Economic Index reports suggest that the pattern of automation-first adoption is likely to persist in the Global South. Understanding why requires examining what a shift to the augmentation approach would demand.

Augmentation requires digital literacy, domain expertise, and the ability to critically evaluate outputs in iterative dialogue. It also requires workflow redesign, organisational training, and verification infrastructure. Building these capabilities at scale demands sustained and co-ordinated investment.

For many lower-income countries, automation-first adoption aligns more readily with existing constraints, particularly where educational systems, technical capacity, and monitoring institutions are already stretched.

The Consequences of Cognitive Offloading 

Overreliance also carries the long-term cost of altering how users think and work. With automation as the default mode, users may increasingly prioritise two skills: getting a model to produce the right output, and assessing whether AI-generated output is relevant. Both come at the expense of the slower work of forming hypotheses, evaluating evidence, and drawing conclusions. A recent study found that AI-enhanced productivity is 'not a shortcut to competence' and that AI should be incorporated into our work carefully to preserve skill formation. Preserving skill formation guards against the costs of cognitive offloading, the transfer of mental effort to an external system.

As confidence in the system grows, delegation increases further, reinforcing reliance on the model. The risk is that repeated delegation results in the underdevelopment of the very skills necessary for safe AI use: critical thinking, judgment, and contextual understanding. Microsoft's frequently cited study shows that higher confidence in AI systems is associated with lower levels of critical thinking, while greater self-confidence in users is associated with a higher likelihood of engaging critically with AI outputs. The implication is that when users trust AI systems highly, they scrutinise outputs less, and when they scrutinise less, their capacity for critical evaluation atrophies.

III. Implications of Automation-First Adoption in the Global South

How the Promise of Automation Obscures Risk

Across much of the Global South, AI is framed as a pathway to rapid economic transformation. Ipsos polling data shows that the populations most excited about AI tend to be those that most expect it to benefit their economies, and this excitement is strongest in lower-income countries. Development institutions, technology companies, and national governments frame AI integration into agriculture, healthcare, education, public services, and financial services as essential. These sectors are core contributors to GDP, and AI integration within them is projected to reduce poverty and have a long-term economic impact. Widespread AI adoption is expected to translate into broader use cases, deeper automation, and sustained economic gains.

When the urgency to deploy AI in high-stakes sectors meets a usage pattern characterised by overreliance and eroded critical evaluation, the risk of cascading failures is amplified. The expectation that AI automation will lead to transformative growth can reduce the perceived need to first invest in the complementary institutional conditions required to manage AI risk.

Let us use Uganda as a case study to understand how automation-first AI adoption in healthcare can produce catastrophic outcomes in the Global South context. Uganda has one of the world's highest maternal mortality rates and one of the world's most severe healthcare workforce shortages, with only one doctor per 25,000 people. Providing medical information across multiple specialities is also, according to Anthropic's data, one of the most distinctive uses of AI in Uganda.

Consider a scenario in which Uganda's Ministry of Health deploys an AI agent to streamline prenatal care across rural health centres, where the majority of Uganda's 46 million people live. The agent autonomously monitors women throughout their pregnancies, distinguishing high-risk cases that require hospital delivery from those that can safely deliver locally. It schedules appointments, allocates ambulances, prioritises scarce ultrasound machines, and determines who receives restricted prenatal supplements. This directly targets the delays that the three-delays framework identifies as drivers of maternal mortality. Driven by donor pressure and early apparent success, the Ministry deploys the agent rapidly across all centres simultaneously.

Initial results are encouraging. Ambulance efficiency improves, high-risk women are identified, supplements reach their targets. Weekly oversight reviews become cursory as health officers struggle to justify overriding quantified scores with clinical intuition. Gradually, formal review dissolves, leaving error-flagging to swamped medical students who may lack the authority or experience to challenge a system endorsed by their seniors. Anthropic's latest report identifies the degree of autonomy granted to AI in decision-making as a key economic primitive, and notes that the pressure to grant autonomy is particularly acute in lower-income countries, where resource constraints make the marginal cost of human oversight prohibitively high.

This is where the technical failure becomes structural. Rural health centres lack laboratory capacity for comprehensive blood work, so the agent classifies women as low-risk based on the absence of abnormal test results. It treats missing data as evidence of health rather than as evidence of resource constraints. It optimises for what can be measured and misses the intended outcome: actual health status. The agent also treats prenatal check-up attendance as a proxy for low risk, when in rural Uganda it often simply means living near the health centre.
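To make this failure mode concrete, here is a minimal, entirely hypothetical sketch in Python of a risk scorer with exactly this flaw. The record fields, thresholds, and scoring logic are illustrative assumptions, not details of any deployed system; the point is how missing data and proximity-driven attendance silently lower a risk score.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PrenatalRecord:
    systolic_bp: Optional[int]   # None when the clinic has no working BP cuff
    proteinuria: Optional[bool]  # None when urine testing is unavailable
    visits_attended: int         # clinic visits completed so far

def risk_score_naive(record: PrenatalRecord) -> int:
    """Flawed scorer: absence of evidence is scored as evidence of health."""
    score = 0
    # Bug: an untested (None) value fails the comparison, so a woman who was
    # never tested scores the same as one who tested normal.
    if record.systolic_bp is not None and record.systolic_bp >= 140:
        score += 2
    if record.proteinuria:  # None is falsy, silently treated as "no protein"
        score += 2
    # Bug: attendance proxies proximity to the clinic, not low clinical risk.
    if record.visits_attended >= 4:
        score -= 1
    return score

def risk_score_safer(record: PrenatalRecord) -> int:
    """Safer variant: missing data escalates to a clinician instead."""
    if record.systolic_bp is None or record.proteinuria is None:
        return 99  # sentinel: route to human review, never auto-classify
    return risk_score_naive(record)

# A woman with untested pre-eclampsia signs who happens to live near the centre:
untested = PrenatalRecord(systolic_bp=None, proteinuria=None, visits_attended=5)
print(risk_score_naive(untested))  # -1: classified as the lowest possible risk
print(risk_score_safer(untested))  # 99: flagged for human review
```

The difference between the two scorers is one design choice: whether missing data defaults to "normal" or to escalation.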

The consequences of systematic misclassification are compounded by the agent's autonomy. Rather than misclassifying once, it schedules entire prenatal care plans, allocates ambulances away from affected districts, deprioritises women for ultrasounds, and redirects supplements. Pre-eclampsia, which can progress to seizure, brain injury, and death within hours, is particularly likely to be missed. Gradual weight gain can be misread as healthy foetal development when it in fact signals dangerous fluid retention, especially where blood pressure equipment and urine testing are unavailable. When these women go into labour at health centres without emergency obstetric capacity, operating theatres and blood banks, complications are only survivable with resources the agent has systematically allocated elsewhere.

When labour begins and the crisis unfolds, health workers face an impossible situation: a woman haemorrhaging, seizing, or in obstructed labour, with transport often 2-4 hours away. Uganda sees approximately 15 pregnant women die every day from direct causes like haemorrhage and hypertensive disorders. Beyond mortality, for every woman who dies, an estimated 20 to 30 suffer injuries, infections, or life-long disabilities, conditions that would not occur if women received timely emergency obstetric care. This agent's systematic failures in a single district could meaningfully increase the national toll, a failure that would compound across Uganda's 135 districts if similar systems were deployed nationwide.

The harm is systemic, delayed, and difficult to reverse, concentrated on the most vulnerable women at their moment of greatest need, in a context where well-intentioned oversight dissolved under pressure, structural conditions transformed algorithmic errors into deaths, and the capacity to recognise or interrupt AI failure was minimal.

The Structural Vulnerability of Global South Countries

The sectoral vulnerabilities of these low-income economies are amplified by structural failures that make AI harms even more consequential. As illustrated in the scenario above, there are three key weaknesses in the Global South. First, AI systems deployed in the Global South often lack complementary backup mechanisms (fail-safes). Second, these economies have inadequate monitoring infrastructure. Third, they cannot afford the high cost of recovery. Fail-safe systems operate by implementing redundancies, backup mechanisms, and safety protocols that are triggered when the primary system fails. Developing them requires technical redundancy to ensure uninterrupted operation, alongside careful planning to incorporate human supervision and intervention in critical decisions.
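As a minimal illustration of the pattern, the sketch below wraps a hypothetical AI triage call so that outages and low-confidence outputs degrade to conservative rules and human review rather than failing silently. The function names, thresholds, and rules are assumptions for illustration only, not a prescribed implementation.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("triage")

def ai_triage(record: dict) -> tuple[str, float]:
    """Stand-in for the primary model call; returns (label, confidence)."""
    raise TimeoutError("model endpoint unreachable")  # simulate an outage

def rule_based_triage(record: dict) -> str:
    """Conservative clinical rules used as redundancy when the model fails."""
    if record.get("systolic_bp", 0) >= 140:
        return "refer"
    return "review"  # default to human review, never to "low-risk"

def triage_with_failsafe(record: dict, min_confidence: float = 0.8) -> str:
    """Wrap the primary system so failure degrades to safe fallbacks."""
    try:
        label, confidence = ai_triage(record)
    except Exception as exc:
        # Safety protocol triggered by primary-system failure.
        log.warning("primary system failed (%s); using rule-based fallback", exc)
        return rule_based_triage(record)
    if confidence < min_confidence:
        # Low-confidence outputs are escalated rather than acted on.
        log.info("low confidence (%.2f); escalating to human review", confidence)
        return "review"
    return label

print(triage_with_failsafe({"systolic_bp": 150}))  # "refer", via the fallback
```

The fallback path is deliberately boring: simple rules plus human review, because the point of a fail-safe is to remove the single point of failure, not to be clever.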

In lower-income countries there are often weak or no systems to catch errors before they compound and cause larger-scale harm. Higher-income countries, by contrast, have structured resilience indicators and frameworks built into the design, operation, and monitoring of AI in critical infrastructure systems. In automation-first adoption, when an AI system fails, there is often no safety net in place that can act as a fallback. Anthropic's Index acknowledges this when it states that high-income countries have dominant digital infrastructure and an abundance of AI development resources. As a result, lower-income countries without these resources often face a single point of failure.

Further, detecting AI failures requires the capacity to audit systems and identify errors. This demands the technical capacity to log the system's decisions, monitor performance in real time, conduct independent evaluations, and trace errors, as well as institutional mechanisms to report incidents, aggregate them, and notice patterns. Lower-income countries often lack both. Failures can therefore adversely affect people for longer before they are recognised and responded to. And when failures do occur, the cost of compensation, replacing failed systems, and investing in alternatives is very high. High-income countries have greater capacity to absorb AI errors; in lower-income settings with limited resilience and recovery capacity, the same errors carry the highest costs.

Adoption is accelerating precisely where these vulnerability factors are most acute, and it follows the automation-first usage pattern that maximises the risks described in this piece: outputs are trusted without verification, errors cascade across critical sectors, and there is no infrastructure to catch failures. In the Global South, AI failures translate directly into financial loss, harm to vulnerable populations, and erosion of the institutions least equipped to withstand them. What makes this especially pressing is the pace at which it is unfolding. OpenAI's evidence of rapid uptake of AI in low- and middle-income countries implies that the speed of adoption may outpace the development of the human and institutional capacity to manage automation safely. Anthropic's speculation that countries will naturally shift from automation-focused to augmentation-focused usage as adoption deepens assumes there is time to build experience and expertise without suffering severe consequences. However, if AI deployment in lower-income countries is concentrated in the sectors central to their economic stability, a single failure may altogether undermine continued AI adoption.

Lower-income countries are not just using AI differently from high-income countries. They are using AI in ways that expose them to risks they are poorly positioned to manage. The automation-augmentation divide is both a usage pattern and a risk profile. For the Global South, the concern is not only that errors occur, but that the capacity to absorb and correct them is more limited. Where critical sectors are more fragile, monitoring infrastructure is weaker, and recovery from failure is costly, the same AI error can have disproportionately large economic and social consequences. The promise of AI-driven development is real, but realising it requires safeguards, infrastructure, and usage patterns that current adoption trends do not reflect.

IV. Strengthening AI Safety in the Global South

Automation-first adoption carries distinct risks. Where users delegate complete tasks with minimal iteration, they have fewer opportunities to verify outputs, apply contextual judgement, and catch errors before they compound. The risks this creates are not evenly distributed. They concentrate in settings where institutional safeguards are weakest, where high-stakes sectors absorb the majority of AI deployment, and where the capacity to recover from failure is most limited. This is the context in which the Global South is adopting AI. The recommendations that follow are designed with that backdrop in mind. They do not assume well-resourced institutions or mature oversight infrastructure. They assume the conditions that automation-first adoption actually creates.

Building Oversight Infrastructure

Even in well-resourced settings, effective human oversight of advanced AI systems is difficult to achieve, because humans increasingly struggle to meaningfully evaluate and verify AI outputs as task complexity grows. Designing oversight systems requires careful calibration of when humans or AI are more reliable, how AI assistance shapes human judgment, and how to avoid both over-reliance on and under-utilisation of human review. An effective system is therefore not a single checkpoint but a layered architecture in which overseers detect errors, intervene when errors occur, and report problems so developers can improve the systems. This means overseers must be empowered to query the AI system directly to improve their ability to discern whether answers are accurate. Training programs that directly address overreliance and reinforce the overseers' advantage in up-to-date, relevant contextual knowledge would help with this.

Beyond binary approval decisions, AI systems should facilitate the addition of relevant information by the user and enable feedback mechanisms that allow overseers to flag concerns, request clarifications, and contribute context-specific knowledge the system lacks. The European Data Protection Supervisor's guidance on human oversight provides a framework that can be contextualised for resource-constrained settings. A number of its practical recommendations can be applied directly by Global South countries that have the capacity to integrate complex AI systems into their core functions. For instance, overseers should be afforded sufficient time to assess context and evaluate the decisions proposed by the system, particularly in high-stakes scenarios. Cognitive forcing functions that reduce overreliance, such as requiring users to record a preliminary decision before seeing AI outputs, have also been suggested; a sketch of one follows below. Global South countries could also ensure that those responsible for the oversight of these AI systems are not already overburdened. Preventing cognitive overload is crucial to ensuring that overseers can maintain the situational awareness required to make informed decisions.
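To show how lightweight such a forcing function can be, here is a hypothetical Python sketch: the overseer must record their own judgment before the AI recommendation is revealed, and every deferral is logged so auditors can spot patterns of overreliance. The workflow, names, and log fields are illustrative assumptions, not the EDPS guidance's prescribed implementation.

```python
import json
from datetime import datetime, timezone

def forced_preliminary_review(case_id: str, ai_recommendation: str,
                              audit_log: list) -> str:
    """Collect the overseer's own judgment before revealing the AI output."""
    preliminary = input(f"Case {case_id}: your decision before seeing the AI? ")
    # Only now is the AI recommendation revealed.
    print(f"AI recommends: {ai_recommendation}")
    final = input("Final decision (press enter to accept the AI's): ").strip()
    final = final or ai_recommendation
    # Log agreement and disagreement so habitual deferral to the AI
    # becomes visible to auditors over time.
    audit_log.append({
        "case": case_id,
        "preliminary": preliminary,
        "ai": ai_recommendation,
        "final": final,
        "deferred_to_ai": final == ai_recommendation != preliminary,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })
    return final

audit_log: list = []
decision = forced_preliminary_review("CASE-0423", "classify low-risk", audit_log)
print(json.dumps(audit_log, indent=2))
```

The value of the pattern is less in any single decision than in the audit trail: an overseer who always defers shows up clearly in the log.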

Paying More Attention to AI Risk

The evidence presented throughout this analysis reveals why Global South countries cannot afford to treat AI risk as a secondary concern. They face compounded risks that make AI errors far more consequential than in high-income countries: users enter AI adoption with high expectations of transformative growth, minimal verification infrastructure, and concentrated deployment in sectors central to economic stability. Because the impacts of risks from advanced AI systems are expected to be disproportionately severe in Global South countries, we cannot simply wait for usage patterns to shift. Policy choices about oversight infrastructure, verification systems, and deployment will shape whether AI is a tool for sustainable development or a vector for concentrated harm. The capacity to absorb those harms is lowest precisely where the risks are highest.

V. Conclusion

The findings from Anthropic's Economic Index and OpenAI's usage data show that users in high-income countries are more likely to engage AI systems through iterative, augmentation-focused interactions, while users in lower-income countries across the Global South tend to rely more heavily on automation and full task delegation. From this pattern follow two closely related implications. First, automation-first adoption may increase vulnerability in contexts where the capacity to absorb and respond to errors is more limited. OpenAI's evidence of rapid adoption growth in low- and middle-income countries, combined with Anthropic's observation that automation dominates early stages of use, suggests that AI systems may be integrated into high-stakes sectors before supporting institutions, monitoring practices, and regulatory frameworks are fully developed. Second, the framing of AI automation as a driver of economic transformation may obscure the importance of careful integration. High expectations for rapid productivity gains create pressure to deploy AI quickly in high-stakes domains, often before the human, institutional, and infrastructural foundations for safe use are in place. The central concern is that when AI harms occur, the capacity to absorb, detect, and correct them is unevenly distributed. Realising the developmental potential of AI in the Global South will therefore require more than expanding access and accelerating adoption. It will require deliberate investment in oversight mechanisms, redundancy, and usage patterns that preserve human judgement and institutional learning.

Author Bio

Michelle Malonza is a Research Associate at the ILINA Program. She has an undergraduate degree in law from Strathmore University and a Master of Laws degree from Columbia University. She is interested in agent governance, human oversight of AI, legal doctrinal questions related to AI risk and analysing the implications of reports that frontier AI companies release.

Acknowledgements

Special thanks to Cecil Abungu for helping me structure and think through this piece. I am also grateful to Sharon Malonza for always being available to talk to when I was stuck. To Gathoni Ireri, thank you for teaching me how to think about constructing threat models.  Finally, I would like to thank the rest of the ILINA team for their feedback during the review process and Faith Gakii for her copy editing.
