How Automation-First AI Adoption in the Global South Could Amplify AI Risk

I. Introduction
In 2025, two of the world's leading AI developers, OpenAI and Anthropic, published major reports examining how their models are being used across the globe. OpenAI's working paper examined who uses ChatGPT and for what purposes, while Anthropic's Economic Index tracked how Claude is integrated into everyday work tasks. Taken together, these reports offer the first large-scale empirical picture of AI adoption patterns across countries at different stages of development. Both reports point to a striking divergence in how AI is being used. High-income countries increasingly use AI to augment human capabilities. Augmentation is the practice of learning, iterating, and collaborating with the model to complete a task. Lower-income countries, by contrast, disproportionately use AI to automate, delegating complete tasks with minimal iteration. The countries in this lower-income, automation-heavy category map closely onto what is commonly understood as the Global South. Anthropic's data shows that lower-income countries such as Indonesia, India, and Nigeria have among the lowest AI usage per capita. It also shows that low-adoption countries are more likely to delegate complete tasks to the model rather than engage it collaboratively. I use lower-income countries and the Global South interchangeably in this piece to reflect that overlap. In the rest of this piece, I will refer to the dominant pattern in these countries as automation-first adoption.
Anthropic offers two explanations for this automation-first pattern. First, highly capable AI models often anticipate user needs and produce high-quality outputs on the first attempt, meaning users may be more willing to trust the model with more complex, complete, and higher-stakes tasks. Second, early adopters across all countries initially gravitate towards automation before shifting to more collaborative use as they grow more comfortable and experienced. Only with increased proficiency do users adapt their workflows to engage with the model as a cognitive assistant.
These differential risk profiles carry significant implications for the Global South. OpenAI's data shows disproportionately high growth in AI use across low- and middle-income countries, even as overall usage remains higher in high-income nations. When this rapid growth is read alongside Anthropic's finding that automation is both increasingly common and more prevalent in lower-usage economies, automation-first interaction emerges as the dominant entry point for new users in the Global South.
In this piece, I argue that the automation-augmentation divide maps onto global income levels in ways that systematically shape risk exposure in the Global South. I first analyse how automation-first adoption restructures human-AI interaction, increasing susceptibility to overreliance, automation bias, and the erosion of critical skills. I then show how these interaction-level risks become systemic when AI is deployed in fragile institutional contexts and high-stakes sectors such as healthcare. Next, I examine how development narratives surrounding AI-driven growth can accelerate premature automation in the absence of adequate safeguards. I conclude by outlining the institutional and oversight mechanisms necessary to mitigate these risks.
II. Understanding the Divide and Why It Matters
III. Implications of Automation-First Adoption in the Global South
How the Promise of Automation Obscures Risk
Across much of the Global South, AI is framed as a pathway to rapid economic transformation. Ipsos polling data shows that the populations most excited about AI tend to be those that most expect it to benefit their economies, and this excitement is strongest in lower-income countries. Development institutions, technology companies and national governments frame AI integration into agriculture, healthcare, education, public services and financial services as essential. These sectors are core contributors to GDP, and AI integration is projected to reduce poverty and deliver long-term economic benefits. Widespread AI adoption is expected to translate into broader use cases, deeper automation, and sustained economic gains.
When the urgency to deploy AI in high-stakes sectors meets a usage pattern characterised by overreliance and eroded critical evaluation, the risk of cascading failures is amplified. The expectation that AI automation will lead to transformative growth can reduce the perceived need to first invest in the complementary institutional conditions required to manage AI risk.
Let us use Uganda as a case study to understand how automation-first AI adoption in healthcare can produce catastrophic outcomes in the Global South context. Uganda has one of the world's highest maternal mortality rates and one of its most severe healthcare workforce shortages, with only 1 doctor per 25,000 people. Providing medical information across multiple specialities is also, according to Anthropic's data, one of the most distinctive uses of AI in Uganda.
Consider a scenario in which Uganda's Ministry of Health deploys an AI agent to streamline prenatal care across rural health centres, where the majority of Uganda's 46 million people live. The agent autonomously monitors pregnant women throughout their pregnancies, predicting which pregnancies are high-risk and require hospital delivery and which can safely be delivered locally. It schedules appointments, allocates ambulances, prioritises scarce ultrasound machines, and determines who receives restricted prenatal supplements. This directly targets the delays in seeking, reaching and receiving care that the three-delays framework identifies as drivers of maternal mortality. Driven by donor pressure and early apparent success, the Ministry deploys the agent rapidly across all centres simultaneously.
Initial results are encouraging. Ambulance efficiency improves, high-risk women are identified, supplements reach their targets. Weekly oversight reviews become cursory as health officers struggle to justify overriding quantified scores with clinical intuition. Gradually, formal review dissolves, leaving error-flagging to swamped medical students who may lack the authority or experience to challenge a system endorsed by their seniors. Anthropic identifies the degree of autonomy granted to AI in decision-making as a key economic primitive in its latest report, and notes that this is particularly acute in lower-income countries, where resource constraints make the marginal cost of human oversight prohibitively high.
This is where the technical failure becomes structural. Rural health centres lack laboratory capacity for comprehensive blood work, so the agent classifies women as low-risk based on the absence of abnormal test results. It treats missing data as evidence of health rather than evidence of resource constraints. It optimises for what can be measured and misses the intended outcome: actual health status. The agent also treats prenatal check-up attendance as a proxy for low risk, when in rural Uganda attendance often simply reflects living near the health centre.
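To make this failure mode concrete, here is a minimal, purely hypothetical sketch of a risk scorer that conflates "not tested" with "tested normal". Every name and threshold is invented for illustration; no real deployed system is being described.

```python
# Hypothetical illustration of the missing-data failure described above.
# All field names and thresholds are invented for this sketch.

def risk_score(patient: dict) -> float:
    """Naive scorer: each abnormal test adds risk; a missing test adds nothing."""
    score = 0.0
    if patient.get("blood_pressure", 0) > 140:   # no reading -> defaults to 0 -> "normal"
        score += 2.0
    if patient.get("urine_protein", 0) > 0.3:    # no lab test -> defaults to 0 -> "normal"
        score += 2.0
    if patient.get("missed_checkups", 0) > 2:    # distance, not health, often drives this
        score += 1.0
    return score

# A woman at a rural centre with no laboratory: no readings exist at all.
rural_patient = {}
urban_patient = {"blood_pressure": 150, "urine_protein": 0.5}

print(risk_score(rural_patient))   # 0.0 -> classified low-risk by absence of data
print(risk_score(urban_patient))   # 4.0 -> flagged, only because tests could be run
```

A safer design would distinguish "tested normal" from "not tested", for instance by refusing to emit a score and routing the case to human review when key inputs are missing.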
The consequences of systematic misclassification are compounded by the agent's autonomy. Rather than misclassifying once, it schedules entire prenatal care plans, allocates ambulances away from affected districts, deprioritises women for ultrasounds, and redirects supplements. Pre-eclampsia, which can progress to seizure, brain injury, and death within hours, is particularly likely to be missed. Gradual weight gain can be misread as healthy foetal development when it in fact signals dangerous fluid retention, especially where blood pressure equipment and urine testing are unavailable. When these women go into labour at health centres without emergency obstetric capacity, operating theatres and blood banks, complications are only survivable with resources the agent has systematically allocated elsewhere.
When labour begins and the crisis unfolds, health workers face an impossible situation: a woman haemorrhaging, seizing, or in obstructed labour, with transport often 2-4 hours away. Uganda sees approximately 15 pregnant women dying every day from direct causes like haemorrhage and hypertensive disorders. Beyond mortality, for every woman who dies, an estimated 20 to 30 suffer injuries, infections or life-long disabilities, conditions that would not occur if women received timely emergency obstetric care. The agent's systematic failures in a single district could meaningfully increase this national toll, and each preventable death would compound across Uganda's 135 districts if similar systems were deployed nationwide.
The harm is systemic, delayed, and difficult to reverse, concentrated on the most vulnerable women at their moment of greatest need, in a context where well-intentioned oversight dissolved under pressure, structural conditions transformed algorithmic errors into deaths, and the capacity to recognise or interrupt AI failure was minimal.
The Structural Vulnerability of Global South Countries
The sectoral vulnerabilities of these low-income economies are amplified by structural failures that make AI harms even more consequential. As illustrated in the scenario above, there are three key weaknesses in the Global South. First, AI systems deployed in the Global South often lack complementary backup mechanisms (fail-safes). Second, these economies have inadequate monitoring infrastructure. Third, they cannot afford the high cost of recovery when failures occur. Fail-safe systems implement redundancies, backup mechanisms and safety protocols that are triggered when the primary system fails. Developing them requires technical redundancy to ensure uninterrupted operation, alongside careful planning to incorporate human supervision and intervention in critical decisions.
In lower-income countries there are often weak or no systems to catch errors before they compound and cause larger-scale harm. Higher-income countries, by contrast, have structured resilience indicators and frameworks built into the design, operation and monitoring of AI in critical infrastructure systems. In automation-first adoption, when an AI system fails, there is likely no safety net in place that can act as a fallback. Anthropic's Index acknowledges this when it states that high-income countries have dominant digital infrastructure and an abundance of AI development resources. As a result, lower-income countries without these resources often face a single point of failure.
Further, detecting AI failures requires the capacity to audit systems and identify errors. Auditing AI systems requires technical capacity to log the system's decisions, monitor performance in real time, conduct independent evaluations and trace errors. Detecting failures also requires institutional mechanisms to report incidents, aggregate them and notice patterns. Lower-income countries often lack both the technical capacity and the institutional mechanisms. Failures can therefore adversely affect people for longer before they are recognised and responded to. When failures do occur, the cost of compensation, replacing failed systems and investing in alternatives is very high. High-income countries have greater capacity to absorb AI errors; in lower-income settings with limited resilience and recovery capacity, those errors carry the highest costs.
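The logging-and-escalation capacity described above can be sketched simply. The following is a hypothetical illustration, not a reference to any deployed system: every automated decision is recorded to an audit trail, and low-confidence decisions are held for human review rather than executed automatically. All names and thresholds are invented.

```python
import time

# Hypothetical sketch of logging plus human escalation: every automated
# decision is recorded so it can be audited later, and decisions below a
# confidence threshold are routed to a human queue instead of auto-executed.

AUDIT_LOG = []       # in practice: append-only, tamper-evident storage
REVIEW_QUEUE = []    # decisions awaiting human sign-off

def decide(case_id: str, prediction: str, confidence: float, threshold: float = 0.8):
    record = {
        "time": time.time(),
        "case_id": case_id,
        "prediction": prediction,
        "confidence": confidence,
        "escalated": confidence < threshold,
    }
    AUDIT_LOG.append(record)           # every decision is traceable afterwards
    if record["escalated"]:
        REVIEW_QUEUE.append(record)    # a human makes the final call
        return "pending_review"
    return prediction

print(decide("A-101", "low_risk", 0.95))   # "low_risk" (auto-approved, but logged)
print(decide("A-102", "low_risk", 0.55))   # "pending_review" (held for a human)
```

Even this trivial pattern supplies two of the missing capacities discussed above: a record from which errors can later be traced and aggregated, and a structural guarantee that the system is not the sole decision-maker in uncertain cases.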
Adoption is accelerating precisely where these vulnerability factors are most acute, and it follows the automation-first usage pattern that maximises the risks described in this piece: outputs are trusted without verification, errors cascade across critical sectors, and there is no infrastructure to catch failures. In the Global South, AI failures translate directly into financial loss, harm to vulnerable populations, and erosion of the institutions least equipped to withstand them. What makes this especially pressing is the pace at which it is unfolding. OpenAI's evidence of rapid uptake of AI in low- and middle-income countries implies that the speed of adoption may outpace the development of the human and institutional capacity to manage automation safely. Anthropic's speculation that countries will naturally shift from automation-focused to augmentation-focused usage as adoption deepens assumes that there is time to develop experience and expertise without suffering severe consequences along the way. However, if AI deployment in lower-income countries is concentrated in the sectors central to their economic stability, a single failure may altogether undermine continued AI adoption.
Lower-income countries are not just using AI differently from high-income countries. They are using AI in ways that expose them to risks they are poorly positioned to manage. The automation-augmentation divide is both a usage pattern and a risk profile. For the Global South, the concern is not only that errors occur, but that the capacity to absorb and correct them is more limited. Where critical sectors are more fragile, monitoring infrastructure is weaker, and recovery from failure is costly, the same AI error can have disproportionately large economic and social consequences. The promise of AI-driven development is real, but realising it requires safeguards, infrastructure and usage patterns that current adoption trends do not reflect.
IV. Strengthening AI Safety in the Global South
V. Conclusion
The findings from Anthropic's Economic Index and OpenAI's usage data show that users in high-income countries are more likely to engage AI systems through iterative, augmentation-focused interactions, while users in lower-income countries across the Global South tend to rely more heavily on automation and full task delegation. From this pattern follow two closely related implications. First, automation-first adoption may increase vulnerability in contexts where the capacity to absorb and respond to errors is more limited. OpenAI's evidence of rapid adoption growth in low- and middle-income countries, combined with Anthropic's observation that automation dominates the early stages of use, suggests that AI systems may be integrated into high-stakes sectors before supporting institutions, monitoring practices, and regulatory frameworks are fully developed. Second, the framing of AI automation as a driver of economic transformation may obscure the importance of careful integration. High expectations for rapid productivity gains create pressure to deploy AI quickly in high-stakes domains, often before the human, institutional, and infrastructural foundations for safe use are in place. The central concern is that when AI harms occur, the capacity to absorb, detect and correct them is unevenly distributed. Realising the developmental potential of AI in the Global South will therefore require more than expanding access and accelerating adoption. It will require deliberate investment in oversight mechanisms, redundancy, and usage patterns that preserve human judgement and institutional learning.
Author Bio
Michelle Malonza is a Research Associate at the ILINA Program. She has an undergraduate degree in law from Strathmore University and a Master of Laws degree from Columbia University. She is interested in agent governance, human oversight of AI, legal doctrinal questions related to AI risk and analysing the implications of reports that frontier AI companies release.
Acknowledgements
Special thanks to Cecil Abungu for helping me structure and think through this piece. I am also grateful to Sharon Malonza for always being available to talk to when I was stuck. To Gathoni Ireri, thank you for teaching me how to think about constructing threat models. Finally, I would like to thank the rest of the ILINA team for their feedback during the review process and Faith Gakii for her copy editing.
