The Global South can’t afford to sit out the AI debate
Wednesday, June 25, 2025

As artificial intelligence (AI) continues to evolve at breakneck speed, global debates over its impact on humanity are growing more urgent and more exclusive.

In power centers like Silicon Valley, Brussels, and Geneva, scientists and ethicists are sounding the alarm about the risks AI poses: mass job displacement, pervasive surveillance, disinformation, and even the erosion of human agency—the ability to make independent decisions in a world increasingly shaped by algorithms.

Notable voices such as Geoffrey Hinton, often dubbed the “Godfather of AI,” and historian Yuval Noah Harari have warned that unchecked AI could become one of the most destabilising forces of our time.

Their fears range from AI surpassing human control to the undermining of democracy and truth. Yet while these debates intensify in the West, they remain underdeveloped in much of the Global South.

This raises a critical question: As wealthier nations debate the future of AI, should countries still building their digital literacy and technological infrastructure stay on the sidelines—or should they take their place at the table now?

Why AI is everyone’s business

In many developing countries, public understanding of AI remains shallow—not just among the general population but among policymakers as well. AI is often conflated with automation or generic digital tools.

While its promise is slowly being recognised in sectors like healthcare and education, the deeper ethical, cultural, and political implications are still largely absent from public discourse.

But AI is not waiting for these countries to catch up. Already, AI-powered tools are influencing decisions in education, health care, finance, and governance across the Global South. To suggest that these nations should delay engagement until they reach some undefined threshold of readiness is not only short-sighted—it’s dangerous.

Who is the “we” in AI?

There is also a philosophical problem embedded in the global AI conversation. When international experts discuss “our” future or what “we” must do to govern AI, who exactly do they mean?

Are communities in Sub-Saharan Africa, nomadic Tuareg populations, Indigenous Amazonians, displaced refugees, rural women, or persons with disabilities included in that “we”? Are their languages, worldviews, or priorities reflected in the data used to train AI models—or in the governance structures shaping how those models are deployed?

More often than not, they are not. AI is being developed and shaped by a small group of corporations and countries, with systems rooted in their cultural assumptions, values, and economic models. This raises not just concerns about digital inequality, but questions of cultural erasure and sovereignty.

Consider a family in West Africa trying to secure a loan to pay for a funeral—an event of great cultural importance. AI credit systems, trained on Western financial norms, may view this decision as financially irresponsible, denying access to credit. Without understanding the cultural logic behind such financial behavior, AI misjudges entire communities.

Or take the example of justice and language. During the trials following the 1994 Genocide against the Tutsi, the Kinyarwanda word “gukora,” which ordinarily means “to work,” was used as a euphemism for “to kill.” Would an AI system, without historical and cultural depth, understand such nuance? The risks of misinterpretation are real, especially in contexts where lives, reputations, or justice itself is at stake.

Other communities face similar challenges. The Tuareg often use single names or clan-based identifiers that AI systems, designed for Western naming conventions, may reject as invalid. The oral ecological knowledge of Amazonian Indigenous peoples rarely makes it into the datasets that inform environmental policy or land management tools.

These are not hypothetical concerns. They are immediate, practical examples of how AI can entrench exclusion if its design and governance ignore diverse worldviews.

Even before we get to the question of AI threatening humanity’s future, we must confront how it is already influencing who gets seen, heard, or served. AI is not neutral. It reflects the priorities, biases, and blind spots of its creators.

This makes critical engagement from the Global South essential—not later, but now. Embracing innovation without examining who is included, what is assumed, and what might be lost is no longer tenable.

Making the conversation truly global

The real question isn’t whether the Global South is ready for the AI debate. It’s whether the AI debate is ready to include the Global South—meaningfully and on equal footing.

AI governance today remains largely confined to technical experts, major tech firms, and policymakers from a narrow band of countries. That must change. AI literacy should extend beyond engineers to include journalists, educators, civil society leaders, and elected officials.

Countries across the Global South should assert their place in global forums and contribute to ethical frameworks grounded in their realities, histories, and aspirations.

Investing in local research, nurturing indigenous technological ecosystems, and promoting culturally relevant ethical standards are just as important as adopting new technologies. A truly equitable digital future demands more than access—it requires agency.

If the future of AI is left to a handful of actors, it will reflect only a sliver of the world’s diversity. But if we open the conversation to include all regions, all cultures, and all voices, then we have a chance to shape AI in a way that serves—not undermines—humanity.

The Global South has more than a stake in this conversation; it has indispensable wisdom to contribute. It’s time for the world to listen.

The writer is the Senior State Attorney in the International Justice and Judicial Cooperation Department at Rwanda’s Ministry of Justice.