

Current moderation systems often overlook hate speech expressed in non-dominant languages or cloaked in coded terms, Rwanda’s Permanent Representative to the United Nations, Amb Martin Ngoga, has said, calling for urgent reforms in the way digital platforms monitor harmful content.
The envoy was speaking during a commemoration event held in New York on Monday, June 16, marking the International Day for Countering Hate Speech ahead of its official observance on June 18. The day aims to raise global awareness and strengthen international cooperation to prevent hate-fuelled violence, particularly in the digital age.
ALSO READ: ‘Genocide can happen anywhere,’ Nduhungirehe on dangers of hate speech
Drawing from Rwanda’s history and its continued efforts to combat genocide ideology, Ngoga said that hate speech has evolved with technology, but remains as dangerous as ever.
"Today, while the platforms and tools may have changed, the threat remains. Online hate, now accelerated by AI-generated content and algorithmic amplification, spreads faster than truth, radicalizes communities, and undermines the foundations of peace we have worked hard to build,” he warned.
Ngoga emphasized the devastating role hate speech played in the 1994 Genocide against the Tutsi in Rwanda, stressing that unchecked incitement through media and state rhetoric laid the groundwork for one of the gravest atrocities of the 20th century.
"The Genocide against the Tutsi taught us that systematic incitement to hatred can have unimaginable consequences,” he said. "AI and automated content tools are increasingly being used to translate, replicate, and amplify hate messages, making them harder to trace and more difficult to counter.”
ALSO READ: Kwibuka: Renewed calls to end hate speech, dehumanisation
Many existing moderation systems used by global tech companies lack the linguistic and cultural sensitivity necessary to detect and respond to hate speech targeting marginalized groups, he said.
"To be effective, monitoring must be multilingual, culturally aware, and capable of identifying emerging patterns before they escalate,” he stated.
"In regions like ours, where the echoes of past atrocities remain painfully close, early warning systems must reflect the specific vulnerabilities of targeted communities.”
ALSO READ: UN official denounces rising hate speech in eastern DR Congo
He urged the international community to invest in data-driven, context-specific monitoring mechanisms, and to adopt inclusive approaches that bring together governments, civil society, and the private sector.
"Governments alone cannot solve this problem, nor can technology companies act as the sole drivers,” Ngoga said. "We must include and empower all actors to disrupt harmful narratives and ensure digital spaces serve as engines of inclusion, not instruments of division.”
Rwanda, he reaffirmed, supports a coordinated global response to the regulation of AI and digital platforms, one that is rooted in international human rights law. “We must not allow the digital age to be a breeding ground for the ideologies we vowed never to tolerate again,” he said.
ALSO READ: DR Congo's hate speech crisis: When words become weapons
Ngoga also referenced the precedent set by the now-defunct International Criminal Tribunal for Rwanda (ICTR) in the Media Trial, which clarified the legal line between protected free speech and criminal hate speech.
“For anyone wondering where to draw the line, the judicial guidance is already there,” he said, pointing to the Tribunal’s ruling in the case against Ferdinand Nahimana and others, who were convicted of incitement to genocide for their roles during the genocide.
The UN Strategy and Plan of Action on Hate Speech defines hate speech as communication that attacks or discriminates against individuals or groups based on identity factors such as religion, ethnicity, or gender, though a universal legal definition is still under discussion. The Plan of Action highlights the vital role of partnerships with tech and social media companies, particularly in the use of AI, in addressing hate speech. It notes that while AI offers valuable tools for early warning and conflict prevention, it also presents risks if not governed by human rights safeguards.
In July 2021, the UN General Assembly highlighted global concerns over “the exponential spread and proliferation of hate speech” around the world and adopted a resolution on “promoting inter-religious and intercultural dialogue and tolerance in countering hate speech”.
The resolution recognizes the need to counter discrimination, xenophobia and hate speech and calls on all relevant actors, including States, to increase their efforts to address this phenomenon, in line with international human rights law.
The resolution proclaimed June 18 as the International Day for Countering Hate Speech, building on the UN Strategy and Plan of Action on Hate Speech launched on June 18, 2019.