Is using AI to create explicit images a crime in Rwanda?
Saturday, January 17, 2026
UK media regulator Ofcom opened an investigation into platform X.

On January 12, the UK media regulator Ofcom opened an investigation into platform X over what it described as "deeply disturbing" AI-generated sexual images created using tools such as Grok.

While the probe is taking place in Europe, similar concerns are increasingly relevant in Rwanda, where the use of AI is growing in areas such as education and entertainment.

So, what does the law say? If an algorithm is used to create sexualised deepfake images without a person’s consent, could that amount to a criminal offence in Rwanda?

Mary Musoni, a human rights lawyer who researches technology-based violence, says existing laws are sufficient to hold offenders accountable, even without provisions that specifically mention AI. She cites Article 34 of the Cybercrimes Act and Article 135 of the Penal Code.

"In Rwanda, the unauthorised creation and distribution of sexually explicit or manipulated images, especially those that violate someone’s privacy or dignity can be a criminal offence,” she said.

"When AI tools are used to produce or share explicit images without consent, it may constitute cyber harassment, violation of personal privacy, and dissemination of offensive material,” she added, noting that the intent to harm or exploit digital privacy is the key factor that turns a digital act into a cybercrime.

Intent and action

For a case to hold up in court, lawyers look for two things: the "guilty mind" and the "guilty act," according to Innocent Muramira, the founder of Innocent Law.

"One would be held accountable because of the intention,” he said, "If you design a person without clothes and then go ahead and share it, that is what we call "mens rea” (intention) and "actus reus” (the act of sharing).”

While Muramira acknowledges that AI-specific regulations are still developing in Rwanda, he warns that the "long arm of the law" is ready.

He suggested that Rwanda may soon follow the path of the UK and Australia by implementing stricter AI-specific guidelines.

A clash with culture

For many Rwandans, this technological trend is not just a legal issue; it’s a cultural one.

Joseline Uyisabye, a university graduate and content creator, believes that digital behaviour should mirror traditional values.

"In our Rwandan culture, public nudity is generally disapproved,” she explained.

"When I post photos or videos, I am very intentional. I focus on content that is educational, ensuring I never project an image that could be seen as undignified,” she added.

Jean Paul Ibambe, a lawyer specialising in media law, believes there is a gap in how global platforms interact with local contexts.

"These platforms have terms and conditions that often meet the standards of the U.S., where they are based,” Ibambe said, "But they need to be customised for Rwanda. We need to engage owners of those platforms to make sure the way they are functioning or operating here is customised or contextualised.”

Daniel Twayinganyiki, a recent university graduate, argued that AI is a vital tool for progress that should not be blamed for the actions of malicious users.

"The problem is not with the AI itself, because AI is a tool,” he argued, "Somebody who uses it to harm people is the problem. For me, AI is very important and more useful than harmful.”