In a recent conversation, a family member questioned why Grok, an artificial intelligence chatbot integrated into the social media platform X, has attracted such intense scrutiny. Image manipulation, they argued, has existed for decades. Photoshop has long been capable of altering photographs. Why, then, has Grok become a focal point of controversy?

The issue is not whether image manipulation tools should exist, but whether comparing a standalone editing application like Photoshop to an AI system embedded in a social media platform accurately captures the risks involved. The answer lies not in what Grok can do in isolation, but in what it is, where it exists, and how it operates at scale.

At its core, Grok is a generative artificial intelligence system built to respond to user prompts. Like many AI tools, it can generate text and images. It was created to increase engagement on X by offering users a fast, interactive assistant embedded directly into the platform. In that sense, Grok is not unique in concept. Other platforms have introduced similar tools: Meta has integrated AI assistants into Facebook, Instagram, and WhatsApp, while Snapchat’s My AI is embedded directly into its messaging interface.

What sets Grok apart is not its technical capability, but how it is designed to function within a public social media environment. Traditional image-editing tools such as Photoshop are standalone applications that can operate offline, at least for limited periods. Grok, by contrast, is integrated directly into the platform and draws on live, real-time data streams, allowing it to respond not only to individual prompts but to ongoing conversations, trending topics, and viral moments as they unfold. This design collapses three stages into a single action: creation, publication, and amplification. This is what makes Grok unique, and potentially dangerous.

When users prompted Grok to generate images depicting the well-known Mackenzie sisters in bikinis, the significance lay not in whether such images were ultimately produced or circulated, but in the fact that such a prompt was possible in the first place. More broadly, Grok’s image-generation features were used to produce and publicly circulate sexualized images of women without their consent.

This incident reflects a broader and well-documented pattern in digital spaces. Nonconsensual sexualized imagery disproportionately targets women, particularly public-facing women, turning their bodies into objects of experimentation, ridicule, or control. AI does not create misogyny. However, when deployed without adequate safeguards, it can reproduce and accelerate existing power imbalances at an unprecedented scale.

Defenders of the platform may argue that responsibility lies with users, not with the tool itself. Elon Musk, and likely his legal team, would reasonably maintain that Grok is a product, not an actor; that misuse is inevitable; and that users must be held accountable for their actions. This argument is not new, nor is it without merit. Yet it overlooks the central reality of modern digital platforms: design shapes behaviour. When a system is engineered to maximize engagement and public interaction, harmful uses become foreseeable rather than exceptional. The question is not whether a company can control every user’s action, but whether it has anticipated predictable risks and designed safeguards accordingly.
This brings the discussion to a broader and more complex issue: where does regulation end, and where does censorship begin? Social media platforms are often caught between protecting users and preserving freedom of expression. Overregulation risks suppressing legitimate speech; underregulation risks enabling harm. There is no simple resolution to this tension, particularly when technologies evolve faster than legal frameworks.

For Rwanda, and for African societies more broadly, this debate carries particular significance. Rwanda’s social fabric is shaped by values of dignity, morality, and community responsibility. At the same time, the country is actively embracing digital transformation and innovation. Many users, especially young people, are engaging with powerful technologies without the long exposure, media literacy, or regulatory maturity seen in parts of the Western world.

This raises an important question: how do value-based societies integrate global technologies that were not designed with their cultural or social contexts in mind? Ensuring protection does not require rejecting innovation, nor does it require importing regulatory models wholesale from elsewhere. It requires deliberate reflection on education, platform accountability, and governance frameworks that align technological progress with societal values.

The Grok controversy is not about one chatbot or one platform. As AI becomes more deeply embedded in social platforms, the risks are unlikely to disappear; they will simply become more normalized. The challenge ahead is not only how platforms respond, but how societies decide what responsibility innovation must carry with it.

Mary Musoni is a human rights lawyer with an interest in technology-facilitated violence and digital harms.