From Dublin to Nairobi: Confronting AI-Driven Online Harm and Platform Responsibility
From left to right: Kimani (Tribeless Youth), Victor (KICTANet), Kennedy (BAKE), Emmanuel (Internet Without Borders), Lillian (Watoto Watch Network), and Julie Owono, Director (Internet Without Borders)
🟣 Statement from the KenSafeSpace Coalition
Dublin, 27 January 2026 – The KenSafeSpace Coalition is honored to be in Dublin as part of its Learning & Advocacy Expedition, at a pivotal moment for global debates on platform governance, artificial intelligence, and the protection of fundamental rights online.
Our visit coincides with the European Commission's recent investigative actions under the Digital Services Act (DSA) concerning Grok, following revelations that images involving the nudification of women and children were generated and circulated on X and through AI tools. This development underscores the urgent need for a global response to technology-facilitated gender-based violence (TFGBV) and for stronger accountability from digital platforms.
Asking an artificial intelligence system to “undress” individuals, particularly women and children, constitutes a form of digital sexual violence and, in many jurisdictions, an illegal act. Many countries have enacted laws addressing image-based sexual abuse, child sexual exploitation material, data protection, and violations of human dignity and privacy.
Such practices are not merely technological experiments; they reflect deeper patterns of gendered harm amplified by digital systems.
Our report Voices at Risk, published in November 2025, already identified a troubling inconsistency: while Grok initially refused certain nudification requests, X continued to tolerate the circulation of content generated by external AI “undressing” tools.
A new threshold appears to have been crossed. Grok itself now responds to requests to undress women, generating and publicly sharing images in comment threads. These practices violate X’s own policies and illustrate the progressive erosion of safeguards that we warned against in Voices at Risk.
This situation confirms a fundamental reality: the risks associated with digital technologies are not confined to the European Union or the United States.
They disproportionately affect women, girls, journalists, human rights defenders, and political actors in politically fragile contexts such as Kenya, as the country approaches the 2027 elections.
Understanding what happens beyond traditional centres of technological power is essential to anticipate harms, grasp their real-world impact, and design effective responses.
In Dublin, the KenSafeSpace Coalition seeks to contribute to this global conversation through dialogue with regulators, researchers, civil society actors, and technology companies. Our objective is clear: to reaffirm that content moderation, responsible AI design, and the protection of human rights are not optional, but foundational pillars of digital democracy.
In the face of rising digital violence, inaction is no longer an option. Platforms, regulators, and the international community must assume their collective responsibility to ensure a safe, inclusive, and rights-respecting digital space everywhere. This includes cross-border regulatory cooperation, shared standards on AI-generated sexual abuse, and the inclusion of Global South perspectives in AI governance.