Artificial intelligence is set to intensify online abuse in Nigeria, with Gatefield projecting that millions of women and girls may be directly targeted each year if urgent interventions are not made.
The report, “Industrialised Harm: The Scale of AI-Facilitated Violence in Nigeria”, warns that 30–35 million women and girls could face AI-enabled harassment annually by 2030. Gatefield based its projections on Nigeria’s internet growth, online abuse trends, and the rapid adoption of generative AI technologies.
Direct targeting includes non-consensual sexual imagery, deepfake impersonation, and coordinated harassment campaigns. High-profile cases include musician Ayra Starr, Senator Natasha Akpoti-Uduaghan, and Nollywood actor Kehinde Bankole, all of whom have been targeted with AI-manipulated content.
The report highlights Nigeria’s lack of legal and institutional frameworks for AI-related abuse. There is no AI-specific governance framework, no legal definition of deepfakes, and no formal recognition of AI-enabled gender-based violence, while oversight by agencies such as NITDA, the NCC, and law enforcement remains fragmented.
Gatefield points to international examples such as the EU AI Act, France’s deepfake laws, and the UK’s Online Safety Act as models for mandatory platform obligations, content labelling, and rapid takedown procedures.
The report’s recommendations include:

– Clear legal definitions for deepfakes and synthetic media
– Binding platform transparency and algorithmic risk assessments
– 24–48 hour removal timelines for harmful content
– Protections for women, children, and vulnerable groups
– Accessible reporting and redress mechanisms
Gatefield CEO Adewunmi Emoruwa said AI has made online abuse cheaper, faster, and more damaging, particularly for women in public life. Advocacy lead Shirley Ewang added that Nigeria is unprepared, stressing the need for coordinated action by government, tech platforms, and civil society.