The safety of AI-powered systems in Nigeria is under scrutiny following the discovery of vulnerabilities in GPT-4 and GPT-5 that could be exploited to manipulate outputs and access sensitive information.
In a statement on Monday, Director of Corporate Affairs and External Relations at the National Information Technology Development Agency (NITDA), Hadiza Umar, confirmed that seven critical weaknesses have been identified in the AI models. These vulnerabilities can be exploited through a method known as indirect prompt injection, where malicious instructions are embedded in ordinary online content such as webpages, social media comments, or shortened URLs.
During routine interactions, such as summarizing text, browsing, or using AI-powered tools, ChatGPT may inadvertently execute these hidden commands. Certain weaknesses also allow attackers to bypass safety filters using trusted domains, exploit markdown rendering bugs to conceal harmful content, or perform memory poisoning, which can subtly alter the AI’s behavior over multiple sessions.
The agency emphasized that these vulnerabilities could lead to unauthorized system actions, data leakage, manipulated AI outputs, and long-term behavioral changes. While OpenAI has addressed some issues, large language models still face challenges in reliably distinguishing genuine user queries from hidden malicious instructions.
NITDA advised Nigerians, particularly technology-driven enterprises, digital creators, and professionals relying on AI for content creation, data processing, and reporting, to exercise caution. Responsible usage, verification of AI outputs, and vigilance against suspicious content are recommended to protect digital operations and personal information.