The Kenya Artificial Intelligence Bill, 2026 proposes the establishment of an Artificial Intelligence Commissioner, to be appointed by the President. The office would be tasked with regulating AI innovation and usage, conducting risk assessments of AI systems deployed in Kenya, performing conformity audits and post-market surveillance, and assessing high-risk AI systems to ensure compliance.
Sponsored by Nominated Senator Karen Nyamu, the draft law mirrors the European Union Artificial Intelligence Act, the world’s first comprehensive AI law, which came into force in August 2024.
The Bill envisions that the AI Commissioner will classify AI systems used in the country according to the risk level they pose to health, safety, fundamental rights, the environment or societal welfare. The classification matters all the more as the technology shifts from automating simple tasks to agentic systems that can act independently. No country in Africa has yet enacted a dedicated law regulating AI systems, though 16 of the continent's 55 have established national AI strategies.
Providers of high-risk systems would have to carry out risk assessments and human-rights impact evaluations, ensure the systems are transparent and explainable, and keep detailed records of training data, inputs, outputs and performance metrics for at least five years. They would also be required to comply with the Data Protection Act when processing personal information.
Where AI tools generate or manipulate a person’s image, voice or likeness, developers must obtain explicit consent and clearly label the output as AI-generated.
The bill also requires organisations deploying AI to disclose to users and affected individuals the nature, purpose and limitations of the system, the extent to which decisions are automated and the measures taken to mitigate bias.
The bill proposes fines of up to Sh5 million or imprisonment for up to two years for offences such as deploying prohibited AI systems, failing to conduct required risk or workforce assessments, or distributing harmful AI-generated content that uses a person's likeness without consent, commonly known as deepfakes.