
A new Keeper Security report surveying more than 1,400 education leaders in the United States and the United Kingdom finds that 41% of schools have experienced AI-related cyber incidents, underscoring mounting risks as artificial intelligence tools become commonplace in classrooms.
The study, “AI in Schools: From Promise to Peril”, shows that incidents range from AI-enabled phishing and misinformation campaigns to students creating harmful deepfake content. Nearly 30% of schools reported instances of students generating damaging AI content, while security teams confirmed dozens of AI-driven attacks affecting school operations and data security.
Key findings include:
- 41% of surveyed schools reported at least one AI-related cyber incident.
- 86% of institutions permit student use of AI tools and 91% permit faculty use, yet formal AI policies are rare, with most schools relying on informal guidance.
- About 30% reported student-generated harmful AI content (deepfakes, disinformation).
- 90% of education leaders are concerned about AI-driven cybersecurity threats, yet only 25% feel “very confident” in identifying and responding to sophisticated AI attacks.
The report warns that while AI offers significant pedagogical benefits, schools are currently underprepared to manage its security risks. Lax governance, limited staff training, and aging IT infrastructure increase vulnerability to new attack methods that leverage generative AI to craft convincing phishing messages, manipulate media, or automate social-engineering schemes.
Recommendations for schools
- Adopt clear AI governance policies that define permitted uses, data-handling rules, and consequences for misuse.
- Train educators and administrators to recognise AI-driven threats (e.g., deepfake detection, AI-augmented phishing).
- Strengthen cybersecurity posture: multi-factor authentication, zero-trust network principles, and robust logging and monitoring.
- Teach digital literacy to students, including ethics and safe use of generative tools.
- Conduct regular incident-response drills that simulate AI-enabled attacks to improve readiness.
School districts, education departments and vendor partners are urged to prioritise formal AI policies and immediate cybersecurity investments. Industry groups and policymakers can also help by issuing guidance, funding training programmes, and creating standards for safe AI use in education.