KCSIE 2025 and AI
- Alan Day

- Oct 14

The 2025 Keeping Children Safe in Education (KCSIE) guidance addresses artificial intelligence (AI) by adding new online risks, such as misinformation and conspiracy theories, and by providing greater clarity on the use of AI in schools. Schools must now update their policies to cover how AI is used, with a focus on safeguarding children and staff from both the risks of generative AI and wider online threats. In practice, this means updating documentation, running staff training, and using AI tools to improve safety and efficiency while keeping a human "in the loop".
Why this matters for schools
New online risks:
KCSIE 2025 specifically adds disinformation, misinformation, and conspiracy theories to the list of online safety content risks that schools must safeguard against.
AI integration:
The guidance includes new information and clarity on how to use generative AI safely in schools, which can be used to help with administrative tasks, lesson planning, and more.
Safeguarding responsibilities:
Schools must update their online safeguarding policies to reflect these changes and address the specific risks posed by generative AI, such as its potential for "hallucinations" or inaccurate information.
AI abuse:
There is a parallel and growing concern about AI being used to create realistic child sexual abuse material, which complicates detection and removal efforts and highlights the need for enhanced safeguarding measures.