AI isn’t playing around - UK education needs to get tech-savvy

Technology, and artificial intelligence (AI) in particular, is developing rapidly. Having seeped into the education sector, AI is having a significant impact on school communities. The Department for Education’s (DfE) position seems to be that AI can help “to modernise our education system”, assisting with efficiency and workloads. This development has, however, brought with it significant child protection concerns that do not yet appear to have been sufficiently addressed. Over the years, for example, we have seen a transition from physical and verbal bullying to online harassment. AI, for all of its benefits, has now brought a whole new danger to be tackled.

In particular, we have seen the emergence of online gangs of teenagers sharing violent content, leading to serious crimes such as cyberattacks and exploitation. Additionally, the rise of AI-powered “nudification” apps has facilitated the creation of deepfake images, posing significant risks to children’s safety and privacy. A disturbing example of such abuse occurred at Beverly Vista Middle School in the USA, where pupils used AI to create and share fake nude images of their peers. The risk of AI abuse, and the challenge of safeguarding pupils against it, is a real problem that needs to be high on the government’s agenda.

The non-statutory guidance document ‘Sharing nudes and semi-nudes: Advice for education settings working with children and young people’ does reference this issue, providing some overarching advice for education providers on responding effectively. Specifically, this guidance discusses: risk assessments, safeguarding and supporting children and young people, handling devices and images, recording incidents, the role of other agencies, and informing parents and carers.

It also addresses what steps to take when an incident comes to the school’s attention, such as:

“Report it to your Designated Safeguarding Lead (DSL) or equivalent immediately. Your setting’s child protection policy should outline codes of practice to be followed.

Never view, copy, print, share, store or save the imagery yourself, or ask a child to share or download – this is illegal (see note below).

Note: In exceptional circumstances, it may be necessary for the DSL (or equivalent) only to view the image to safeguard the child or young person. That decision should be based on the professional judgement of the DSL (or equivalent).

If you have already viewed the imagery by accident (e.g. if a young person has shown it to you before you could ask them not to), report this to the DSL (or equivalent) and seek support.

Do not delete the imagery or ask the young person to delete it.

Do not ask the child/children or young person(s) who are involved in the incident to disclose information regarding the imagery. This is the responsibility of the DSL (or equivalent).

Do not share information about the incident with other members of staff, the young person(s) it involves or their, or other, parents and/or carers.

Do not say or do anything to blame or shame any young people involved.

Do explain to them that you need to report it and reassure them that they will receive support and help from the DSL (or equivalent).”

This advice is non-statutory, and should be read alongside the DfE’s statutory guidance, ‘Keeping children safe in education’ (KCSIE), and non-statutory guidance ‘Searching, screening and confiscation’.

We anticipate that the KCSIE guidance will, in its 2025 update, change to include wording concerning the misuse of AI.

The power of AI and social media was recently highlighted in the 2025 Netflix series, ‘Adolescence’, which explores the impact of online radicalisation and how social media algorithms and AI can influence a young or vulnerable person’s beliefs and behaviour. It follows the journey of a young boy, his family and the professionals involved, after the boy’s murder of a female classmate. In respect of this series, star and co-writer Stephen Graham explained: “not only we parent our children, and not only the school educates our children. But also there’s influences that we have no idea of that are having profound effects on our young culture, profound effects, positive and extremely negative. So it’s having a look at that and seeing that we’re all accountable.”

Below are some further considerations for education providers when dealing with the misuse of AI:

  • Establish an AI policy, setting out clear systems of reporting for both students and staff as well as effective monitoring systems
  • Conduct risk assessments before considering use of AI
  • Provide relevant staff training on recognising use of AI and dealing with any subsequent fall-out
  • Consider your online safety curriculum and how to teach children to stay safe when using AI
  • Update safeguarding policies to address the risks of AI
  • Work closely with your local authority, law enforcement and child protection agencies in particular to ensure compliance with your legal obligations and safeguarding protocols. The creation and/or distribution of sexually explicit deepfake images without consent is an offence in the UK and should be addressed accordingly
  • Support affected individuals.

In addition to the above, the use of mobile phones is under specific consideration: having AI and social media at their fingertips could give pupils easier access to harmful content at periods of potentially increased anxiety and heightened emotion during the school day. There is current DfE guidance for schools and trusts on how to develop, implement and maintain a policy prohibiting the use of mobile phones throughout the school day. This is non-statutory guidance and should be considered alongside the behaviour in schools guidance and KCSIE.

AI apps are becoming increasingly accessible on mobile phones, raising the risk of misuse, whether for academic purposes or cyberbullying. Australia has responded to these risks with a recent ban on mobile phones for pupils at school, aiming to curb distractions and online harm. Following this ban, the Australian Department of Education has confirmed the following:

“A New South Wales survey shows 87 per cent of students are less distracted in the classroom and 81 per cent of students have seen improved learning, according to almost 1,000 principals. In South Australia, there has been a 63 per cent decline in critical incidents involving social media and 54 per cent fewer behavioural issues”.

It remains to be seen whether the UK will follow this approach.

Misuse of AI, as referenced above, is anticipated to have a significant effect on the mental health of children, young people and vulnerable people. The Children’s Wellbeing and Schools Bill, introduced to Parliament on 17 December 2024, aims to place children’s interests at the heart of government policy, emphasising the importance of mental health and well-being in educational settings and strengthening the role of education providers in safeguarding, to better protect children from abuse, neglect and exploitation.

We hope the publication of further guidance for educational institutions will be prioritised: in today’s digital age, safeguarding students from online dangers is a significant and growing concern.

Please do not hesitate to reach out to Stephanie Hughes or Adam Friel for support in developing policies, providing training or guidance in crisis management further to specific incidents.
