AI, education and the ethics of access

Nov 05, 2025

Miles Berry

The conversation about AI in schools often begins with excitement about transformation. Personalised learning! Automated marking! Infinite revision questions! All very encouraging. Yet if we pull back a little, the more important question is not what AI can do, but who it helps and on what terms. Whenever a new technology rolls into education, the ethics of access matter far more than the novelty.

For years we have had pupils who understand the ideas perfectly well but find the business of reading and writing stubbornly difficult. Brilliant young thinkers defeated by squiggles on the page. Pupils who can explain a concept aloud with clarity but cannot quite wrangle the spelling or the typing to get it onto paper. And others who arrive in our classrooms with a different first language and face a wall of English — or Welsh — before they even reach the curriculum content. These barriers do not reflect a lack of intelligence; they reflect a mismatch between a learner’s strengths and the shape of the tools we place in front of them.

AI offers no magic wand for this. But it does offer a genuine shift in how accessible the curriculum can be. Text-to-speech, speech-to-text, image description, translation, simplification — none of these are glamorous, but they are powerful. They let pupils move past the mechanics of decoding into the realm of meaning. A dyslexic child who can listen to Macbeth rather than stumble through it syllable by syllable has not been “given an unfair advantage”. They have been given access. There is a difference.

This is where the ethics appear. If we believe — as our legislation insists — that every learner is entitled to the curriculum, and that inclusion is not optional, then tools which widen the doorway are not indulgences. They are necessities. The challenge lies in doing this in ways that are fair, safe and genuinely supportive rather than performative.

Consider the role of AI in simplifying complex texts. A dense paragraph about photosynthesis can be rephrased at the touch of a button into something a struggling reader can encounter without sinking. That does not cheapen the science. It simply means the learner begins with understanding rather than frustration. Or take translation: if a newly arrived pupil can read a piece of content in their first language while developing their English over time, they gain both access and dignity. This is not lowering expectations. It is removing needless barriers.

But the ethics cut in two directions. We cannot talk about access without asking who has access to the tools. Yes, some of this technology is built into modern operating systems. But the more capable, fluent AI tools — the ones that generate the summaries, the explanations, the quiz questions — depend on broadband, up-to-date devices and, increasingly, subscriptions. So the learners who might benefit most from this support are often the least likely to have it. That is not a theoretical risk. It is a new kind of digital divide, built not on hardware alone but on the ability to participate in this emerging ecosystem. It is all very well to speak of democratising learning, but that only holds if the door is genuinely open.

Then we reach the thornier questions about AI and judgement. There is a world of difference between asking an AI to simplify a text and asking it to draft an Education, Health and Care plan (EHCP). Reducing the cognitive load of reading is one thing. Reducing the professional responsibility embedded in SEN decision-making is quite another. EHCPs shape the life chances of vulnerable pupils; they require sensitivity, professional knowledge, and human accountability. Outsourcing this to a system trained on whatever it has hoovered up from the internet is not simply a technical shortcut — it alters the nature of the relationship between child, family and school. If we take inclusion seriously, we should pause before handing these decisions to machines.

There is also the matter of data. Much of this technology works by absorbing what pupils write, say or upload. Pupils own the intellectual property in their work. They have rights over how it is used. So the ethical question becomes: where is that data going, and under what safeguards? The warm glow of efficiency should not distract us from the uncomfortable truth that some systems are built on practices that sit awkwardly with our obligations to protect the young people in our care.

Yet despite these concerns, there is real promise here. Some pupils who would never risk raising a hand will happily ask a question of a machine. They can rehearse their understanding privately before offering it to the group. They can check a misconception without fear. For some learners — dyslexic, autistic, anxious — the predictable patience of an AI can be calming rather than demanding. There are moments when having a non-judgemental conversational partner opens up learning rather than closing it down.

This, then, is the ethical heart of the matter. We often talk as if the danger of AI is that it will do too much for pupils. But for many children, education already asks them to fight through tasks unrelated to the thinking we actually want them to develop. If AI can reduce the friction while preserving the intellectual work, it has a place. If it removes the thinking altogether, it does not.

The promise of AI in education is not personalisation for its own sake. It is not automation as an end in itself. It is, instead, the chance to make the curriculum accessible without diluting its substance. To support learners who have always worked twice as hard for half the acknowledgement. And to do this with a clear ethical lens rather than a rush of excitement.

As always, the technology is less important than the values we bring to it. Access, inclusion and equity must come first. Everything else can follow.

Loosely based on my talk for the Open University in Wales collaborative research network for equity and inclusion, 4 November 2025