Working with data

Feb 04, 2026

Miles Berry

Working with assessment data sits at the intersection of classroom practice, research thinking, and professional judgement. Productive use of assessment depends not simply on collecting scores but on using evidence to inform decisions and improve learning. Good assessment avoids relying on surface impressions, such as how busy pupils appear, is clear about the decisions it will support, and only becomes valuable when pupils can act on the feedback it generates.

This places teachers within a broader tradition of evidence-informed education, where collecting better evidence about what works can improve outcomes and strengthen professional independence. The central question is therefore not whether data exist, but how they are interpreted and used.

Education research: meaning and measurement

Educational inquiry operates across contrasting research traditions. One tradition treats the social and educational world as complex, contradictory, and multilayered, requiring interpretation of meaning and experience rather than reduction to numbers. The other emphasises measurement, control, hypothesis testing, and quantified performance as the basis for explanation.

These perspectives mirror a long-standing tension in thinking about teaching itself: whether teaching is primarily relational craft or empirical science. In practice, professional understanding draws on both. Quantitative approaches enable claims about improvement and impact, while qualitative insight explains how learning is experienced and why change occurs.

The shift toward evidence-informed practice in education reflects influence from fields such as medicine, where rigorous trials, control groups, and statistical reasoning determine whether an intervention truly works. Randomisation, comparison with placebo, and sufficient sample size guard against mistaking coincidence or expectation for genuine effect. The implication for education is caution: improvement after an intervention does not by itself prove that the intervention caused the improvement.

Using assessment data responsibly

Assessment data must therefore be analysed carefully. Comparable pre-test and post-test measures allow meaningful examination of change, because differences cannot then be attributed to altered difficulty or content. Where identical questions are used, any improvement reflects pupil learning, teaching influence, or wider contextual factors rather than changes to the test itself.

Visualisation supports interpretation, but only when chosen thoughtfully. Simple charts that merely display raw scores provide little insight. Statistical representations such as box-and-whisker plots reveal the spread of scores, the median, and the variation between assessments, enabling more meaningful judgement about progress across a class.
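As a rough illustration of the kind of plot meant here, the sketch below draws a box-and-whisker comparison of pre-test and post-test scores with matplotlib. The score lists are invented placeholder data, not results from the lecture.

```python
# A minimal sketch: box plots comparing the spread of pre- and post-test scores.
# The scores below are invented placeholders for illustration only.
import matplotlib.pyplot as plt

pre_scores = [12, 14, 15, 15, 16, 17, 18, 18, 19, 21]
post_scores = [13, 15, 16, 17, 18, 18, 19, 20, 22, 24]

fig, ax = plt.subplots()
ax.boxplot([pre_scores, post_scores])          # one box per assessment
ax.set_xticklabels(["Pre-test", "Post-test"])
ax.set_ylabel("Score (out of 25)")
ax.set_title("Class scores before and after the unit")
plt.show()
```

A chart like this makes the median, quartiles, and outliers visible at a glance, which is exactly the information a bar chart of raw totals tends to hide.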

Even then, interpretation remains tentative. An increase in average score may be small relative to the natural spread of results, suggesting that apparent improvement could arise from normal variation rather than genuine learning gain. Measures such as standard deviation and statistical testing help determine whether change is likely to be meaningful rather than accidental.
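One way of putting that caution into practice, sketched below with invented scores, is to compare the average gain with the spread of results and run a paired t-test. The particular test and data are illustrative assumptions rather than a prescription from the lecture.

```python
# A minimal sketch: is the apparent gain large relative to the spread of scores?
# The data are invented placeholders; a paired t-test is one common check, not the only one.
import statistics
from scipy import stats

pre_scores = [12, 14, 15, 15, 16, 17, 18, 18, 19, 21]
post_scores = [13, 15, 16, 17, 18, 18, 19, 20, 22, 24]

gain = statistics.mean(post_scores) - statistics.mean(pre_scores)
spread = statistics.stdev(pre_scores)          # sample standard deviation of the pre-test

t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)   # paired (repeated-measures) t-test
print(f"Mean gain: {gain:.1f} marks (pre-test sd: {spread:.1f})")
print(f"Paired t-test: t = {t_stat:.2f}, p = {p_value:.3f}")
```

A gain that is small compared with the standard deviation, or a large p-value, would suggest the improvement could plausibly be normal variation rather than genuine learning gain.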

Correlation analysis further illustrates the limits of inference. A weak correlation between pre-test and post-test scores indicates that earlier attainment explains only a small proportion of the variation in later performance, highlighting the complexity of learning and the influence of other factors.
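The snippet below sketches what that analysis might look like, again with invented scores: Pearson's r measures the strength of the linear relationship, and its square gives the proportion of variation in post-test scores accounted for by the pre-test.

```python
# A minimal sketch of correlating pre-test and post-test scores (invented data).
from scipy import stats

pre_scores = [12, 14, 15, 15, 16, 17, 18, 18, 19, 21]
post_scores = [18, 13, 20, 16, 22, 15, 19, 24, 17, 21]

r, p_value = stats.pearsonr(pre_scores, post_scores)   # Pearson correlation coefficient
print(f"r = {r:.2f}, r squared = {r * r:.2f}, p = {p_value:.3f}")
```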

Alongside statistical patterns, individual pupil stories remain important. Outliers, absences, sudden improvement, or unexpected decline each require contextual explanation grounded in classroom knowledge rather than numbers alone. Data analysis therefore combines quantitative evidence with professional interpretation.

From evidence to evaluation

Evaluation frameworks formalise this reasoning. Claims that an intervention improves outcomes must address counterfactual questions: would improvement have occurred anyway, was the intervention truly responsible, did outcomes genuinely change, and will the effect generalise elsewhere. These questions shift attention from description to causal reasoning.

Structured evaluation follows staged processes of preparation, implementation, analysis, and reporting, including defining research questions, selecting outcome measures, establishing comparison groups, conducting pre- and post-tests, analysing results, and communicating findings. Such discipline transforms everyday classroom inquiry into systematic professional learning.

Quality of assessment also requires scrutiny. Useful assessments clearly define what they measure, avoid unintended influences, cover the intended content range, predict relevant outcomes, and yield consistent results when repeated or marked by others. These criteria align with broader principles of validity and reliability in educational measurement.
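For the reliability side of that scrutiny, one common statistic (not named in the lecture, so offered here only as an assumption-laden example) is Cronbach's alpha, which estimates the internal consistency of a test from an item-by-pupil score matrix.

```python
# A hedged sketch: Cronbach's alpha as one measure of internal consistency.
# Rows are pupils, columns are test items; the marks are invented placeholders.
import numpy as np

item_scores = np.array([
    [3, 4, 2, 5],
    [2, 3, 2, 4],
    [4, 5, 3, 5],
    [1, 2, 1, 3],
    [3, 3, 2, 4],
    [4, 4, 3, 5],
])

k = item_scores.shape[1]                          # number of items
item_var = item_scores.var(axis=0, ddof=1)        # variance of each item
total_var = item_scores.sum(axis=1).var(ddof=1)   # variance of pupils' total scores

alpha = (k / (k - 1)) * (1 - item_var.sum() / total_var)
print(f"Cronbach's alpha: {alpha:.2f}")
```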

Ethical and professional responsibility

Research and evaluation in classrooms carry ethical obligations. Social science should respect privacy, autonomy, dignity, and diversity, employ appropriate methods with integrity, and aim to maximise benefit while minimising harm. Participants require informed consent, transparency, the right to withdraw, and protection of data and wellbeing.

Evaluation may appear to disadvantage some pupils if only part of a group receives an intervention. Ethical reasoning therefore draws on the principle of equipoise: when effectiveness is unknown, testing alternatives can itself be justified, particularly if results inform wider improvement.

Teachers routinely innovate and adapt practice; systematic evaluation simply makes this process explicit and accountable.

Data science and the computing curriculum

Working with data is not only a research skill but also a curriculum entitlement. Across primary and secondary phases, pupils are expected to collect, analyse, evaluate, and present data, model real-world systems, and apply analytic and computational thinking. This positions data science alongside algorithms as a core strand of computing education.

Teaching data science combines mathematical foundations such as probability and statistics with practical application through software and programming, alongside critical reflection on implications and trust. Motivating contexts—animation, games, robotics, art, music, or real-world datasets—support engagement and meaning-making.

Tools range from spreadsheets, which remain central to everyday analysis, to programming environments such as Python with data libraries, enabling deeper exploration and modelling. Learning to compute statistical measures or visualise patterns in code connects mathematical understanding with computational practice.
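A classroom exercise of the kind described might look like the sketch below: computing a mean and standard deviation directly in Python rather than through a spreadsheet function, so that the formula and the code line up. The scores are invented placeholders.

```python
# A minimal sketch: computing mean and standard deviation by hand in Python.
import math

scores = [12, 14, 15, 15, 16, 17, 18, 18, 19, 21]

mean = sum(scores) / len(scores)
# Sample variance: average squared deviation from the mean, using an n - 1 divisor.
variance = sum((s - mean) ** 2 for s in scores) / (len(scores) - 1)
std_dev = math.sqrt(variance)

print(f"Mean: {mean:.2f}  Standard deviation: {std_dev:.2f}")
```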

Evidence, impact, and professional learning

Evidence-informed practice ultimately seeks improved outcomes for pupils and strengthened professional autonomy. Evaluation indicates whether interventions work, saves time by preventing ineffective practice, and guides future action.

Yet certainty remains elusive. Educational contexts are variable, learning is influenced by many factors, and statistical significance does not guarantee meaningful classroom impact. Professional judgement therefore mediates between data, theory, and lived experience.

The productive stance is neither blind faith in numbers nor rejection of measurement, but disciplined curiosity: asking whether change is real, why it occurred, and how confidently it can inform future teaching.

Concluding perspective

Working with assessment data in computing involves more than technical analysis. It requires understanding research traditions, applying statistical reasoning cautiously, respecting ethical responsibilities, and connecting classroom evidence with curriculum purpose.

Data become powerful not when they are collected, but when they are interpreted thoughtfully and used to shape teaching. Through careful evaluation and reflective practice, teachers transform everyday assessment into meaningful professional knowledge—linking evidence, impact, and improved learning for pupils.

Based on the 15th Roehampton Computing Education lecture, Adaptive Teaching, 7 February 2026