Assessing Computing

Mar 15, 2017

Miles Berry

When Ofsted reported on the state of ICT education in 2011, they observed that assessment was ‘no better than satisfactory’ in nearly 60% of the schools they’d visited. It’s far from certain that the situation will have improved significantly over the intervening years: with the removal of levels and the move from ICT to computing, it’s possible that fewer teachers now know what to look for, or how to give helpful feedback to their pupils.

The challenge with any assessment is that we can only ever use proxy measures to work out whether learning has happened. We can watch what a pupil does, we can review the work they produce, or we can ask them questions. Despite the prevalence of assessing portfolios of projects against agreed criteria, there's a growing body of opinion that asking pupils questions about their knowledge and understanding is more efficient, more reliable and more valid.

The McIntosh report on ‘Assessment without Levels’ recommended the establishment of a ‘national item bank’ of formative assessment questions. Daisy Christodoulou, director of assessment at ARK and one of the commission’s members, affirms the role of questions in assessing learning: “Instead of having teachers making a judgment about whether a pupil has met each criterion, have pupils answer questions instead.” Similarly, Dylan Wiliam argues for the use of ‘hinge’ questions as a way to check the whole class’s understanding at the crucial hinge points of a lesson, such as after the initial introduction of concepts.

For computing, we’re taking this advice seriously. Computing At School, the subject association for computer science, is working with Cambridge Assessment, Durham’s Centre for Evaluation and Measurement and Diagnostic Questions to crowd-source a bank of multiple-choice assessment questions that teachers can use however they wish to assess their pupils’ progress in computing. We call this Project Quantum. Our focus is on supporting the teaching of computing in three ways: by guiding content, since the national curriculum for computing is admirably brief, which can leave teachers at a loss as to how it should be interpreted; by measuring progress, as the Quantum questions let teachers check what pupils know, and what they don’t, both before and after teaching a topic; and by identifying misconceptions, because as more pupils answer the Quantum questions we get a clearer picture of which bits of computing they struggle with, such as how variables are assigned values, or what makes a good password.
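
To give a flavour of the kind of misconception a diagnostic question can expose, here is a hypothetical example in Python (illustrative only, not drawn from the Quantum bank): many pupils assume that a variable stays ‘linked’ to the variable it was assigned from, rather than holding a copy of its value at the time of assignment.

    # A hypothetical diagnostic-style question on variable assignment,
    # purely to illustrate the sort of misconception Quantum questions
    # aim to surface; it is not taken from the question bank itself.
    #
    # "What does this program print?"

    a = 5
    b = a      # b receives a copy of a's current value
    a = 3      # re-assigning a does not change b

    print(b)   # prints 5 - pupils who think b is 'linked' to a often answer 3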

The project has been running for less than a year, but we’ve already got over 2,500 computing questions online on Diagnostic Questions, covering all three strands (foundations, applications and implications) of the computing curriculum. Whilst we’ve got a good number of questions from the likes of exam boards, Bebras and code.org, many of the questions were written by teachers, initially for use in their own schools, and then generously shared through the project. We’re starting to curate the questions into quizzes and collections, and we’re very happy to have help with this and with question writing. It’s all free, and questions can be exported for use on other platforms too.

Quantum will really come into its own when we’ve got a critical mass of pupils using the questions on a regular basis. At that point we’ll be able to use some clever statistical analysis to work out which questions do a good job of really checking understanding, to spot where the common difficulties in computing lie, and perhaps even to figure out what we can do to address these.
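
To illustrate what that analysis might look like, here is a sketch under assumptions of my own (the project hasn’t settled on any particular method): one classic measure is item discrimination, i.e. how well performance on a single question predicts performance on the rest of a quiz. Questions with low or negative discrimination probably aren’t telling us much about understanding.

    # A toy sketch of item discrimination: for each question, correlate
    # pupils' results on that question with their score on the rest of the
    # quiz. The response data below are invented purely for illustration.

    responses = [
        # one row per pupil: 1 = correct, 0 = incorrect, one column per question
        [1, 1, 1, 0],
        [1, 0, 1, 0],
        [0, 1, 0, 1],
        [1, 1, 1, 1],
        [0, 0, 0, 1],
    ]

    def discrimination(item):
        """Correlation between an item and the rest-of-quiz score."""
        rest = [sum(row) - row[item] for row in responses]
        scores = [row[item] for row in responses]
        n = len(responses)
        mean_rest = sum(rest) / n
        mean_item = sum(scores) / n
        cov = sum((r - mean_rest) * (s - mean_item)
                  for r, s in zip(rest, scores)) / n
        sd_rest = (sum((r - mean_rest) ** 2 for r in rest) / n) ** 0.5
        sd_item = (sum((s - mean_item) ** 2 for s in scores) / n) ** 0.5
        if sd_rest == 0 or sd_item == 0:
            return 0.0
        return cov / (sd_rest * sd_item)

    for q in range(len(responses[0])):
        print(f"Question {q + 1}: discrimination = {discrimination(q):+.2f}")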

Originally published on Independent Schools Portal