I have incorporated video essays into a number of the modules I have convened at Roehampton, in varying ways. I reflect each year on whether this form of assessment remains relevant and appropriate, and, whilst the video essay has been retained as one of our principal media for assessment, its particular form has evolved over time. For this case study, I focus on the video essay assignment for our whole-cohort Year 3 undergraduate module, Computing and the Foundation Subjects, which I convene.
Throughout my time teaching in higher education, I have been deeply influenced by Seymour Papert’s notion of constructionist learning (Papert and Harel 1991), which builds on Piagetian constructivism but adds to this the key insight that learning happens ‘especially felicitously’ when the learner is engaged in creating a ‘knowledge artefact’. Thus my conception of learning in higher education is that, whilst experience and reflection play a part, as does discussion between students and lecturer and amongst students, it is often through the creation of an artefact that much of the learning on a module takes place. Reading, and attending and participating in lectures, are important, and certainly facilitate learning, but it is when students have to take their internalised understanding and externalise it through the creation of an assessable artefact that particularly meaningful learning takes place.
Recognising andragogic principles for teaching adults (Knowles 1970), I aim to draw on my students’ existing experience as a starting point for module design and for my teaching. The first lecture we give for computing includes a self-reported audit of students’ existing computing skills and knowledge; responses to this inform the iterative design and content of our modules. Students have, in general, self-assessed as having limited video editing skills: developing these skills, through taught content, practical work and assessable outcomes, has been a priority, as these are skills which beginning teachers of computing should have but which our students do not generally possess on entry.
Whilst almost all students report familiarity with creating PowerPoint or other presentations, and have their own views as to what makes an effective presentation, few have read the literature around effective use of multimedia (e.g. Mayer 2009), and so a video essay of this form provides constructionist learning on these principles. Without explicit reference to these, students’ presentations are often characterised by complex graphical layouts, distracting animations and extensive use of on-slide text duplicated in their narration. Mayer’s work was based on empirical studies of retention in multimedia learning. By having students construct their own multimedia presentations, whether given to a group of their peers or recorded as video, these modules provide ample opportunity for students to put Mayer’s principles into practice.
In the initial presentation of the third-year course, we had offered students a choice of eight possible titles, each linked to one of the module learning outcomes and each addressed through its own lecture. Feedback from students indicated this approach was unnecessarily complex, and that students choosing titles discussed later in the course felt at a disadvantage compared to those opting for topics discussed earlier. Analysis of responses showed that some titles were much more popular than others, and also that some titles typically attracted higher mean marks. I worried initially that my colleagues and I were downgrading submissions on some titles, or that the titles themselves meant the upper grade levels were accessible only if particular titles were chosen; our later review of marking, however, suggested that lower-scoring candidates’ choices clustered around particular options, viewed by these students as easier because they drew on more familiar subject knowledge.
Subsequently, I reduced the number of choices to four, although a similar pattern recurred. In the most recent iteration of this module, we removed any choice over title (despite Knowles’ andragogic principles). This has provided for a more inclusive approach to teaching on the module, and some evidence of students’ greater willingness both to engage with conceptually difficult, previously unfamiliar material with a high cognitive load, and to work collaboratively with one another: suggesting useful further readings, identifying approaches to the assignment and discussing their conceptual grasp of the topic outside of lectures, without crossing the line into plagiarism or collusion.
As computing education tutors, my team and I see our role as, in part, facilitating the development of students’ own digital technology skills, and reflection on the processes through which they do this. We have been reluctant to specify particular technologies or approaches for students to use. As our initial skills audit demonstrates, any cohort possesses a broad range of competence and confidence with any particular technology, and thus we have striven to ensure that all students can undertake technology-based tasks. We demonstrate particular tools and approaches, providing opportunities in lectures for students to practise these on small directed tasks, whilst also supporting students in exploring other tools and approaches themselves. We also show how these tools can be used in the primary classroom. In the most recent presentation of the module, we introduced Adobe Spark Video as a simple, effective online tool for creating visually impressive slidecast presentations: the affordances of this program are closely matched to the principles of Mayer’s multimedia learning theory, supporting a close fit between theory and practice here. We look too at screen recording, stop-motion animation, scripted animation, live video recording and video editing. My colleagues and I have provided further support to students for technical aspects of the assignment through lunchtime workshops and individual or small-group tutorials.
Module evaluations indicate that students have particularly appreciated access to examples of prior work. In the most recent iteration of the module, these have been chosen from the upper range of overall grades but across a range of different approaches to the assignment, typically linking examples shown to the techniques demonstrated and practised in the sessions. Students are, understandably, interested to know what grade particular examples received, but I have thus far refused to give out this information: partly out of respect for the students in previous cohorts whose work was shown, but also to help support the development of students’ assessment literacy (Price 2012), by working through as a group how they would grade particular submissions using the rubric assessment criteria, or considering what feedback they would have provided to a student on their submission.
The marking criteria are in line with the generic criteria applied programme-wide for HE6-level work: in our assessment rubric, we provide the generic criteria first, and then add further explanatory text to indicate how each applies in the context of the video essay. Four of the criteria (knowledge and understanding; academic skills; evaluation and interpretation of data; and cognitive/intellectual skills) are linked explicitly to the content of the video essay; the other two (discussion of literature, data and results; and communication) are associated with the video itself, addressing both creative and technical aspects of the multimedia content. Interestingly, some students took exception to the inclusion of creativity in the assessment criteria, believing that this could only be assessed subjectively by tutors, whereas in fact the marking team would give credit here to students who had created their own images and diagrams (rather than sourcing image content from the web), or who had taken a novel approach to the video as a whole.
I have analysed scores from students on the module, comparing these with their scores on the initial skills audit for those students who provided their name on the latter. I was surprised to find that scores on the assignment were broadly consistent across all five levels of video editing skill in the initial audit. I interpret this as reflecting that the skill development that took place in the module was sufficient to ensure none were overly disadvantaged by initially weak video skills, and that the marking criteria provided the right balance between content, creativity and technical aspects. The initial audit also includes a question on students’ preferred approach to learning new technology skills (cf. Kolb 1984): students who expressed a preference for learning through exploration or experimentation did score a little more highly than those who preferred to work directly with a supportive peer, although the difference was not statistically significant.
For the last three years of the module’s delivery, in consultation with the tutors working on the module, I decided that we should require students to submit their script in addition to the video itself. In part, this was out of a desire to boost the academic rigour of the assignment, ensuring that all students took time to plan and prepare the argument they presented in the video. Access to the scripts helped tremendously with the assessment process too. By reading the script first, we were able to ensure that the four content criteria were assessed on the basis of the script, without our grading of these being swayed by particularly good or weak video presentations. It also ensured we could give detailed feedback on the content of the video essay, using Turnitin comment and QuickMark tools to give feedback directly on strong or weak points in the script, where previously this had to be done at some remove by referencing timecodes from the video. Turnitin’s similarity checking tools could also be used to indicate where possible plagiarism required further investigation.
With a teaching team of two or three tutors for this large module, as module convenor I was keen to ensure consistency in assessment. With new tutors working on the module, as soon as the hand-in deadline had passed, I led a brief pre-moderation exercise, in which the marking team independently marked each submission from a small sample set, then met to go through how they had assessed these and the grades awarded against each of the criteria; we then discussed and agreed grades against each criterion for each submission in the sample set, establishing a common benchmark and a shared interpretation of the marking criteria. I provided a detailed set of instructions for the practical process of marking submissions, using a shared spreadsheet to track progress as we worked through our own allocated submissions. I encouraged the team to ask sooner rather than later if they would value a second opinion on any submission. I believe the ongoing dialogue across the marking team helped to promote greater consistency in assessing students’ work. As we came to the end of the marking period, we second-marked a further sample of scripts, discussing any differences of opinion over grading, although thanks to this process these were typically very small. Rubric-based assessment against clear and distinct criteria, together with ongoing dialogue during the marking period, seems to help establish consistency across the marking team.
Reflecting over the years that the module has been running, I can see how the standard of submissions has improved. In the module’s first year, the majority of submissions were narrated slidecasts, with relatively poor slide design (judged against Mayer’s multimedia principles), in many cases taking a relatively superficial stance on less demanding titles. In the most recent iteration, students successfully tackled a complex topic on which they had limited prior knowledge, and adopted more sophisticated approaches to the video, using stop-motion animation with objects or drawing illustrations in the style of RSA Animate videos (e.g. Robinson 2010). Whilst some of this improvement is likely to be due to students’ own improving video editing skills prior to the course (although initial audit scores for this measure have remained broadly consistent), much seems likely to result from the gradual, iterative changes I have made to teaching and assessment on the module: many through my colleagues, our external examiner and I reflecting on what has gone well, others directly as a result of feedback from students, and still others through my growing understanding of assessment practice and of the use of video in assessment elsewhere in higher education.
Student evaluations of this form of assessment are often positive, for example:
- The assessment is clear and engaging!
- I have really enjoyed completing the video presentation.
- The video presentation is a form of assessment that is different.
Others have been more critical, suggesting we could improve by:
- Talking more about how to do the video in class.
- [Providing] more information on marking criteria - especially in terms of the video we have to produce
- [Giving] more examples of how to create videos for part of our assignment
- [Making] clear what content from the lecture we should include in the video
I have addressed the first three of these criticisms directly over the course of the module’s evolution. I would continue to resist the last, regarding the selection and synthesis of material as an essential skill for students at HE6 level, although, of course, my colleagues and I are always on hand to support our students through this process.
With changes to our undergraduate primary education programme, whole cohort input on computing has moved from Year 3 to Year 1, as part of a combined module with art and design, design and technology and music. I have convinced my colleagues that the video essay was worth retaining as the medium for assessment in this new module, and the computing education team continue to provide teaching and support on technical skills and approaches here, albeit with a reduction in the time available for us to do so. Following student feedback on the initial presentation of this new module, the video essay is now to be attempted as a collaborative task by groups of four students. I look forward to seeing how this approach impacts on students’ learning in the assignment, as we move from a constructionist to a social constructionist mode for this.
Learning from experience elsewhere, and sharing my own experience of video for assessment, has been important to me. It was interesting to compare my approach to that in the University’s French programme (see Betts 2014), and I have been privileged to share our own work at the University’s e-learning meet in 2016, as part of a presentation on fresh approaches to assessment at BETT 2016, in a panel on ‘How far can traditional methods of assessment suit modern pedagogy?’, and at the HE Leaders Summit, BETT 2017. This latter was attended by a group of teacher educators from HVL University, Bergen, Norway, who contacted me afterwards about how they might integrate this approach to assessment into their programmes: I was delighted to welcome them to Roehampton in January 2018, convening a short programme of presentations about technology in their and our teacher education programmes, and to visit their university with a colleague the following April to explore further possibilities for collaboration.
Knowles, Malcolm S. 1970. “What Is Andragogy?” In The Modern Practice of Adult Education: Pedagogy to Andragogy, 40–59.
Kolb, David A. 1984. Experiential Learning: Experience as the Source of Learning and Development. Prentice-Hall.
Mayer, Richard E. 2009. Multimedia Learning. Cambridge University Press.
Papert, Seymour, and Idit Harel. 1991. “Situating Constructionism.” In Constructionism, 1–11. Ablex.
Price, Margaret. 2012. Assessment Literacy: The Foundation for Improving Student Learning. ASKe, Oxford Centre for Staff and Learning Development.
Robinson, Ken. 2010. “RSA Animate: Changing Education Paradigms.” RSA.