The Science of Teaching
Oct 26, 2008
After exploring the artistic side of teaching, I suppose it’s only fair to redress the balance and look at teaching from a scientific perspective. A post from ‘the frustrated teacher’ a few months back explored this distinction and came down strongly in favour of seeing teaching as an art: I love the idea of ‘Double the pay, and see who shows up!’
Coming at teaching from the other perspective does make some sense though, and I think that the move to evidence-based practice in medicine and social policy might yet be followed by a greater respect for research evidence in teaching too. There was a wonderful Caltech commencement talk by the great Richard Feynman on cargo cult science, a term he coined for research which looks like science but lacks the scrupulous integrity essential to the scientific method – he cites educational research as an example. I don’t know that this is necessarily still the case, but the fact that it’s difficult to conduct educational research scientifically doesn’t mean that regard for scientific method, and for basing practice on evidence, isn’t of value.
It’s perhaps worth teasing out a few aspects of the scientific method and exploring how they relate to educational research and teaching.
A model. This is tricky – I don’t think there’s any agreement on a ‘standard model’ of how people learn – the set of conflicting or complementary views here is very large, and whilst I suspect most teachers take a pragmatic stance, the conceptual model we hold of what makes for good learning has a huge impact on our teaching. I wonder how teachers arrive at their own conceptual model: some of it will be down to training, some to reading, some, I’m sure, to conversations with others in school and wider PLNs, and much, I suspect, is based on how they themselves were taught. In a scientific approach to teaching, the model should be continually weighed against the empirical evidence, and thus change over time, although perhaps not quite so rapidly as governments and school leaders might want it to. Present interest in neuroscience as an underpinning of pedagogy provides some hope for a common model, but is itself dependent on an agreed model of how the brain works.
Empirical evidence. We do, of course, spend plenty of time collecting empirical data on the effect our teaching has – this is what assessment is about, after all. Using that data to inform our teaching is increasingly common, and ‘assessment for learning’ and school effectiveness research go some way to making teaching and school leadership more evidence-based professions. Whether we’re collecting the right sort of evidence is, however, less clear – basing our teaching on assessment data only makes sense if that data is an unbiased and accurate measure of learning, rather than of the somewhat narrower criterion of test performance. The assessment data that’s collected is itself determined by the model held of what makes good learning – you have very different assessment regimes if you think education is about facts and functional skills than if you think it is about collaborative creativity. This seems a long way from Popper’s notion of falsifiability: it’s difficult to obtain evidence to falsify a theory if the way you obtain evidence is built on the assumption that the theory is true. The notion of induction is also generally lacking – there doesn’t seem to be much sense in which our models of good practice, at least as defined by government or inspection agencies, are derived from the empirical evidence in any clear way, else why would they change so frequently?
Analysis. Even with the above difficulties concerning assessment evidence, there is, I think, plenty we could do to analyse the data that we do collect, and much progress has been made in this area. It’s now commonplace to use annual assessment as a way of monitoring progress and prompting intervention. Less common, but by no means unheard of, is item-by-item, student-by-student analysis, leading to interventions (iterative model changes?) in how a teacher approaches particular topics or to focussing support on individual students. As more school work goes online, within VLEs and PLEs, the sort of sophisticated data mining that the supermarkets and credit card companies use would be practical for schools, education authorities and central government – costs are, I fear, prohibitive for schools, and the big brother aspect a little disturbing in the case of central government, but the barriers are no longer technological.
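To make the item-by-item idea concrete, here’s a minimal sketch in Python of the sort of analysis a teacher or data manager might run – the file name and column layout are entirely hypothetical, just to illustrate the two directions of slicing:

```python
# A sketch of item-by-item, student-by-student assessment analysis.
# Assumes a hypothetical CSV with one row per student and one column
# per test item, indexed by student id.
import pandas as pd

scores = pd.read_csv("year9_maths_test.csv", index_col="student_id")

# Item by item: which questions did the class find hardest?
# A low mean score may flag a topic worth reteaching.
item_difficulty = scores.mean().sort_values()
print("Weakest items (candidates for reteaching):")
print(item_difficulty.head(5))

# Student by student: whose overall total falls furthest below the rest?
# These pupils might be candidates for focused support.
student_totals = scores.sum(axis=1)
struggling = student_totals[student_totals < student_totals.quantile(0.1)]
print("Students who may need focused support:")
print(struggling)
```

Nothing here is beyond a spreadsheet, of course – the point is that once the data is held electronically, this sort of slicing becomes routine rather than laborious.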
With a sufficiently large data set and statistical techniques such as factor analysis, it becomes possible to adopt a quasi-experimental approach to educational research. Hitherto, testing the effectiveness of a particular approach has relied on anecdotal evidence; on comparisons with historical data, which are of limited validity as testing regimes evolve (become easier?) over time; or on small-scale randomized trials, which raise ethical concerns (why does my child get a laptop, but her friend at the next desk doesn’t?) or encounter huge difficulties in controlling for all the variables. Now, however, statistical and computational advances make it theoretically possible to ‘measure’ the inputs and outputs of a programme, and to identify which of the inputs make the most difference to the outputs, drawing conclusions which can go on to make for better teaching. Muijs and Reynolds (2000) provide an interesting early example of this approach.
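As a toy illustration of the ‘which inputs matter most?’ idea – standing in for the far more careful multilevel modelling a real study would use – here’s a sketch that fits a linear model to synthetic programme data and compares standardised coefficients. Every variable and effect size below is invented:

```python
# A toy quasi-experimental sketch: which (synthetic) inputs carry the
# most weight in predicting an output? All names and effects are made up.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 500  # pupils

# Hypothetical inputs: programme hours, prior attainment, class size.
hours = rng.uniform(0, 40, n)
prior = rng.normal(50, 10, n)
class_size = rng.integers(18, 33, n).astype(float)
X = np.column_stack([hours, prior, class_size])

# Hypothetical output: end-of-year test score, with invented true effects
# plus noise, so we know what the model ought to recover.
y = 0.4 * hours + 0.8 * prior - 0.3 * class_size + rng.normal(0, 5, n)

# Standardise inputs so the fitted coefficients are directly comparable.
model = LinearRegression().fit(StandardScaler().fit_transform(X), y)

for name, coef in zip(["programme hours", "prior attainment", "class size"],
                      model.coef_):
    print(f"{name}: {coef:+.2f}")
```

On real data the hard part is, of course, everything this sketch assumes away: unbiased measures, missing variables, and the nesting of pupils within classes and schools.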
Experiment needn’t be just this sort of large-scale, data-intensive study – the move we see towards action research projects as part of many teachers’ Masters programmes is a hugely encouraging one. Whilst these may not exhibit the standards of objectivity and control that would allow them to count as proper scientific experiments, there is nevertheless a respect for using evidence to inform practice that seems entirely appropriate to a scientific approach to teaching, and that brings teaching closer into line with other professions now adopting a more evidence-based approach.
Peer review, the basis of scientific publication and debate since at least the 17th century, would allow this sort of action research to cross institutional boundaries – there’s undoubtedly a role here for governmental organizations, such as the NCSL, OFSTED and local authorities, and the traditional subject associations have always promoted the sharing of good practice between schools. More interesting is the technologically supported move from the bottom up rather than the top down, with good practice shared across personal learning networks in the blogosphere, the twitterverse, or at teachmeets and unconferences: the mode tends to be appreciative rather than critical, and there’s rarely as much data to support the practice as you’d expect in a scientific journal, but perhaps this will evolve over time…
Hand on heart, I’d admit to favouring a more artistic than scientific approach: both are of value, but, for me at least, teaching has at its core the personal relationship between a teacher and their pupils, and too much of that rests on complex interactions of character and personality for it to be reducible to “If I do this, they’ll learn better” – though the evidence developed through the community of practice has to be something on which we draw to inform and refresh our teaching.
References
Muijs, D. and Reynolds, D. (2000) ‘School Effectiveness and Teacher Effectiveness in Mathematics: Some Preliminary Findings from the Evaluation of the Mathematics Enhancement Programme (Primary)’, School Effectiveness and School Improvement, 11(3), pp. 273–303.