AI and CS Teaching

Dec 20, 2017

Miles Berry

Last week, I had the interesting experience of giving evidence at a hearing of our House of Lords Select Committee on Artificial Intelligence. The House of Lords is the (entirely unelected) upper house of the UK’s legislature, so for me, this was quite a big deal.

Their lordships were interested in the applications of AI to education in general, but they seemed much more curious about the opportunities that England’s computing curriculum would provide for our students to learn about AI.

In terms of the uses of AI in schools, we’re already seeing a fair few applications of machine learning and other aspects of AI, and I think these look set to continue in the short to medium term. I certainly don’t see AIs replacing teachers any time soon, but I think there are plenty of aspects of the teacher’s role where some support from smart machines might be quite welcome: in assessment, for example, marking essays or judging the quality, rather than merely the correctness, of a student’s code; in recommending appropriately challenging activities, resources and exercises for students; in carefully monitoring student activity, privacy concerns notwithstanding; and in responding quickly to students’ questions or requests for help.

If teaching can be reduced merely to setting and marking work, then I would fear for the long-term future of the profession: ‘Any teacher that can be replaced by a machine, should be’, as Arthur C Clarke famously put it. My Roehampton students think there’s much more to teaching than this, though: teaching students how to be a person, how to get on with other people, and inspiring them to learn things they’re not already interested in, to give just three examples. I don’t see the machines taking over these responsibilities any time soon.

More interesting are the opportunities to teach students about AI as part of CS education, or the broader school curriculum. The English programmes of study for computing are phrased broadly enough to allow, or perhaps even encourage, students to develop a grasp of how AI, and particularly machine learning, works, in age-appropriate ways from age five to eighteen. CSTA’s new standards allow scope for pupils to learn about machine learning too: between 3rd and 10th grade, students should be able to use data to highlight or propose cause-and-effect relationships and predict outcomes; refine computational models based on the data they have generated; and create computational models that represent the relationships among different elements of data collected.

There are some great tools out there to make this accessible to students, from Google’s Teachable Machine, through Dale Lane’s fabulous, IBM Watson-powered, machinelearningforkids.co.uk, to building machine learning classifiers in Mathematica (easy!) and Python (trickier, but really not out of the question), as well as the fun that can be had building simple chatbots in Scratch or Python, and hacking Google Assistant using the Raspberry Pi Foundation’s AIY kit.
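To give a flavour of the Python route, here’s a minimal sketch of what a first classifier might look like. The post doesn’t prescribe a particular library or dataset, so the choice of scikit-learn, the classic iris dataset and a decision tree here is mine, purely for illustration:

```python
# A minimal machine learning classifier in Python, using scikit-learn.
# Library, dataset and model are illustrative choices, not prescribed.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Load the classic iris dataset: 150 flowers, 4 measurements each.
X, y = load_iris(return_X_y=True)

# Hold back a quarter of the data to test how well the model generalises.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

# Train a shallow decision tree: a model pupils can inspect and reason about.
model = DecisionTreeClassifier(max_depth=3)
model.fit(X_train, y_train)

# Measure accuracy on the unseen test data.
print(f"Accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```

Part of what makes this classroom-feasible is that swapping in a different model, or replacing the iris data with measurements pupils have collected themselves, is only a line or two of change.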

Great as these opportunities are, I am concerned that we’re not doing enough in schools to get students thinking through the ethical implications of AI for individuals, society and civilisation. Worryingly, England’s education ministers removed the wider ethical references from the computing curriculum we’d developed. Machine learning algorithms already make life-changing decisions for many of us, and the impact of these technologies seems likely only to increase over our students’ lifetimes. Education is, at least in part, about preparing our students for the opportunities, experiences and responsibilities of their later lives, and I’m not sure we can do justice to this if we’re not teaching them how AI works, and to think through some of the big questions about how AI should be used.

Originally published on CSTA’s Advocate blog.