Ethics in the computing curriculum
Sep 18, 2018
When we talk about online safety with our trainee teachers at Roehampton, I start by asking my students what sort of qualities they’d like to see in their pupils. We get some great answers, covering things like kindness, courage, self-confidence, curiosity, courtesy, integrity, fairness and diligence. It’s hard to argue against any of these, but it’s far from clear how we might go about developing these qualities through the taught curriculum in general, and computing lessons in particular. Nevertheless, I’m convinced that if we can get character education right, then so much of what worries us about online safety gets addressed along the way: if young people are honest, they won’t lie about their age to get social media accounts; if young people are kind, they won’t bully one another online; if young people have courage, they’re perhaps less vulnerable to online grooming.
The English computing curriculum places a lot of emphasis on personal morality (e.g. “pupils should be taught to … use technology safely, respectfully and responsibly”), but has relatively little to say about the broader sphere of ethical issues around digital technology. This wasn’t the intention of the drafting group, led by the BCS and the Royal Academy of Engineering, which included as an aim for computing education that pupils would:
“Develop awareness of the individual and societal opportunities, challenges and risks raised by digital technology, and know how to maximise opportunities and manage risks appropriately.”
At the time, ministers decided that we didn’t need the ethics bits of the draft programmes of study, and that pupils would be better prepared for the opportunities, roles and responsibilities of life through learning about binary arithmetic and Boolean logic. Four years on, the House of Lords AI select committee now recommends “that the ethical design and use of technology becomes an integral part of the curriculum”. Quite.
Thankfully, the US K-12 CS framework and its implementation in the CSTA’s K-12 CS standards avoided this sort of short-sighted political interference: fostering an inclusive computing culture is one of the underpinning practices in the former, and the latter includes 22 standards specifically addressing the wider impact of computing.
The US psychologist Lawrence Kohlberg mapped out stages of children’s moral development, seeing progression from an orientation towards obedience, avoidance of punishment and self-interest, through conformity to authority and social contracts, to one based on universal ethical principles. If we’re to take children’s moral development seriously, then perhaps it’s worth stepping beyond safety, responsibility and legality to consider broader ethical principles and practices. Without this broader focus in computing education, it’s questionable whether we’ll have properly prepared our pupils for a world in which technology seems likely to play an even more dominant role than it does today.
There are many ethical issues around digital technology that teachers and pupils might explore together in the computing classroom. Here are just three that I think could make good starting points for pupils’ independent research and a reasoned debate between those willing to take different perspectives:
Reliance on technology: have we as a society in general, or young people in particular, become too reliant on digital technology? In what ways are our lives better as a result? In what ways have they got worse? Have we consciously chosen to allow technology into our lives in this way, or have we been cynically manipulated by big businesses motivated by profit? Are social media or gaming harmful addictions?
Surveillance: is there a right to privacy in the digital age? How much personal information is it appropriate to share with those outside our circles of trust? How much information does your school, internet service provider or government have about you? Under what circumstances is it right for schools, service providers and governments to monitor use of the internet? Is it ever right for individuals to circumvent this monitoring? How should these decisions be made?
Rules for AI: as machine learning comes to affect more aspects of our lives, what ethical safeguards, if any, should society build in? If such safeguards slow the pace of innovation, is that a price worth paying? How can bias or prejudice in algorithms and training data be reduced or eliminated (see the sketch after this list)? The GDPR demands that humans be kept ‘in the loop’ for decisions that have a significant effect on human beings: is it right to do so? How should an AI make ethical decisions? What rules and principles should an AI be programmed to apply? What rights would a conscious AI have?
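To make the bias question concrete, pupils could experiment with a toy model. The sketch below is a minimal illustration in Python, using entirely invented numbers rather than real data, of how a naive rule ‘learned’ from biased historical decisions simply reproduces that bias:

```python
# A toy illustration, with invented numbers, of how bias in historical
# data can be reproduced by a naive "learned" decision rule.

# Historical hiring outcomes: (group, hired?)
history = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", False), ("B", False), ("B", True), ("B", False),
]

def hire_rate(group):
    """Fraction of past applicants from this group who were hired."""
    outcomes = [hired for g, hired in history if g == group]
    return sum(outcomes) / len(outcomes)

def screen(group):
    """A naive model: screen in applicants whose group was usually hired."""
    return hire_rate(group) > 0.5

for group in ("A", "B"):
    print(f"Group {group}: historical hire rate {hire_rate(group):.0%}, "
          f"passes automated screen: {screen(group)}")
```

Run as written, this prints a 75% historical hire rate for group A and 25% for group B, so the ‘model’ waves through every group A applicant and rejects every group B applicant: a disparity in the data becomes a hard rule. Pupils might then debate what the rule should have looked at instead, which connects directly to the GDPR’s concern with automated decision-making.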
Beyond the specific details of these topics, there’s a case for providing young people with a framework for thinking ethical problems through for themselves. Both the BCS and the ACM have codes of ethics for those working in computing, and an analysis of these might provide some insight into the underpinning principles: acting for the benefit of society, avoiding harm, equality, honesty and respect for the law. In the particular fields of AI and robotics, there’s important work being done by both the European Commission and the European Parliament which would be well worth exploring with students, although there are those who worry that regulation will stifle innovation.
The ethical implications of big data and AI are already huge; as a society, I think we’ve a responsibility to think these through together and to establish the frameworks that govern them: the GDPR is a serious attempt to do this for big data. Beyond this, I think we’ve also a responsibility to help the next generation wrestle with the as-yet-unimagined issues they’ll face together: teaching ethics in the computing curriculum is one way of ensuring that we do.
Originally published in Hello World 6