My colleagues Mike Richards and Marian Petre and I recently had a book chapter published, AI in Education: Balancing Innovation and Ethics, in AI-Powered Pedagogy and Curriculum Design: Practical Insights for Educators, a book edited by Geoff Baker, Professor of Education, and Lucy Caton, Senior Lecturer in Education, at the University of Greater Manchester.
The chapter is a fairly standard academic treatise with a core set of ten guidelines for the ethical use of AI in education, though my starting point for the piece was something of a light polemic which I had not originally intended to share more widely.
In the wake of President Trump's recent state visit to the UK, however, when he parked Silicon Valley's tanks on Downing Street's lawn and got a fawning, acquiescent, servile response from the prime minister, I've reconsidered.
I'm not going to get into how the government's obsession with growth is misplaced; Professor Richard Murphy does that far better than I ever could. But Mr Starmer's deference to Mr Trump and his preferred Silicon Valley corporations and billionaires has significant implications for UK sovereignty, the economy, public services generally and education in particular. So without further ado...
AI in education - some critical concerns on balancing innovation & ethics
Vast quantities of data scraped from the internet, often faulty, fallible, incomplete, collected without authorisation or consent, dirty, biased and otherwise problematic on multiple dimensions, get pumped into AI systems to “train” them. These systems are then used to make claims about the world, or decisions affecting real lives, that are often neither credible nor verifiable.
Large language models (LLMs), like ChatGPT, make probabilistic predictions about what the next word in a sentence should be, in order to spit out apparently plausible text. But the model, though it is falsely assumed to be intelligent, neutral and objective, is just making stuff up, however plausible or useful the outputs appear to be. Professor Emily M. Bender of the University of Washington describes these LLMs as synthetic text extrusion machines.
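To make that mechanism concrete, here is a deliberately tiny, hypothetical sketch of my own (not from the chapter, and nothing like the scale of a real LLM): a bigram model in Python that “learns” only which words tend to follow which in a small made-up corpus, then samples plausible-sounding continuations. Swap the toy corpus for terabytes of scraped text and the counting for a neural network with billions of parameters, and you have the essence of the text extrusion machine: statistically plausible word sequences, with no grounding in truth.

```python
import random
from collections import defaultdict, Counter

# A toy "language model": count which word follows which in a tiny corpus,
# then repeatedly sample a plausible next word from those counts.
corpus = (
    "the model predicts the next word the model produces plausible text "
    "the text sounds confident the text is not grounded in facts"
).split()

# Bigram frequencies: how often each word follows each other word.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def generate(seed_word: str, length: int = 10) -> str:
    """Extrude text by sampling each next word in proportion to how often
    it followed the current word in the training corpus."""
    words = [seed_word]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break  # dead end: this word never appeared mid-corpus
        choices, weights = zip(*candidates.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
```

Run it a few times and it will cheerfully produce fluent-looking fragments that were never said and mean nothing; the model has no notion of whether its output is true, only of what tends to follow what.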
These technologies have been and are being integrated into core social, political and economic systems and institutions. There has been a rapid expansion in police rollouts of face recognition technology in the UK. Algorithms are being used for predictive policing, to decide whether the suspect in a crime should be released on bail, and to determine whether someone’s digital profile is sufficiently terrorist-like that they should have a bomb dropped on their home. They are being used to select who should have access to housing and employment.
They are determining which schools kids get to go to and, in schools, children’s access routes to library books or lunch. They are deployed for automated grading of assessments and to decide whether students are attentive enough in class or whether they are cheating on exams. They are targeting people with disabilities for social welfare case reviews and wrongly accusing hundreds of thousands of people of fraud. They are becoming core to border infrastructure, deciding whether refugees should be denied passage or let through.
In short, they are being used for classification, rationing the allocation of limited resources and, all too often, inflicting harm.
They have been proven to be racist and discriminatory, disproportionately inflicting harms on historically marginalised groups.
If we are going to continue to build these tools into the infrastructure of education systems, taking the baseline assumption that education is a public good, we have to be asking, right up front, what are the protective or precautionary guardrails?
What are the checks against the harms that inappropriate or poor deployment of these technologies can cause?
What are the safeguards protecting those these tools are used on from the assumptions of the techbros who profit from them?
What are the protections against systemic and institutional racism and discrimination being hardwired into these systems?
What are the restraints on the harms perpetrated against students and staff by educational institutions’ systemic chasing of targets and metrics, e.g. claims that student analytics are the key to student engagement, success and wellbeing? What are the guardrails, in education, when we deploy surveillance-intensive personalised tuition robo-tutor chatbots, automated assessment grading, exam proctoring software, plagiarism detection, and personality, intelligence and attention deficit testing?
On the subject of systemic discrimination, how do we ensure we are not solidifying historic patterns of inequity into these unquestionable, unchallengeable AI black boxes: biases learned from the vast quantities of data, often faulty, fallible, incomplete, collected without authorisation or consent, dirty and otherwise problematic on multiple dimensions, on which we are training them?
Specifically, we need to be asking:
How is the AI being developed?
By whom?
With what purpose?
How does it work and what data is being used to “train” it?
How, from what sources and with what permissions is that data acquired, and how is it monitored and reviewed, e.g. for biases?
Where is that AI being used?
In what context?
How reliable, accurate, secure and safe is it?
Who benefits?
Who is harmed?
What other problems or opportunities are being generated?
Are the fundamental rights of students and workers being affected? Which & whose rights and how?
How is the AI being tested and monitored in operation?
How is it being integrated into not just education but our society, our environment, our politics, our economies, our laws, our technologies?
What are the effects of this?
Who is accountable for those effects?
How do we mitigate harms and, when and where necessary, improve, evolve or decommission the AI, and hold to account the economic, political and social actors benefiting from those harms?
Many of these questions are sidelined in public debate and in institutional and political decision making. Post-hoc, superficial approaches to managing AI systems are proffered or preferred. Institutions can put someone in charge of AI ethics and/or hire an ethics consultant to run a periodic training seminar for staff, telling them that everyone is responsible for behaving ethically: box ticked, corporate AI ethics responsibility satisfied. Building ethical considerations in from inception is really hard. Ticking a box on a form, having an AI ethics policy which gathers dust on a shelf or in a digital archive, and putting someone in charge of AI ethics is easy.
How do industry or researchers clean up data, whether for predictive policing or student analytics? Researchers at the AI Now Institute are seriously sceptical about the capacity of simplistic technical fixes to address this. Importing historically biased data to design your system is the problem, and that data cannot be cleaned with any degree of efficacy. If, as is the case with most of these systems, it is proprietary data processed by proprietary algorithms, no independent verification of the claimed data cleansing can be performed. All outsiders can see is a black box and ‘trust us, we’re the experts’ public relations from the vendor.
The fundamental issue is that you cannot deploy structurally, systemically and institutionally racist and otherwise discriminatory systems and then engage in post hoc mitigation of harms with any degree of efficacy. By the time AI is embedded in infrastructure it is too late.
As systems theorists and practitioners have long understood, one of the most critical requirements for implementing complex systems with any degree of success is to involve affected communities in the decision making about those systems, such as the adoption of AI as core educational infrastructure. The people who are obliged to use these systems, and those on whom they are used, will be most cognisant of the potential harms. They will also be best placed to understand the practical issues around deployment and whether the systems should be used at all. In the education context, students and staff are the key stakeholders who need to be involved.
There are a host of other questions to consider.
How do we think about the ways we represent the world and students’ needs in an educational context? What are the institutional and broader politics and ethics of AI in education? What are digital student analytics, exam proctoring, automated assessment and robot tutors proxies for? Are they legitimate, useful, adequate or harmful proxies, and for which stakeholders? AI systems are political, not neutral or objective. They are built by people, disproportionately techbros, so it matters who these people are, what their worldview is and what problems they believe they are attempting to solve. They are not education specialists, but they are selling ‘solutions’ to educational challenges. ‘Solutions’ that education leaders grab enthusiastically as they vie to be seen as innovative, forward-thinking digital futurists. Not to mention that AI ‘solutions’ promise cost and efficiency savings.
When AI touches complex social, political and economic areas such as education, it is critical that we understand the technology, its deployment and its effects, broadly and deeply, because we are using it to affect people’s lives. Students must be encouraged and taught to be sceptical of these systems, or at least to analyse them critically, and we need vastly more research to test AI systems independently.
The integration of AI into educational infrastructure must be treated with caution and there may well be an argument for the application of a precautionary principle when considering it. Mechanisms for key stakeholders, in particular students and staff, to participate in the decision-making processes around deployment and the facilitation of time, resources, space and agency to understand, assess, and reject such systems are crucial.
Just one final general thought to conclude, which may well make all of the above moot:
Are ethics in AI, or efficacious ethical guidelines or controls, even remotely conceivable for a technology built through the mass exploitation and abuse of labour, including child labour; egregious, colonial, exploitative rare metal and other resource-intensive mining and manufacturing processes; mass copyright infringement; industrial-scale privacy invasion; and the concentrated control of power and capital enveloping it, a technology whose use consumes hideous amounts of energy and water?