Towards embedding responsible AI in the school system: co-creation with young people

An animated collage showing an artist wearing a virtual reality headset and painting at an easel, a chalkboard, and a pointing finger. The blue background flashes different textures. A text box in the middle types out 'Responsible AI'.
Image credit: Dog and Fox Designs
  • Led by Professor Judy Robertson, University of Edinburgh

This project will investigate what generative AI could look like in secondary education. It involves working with young people as stakeholders whose right to be consulted and engaged with on this issue is a key tenet of responsible AI.

Recent advances in Generative Artificial Intelligence (GenAI) have the potential to transform education, from reactive tweaks in assessment practices to fundamental philosophical debates about what we should value in the education of humans in an age of (currently narrow) machine intelligence. Though it is still early, the implications for learning in an age of pervasive GenAI use are significant: issues of accountability, accuracy and inclusion need addressing, and it is essential that young people (YP) have a voice in how AI could and should be used in their education. Responsible AI requires meaningful engagement with stakeholders, including YP, who have the right to be consulted about the systems which affect their lives. The project will bridge the divide between principles of explainability, fairness and privacy as they apply to educational AI, and the values, hopes and concerns of YP when faced with emerging technologies whose implications are not yet fully understood. It will produce recommendations for educational policy and visions for educational practice that are grounded in lively, specific and meaningful engagements with YP as key stakeholders in education.
The three aims of this project are to:
  • Develop a picture of what responsible GenAI could look like within secondary school education.
  • Develop and test imaginative, speculative and participatory methods for generating meaningful insights into YP’s perspectives on emerging AI technologies, testing these methods in two distinct educational contexts and providing a strong methodological foundation for a BRAID demonstrator project focusing on YP and education.
  • Produce recommendations for policymakers, educators and technology developers about what YP consider to be important considerations for including GenAI in school learning and assessment, and how GenAI literacy should be fostered.
To achieve these aims, the project will:
  • Interview academics, key national and local government stakeholders, and educational technology (EdTech) companies to map how AI and data are currently used in the Scottish school system, and to document upcoming plans for change and possible future developments.
  • Create educational materials to develop learners’ GenAI literacy, containing clear and accessible visual summaries of how AI and data are currently used in schools and key emerging ideas about GenAI in educational contexts.
  • Work with groups of YP to understand their ideas about possible, desirable, acceptable future uses of GenAI in education (including barriers and opportunities), using creative, speculative, design-based and story-based methodologies.
  • Disseminate initial recommendations for the responsible use of AI in secondary schools for policymakers, educators and technology developers.
We will engage with a network of educational stakeholders in a way which rebalances the power and interests of current actors. While the power to make decisions about AI and data usage in education currently lies with national and local government, our work will make current practices visible and salient to learners in an accessible way, so that they can express informed preferences about responsible uses of such technology in the future. This will lead to action-guiding recommendations for responsible AI in school education, which can be expanded and enacted in later stages of the BRAID programme.