Explainable Generative AI in the BBC

  • Led by Prof Nick Bryan-Kinns, University of the Arts London
  • Partnered with the BBC

This fellowship aims to develop explainable AI approaches for creative practice within the BBC and beyond.

It is difficult, and often impossible, for us to understand how current deep-learning models work due to their enormous complexity and lack of transparency. The aim of this fellowship is to explore how explainable AI (XAI) approaches can be deployed in creative practice within the BBC and the Creative Industries more widely to address Responsible AI (RAI) concerns of transparency, intelligibility, and accountability. Specifically, the fellowship will identify challenges and opportunities for: making generative AI models in the BBC more understandable and controllable; explaining the bias inherent in AI models; explaining attribution in AI models and generative output; and explaining the environmental impact of generative models in creative practice.

Qualitative research will be undertaken with creative practitioners within the BBC to map out the potential of, and barriers to, more explainable AI in creative practice; to develop a set of case studies of how generative AI systems could be made more explainable; to work with BBC technology teams to develop proof-of-concept XAI demos for new media experience production; and to reflect on how such XAI systems might change creative practice for the better. These activities will inform guidelines for the design and use of XAI approaches in creative practice, with an emphasis on using AI for inspired and equitable innovation. In short, by finding ways to offer more explainable AI, we seek to empower creative practitioners and amplify their creative abilities through more informed use of AI.

From an RAI perspective, XAI offers opportunities for “good” interaction with AI systems based on a more informed understanding of how the AI works and of its impact on our world. The fellowship is primarily about AI for Inspired Innovation: infusing the broader RAI ecosystem with research on the use and potential of XAI in creative practice and the Creative Industries more broadly. The fellowship also contributes to the theme of AI for Equitable Innovation by exploring the challenges and opportunities of using XAI to explain the bias inherent in AI models, and by exploring explanations of the environmental impact of generative models to support more sustainable use of AI. The fellowship will broaden the RAI agenda and research evidence base by generating case studies of XAI in the BBC and by examining XAI in creative practice, which has been underexplored to date. The novel outputs from this fellowship will benefit the BBC directly by helping to inform more responsible and explainable AI development and deployment within the BBC, will create impact beyond the BBC and this programme by offering real-world examples of the challenges for RAI in creative practice, and will contribute to discourse around RAI through shared real-world case studies of the potential of XAI in the Creative Industries.