Responsible AI in International Public Service Media

  • Led by Dr Kate Wright, University of Edinburgh
  • Partnered with Public Media Alliance

This fellowship investigates opportunities for responsible AI within international public service media. The goal is to explore how AI can help build trustworthy news delivery.

Trustworthy news is crucial to democracy and challenges the spread of misinformation and disinformation. High-quality international news can also facilitate cross-cultural dialogue and educate people about the risks that they, and others, face in an interconnected world. At a time of growing authoritarianism, global pandemics, complex conflicts and climate change, it has never been needed more.

In addition, international news plays a vital role in the UK's creative economy, which the government wants to grow. London is a global hub for international news organisations: attracting inward investment, fostering innovation, and drawing skilled workers from around the world. The new draft Media Bill also proposes changes that would enable UK public service media to expand their audiences at home and abroad via prominent positioning on smart TVs and similar devices. AI has the potential to help news organisations grow in a sustainable way by reducing the notoriously high costs of producing multimedia, multiplatform, and often multilingual, international coverage. But surprisingly, no one has yet researched how AI is used within international news production, let alone what it might mean to use it responsibly.

As a former BBC journalist, I co-designed this project with the Public Media Alliance, through which I have access to a global network of international public service media (IPSM) organisations. Using this unique access, and embedded in the University of Edinburgh's leading AI community, I will make two key contributions: first, a systematic map of why and how AI is being integrated into international news production; and second, an exploration of how news staff understand "responsibility", including how they address dilemmas between conflicting obligations to multiple publics.

Specifically, I will conduct a technical audit of the different AI tools used by IPSM, their capabilities, the data they work with, and the roles they play. This will be supplemented by analysis of internal documentation and semi-structured interviews with senior executives. I will also interrogate journalists' values-in-action through contrasting case studies of AI-enabled international news production across different organisations, languages, and countries.

IPSM provide an ideal testbed for research into the responsible use of AI in international news because these major networks are continually adopting and developing AI tools to support their demanding work. They are best known for providing independent news to audiences abroad, including elite policymakers, marginalised and displaced groups, and those with little or no free media. But they also disseminate content via national public broadcasters, online platforms, and social media. For example, the BBC World Service provides international coverage to UK news programmes and to the UK version of BBC News Online.

This Fellowship will help the UK become a global AI leader by informing the efforts of the Public Media Alliance to create shared standards and best practices for IPSM, the largest of which are known collectively as the Director General 8 (DG8).
The project will also shape broader industry and regulatory discussions within the UK and overseas, and foster future collaborations that benefit the country's AI economy.