The BRAID Fellowships aim to bring together UK-based researchers from across the arts and humanities to drive responsible AI innovation, in partnership with the BBC and the Ada Lovelace Institute, and supported by a series of other stakeholders beyond academia.

What are the BRAID fellowships?

The fellowships form part of the overall engagement strand of BRAID, a national research programme led by the University of Edinburgh in collaboration with the Ada Lovelace Institute and the BBC. The programme is dedicated to integrating the whole range of arts and humanities research more fully into the responsible AI ecosystem, as well as bridging the divides between academic, industry, policy and regulatory work on responsible AI.

Funded by the Arts & Humanities Research Council, BRAID will establish a cohort of researchers working to:

Build new partnerships between academia, industry, policymakers, and wider publics.

Identify and lower barriers to the adoption of responsible AI frameworks and practices.

The fellowships aim to enable and support research contributions on the following themes:

1. AI for Humane Innovation

—integrating within AI research the humanistic perspectives that enable the personal, cultural, and political flourishing of human beings, by weaving historical, philosophical, literary and other humane arts into dialogue with AI communities of research, policy and practice

2. AI for Inspired Innovation

—infusing the AI ecosystem with more vibrant, imaginative and creative visions of responsible AI futures

3. AI for Equitable Innovation

—directing research and policy attention to the need to ensure that broader UK publics, particularly those marginalised within the digital economy, can expect more sustainable and equitable futures from AI development

4. AI for Resilient Innovation

—encouraging research, policy and practice that help ensure AI mitigates growing threats to global and national security, the rule of law, liberty, and social cohesion

The fellowships are open to researchers at any stage of their career, and we encourage new as well as established voices to help co-shape, interrogate and enrich visions of AI that support human flourishing.

To reach early career researchers and underrepresented voices, and to engage with researchers who may not have established contacts beyond academia but have excellent knowledge and ideas, we have asked a series of stakeholders to set challenges for us covering a range of expertise and experience.

Accordingly, we have developed two fellowship models:

1. Applicants identify their own responsible AI challenge and secure the support of their own stakeholder beyond academia to work alongside.

2. Applicants respond to one of a series of responsible AI challenges that various stakeholders beyond academia have developed for BRAID.

“For AI technologies to be successfully integrated into society in ways that promote shared human flourishing, their development has to be guided by more than technical acumen. A responsible AI ecosystem must meld scientific and technical expertise with the humanistic knowledge, creative vision and practical insights needed to guide AI innovation wisely. This programme will work across disciplines, sectors and publics to lay the foundations of that ecosystem.”

Professor Shannon Vallor

“We have reached a critical point within responsible AI development. There now exists a foundation of good practice; however, it is rarely connected to the sites where innovation and change happen, such as industry and policy. We hope that this programme will make new connections, creating an ecosystem where responsibility is not the last, but the first thought in AI innovation.”

Professor Ewa Luger
