We aim to build new connections between people across the AI ecosystem in order to establish a more complete, robust, and impactful field of responsible AI research—one that draws on the full range of arts and humanities knowledge. The challenge-led pathway is one way for us to work towards this goal.
Key Dates
1. Expressions of Interest are now due by 20 November 2023 for non-academic stakeholder challenges. Expressions of interest for programme partner challenges are now closed.
2. Applications close on 4 December 2023 at 10 am GMT.
Building bridges
We understand that for those who are earlier in their careers, have taken career breaks, or are not already connected to large-scale projects, finding external stakeholders to work with can be difficult. We want to give all eligible researchers the opportunity to contribute their knowledge and ideas to the responsible AI ecosystem.
Our challenge-led pathway is designed to support the researcher community. By working with stakeholders on your behalf, we have developed a set of key responsible AI challenges for you to respond to. Whilst the research questions are set, we welcome the broadest range of disciplines and invite applicants to bring their own responses, disciplinary lenses and methodologies to them.
The Expressions of Interest process is especially helpful in preparing successful applications: it enables us to put you in touch directly with the relevant stakeholders on key matters concerning the collaboration, including data management and working arrangements.
We recognise that this is a slightly different approach to a traditional fellowship and so do feel free to reach out to us for advice and guidance.
The Ada Lovelace Institute challenges are slightly different again: they offer the chance to work in a policy-responsive think tank, collaborating on a series of research outputs that respond to regulatory developments, with these shared outputs delivered alongside your own research project.
Programme Partners

The Responsible Innovation team at BBC Research & Development has developed a series of challenges for BRAID concerning the ways in which AI is transforming the media and content creation sector.

The Ada Lovelace Institute (Ada) is an independent research institute with a mission to make data and AI work for people and society.
Other Non-Academic Stakeholders

The Digital Ethics team at the Inter IKEA Group have developed two challenges for BRAID that explore how large international companies can engage with responsible AI practices and frameworks to support an approach based on culture and values.
Microsoft Research (MSR) has developed a series of challenges for BRAID that aim to embed Responsible AI research from the arts and humanities into current projects that span a variety of disciplines.

Scotland’s Futures Forum (SFF) is the Scottish Parliament’s think-tank and has developed a challenge for BRAID that centres on the role that Members of the Scottish Parliament (MSPs) can play in supporting the development and use of Responsible AI in Scotland.

As AI is integrated into the legal industry at scale, the Law Society of Scotland seeks to support and promote the responsible development and use of these technologies within the profession. Their challenge for BRAID calls for researchers to explore and innovate at the intersection of law, technology and society.

Teams at the NHS Digital Academy and the NHS Workforce, Training and Education Directorate have developed a challenge for BRAID concerning cognitive biases when clinicians use AI to support their decisions.

National Galleries Scotland (NGS) has developed a challenge for BRAID that asks how we might approach and develop responsible uses of AI to enhance connections between audiences and collections.

The Kent Surrey Sussex Academic Health Science Network (KSS AHSN) is one of 15 Academic Health Science Networks across England, established by NHS England in 2013 to improve health and generate economic growth by spreading innovation at pace and scale. They have developed for BRAID a research challenge that focuses on gender equity and women’s health.

Diverse AI has developed a challenge for BRAID that takes on the task of re-working existing AI technology to better serve diverse communities of users. The challenge explores the potential for implementable processes and methods that offer genuine participatory engagement, and that demonstrate a possible way to re-imagine, re-create and re-tool existing deployed AI technology to become democratic, user-centred, participatory and inclusive.