What are the BRAID fellowships?
The fellowships form part of the overall engagement strand of BRAID, a national research programme led by the University of Edinburgh in collaboration with the Ada Lovelace Institute and the BBC. The programme is dedicated to integrating the whole range of arts and humanities research more fully into the responsible AI ecosystem, as well as bridging the divides between academic, industry, policy and regulatory work on responsible AI.
The fellowships aim to enable and support research contributions on the following themes:
The fellowships are open to researchers at any stage of their career, and we encourage new as well as established voices to help co-shape, interrogate and enrich visions of AI that support human flourishing.
To reach early career researchers and underrepresented voices, and to engage researchers who may not have established contacts beyond academia but have excellent knowledge and ideas, we have invited a series of stakeholders to set challenges for us covering a range of expertise and experience.
There are two models for applying:
| Milestone | Date |
| --- | --- |
| Call opens | Friday, 29 September 2023 |
| Webinar | Wednesday, 4 October 2023 at 2pm |
| Info Evening | Friday, 13 October 2023 at 3pm |
| Expressions of Interest close | Monday, 20 November 2023 |
| Applications close | Monday, 4 December 2023 at 10am GMT |
| Review Panel | Early February 2024 |
| Decisions communicated | Mid February 2024 |
| Fellowship start date | 1 May or 1 November 2024 |
| Fellowship end date | 30 November 2025 |
Events and further information
In this section, applicants can learn more about the application process and programme aims. Details of our events are listed below, and our further information page provides additional context for the BRAID programme, the fellowships and the funding call.
Please contact email@example.com with any queries, or if we can help with any access needs.
“For AI technologies to be successfully integrated into society in ways that promote shared human flourishing, their development has to be guided by more than technical acumen. A responsible AI ecosystem must meld scientific and technical expertise with the humanistic knowledge, creative vision and practical insights needed to guide AI innovation wisely. This programme will work across disciplines, sectors and publics to lay the foundations of that ecosystem.”
Professor Shannon Vallor
“We have reached a critical point within responsible AI development. There now exists a foundation of good practice, however it is rarely connected to the sites where innovation and change happen, such as industry and policy. We hope that this programme will make new connections, creating an ecosystem where responsibility is not the last, but the first thought in AI innovation.”
Professor Ewa Luger