
The Ada Lovelace Institute (Ada) challenge is somewhat different to our other challenges. It offers researchers first-hand experience at a leading national think-tank.
Working with the Ada team for 2-3 days each week, the BRAID Fellow will collaborate on a series of research outputs designed to complement the responsible AI challenge that their project seeks to address.
The Ada Lovelace Institute (Ada) is an independent research institute with a mission to make data and AI work for people and society. We recognise that there are major power asymmetries between those developing AI and data-driven technologies, and those impacted by them. Through our research, we aim to rebalance power by better representing people and society in decisions around AI and data. We are evidence-based, interdisciplinary and rigorous in our approach.
We aim to create change by:
- Convening diverse voices to create an inclusive understanding of the social and ethical issues arising from data and AI;
- Building evidence to support rigorous research and foster informed debate; and
- Shaping and informing good policy and practice to prioritise societal benefits in the design and deployment of data and AI.
We deliver our aims through programmes of work in our directorates:
- Society, Justice & Public Services. Explores how people and society are affected by data and AI, and how we can organise society and its institutions in a data age. Includes work on health, education, justice and public sector uses of AI and data. (challenge code ALI-C1)
- Emerging Tech & Industry Practice. Explores how to build the best interests of people and society into the development of new and emerging technology. Includes exploring AI accountability practices, methods to inspect AI systems, the ethics of AI-powered genomics technologies, research ethics and public participation. (challenge code ALI-C2)
- Data and AI Law & Policy. Explores and influences ways to govern and regulate data and AI technologies. Includes policy work across the UK and EU. (challenge code ALI-C3)
- Impact & Research Practice. Responsible for our large-scale public engagement work and for communication of our research. (challenge code ALI-C4)
For this BRAID fellowship round, we invite research proposals that:
- identify a current responsible AI challenge that is highly relevant to one of the key areas of Ada’s research directorates as outlined above;
- can be co-developed around a series of policy-facing outputs (co-authored research papers, briefs, discussion papers) that can be worked on with Ada, alongside your own research papers;
- call for regular contact and engagement with the Ada team for 2-3 days per week;
- promote ongoing knowledge exchange; and
- benefit deeply from a flexible and collaborative approach.
Additional Context
While our 2024-2027 research agenda is not yet confirmed, there are several topic areas to which we would seek to match applicants’ research interests and backgrounds:
Exploring liability regimes for AI. In the EU and UK, policymakers are grappling with the question of how to assign legal liability to developers of AI systems when things go wrong. A key challenge is to determine what kind of liability regime should be put in place (e.g. strict liability) for different kinds of AI technologies, and upon which actor in an AI supply chain that liability should rest. We are interested in what legal liability regimes have existed in other technology sectors, how well they have worked, and whether they may provide lessons for AI liability regimes in the UK and EU.
The ethics and law of ‘piloting’ AI systems. From the self-driving features of Teslas to ChatGPT, many AI-powered technologies are being piloted, tested, and evaluated through public ‘experiments’. Tech firms like Google have adopted the language of experimentation to describe products like Bard, which creates moral and legal ambiguity around a technology’s impacts, turning public spaces into a ‘lab’ environment and everyday people into test subjects. As tech companies seek to deploy, test and evaluate powerful AI technologies, it is critical for policymakers and regulators to ask what kinds of ethical and legal practices these companies should adopt to govern their behaviour.
The labour impacts of generative AI. A surge in generative AI applications in the last year has affected some members of society more than others, including artists, copywriters, and educators. The ways in which these technologies are changing traditional roles, job ladders, and pay structures are relatively understudied, but essential for policymakers and trade unions to understand. What new rights, regulations and practices might need to be created in light of the surge in generative AI technologies? Are existing laws enough to protect people from experiencing various forms of harm?
‘Rethinking’ AI legislation and accountability practices. As Europe, the UK, and the US seek to develop regulatory proposals for AI systems at pace, what kinds of practices and requirements should be put in place? We are interested in projects that explore what collections of rights, institutions, and legislative frameworks we need to govern AI, and in deep-dive challenges on some emerging proposals (e.g. how an AI Ombudsman responsible for receiving reports of AI harms might function, or how ‘inclusive’ AI sandboxes might work in practice). This work would build on Ada projects like Rethinking Data and our existing work exploring proposed AI regulatory solutions.
Demystifying emerging technology areas. The technology sector is in a state of near-constant hype cycles around new and emerging technologies. Much of Ada’s work seeks to unpack what these technologies are, what risks they raise, and how they can be governed. Using futures methodologies, patent database analysis, and other methods for horizon scanning, we are interested in research into a range of new and emerging technologies that have a potential impact on people and society. Areas we are currently considering include agriculture tech, embodied AI systems, inferential biometrics, immersive technologies, and multi-modal AI systems.
Working Arrangements
A BRAID Fellow will sit within one of Ada’s Research Directorates, reporting to a Senior Researcher. The fellow will refine their research proposal with the directorate and work on it alongside several other researchers there to produce a series of outputs. They will have the support of Ada’s Communications, Operations, and Policy & Public Affairs specialists to develop the project further.
Our staff are primarily based in the UK, but we also have a growing team in Brussels.
We are looking for researchers from across the arts and humanities with strong skills and interests in areas such as public policy, social and cultural studies, law and the history of technology (both qualitative and quantitative), and with a strong interest or background in AI and data. To date, Ada’s methodologies include the use of working groups and expert convenings, public deliberation initiatives, quantitative surveys, desk-based research and synthesis, policy and legal analysis, and ethnographic research. We welcome new kinds of multidisciplinary expertise and methodologies into our team, which might include expertise in data science, computer science, futures, or other disciplinary backgrounds.
Our staff come from interdisciplinary backgrounds, with expertise in philosophy, public policy, sociology, critical feminist studies, data science, computer science, public participation, history, and other fields. They have worked in academia, the tech industry, civil society, and government.
This role would particularly suit a fellow who is:
- keen to gain experience of developing evidence-based research with a focus on impact, beyond a traditional academic environment;
- keen to gain exposure to a policy environment;
- able to collaborate within a multidisciplinary team to help refine an ambitious research agenda; and
- flexible and pragmatic, with the ability to work within a cooperative project management setting.
We would like fellows who can spend 2-3 days of their week working on their project with Ada staff. Fellows are welcome to work from our offices, join all of Ada’s core meetings, and experience how a civil society organisation working on data and AI issues operates day-to-day. Ada shares an office with the Nuffield Foundation in Farringdon, London (100 St John Street). We operate a flexible working model, with staff often coming into the office 3 days a week (usually Tuesdays – Thursdays), and fellows are welcome to work from our offices on those days.
We can accommodate other working relationships depending on availability. Proposals should be scoped to acknowledge the time commitment that a fellow can offer.
A note on collaborative outputs:
The fellow will be expected to work with other Ada researchers to shape policy positions, recommendations, and other aspects of the research throughout the research lifecycle – from ideation of a project all the way through to publications and impact.
The primary shared outputs from the fellow’s work will be a series of reports (typically 2-3) relating to responsible AI challenges that Ada is working on. These reports will be co-written with other Ada research staff and completed over the course of the fellowship. Fellows will be credited as authors of these reports, along with any researchers or staff who make substantial contributions to the work.
One important aspect of how Ada operates is that our research reflects an institutional position: our findings and policy recommendations speak for the entire institution.
This may differ from a fellowship within an academic context, in which a fellow works on their own discrete project and publishes research under their own name. At different stages – project ideation, research design, drafting – fellows should expect a more involved process of collaboration with our communications, policy, and research teams than a traditional fellowship might entail.
Ada generally does not produce academic outputs for conferences or journals. However, we will support fellows in turning the outputs of their fellowship into academic submissions, both during and after the fellowship.
You can read more about our mission and approach here.