
The next generation of AI tools, such as copilots, will be built on advanced foundation models. To ensure these tools serve everyone well, including marginalized groups, the challenge is to provide fair and equitable service to all users. How can systems built on new multimodal foundation models better serve marginalized individuals and groups?
Past efforts focused on building specialized models intended to serve marginalized groups equitably. However, because different groups face distinct needs and challenges, this approach often falls short of meeting the diverse needs of all users. As foundation models become embedded across many technologies, we need to move beyond group-based fairness and build systems that work well for each individual, especially those who are marginalized. (challenge-code MSR-C3)
To do this, we need to define what it means to be marginalized in a way that is both socially meaningful and mathematically tractable. This calls on expertise from the arts and humanities, where the concept of marginalized groups has long been studied. Solving this challenge means bringing together experts from diverse fields to turn theory into practical tools and designs.
Ultimately, the goal is AI technology that is inclusive and fair: technology that narrows, rather than widens, the gaps between different groups in society.
Additional Context
The Teachable AI Experiences (Tai X) team is a multi-disciplinary group at Microsoft Research Lab – Cambridge (MSR) that brings together expertise in human experience and machine learning to push the state of the art in human-AI experiences. These disciplines collaborate closely by building systems together, addressing questions that arise through human-centred innovation in design patterns and machine learning.
We invite research proposals that draw on a deep understanding of how marginalization has been studied and measured, and we welcome researchers interested in working with us to operationalize some of these concepts into measures or design mitigations that support responsible AI practices. Alongside research papers, we encourage proposals to include practical outputs useful to industry, such as:
- a white paper on defining the concept of “margins” and the opportunities and fairness challenges that arise when using advanced AI models; and
- socio-technical frameworks that help engineering teams understand how to assess the quality of the AI models they create and the experiences they build using these models, especially when it comes to serving marginalized groups of people.
Working Arrangements
The Fellow will collaborate with the Teachable AI Experiences team at Microsoft Research Cambridge, UK. The team works across many aspects of human-AI interaction, aiming to create systems that put people at the centre of AI design.
How it will work
- If the application is successful, MSR will work with the fellow to refine the project plan, agreeing shared goals and outcomes, a timeline for collaboration milestones, and a cadence for virtual and in-person meetings, as appropriate.
- MSR will support the fellow with onboarding, provide an MSR research contact, and engage in regular meetings.
- MSR is set up for hybrid working, but some in-person contact at our lab in Cambridge, UK would be beneficial, especially towards the start of the project.
- We expect the fellow to factor travel, accommodation, and subsistence costs, as well as any specific research costs they envision, into their budget proposal.