‘CREAATIF: Crafting Responsive Assessments of AI and Tech-Impacted Futures’

  • Led by Professor David Leslie, Queen Mary University of London

This project engages with creative workers to co-develop impact assessments that address fundamental rights and working conditions in the context of generative AI. It ensures that workers have a voice in the development of these technologies and corresponding labour policy.

Generative AI (GenAI) burst into the popular imagination in late 2022 with the release of ChatGPT, a chat agent that has proven not only to be very popular but also signifies a major leap forward in technological capabilities. ChatGPT is just one of several GenAI technologies that have entered the scene in recent years; others can generate (or alter) video, images, music, dialogue, and computer code. These developments have the potential to change the nature of work for many, including for workers previously deemed immune to direct competition from technology.

There is urgency to studying the impact of these tools in the specific context of creative work, where technologically mediated worker precarity is an ongoing but increasingly acute concern. Worker resistance, as exemplified by recent industrial action by the Writers Guild of America, highlights that the impacts extend beyond displacement from or access to work: they can unsettle established notions of authorship while also affecting worker discretion and dignity. The creative sector is at the coalface of the GenAI transformation, in which emerging technologies potentially devalue labour both materially (wages) and socially (recognition of contribution).

Our understanding of the transformative effects of GenAI on creative work is still emerging; meanwhile, the experience and perspectives of those whose lives and livelihoods are increasingly threatened by these new technologies have not been properly factored into AI policy planning. What is needed is to bring these perspectives into view where they can influence labour policy on data-driven technologies. Achieving this requires building new architectures that bridge the divide between experience and application and that promote involvement by building on the strengths of UK labour law, comparable historical precedents such as Scandinavian participatory design, and recent turns toward participatory algorithmic impact assessments.

Algorithmic impact assessments hold promise as accountability tools that can surface core concerns about the effects of data-driven technologies while pointing towards governance strategies for mitigating those concerns. Where impact assessments are designed to foreground the voices of people affected by emerging technologies, they can also serve as frameworks for surfacing and crystallising perspectives that reflect the lived experience of technology-mediated lives, which in turn can be channelled into policy guidance.

In this project, we bring together two leading and relevant methods of impact assessment: the Human Rights, Democracy, and the Rule of Law Assurance Framework for AI Systems (HUDERIA), and the Good Work Algorithmic Impact Assessment (GWAIA). The GWAIA has been selected as a focal point because of its specific application to questions of worker dignity; its current design is oriented towards algorithmic management tools within a ‘conventional’ employment context. We will cross-reference this with insights from HUDERIA, which offers specific guidance on structuring accountability in the relationship between individuals and technology producers, both public and private. A central feature these tools share is a participatory engagement model: surfacing, assessing, and mitigating individual and collective risks to workers by drawing on the experiences, testimony, and ideas of workers themselves.
