BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//BRAID UK - ECPv6.14.2//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:BRAID UK
X-ORIGINAL-URL:https://braiduk.org
X-WR-CALDESC:Events for BRAID UK
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:UTC
BEGIN:STANDARD
TZOFFSETFROM:+0000
TZOFFSETTO:+0000
TZNAME:UTC
DTSTART:20250101T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=UTC:20250130T160000
DTEND;TZID=UTC:20250130T170000
DTSTAMP:20260408T044921Z
CREATED:20250117T110105Z
LAST-MODIFIED:20250304T114233Z
UID:3140-1738252800-1738256400@braiduk.org
SUMMARY:'Responsible AI Futures' Hybrid Seminar - Dr Alex Taylor
DESCRIPTION:Red Teaming and the Operationalising of Responsibility\nThis spring\, I’ll be embarking on fieldwork investigating the outsourced labours and operational logics associated with red teaming. Currently in vogue and linked to responsible AI (RAI) programmes across the tech sector\, red teaming is being touted as a way to identify weaknesses in language and multi-modal AI models through adversarial or provocative prompts. The reasoning here is that weaknesses identified through this prompting might help in fine-tuning or re-training AI models\, mitigating issues such as systematically unsafe or toxic content.\nForming the basis for my BRAID fellowship\, this fieldwork will take place across so-called ‘data enrichment’ centres in the Philippines (and possibly other sites in the Global South) and examine red teaming from two standpoints. First\, it will interrogate the portrayal of red teaming as a sector-wide solution to the toxic tendencies of data-driven models\, such as large language models (LLMs). Second\, it will analyse red teaming as a case study of what I term the operationalising of responsibility in the tech sector. Across both dimensions\, my focus will be on the global flows of capital and the forms and concentrations of labour being mobilised to “responsibilise” AI. I see implications here not just for a more responsible AI but for vital questions about responsibility in late capitalism.\nIn preparation for this work\, I want to use this talk to think with an audience about some of the assumptions behind\, and controversies surrounding\, red teaming. I’ll begin by elaborating on the ways red teaming is being approached and put into practice in R&D. I’ll then set this technical work in the wider context of RAI in the sector to raise and invite questions about the adequacy of a ‘solution’ that continues to valorise technological innovation whilst failing to reward or indeed recognise the extractive conditions necessary for AI’s proliferation.\n\nBio\nAlex Taylor is a sociologist by training\, with longstanding commitments to critically investigating and intervening in the proliferation of technology and machine intelligence. His work has been shaped most heavily by a critical yet hopeful scholarship in feminist technoscience\, including works from Ruha Benjamin\, Simone Browne\, Vinciane Despret\, Donna Haraway\, and Anna Lowenhaupt Tsing. He’s currently a Reader in Design Informatics at the University of Edinburgh and an AHRC BRAID Fellow\, and co-runs the Critical Data Studies Cluster at the Edinburgh Futures Institute. He is also a Fellow of the RSA and holds visiting roles at the University of Sweden and City\, University of London.\n\nWatch the recording here:
URL:https://braiduk.org/event/responsible-ai-futures-hybrid-seminar-dr-alex-taylor
LOCATION:Inspace\, 1 Crichton Street\, Edinburgh\, EH8 9AB\, United Kingdom
CATEGORIES:DI Lecture Series
ATTACH;FMTTYPE=image/png:https://braiduk.org/wp-content/uploads/2025/01/Alex-talk-visual-for-socials.png
END:VEVENT
END:VCALENDAR