BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//BRAID UK - ECPv6.14.2//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:BRAID UK
X-ORIGINAL-URL:https://braiduk.org
X-WR-CALDESC:Events for BRAID UK
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:UTC
BEGIN:STANDARD
TZOFFSETFROM:+0000
TZOFFSETTO:+0000
TZNAME:UTC
DTSTART:20250101T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=UTC:20250327T160000
DTEND;TZID=UTC:20250327T170000
DTSTAMP:20260410T232644Z
CREATED:20250210T124840Z
LAST-MODIFIED:20250210T124840Z
UID:3271-1743091200-1743094800@braiduk.org
SUMMARY:‘Responsible AI Futures’ Hybrid Seminar – Dr Claire Paterson-Young
DESCRIPTION:Ethical review to support Responsible Artificial Intelligence (AI) in policing: A preliminary study of West Midlands Police’s specialist data ethics review committee\n\nBook your hybrid ticket now!\n\nThe deployment of AI by the police\, while promising more effective use of data for the prevention and detection of crime\, brings with it considerable threats of disproportionality and interference with fundamental rights. The West Midlands Office of the Police and Crime Commissioner (WMOPCC) and West Midlands Police (WMP) Ethics Committee aims to bridge the gap between ethical reflection\, scientific rigour\, and a focus on human rights\, thus contributing to responsible AI in policing. This seminar explores findings from an interdisciplinary research project that examined the impact and influence of the Committee\, including:\n\nDeveloping an understanding within the police of key ethical\, scientific\, legal and operational issues for planning and implementation.\nEmbedding genuine representation from the community that the police serve in ethical oversight committees to ensure opportunities for transparent engagement.\nThe importance of explaining clearly how AI will be used in policing\, so that potential benefits\, risks/harms and proportionality can be assessed in the same conversation.\nThe need for police forces\, Police and Crime Commissioners and national bodies embarking on AI-driven policing to address the ethical\, legal and technical questions raised by policing AI\, such as reconciling privacy and security priorities relevant to the assessment of the proportionality of using suspect data.\n\nBio\nClaire Paterson-Young (BA MSc PhD) is an Associate Professor and Research Leader at the Institute for Social Innovation and Impact (ISII). Claire’s current major research projects include AI in Law Enforcement (an RAI-UK-funded 4-year interdisciplinary project titled ‘PROBabLE Futures – Probabilistic AI Systems in Law Enforcement Futures’).\nClaire has over 15 years’ practice and management experience in safeguarding\, child sexual exploitation\, trafficking\, sexual violence\, and youth and restorative justice. Claire is Chair of the University of Northampton Research Ethics Committee and a serving member of the West Midlands Police and Crime Commissioner Ethics Committee. She formerly served as a member of the Health Research Authority Research Ethics Committee. She is a trustee of the National Association for Youth Justice (NAYJ)\, a Fellow of the Royal Society for the encouragement of Arts\, Manufactures and Commerce (RSA) and a Fellow of the Higher Education Academy (HEA). Claire is a Research Affiliate at the Vulnerability & Policing Futures Research Centre. She has held a Visiting Fellowship at Binus University (Indonesia) and an Associate Fellowship at the Children and Young People’s Centre for Justice (Scotland).\nRunning Order\n16.00 – Talk by Claire Paterson-Young\n16.40 – Q&A\n17.00 – End\nIn-person: Inspace\, 1 Crichton St\, Newington\, Edinburgh EH8 9AB\nOnline: Zoom\nPlease note that limited seats are available at Inspace for in-person audiences\, so please book tickets in advance. For those joining online\, please visit the online event page for the Zoom joining link and password.\nFor enquiries about accessibility\, please contact the DI team at designinformatics@ed.ac.uk or visit the Access webpage for more information about the venue: https://inspace.ed.ac.uk/venue-access/
URL:https://braiduk.org/event/responsible-ai-futures-hybrid-seminar-dr-claire-paterson-young
LOCATION:Inspace\, 1 Crichton Street\, Edinburgh\, EH8 9AB\, United Kingdom
CATEGORIES:DI Lecture Series
ATTACH;FMTTYPE=image/png:https://braiduk.org/wp-content/uploads/2025/02/4.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20250424T160000
DTEND;TZID=UTC:20250424T170000
DTSTAMP:20260410T232644Z
CREATED:20241003T113453Z
LAST-MODIFIED:20250429T132426Z
UID:2624-1745510400-1745514000@braiduk.org
SUMMARY:‘Responsible AI Futures’ Hybrid Seminar – Dr Dan McQuillan
DESCRIPTION:Responsible AI means Decomputing\nIn this talk\, Dan McQuillan will argue that taking a responsible approach to AI means decomputing. To start with\, decomputing means less computing; in particular\, less of the hyperscale infrastructures which underpin generative AI and whose datacentres are sprouting like mushrooms across the globe.\n\nBut decomputing goes beyond concern for environmental impacts to challenge the commitment of the wider AI apparatus to extractivism and scale. AI as we know it exploits sources of data and labour as well as natural resources like energy\, water and minerals. Meanwhile\, its claims to superior intelligence rest on the continually expanding size of its models and datasets. Decomputing draws on both decolonialism and degrowth\, arguing for an approach to AI based on the need for social justice and a just transition.\nAll too often\, AI acts as a reductive diversion from complex social and environmental questions\, so decomputing seeks alternatives that are relational\, collective and truly response-able\, because they can respond to the complexities of lived experience.\n\nBio\nDr Dan McQuillan\, Lecturer in Creative and Social Computing at Goldsmiths\, University of London\nAfter a PhD in experimental particle physics\, Dan worked with people with learning disabilities and mental health issues\, created websites with asylum seekers\, ran social tech camps in Kyrgyzstan and Sarajevo\, and worked for Amnesty International and the NHS. He recently authored ‘Resisting AI – An Anti-fascist Approach to Artificial Intelligence’.\nWatch the recording below:
URL:https://braiduk.org/event/responsible-ai-futures-hybrid-seminar-dr-dan-mcquillan
LOCATION:Online only
CATEGORIES:DI Lecture Series
ATTACH;FMTTYPE=image/png:https://braiduk.org/wp-content/uploads/2024/10/4.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20250508T160000
DTEND;TZID=UTC:20250508T170000
DTSTAMP:20260410T232644Z
CREATED:20241105T114930Z
LAST-MODIFIED:20250512T114432Z
UID:2782-1746720000-1746723600@braiduk.org
SUMMARY:‘Responsible AI Futures’ Hybrid Seminar – Dr Lydia Farina
DESCRIPTION:Determining responsibility considerations for AI ecosystems in the context of the creative industries\nThis talk provides key insights from our scoping BRAID project ‘Creating a dynamic archive of Responsible AI Ecosystems in the context of Creative AI’. The project lays the foundation for mapping RAI ecosystems in this context by using bottom-up evidence already collected in specific research and artistic projects. We interpret AI ecosystems as interlinked ecosystems consisting of different individual actors and groups interacting in complex ways with one another and with AI applications. Evidence collected from the case studies is modelled into a dynamic archive to enable us to determine the boundaries of these ecosystems and the relevant responsibility considerations. The structure of the dynamic archive is based on present and future stakeholders and on responsibility priorities identified by the case-study participants. The talk includes insights relating to the responsible use of AI applications\, both as actors within the ecosystem and as external curators of the dynamic archive.\n\nBio\nLydia Farina is an Assistant Professor in Philosophy at the University of Nottingham\, working on the philosophy of mind\, metaphysics and the philosophy of artificial intelligence. More specifically\, she researches the nature of emotion\, AI responsibility\, affective computing and social kinds. In the past year she researched the use of dynamic archives to determine responsible use of AI in the creative industries as the Principal Investigator of a BRAID scoping project. She holds a PhD and an MA in Philosophy from the University of Manchester\, an MA in Classics from University College London and a BA in Classics from Aristotle University of Thessaloniki. Before academia\, she worked in finance and is a member of the Chartered Institute of Taxation (CIOT).\n\nWatch the recording below:
URL:https://braiduk.org/event/responsible-ai-futures-hybrid-seminar-dr-lydia-farina
LOCATION:Online only
CATEGORIES:DI Lecture Series
ATTACH;FMTTYPE=image/png:https://braiduk.org/wp-content/uploads/2024/11/8-1.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20250515T160000
DTEND;TZID=UTC:20250515T170000
DTSTAMP:20260410T232644Z
CREATED:20250226T110248Z
LAST-MODIFIED:20250606T083255Z
UID:3298-1747324800-1747328400@braiduk.org
SUMMARY:‘Responsible AI Futures’ Hybrid Seminar – Srravya Chandhiramowuli
DESCRIPTION:Millions of workers\, particularly in Global South regions\, are engaged in creating large-scale annotated datasets used for training and fine-tuning models\, as well as making AI work as intended by verifying and correcting its outcomes where required. Yet there is little recognition\, in AI development or governance\, of the role of data workers or the challenges they face. In this talk\, I bring attention to the contributions of\, as well as the concerns arising from\, data work through ethnographic insights into two data work projects: one in which data work is structured as a repetitive\, unitised activity\, and another which aims to recover data work from such reductive frames using feminist-led\, participatory approaches. By tracing the work practices\, values and tensions across the two projects\, I highlight how data work\, including efforts to responsibilise it\, is caught within and shaped by the globalised supply chains that prioritise efficiency and expansion. Critically examining data work allows us to confront the scalar logics that underpin dataset (and indeed AI) production and to intervene in them as part of envisioning responsible AI futures.\n\nBio\nSrravya Chandhiramowuli is a PhD candidate in the University of Edinburgh’s Institute for Design Informatics and a PhD affiliate at the Centre for Technomoral Futures. Her research closely follows the on-the-ground practices of dataset production for AI\, bringing particular attention to systemic challenges and frictions in data and AI pipelines. Building on scholarship in Human-Computer Interaction (HCI) and Science and Technology Studies (STS)\, Srravya’s research seeks to contribute towards just and equitable AI futures.\n\nWatch the recording here:
URL:https://braiduk.org/event/responsible-ai-futures-hybrid-seminar-srravya-chandhiramowuli
LOCATION:Inspace\, 1 Crichton Street\, Edinburgh\, EH8 9AB\, United Kingdom
CATEGORIES:DI Lecture Series
ATTACH;FMTTYPE=image/png:https://braiduk.org/wp-content/uploads/2025/02/2025-sem-2-eventbrite-images-1.png
END:VEVENT
END:VCALENDAR