BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//BRAID UK - ECPv6.14.2//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:BRAID UK
X-ORIGINAL-URL:https://braiduk.org
X-WR-CALDESC:Events for BRAID UK
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:UTC
BEGIN:STANDARD
TZOFFSETFROM:+0000
TZOFFSETTO:+0000
TZNAME:UTC
DTSTART:20230101T000000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=UTC:20250515T160000
DTEND;TZID=UTC:20250515T170000
DTSTAMP:20260405T231408Z
CREATED:20250226T110248Z
LAST-MODIFIED:20250606T083255Z
UID:3298-1747324800-1747328400@braiduk.org
SUMMARY:‘Responsible AI Futures’ Hybrid Seminar – Srravya Chandhiramowuli
DESCRIPTION:Millions of workers\, particularly in global south regions\, are engaged in creating large-scale annotated datasets used for training and fine-tuning models\, as well as making AI work as intended by verifying and correcting its outcomes where required. Yet\, there is little recognition\, in AI development or governance\, of the role of data workers or the challenges they face. In this talk\, I bring attention to the contributions as well as concerns arising from data work through ethnographic insights into two data work projects\, one in which data work is structured as a repetitive\, unitised activity and another which aims to recover data work from such reductive frames using feminist-led\, participatory approaches. By tracing the work practices\, values and tensions across the two projects\, I highlight how data work\, including efforts to responsibilize it\, is caught within and shaped by the globalised supply chains that prioritise efficiency and expansion. Critically examining data work allows us to confront the scalar logics that underpin dataset (and indeed AI) production and to intervene in them as part of envisioning responsible AI futures. \n\n\n\n\nBio\nSrravya Chandhiramowuli is a PhD candidate in the University of Edinburgh’s Institute for Design Informatics and a PhD affiliate at the Centre for Technomoral Futures. Her research closely follows the on-ground practices of dataset production for AI\, bringing particular attention to systemic challenges and frictions in data and AI pipelines. Building on scholarship in Human Computer Interaction (HCI) and Science and Technology Studies (STS)\, Srravya’s research seeks to contribute towards just and equitable AI futures. \n\n\n\n\nWatch the recording here:
URL:https://braiduk.org/event/responsible-ai-futures-hybrid-seminar-srravya-chandhiramowuli
LOCATION:Inspace\, 1 Crichton Street\, Edinburgh\, EH8 9AB\, United Kingdom
CATEGORIES:DI Lecture Series
ATTACH;FMTTYPE=image/png:https://braiduk.org/wp-content/uploads/2025/02/2025-sem-2-eventbrite-images-1.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20250508T160000
DTEND;TZID=UTC:20250508T170000
DTSTAMP:20260405T231408Z
CREATED:20241105T114930Z
LAST-MODIFIED:20250512T114432Z
UID:2782-1746720000-1746723600@braiduk.org
SUMMARY:‘Responsible AI Futures’ Hybrid Seminar – Dr Lydia Farina
DESCRIPTION:Determining responsibility considerations for AI ecosystems in the context of the creative industries\nThis talk provides key insights from our scoping BRAID project ‘Creating a dynamic archive of Responsible AI Ecosystems in the context of Creative AI’. The project lays the foundation work for mapping RAI ecosystems in this context by using bottom-up evidence already collected in specific research and artistic projects. We interpret AI ecosystems as interlinked ecosystems consisting of different individual actors and groups interacting in complex ways with one another and with AI applications. Evidence collected from the case studies is modelled into a dynamic archive to enable us to determine the boundaries of these ecosystems and the relevant responsibility considerations. The structure of the dynamic archive is based on present and future stakeholders and on responsibility priorities identified by the case study participants. The talk includes insights relating to the responsible use of AI applications both as actors within the ecosystem and as external curators of the dynamic archive. \n\n\n\n\nBio\nLydia Farina is an Assistant Professor in Philosophy at the University of Nottingham\, working on the philosophy of mind\, metaphysics and the philosophy of artificial intelligence. More specifically she researches the nature of emotion\, AI Responsibility\, affective computing and social kinds. In the past year she researched the use of dynamic archives to determine responsible use of AI in the creative industries as the Principal Investigator of a BRAID scoping project. She holds a PhD and an MA in Philosophy from the University of Manchester\, an MA in Classics from University College London and a BA in Classics from Aristotle University of Thessaloniki. Before academia she worked in finance and is a member of the Chartered Institute of Taxation (CIOT). \n\n\n\n\nWatch the recording below:
URL:https://braiduk.org/event/responsible-ai-futures-hybrid-seminar-dr-lydia-farina
LOCATION:Online only
CATEGORIES:DI Lecture Series
ATTACH;FMTTYPE=image/png:https://braiduk.org/wp-content/uploads/2024/11/8-1.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20250424T160000
DTEND;TZID=UTC:20250424T170000
DTSTAMP:20260405T231408Z
CREATED:20241003T113453Z
LAST-MODIFIED:20250429T132426Z
UID:2624-1745510400-1745514000@braiduk.org
SUMMARY:'Responsible AI Futures' Hybrid Seminar - Dr Dan McQuillan
DESCRIPTION:Responsible AI means Decomputing\nIn this talk Dan McQuillan will argue that having a responsible approach to AI means decomputing. To start with\, decomputing means less computing; in particular\, less of the hyperscale infrastructures which underpin generative AI and whose datacentres are sprouting like mushrooms across the globe. \n\n\nBut decomputing goes beyond concern for environmental impacts to challenge the commitment of the wider AI apparatus to extractivism and scale. AI as we know it exploits sources of data and labour as well as natural resources like energy\, water and minerals. Meanwhile its claims to superior intelligence rest on the continually expanding size of its models and datasets. Decomputing draws on both decolonialism and degrowth\, arguing for an approach to AI based on the need for social justice and a just transition. \nAll too often\, AI acts as a reductive diversion from complex social and environmental questions\, so decomputing seeks alternatives that are relational\, collective and truly response-able\, because they can respond to the complexities of lived experience. \n\n\n\n\nBio\nDr Dan McQuillan\, Lecturer in Creative and Social Computing at Goldsmiths\, University of London \nAfter a PhD in Experimental Particle Physics\, Dan worked with people with learning disabilities and mental health issues\, created websites with asylum seekers\, ran social tech camps in Kyrgyzstan and Sarajevo and worked for Amnesty International and the NHS. He recently authored ‘Resisting AI – An Anti-fascist Approach to Artificial Intelligence’ \nWatch the recording below:
URL:https://braiduk.org/event/responsible-ai-futures-hybrid-seminar-dr-dan-mcquillan
LOCATION:Online only
CATEGORIES:DI Lecture Series
ATTACH;FMTTYPE=image/png:https://braiduk.org/wp-content/uploads/2024/10/4.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20250327T160000
DTEND;TZID=UTC:20250327T170000
DTSTAMP:20260405T231408Z
CREATED:20250210T124840Z
LAST-MODIFIED:20250210T124840Z
UID:3271-1743091200-1743094800@braiduk.org
SUMMARY:‘Responsible AI Futures’ Hybrid Seminar – Dr Claire Paterson-Young
DESCRIPTION:Ethical review to support Responsible Artificial Intelligence (AI) in policing: A preliminary study of West Midlands Police’s specialist data ethics review committee\n  \nBook your hybrid ticket now!\n\n\nThe deployment of AI by the police\, while promising more effective use of data for the prevention and detection of crime\, brings with it considerable threats of disproportionality and interference with fundamental rights. The West Midlands Office of the Police and Crime Commissioner (WMOPCC) and West Midlands Police (WMP) Ethics Committee aims to bridge the gap between ethical reflection\, scientific rigour\, and a focus on human rights\, thus contributing to responsible AI in policing. This seminar explores findings from an interdisciplinary research project that examined the impact and influence of the Committee\, including: \n\nDeveloping an understanding within the police of key ethical\, scientific\, legal and operational issues for planning and implementation.\nEmbedding genuine representation from the community that the police serve in ethical oversight committees to ensure opportunities for transparent engagement.\nImportance of explaining clearly how AI will be used in policing\, so as to enable potential benefits\, risks/harms and proportionality to be assessed in the same conversation.\nNeed for Police forces\, Police and Crime Commissioners and national bodies embarking on AI-driven policing to address the ethical\, legal and technical questions raised by policing AI\, such as reconciling privacy and security priorities relevant to the assessment of the proportionality of using suspect data.\n\n\n\n\n\nBio\nClaire Paterson-Young (BA MSc PhD) is an Associate Professor & Research Leader at the Institute for Social Innovation and Impact (ISII). Claire’s current major research projects include AI in Law Enforcement (RAI-UK funded 4-year interdisciplinary project titled ‘PROBabLE Futures – Probabilistic AI Systems in Law Enforcement Futures’). 
Claire has over 15 years’ practice and management experience in safeguarding\, child sexual exploitation\, trafficking\, sexual violence\, youth and restorative justice. Claire is Chair of the University of Northampton Research Ethics Committee and a serving member of the West Midlands Police and Crime Commissioner Ethics Committee. She formerly served as a member of the Health and Research Association Research Ethics Committee. She is a trustee of the National Association for Youth Justice (NAYJ)\, Fellow of the Royal Society for the encouragement of Arts\, Manufactures and Commerce (RSA) and Fellow of the Higher Education Academy (HEA). Claire is a Research Affiliate at the Vulnerability & Policing Futures Research Centre. She has held a Visiting Fellowship position at Binus University (Indonesia) and an Associate Fellowship position at the Children and Young People’s Centre for Justice (Scotland). \nRunning Order \n16.00 – Talk by Claire Paterson-Young \n16.40 – Q&A \n17.00 – End \nIn-person: Inspace\, 1 Crichton St\, Newington\, Edinburgh EH8 9AB\nOnline: Zoom \nPlease note limited seats are available at Inspace for in-person audiences\, so please book tickets in advance. For those joining online please visit the online event page for the Zoom joining link and password. \nFor inquiries about accessibility\, please contact the DI team at designinformatics@ed.ac.uk or visit the Access webpage for more information about the venue: https://inspace.ed.ac.uk/venue-access/
URL:https://braiduk.org/event/responsible-ai-futures-hybrid-seminar-dr-claire-paterson-young
LOCATION:Inspace\, 1 Crichton Street\, Edinburgh\, EH8 9AB\, United Kingdom
CATEGORIES:DI Lecture Series
ATTACH;FMTTYPE=image/png:https://braiduk.org/wp-content/uploads/2025/02/4.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20250313T160000
DTEND;TZID=UTC:20250313T170000
DTSTAMP:20260405T231408Z
CREATED:20250210T124138Z
LAST-MODIFIED:20250226T103246Z
UID:3267-1741881600-1741885200@braiduk.org
SUMMARY:‘Responsible AI Futures’ Hybrid Seminar – Ananda Rutherford
DESCRIPTION:Do ‘Words Matter’ in Machine Learning?\n  \n\nBook your online ticket now!\n\nThis presentation will reflect on the distinctions between what is needed to produce equitable\, anti-racist information on artworks and what is possible or desirable with the application of machine learning. What word choices in the cultural sector can or should ML be taught to make? Can machine learning be applied to address structural inequality and systemic bias within art historical and museological practice? What relationship should we be crafting between machine learning and the histories of art as presented through museum labels and interpretation? \nThe focus of this research was a dataset of texts from Tate’s Art and Artists online collection\, identified as biased in terms of language and interpretation. The research was conducted as part of the AHRC Towards a National Collection Programme\, on the Transforming Collections project. Reviewing the Tate texts against Hodan Warsame’s essay ‘Mechanisms and Tropes of Colonial Narratives’\, part of the pivotal publication Words Matter: An Unfinished Guide to Word Choices in the Cultural Sector (2018)\, alongside the development of an application to analyse object label texts revealed the need for deep contextual understanding\, both of art historical writing conventions and the artwork itself. \n\n\n\n\nBio\nAnanda Rutherford is a Research Fellow with UAL’s Decolonising Arts Institute. Her research for the AHRC/TaNC funded Transforming Collections project explored the language of museum catalogue texts and the potential application of machine learning to evidence and problematise issues of colonialism and racial bias. She is also interested in ethics in practice at the intersection of academic research\, data and technology\, and GLAM and heritage organisations. 
Ananda is a former museum collections and documentation manager\, with a career focus on the digitisation and dissemination of collections information online\, and continues to work and consult in this area. \n\n\n\n\nRunning Order \n16.00 – Talk by Ananda Rutherford \n16.40 – Q&A \n17.00 – End \nOnline: Zoom \nFor those joining online please visit the online event page for the Zoom joining link and password. \nFor inquiries about accessibility\, please contact the DI team at designinformatics@ed.ac.uk or visit the Access webpage for more information about the venue: https://inspace.ed.ac.uk/venue-access/
URL:https://braiduk.org/event/responsible-ai-futures-hybrid-seminar-ananda-rutherford
LOCATION:Online only
CATEGORIES:DI Lecture Series
ATTACH;FMTTYPE=image/png:https://braiduk.org/wp-content/uploads/2025/02/3.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20250306T093000
DTEND;TZID=UTC:20250306T140000
DTSTAMP:20260405T231408Z
CREATED:20250117T112940Z
LAST-MODIFIED:20250211T114957Z
UID:3145-1741253400-1741269600@braiduk.org
SUMMARY:Ensuring Responsible AI Through Methodological Diversity
DESCRIPTION:Hanna Barakat + AIxDESIGN & Archival Images of AI / Better Images of AI / Data Mining 1 / CC-BY 4.0 \nBook your ticket now!\nThis event brings together leading experts from different and distant disciplines to explore the diverse research methodologies and methods that can drive responsible AI forward. The audience will be invited to participate and reflect on how different approaches and diversity of knowledge in AI research and development are key for ensuring a responsible future. From ethical considerations to technical advancements\, the event will include keynote presentations followed by a roundtable discussion and Q&A. \nSpeakers confirmed: \nDr Claire Paterson-Young: Principal Researcher & Research Leader in Social Innovation\, University of Northampton\, Institute for Social Innovation and Impact. \nProf Muffy Calder: Vice-Principal and Head of College of Science and Engineering\, University of Glasgow. Chair RAi UK Skills Pillar.\nSpecialist Area: Sensor-based Systems; Privacy Intrusion and National Security \nDr Christopher Burr: Senior Researcher in Trustworthy Systems (TPS Programme) and Head of the Innovation and Impact Hub (Turing Research and Innovation Cluster in Digital Twins)\, The Alan Turing Institute. \nDr Martin Parker: Head of Music\, Programme Director\, MScR Sound Design\, The University of Edinburgh. \nProf Michael Pinchbeck: Senior Research Lead for Art & Performance Research Hub\, Manchester School of Theatre\, Manchester Metropolitan University. \nOrganised by: UKRI-funded programmes. \nFor any enquiries please email: info@rai.ac.uk \n\n\nLocation\nThe University of Edinburgh\, West Court Edinburgh College of Art\, 74 Lauriston Place\, EH3 9DF
URL:https://braiduk.org/event/ensuring-responsible-ai-through-methodological-diversity
ATTACH;FMTTYPE=image/jpeg:https://braiduk.org/wp-content/uploads/2025/01/HannaBarakat-AIxDESIGN-ArchivalImages-of-AI-DataMining-1-1280x889-1.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20250227T160000
DTEND;TZID=UTC:20250227T170000
DTSTAMP:20260405T231408Z
CREATED:20250210T115309Z
LAST-MODIFIED:20250310T092852Z
UID:3249-1740672000-1740675600@braiduk.org
SUMMARY:‘Responsible AI Futures’ Hybrid Seminar – Prof Andrew McStay
DESCRIPTION:Empathic AI companions: moving fast and breaking people?\nThis talk examines the ethical challenges and societal implications of empathic AI companions\, drawing on UK public attitudes and civil lawsuits against Character.ai. The lawsuits highlight critical design flaws\, inadequate safeguards\, and ethical dilemmas\, especially the blurred boundaries between reality and fiction. Survey findings reveal demographic divides in familiarity and usage\, but also shared concerns about privacy\, emotional dependency\, and the appropriateness of AI companions for children and older adults. While respondents recognise benefits such as reducing loneliness and aiding education\, anthropomorphic design elements evoke mixed reactions\, raising ethical questions about simulated emotion and inappropriate user deception. The talk advocates for age-appropriate design and stronger regulatory frameworks\, emphasising the need for balanced policies to protect vulnerable populations while fostering creativity and responsible innovation. Actionable recommendations aim to guide policymakers\, industry leaders\, and scholars in addressing the ethical complexities of this emerging digital technology. \n\n\n\n\nBio\nAndrew McStay is Professor of Technology & Society at Bangor University and the author of Automating Empathy: Decoding Technologies that Gauge Intimate Life\, published open access in 2024 with Oxford University Press. His work explores the ethical implications of AI systems that claim to empathise and understand emotion. Director of the Emotional AI Lab\, his current projects include Responsible AI (RAI) funded work to diversify regional input into IEEE-based ethical technical standards for emulated empathy and human-AI partnering (IEEE P7014.1). Other recent work includes a project for the Office of the Privacy Commissioner of Canada on child-focused emotional AI systems. He is also a technology advisory panel member for the UK’s Information Commissioner’s Office. 
\n\n\n\n\nWatch the recording here:
URL:https://braiduk.org/event/responsible-ai-futures-hybrid-seminar-prof-andrew-mcstay
LOCATION:Inspace\, 1 Crichton Street\, Edinburgh\, EH8 9AB\, United Kingdom
CATEGORIES:DI Lecture Series
ATTACH;FMTTYPE=image/png:https://braiduk.org/wp-content/uploads/2025/02/2.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20250213T160000
DTEND;TZID=UTC:20250213T170000
DTSTAMP:20260405T231408Z
CREATED:20250129T100735Z
LAST-MODIFIED:20250304T114010Z
UID:3232-1739462400-1739466000@braiduk.org
SUMMARY:'Responsible AI Futures' Hybrid Seminar - Dr Denis Newman-Griffis
DESCRIPTION:Responsible AI takes practice: Cross-sector insights into how we shape responsible use of AI methodologies\n\n\nEvery organisation seems to be staking a claim to responsible AI\, issuing new statements of ethical principles for AI to follow\, but what does it look like to actually do responsible AI in practice? This gap is one of the biggest challenges\, meaning that responsible AI too often stays a talking point and too rarely becomes an action plan. This talk will present emerging findings from the past two years of research in the Framing Responsible AI Implementation and Management (FRAIM) and Getting responsible about AI and machine learning in research funding and evaluation (GRAIL) responsible AI projects\, funded by BRAID and the Research on Research Institute. These projects are building shared knowledge of what is involved in putting responsible AI into everyday practice and how to do it effectively\, working in coproduction with nearly 20 partner organisations around the world. I will also highlight the emerging role of AI skills and competencies in bringing responsible AI practice forward in research and education. \n\n\n\n\nBio\nDenis Newman-Griffis (they/them) is a Senior Lecturer in Computer Science and AI for Health Lead in the Centre for Machine Intelligence\, University of Sheffield. Their interdisciplinary work in natural language processing investigates responsible AI principles\, practices\, and technologies\, with a particular focus on healthcare and disability. They are also a British Academy Innovation Fellow\, a Research Fellow of the Research on Research Institute\, and Co-Chair of the UK Young Academy\, and their research has been recognised with the American Medical Informatics Association’s Doctoral Dissertation Award. Denis is a proudly queer and neurodivergent academic\, committed to fostering diversity of identity\, perspective\, and experience around the AI table. 
\n\n\n\n\nWatch the recording here:
URL:https://braiduk.org/event/responsible-ai-futures-hybrid-seminar-dr-denis-newman-griffis
LOCATION:Inspace\, 1 Crichton Street\, Edinburgh\, EH8 9AB\, United Kingdom
CATEGORIES:DI Lecture Series
ATTACH;FMTTYPE=image/png:https://braiduk.org/wp-content/uploads/2025/01/1.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20250130T160000
DTEND;TZID=UTC:20250130T170000
DTSTAMP:20260405T231408Z
CREATED:20250117T110105Z
LAST-MODIFIED:20250304T114233Z
UID:3140-1738252800-1738256400@braiduk.org
SUMMARY:'Responsible AI Futures' Hybrid Seminar - Dr Alex Taylor
DESCRIPTION:Red Teaming and the Operationalising of Responsibility\nThis spring\, I’ll be embarking on fieldwork investigating the outsourced labours and operational logics associated with red teaming. Currently in vogue and linked to responsible AI (RAI) programmes across the tech sector\, red teaming is being touted as a way to identify weaknesses in language and multi-modal AI models through adversarial or provocative prompts. The reasoning here is that weaknesses identified through this prompting might help in fine-tuning or re-training AI models\, mitigating issues such as systematically unsafe or toxic content. \nForming the basis for my BRAID fellowship\, this fieldwork will take place across so-called ‘data enrichment’ centres in the Philippines (and possibly other sites in the Global South) and examine red teaming from two standpoints. First\, it will interrogate the portrayal of red teaming as a sector-wide solution to the toxic tendencies of data-driven models\, such as large language models (LLMs). Second\, it will analyse red teaming as a case study of what I term the operationalising of responsibility in the tech sector. Across both dimensions\, my focus will be on the global flows of capital and the forms and concentrations of labour being mobilised to “responsiblise” AI. I see implications here not just for a more responsible AI but also for vital questions about responsibility in late capitalism. \nIn preparation for this work\, I want to use this talk to think with an audience about some of the assumptions behind and controversies surrounding red teaming. I’ll begin by elaborating on ways red teaming is being approached and put into practice in R&D. I’ll then set this technical work in a wider context of RAI in the sector to raise and invite questions about the adequacy of a ‘solution’ that continues to valorise technological innovation whilst failing to reward or indeed recognise the extractive conditions necessary for AI’s proliferation. 
\n\n\n\nBio\nAlex Taylor is a sociologist by training\, with longstanding commitments to critically investigating and intervening in the proliferation of technology and machine intelligence. His work has been shaped most heavily by a critical yet hopeful scholarship in feminist technoscience\, including works from Ruha Benjamin\, Simone Browne\, Vinciane Despret\, Donna Haraway\, and Anna Lowenhaupt Tsing. He’s currently a Reader in Design Informatics at the University of Edinburgh and an AHRC BRAID Fellow\, and co-runs the Critical Data Studies Cluster at the Edinburgh Futures Institute. He is also a Fellow of the RSA and holds visiting roles at the University of Sweden and City\, University of London. \n\n\n\n\nWatch the recording here:
URL:https://braiduk.org/event/responsible-ai-futures-hybrid-seminar-dr-alex-taylor
LOCATION:Inspace\, 1 Crichton Street\, Edinburgh\, EH8 9AB\, United Kingdom
CATEGORIES:DI Lecture Series
ATTACH;FMTTYPE=image/png:https://braiduk.org/wp-content/uploads/2025/01/Alex-talk-visual-for-socials.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20241121T160000
DTEND;TZID=UTC:20241121T170000
DTSTAMP:20260405T231408Z
CREATED:20241105T101053Z
LAST-MODIFIED:20250304T114443Z
UID:2774-1732204800-1732208400@braiduk.org
SUMMARY:‘Responsible AI Futures’ Hybrid Seminar – Dr Bahareh Heravi
DESCRIPTION:Responsible AI in Newsrooms\n\nAs AI becomes embedded in newsrooms\, it offers powerful tools that enhance reporting\, streamline workflows\, and engage audiences in innovative ways. However\, this integration raises critical questions about ethics\, responsibility\, and trust. This talk will explore how AI is used in newsrooms\, showcasing real-world use cases and discussing its potential\, concerns\, and challenges. Given the importance and scale of AI development\, its use in the media sector\, the potential harms it may cause at a global scale and the reputational damage it can cause to media organisations\, fostering AI literacy and responsible use of AI has become a pressing challenge. The talk will highlight the role of Responsible AI and the importance of AI literacy in addressing these issues. Enhancing AI literacy empowers journalists to navigate ethical complexities\, ensuring AI is used responsibly while maintaining journalistic integrity and public trust. \n\n\n\nBio\nDr Bahareh Heravi is an Associate Professor in AI & Media at the Institute for People-Centred AI at the University of Surrey. She is also a BRAID (Bridging Responsible AI Divide) Fellow at the BBC\, working on ‘Enhancing Responsible AI Literacy at BBC and beyond’. Bahareh specialises in Data & Computational Journalism\, the use of AI in journalism and media\, ethical and responsible AI\, and data storytelling. She has published widely on these topics and has extensive international experience working with journalists and numerous news media organisations on the use of data and AI for reporting and storytelling\, in research partnerships or as a consultant or advisor. \nDr Heravi is a founding Chair of the European Data & Computational Journalism Conference\, a steering committee member of the Computation + Journalism Symposium\, and the co-chair of the Research Data Alliance’s Science Communication Interest Group. 
She sits on the Irish Government’s Open Data Governance Board\, and is a member of the EDI Advisory Board of the Royal Statistical Society. Additionally\, she is an expert evaluator and a project monitor for the European Commission and the Research Data Alliance\, and serves on the ethical advisory board of the EU TITAN (Fighting Disinformation with Critical Thinking & AI) project. \nBefore joining the University of Surrey\, Bahareh was at University College Dublin\, where she was the founding director of the UCD Data Journalism Programme\, and served as a working group member of the Irish National Open Research Forum. She was the lead data scientist at The Irish Times during 2015-2016. Prior to her academic work\, Bahareh worked for over 10 years in industry\, designing\, developing and managing Information Systems in various small\, medium and large organisations in different sectors. \nDr Heravi was selected as one of Silicon Republic’s Sci-Tech 100\, and was named one of “22 high-flying scientists making the world a better place” in 2019.
URL:https://braiduk.org/event/responsible-ai-futures-hybrid-seminar-dr-bahareh-heravi
LOCATION:Inspace\, 1 Crichton Street\, Edinburgh\, EH8 9AB\, United Kingdom
ATTACH;FMTTYPE=image/png:https://braiduk.org/wp-content/uploads/2024/11/Bahareh-eventbrite-banner-1.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20241114T160000
DTEND;TZID=UTC:20241114T170000
DTSTAMP:20260405T231408Z
CREATED:20241105T100444Z
LAST-MODIFIED:20250304T133053Z
UID:2768-1731600000-1731603600@braiduk.org
SUMMARY:‘Responsible AI Futures’ Hybrid Seminar – Dr Peaks Krafft
DESCRIPTION:CREAATIF: Crafting Responsive Assessments of AI and Tech-Impacted Futures\nI will be presenting the first results and prospective policy directions of the BRAID Scoping project on which I am a co-investigator\, CREAATIF: Crafting Responsive Assessments of AI and Tech-Impacted Futures. This project was designed to engage with creative workers to co-develop impact assessments that address fundamental rights and working conditions in the context of generative AI. Through collaboration with several creative industry trade unions and professional bodies we have sought to ensure workers have a voice in the development of these technologies and corresponding labour policy. I will be presenting the results of a series of co-designed workshops and surveys that we undertook\, and I will discuss our next steps and future aspirations. \n\n\n\n\nBio\nDr Peaks Krafft (they/them) is Lecturer in Sociology at the University of Edinburgh\, Co-Director of Edinburgh’s MSc Digital Sociology\, and Co-Investigator on the BRAID Scoping project CREAATIF: Crafting Responsive Assessments of AI and Tech-Impacted Futures. Prior to joining Edinburgh\, Dr Krafft launched the University of the Arts London Creative Computing Institute’s MA Internet Equalities and lectured in Social Data Science at the University of Oxford Internet Institute. Dr Krafft received their PhD in Computer Science from MIT in 2017 and undertook postdoctoral work at the University of Washington Information School\, the University of California Berkeley Department of Psychology and the Data & Society Research Institute. Their publications cross AI\, cognitive science\, science & technology studies\, communications\, and sociology.
URL:https://braiduk.org/event/responsible-ai-futures-hybrid-seminar-dr-peaks-krafft
LOCATION:Inspace\, 1 Crichton Street\, Edinburgh\, EH8 9AB\, United Kingdom
CATEGORIES:DI Lecture Series
ATTACH;FMTTYPE=image/png:https://braiduk.org/wp-content/uploads/2024/11/Peaks.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20241031T160000
DTEND;TZID=UTC:20241031T170000
DTSTAMP:20260405T231408Z
CREATED:20241003T121252Z
LAST-MODIFIED:20250121T114027Z
UID:2630-1730390400-1730394000@braiduk.org
SUMMARY:'Responsible AI Futures' Hybrid Seminar - Dr Christopher Burr
DESCRIPTION:Trustworthy and Ethical Assurance of Digital Twins\nWatch the recording now:\n\n\nDigital twins are virtual representations of natural\, engineered\, or social systems that can be dynamically updated with data from the physical twin (e.g. smart building\, ocean\, human heart) using a variety of sensors and techniques. The increasing use of ML and AI to enhance their predictive capacities\, inform decision-making\, and drive scientific insights demands critical investigation. The BRAID-funded Trustworthy and Ethical Assurance of Digital Twins (TEA-DT) scoping research project has engaged several researchers and practitioners of digital twins across the domains of health\, natural environment\, and infrastructure. In this presentation\, Dr Christopher Burr will discuss the results of this scoping research and introduce the TEA platform—an open-source and community-centred tool that helps project teams develop and communicate justifiable assurance that a digital twin realises key ethical properties. \n\n\n\n\nBio\nDr Christopher Burr is Senior Researcher in Trustworthy Systems at the Alan Turing Institute—the UK’s national institute for data science and AI. He leads the Innovation and Impact Hub as part of the Turing’s Research and Innovation Cluster in Digital Twins. He is also principal investigator of an AHRC/BRAID-funded project\, Trustworthy and Ethical Assurance of Digital Twins (TEA-DT). He completed his PhD in Philosophy of Cognitive Science at the University of Bristol. \n\n\n\n\nRunning Order \n16.00 – Welcome by Ewa Luger \n16.10 – Talk by Christopher Burr \n16.40 – Q&A \n17.00 – End \nIn-person: Inspace\, 1 Crichton St\, Newington\, Edinburgh EH8 9AB\nOnline: Zoom \nPlease note limited seats are available at Inspace for in-person audiences\, so please book tickets in advance. For those joining online please visit the online event page for the Zoom joining link and password. 
\nFor inquiries about accessibility\, please contact the DI team at designinformatics@ed.ac.uk or visit the Access webpage for more information about the venue: https://inspace.ed.ac.uk/venue-access/
URL:https://braiduk.org/event/responsible-ai-futures-hybrid-seminar-dr-christopher-burr
LOCATION:Inspace\, 1 Crichton Street\, Edinburgh\, EH8 9AB\, United Kingdom
CATEGORIES:DI Lecture Series
ATTACH;FMTTYPE=image/png:https://braiduk.org/wp-content/uploads/2024/10/Chris.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20241010T160000
DTEND;TZID=UTC:20241010T170000
DTSTAMP:20260405T231408
CREATED:20240924T111442Z
LAST-MODIFIED:20250121T114238Z
UID:2621-1728576000-1728579600@braiduk.org
SUMMARY:‘Responsible AI Futures’ Hybrid Seminar – Dr Caterina Moruzzi
DESCRIPTION:What is the value of integrating AI into creative processes?\n\n\n\n\nWatch the recording now:\n\n\n\n\n \n\n\nThe question of whether machines can be creative has been at the centre of debates among scholars and practitioners since well before the origin of Artificial Intelligence as a recognised field of research. In one of its most notable definitions\, “Creativity is the ability to come up with ideas or artefacts that are new\, surprising and valuable.” (Boden\, 2004) This talk will reflect on how the third of these properties – value – encourages us to consider an alternative to the overused question “Can AI be creative?”\, namely “What is the value of integrating AI into creative processes?”. Drawing on insights from a recent longitudinal study involving five creative professionals as part of a BRAID Fellowship with Adobe\, the talk challenges prevailing assumptions about the advantages of incorporating AI in the creative workflow\, highlighting the importance of a thoughtful approach to AI integration by both creatives and technology companies. \n\n\n\n\nBio\nCaterina Moruzzi\, Chancellor’s Fellow in Design Informatics\, University of Edinburgh \nCaterina’s research spans the fields of human and artificial creativity\, philosophy of art\, and the philosophy of Artificial Intelligence. As BRAID Research Fellow\, she leads a collaboration with Adobe to promote the responsible integration of Artificial Intelligence tools into creative practices. As Co-Investigator in the CoSTAR and DeCADE projects\, funded by UK Research and Innovation\, she investigates the disruptive effects that emerging technological innovations have on the creative sector. At the forefront of research on modes of shared creativity between humans\, data\, and technology\, Caterina is Lead of the research cluster “Creativity\, AI\, and the Human” at the Edinburgh Futures Institute\, and Senior Fellow of the Future Unilab at the Una Europa Alliance. 
She is also part of the organising committee of the international Conference on Computation\, Communication\, Aesthetics & X\, within which\, in 2023\, she initiated the international summer school\, the School of X. \nBluesky: @caterinamoruzzi.bsky.social \nLinkedIn: Caterina Moruzzi
URL:https://braiduk.org/event/responsible-ai-futures-hybrid-seminar-dr-caterina-moruzzi
LOCATION:Inspace\, 1 Crichton Street\, Edinburgh\, EH8 9AB\, United Kingdom
ATTACH;FMTTYPE=image/png:https://braiduk.org/wp-content/uploads/2024/09/Caterina.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240919T160000
DTEND;TZID=UTC:20240919T170000
DTSTAMP:20260405T231408
CREATED:20240904T092415Z
LAST-MODIFIED:20250116T103250Z
UID:2600-1726761600-1726765200@braiduk.org
SUMMARY:‘Responsible AI Futures’ Hybrid Seminar Series – Universal Music Group
DESCRIPTION:Responsible AI in the Creative Industries – A Perspective from Universal Music Group\nWatch now:\n﻿\n \n\n\nAI in the service of artists and creativity is a wonderful thing. But AI that uses\, or worse yet\, appropriates their work – or their name\, image\, likeness\, or voice – without authorisation is not. \nThroughout all the technological revolutions\, Universal Music Group has had a long history of embracing innovation. AI is no different\, with UMG working to empower artists with AI’s benefits\, while at the same time protecting their rights. \nThis talk will examine recent developments in the field of AI and how they impact artists and the creative industries at large. While focused on music\, key opportunities and threats can be extrapolated across multiple arts-led sectors. The lecture will also seek to define what Responsible AI means for the creative industries and what AI developers\, policy makers and other practitioners can do to help shape an ecosystem that balances innovation with rewarding human artistry. \n\n\n\n\nAbout Universal Music Group:\nUniversal Music Group’s mission is to shape culture through the power of artistry. \nA community of entrepreneurs committed to creativity and innovation\, UMG owns and operates a broad array of businesses engaged in recorded music\, music publishing\, merchandising\, and audiovisual content in more than 60 countries. \nUMG identifies and develops recording artists and songwriters\, and produces\, distributes and promotes the most critically acclaimed and commercially successful music to delight and entertain fans around the world. \nIts vast catalog of recordings and songs stretches back over a century and comprises the largest\, most diverse and culturally rich collection of music ever assembled. 
\nAs digital technology refashions the world\, UMG’s unmatched commitment to lead in developing new services\, platforms and business models for the delivery of music and related content empowers innovators and allows new commercial and artistic opportunities to flourish. \nKnowing that music\, a powerful force for good in the world\, is unique in its ability to inspire people and bring them together\, UMG works with artists and employees to serve our communities. \nUMG is the home for music’s greatest artists\, innovators and entrepreneurs. \n\n\n\n\nBio\nCasandra Strauss\, Director\, New Digital Business & Innovation\, Global Digital Strategy\, Universal Music Group \nCasandra joined UMG’s Global Digital Strategy team in April 2022 and holds a dual remit. As part of the Digital Innovation team\, she engages with tech startups and the wider ecosystem to drive new business opportunities and support entrepreneurs developing music projects. As part of the Strategic Technology team\, Casandra focuses on AI\, with an emphasis on research\, internal policy and community engagement. \nPrior to UMG\, Casandra served as Director of Innovation & Special Projects at the BPI\, the UK’s recorded music industry trade body\, where she spearheaded the Music & Tech Springboard Programme and developed partnerships in this space\, among others. Earlier in her career\, Casandra held roles at INgrooves and Sony Music\, developing digital partnerships and supporting music partners including Apple and Spotify\, among many others. \n\n\n\n\nRunning Order\n16.00 – Welcome by Ewa Luger \n16.10 – Talk by Casandra Strauss \n16.40 – Q&A \n17.00 – End \nIn-person: Inspace\, 1 Crichton St\, Newington\, Edinburgh EH8 9AB\nOnline: Zoom \nPlease note limited seats are available at Inspace for in-person audiences\, so please book tickets in advance. For those joining online please visit the online event page for the Zoom joining link and password. 
\nFor inquiries about accessibility\, please contact the DI team at designinformatics@ed.ac.uk or visit the Access webpage for more information about the venue: https://inspace.ed.ac.uk/venue-access/
URL:https://braiduk.org/event/responsible-ai-futures-hybrid-seminar-series-universal-music-group
ATTACH;FMTTYPE=image/png:https://braiduk.org/wp-content/uploads/2024/09/UMG-final.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240626T160000
DTEND;TZID=UTC:20240626T170000
DTSTAMP:20260405T231408
CREATED:20240527T132630Z
LAST-MODIFIED:20240725T094018Z
UID:2337-1719417600-1719421200@braiduk.org
SUMMARY:The Walls Have Eyes with Petra Molnar
DESCRIPTION:Watch the recording now:\n﻿ \nPetra Molnar is a lawyer and researcher specializing in migration\, technology\, and human rights. She has worked on forced migration and refugee issues since 2008 as a settlement worker\, researcher\, and lawyer and holds a Juris Doctorate from the University of Toronto and an LL.M. specializing in International Law from the University of Cambridge. She co-runs the Refugee Law Lab at York University and is a Faculty Associate at the Berkman Klein Center for Internet and Society at Harvard University. Her study of the human rights impacts of AI and automated technologies on migration control is presented in her new book The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence\, which lays out a global story of the sharpening of borders through technological experiments\, reflecting on 6 years of on-the-ground work\, while introducing strategies of togetherness across physical and ideological borders. \nRunning Order \n16.00 – Welcome \n16.10 – Talk by Petra Molnar \n16.40 – Q&A \n17.00 – End \nIn-person: Inspace\, 1 Crichton St\, Newington\, Edinburgh EH8 9AB\nOnline: Zoom \nThis event is a collaboration between the BRAID (Bridging Responsible AI Divides) Programme\, the Edinburgh Futures Institute’s Centre for Technomoral Futures\, and the Law School at the University of Edinburgh. \n*Important Notice* This event will be photographed\, recorded and live streamed and the data published online and used for research\, promotional and reporting purposes by BRAID UK and the Centre for Technomoral Futures based at the University of Edinburgh. For further information please contact the organisers. \nPlease note limited seats are available at Inspace for in-person audiences\, so please book tickets in advance. For those joining online please visit the online event page for the Zoom joining link and password. 
Please note this event starts at 16:10 to allow time for in-person audiences\, including staff and students\, to move around campus. The Zoom webinar will open from 16:00 for online audiences.
URL:https://braiduk.org/event/the-walls-have-eyes-with-petra-molnar
LOCATION:Inspace\, 1 Crichton Street\, Edinburgh\, EH8 9AB\, United Kingdom
ATTACH;FMTTYPE=image/png:https://braiduk.org/wp-content/uploads/2024/05/The-Walls-Have-Eyes-Surviving-Migration-in-the-Age-of-Artificial-Intelligence-4.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240626T110000
DTEND;TZID=UTC:20240626T123000
DTSTAMP:20260405T231408
CREATED:20240529T141600Z
LAST-MODIFIED:20241129T152537Z
UID:2344-1719399600-1719405000@braiduk.org
SUMMARY:Petra Molnar: AI Ethnography Masterclass
DESCRIPTION:Led by Petra Molnar and chaired by Morgan Currie\, this Masterclass will set out Petra’s approach to doing ethnography in this important domain of AI studies\, and invite questions and discussion from attendees. \nAttendance can be either in person or online. It is open to researchers at any stage of their career\, but especially PhD students and postdoctoral fellows from any institution. \nThe event will take place from 11 am to 12.30 pm BST on Thursday 26 June\, at the Edinburgh Futures Institute. \nSpeaker biographies:\n \nPetra Molnar is a lawyer and anthropologist specializing in migration and human rights. She has been working in migrant justice since 2008\, first as a settlement worker and community organizer\, and now as a researcher and lawyer. She writes about digital border technologies\, immigration detention\, health and human rights\, gender-based violence\, as well as the politics of refugee\, immigration\, and international law. Petra co-runs the Refugee Law Lab at York University and is a Faculty Associate at the Berkman Klein Center for Internet and Society at Harvard University. Petra’s recently-released first book\, The Walls Have Eyes: Surviving Migration in The Age of Artificial Intelligence\, weaves together anthropological\, legal and political scholarship to understand how technology is being deployed by governments on the world’s most vulnerable with little regulation. \n \nMorgan Currie (chair) is Senior Lecturer in Data and Society in Science\, Technology and Innovation Studies at the University of Edinburgh. Her research focuses on the datafication and automation of government services and civil society oversight of these systems\, and she co-leads the Critical Data Studies Cluster at the Edinburgh Futures Institute. 
\nThis Masterclass is a collaboration between the BRAID (Bridging Responsible AI Divides) Programme\, the Edinburgh Futures Institute’s Centre for Technomoral Futures\, and the Law School at the University of Edinburgh. \nPlease note limited seats are available for in-person attendees\, so please book tickets in advance. For those joining online please register for online attendance and joining instructions will be sent to you ahead of the event.
URL:https://braiduk.org/event/petra-molnar-ai-ethnography-masterclass
LOCATION:Edinburgh Futures Institute\, 1 Lauriston Place\, Edinburgh\, EH3 9EF\, United Kingdom
ATTACH;FMTTYPE=image/png:https://braiduk.org/wp-content/uploads/2024/05/The-Walls-Have-Eyes-Surviving-Migration-in-the-Age-of-Artificial-Intelligence-6.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240516T160000
DTEND;TZID=UTC:20240516T170000
DTSTAMP:20260405T231408
CREATED:20240412T154432Z
LAST-MODIFIED:20240530T124804Z
UID:1853-1715875200-1715878800@braiduk.org
SUMMARY:‘Responsible AI Futures’ Hybrid Seminar Series – Bhargavi Ganesh
DESCRIPTION:Reframing Governance as Innovation: Steamboat accidents and their lessons for AI governance\nDespite the emergence of promising policy proposals worldwide\, AI governance is often discussed by policymakers and scholars alike as an intractable challenge. This is largely due to the technical/organisational complexity of sociotechnical AI systems\, and a fear that imperfect regulation will result in suppression of technological innovation. In this talk\, I will draw on the historical example of a previously “ungovernable” technology – the steamboat of the 1800s – to challenge latent scepticism and argue that the governance of AI should in and of itself be viewed as an exercise in innovation. Steamboat governance was iterative\, requiring many instances of trial and error before achieving its aims. Similarly\, global AI governance can be reframed as an exercise in policy innovation. In doing so\, we can both celebrate the progress that has already been made\, and remain optimistic about the emergence of new regulatory interventions in response to novel challenges generated by AI. \nBio\nBhargavi Ganesh is a PhD student at the University of Edinburgh\, working on mixed-method approaches for designing and evaluating the governance of AI. In the past year\, she has worked within the Bridging Responsible AI Divides (BRAID) programme on a consultation for the Department of Science\, Innovation\, and Technology (DSIT)\, and interned within the former Centre for Data\, Ethics\, and Innovation. She is a member of the School of Informatics’ Artificial Intelligence Applications Institute and the Edinburgh Futures Institute’s Centre for Technomoral Futures. Bhargavi is currently affiliated with the Regulation and Design Lab at the University of Edinburgh and the Governance and Responsible AI Lab at Purdue University. Prior to her PhD\, Bhargavi’s research focused on the impacts of consumer finance policies on marginalized groups. 
Bhargavi holds a Bachelor’s degree (with honours) from New York University and a Master’s in Computational Analysis and Public Policy from the University of Chicago. \nX: @Bhargavi_Ganesh \nWatch the recording now:
URL:https://braiduk.org/event/responsible-ai-futures-hybrid-seminar-series-bhargavi-ganesh
LOCATION:Inspace\, 1 Crichton Street\, Edinburgh\, EH8 9AB\, United Kingdom
CATEGORIES:DI Lecture Series
ATTACH;FMTTYPE=image/png:https://braiduk.org/wp-content/uploads/2024/04/Bhargavi-Ganesh-banner.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240502T160000
DTEND;TZID=UTC:20240502T170000
DTSTAMP:20260405T231408
CREATED:20240412T154543Z
LAST-MODIFIED:20240503T142505Z
UID:1850-1714665600-1714669200@braiduk.org
SUMMARY:‘Responsible AI Futures’ Hybrid Seminar Series – Dr Emily Postan
DESCRIPTION:Uncanny Kinds\nIn the fields of healthcare and health research\, there is particular interest in using machine learning (ML) to generate novel or refined diagnostic\, prognostic\, risk\, and treatment categories. This talk interrogates the nature of these categories and their implications for the people thus (re)categorised. It approaches these questions through the lens of the philosophical idea of ‘human kinds’. It asks to what extent health-related categories generated by ML might function as human kinds and\, if so\, whether they might differ\, in ethically significant ways\, from socially-originating kinds. In doing so\, it suggests that our understanding of responsible ML categorisation practices needs to look beyond technical capabilities and clinical utility to consider wider personal and social impacts. \nBio\nEmily Postan is a Chancellor’s Fellow in Bioethics at the University of Edinburgh Law School and a Deputy Director of the J Kenyon Mason Institute for Medicine\, Life Sciences and the Law. Her research principally focuses on ethical questions about the relationship between our bodies\, our health\, and our identities\, and the ways that health technologies affect these relationships. Her current research project ‘Identity by Algorithm’ explores the ethical implications of novel social categories generated by health applications of AI. Her wider research interests include addressing the ethical challenges posed by data sharing\, neurotechnologies\, genomics\, and assisted reproduction. Emily has a background in philosophy and as a policy manager at the Scottish Government. She received her PhD from Edinburgh Law School in 2017. Her monograph ‘Embodied Narratives: Protecting Identity Interests through Ethical Governance of Bioinformation’ was published by CUP in 2022. \nX: @emily_postan \nWatch the recording now:
URL:https://braiduk.org/event/responsible-ai-futures-hybrid-seminar-series-dr-emily-postan
LOCATION:Inspace\, 1 Crichton Street\, Edinburgh\, EH8 9AB\, United Kingdom
CATEGORIES:DI Lecture Series
ATTACH;FMTTYPE=image/png:https://braiduk.org/wp-content/uploads/2024/04/Emily-Postan-banner.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240411T160000
DTEND;TZID=UTC:20240411T170000
DTSTAMP:20260405T231408
CREATED:20240327T101139Z
LAST-MODIFIED:20240419T101838Z
UID:1791-1712851200-1712854800@braiduk.org
SUMMARY:‘Responsible AI Futures’ Hybrid Seminar Series – Dr Fiona Smith
DESCRIPTION:Can artistic approaches help invite more voices into discussions about AI in Healthcare?\nThe potential benefits of utilising AI technology in healthcare are vast\, but there are important practical\, technological\, ethical\, and legal implications that need to be addressed in order to safeguard patients. Doctor\, AI researcher and artist Fiona Smith is particularly interested in how we can ethically curate the diverse datasets that are required to make accurate\, fair models. Smith will be talking about how these themes informed her latest exhibition “The BOX”\, which premiered at the 2024 Edinburgh Science Festival. “The BOX” is the outcome of the Creator Residency ‘STEAM Imaging V’\, hosted by Fraunhofer MEVIS\, in collaboration with the Institute for Design Informatics\, the International Fraunhofer Talent School Bremen & the School Center Walle\, supported by Ars Electronica. \n\n\n\n\nBio\nDr Fiona Smith is a medical doctor and a UKRI-funded PhD student in the Biomedical AI CDT\, School of Informatics\, University of Edinburgh. Alongside her research and clinical work\, she is interested in the use of artistic approaches to highlight complex medical\, ethical and social issues. \nFiona Smith LinkedIn: https://www.linkedin.com/in/fiona-n-smith/ \nFiona Smith website: https://fionaniamhsmith.wixsite.com/ \nWatch the recording now:
URL:https://braiduk.org/event/braid-x-idi-hybrid-seminar-dr-fiona-smith
LOCATION:Inspace\, 1 Crichton Street\, Edinburgh\, EH8 9AB\, United Kingdom
CATEGORIES:DI Lecture Series
ATTACH;FMTTYPE=image/png:https://braiduk.org/wp-content/uploads/2024/03/Eventbrite-banner-3.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240321T160000
DTEND;TZID=UTC:20240321T170000
DTSTAMP:20260405T231408
CREATED:20240319T124925Z
LAST-MODIFIED:20240411T105713Z
UID:1775-1711036800-1711040400@braiduk.org
SUMMARY:‘Responsible AI Futures’ Hybrid Seminar Series – Dr Elinor Carmi
DESCRIPTION:A Feminist Critique of Digital and AI Consent\nThis talk presents a feminist critique of digital and AI consent and argues that the current system is flawed. The online surveillance adtech industry that funds the web had to use a mechanism that commodifies people\, rendering their behaviors into data – products that can be sold and traded to the highest bidder. In this way\, digital consent serves as an authorizing and legalizing instrument for the business model of spying on\, selling and trading people in the online ecosystem. This in turn helps to fuel the AI industry\, which uses these data as training data for the generation of text\, image and video. Using four key feminist approaches – process\, embodiment\, network and context – this talk argues that digital consent is a mechanism that transfers responsibility to people and enables exploitative-extractivist markets to exist. Consequently\, the broader educational effects of digital consent produce people as products with narrow agency and understanding. \n\n\n\n\nBio\nDr Elinor Carmi is a Senior Lecturer in Data Justice and Social Justice at the Sociology & Criminology Department at City University\, London\, UK. Dr Carmi is a digital rights advocate\, feminist\, researcher and journalist who has been working\, writing and teaching on data politics\, data literacies\, feminist approaches to media and data\, data justice and internet governance. Currently Dr Carmi works on the Nuffield Foundation project “Developing a Minimum Digital Living Standard”. Dr Carmi’s work contributes to emerging debates in academia\, policy\, health organisations and digital activism. She held a Parliamentary Academic Fellowship with the UK’s Digital\, Culture\, Media & Sport Committee\, and gave evidence on Digital Literacy to the House of Lords Committee on Democracy and Digital Technologies. 
In 2020\, Dr Carmi was invited by the World Health Organization (WHO) as an expert on data literacy and disinformation to the first scientific discussion on infodemiology. \nX – @Elinor_Carmi \nMastodon – @drPinkeee@assemblag.es \nBluesky – @elinorcarmi.bsky.social \nWatch the recording now:
URL:https://braiduk.org/event/braid-x-idi-hybrid-seminar-dr-elinor-carmi
LOCATION:Inspace\, 1 Crichton Street\, Edinburgh\, EH8 9AB\, United Kingdom
CATEGORIES:DI Lecture Series
ATTACH;FMTTYPE=image/jpeg:https://braiduk.org/wp-content/uploads/2024/03/Dr-Elinor-Carmi-BRAID-x-IDI-Hybrid-Seminar.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20240314T160000
DTEND;TZID=UTC:20240314T170000
DTSTAMP:20260405T231408
CREATED:20240322T153424Z
LAST-MODIFIED:20240404T161237Z
UID:1781-1710432000-1710435600@braiduk.org
SUMMARY:‘Responsible AI Futures’ Hybrid Seminar Series – Dr Bronwyn Jones
DESCRIPTION:What’s new in news? How AI is impacting journalism\nThe use of AI systems in newsrooms and across society is shifting the terrain in which journalism operates and changing what it means to make and consume news. We’re recommended personalised articles based on datafied readings of our behaviour\, while our clicks drive analytics that shape editorial decisions about what to report. Deepfakes and mis- and disinformation creep into our social news feeds and pose increasingly intractable verification challenges for journalists. From monitoring online sources to transcribing interviews and processing data for investigations\, AI-driven automation of rote tasks has become commonplace. However\, as news organisations rapidly adopt generative AI\, they are increasingly delegating core processes of communication and meaning-making to machines. In this talk\, Bronwyn weaves together three strands of her current work\, which spans practice\, research\, and policy. She asks: what happens if we co-write the ‘first draft of history’ with AI? If we semi-automate this ‘cornerstone of democracy’? And how might we innovate responsibly with AI for journalism in the public interest? \n\n\nBio\nDr Bronwyn Jones is a social scientist and journalist. As a Translational Fellow for the Bridging Responsible AI Divides (BRAID) programme\, she researches artificial intelligence and data-driven technologies in news production and their implications for the public sphere in democracies. As a DCMS Policy Fellow\, she is exploring the risks generative AI poses for journalism as an industry and form of knowledge production. At the BBC\, she covers regional news online and works with the research and development department to help newsrooms navigate technological change. 
Bronwyn is focused on fostering fruitful collaboration and translation between academia and industry; she works to ensure theory and evidence inform the development of public interest-driven socio-technical systems in the media industry\, and on-the-ground realities inform scholarly debate. \nX – @bronwynjo \n\n\nWatch the recording now:
URL:https://braiduk.org/event/responsible-ai-futures-hybrid-seminar-series-dr-bronwyn-jones
LOCATION:Inspace\, 1 Crichton Street\, Edinburgh\, EH8 9AB\, United Kingdom
CATEGORIES:DI Lecture Series
ATTACH;FMTTYPE=image/png:https://braiduk.org/wp-content/uploads/2024/03/LINKEDIN-4.png
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20240118
DTEND;VALUE=DATE:20240120
DTSTAMP:20260405T231408
CREATED:20240318T161048Z
LAST-MODIFIED:20241004T144217Z
UID:1759-1705536000-1705708799@braiduk.org
SUMMARY:AI and the Arts: Who’s Responsible (Artists’ and Curatorial events)
DESCRIPTION:Photo credit: Copyright George Torode 2024. \nThe AI and the Arts: Who’s Responsible – Artists’ and Curatorial events took place on 18th and 19th January at the Science Gallery London\, developed in partnership with FutureEverything. The events were undertaken as part of the UKRI AHRC BRAID Programme – Inspired Innovation theme\, focusing on Responsible AI within the Creative Arts\, to coincide with Science Gallery London’s exhibition ‘AI: Who’s Looking After You?’. This was the first in a series of UKRI BRAID Creative Community Engagement events intended to build upon existing networks and strengthen the AI and Arts ecosystem in the UK. \nWe had two days of excellent discussion with a dream line-up of 21 speakers\, presenters and provocateurs. Day one was a round-table workshop bringing together around 50 artists\, creatives and researchers from across the UK Arts and AI ecosystem\, focusing on concerns and potentials within Responsible AI and the Arts\, covering issues such as IP\, consent\, bias and responsibility. Day two was a theatre-style event attended by around 100 people working in the arts sector\, including curators\, producers\, gallerists\, funders and researchers\, focusing on best practice\, potentials and concerns around Responsible AI and the Arts. The event included curatorial case studies\, practicalities\, resources and experiences\, and thinking through the stories we tell within “what we see\, show and tell\, about whom\, with what\, and why” in relation to AI (Zylinska\, 2020: 153). \nActivities included lightning presentations\, participatory tasks in small groups and an attempt to create a collective consensus statement representing the opinions of the speakers and attendees at the close of each day (which also allowed room for outlying opinion). 
\nIt was wonderful to have such an engaged audience of early\, mid and established career practitioners\, and a range of organisations from artist-run spaces to large-scale museums and the commercial sector\, alongside freelance curators\, asking important questions about a wide range of issues around ethics\, responsibility and AI for the arts. The attendees represented a wide geographic spread\, with regional diversity across the UK\, from Inverness to Brighton\, Wales\, Liverpool and Newcastle (where possible supported by the BRAID access fund). \nKey discussions that emerged included: \n\nthe outdated concept of IP\nimbalances\, tensions and shifts in power between the arts\, tech sector and other fields\nhow best to negotiate the ethics and practicalities of the arts when employed as a testing ground for the tech sector – what does working ethically together look like?\nthe artist as benevolent witness\nsolidarity rather than competitiveness in the creative arts\nthe future of creative skills as discernment\, judgement and curation\nthe need for a representation and diversity of voices within AI and the Arts\, what this diversity looks like and where current power lies\nthe precarity of the arts\, cultural institutions and funding\nthe need for pastoral care for artists\, in addition to financial and skills support\nthe decades of practice already undertaken within the arts around Responsible AI\nchallenges for arts freelancers in attending\, accessing and participating in research activities\nhow best to bridge the activities of the creative arts into impacting policy and innovation\nthe need for cross-collaboration from all nations across the UK\naccess to tools\, technology and the cultural sector itself\n\nDocumentation of the rich consensus statements created can be found below\, accompanied by the fabulous illustrated notes created in-situ throughout the events by illustrator Jonny Glover. 
\nThanks to Science Gallery London and FutureEverything for supporting and developing the event with the BRAID team\, and to the Data + Design Lab at Edinburgh Futures Institute for facilitating the event activities. \nFollow us at @braid__uk and sign up to our newsletter for information about future events. \nList of Speakers: \n(Day 1 – Artists’ event)\ndmstfctn – artist duo\nReema Selhi – DACS artists’ rights management organisation\nMartin Zeilinger – University of Abertay\nDaniel Chavez Heras – Creative AI Lab/King’s College London\nYasmine Boudiaf – researcher and artist\nAlan Warburton – artist\, animator\nCaroline Sinders – human rights researcher and artist \n(Day 2 – Curatorial event)\nJoanna Zylinska – Creative AI Lab/King’s College London\nLuba Elliott – Independent Curator\nKay Watson – Serpentine Galleries\nHannah Redler Hawes – Open Data Institute\nImogen Hare – Gazelli Art House/gazell.io\nIrini Mirena – FutureEverything\nSarah Cook – University of Glasgow\nJennifer Wong – Science Gallery London\nNatalie Kane & Katherine Mitchell – V&A Museum\nDonna Holford-Lovell – NEoN Digital Arts\nDrew Hemment & Matjaz Vidmar – The New Real\nHelena Geilinger – Somerset House Studios
URL:https://braiduk.org/event/ai-and-the-arts-whos-responsible-artists-and-curatorial-events
LOCATION:Science Gallery London\, Great Maze Pond\, London\, SE1 9GU\, United Kingdom
ATTACH;FMTTYPE=image/jpeg:https://braiduk.org/wp-content/uploads/2024/03/DSC09272-scaled.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=UTC:20230915T120000
DTEND;TZID=UTC:20230915T200000
DTSTAMP:20260405T231408
CREATED:20240731T084901Z
LAST-MODIFIED:20240731T084901Z
UID:2588-1694779200-1694808000@braiduk.org
SUMMARY:BRAID Launch at BBC London Broadcasting House
DESCRIPTION:Watch the full livestream now:\n \nOn 15 September 2023\, BRAID hosted its launch event at BBC London Broadcasting House\, bringing together a diverse community of policymakers\, artists\, academics and industry representatives. With a keynote from Humane Intelligence CEO Dr Rumman Chowdhury and three panels of responsible AI experts discussing the latest issues\, challenges\, and opportunities AI poses to today’s world\, this event truly encapsulated BRAID’s aims and values as a programme.
URL:https://braiduk.org/event/braid-launch-at-bbc-london-broadcasting-house
ATTACH;FMTTYPE=image/jpeg:https://braiduk.org/wp-content/uploads/2023/10/DSC6806.jpg
END:VEVENT
END:VCALENDAR