
In the era of data-driven AI, traditional notions of “openness” are being rapidly redefined. From Big Tech companies branding themselves as “open,” to AI models that selectively disclose only parts of their systems, claims of “openness” often obscure more than they reveal, stretching or misapplying the term beyond its original meaning.
In cultural heritage, academic and creative contexts, “openness” has a long and celebrated history, linked to values and practices of access, transparency, collaboration, reuse and public benefit. From the OpenGLAM movement to Open Access publishing, open source software and open research principles and practices, communities have built infrastructures, policies and workflows to promote openness, supporting data-driven research, creative reuse and public engagement.
Yet, publicly funded and openly licensed datasets are increasingly being scraped into commercial AI systems without consent or attribution, resulting in proprietary outputs that undermine the very spirit of openness those datasets were intended to promote. Legal developments like the EU AI Act add complexity by exempting so-called “open source” models without clearly defining “open.” This vagueness further enables “open-washing,” allowing AI stakeholders (from data owners to developers) to evade true transparency by keeping critical components such as training data, model weights, and even outputs closed.
Openness, however, should not be seen as a problem to be fixed or merely regulated, but as an opportunity. Instead of merely mitigating the challenges of commercial AI systems, or compromising our principles of openness, we must reimagine the technical, sociocultural, and legal foundations of AI toward more transparent, collaborative, and ethically grounded future systems and practices rooted in openness. By investing in open, community-driven AI infrastructures, openly licensed datasets, and open-source models, we have an opportunity to build systems that serve the public good and foster innovation beyond the confines of dominant corporate platforms. These open and collaborative practices can also more meaningfully represent the interests of individuals, researchers, creative communities and cultural stakeholders who sit beyond the classic remits of the creative industries but are equally crucial to our cultural fabric.
Open-source, community-driven AI is not just a technical initiative: it is a democratic one. It empowers researchers, educators, cultural practitioners, activists, and civil society to co-create a future where Responsible and Ethical AI is grounded in its true spirit: collaborative, transparent, and serving the public good.
This event will bring together researchers, creators, legal experts, cultural heritage professionals, activists, and members of the open-source AI community to critically engage with the evolving intersections of openness, AI, and cultural knowledge, and to re-imagine how a truly open, commons-based AI could be created.
This event is being organised by BRAID fellows Anna-Maria Sichani, Paula Westenberger and Nick Bryan-Kinns.
Find out more and register on Eventbrite: Openness and AI in culture, arts and humanities: a BRAID roundtable, Monday 24 November 2025, 11:00 AM.