From the silly and absurd to the misleading and outright dangerous, AI-generated media is now all around us.

Social media feeds are filling up with AI slop. AI assistants are distorting and misrepresenting news content. And as more people turn to chat apps like ChatGPT and AI summaries like Google’s AI Overviews for quick answers, hallucinations and errors persist.  

It’s now easier than ever to spread misinformation and disinformation as generative AI enables cheaper and quicker content creation and distribution at scale. This means malicious actors can create networks of AI-generated websites, deepfakes, and persuasive narratives in attempts to influence public opinion, and AI spammers can monetise their slop. 

Amid all this noise, many people are struggling to work out what to believe and who to trust.  

So, we brought 70 people of all ages together with experts from the BBC and researchers from the Responsible Innovation Centre to grapple with these issues at this year’s Edinburgh Science Festival.  

They learnt about the capabilities and limitations of the latest generative AI tools from the BBC Blue Room, whose team demonstrated how these systems work. A few takeaways from the session:

  • Summarised news from chatbots is increasing in popularity 
  • News can be distorted due to inaccuracies introduced by AI assistants and AI-powered search engines 
  • Bias and distortion can be introduced deliberately or accidentally – and this is very difficult to mitigate 
  • A combination of AI detection and metadata labels can help, like those developed by C2PA (the Coalition for Content Provenance and Authenticity), but they’re not a silver bullet  

Participants then joined our pop-up newsroom to step into the shoes of a journalist confronting the big AI stories and the ethical dilemmas they raise.

First up was the issue of AI-generated science disinformation videos being recommended to children. We asked: How would you cover this story for the demographic most affected – kids?

We had a lively discussion about whose voices need to be heard, who can be held accountable, and how to be transparent with viewers when AI-generated content is shown on screen. We then heard from the BBC Newsround journalist who covered the story about the ethical and editorial considerations she weighed up.

Our pop-up newsroom then tackled the growing question of AI clones being used for scams and fraud. The room discussed whether and when it's acceptable to use AI as part of journalistic storytelling, drawing on the example of a BBC reporter who sent his AI clone to a meeting – before hearing from the reporter himself about the editorial justification for fooling his colleagues.

A number of participants told us they felt poorly prepared for the big disruptions AI is bringing. They wanted more guidance on the resources, tools and techniques available to help them check information and recognise AI manipulation.

As the need for critical AI literacies grows, opportunities like these to bring people together – from 12-year-old schoolchildren to 80-year-old grandparents – become more important than ever. They’re not just a way to share research and expertise, but a valuable chance for us all to learn from each other.

Dr Bronwyn Jones, Public Media and Democracy Lead, BRAID