Artificial intelligence (AI) is reaching our daily lives through chatbots, self-driving cars, and decision-making algorithms. It is also making its way into the NHS, and promises to create a step change in clinical decision making. Some have even suggested that AI diagnostic tools might one day replace radiologists. But like any new technology, AI can also introduce new risks to patients. The algorithms behind these tools are so complex that we often cannot understand why a particular decision was made – they are a ‘black box’. With issues this complex – both technically and ethically – can we still involve the public and give them a say in public policy?

Collaborating with the Information Commissioner’s Office (ICO), the NIHR Greater Manchester Patient Safety Translational Research Centre recently ran two citizens’ juries, in Coventry and Manchester, to involve the public in conversations about AI in healthcare, specifically the need for so-called ‘explainable’ AI. Our juries explored the trade-off between AI transparency and AI performance: should an explanation always be provided for AI decisions that affect people’s lives, even if that results in less accurate decisions? Such an explanation would enable someone to understand a decision, and possibly contest it, without needing to grasp the technicalities of the ‘black box’.