
Citizens’ Juries on Explainable AI

Project overview

Artificial intelligence (AI) is reaching our daily lives through chatbots, self-driving cars, and decision-making algorithms. It is also making its way into the NHS, and promises to create a step change in clinical decision making. Some people have even suggested that AI diagnostic tools might replace radiologists in the future. But like any new technology, AI can also introduce new risks to patients. The algorithms involved are so complex that we often cannot understand why a particular decision was made: they are a ‘black box’. With issues that are so complex, both technically and ethically, can we still involve the public and give them a say in public policy?

Collaborating with the Information Commissioner’s Office (ICO), the NIHR Greater Manchester Patient Safety Translational Research Centre recently ran two citizens’ juries, in Coventry and Manchester, to involve the public in conversations about AI in healthcare, and specifically the need for so-called ‘explainable’ AI. The juries explored the trade-off between AI transparency and AI performance, asking whether an explanation for AI decisions that affect people’s lives should always be provided, even if that results in less accurate decisions. Such an explanation would enable someone to understand a decision, and possibly contest it, without needing to understand the technicalities of the ‘black box’.

START: 1st September 2018

END: 31st May 2019

Funded by:

NIHR Greater Manchester Patient Safety Translational Research Centre; Information Commissioner’s Office

Methodology

We addressed this knowledge gap through the “Citizens’ Jury” method, originally developed by the Jefferson Center in the USA. In a citizens’ jury, a cross-section of the public (representative of the population in terms of age, gender, ethnicity, education, and employment status) are recruited and paid to tackle a public policy question. The jury meets for several days and is provided with reliable, impartial information from expert witnesses. The jury members ask questions of the experts and work together during small group discussions. AI is a complex area, and this method enables citizens to understand the subject and the trade-offs before forming their opinions.

We designed and ran two citizens’ juries, each with the same agenda, expert witnesses and facilitators, but with two different stratified samples of 18 members of the public from two different locations (Manchester and Coventry). The two juries were designed to allow a comparative analysis. Four scenarios were introduced: two in healthcare settings (one using AI to diagnose stroke, the other to find potential matches for a kidney transplant) and two in non-health settings (one screening job applications to determine who should be interviewed, and one in criminal justice where AI would determine who should be offered a rehabilitation programme rather than the usual court procedure). Information on each scenario was provided by expert witnesses: often in person, sometimes by pre-recorded video or through a live video link. At the end of each scenario, the jury members independently completed an online vote on the importance of receiving an explanation of an automated decision, and on the extent to which the lack of information on how a decision had been reached mattered to them.

The juries were carried out by Citizens’ Juries c.i.c. working in partnership with the Jefferson Center. They ran in February and March 2019, and were followed by a post-jury workshop with jurors and policymakers in May 2019 to reflect on and take forward the jury findings.

Benefits

In this project, the NIHR Greater Manchester Patient Safety Translational Research Centre worked with the Information Commissioner’s Office (ICO), which has the challenging task of regulating the use of AI. The results of the juries fed into national guidance that the ICO is producing on citizens’ rights to an explanation when decisions that affect people are made using AI. This guidance will apply to any organisation that produces or uses AI in the UK, so every UK citizen will indirectly benefit from the work.

Findings and outcomes

Initially, the jury members felt that AI was potentially eroding society by putting people out of work, that it relied on data and so was susceptible to hacking, and that ‘the robots might take over’. If AI cannot be explained, then what will happen if it starts going wrong? And would we even know? During their discussions, the juries soon realised that AI was already being used, from deciding who should be granted loans (one of the jurors worked in the banking industry) to dating apps that match people based on the information they provide. Negatives aside, they also recognised the benefits of AI: freeing up more leisure time, increasing profitability, and avoiding human error. One jury member said: “The process made us recognise the speed at which AI technology is developing and how it will continue to influence all areas of our lives”.

In the end, the juries concluded that whether or not an explanation for an AI decision is required depends on the context. In the two healthcare scenarios, both juries strongly favoured accuracy over explanation. “It is not essential to provide an explanation for an automated decision where it is a matter of life or death,” one member said. The speed afforded by AI was viewed as essential in stroke diagnosis, but not in kidney transplantation. The results from the other two scenarios were different, with the juries recognising the importance for individuals of receiving an explanation for decisions made about them. For instance, they argued that an explanation is necessary to show there is no bias in the criminal justice system. A majority of jury members felt that, in general, automated decisions need not be explained to individuals in contexts where human decisions would not usually be explained.

Researchers involved

Prof Niels Peek,
Dr Malcolm Oswald,
Dr Sudeh Cheraghi-Sohi,
Dr Lisa Riste