CASE

Citizens Juries on Artificial Intelligence

First Submitted By Annie Pottorff

Most Recent Changes By Scott Fletcher

General Issues
Science & Technology
Location
United Kingdom
Scope of Influence
National
Links
https://jefferson-center.org/citizens-juries-artificial-intelligence/
Videos
Citizens Juries on Artificial Intelligence
Start Date
End Date
Ongoing
No
Time Limited or Repeated?
Repeated over time
Purpose/Goal
Research
Approach
Co-production in form of partnership and/or contract with government and/or public bodies
Spectrum of Public Participation
Involve
Total Number of Participants
36
Open to All or Limited to Some?
Open to All
General Types of Methods
Deliberative and dialogic process
General Types of Tools/Techniques
Facilitate dialogue, discussion, and/or deliberation
Specific Methods, Tools & Techniques
Citizens' Jury
Legality
Yes
Facilitators
Yes
Facilitator Training
Professional Facilitators
Face-to-Face, Online, or Both
Face-to-Face
Types of Interaction Among Participants
Discussion, Dialogue, or Deliberation
Information & Learning Resources
Expert Presentations
Written Briefing Materials
Video Presentations
Decision Methods
Voting
Communication of Insights & Outcomes
Public Report
Primary Organizer/Manager
Jefferson Center
Type of Organizer/Manager
Non-Governmental Organization
Funder
The National Institute for Health Research (NIHR) Greater Manchester Patient Safety Translational Research Centre (PSTRC) and the Information Commissioner’s Office
Type of Funder
Academic Institution
Staff
Yes
Volunteers
No
Types of Change
Changes in people’s knowledge, attitudes, and behavior

Citizen Jurors were charged with exploring and deliberating about how important it is to be able to understand how an Artificial Intelligence system reaches a decision (“explainability”), even if the ability to do so could make its decisions less accurate.

Problems and Purpose

Artificial Intelligence is becoming a common tool in almost every industry, from healthcare to transportation to manufacturing and human services. But many people remain unsure about what AI is and is not, or how it might be used. Pop culture tends to portray AI technology as rogue cyborgs and evil computers (think HAL in 2001: A Space Odyssey), although in reality we encounter AI almost every day, in much more mundane situations: our emails are filtered in our inboxes, our music is suggested by Spotify, and Google completes our thoughts as we type. Part of the confusion and nervousness likely stems from people not knowing that AI is simply technology that makes it possible for “machines to learn from experience, adjust to new inputs and perform human-like tasks” (SAS).

The National Institute for Health Research (NIHR) Greater Manchester Patient Safety Translational Research Centre (PSTRC) and the Information Commissioner’s Office in the United Kingdom wanted to explore what people expect to know about how an AI system reaches a decision. These groups recognized a need to learn how people might weigh the benefits of an AI system's increased accuracy against the ability to show how that system reached its decision. To answer these questions, they commissioned the Jefferson Center to work with its partner Citizens Juries c.i.c. (UK) to design and implement a pair of Citizens Juries – one in Northern England (Manchester) and one in the West Midlands (Coventry). Jurors were charged with exploring and deliberating about four scenarios where an AI system would make a decision, and subsequently deciding how important it is to be able to understand how the AI system reached its decision (“explainability”), even if the ability to do so could make its decisions less accurate.

Because AI concepts are still unfamiliar to many people, can be packed with complex science and data, and have potential legal and regulatory impacts that remain unclear, researchers decided to use Citizens Juries. This approach provided participants with a chance to learn about AI, its applications, and how it functions, then generate guidance about how automated decision-making programs might be most effectively used and overseen.

Background History and Context

Know what events lead up to this initiative? Help us complete this section!

Organizing, Supporting, and Funding Entities

The Juries were commissioned by the National Institute for Health Research (NIHR) Greater Manchester Patient Safety Translational Research Centre (PSTRC) and the Information Commissioner’s Office. The Jefferson Center worked with its UK partner, Citizens Juries c.i.c., to design and facilitate the Juries.

Participant Recruitment and Selection

The first Jury was held in Coventry from February 18-22, and the process was repeated in Manchester from February 25-March 1. Each Jury was made up of 18 people, recruited via radio, newspaper, and job advertisements to represent a cross-section of the public.

Methods and Tools Used

Over five days, Jurors learned about different AI systems and considered the trade-offs between AI accuracy and explanations for decisions made by AI in four different scenarios:

  • Healthcare: diagnosing an acute stroke
  • Recruitment: screening job applications and shortlisting candidates
  • Healthcare: matching donated kidneys with potential recipients
  • Criminal Justice: evaluating whether someone will be charged with a minor offense or offered the opportunity to participate in a rehabilitation program

What Went On: Process, Interaction, and Participation

During the first two days, Jurors heard from four expert witnesses who prepared them to explore the four scenarios. The first topics covered an introduction to AI and the relevant laws, including data protection law, the information and explanations that the law requires for AI decisions, and the responsibilities and rights of AI software providers, citizens, and others.

Next, Jurors listened to two experts who were asked to present competing arguments for prioritizing either AI performance or explainability. Jurors deliberated, then identified reasons to prioritize AI performance over explainability and vice versa, along with the potential trade-offs of each.

On the third and fourth days, Jurors considered the scenarios relating to healthcare, criminal justice, and recruitment. For each case, they:

  1. Read the scenario
  2. Watched a video (recorded specifically for the Jury) of a person in that field (such as a person who works with stroke patients)
  3. Listened to a presentation from Dr. Allan Tucker, of Brunel University London’s Department of Computer Science, who reviewed how AI systems would be used in that scenario and responded to juror questions
  4. Deliberated
  5. Responded to questions about the given scenario and created a rationale for why AI decision-making systems should be used in the scenario

On the final day of the Jury, participants worked together to create a few general conclusions about AI and AI explainability, discussed when it is essential to provide an explanation about an automated decision, and responded to a series of additional questions from the Jury conveners.

Influence, Outcomes, and Effects

Over the course of each Jury, participants learned more about Artificial Intelligence, discussed the drawbacks and limitations of AI systems, and explored the benefits and opportunities that these systems can offer.

Both Juries drafted a statement to their neighbors about the experience. As Manchester participants wrote, “This opened our eyes and gave us an inside view of how AI works on us, how we can protect ourselves, and the ways this will change our lives in the future. We are now more aware of the importance of AI for ourselves, for future generations, and for the world.”

The results of these Juries are informing guidance under development by the ICO and the Alan Turing Institute on citizens’ rights to an explanation when decisions that affect people are made using AI. The findings will be presented to a range of stakeholders, developers, researchers, and public and private interests through a series of roundtable workshops convened by the ICO. These meetings will explore how consumer and citizen perspectives align with, and diverge from, those of the people developing and deploying these technologies. Based on these discussions, the ICO will publish a report on potential policies for the future oversight of automated decision-making programs later this year.

Analysis and Lessons Learned

Want to contribute an analysis of this initiative? Help us complete this section!

See Also

References

Citizens Juries on Artificial Intelligence – The Jefferson Center

External Links

The final report

Greater Manchester PSTRC website

Notes
