Supporting User Engagement in Testing, Auditing, and Contesting AI

CSCW 2023 Workshop
Sunday, October 15, 2023, 10am-5pm

→ Submission Form (deadline 9/20/23)

Overview

In recent years, there has been growing interest in involving end users directly in testing, auditing, and contesting AI systems. Involving end users from diverse backgrounds can be essential for overcoming AI developers' blind spots and surfacing issues that would otherwise go undetected before they cause real-world harm. Emerging bodies of work in CSCW and HCI have begun to explore ways to engage end users in testing and auditing AI systems, and to empower users to contest erroneous AI outputs. However, we still know little about how to support effective user engagement.

In this one-day workshop at CSCW 2023, we will bring together researchers and practitioners from academia, industry, and non-profit organizations to share ongoing efforts related to the workshop's theme. Central to our discussions will be the challenges encountered in developing tools and processes to support user involvement, strategies for incentivizing involvement, the asymmetric power dynamic between AI developers and end users, and the role of regulation in enhancing the accountability of AI developers and reducing potential burdens on end users. Overall, we hope the workshop's outcomes will help orient future work on user engagement in building more responsible AI.

Call for Participation

We welcome participants whose work relates to supporting user engagement in testing, auditing, and contesting AI. Interested participants will be asked to contribute a brief submission to the workshop. Submissions can take several forms:

  1. Position paper or paper draft discussing or contributing to one or more of the themes highlighted in this call. Paper drafts may be under submission elsewhere, and there are no page limits.
  2. Video or audio demo of an interactive system relevant to user engagement in AI testing, auditing, and contesting. Submissions should be 3-5 minutes in length.
  3. "Encore" submission of a highly-relevant conference or journal paper.
  4. Statement of research interest for attending the workshop. Submissions should be in ACM single column format and no longer than 1 page, excluding references.

Each submission will be reviewed by 1-2 organizers and accepted based on the quality of the submission and the diversity of perspectives it brings, to allow for a meaningful exchange of knowledge among a broad range of stakeholders.

→ Submission Form (deadline 9/20/23)

Key Information

Submission deadline: Wednesday, September 20, 2023, 11:59pm AoE (extended from Friday, September 15, 2023, 11:59pm AoE)

Notification of acceptance: Monday, September 25, 2023

Workshop date: Sunday, October 15, 2023, 10:00am-5:00pm

Workshop location: Boundary Waters C, Hyatt Regency Minneapolis

Agenda

The primary goal of this one-day, in-person workshop is to bring together researchers and AI practitioners from academia, industry, and non-profits to share their ongoing efforts around engaging end users in testing, auditing, and contesting AI systems.

  • Welcome and Introduction (10:00-10:15am): Organizers will welcome the participants, present the topic, and outline the format of the workshop session.
  • Coffee break (10:15-10:30am): Coffee and pastries, provided by the CSCW organizers, will be available in the hotel lobby.
  • Author Presentation Group A (10:30-11:40am): Authors in Group A will briefly introduce their work. Each short presentation (5-6 minutes) will be followed by a brief Q&A with the audience.
  • Panel A (11:40-12:00pm): One of the organizers will lead a panel discussion featuring authors from Group A, providing them with an extended opportunity to share their work and interests.
  • Lunch break (12:00-1:20pm)
  • Author Presentation Group B (1:20-2:30pm): Authors in Group B will briefly introduce their work. Each short presentation (5-6 minutes) will be followed by a brief Q&A with the audience.
  • Coffee break (2:30-3:00pm): Coffee and pastries, provided by the CSCW organizers, will be available in the hotel lobby.
  • Panel B (3:00-3:20pm): One of the organizers will lead a panel discussion featuring authors from Group B, providing them with an extended opportunity to share their work and interests.
  • Workshop Activity (3:20-4:20pm): We will discuss a set of questions in 4 breakout groups, each facilitated by an organizer. Through this activity, we aim first to develop a shared vision (or visions) for the preferred future of AI system testing, auditing, and contesting. We then hope to identify and understand challenges in engaging end users effectively, and to brainstorm and co-create potential solutions to address these challenges.
  • Workshop Report (4:20-4:50pm): Participants from each breakout group report back and discuss with the larger group.
  • Endnote (4:50-5:00pm)
  • (Optional) Dinner social event (starting 6:30pm)

Accepted Work

Workshop papers

  • Understanding the Effect of Counterfactual Explanations on Trust and Reliance on AI for Human-AI Collaborative Clinical Decision Making. Min Hun Lee and Chong Jun Chew. Singapore Management University.
  • Answerability by Design in Sociotechnical Systems. Dilara Keküllüoğlu, Louise Hatherall, Nayha Sethi, Tillmann Vierkant, Shannon Vallor, Nadin Kokciyan, Michael Rovatsos. The University of Edinburgh. [link]
  • Has Wizard of Oz Testing Passed Its Use By Date? James Simpson. Macquarie University. [link]
  • GPTutor: an open-source AI pair programming tool alternative to Copilot. Eason Chen, Ray Huang, Bo Shen Liang, Damien Chen, Pierce Huang. Carnegie Mellon University. [link]
  • Enhancing User Engagement in AI Auditing Through Gamification and Storytelling. Weirui Peng, Mengyi Wei, Kyrie Zhixuan Zhou. Columbia University, Technical University of Munich, University of Illinois at Urbana-Champaign. [link]
  • Uncovering Effective Steering Strategies in Code-Generating LLMs with Socratic Feedback. Zheng Zhang, Alex C. Williams, Jonathan Buck, Xiaopeng Li, Matthew Lease, Li Erran Li. University of Notre Dame, AWS AI. [link]
  • The Potential of Diverse Youth as Stakeholders in Identifying and Mitigating Algorithmic Bias for a Future of Fairer AI. Jaemarie Solyst, Ellia Yang, Shixian Xie, Amy Ogan, Jessica Hammer, Motahhare Eslami. Carnegie Mellon University. [link]
  • Re-imagining Fairness in Machine Learning: A Framework for Building in Socio-cultural and Contextual Awareness. Corey Jackson, Tallal Ahmed, Devansh Saxena. University of Wisconsin - Madison, Carnegie Mellon University. [link]
  • Auditing Personalized Recommendation Algorithms Through Spontaneous Click-Based User Interaction. Qunfang Wu, Zitong Huang, Yaqi Zhang, Chung-Chin Eugene Liu. University of North Carolina at Chapel Hill, Syracuse University. [link]
  • A Democratic Platform for Engaging with Disabled Community in Generative AI Development. Deepak Giri and Erin Brady. Indiana University. [link]
  • Patient-Facing Machine Learning for Prenatal Stress Reduction in the United States: A Co-design Toolkit. Mara Ulloa, Negar Kamali, Glenn J Fernandes, Elizabeth Soyemi, Miranda Beltzer, Niharika Gopinath Menon, Nabil Alshurafa, Maia Jacobs. Northwestern University.
  • Social Network Timeline Bias and Amplification. Nathan Bartley. University of Southern California. [link]
  • (Beyond) Reasonable Doubt: Challenges that Public Defenders Face in Scrutinizing AI in Court. Angela Jin et al. University of California, Berkeley.

Statements of Interest

  • Creator-driven Auditing of YouTube Algorithms. Yoonseo Choi and Juho Kim. KAIST.
  • Challenges and opportunities for user-engagement in testing and contesting AI: Early findings from Latin America. Claudia Lopez. Universidad Técnica Federico Santa Maria.
  • Curating AI: Trust as the User's Compass. Ruyuan Wan. University of Notre Dame.
  • Centering user perspectives in an interdisciplinary AI training data quality framework. Experiences from the KITQAR project. Lou Therese Brandner and Simon David Hirsbrunner. University of Tübingen.
  • Engaging Users in Auditing Generative AI Systems by Crowdsourcing Prompts. Howard Han. Carnegie Mellon University.

Organizers