Supporting User Engagement in Testing, Auditing, and Contesting AI

CSCW 2023 Workshop
Sunday, October 15, 2023

→ Submission Form (deadline 9/20/23)


In recent years, there has been growing interest in involving end users directly in testing, auditing, and contesting AI systems. Involving end users from diverse backgrounds can be essential to overcoming AI developers' blind spots and surfacing issues that would otherwise go undetected before causing real-world harm. Emerging bodies of work in CSCW and HCI have begun to explore ways to engage end users in testing and auditing AI systems, and to empower users to contest erroneous AI outputs. However, we still know little about how to support effective user engagement.

In this one-day workshop at CSCW 2023, we will bring together researchers and practitioners from academia, industry, and non-profit organizations to share ongoing efforts related to the workshop's theme. Central to our discussions will be the challenges of developing tools and processes that support user involvement, strategies to incentivize involvement, the asymmetric power dynamic between AI developers and end users, and the role of regulation in enhancing the accountability of AI developers and reducing the burdens placed on end users. Overall, we hope the workshop's outcomes will help orient the future of user engagement in building more responsible AI.

Call for Participation

We welcome participants who work in areas related to supporting user engagement in testing, auditing, and contesting AI. Interested participants will be asked to contribute a brief statement of interest to the workshop. Submissions can take several forms:

  1. Position paper or paper draft discussing or contributing to one or more of the themes highlighted in this proposal. Paper drafts may be under submission elsewhere, and there are no page limits.
  2. Video or audio demo of an interactive system relevant to user engagement in AI testing, auditing, and contesting. Submissions should be 3-5 minutes in length.
  3. "Encore" submission of a highly relevant conference or journal paper.
  4. Statement of research interest for attending the workshop. Submissions should be in ACM single-column format and no longer than 1 page, excluding references.

Each submission will be reviewed by 1-2 organizers and accepted based on the quality of the submission and the diversity of perspectives represented, to allow for a meaningful exchange of knowledge among a broad range of stakeholders.

→ Submission Form (deadline 9/20/23)

Key Dates

Submission deadline: Wednesday, September 20, 2023, 11:59pm AoE (extended from Friday, September 15, 2023)

Notification of acceptance: Monday, September 25, 2023

Workshop date: Sunday, October 15, 2023


The primary goal of this one-day, in-person workshop is to bring together researchers and AI practitioners from academia, industry, and non-profits to share their ongoing efforts around engaging end users in testing, auditing, and contesting AI systems. Before the workshop, we will survey participants about their accessibility needs, scheduling constraints, and ideal takeaways from attending, and use the results to finalize the schedule. Tentatively, we plan to organize the workshop around the following activities:

  • Welcome and Introduction (15 min): Opening talk by one of the organizers, who will warmly welcome the participants, present the topic, and outline the format of the workshop session
  • Keynote Speaker (30 min): An invited keynote speaker will share their insights on user-engaged AI testing, auditing, and contesting
  • Author Presentations + Panel One (60 min): Each participant will introduce themselves and their work. After the short presentation, each participant will join a panel facilitated by one of the organizers
  • Coffee Break (30 min)
  • Author Presentations + Panel Two (60 min): Each participant will introduce themselves and their work. After the short presentation, each participant will join a panel facilitated by one of the organizers
  • Lunch Break (120 min)
  • Design Workshop (group activity) (60 min): Design workshop activities in 4-5 breakout groups organized around the workshop themes, each facilitated by one of the organizers. Note: we are open to updating the agenda and incorporating new topics based on the submissions from workshop participants and their backgrounds.
  • Coffee Break (15 min)
  • Workshop Report (30 min): Participants from each breakout group report back and discuss with the larger group.
  • End Note (15 min): We plan to create a Slack or Discord channel to foster ongoing communication and collaboration among participants, extending the discussion beyond the workshop itself.

We will invite a keynote speaker and begin the workshop with a 30-minute keynote on their insights into user-engaged AI testing, auditing, and contesting. We will alternate between participant presentations, Q&A, smaller group activities, and plenary discussions throughout the day. By facilitating interaction among different groups of attendees, we will give participants ample opportunities to exchange ideas and perspectives. To streamline communication and coordination, we plan to use a Slack channel and a shared Google Drive to coordinate activities before, during, and after the workshop. During the workshop, we will provide essential materials such as sticky notes, markers, and whiteboards to facilitate the design workshop activities. Furthermore, to ensure accessibility and ongoing documentation, a designated note-taker will capture key points in a shared Google Doc, allowing participants to review discussions and contributions even after the workshop concludes. Depending on support from the conference organizers, we will include a variety of video-, audio-, and text-based ways of engaging to support participants with a diversity of visual, hearing, speech, and cognitive abilities.