no code implementations • 18 May 2022 • Yilun Zhou, Julie Shah
Feature attribution methods are popular for explaining neural network predictions, and they are often evaluated on metrics such as comprehensiveness and sufficiency, which are motivated by the principle that more important features -- as judged by the explanation -- should have larger impacts on model prediction.
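Both metrics are commonly computed by masking features in order of attributed importance and measuring the change in the model's output. A minimal sketch of that standard recipe (assuming, as hypothetical names, a 1-D numpy feature vector `x`, a `predict` function returning the probability of the originally predicted class, and an importance `ranking` of feature indices, most important first):

```python
import numpy as np

def comprehensiveness(predict, x, ranking, k, mask_value=0.0):
    """Drop in the predicted-class probability when the top-k attributed
    features are removed. Larger values suggest the explanation found
    features the model actually relies on."""
    x_masked = x.copy()
    x_masked[ranking[:k]] = mask_value          # remove the top-k features
    return predict(x) - predict(x_masked)

def sufficiency(predict, x, ranking, k, mask_value=0.0):
    """Drop in the predicted-class probability when only the top-k
    attributed features are kept. Smaller values suggest the top
    features alone are enough to reproduce the prediction."""
    x_kept = np.full_like(x, mask_value)
    x_kept[ranking[:k]] = x[ranking[:k]]        # keep only the top-k features
    return predict(x) - predict(x_kept)
```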
1 code implementation • 30 Apr 2022 • Yilun Zhou, Marco Tulio Ribeiro, Julie Shah
Interpretability methods are developed to understand the working mechanisms of black-box models, an understanding that is crucial to their responsible deployment.
1 code implementation • 14 Oct 2021 • Yiming Zheng, Serena Booth, Julie Shah, Yilun Zhou
We call for more rigorous and comprehensive evaluations of these models to ensure desired properties of interpretability are indeed achieved.
1 code implementation • 27 Apr 2021 • Yilun Zhou, Serena Booth, Marco Tulio Ribeiro, Julie Shah
Feature attribution methods are exceedingly popular in interpretable machine learning.
no code implementations • 14 Feb 2021 • Ganesh Ghalme, Vineet Nair, Vishakha Patil, Yilun Zhou
Fairness has emerged as an important concern in automated decision-making in recent years, especially when these decisions affect human welfare.
1 code implementation • 29 Dec 2020 • Yilun Zhou, Adithya Renduchintala, Xian Li, Sida Wang, Yashar Mehdad, Asish Ghoshal
Active learning (AL) algorithms may achieve better performance with less labeled data because the model guides the data selection process.
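As one common instance of model-guided selection (uncertainty sampling, shown here only for illustration and not necessarily the strategy studied in this paper), a minimal sketch assuming a hypothetical `predict_proba` function that returns class probabilities for a pool of unlabeled examples:

```python
import numpy as np

def uncertainty_sampling(predict_proba, unlabeled_pool, batch_size):
    """Select the unlabeled examples whose predicted class distribution
    has the highest entropy, i.e. where the current model is least sure."""
    probs = predict_proba(unlabeled_pool)                # shape (n, n_classes)
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    return np.argsort(-entropy)[:batch_size]             # indices to label next
```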
1 code implementation • 19 Feb 2020 • Serena Booth, Yilun Zhou, Ankit Shah, Julie Shah
To address these challenges, we introduce a flexible model inspection framework: Bayes-TrEx.
no code implementations • 16 Jan 2020 • Mycal Tucker, Yilun Zhou, Julie Shah
Robotic agents must adopt existing social conventions in order to be effective teammates.
no code implementations • 9 Jan 2020 • Serena Booth, Ankit Shah, Yilun Zhou, Julie Shah
In this paper, we consider the problem of exploring the prediction level sets of a classifier using probabilistic programming.
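A prediction level set is the set of inputs that the classifier maps to a given confidence. As a rough illustration only (the paper uses probabilistic programming rather than brute-force search), one could approximate such a set by rejection sampling from a hypothetical input prior `sample_prior`:

```python
def sample_level_set(predict_proba, sample_prior, target_conf, tol=0.05, n_draws=10000):
    """Collect inputs x with |p(class | x) - target_conf| < tol,
    i.e. points near the classifier's target-confidence level set."""
    hits = []
    for _ in range(n_draws):
        x = sample_prior()                       # draw a candidate input
        if abs(predict_proba(x) - target_conf) < tol:
            hits.append(x)
    return hits
```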
1 code implementation • WS 2019 • Yilun Zhou, Julie A. Shah, Steven Schockaert
Commonsense procedural knowledge is important for AI agents and robots that operate in a human environment.
no code implementations • 11 Sep 2019 • Yilun Zhou, Derrik E. Asher, Nicholas R. Waytowich, Julie A. Shah
Multi-agent reinforcement learning (MARL) extends (single-agent) reinforcement learning (RL) by introducing additional agents and (potentially) partial observability of the environment.
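A minimal sketch of the resulting interaction loop, assuming a hypothetical joint environment interface in which each agent receives only its own local observation and submits its own action:

```python
def run_episode(env, policies):
    """policies: dict mapping agent_id -> policy(observation) -> action.
    Each agent acts on its own (partial) observation; the environment
    advances on the joint action of all agents."""
    observations = env.reset()                    # dict: agent_id -> local observation
    total_reward = {agent_id: 0.0 for agent_id in policies}
    done = False
    while not done:
        actions = {aid: policies[aid](obs) for aid, obs in observations.items()}
        observations, rewards, done, _ = env.step(actions)   # joint transition
        for aid, r in rewards.items():
            total_reward[aid] += r
    return total_reward
```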
1 code implementation • 21 Feb 2019 • Yilun Zhou, Steven Schockaert, Julie A. Shah
In this paper we instead propose to learn to predict path quality from crowdsourced human assessments.
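A generic sketch of such a learned quality predictor (not the paper's model), assuming each path is summarized by a fixed-length feature vector and paired with a numeric crowd rating:

```python
from sklearn.ensemble import RandomForestRegressor

def fit_quality_model(path_features, crowd_ratings):
    """path_features: (n, d) array of per-path features (hypothetical, e.g.
    length, clearance, smoothness); crowd_ratings: (n,) numeric scores."""
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(path_features, crowd_ratings)
    return model

def rank_paths(model, candidate_features):
    scores = model.predict(candidate_features)    # predicted quality per path
    return scores.argsort()[::-1]                 # best-first ordering
```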