no code implementations • 21 Mar 2024 • Lucas Monteiro Paes, Dennis Wei, Hyo Jin Do, Hendrik Strobelt, Ronny Luss, Amit Dhurandhar, Manish Nagireddy, Karthikeyan Natesan Ramamurthy, Prasanna Sattigeri, Werner Geyer, Soumya Ghosh
To address the challenges of text as output and long text inputs, we propose a general framework called MExGen that can be instantiated with different attribution algorithms.
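The entry above only names the framework, so here is a minimal sketch of the general idea behind perturbation-based attribution when the model's output is itself text: perturb units of the input, map each perturbed output back to a number with a similarity-style scalarizer, and score each unit by how much removing it changes the output. The names `generate` and `similarity` are placeholders rather than the MExGen API, and leave-one-out is only one of the attribution algorithms such a framework could plug in.

```python
from typing import Callable, List

def leave_one_out_attribution(
    spans: List[str],                          # input text split into units (sentences, phrases, ...)
    generate: Callable[[str], str],            # placeholder: prompt text -> model-generated text
    similarity: Callable[[str, str], float],   # placeholder scalarizer: two texts -> a real score
) -> List[float]:
    """Score each input span by the drop in output similarity when that span is removed."""
    reference_output = generate(" ".join(spans))
    base = similarity(reference_output, reference_output)   # self-similarity as the reference score
    scores = []
    for i in range(len(spans)):
        perturbed_input = " ".join(s for j, s in enumerate(spans) if j != i)
        perturbed_output = generate(perturbed_input)
        scores.append(base - similarity(reference_output, perturbed_output))
    return scores
```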
no code implementations • 19 Mar 2024 • Pierre Dognin, Jesus Rios, Ronny Luss, Inkit Padhi, Matthew D Riemer, Miao Liu, Prasanna Sattigeri, Manish Nagireddy, Kush R. Varshney, Djallel Bouneffouf
Developing value-aligned AI agents is a complex undertaking and an ongoing challenge in the field of AI.
no code implementations • 22 Jun 2022 • Q. Vera Liao, Yunfeng Zhang, Ronny Luss, Finale Doshi-Velez, Amit Dhurandhar
We argue that one way to close the gap is to develop evaluation methods that account for different user requirements in these usage contexts.
no code implementations • 8 Feb 2022 • Ronny Luss, Amit Dhurandhar, Miao Liu
Many works in explainable AI have focused on explaining black-box classification models.
1 code implementation • 2 Feb 2022 • Keerthiram Murugesan, Vijay Sadashivaiah, Ronny Luss, Karthikeyan Shanmugam, Pin-Yu Chen, Amit Dhurandhar
Knowledge transfer between heterogeneous source and target networks and tasks has received a lot of attention recently, as large amounts of quality labeled data can be difficult to obtain in many applications.
no code implementations • 29 Sep 2021 • Ronny Luss, Amit Dhurandhar, Miao Liu
Many works in explainable AI have focused on explaining black-box classification models.
no code implementations • ICLR 2022 • Keerthiram Murugesan, Vijay Sadashivaiah, Ronny Luss, Karthikeyan Shanmugam, Pin-Yu Chen, Amit Dhurandhar
Knowledge transfer between heterogeneous source and target networks and tasks has received a lot of attention recently, as large amounts of quality labeled data can be difficult to obtain in many applications.
no code implementations • 24 Sep 2021 • Vijay Arya, Rachel K. E. Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Q. Vera Liao, Ronny Luss, Aleksandra Mojsilovic, Sami Mourad, Pablo Pedemonte, Ramya Raghavendra, John Richards, Prasanna Sattigeri, Karthikeyan Shanmugam, Moninder Singh, Kush R. Varshney, Dennis Wei, Yunfeng Zhang
As artificial intelligence and machine learning algorithms become increasingly prevalent in society, multiple stakeholders are calling for these algorithms to provide explanations.
no code implementations • 16 Sep 2021 • Saneem Chemmengath, Amar Prakash Azad, Ronny Luss, Amit Dhurandhar
Contrastive explanations for understanding the behavior of black-box models have gained a lot of attention recently, as they provide potential for recourse.
no code implementations • 13 Sep 2021 • Ronny Luss, Amit Dhurandhar
To overcome these limitations, we propose a novel method called the Path-Sufficient Explanations Method (PSEM), which outputs a sequence of sufficient explanations of strictly decreasing size (or value) for a given input, from the original input down to a minimally sufficient explanation. This sequence can be thought of as tracing the local boundary of the model in a smooth manner, providing better intuition about the local model behavior for the specific input (a toy sketch of such a path follows below).
Explainable Artificial Intelligence (XAI)
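As a concrete, if simplistic, illustration of a path of sufficient explanations: the greedy sketch below repeatedly drops the feature whose removal hurts the model's confidence least, while the prediction stays close to its original confidence. This is an assumption-laden toy rather than the authors' PSEM algorithm; `predict_proba`, `baseline`, and `threshold` are placeholders for any classifier, any masking value, and any sufficiency criterion.

```python
import numpy as np

def sufficiency_path(x, predict_proba, baseline, threshold=0.9):
    """Toy greedy path (not the authors' PSEM algorithm): starting from the full
    input, repeatedly drop the feature whose removal hurts confidence least,
    stopping once the next step would fall below `threshold` times the original
    confidence. Returns boolean masks of strictly decreasing size."""
    x = np.asarray(x, dtype=float)
    target = int(np.argmax(predict_proba(x)))
    original_conf = predict_proba(x)[target]
    keep = np.ones(len(x), dtype=bool)
    path = [keep.copy()]
    while keep.sum() > 1:
        best_i, best_conf = None, -np.inf
        for i in np.flatnonzero(keep):
            trial = keep.copy()
            trial[i] = False
            masked = np.where(trial, x, baseline)   # dropped features are replaced by a baseline value
            conf = predict_proba(masked)[target]
            if conf > best_conf:
                best_i, best_conf = i, conf
        if best_conf < threshold * original_conf:    # no remaining drop keeps the explanation sufficient
            break
        keep[best_i] = False
        path.append(keep.copy())
    return path
```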
no code implementations • 25 Sep 2019 • Amit Dhurandhar, Karthikeyan Shanmugam, Ronny Luss
Our method also leverages the per-sample hardness estimate of the simple model, which is not the case with prior works that primarily consider the complex model's confidences/predictions; it is thus conceptually novel.
2 code implementations • 6 Sep 2019 • Vijay Arya, Rachel K. E. Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Q. Vera Liao, Ronny Luss, Aleksandra Mojsilović, Sami Mourad, Pablo Pedemonte, Ramya Raghavendra, John Richards, Prasanna Sattigeri, Karthikeyan Shanmugam, Moninder Singh, Kush R. Varshney, Dennis Wei, Yunfeng Zhang
Equally important, we provide a taxonomy to help entities requiring explanations to navigate the space of explanation methods, not only those in the toolkit but also in the broader literature on explainability.
no code implementations • ICML 2020 • Amit Dhurandhar, Karthikeyan Shanmugam, Ronny Luss
Our method also leverages the per-sample hardness estimate of the simple model, which is not the case with prior works that primarily consider the complex model's confidences/predictions; it is thus conceptually novel.
2 code implementations • 29 May 2019 • Ronny Luss, Pin-Yu Chen, Amit Dhurandhar, Prasanna Sattigeri, Yunfeng Zhang, Karthikeyan Shanmugam, Chun-Chen Tu
As the application of deep neural networks proliferates in numerous areas such as medical imaging, video surveillance, and self-driving cars, the need for explaining the decisions of these models has become a hot research topic, at both the global and local levels.
1 code implementation • 31 Jul 2018 • Jie Chen, Ronny Luss
The theory assumes that one can easily compute an unbiased gradient estimator, which is usually the case due to the sample average nature of empirical risk minimization.
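For completeness, the standard fact this sentence refers to: when the empirical risk is a sample average, the gradient of a uniformly drawn per-sample loss is an unbiased estimator of the full gradient,

$$
F(w) = \frac{1}{n}\sum_{i=1}^{n} f_i(w),
\qquad
\mathbb{E}_{i \sim \mathrm{Unif}\{1,\dots,n\}}\big[\nabla f_{i}(w)\big]
= \frac{1}{n}\sum_{i=1}^{n} \nabla f_i(w)
= \nabla F(w).
$$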
no code implementations • NeurIPS 2018 • Amit Dhurandhar, Karthikeyan Shanmugam, Ronny Luss, Peder Olsen
Our transfer method involves a theoretically justified weighting of samples during the training of the simple model using confidence scores of these intermediate layers.
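To make the weighting idea concrete, here is a minimal sketch under assumed inputs: suppose `probe_confidences[i, l]` holds the confidence that a probe attached to intermediate layer `l` of the complex network assigns to sample `i`'s true label. Averaging across layers and using the result as sample weights when fitting a simple model captures the spirit of the approach; the paper's exact weighting scheme and its theoretical justification are more involved.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def fit_simple_model_with_probe_weights(probe_confidences, X, y, weight_floor=1e-3):
    """Weight each training sample by the average confidence its true label
    receives from probes on the complex model's intermediate layers, then fit
    a simple (interpretable) model with those sample weights."""
    weights = probe_confidences.mean(axis=1)        # average probe confidence per sample
    weights = np.maximum(weights, weight_floor)     # keep every sample with at least a small weight
    simple_model = DecisionTreeClassifier(max_depth=4)
    simple_model.fit(X, y, sample_weight=weights)
    return simple_model
```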
1 code implementation • 24 Jun 2018 • Anna Choromanska, Benjamin Cowen, Sadhana Kumaravel, Ronny Luss, Mattia Rigotti, Irina Rish, Brian Kingsbury, Paolo DiAchille, Viatcheslav Gurev, Ravi Tejwani, Djallel Bouneffouf
Despite significant recent advances in deep neural networks, training them remains a challenge due to the highly non-convex nature of the objective function.
4 code implementations • NeurIPS 2018 • Amit Dhurandhar, Pin-Yu Chen, Ronny Luss, Chun-Chen Tu, Pai-Shun Ting, Karthikeyan Shanmugam, Payel Das
Given an input, we find what should be minimally and sufficiently present (viz. important object pixels in an image) to justify its classification, and analogously what should be minimally and necessarily absent.
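Schematically, and omitting the constants and data-manifold regularizer used in the paper, the "necessarily absent" part (the pertinent negative) can be cast as a search for a small, sparse perturbation $\delta$ that flips the prediction away from the original class $y_0$:

$$
\min_{\delta}\;
c \cdot \max\!\Big\{\,[\mathrm{Pred}(x_0+\delta)]_{y_0} \;-\; \max_{k \neq y_0}\,[\mathrm{Pred}(x_0+\delta)]_{k},\; -\kappa \Big\}
\;+\; \beta \|\delta\|_{1} \;+\; \|\delta\|_{2}^{2}.
$$

The "sufficiently present" part (the pertinent positive) is found analogously, by retaining only a minimal portion of $x_0$ that still yields the original prediction.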
no code implementations • 12 Jul 2017 • Amit Dhurandhar, Vijay Iyengar, Ronny Luss, Karthikeyan Shanmugam
We provide a novel notion of what it means to be interpretable, looking past the usual association with human understanding.
no code implementations • 9 Jun 2017 • Amit Dhurandhar, Vijay Iyengar, Ronny Luss, Karthikeyan Shanmugam
This leads to the insight that the improvement in the target model is not only a function of the oracle model's performance, but also its relative complexity with respect to the target model.
no code implementations • 19 Feb 2014 • Aleksandr Y. Aravkin, Anju Kambadur, Aurelie C. Lozano, Ronny Luss
We consider new formulations and methods for sparse quantile regression in the high-dimensional setting.
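For reference, the building block behind such formulations is the check (pinball) loss for the $\tau$-th quantile combined with a sparsity-inducing penalty; an $\ell_1$-penalized version reads

$$
\rho_\tau(u) = u\big(\tau - \mathbb{1}\{u < 0\}\big),
\qquad
\min_{\beta}\; \sum_{i=1}^{n} \rho_\tau\!\big(y_i - x_i^{\top}\beta\big) \;+\; \lambda \|\beta\|_{1},
$$

with the caveat that the paper's specific high-dimensional formulations and penalties may differ from this standard objective.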
no code implementations • NeurIPS 2010 • Ronny Luss, Saharon Rosset, Moni Shahar
A new algorithm for isotonic regression is presented based on recursively partitioning the solution space.
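The underlying problem is standard: fit values as close as possible to the observations while respecting a given partial order $\preceq$ on the indices,

$$
\min_{\hat{y} \in \mathbb{R}^{n}}\; \sum_{i=1}^{n} w_i\big(y_i - \hat{y}_i\big)^{2}
\quad \text{subject to} \quad \hat{y}_i \le \hat{y}_j \ \text{ whenever } i \preceq j.
$$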
no code implementations • NeurIPS 2007 • Ronny Luss, Alexandre d'Aspremont
In this paper, we propose a method for support vector machine classification using indefinite kernels.
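The difficulty is visible in the standard SVM dual: with a positive semidefinite kernel matrix $K$ the problem below is concave in $\alpha$, but an indefinite $K$ destroys that structure,

$$
\max_{\alpha}\; \mathbf{1}^{\top}\alpha \;-\; \tfrac{1}{2}\,\alpha^{\top}\,\mathrm{diag}(y)\,K\,\mathrm{diag}(y)\,\alpha
\quad \text{subject to} \quad 0 \le \alpha_i \le C,\ \; y^{\top}\alpha = 0,
$$

so standard convex quadratic programming solvers and their guarantees no longer apply directly.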