no code implementations • 29 Sep 2021 • Amit Dhurandhar, Karthikeyan Natesan Ramamurthy, Kartik Ahuja, Vijay Arya
Local Interpretable Model-agnostic Explanations (LIME) is one of the most popular methods used to explain black-box models at a per-example level.
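LIME's core idea can be sketched compactly: perturb the input, query the black box on the perturbations, and fit a proximity-weighted linear surrogate whose coefficients serve as the local explanation. The sketch below is a minimal stdlib-only illustration of that idea for a one-dimensional input; the function names (`black_box`, `explain_locally`) and the kernel/radius settings are illustrative assumptions, not the LIME library's API.

```python
import math
import random

def black_box(x):
    """Hypothetical nonlinear model we treat as opaque."""
    return x * x

def explain_locally(model, x0, n_samples=500, radius=0.5, kernel_width=0.25, seed=0):
    """Perturb around x0, weight samples by proximity, and fit a weighted
    linear surrogate y ~ a + b*x. Returns the local slope b, which plays
    the role of LIME's per-feature explanation weight."""
    rng = random.Random(seed)
    xs = [x0 + rng.uniform(-radius, radius) for _ in range(n_samples)]
    ys = [model(x) for x in xs]
    # Exponential kernel: perturbations near x0 count more.
    ws = [math.exp(-((x - x0) ** 2) / kernel_width ** 2) for x in xs]
    # Closed-form weighted least squares for the slope.
    total = sum(ws)
    xbar = sum(w * x for w, x in zip(ws, xs)) / total
    ybar = sum(w * y for w, y in zip(ws, ys)) / total
    num = sum(w * (x - xbar) * (y - ybar) for w, x, y in zip(ws, xs, ys))
    den = sum(w * (x - xbar) ** 2 for w, x in zip(ws, xs))
    return num / den

slope = explain_locally(black_box, x0=3.0)
# For f(x) = x^2 the true local slope at x0 = 3 is 6, so slope is close to 6.
```

Even though the black box is quadratic, the weighted linear fit recovers its local behavior near the queried point, which is exactly the trade-off LIME makes: local fidelity in exchange for a simple, interpretable surrogate.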
no code implementations • 24 Sep 2021 • Vijay Arya, Rachel K. E. Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Q. Vera Liao, Ronny Luss, Aleksandra Mojsilovic, Sami Mourad, Pablo Pedemonte, Ramya Raghavendra, John Richards, Prasanna Sattigeri, Karthikeyan Shanmugam, Moninder Singh, Kush R. Varshney, Dennis Wei, Yunfeng Zhang
As artificial intelligence and machine learning algorithms become increasingly prevalent in society, multiple stakeholders are calling for these algorithms to provide explanations.
no code implementations • 18 Jan 2020 • Karan Dabas, Nishtha Madan, Vijay Arya, Sameep Mehta, Gautam Singh, Tanmoy Chakraborty
To preserve anonymity and obfuscate their identity on online platforms, users may morph their text and portray themselves as a different gender or demographic.
2 code implementations • 6 Sep 2019 • Vijay Arya, Rachel K. E. Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Q. Vera Liao, Ronny Luss, Aleksandra Mojsilović, Sami Mourad, Pablo Pedemonte, Ramya Raghavendra, John Richards, Prasanna Sattigeri, Karthikeyan Shanmugam, Moninder Singh, Kush R. Varshney, Dennis Wei, Yunfeng Zhang
Equally important, we provide a taxonomy to help entities requiring explanations to navigate the space of explanation methods, not only those in the toolkit but also in the broader literature on explainability.
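Such a taxonomy can be thought of as a small decision procedure: a consumer answers a few questions (does the explanation describe the data or the model? should it be local or global? should the model itself be interpretable, or is a post-hoc explanation acceptable?) and is routed to a family of methods. The sketch below is a simplified paraphrase of that navigation idea; the question names and category labels are illustrative, not the toolkit's actual API or exact taxonomy.

```python
def suggest_explanation_family(explains_model, local, self_interpretable):
    """Route a consumer's requirements to a family of explanation methods.
    Arguments and labels are illustrative simplifications of a taxonomy
    like the one described for the toolkit."""
    if not explains_model:
        return "data explanation (e.g., prototypical examples of the dataset)"
    if self_interpretable:
        return "directly interpretable model (e.g., rule-based models)"
    if local:
        return "local post-hoc explanation (e.g., LIME-style surrogates)"
    return "global post-hoc explanation (e.g., a surrogate of the whole model)"

family = suggest_explanation_family(explains_model=True, local=True,
                                    self_interpretable=False)
```

The value of organizing methods this way is that no single explanation technique fits all stakeholders; the routing questions make the choice explicit rather than ad hoc.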
no code implementations • 5 Dec 2018 • Vijay Arya, Sayandeep Sen, Palani Kodeswaran
Typically, a data owner (API provider) develops a model, often over proprietary data, and leverages the infrastructure services of a cloud vendor for hosting and serving API requests.
no code implementations • 20 Nov 2017 • Manish Kesarwani, Bhaskar Mukhoty, Vijay Arya, Sameep Mehta
In this work, we present a cloud-based extraction monitor that can quantify the extraction status of models by observing the query and response streams of both individual and colluding adversarial users.
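One simple way to quantify extraction status is to measure how much of the model's input space the adversaries' combined queries have covered: a user (or a coalition of colluding users) whose queries span most of the decision space has learned more of the model. The sketch below uses a grid-coverage metric as a stand-in for such a measure; the class name, the grid discretization, and the coverage statistic are illustrative assumptions, not the paper's exact algorithm.

```python
class ExtractionMonitor:
    """Tracks per-user query streams and reports the fraction of a
    discretized input space covered by a (possibly colluding) set of users."""

    def __init__(self, bounds, cells_per_dim=10):
        self.bounds = bounds              # [(lo, hi), ...] one pair per feature
        self.cells_per_dim = cells_per_dim
        self.seen = {}                    # user -> set of visited grid cells

    def _cell(self, query):
        """Map a query point to its grid cell index."""
        idx = []
        for (lo, hi), v in zip(self.bounds, query):
            frac = min(max((v - lo) / (hi - lo), 0.0), 1.0)
            idx.append(min(int(frac * self.cells_per_dim), self.cells_per_dim - 1))
        return tuple(idx)

    def observe(self, user, query):
        """Record one query from the monitored request stream."""
        self.seen.setdefault(user, set()).add(self._cell(query))

    def extraction_status(self, users):
        """Coverage achieved by the joint query stream of the given users."""
        covered = set().union(*(self.seen.get(u, set()) for u in users))
        total = self.cells_per_dim ** len(self.bounds)
        return len(covered) / total

monitor = ExtractionMonitor(bounds=[(0.0, 1.0), (0.0, 1.0)], cells_per_dim=4)
monitor.observe("alice", (0.1, 0.1))
monitor.observe("bob", (0.9, 0.9))
# Colluding users are assessed jointly: 2 of 16 cells covered.
status = monitor.extraction_status(["alice", "bob"])
```

Evaluating coalitions jointly is the key point: two users who each stay under a per-user threshold can still trip the monitor when their combined coverage is high.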