no code implementations • 21 Apr 2024 • Jensen Hwa, Qingyu Zhao, Aditya Lahiri, Adnan Masood, Babak Salimi, Ehsan Adeli
We are able to enforce conditional independence of the diffusion autoencoder latent representation with respect to any protected attribute under the equalized odds constraint and show that this approach enables causal image generation with controllable latent spaces.
no code implementations • 14 Jun 2022 • Aditya Lahiri, Kamran Alipour, Ehsan Adeli, Babak Salimi
With the widespread use of sophisticated machine learning models in sensitive applications, understanding their decision-making has become an essential task.
no code implementations • 10 Jun 2022 • Kamran Alipour, Aditya Lahiri, Ehsan Adeli, Babak Salimi, Michael Pazzani
Despite their high accuracies, modern complex image classifiers cannot be trusted for sensitive tasks due to their unknown decision-making process and potential biases.
no code implementations • 13 Dec 2021 • Aditya Ahuja, Aditya Lahiri, Aniruddha Das
Figuring out the price of a listed Airbnb rental is an important and difficult task for both the host and the customer.
no code implementations • 14 Jun 2021 • Sahil Verma, Aditya Lahiri, John P. Dickerson, Su-In Lee
The goal of explainable ML is to intuitively explain the predictions of an ML system while adhering to the needs of various stakeholders.
no code implementations • 11 Sep 2020 • Aditya Lahiri, Narayanan Unny Edakunni
We use a Generative Adversarial Network for synthetic data generation and train a piecewise linear model in the form of Linear Model Trees to be used as the surrogate model. In addition to individual feature attributions, we also provide an accompanying context to our explanations by leveraging the structure and properties of our surrogate model.
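The surrogate idea above can be sketched in miniature. This is not the paper's implementation: it is a hypothetical depth-1 "linear model tree" on a single feature, where the tree splits the input space once and fits a simple linear model in each leaf. The leaf's slope acts as a local feature attribution, and the split condition supplies the accompanying context for the explanation.

```python
# Hedged sketch (not the paper's code): a depth-1 linear model tree
# surrogate over one feature, fit to outputs of a black-box model.

def fit_line(xs, ys):
    """Closed-form 1-D least squares; returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    var = sum((x - mx) ** 2 for x in xs)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / var
    return slope, my - slope * mx

def fit_linear_model_tree(xs, ys, threshold):
    """One split at `threshold`, then a linear model per leaf."""
    left = [(x, y) for x, y in zip(xs, ys) if x <= threshold]
    right = [(x, y) for x, y in zip(xs, ys) if x > threshold]
    return {
        "threshold": threshold,
        "left": fit_line(*zip(*left)),
        "right": fit_line(*zip(*right)),
    }

def predict(tree, x):
    """Route x to its leaf, apply that leaf's linear model."""
    slope, intercept = tree["left"] if x <= tree["threshold"] else tree["right"]
    return slope * x + intercept

# Toy stand-in for black-box outputs: piecewise linear around x = 0.
xs = [-4, -3, -2, -1, 1, 2, 3, 4]
ys = [2 * x if x <= 0 else -x for x in xs]
tree = fit_linear_model_tree(xs, ys, threshold=0)

# Local explanation for x = -2: "because x <= 0, the prediction follows
# a line with slope 2" -- the split is the context, the slope the attribution.
print(predict(tree, -2))  # -4.0
```

In the actual work the surrogate is trained on GAN-generated synthetic data around the instance being explained and handles many features; the sketch only illustrates why a tree of linear models yields both attributions (leaf coefficients) and context (the path of split conditions).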