1 code implementation • NeurIPS 2021 • Burak Varici, Karthikeyan Shanmugam, Prasanna Sattigeri, Ali Tajer
This paper considers the problem of estimating the unknown intervention targets in a causal directed acyclic graph from observational and interventional data.
no code implementations • 28 Oct 2021 • Abhin Shah, Yuheng Bu, Joshua Ka-Wing Lee, Subhro Das, Rameswar Panda, Prasanna Sattigeri, Gregory W. Wornell
Selective regression allows a model to abstain from prediction when its confidence in making an accurate prediction is insufficient.
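As a quick illustration of the abstention mechanism described above, here is a minimal Python sketch (not the paper's method): the spread of an ensemble's predictions serves as a confidence proxy, and the model abstains whenever the spread exceeds a threshold.

```python
# Illustrative sketch of selective regression via abstention: an ensemble's
# predictive spread is used as a confidence proxy, and predictions are
# rejected on the least confident inputs.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=2000, n_features=10, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train a small ensemble on bootstrap resamples of the training set.
rng = np.random.default_rng(0)
models = []
for _ in range(5):
    idx = rng.integers(0, len(X_train), len(X_train))
    models.append(GradientBoostingRegressor().fit(X_train[idx], y_train[idx]))

preds = np.stack([m.predict(X_test) for m in models])   # (n_models, n_test)
mean_pred = preds.mean(axis=0)
spread = preds.std(axis=0)                               # confidence proxy

threshold = np.quantile(spread, 0.8)   # abstain on the 20% least confident inputs
accept = spread <= threshold
print("coverage:", accept.mean())
print("MSE on accepted:", np.mean((mean_pred[accept] - y_test[accept]) ** 2))
print("MSE on all:", np.mean((mean_pred - y_test) ** 2))
```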
no code implementations • 24 Sep 2021 • Vijay Arya, Rachel K. E. Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Q. Vera Liao, Ronny Luss, Aleksandra Mojsilovic, Sami Mourad, Pablo Pedemonte, Ramya Raghavendra, John Richards, Prasanna Sattigeri, Karthikeyan Shanmugam, Moninder Singh, Kush R. Varshney, Dennis Wei, Yunfeng Zhang
As artificial intelligence and machine learning algorithms become increasingly prevalent in society, multiple stakeholders are calling for these algorithms to provide explanations.
1 code implementation • 2 Jun 2021 • Soumya Ghosh, Q. Vera Liao, Karthikeyan Natesan Ramamurthy, Jiri Navratil, Prasanna Sattigeri, Kush R. Varshney, Yunfeng Zhang
In this paper, we describe an open source Python toolkit named Uncertainty Quantification 360 (UQ360) for the uncertainty quantification of AI models.
1 code implementation • 1 Jun 2021 • Jiri Navratil, Benjamin Elder, Matthew Arnold, Soumya Ghosh, Prasanna Sattigeri
Accurate quantification of model uncertainty has long been recognized as a fundamental requirement for trusted AI.
no code implementations • ICCV 2021 • Assaf Arbelle, Sivan Doveh, Amit Alfassy, Joseph Shtok, Guy Lev, Eli Schwartz, Hilde Kuehne, Hila Barak Levi, Prasanna Sattigeri, Rameswar Panda, Chun-Fu Chen, Alex Bronstein, Kate Saenko, Shimon Ullman, Raja Giryes, Rogerio Feris, Leonid Karlinsky
In this work, we focus on the task of Detector-Free Weakly Supervised Grounding (DF-WSG), which aims to solve WSG without relying on a pre-trained detector.
no code implementations • ICLR 2021 • Yue Meng, Rameswar Panda, Chung-Ching Lin, Prasanna Sattigeri, Leonid Karlinsky, Kate Saenko, Aude Oliva, Rogerio Feris
Temporal modelling is key to efficient video action recognition.
no code implementations • 1 Jan 2021 • Seungwook Han, Akash Srivastava, Cole Lincoln Hurwitz, Prasanna Sattigeri, David Daniel Cox
First, we generate images in low-frequency bands by training a sampler in the wavelet domain.
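A minimal sketch of the band decomposition this refers to, using PyWavelets; the sampler itself is not reproduced, and the image and wavelet choice below are placeholders.

```python
# Split an image into low- and high-frequency bands in the wavelet domain.
import numpy as np
import pywt

image = np.random.rand(64, 64)             # stand-in for a grayscale image

# One level of a 2D discrete wavelet transform: cA is the low-frequency band,
# (cH, cV, cD) are the horizontal/vertical/diagonal high-frequency bands.
cA, (cH, cV, cD) = pywt.dwt2(image, "haar")
print(cA.shape, cH.shape)                  # each band is 32x32

# A low-frequency-only reconstruction: zero the detail bands and invert.
low_freq_image = pywt.idwt2((cA, (np.zeros_like(cH),) * 3), "haar")
print(low_freq_image.shape)                # back to 64x64
```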
no code implementations • 30 Dec 2020 • Joshua Lee, Yuheng Bu, Prasanna Sattigeri, Rameswar Panda, Gregory Wornell, Leonid Karlinsky, Rogerio Feris
As machine learning algorithms grow in popularity and diversify to many industries, ethical and legal concerns regarding their fairness have become increasingly relevant.
no code implementations • 15 Nov 2020 • Umang Bhatt, Javier Antorán, Yunfeng Zhang, Q. Vera Liao, Prasanna Sattigeri, Riccardo Fogliato, Gabrielle Gauthier Melançon, Ranganath Krishnan, Jason Stanley, Omesh Tickoo, Lama Nachman, Rumi Chunara, Madhulika Srikumar, Adrian Weller, Alice Xiang
Explainability attempts to provide reasons for a machine learning model's behavior to stakeholders.
no code implementations • 25 Oct 2020 • Akash Srivastava, Yamini Bansal, Yukun Ding, Cole Hurwitz, Kai Xu, Bernhard Egger, Prasanna Sattigeri, Josh Tenenbaum, David D. Cox, Dan Gutfreund
Current autoencoder-based disentangled representation learning methods achieve disentanglement by penalizing the (aggregate) posterior to encourage statistical independence of the latent factors.
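For concreteness, a simplified sketch of this kind of regularizer (illustrative, not any specific paper's objective): push the covariance of the posterior means toward the identity so that latent dimensions become decorrelated, and add the resulting penalty to the usual VAE objective. The weights below are placeholders.

```python
# Decorrelation penalty on encoder posterior means, added to a VAE loss.
import torch

def decorrelation_penalty(mu, lambda_offdiag=10.0, lambda_diag=5.0):
    """mu: (batch, latent_dim) posterior means from the encoder."""
    mu_centered = mu - mu.mean(dim=0, keepdim=True)
    cov = mu_centered.T @ mu_centered / (mu.shape[0] - 1)      # (d, d)
    diag = torch.diagonal(cov)
    offdiag = cov - torch.diag(diag)
    return lambda_offdiag * (offdiag ** 2).sum() + lambda_diag * ((diag - 1) ** 2).sum()

mu = torch.randn(128, 10, requires_grad=True)
penalty = decorrelation_penalty(mu)   # added to the ELBO during training
penalty.backward()
print(float(penalty))
```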
no code implementations • 9 Sep 2020 • Seungwook Han, Akash Srivastava, Cole Hurwitz, Prasanna Sattigeri, David D. Cox
First, we generate images in low-frequency bands by training a sampler in the wavelet domain.
1 code implementation • NeurIPS 2020 • N. Joseph Tatro, Pin-Yu Chen, Payel Das, Igor Melnyk, Prasanna Sattigeri, Rongjie Lai
Yet, current curve finding algorithms do not consider the influence of symmetry in the loss surface created by model weight permutations.
1 code implementation • ECCV 2020 • Yue Meng, Chung-Ching Lin, Rameswar Panda, Prasanna Sattigeri, Leonid Karlinsky, Aude Oliva, Kate Saenko, Rogerio Feris
Specifically, given a video frame, a policy network is used to decide what input resolution should be used for processing by the action recognition model, with the goal of improving both accuracy and efficiency.
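A hedged sketch of this idea (not the paper's implementation): a small policy network scores a set of candidate resolutions for each frame, and a Gumbel-softmax sample keeps the choice differentiable during training. The resolution list and network sizes below are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

resolutions = [224, 168, 112, 84]          # assumed candidate input sizes

class ResolutionPolicy(nn.Module):
    def __init__(self, feat_dim=128, n_choices=len(resolutions)):
        super().__init__()
        self.encoder = nn.Sequential(nn.AdaptiveAvgPool2d(8), nn.Flatten(),
                                     nn.Linear(3 * 8 * 8, feat_dim), nn.ReLU(),
                                     nn.Linear(feat_dim, n_choices))

    def forward(self, frame):
        logits = self.encoder(frame)                 # (B, n_choices)
        # Differentiable one-hot choice during training.
        return F.gumbel_softmax(logits, tau=1.0, hard=True)

policy = ResolutionPolicy()
frame = torch.randn(2, 3, 224, 224)                  # a batch of frames
choice = policy(frame)                               # (2, 4) one-hot rows
chosen = [resolutions[i] for i in choice.argmax(dim=1).tolist()]
print(chosen)  # resolution to resize each frame to before recognition
```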
1 code implementation • ECCV 2020 • Zhiqiang Tang, Yunhe Gao, Leonid Karlinsky, Prasanna Sattigeri, Rogerio Feris, Dimitris Metaxas
The first is that most, if not all, modern augmentation search methods are offline, and the learned policies are isolated from their usage.
no code implementations • 10 Jun 2020 • Sainyam Galhotra, Karthikeyan Shanmugam, Prasanna Sattigeri, Kush R. Varshney
In this work, we consider fairness in the integration component of data management, aiming to identify features that improve prediction without adding any bias to the dataset.
no code implementations • 27 Apr 2020 • Jayaraman J. Thiagarajan, Prasanna Sattigeri, Deepta Rajan, Bindya Venkatesh
The widespread adoption of representation learning technologies in clinical decision making strongly emphasizes the need for characterizing model reliability and enabling rigorous introspection of model behavior.
1 code implementation • 15 Mar 2020 • Leonid Karlinsky, Joseph Shtok, Amit Alfassy, Moshe Lichtenstein, Sivan Harary, Eli Schwartz, Sivan Doveh, Prasanna Sattigeri, Rogerio Feris, Alexander Bronstein, Raja Giryes
Few-shot detection and classification have advanced significantly in recent years.
1 code implementation • ECCV 2020 • Moshe Lichtenstein, Prasanna Sattigeri, Rogerio Feris, Raja Giryes, Leonid Karlinsky
The field of Few-Shot Learning (FSL), or learning from very few (typically $1$ or $5$) examples per novel class (unseen during training), has received a lot of attention and seen significant performance advances in the recent literature.
no code implementations • 10 Feb 2020 • Bindya Venkatesh, Jayaraman J. Thiagarajan, Kowshik Thopalli, Prasanna Sattigeri
The hypothesis that sub-network initializations (lottery tickets) exist within the initializations of over-parameterized networks which, when trained in isolation, produce highly generalizable models has led to crucial insights into network initialization and has enabled efficient inference.
no code implementations • ICLR 2020 • Akash Srivastava, Yamini Bansal, Yukun Ding, Bernhard Egger, Prasanna Sattigeri, Josh Tenenbaum, David D. Cox, Dan Gutfreund
In this work, we tackle a slightly more intricate scenario where the observations are generated from a conditional distribution of some known control variate and some latent noise variate.
no code implementations • NeurIPS 2019 • Joshua Lee, Prasanna Sattigeri, Gregory Wornell
However, for practical, privacy, or other reasons, in a variety of applications we may have no control over the individual source task training, nor access to source training samples.
no code implementations • 18 Nov 2019 • Shivashankar Subramanian, Ioana Baldini, Sushma Ravichandran, Dmitriy A. Katz-Rogozhnikov, Karthikeyan Natesan Ramamurthy, Prasanna Sattigeri, Kush R. Varshney, Annmarie Wang, Pradeep Mangalath, Laura B. Kleiman
More than 200 generic drugs approved by the U.S. Food and Drug Administration for non-cancer indications have shown promise for treating cancer.
no code implementations • 29 Oct 2019 • Newton M. Kinyanjui, Timothy Odonga, Celia Cintas, Noel C. F. Codella, Rameswar Panda, Prasanna Sattigeri, Kush R. Varshney
We find that the majority of the data in the two datasets have ITA values between 34.5° and 48°, which are associated with lighter skin, and this is consistent with under-representation of darker-skinned populations in these datasets.
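ITA here is the individual typology angle computed from CIELAB L* and b* values; the standard formula is shown below only to make the quoted range concrete.

```python
# Individual typology angle (ITA) in degrees from CIELAB L* and b*.
import numpy as np

def individual_typology_angle(L, b):
    """Larger ITA values correspond to lighter skin tones."""
    return np.degrees(np.arctan2(np.asarray(L) - 50.0, np.asarray(b)))

# Example: a fairly light tone lands near the top of the 34.5-48 degree band.
print(individual_typology_angle(L=70.0, b=18.0))  # about 48 degrees
```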
no code implementations • 25 Sep 2019 • N. Joseph Tatro, Pin-Yu Chen, Payel Das, Igor Melnyk, Prasanna Sattigeri, Rongjie Lai
Empirically, this initialization is critical for efficiently learning a simple, planar, low-loss curve between networks that successfully generalizes.
1 code implementation • 9 Sep 2019 • Jayaraman J. Thiagarajan, Bindya Venkatesh, Prasanna Sattigeri, Peer-Timo Bremer
With rapid adoption of deep learning in critical applications, the question of when and how much to trust these models often arises, which drives the need to quantify the inherent uncertainties.
3 code implementations • 6 Sep 2019 • Vijay Arya, Rachel K. E. Bellamy, Pin-Yu Chen, Amit Dhurandhar, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Q. Vera Liao, Ronny Luss, Aleksandra Mojsilović, Sami Mourad, Pablo Pedemonte, Ramya Raghavendra, John Richards, Prasanna Sattigeri, Karthikeyan Shanmugam, Moninder Singh, Kush R. Varshney, Dennis Wei, Yunfeng Zhang
Equally important, we provide a taxonomy to help entities requiring explanations to navigate the space of explanation methods, not only those in the toolkit but also in the broader literature on explainability.
3 code implementations • 29 May 2019 • Ronny Luss, Pin-Yu Chen, Amit Dhurandhar, Prasanna Sattigeri, Yunfeng Zhang, Karthikeyan Shanmugam, Chun-Chen Tu
As the application of deep neural networks proliferates in numerous areas such as medical imaging, video surveillance, and self-driving cars, the need for explaining the decisions of these models has become a hot research topic, at both the global and local levels.
no code implementations • 30 Nov 2018 • Vidya Muthukumar, Tejaswini Pedapati, Nalini Ratha, Prasanna Sattigeri, Chai-Wah Wu, Brian Kingsbury, Abhishek Kumar, Samuel Thomas, Aleksandra Mojsilovic, Kush R. Varshney
Recent work shows unequal performance of commercial face classification services in the gender classification task across intersectional groups defined by skin type and gender.
no code implementations • NeurIPS 2018 • Abhishek Kumar, Prasanna Sattigeri, Kahini Wadhawan, Leonid Karlinsky, Rogerio Feris, William T. Freeman, Gregory Wornell
Deep neural networks, trained with large amounts of labeled data, can fail to generalize well when tested on examples from a target domain whose distribution differs from that of the training data, referred to as the source domain.
8 code implementations • 3 Oct 2018 • Rachel K. E. Bellamy, Kuntal Dey, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Kalapriya Kannan, Pranay Lohia, Jacquelyn Martino, Sameep Mehta, Aleksandra Mojsilovic, Seema Nagar, Karthikeyan Natesan Ramamurthy, John Richards, Diptikalyan Saha, Prasanna Sattigeri, Moninder Singh, Kush R. Varshney, Yunfeng Zhang
Such architectural design and abstractions enable researchers and developers to extend the toolkit with their new algorithms and improvements, and to use it for performance benchmarking.
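A hedged usage sketch of the AI Fairness 360 (AIF360) toolkit on a toy DataFrame; the class and metric names below come from the public API, but exact signatures and options should be checked against the toolkit documentation.

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

df = pd.DataFrame({"sex": [0, 0, 0, 1, 1, 1],
                   "score": [0.2, 0.4, 0.9, 0.5, 0.7, 0.8],
                   "label": [0, 0, 1, 1, 1, 1]})
dataset = BinaryLabelDataset(df=df, label_names=["label"],
                             protected_attribute_names=["sex"])

priv, unpriv = [{"sex": 1}], [{"sex": 0}]
metric = BinaryLabelDatasetMetric(dataset, unprivileged_groups=unpriv,
                                  privileged_groups=priv)
print("statistical parity difference:", metric.statistical_parity_difference())

# One of the toolkit's pre-processing mitigations: reweigh examples so that
# the label is independent of the protected attribute on the training set.
reweighed = Reweighing(unprivileged_groups=unpriv,
                       privileged_groups=priv).fit_transform(dataset)
print(reweighed.instance_weights)
```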
no code implementations • 20 Sep 2018 • Jayaraman J. Thiagarajan, Deepta Rajan, Prasanna Sattigeri
The hypothesis that computational models can be reliable enough to be adopted in prognosis and patient care is revolutionizing healthcare.
no code implementations • 24 May 2018 • Prasanna Sattigeri, Samuel C. Hoffman, Vijil Chenthamarakshan, Kush R. Varshney
In this paper, we introduce the Fairness GAN, an approach for generating a dataset that is plausibly similar to a given multimedia dataset, but is more fair with respect to protected attributes in allocative decision making.
no code implementations • 15 Nov 2017 • Huan Song, Jayaraman J. Thiagarajan, Prasanna Sattigeri, Andreas Spanias
To this end, we develop the DKMO (Deep Kernel Machine Optimization) framework, which creates an ensemble of dense embeddings using Nyström kernel approximations and utilizes deep learning to generate task-specific representations through the fusion of these embeddings.
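A minimal sketch of the underlying idea (not the DKMO implementation): build one dense Nyström embedding per kernel and fuse the embeddings as input representations for a downstream learner. Fusion here is plain concatenation, whereas DKMO learns the fusion with a deep network.

```python
import numpy as np
from sklearn.kernel_approximation import Nystroem
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# One Nystrom approximation per kernel bandwidth, each yielding a dense embedding.
embeddings = [
    Nystroem(kernel="rbf", gamma=g, n_components=64, random_state=0).fit_transform(X)
    for g in (0.01, 0.1, 1.0)
]

# Simple fusion by concatenation, then a linear model on the fused features.
fused = np.hstack(embeddings)
clf = LogisticRegression(max_iter=1000).fit(fused, y)
print("train accuracy:", clf.score(fused, y))
```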
1 code implementation • ICLR 2018 • Abhishek Kumar, Prasanna Sattigeri, Avinash Balakrishnan
Disentangled representations, where the higher level data generative factors are reflected in disjoint latent dimensions, offer several benefits such as ease of deriving invariant representations, transferability to other tasks, interpretability, etc.
no code implementations • NeurIPS 2017 • Abhishek Kumar, Prasanna Sattigeri, P. Thomas Fletcher
Semi-supervised learning methods using Generative Adversarial Networks (GANs) have shown promising empirical success recently.
no code implementations • 28 Dec 2016 • Huan Song, Jayaraman J. Thiagarajan, Prasanna Sattigeri, Karthikeyan Natesan Ramamurthy, Andreas Spanias
Kernel fusion is a popular and effective approach for combining multiple features that characterize different aspects of data.
no code implementations • 14 Dec 2016 • Jayaraman J. Thiagarajan, Prasanna Sattigeri, Karthikeyan Natesan Ramamurthy, Bhavya Kailkhura
In this paper, we propose the use of quantile analysis to obtain local scale estimates for neighborhood graph construction.
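An illustrative sketch of the general idea (not the paper's exact estimator): take a quantile of each point's neighbor distances as its local scale, and build the neighborhood-graph affinity with locally adapted bandwidths. The neighborhood size and quantile below are placeholders.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))

k, q = 15, 0.5                                   # neighborhood size, quantile
nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
dist, idx = nn.kneighbors(X)                     # column 0 is each point itself

# Local scale: the q-quantile of each point's distances to its k neighbors.
sigma = np.quantile(dist[:, 1:], q, axis=1)      # (n_points,)

# Affinity between neighbors, normalized by the product of local scales.
n = X.shape[0]
W = np.zeros((n, n))
for i in range(n):
    for j in idx[i, 1:]:
        W[i, j] = W[j, i] = np.exp(-np.sum((X[i] - X[j]) ** 2) / (sigma[i] * sigma[j]))
print(W.max(), (W > 0).sum())
```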
no code implementations • 22 Nov 2016 • Jayaraman J. Thiagarajan, Bhavya Kailkhura, Prasanna Sattigeri, Karthikeyan Natesan Ramamurthy
In this paper, we take a step in the direction of tackling the problem of interpretability without compromising the model accuracy.
no code implementations • 15 Jun 2016 • Prasanna Sattigeri, Aurélie Lozano, Aleksandra Mojsilović, Kush R. Varshney, Mahmoud Naghshineh
Innovation is among the key factors driving a country's economic and social growth.