no code implementations • 8 Feb 2022 • Chacha Chen, Shi Feng, Amit Sharma, Chenhao Tan
Our key result is that, without assumptions about task-specific intuitions, explanations can potentially improve human understanding of the model decision boundary, but they cannot improve human understanding of the task decision boundary or model error.
no code implementations • 4 Feb 2022 • Tomas Geffner, Javier Antoran, Adam Foster, Wenbo Gong, Chao Ma, Emre Kiciman, Amit Sharma, Angus Lamb, Martin Kukla, Nick Pawlowski, Miltiadis Allamanis, Cheng Zhang
Causal inference is essential for data-driven decision making across domains such as business engagement, medical treatment or policy making.
no code implementations • 26 Dec 2021 • Victor Chernozhukov, Carlos Cinelli, Whitney Newey, Amit Sharma, Vasilis Syrgkanis
Therefore, simple plausibility judgments on the maximum explanatory power of omitted variables (in explaining treatment and outcome variation) are sufficient to place overall bounds on the size of the bias.
no code implementations • 24 Nov 2021 • Sai Srinivas Kancheti, Abbavaram Gowtham Reddy, Vineeth N Balasubramanian, Amit Sharma
A trained neural network can be interpreted as a structural causal model (SCM) that provides the effect of changing input variables on the model's output.
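As a loose illustration of this interventional reading (the network weights, input distribution, and effect definition below are invented for illustration, not taken from the paper), one can estimate an average causal effect of an input on a fixed network's output by a do-style intervention on that input while averaging over the other input:

```python
import math
import random

random.seed(1)

# A tiny fixed 2-layer network; weights are arbitrary illustrative values.
W1 = [[0.5, -0.3], [0.8, 0.1]]   # hidden-unit weights for inputs (x1, x2)
W2 = [1.0, -0.5]                 # output weights

def forward(x1, x2):
    h = [math.tanh(W1[j][0] * x1 + W1[j][1] * x2) for j in range(2)]
    return W2[0] * h[0] + W2[1] * h[1]

def average_causal_effect(feature, a, b, n=5000):
    # ACE of do(feature=a) vs do(feature=b) on the network output,
    # averaging the other input over an assumed standard normal.
    total = 0.0
    for _ in range(n):
        other = random.gauss(0, 1)
        if feature == "x1":
            total += forward(a, other) - forward(b, other)
        else:
            total += forward(other, a) - forward(other, b)
    return total / n

print(round(average_causal_effect("x1", 1.0, 0.0), 3))
```

With these particular weights, setting x1 from 0 to 1 increases the output on average; the same recipe applies to any differentiable or black-box network whose inputs can be intervened on.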
no code implementations • 28 Oct 2021 • Mathias Lécuyer, Sang Hoon Kim, Mihir Nanavati, Junchen Jiang, Siddhartha Sen, Amit Sharma, Aleksandrs Slivkins
We develop a methodology, called Sayer, that leverages implicit feedback to evaluate and train new system policies.
1 code implementation • 7 Oct 2021 • Divyat Mahajan, Shruti Tople, Amit Sharma
Through extensive evaluation on a synthetic dataset and image datasets like MNIST, Fashion-MNIST, and Chest X-rays, we show that a lower OOD generalization gap does not imply better robustness to MI attacks.
1 code implementation • 27 Aug 2021 • Amit Sharma, Vasilis Syrgkanis, Cheng Zhang, Emre Kiciman
Estimation of causal effects involves crucial assumptions about the data-generating process, such as directionality of effect, presence of instrumental variables or mediators, and whether all relevant confounders are observed.
1 code implementation • 23 Aug 2021 • Jason Lequyer, Reuben Philip, Amit Sharma, Laurence Pelletier
Recent approaches have allowed for the denoising of single noisy images without access to any training data aside from that very image.
no code implementations • 27 May 2021 • Varun Chandrasekaran, Darren Edge, Somesh Jha, Amit Sharma, Cheng Zhang, Shruti Tople
However, for real-world applications, the privacy of data is critical.
no code implementations • 24 Feb 2021 • Amit Sharma
In this note we introduce a notion of free cofibrations of permutative categories.
Category Theory • Algebraic Topology
no code implementations • 11 Jan 2021 • Alexander Lavin, Ciarán M. Gilligan-Lee, Alessya Visnjic, Siddha Ganju, Dava Newman, Atılım Güneş Baydin, Sujoy Ganguly, Danny Lange, Amit Sharma, Stephan Zheng, Eric P. Xing, Adam Gibson, James Parr, Chris Mattmann, Yarin Gal
The development and deployment of machine learning (ML) systems can be executed easily with modern tools, but the process is typically rushed and treated as a means to an end.
no code implementations • 21 Dec 2020 • Naman Goel, Alfonso Amayuelas, Amit Deshpande, Amit Sharma
For example, in multi-stage settings where decisions are made in multiple screening rounds, we use our framework to derive the minimal distributions required to design a fair algorithm.
no code implementations • 11 Nov 2020 • Yanbo Xu, Divyat Mahajan, Liz Manrao, Amit Sharma, Emre Kiciman
For many kinds of interventions, such as a new advertisement, marketing intervention, or feature recommendation, it is important to target a specific subset of people to maximize the benefits at minimum cost or potential harm.
2 code implementations • 10 Nov 2020 • Ramaravind Kommiya Mothilal, Divyat Mahajan, Chenhao Tan, Amit Sharma
In addition, by restricting the features that can be modified for generating counterfactual examples, we find that the top-k features from LIME or SHAP are often neither necessary nor sufficient explanations of a model's prediction.
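The necessity/sufficiency distinction above can be made concrete with a toy model (the classification rule, features, and inputs here are invented for illustration, not from the paper): a feature subset is "sufficient" if fixing it pins down the prediction regardless of the other features, and "necessary" if some change restricted to it can flip the prediction.

```python
import itertools

def model(x):
    # Simple rule on three binary features: (x0 AND x1) OR x2.
    return 1 if (x[0] and x[1]) or x[2] else 0

def sufficient(x, subset):
    # Prediction unchanged for every setting of the features outside subset?
    base = model(x)
    others = [i for i in range(len(x)) if i not in subset]
    for values in itertools.product([0, 1], repeat=len(others)):
        x2 = list(x)
        for i, v in zip(others, values):
            x2[i] = v
        if model(x2) != base:
            return False
    return True

def necessary(x, subset):
    # Does some change restricted to subset flip the prediction?
    base = model(x)
    for values in itertools.product([0, 1], repeat=len(subset)):
        x2 = list(x)
        for i, v in zip(subset, values):
            x2[i] = v
        if model(x2) != base:
            return True
    return False

x = [1, 1, 1]                  # predicted 1 via both clauses
print(sufficient(x, [0, 1]))   # True: fixing x0, x1 pins the prediction
print(necessary(x, [0, 1]))    # False: x2 alone also yields class 1
```

For this input, the subset {x0, x1} is sufficient but not necessary, showing the two properties can come apart even for a trivial model.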
2 code implementations • 9 Nov 2020 • Amit Sharma, Emre Kiciman
In addition to efficient statistical estimators of a treatment's effect, successful application of causal inference requires specifying assumptions about the mechanisms underlying the observed data and testing whether, and to what extent, those assumptions are valid.
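That specify-estimate-test workflow can be sketched in plain Python (the data-generating process, effect size, and back-door adjustment below are illustrative assumptions, not the paper's method): a confounder biases the naive difference of means, while adjustment under the assumed graph recovers the true effect.

```python
import random
from statistics import mean

random.seed(0)

# Simulated data-generating process: binary confounder z raises both
# the treatment probability and the outcome. TRUE_EFFECT is known here
# only because we simulate the data ourselves.
TRUE_EFFECT = 2.0
data = []
for _ in range(20000):
    z = random.random() < 0.5
    t = random.random() < (0.8 if z else 0.2)
    y = TRUE_EFFECT * t + 3.0 * z + random.gauss(0, 1)
    data.append((z, t, y))

def naive(data):
    # Ignores the confounder: difference of means between groups.
    treated = [y for z, t, y in data if t]
    control = [y for z, t, y in data if not t]
    return mean(treated) - mean(control)

def backdoor(data):
    # Assumes z blocks all back-door paths: stratify on z and average
    # stratum-level effects weighted by P(z).
    est = 0.0
    for zval in (False, True):
        stratum = [(t, y) for z, t, y in data if z == zval]
        treated = [y for t, y in stratum if t]
        control = [y for t, y in stratum if not t]
        est += (mean(treated) - mean(control)) * len(stratum) / len(data)
    return est

print(round(naive(data), 2))     # biased upward by the confounder
print(round(backdoor(data), 2))  # close to TRUE_EFFECT = 2.0
```

Comparing the two estimates is itself a crude test of the no-confounding assumption: the large gap between them signals that ignoring z would be badly misleading.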
no code implementations • 17 Sep 2020 • Saloni Dash, Vineeth N Balasubramanian, Amit Sharma
We present a method for generating counterfactuals by incorporating a structural causal model (SCM) into an improved variant of Adversarially Learned Inference (ALI), which generates counterfactuals in accordance with the causal relationships between attributes of an image.
1 code implementation • arXiv 2020 • Divyat Mahajan, Shruti Tople, Amit Sharma
In the domain generalization literature, a common objective is to learn representations independent of the domain after conditioning on the class label.
Ranked #1 on Domain Generalization on Rotated Fashion-MNIST
no code implementations • LREC 2020 • Devansh Mehta, Sebastin Santy, Ramaravind Kommiya Mothilal, Brij Mohan Lal Srivastava, Alok Sharma, Anurag Shukla, Vishnu Prasad, Venkanna U, Amit Sharma, Kalika Bali
The primary obstacle to developing technologies for low-resource languages is the lack of usable data.
3 code implementations • 6 Dec 2019 • Divyat Mahajan, Chenhao Tan, Amit Sharma
For explanations of ML models in critical domains such as healthcare and finance, counterfactual examples are useful for an end-user only to the extent that perturbation of feature inputs is feasible in the real world.
1 code implementation • ICML 2020 • Shruti Tople, Amit Sharma, Aditya Nori
Such privacy risks are exacerbated when a model's predictions are used on an unseen data distribution.
no code implementations • 3 Sep 2019 • Arpita Biswas, Siddharth Barman, Amit Deshpande, Amit Sharma
We propose a general notion of $\eta$-infra-marginality to quantify the extent of this bias.
no code implementations • 10 Jul 2019 • Rathin Desai, Amit Sharma
We show that many popular methods, including back-door methods, can be viewed as weighting or representation learning algorithms, and we provide general error bounds for their causal estimates.
6 code implementations • 19 May 2019 • Ramaravind Kommiya Mothilal, Amit Sharma, Chenhao Tan
Post-hoc explanations of machine learning models are crucial for people to understand and act on algorithmic predictions.
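A bare-bones sketch of the counterfactual-explanation idea (the loan model, feature names, weights, and greedy search here are made up for illustration and are not the paper's method, which additionally optimizes for diversity and proximity): starting from a rejected input, take the smallest coordinate steps that flip the prediction.

```python
# Toy linear "loan approval" model; all names and numbers are illustrative.
WEIGHTS = {"income": 0.05, "credit_score": 0.01, "debt": -0.04}
BIAS = -9.0

def predict(x):
    score = BIAS + sum(WEIGHTS[f] * v for f, v in x.items())
    return 1 if score > 0 else 0   # 1 = approve

def counterfactual(x, step=1.0, max_iters=1000):
    # Greedily change the feature with the largest score gain per unit
    # step until the prediction flips; return the modified input.
    cf = dict(x)
    for _ in range(max_iters):
        if predict(cf) == 1:
            return cf
        best = max(WEIGHTS, key=lambda f: abs(WEIGHTS[f]))
        cf[best] += step if WEIGHTS[best] > 0 else -step
    return None

applicant = {"income": 100.0, "credit_score": 300.0, "debt": 50.0}
print(predict(applicant))        # 0: rejected
cf = counterfactual(applicant)
print(cf)                        # a nearby approved profile
```

The counterfactual tells the end user what minimal change would alter the decision, which is the actionable complement to feature-attribution scores.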
no code implementations • 5 Feb 2019 • Jackson A. Killian, Bryan Wilder, Amit Sharma, Daksha Shah, Vinod Choudhary, Bistra Dilkina, Milind Tambe
Digital Adherence Technologies (DATs) are an increasingly popular method for verifying patient adherence to many medications.
1 code implementation • 28 Nov 2016 • Amit Sharma, Jake M. Hofman, Duncan J. Watts
We present a method for estimating causal effects in time series data when fine-grained information about the outcome of interest is available.
no code implementations • 23 Jan 2014 • Priyankar Ghosh, Amit Sharma, P. P. Chakrabarti, Pallab Dasgupta
The proposed algorithms use a best first search technique and report the solutions using an implicit representation ordered by cost.