Search Results for author: Amirata Ghorbani

Found 16 papers, 5 papers with code

Beyond Importance Scores: Interpreting Tabular ML by Visualizing Feature Semantics

no code implementations • 10 Nov 2021 • Amirata Ghorbani, Dina Berenbaum, Maor Ivgi, Yuval Dafna, James Zou

We address this limitation by introducing Feature Vectors, a new global interpretability method designed for tabular datasets.

Feature Importance

Data Shapley Valuation for Efficient Batch Active Learning

no code implementations • 16 Apr 2021 • Amirata Ghorbani, James Zou, Andre Esteva

In this work, we introduce Active Data Shapley (ADS) -- a filtering layer for batch active learning that significantly increases efficiency by using a linear-time computation to pre-select the highest-value points from an unlabeled dataset.

Active Learning
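
The filtering step described above can be sketched in a few lines. Only the idea of a linear-time top-k pre-selection on precomputed data values comes from the snippet; the function name and the `budget_factor` knob are illustrative assumptions, not the paper's API.

```python
import numpy as np

def ads_prefilter(point_values, batch_size, budget_factor=5):
    """Pre-select the highest-value unlabeled points before running a
    standard batch acquisition function. A sketch of the ADS idea only;
    the name and `budget_factor` are illustrative, not the paper's API."""
    k = min(budget_factor * batch_size, len(point_values))
    # argpartition finds the top-k indices in linear time (no full sort)
    return np.argpartition(point_values, -k)[-k:]
```

A standard acquisition strategy (e.g. uncertainty sampling) would then pick the final batch from the returned candidate pool instead of scanning the whole unlabeled set.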

Accurate Prediction of Free Solvation Energy of Organic Molecules via Graph Attention Network and Message Passing Neural Network from Pairwise Atomistic Interactions

no code implementations • 15 Apr 2021 • Ramin Ansari, Amirata Ghorbani

As a result, these models can accurately predict molecular properties without the time-consuming process of running an experiment on each molecule.

Graph Attention

Data Valuation for Medical Imaging Using Shapley Value: Application on A Large-scale Chest X-ray Dataset

no code implementations • 15 Oct 2020 • Siyi Tang, Amirata Ghorbani, Rikiya Yamashita, Sameer Rehman, Jared A. Dunnmon, James Zou, Daniel L. Rubin

In this study, we used data Shapley, a data valuation metric, to quantify the value of training data to the performance of a pneumonia detection algorithm in a large chest X-ray dataset.

Pneumonia Detection

How Does Mixup Help With Robustness and Generalization?

no code implementations • ICLR 2021 • Linjun Zhang, Zhun Deng, Kenji Kawaguchi, Amirata Ghorbani, James Zou

For robustness, we show that minimizing the Mixup loss corresponds to approximately minimizing an upper bound of the adversarial loss.

Data Augmentation
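
For context, Mixup itself trains on convex combinations of example pairs. A minimal sketch of the standard formulation (not this paper's analysis code):

```python
import numpy as np

def mixup_batch(x, y, alpha=1.0, rng=None):
    """Mix a batch with a random permutation of itself.
    lam ~ Beta(alpha, alpha) controls the interpolation strength;
    y is assumed to be one-hot / soft labels so it can be mixed too."""
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.beta(alpha, alpha)
    perm = rng.permutation(len(x))
    x_mix = lam * x + (1.0 - lam) * x[perm]
    y_mix = lam * y + (1.0 - lam) * y[perm]
    return x_mix, y_mix
```

The paper's robustness result concerns the loss minimized on such mixed pairs, which it relates to an upper bound on the adversarial loss.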

Improving Adversarial Robustness via Unlabeled Out-of-Domain Data

no code implementations • 15 Jun 2020 • Zhun Deng, Linjun Zhang, Amirata Ghorbani, James Zou

In this work, we investigate how adversarial robustness can be enhanced by leveraging out-of-domain unlabeled data.

Adversarial Robustness • Data Augmentation +2

A Distributional Framework for Data Valuation

no code implementations • ICML 2020 • Amirata Ghorbani, Michael P. Kim, James Zou

Shapley value is a classic notion from game theory, historically used to quantify the contributions of individuals within groups, and more recently applied to assign values to data points when training machine learning models.
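
The classic notion referred to here assigns player $i$ (in data valuation, a training point) the value

```latex
\phi_i \;=\; \sum_{S \subseteq N \setminus \{i\}}
\frac{|S|!\,(n-|S|-1)!}{n!}\,
\bigl[v(S \cup \{i\}) - v(S)\bigr],
```

where $v(S)$ is the performance of a model trained on subset $S$ and $n = |N|$. This is the standard Shapley value from game theory; the distributional framework of this paper extends such values from a fixed dataset to the underlying data distribution.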

Neuron Shapley: Discovering the Responsible Neurons

1 code implementation • NeurIPS 2020 • Amirata Ghorbani, James Zou

We develop Neuron Shapley as a new framework to quantify the contribution of individual neurons to the prediction and performance of a deep network.

DermGAN: Synthetic Generation of Clinical Skin Images with Pathology

no code implementations • 20 Nov 2019 • Amirata Ghorbani, Vivek Natarajan, David Coz, Yu-An Liu

Despite the recent success of supervised deep learning in medical imaging tasks, obtaining the large and diverse expert-annotated datasets required to develop high-performing models remains particularly challenging.

Data Augmentation

Who's responsible? Jointly quantifying the contribution of the learning algorithm and training data

no code implementations • 9 Oct 2019 • Gal Yona, Amirata Ghorbani, James Zou

We propose Extended Shapley as a principled framework for this problem, and experiment empirically with how it can be used to address questions of ML accountability.

Data Shapley: Equitable Valuation of Data for Machine Learning

4 code implementations • 5 Apr 2019 • Amirata Ghorbani, James Zou

As data becomes the fuel driving technological and economic growth, a fundamental challenge is how to quantify the value of data in algorithmic predictions and decisions.
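
Data Shapley values are typically estimated by Monte Carlo sampling over permutations of the training set. A self-contained sketch: the `train_fn`/`score_fn` hooks are placeholder assumptions standing in for an arbitrary training pipeline, and this omits the truncation heuristic the paper uses for speed.

```python
import numpy as np

def mc_data_shapley(train_fn, score_fn, n_points, n_iters=100, rng=None):
    """Monte Carlo estimate of data Shapley values.

    train_fn(indices) fits a model on the given training subset and
    score_fn(model) returns its test performance; both are placeholders
    for whatever pipeline is in use.
    """
    rng = np.random.default_rng() if rng is None else rng
    values = np.zeros(n_points)
    for _ in range(n_iters):
        perm = rng.permutation(n_points)
        prev = score_fn(train_fn([]))  # performance with no training data
        for j, i in enumerate(perm):
            score = score_fn(train_fn(perm[: j + 1].tolist()))
            values[i] += score - prev  # marginal contribution of point i
            prev = score
    return values / n_iters
```

Each pass scans one random permutation, crediting every point with the performance change it causes when added; averaging over permutations converges to the Shapley values.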

Towards Automatic Concept-based Explanations

1 code implementation • NeurIPS 2019 • Amirata Ghorbani, James Wexler, James Zou, Been Kim

Interpretability has become an important topic of research as more machine learning (ML) models are deployed and widely used to make important decisions.

Feature Importance

Knockoffs for the mass: new feature importance statistics with false discovery guarantees

no code implementations • 17 Jul 2018 • Jaime Roquero Gimenez, Amirata Ghorbani, James Zou

This is often impossible to do from purely observational data, and a natural relaxation is to identify features that are correlated with the outcome even conditioned on all other observed features.

Feature Importance
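
Formally, the relaxation described above tests, for each feature $j$, the conditional-independence null

```latex
H_{0,j}:\quad X_j \perp\!\!\!\perp Y \,\mid\, X_{-j},
```

and the knockoff construction rejects a subset of these nulls while controlling the false discovery rate among the selected features. (This is the standard model-X knockoffs setup that the snippet alludes to.)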

Multiaccuracy: Black-Box Post-Processing for Fairness in Classification

1 code implementation • 31 May 2018 • Michael P. Kim, Amirata Ghorbani, James Zou

Prediction systems are successfully deployed in applications ranging from disease diagnosis, to predicting credit worthiness, to image recognition.

Classification • Fairness +2

Interpretation of Neural Network is Fragile

no code implementations • ICLR 2018 • Amirata Ghorbani, Abubakar Abid, James Zou

In this paper, we show that interpretation of deep learning predictions is extremely fragile in the following sense: two perceptively indistinguishable inputs with the same predicted label can be assigned very different interpretations.

Feature Importance

Interpretation of Neural Networks is Fragile

1 code implementation • 29 Oct 2017 • Amirata Ghorbani, Abubakar Abid, James Zou

In this paper, we show that interpretation of deep learning predictions is extremely fragile in the following sense: two perceptively indistinguishable inputs with the same predicted label can be assigned very different interpretations.

Feature Importance
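
The phenomenon can be illustrated with a toy linear scorer: a small input perturbation that provably leaves the predicted label unchanged can still change a gradient-times-input saliency map. This is only an illustrative sketch under that toy assumption, not the paper's attack algorithm:

```python
import numpy as np

def saliency(w, x):
    """Gradient-times-input saliency for the linear score w @ x
    (a toy stand-in for deep-network saliency maps)."""
    return w * x

def label_preserving_perturbation(w, x, eps=0.05, rng=None):
    """Random perturbation of scale eps whose sign is flipped if it
    would change the predicted label sign(w @ x)."""
    rng = np.random.default_rng() if rng is None else rng
    delta = eps * rng.standard_normal(x.shape)
    if np.sign(w @ (x + delta)) != np.sign(w @ x):
        delta = -delta  # flipping delta restores the original label
    return x + delta
```

The paper's attacks are the deep-network analogue: perturbations crafted so the prediction is stable while the interpretation shifts substantially.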
