Search Results for author: Arushi Gupta

Found 13 papers, 2 papers with code

Skill-Mix: a Flexible and Expandable Family of Evaluations for AI models

no code implementations • 26 Oct 2023 • Dingli Yu, Simran Kaur, Arushi Gupta, Jonah Brown-Cohen, Anirudh Goyal, Sanjeev Arora

The paper develops a methodology for (a) designing and administering such an evaluation, and (b) automatic grading (plus spot-checking by humans) of the results using GPT-4 as well as the open LLaMA-2 70B model.
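For a rough sense of what such a pipeline looks like, here is a minimal sketch of a Skill-Mix-style evaluation loop. The skill and topic pools, the prompt wording, and the `model`/`grader` callables (stand-ins for API calls to the student model and to a grading model such as GPT-4 or LLaMA-2 70B) are all illustrative assumptions, not the paper's exact protocol.

```python
import random

# Hypothetical skill/topic pools; the actual Skill-Mix lists are much larger.
SKILLS = ["metaphor", "modus ponens", "red herring", "self-serving bias"]
TOPICS = ["gardening", "dueling", "sewing"]

def skill_mix_prompt(k, rng=random):
    """Sample k skills and one topic, and build a Skill-Mix-style query."""
    skills = rng.sample(SKILLS, k)
    topic = rng.choice(TOPICS)
    prompt = (f"Produce a short piece of text about {topic} that "
              f"illustrates all of these skills: {', '.join(skills)}.")
    return prompt, skills, topic

def evaluate(model, grader, k, n_trials=10):
    """Administer the evaluation and auto-grade with a stronger model.

    `model(prompt)` and `grader(prompt)` are hypothetical stand-ins for
    API calls; spot-checking of the grader's verdicts by humans is done
    outside this loop.
    """
    passes = 0
    for _ in range(n_trials):
        prompt, skills, topic = skill_mix_prompt(k)
        answer = model(prompt)
        verdict = grader(
            f"Text: {answer}\nDoes this text correctly combine the skills "
            f"{skills} in the context of {topic}? Answer YES or NO.")
        passes += verdict.strip().upper().startswith("YES")
    return passes / n_trials  # fraction of skill combinations passed
```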

New Definitions and Evaluations for Saliency Methods: Staying Intrinsic, Complete and Sound

1 code implementation • 5 Nov 2022 • Arushi Gupta, Nikunj Saunshi, Dingli Yu, Kaifeng Lyu, Sanjeev Arora

Saliency methods compute heat maps that highlight portions of an input that were most "important" for the label assigned to it by a deep net.
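As background, here is a minimal "vanilla gradient" saliency map in PyTorch. It illustrates what a saliency heat map is; it is not the new definitions or evaluations proposed in the paper.

```python
import torch
import torchvision.models as models

def gradient_saliency(model, image, label):
    """Vanilla input-gradient saliency: |d logit_label / d pixel|."""
    model.eval()
    image = image.clone().requires_grad_(True)  # leaf tensor for grads
    logits = model(image.unsqueeze(0))          # (1, num_classes)
    logits[0, label].backward()                 # gradient of the chosen logit
    return image.grad.abs().max(dim=0).values   # (H, W) heat map over pixels

# Usage sketch:
# net = models.resnet18(weights="IMAGENET1K_V1")
# heat = gradient_saliency(net, some_image_tensor, label=some_label)
```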

Understanding Influence Functions and Datamodels via Harmonic Analysis

no code implementations • 3 Oct 2022 • Nikunj Saunshi, Arushi Gupta, Mark Braverman, Sanjeev Arora

Influence functions estimate the effect of individual data points on a model's predictions on test data; they were adapted to deep learning by Koh and Liang [2017].

Data Poisoning
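The quantity that influence functions approximate can be made concrete with brute-force leave-one-out retraining. A small scikit-learn sketch (illustrative of the notion of influence, not the paper's harmonic-analysis machinery):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

def loo_influence(X_train, y_train, X_test, y_test):
    """Leave-one-out influence: how much does dropping each training
    point change the test loss? Influence functions approximate this
    without retraining; here we simply retrain for illustration.
    """
    base = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    base_loss = log_loss(y_test, base.predict_proba(X_test))
    influences = np.empty(len(X_train))
    for i in range(len(X_train)):
        mask = np.arange(len(X_train)) != i
        model_i = LogisticRegression(max_iter=1000).fit(X_train[mask], y_train[mask])
        influences[i] = log_loss(y_test, model_i.predict_proba(X_test)) - base_loss
    return influences  # positive => removing point i hurts test loss
```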

On Predicting Generalization using GANs

no code implementations • ICLR 2022 • Yi Zhang, Arushi Gupta, Nikunj Saunshi, Sanjeev Arora

Research on generalization bounds for deep networks seeks ways to predict test error using just the training dataset and the network parameters.

Generalization Bounds • Generative Adversarial Network
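The idea can be sketched as follows: if a conditional GAN is trained on the training set, the classifier's error on the GAN's synthetic samples can serve as a proxy for its true test error. A minimal sketch; the `gan_generator` interface returning (images, labels) is an assumption here, not the paper's exact setup.

```python
import torch

def predicted_test_error(classifier, gan_generator,
                         n_samples=10_000, batch=256, z_dim=128, device="cpu"):
    """Estimate test error from synthetic GAN samples alone.

    Assumes `gan_generator(z)` returns (images, labels) from a conditional
    GAN trained on the *training* data, with both models on `device`.
    """
    classifier.eval()
    errors, seen = 0, 0
    with torch.no_grad():
        while seen < n_samples:
            z = torch.randn(batch, z_dim, device=device)
            images, labels = gan_generator(z)
            preds = classifier(images).argmax(dim=1)
            errors += (preds != labels).sum().item()
            seen += batch
    return errors / seen  # proxy for the classifier's test error
```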

New Definitions and Evaluations for Saliency Methods: Staying Intrinsic and Sound

no code implementations • 29 Sep 2021 • Arushi Gupta, Nikunj Saunshi, Dingli Yu, Kaifeng Lyu, Sanjeev Arora

Saliency methods seek to provide human-interpretable explanations for the output of a machine learning model on a given input.

Image Functions In Neural Networks: A Perspective On Generalization

no code implementations • 29 Sep 2021 • Arushi Gupta

In this work, we show that training ReLU neural networks with SGD gives rise to a natural set of functions for each image that are not perfectly correlated until later in training.

A Representation Learning Perspective on the Importance of Train-Validation Splitting in Meta-Learning

1 code implementation • 29 Jun 2021 • Nikunj Saunshi, Arushi Gupta, Wei Hu

An effective approach in meta-learning is to utilize multiple "train tasks" to learn a good initialization for model parameters that can help solve unseen "test tasks" with very few samples by fine-tuning from this initialization.

Meta-Learning • Representation Learning
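A minimal sketch of a train-validation-split meta-objective in a linear-representation setting of the kind the paper analyzes: fit a per-task head on each task's train split, then score it on the held-out validation split. The ridge-regression head and the 50/50 split are illustrative choices, not the paper's exact algorithm.

```python
import numpy as np

def metaloss_with_split(B, tasks, split=0.5, reg=1e-3):
    """Meta-objective for a linear representation B of shape (d, k).

    Each task is a pair (X, y); the head w is fit on the task's train
    split using features XB and evaluated on the validation split.
    """
    total = 0.0
    for X, y in tasks:
        n = int(split * len(X))
        Ztr, Zva = X[:n] @ B, X[n:] @ B
        ytr, yva = y[:n], y[n:]
        # ridge-regression head fit on the train split only
        w = np.linalg.solve(Ztr.T @ Ztr + reg * np.eye(B.shape[1]), Ztr.T @ ytr)
        total += np.mean((Zva @ w - yva) ** 2)  # held-out validation loss
    return total / len(tasks)
```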

Neural Networks Preserve Invertibility Across Iterations: A Possible Source of Implicit Data Augmentation

no code implementations • 1 Jan 2021 • Arushi Gupta

We believe that higher layers may interpret weight changes made by lower layers as changes to the data, which may produce implicit data augmentation.

Data Augmentation

Inherent Noise in Gradient Based Methods

no code implementations • 26 May 2020 • Arushi Gupta

We find that this noise penalizes models that are sensitive to perturbations in the weights.

A Simple Technique to Enable Saliency Methods to Pass the Sanity Checks

no code implementations • 25 Sep 2019 • Arushi Gupta, Sanjeev Arora

The technique involves computing saliency maps for all possible labels in the classification task, and using a simple competition among them to identify and remove less relevant pixels from the map.
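A minimal sketch of such a competition, assuming per-label saliency maps have already been computed. The winner-take-all rule below is one plausible reading of the abstract, not necessarily the paper's exact rule.

```python
import numpy as np

def competitive_saliency(saliency_maps, target):
    """Competition among per-label saliency maps.

    Keep a pixel's score for the target label only if that label 'wins'
    the pixel, i.e. assigns it the highest saliency among all labels.
    """
    maps = np.asarray(saliency_maps)   # (num_labels, H, W)
    winners = maps.argmax(axis=0)      # label winning each pixel
    return np.where(winners == target, maps[target], 0.0)  # losers zeroed
```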

A Simple Saliency Method That Passes the Sanity Checks

no code implementations • 27 May 2019 • Arushi Gupta, Sanjeev Arora

There is great interest in "saliency methods" (also called "attribution methods"), which give "explanations" for a deep net's decision, by assigning a "score" to each feature/pixel in the input.

Parameter identification in Markov chain choice models

no code implementations • 2 Jun 2017 • Arushi Gupta, Daniel Hsu

The underlying parameters of the model were previously shown to be identifiable from the choice probabilities for the all-products assortment, together with choice probabilities for assortments of all-but-one products.
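For background, choice probabilities in the Markov chain choice model can be computed from arrival probabilities `lam` and transition matrix `rho` by treating the offered products as absorbing states. A numpy sketch of the model setup (not the paper's identification procedure):

```python
import numpy as np

def choice_probabilities(lam, rho, assortment):
    """Markov chain choice model: a customer arrives at product i with
    probability lam[i]; if i is not offered, they transition to product j
    with probability rho[i, j], repeating until an offered product is hit.
    """
    n = len(lam)
    S = np.zeros(n, dtype=bool)
    S[list(assortment)] = True
    N = ~S  # non-offered products are transient states
    # absorption probabilities from transient states into offered states
    B = np.linalg.solve(np.eye(N.sum()) - rho[np.ix_(N, N)], rho[np.ix_(N, S)])
    probs = np.zeros(n)
    probs[S] = lam[S] + lam[N] @ B
    return probs  # probs[j] = probability that offered product j is chosen
```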
