Search Results for author: Harshay Shah

Found 5 papers, 4 papers with code

ModelDiff: A Framework for Comparing Learning Algorithms

1 code implementation • 22 Nov 2022 • Harshay Shah, Sung Min Park, Andrew Ilyas, Aleksander Madry

We study the problem of (learning) algorithm comparison, where the goal is to find differences between models trained with two different learning algorithms.

Data Augmentation

Do Input Gradients Highlight Discriminative Features?

1 code implementation • NeurIPS 2021 • Harshay Shah, Prateek Jain, Praneeth Netrapalli

We believe that the DiffROAR evaluation framework and BlockMNIST-based datasets can serve as sanity checks to audit instance-specific interpretability methods; code and data available at https://github.com/harshays/inputgradients.
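The paper studies whether input gradients highlight the features a model actually relies on. As a minimal sketch of what an "input gradient" attribution is (not the paper's DiffROAR framework), the example below differentiates a toy logistic-regression score with respect to its input; the weights, input, and function names are illustrative assumptions, not from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def input_gradient(w, x):
    # For f(x) = sigmoid(w @ x), the input gradient is
    # df/dx = sigmoid'(w @ x) * w = s * (1 - s) * w.
    s = sigmoid(w @ x)
    return s * (1.0 - s) * w

# Hypothetical model: only feature 0 is discriminative (w[1] = 0).
w = np.array([2.0, 0.0])
x = np.array([0.5, 3.0])
g = input_gradient(w, x)
# In this linear-in-w case, the gradient is zero exactly on the
# irrelevant feature, so the attribution matches the model's reliance.
```

For deep networks the picture is much less clean, which is precisely the question the DiffROAR evaluation probes.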

Image Classification

The Pitfalls of Simplicity Bias in Neural Networks

2 code implementations • NeurIPS 2020 • Harshay Shah, Kaustav Tamuly, Aditi Raghunathan, Prateek Jain, Praneeth Netrapalli

Furthermore, previous settings that use SB to theoretically justify why neural networks generalize well do not simultaneously capture the non-robustness of neural networks---a widely observed phenomenon in practice [Goodfellow et al. 2014, Jo and Bengio 2017].
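A toy illustration of the phenomenon the paper studies: when two features are both fully predictive of the label, gradient descent tends to latch onto one of them. The sketch below (a numpy logistic regression on synthetic data; all names and the data-generating setup are assumptions for illustration, not the paper's LMS/slab datasets) gives one feature a large margin and the other a small one, and measures how much the trained model relies on each.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
y = rng.choice([-1.0, 1.0], size=n)
# Feature 0: fully predictive with a large margin.
x0 = y * 2.0 + rng.normal(0, 0.1, n)
# Feature 1: also fully predictive, but with a much smaller margin.
x1 = y * 0.1 + rng.normal(0, 0.01, n)
X = np.stack([x0, x1], axis=1)

# Logistic regression trained by plain gradient descent.
w = np.zeros(2)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-y * (X @ w)))
    grad = -(((1.0 - p) * y) @ X) / n
    w -= 1.0 * grad

# Reliance of the model on each feature, scaled by feature magnitude.
reliance = np.abs(w) * X.std(axis=0)
```

Here `reliance[0]` dominates `reliance[1]`: the model effectively ignores the second predictive feature, even though using both would make it more robust — a linear-model caricature of the simplicity-bias failure mode.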

Number of Connected Components in a Graph: Estimation via Counting Patterns

no code implementations • 1 Dec 2018 • Ashish Khetan, Harshay Shah, Sewoong Oh

This representation is crucial in introducing a novel estimator for the number of connected components for general graphs, under the knowledge of the spectral gap of the original graph.
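For context on the quantity being estimated: when the full graph is observed, the number of connected components can be counted exactly, e.g. with union-find. The sketch below is only that exact baseline (the function name and interface are my own); the paper's contribution is estimating this count for general graphs from limited observations, which this does not attempt.

```python
def count_components(n, edges):
    """Exact connected-component count of a graph on n nodes via union-find."""
    parent = list(range(n))

    def find(a):
        # Find the root of a, with path halving for near-constant time.
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    count = n  # every node starts as its own component
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            count -= 1  # each union merges two components into one
    return count

# Two components: {0, 1, 2} and {3, 4}.
count_components(5, [(0, 1), (1, 2), (3, 4)])
```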

Growing Attributed Networks through Local Processes

1 code implementation • 29 Dec 2017 • Harshay Shah, Suhansanu Kumar, Hari Sundaram

Despite the knowledge that individuals use limited resources to form connections to similar others, we lack an understanding of how local and resource-constrained mechanisms explain the emergence of rich structural properties found in real-world networks.

Social and Information Networks
