1 code implementation • 22 Nov 2022 • Harshay Shah, Sung Min Park, Andrew Ilyas, Aleksander Madry
We study the problem of (learning) algorithm comparison, where the goal is to find differences between models trained with two different learning algorithms.
1 code implementation • NeurIPS 2021 • Harshay Shah, Prateek Jain, Praneeth Netrapalli
We believe that the DiffROAR evaluation framework and BlockMNIST-based datasets can serve as sanity checks to audit instance-specific interpretability methods; code and data available at https://github.com/harshays/inputgradients.
2 code implementations • NeurIPS 2020 • Harshay Shah, Kaustav Tamuly, Aditi Raghunathan, Prateek Jain, Praneeth Netrapalli
Furthermore, previous settings that use SB to theoretically justify why neural networks generalize well do not simultaneously capture the non-robustness of neural networks, a widely observed phenomenon in practice [Goodfellow et al. 2014, Jo and Bengio 2017].
no code implementations • 1 Dec 2018 • Ashish Khetan, Harshay Shah, Sewoong Oh
This representation is crucial in introducing a novel estimator for the number of connected components for general graphs, under the knowledge of the spectral gap of the original graph.
1 code implementation • 29 Dec 2017 • Harshay Shah, Suhansanu Kumar, Hari Sundaram
Despite the knowledge that individuals use limited resources to form connections to similar others, we lack an understanding of how local and resource-constrained mechanisms explain the emergence of rich structural properties found in real-world networks.
Social and Information Networks