no code implementations • 18 Aug 2023 • Siva Athreya, Soumik Pal, Raghav Somani, Raghavendra Tripathi
In both cases we show that, as the size of the graph goes to infinity, the random trajectories of the stochastic processes converge to deterministic curves on the space of measure-valued graphons.
no code implementations • 2 Oct 2022 • Zaid Harchaoui, Sewoong Oh, Soumik Pal, Raghav Somani, Raghavendra Tripathi
We consider stochastic gradient descent, on the space of large symmetric matrices, for suitable functions that are invariant under permuting the rows and columns by the same permutation.
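As a rough illustration of this setting (not the paper's construction or its scaling limit), the sketch below runs noisy gradient descent on a symmetric matrix with entries in [0, 1], using the permutation-invariant objective trace(X^3)/6; the dimension, step size, noise level, and clipping are illustrative choices.

```python
# Minimal sketch: noisy gradient descent on a symmetric matrix for a
# permutation-invariant objective. Here f(X) = trace(X^3)/6 satisfies
# f(P X P^T) = f(X) for any permutation matrix P.
import numpy as np

rng = np.random.default_rng(0)
n = 50
X = rng.uniform(0.0, 1.0, size=(n, n))
X = (X + X.T) / 2                       # symmetric start with entries in [0, 1]

def grad_f(X):
    # gradient of trace(X^3)/6 for symmetric X is X^2 / 2
    return X @ X / 2

step, noise = 1e-3, 1e-2
for _ in range(1000):
    Z = rng.normal(size=(n, n))
    Z = (Z + Z.T) / 2                   # symmetric noise keeps iterates symmetric
    X = X - step * (grad_f(X) + noise * Z)
    X = np.clip(X, 0.0, 1.0)            # keep entries in [0, 1], graphon-style

print("objective value:", np.trace(X @ X @ X) / 6)
```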
no code implementations • 18 Nov 2021 • Sewoong Oh, Soumik Pal, Raghav Somani, Raghavendra Tripathi
Wasserstein gradient flows on probability measures have found a host of applications in various optimization problems.
1 code implementation • NeurIPS 2021 • Aditya Kusupati, Matthew Wallingford, Vivek Ramanujan, Raghav Somani, Jae Sung Park, Krishna Pillutla, Prateek Jain, Sham Kakade, Ali Farhadi
We further quantitatively measure the quality of our codes by applying them to efficient image retrieval as well as out-of-distribution (OOD) detection.
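As a hedged illustration of how such binary codes can be used downstream (not the paper's learning procedure), the sketch below retrieves nearest neighbours by Hamming distance and uses the minimum distance to the database as a crude OOD score; the random codes stand in for learned ones.

```python
# Retrieval and OOD scoring with fixed-length binary codes; the codes here are
# random stand-ins for learned codes.
import numpy as np

rng = np.random.default_rng(0)
n_db, n_query, n_bits = 1000, 5, 32
db_codes = rng.integers(0, 2, size=(n_db, n_bits), dtype=np.uint8)
queries = rng.integers(0, 2, size=(n_query, n_bits), dtype=np.uint8)

def hamming(a, B):
    """Hamming distance from one code to every row of B."""
    return np.count_nonzero(a[None, :] != B, axis=1)

for q in queries:
    d = hamming(q, db_codes)
    top5 = np.argsort(d)[:5]            # retrieval: nearest database codes
    ood_score = d.min()                 # OOD: far from every database code
    print("top-5 ids:", top5, "min Hamming distance:", ood_score)
```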
1 code implementation • 22 Apr 2021 • Jonathan Hayase, Weihao Kong, Raghav Somani, Sewoong Oh
There have been promising attempts to use the intermediate representations of such a model to separate corrupted examples from clean ones.
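A sketch in the spirit of such representation-based filtering (not the paper's own defense): score each example by its projection onto the top singular direction of the centered features and flag the largest scores. The synthetic features and the 5% flagging fraction are assumptions of the sketch.

```python
# Spectral-style outlier flagging on stand-in intermediate representations.
import numpy as np

rng = np.random.default_rng(0)
reps = rng.normal(size=(500, 64))                 # "clean" feature vectors
reps[:25] += 4.0 * rng.normal(size=(1, 64))       # a small shifted (corrupted) cluster

centered = reps - reps.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
scores = np.abs(centered @ vt[0])                 # outlier score along top direction

threshold = np.quantile(scores, 0.95)             # flag the top 5% (an assumption)
suspected = np.flatnonzero(scores > threshold)
print("flagged example indices:", suspected)
```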
no code implementations • NeurIPS 2020 • Weihao Kong, Raghav Somani, Sham Kakade, Sewoong Oh
Taken together, this approach is robust to outliers and achieves a graceful statistical trade-off: the lack of $\Omega(k^{1/2})$-size tasks can be compensated for with smaller tasks, which can now be as small as $O(\log k)$.
no code implementations • ICML 2020 • Weihao Kong, Raghav Somani, Zhao Song, Sham Kakade, Sewoong Oh
In modern supervised learning, there are a large number of tasks, but many of them are associated with only a small amount of labeled data.
1 code implementation • ICML 2020 • Aditya Kusupati, Vivek Ramanujan, Raghav Somani, Mitchell Wortsman, Prateek Jain, Sham Kakade, Ali Farhadi
Sparsity in Deep Neural Networks (DNNs) is studied extensively with a focus on maximizing prediction accuracy given an overall parameter budget.
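For context on what a parameter budget means in practice, here is a minimal sketch of global magnitude pruning to a fixed budget of nonzero weights; this is a standard baseline, not the method proposed in the paper, and the layer shapes and budget are arbitrary.

```python
# Global magnitude pruning: keep only the `budget` largest-magnitude weights
# across all layers and zero out the rest.
import numpy as np

rng = np.random.default_rng(0)
layers = [rng.normal(size=(128, 64)), rng.normal(size=(64, 10))]
budget = 1000                                          # total nonzeros allowed

all_mags = np.concatenate([np.abs(W).ravel() for W in layers])
threshold = np.partition(all_mags, -budget)[-budget]   # budget-th largest magnitude

pruned = [np.where(np.abs(W) >= threshold, W, 0.0) for W in layers]
print("nonzero weights:", sum(int(np.count_nonzero(W)) for W in pruned))
```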
no code implementations • 21 Oct 2019 • Abhishek Panigrahi, Raghav Somani, Navin Goyal, Praneeth Netrapalli
What enables Stochastic Gradient Descent (SGD) to achieve better generalization than Gradient Descent (GD) in neural network training?
no code implementations • 17 May 2019 • Raghav Somani, Navin Goyal, Prateek Jain, Praneeth Netrapalli
This paper proposes and demonstrates a surprising pattern in the training of neural networks: there is a one-to-one relation between the values of any pair of losses (such as cross-entropy, mean squared error, and 0/1 error).
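A hedged way to see such a relation empirically (this is not the paper's experimental setup): track two losses along a single training run and inspect the resulting pairs. The sketch below trains logistic regression on synthetic data and records cross-entropy together with 0/1 error.

```python
# Track two losses along one training run of logistic regression.
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 20
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true + 0.5 * rng.normal(size=n) > 0).astype(float)

w = np.zeros(d)
lr = 0.1
for step in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    ce = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    zero_one = np.mean((p > 0.5) != y)
    if step % 40 == 0:
        print(f"step {step:3d}  cross-entropy {ce:.3f}  0/1 error {zero_one:.3f}")
    w -= lr * X.T @ (p - y) / n          # gradient step on mean cross-entropy
```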
no code implementations • NeurIPS 2018 • Raghav Somani, Chirag Gupta, Prateek Jain, Praneeth Netrapalli
This paper studies the problem of sparse regression, where the goal is to learn a sparse vector that optimizes a given objective function.
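For concreteness, the sketch below runs iterative hard thresholding, a standard baseline for this problem rather than the algorithm analyzed in the paper, on a synthetic least-squares instance; the problem sizes, step size, and iteration count are illustrative.

```python
# Iterative hard thresholding (IHT) for k-sparse least-squares on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 200, 500, 10
A = rng.normal(size=(n, d)) / np.sqrt(n)          # roughly unit-norm columns
x_true = np.zeros(d)
x_true[rng.choice(d, size=k, replace=False)] = rng.normal(size=k)
b = A @ x_true + 0.01 * rng.normal(size=n)

def hard_threshold(x, k):
    """Keep the k largest-magnitude entries of x, zero out the rest."""
    out = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-k:]
    out[keep] = x[keep]
    return out

x = np.zeros(d)
step = 0.5
for _ in range(300):
    x = hard_threshold(x - step * A.T @ (A @ x - b), k)

print("relative recovery error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```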
no code implementations • 31 Oct 2018 • Gaurush Hiranandani, Raghav Somani, Oluwasanmi Koyejo, Sreangsu Acharyya
This non-linear transformation of the rating scale shatters the low-rank structure of the rating matrix, resulting in a poor fit and, consequently, poor recommendations.
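A quick numerical check of this point (illustrative only, not the paper's model): take an exactly low-rank matrix of latent scores, quantize it monotonically onto a 1-to-5 rating scale, and compare the spectra.

```python
# An exactly rank-3 matrix of latent scores is quantized monotonically onto a
# 1-5 rating scale, which spreads the spectrum and destroys the low-rank structure.
import numpy as np

rng = np.random.default_rng(0)
U, V = rng.normal(size=(300, 3)), rng.normal(size=(3, 200))
M = U @ V                                          # exactly rank-3 latent scores
bins = np.quantile(M, [0.2, 0.4, 0.6, 0.8])
R = np.digitize(M, bins).astype(float) + 1.0       # 1-5 "star ratings"

for name, X in [("latent scores", M), ("observed ratings", R)]:
    s = np.linalg.svd(X, compute_uv=False)
    print(f"{name}: numerical rank {np.linalg.matrix_rank(X)}, "
          f"top singular values {np.round(s[:5], 1)}")
```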
no code implementations • 7 Jul 2017 • Arabin Kumar Dey, Raghav Somani, Sreangsu Acharyya
In this article, we provide a formulation of empirical Bayes, as described by Atchade (2011), to tune the hyperparameters of the priors used in the Bayesian setup of collaborative filtering.
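As a hedged, toy illustration of empirical Bayes hyperparameter tuning (a conjugate Gaussian model, not the formulation from Atchade (2011) used in the article): pick the prior variance that maximizes the marginal likelihood of the observations. All model and grid choices below are illustrative.

```python
# Empirical Bayes in a toy conjugate model: choose the prior variance tau^2 for
# latent item effects by maximizing the marginal likelihood of observed ratings.
import numpy as np

rng = np.random.default_rng(0)
true_tau2, sigma2, n_items, n_obs = 2.0, 1.0, 200, 20
theta = rng.normal(0.0, np.sqrt(true_tau2), size=n_items)     # latent item effects
ratings = theta[:, None] + rng.normal(0.0, np.sqrt(sigma2), size=(n_items, n_obs))

item_means = ratings.mean(axis=1)
# Marginally, each item mean is distributed N(0, tau^2 + sigma^2 / n_obs).
def neg_marginal_loglik(tau2):
    var = tau2 + sigma2 / n_obs
    return 0.5 * np.sum(np.log(2 * np.pi * var) + item_means**2 / var)

grid = np.linspace(0.01, 5.0, 500)
tau2_hat = grid[np.argmin([neg_marginal_loglik(t) for t in grid])]
print("empirical Bayes estimate of tau^2:", round(float(tau2_hat), 3),
      "(true value:", true_tau2, ")")
```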