Search Results for author: Raghav Somani

Found 10 papers, 3 papers with code

LLC: Accurate, Multi-purpose Learnt Low-dimensional Binary Codes

1 code implementation NeurIPS 2021 Aditya Kusupati, Matthew Wallingford, Vivek Ramanujan, Raghav Somani, Jae Sung Park, Krishna Pillutla, Prateek Jain, Sham Kakade, Ali Farhadi

We further quantitatively measure the quality of our codes by applying them to efficient image retrieval as well as out-of-distribution (OOD) detection problems.

Image Retrieval
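The retrieval use case rests on a standard idea: once images are mapped to low-dimensional binary codes, nearest neighbors can be ranked by Hamming distance. A minimal sketch of that lookup step, assuming nothing about the paper's actual pipeline (the `hamming_retrieve` helper and the toy codes below are illustrative, not from the LLC codebase):

```python
import numpy as np

def hamming_retrieve(query_code, db_codes, top_k=5):
    """Rank database items by Hamming distance to a query binary code.

    query_code: 1-D 0/1 array of length m.
    db_codes:   2-D 0/1 array of shape (n, m), one code per item.
    Returns the indices of the top_k closest items and their distances.
    """
    # Hamming distance = number of positions where the bits differ.
    dists = np.count_nonzero(db_codes != query_code, axis=1)
    order = np.argsort(dists, kind="stable")
    return order[:top_k], dists[order[:top_k]]

# Toy example: three 4-bit codes, query matches the first exactly.
db = np.array([[0, 1, 1, 0],
               [1, 1, 1, 0],
               [0, 0, 0, 0]])
query = np.array([0, 1, 1, 0])
idx, dist = hamming_retrieve(query, db, top_k=2)
```

In practice the appeal of binary codes is that this distance reduces to a popcount over XOR-ed bit-packed words, so retrieval stays fast even at large scale.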

SPECTRE: Defending Against Backdoor Attacks Using Robust Statistics

1 code implementation 22 Apr 2021 Jonathan Hayase, Weihao Kong, Raghav Somani, Sewoong Oh

There have been promising attempts to use the intermediate representations of such a model to separate corrupted examples from clean ones.

Robust Meta-learning for Mixed Linear Regression with Small Batches

no code implementations NeurIPS 2020 Weihao Kong, Raghav Somani, Sham Kakade, Sewoong Oh

Together, this approach is robust against outliers and achieves a graceful statistical trade-off; the lack of $\Omega(k^{1/2})$-size tasks can be compensated for with smaller tasks, which can now be as small as $O(\log k)$.

Meta-Learning

Meta-learning for mixed linear regression

no code implementations ICML 2020 Weihao Kong, Raghav Somani, Zhao Song, Sham Kakade, Sewoong Oh

In modern supervised learning, there are a large number of tasks, but many of them are associated with only a small amount of labeled data.

Meta-Learning

Small Data Image Classification

Non-Gaussianity of Stochastic Gradient Noise

no code implementations 21 Oct 2019 Abhishek Panigrahi, Raghav Somani, Navin Goyal, Praneeth Netrapalli

What enables Stochastic Gradient Descent (SGD) to achieve better generalization than Gradient Descent (GD) in Neural Network training?

Universality Patterns in the Training of Neural Networks

no code implementations 17 May 2019 Raghav Somani, Navin Goyal, Prateek Jain, Praneeth Netrapalli

This paper proposes and demonstrates a surprising pattern in the training of neural networks: there is a one-to-one relation between the values of any pair of losses (such as cross-entropy, mean squared error, 0/1 error, etc.)

Support Recovery for Orthogonal Matching Pursuit: Upper and Lower bounds

no code implementations NeurIPS 2018 Raghav Somani, Chirag Gupta, Prateek Jain, Praneeth Netrapalli

This paper studies the problem of sparse regression where the goal is to learn a sparse vector that best optimizes a given objective function.

Generalization Bounds
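The algorithm analyzed in this paper, Orthogonal Matching Pursuit, is the classical greedy method for sparse regression: repeatedly pick the column most correlated with the residual, then refit by least squares on the selected support. A minimal NumPy sketch of that textbook procedure, assuming a least-squares objective (this is an illustrative implementation, not the paper's code):

```python
import numpy as np

def omp(X, y, k):
    """Orthogonal Matching Pursuit for y ~ X @ w with ||w||_0 <= k.

    Greedily selects k columns of X, refitting the coefficients on the
    chosen support by least squares after each selection.
    """
    n, d = X.shape
    residual = y.copy()
    support = []
    coef = np.zeros(0)
    for _ in range(k):
        # Select the column most correlated with the current residual.
        correlations = np.abs(X.T @ residual)
        correlations[support] = -np.inf  # exclude already-chosen columns
        support.append(int(np.argmax(correlations)))
        # Refit on the enlarged support and update the residual.
        coef, *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
        residual = y - X[:, support] @ coef
    w = np.zeros(d)
    w[support] = coef
    return w, sorted(support)

# Toy example: recover a 3-sparse vector from a noiseless Gaussian design.
np.random.seed(0)
X = np.random.randn(100, 20)
w_true = np.zeros(20)
w_true[[2, 5, 11]] = [3.0, -2.0, 1.5]
y = X @ w_true
w_hat, support = omp(X, y, 3)
```

The paper's results concern exactly this kind of question: under what conditions (and with how many measurements) does the greedy support selection above provably recover the true sparse vector.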

Clustered Monotone Transforms for Rating Factorization

no code implementations 31 Oct 2018 Gaurush Hiranandani, Raghav Somani, Oluwasanmi Koyejo, Sreangsu Acharyya

This non-linear transformation of the rating scale shatters the low-rank structure of the rating matrix, therefore resulting in a poor fit and, consequently, poor recommendations.

Recommendation Systems

A case study of Empirical Bayes in User-Movie Recommendation system

no code implementations 7 Jul 2017 Arabin Kumar Dey, Raghav Somani, Sreangsu Acharyya

In this article we provide a formulation of empirical Bayes, as described by Atchade (2011), to tune the hyperparameters of priors used in a Bayesian setup of collaborative filtering.
