Search Results for author: Prateek Jain

Found 105 papers, 22 papers with code

Optimization and Analysis of the pAp@k Metric for Recommender Systems

no code implementations ICML 2020 Gaurush Hiranandani, Warut Vijitbenjaronk, Sanmi Koyejo, Prateek Jain

Modern recommendation and notification systems must be robust to data imbalance, limitations on the number of recommendations/notifications, and heterogeneous engagement profiles across users.

Recommendation Systems

MET: Masked Encoding for Tabular Data

no code implementations 17 Jun 2022 Kushal Majmundar, Sachin Goyal, Praneeth Netrapalli, Prateek Jain

Typical contrastive learning based SSL methods require instance-wise data augmentations which are difficult to design for unstructured tabular data.

Contrastive Learning · Representation Learning

DP-PCA: Statistically Optimal and Differentially Private PCA

no code implementations 27 May 2022 Xiyang Liu, Weihao Kong, Prateek Jain, Sewoong Oh

For sub-Gaussian data, we provide nearly optimal statistical error rates even for $n=\tilde O(d)$.
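
For orientation, here is a minimal sketch of the standard Gaussian-mechanism baseline for private PCA (perturb the empirical covariance, then eigendecompose), the kind of approach DP-PCA improves upon; the row clipping, noise calibration, and `gaussian_mechanism_pca` helper are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def gaussian_mechanism_pca(X, k, epsilon, delta):
    """Baseline private PCA: add calibrated Gaussian noise to the covariance."""
    n, d = X.shape
    cov = X.T @ X / n                       # empirical covariance
    sensitivity = 1.0 / n                   # each row (norm <= 1) shifts cov by <= 1/n
    sigma = sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / epsilon
    noise = np.random.normal(0, sigma, (d, d))
    noise = (noise + noise.T) / np.sqrt(2)  # symmetrize so the output stays symmetric
    eigvals, eigvecs = np.linalg.eigh(cov + noise)
    return eigvecs[:, -k:]                  # top-k private principal directions

X = np.random.randn(2000, 10)
X /= np.maximum(1.0, np.linalg.norm(X, axis=1, keepdims=True))  # clip rows to norm <= 1
U = gaussian_mechanism_pca(X, k=2, epsilon=1.0, delta=1e-5)
```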

Matryoshka Representations for Adaptive Deployment

1 code implementation 26 May 2022 Aditya Kusupati, Gantavya Bhatt, Aniket Rege, Matthew Wallingford, Aditya Sinha, Vivek Ramanujan, William Howard-Snyder, Kaifeng Chen, Sham Kakade, Prateek Jain, Ali Farhadi

The flexibility within the learned Matryoshka Representations offers: (a) up to 14x smaller embedding size for ImageNet-1K classification at the same level of accuracy; (b) up to 14x real-world speed-ups for large-scale retrieval on ImageNet-1K and 4K; and (c) up to 2% accuracy improvements for long-tail few-shot classification, all while being as robust as the original representations.

Representation Learning
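
A minimal sketch of the deployment side of this idea, under the paper's premise that nested prefixes of a single embedding are each usable representations; the dimensions, nesting list, and `truncate` helper here are illustrative assumptions.

```python
import numpy as np

full_dim = 2048
nesting = [8, 16, 64, 256, 2048]           # nested granularities (example values)

embedding = np.random.randn(full_dim)      # stand-in for a trained MRL embedding

def truncate(e, m):
    """Use only the first m coordinates, re-normalized for cosine retrieval."""
    v = e[:m]
    return v / np.linalg.norm(v)

coarse = truncate(embedding, nesting[1])   # 16-d vector for a cheap first-pass retrieval
fine = truncate(embedding, nesting[-1])    # full 2048-d vector for re-ranking
```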

Real-time Recognition of Yoga Poses using computer Vision for Smart Health Care

no code implementations 19 Jan 2022 Abhishek Sharma, Yash Shah, Yash Agrawal, Prateek Jain

In this work, a self-assistance based yoga posture identification technique is developed, which helps users perform yoga with a real-time correction feature.

Statistically and Computationally Efficient Linear Meta-representation Learning

no code implementations NeurIPS 2021 Kiran K. Thekumparampil, Prateek Jain, Praneeth Netrapalli, Sewoong Oh

To cope with such data scarcity, meta-representation learning methods train across many related tasks to find a shared (lower-dimensional) representation of the data where all tasks can be solved accurately.

Few-Shot Learning · Representation Learning

Node-Level Differentially Private Graph Neural Networks

no code implementations 23 Nov 2021 Ameya Daigavane, Gagan Madan, Aditya Sinha, Abhradeep Guha Thakurta, Gaurav Aggarwal, Prateek Jain

Even though each node can be involved in the inference for multiple nodes, by employing a careful sensitivity analysis and a non-trivial extension of the privacy-by-amplification technique, our method is able to provide accurate solutions with solid privacy parameters.

Privacy Preserving

Neural Network Compatible Off-Policy Natural Actor-Critic Algorithm

no code implementations 19 Oct 2021 Raghuram Bharadwaj Diddigi, Prateek Jain, Prabuchandran K. J., Shalabh Bhatnagar

Learning optimal behavior from existing data is one of the most important problems in Reinforcement Learning (RL).

Online Target Q-learning with Reverse Experience Replay: Efficiently finding the Optimal Policy for Linear MDPs

no code implementations ICLR 2022 Naman Agarwal, Syomantak Chaudhuri, Prateek Jain, Dheeraj Nagaraj, Praneeth Netrapalli

The starting point of our work is the observation that in practice, Q-learning is used with two important modifications: (i) training with two networks simultaneously, called the online network and the target network (online target learning, or OTL), and (ii) experience replay (ER) (Mnih et al., 2015).

Q-Learning
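
A minimal sketch of the two modifications as they are commonly implemented, here for a toy linear Q-function; the environment, feature map, and update schedule are stand-in assumptions (and note the paper itself analyzes a reverse-order replay variant).

```python
import random
import numpy as np

d, n_actions = 4, 2
w_online = np.zeros((n_actions, d))        # online network (linear Q)
w_target = w_online.copy()                 # target network
buffer, gamma, lr = [], 0.99, 0.05

def phi(s):                                # toy feature map
    return np.asarray(s, dtype=float)

for t in range(10000):
    s = np.random.randn(d); a = random.randrange(n_actions)
    s_next = 0.9 * s + 0.1 * np.random.randn(d)
    r = -np.linalg.norm(s_next)
    buffer.append((s, a, r, s_next))                     # (ii) store transitions for replay

    s_b, a_b, r_b, sn_b = random.choice(buffer)          # replayed sample
    target = r_b + gamma * (w_target @ phi(sn_b)).max()  # (i) bootstrap off the target net
    td_err = target - w_online[a_b] @ phi(s_b)
    w_online[a_b] += lr * td_err * phi(s_b)

    if t % 100 == 0:
        w_target = w_online.copy()                       # periodic target sync
```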

IGLU: Efficient GCN Training via Lazy Updates

no code implementations ICLR 2022 S Deepak Narayanan, Aditya Sinha, Prateek Jain, Purushottam Kar, Sundararajan Sellamanickam

Training multi-layer Graph Convolution Networks (GCN) using standard SGD techniques scales poorly as each descent step ends up updating node embeddings for a large portion of the graph.

Robust Training in High Dimensions via Block Coordinate Geometric Median Descent

1 code implementation 16 Jun 2021 Anish Acharya, Abolfazl Hashemi, Prateek Jain, Sujay Sanghavi, Inderjit S. Dhillon, Ufuk Topcu

Geometric median (GM) is a classical method in statistics for achieving a robust estimation of the uncorrupted data; under gross corruption, it achieves the optimal breakdown point of 0.5.

Ranked #17 on Image Classification on MNIST (Accuracy metric)

Image Classification
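
A minimal sketch of the geometric-median primitive via Weiszfeld's iteration; BGMD's block-coordinate selection is omitted, and the `geometric_median` helper and data below are illustrative.

```python
import numpy as np

def geometric_median(points, iters=100, eps=1e-8):
    """points: (n, d) array; returns an approximate GM minimizing the sum of distances."""
    z = points.mean(axis=0)                      # initialize at the (non-robust) mean
    for _ in range(iters):
        dists = np.linalg.norm(points - z, axis=1)
        w = 1.0 / np.maximum(dists, eps)         # Weiszfeld re-weighting
        z_new = (w[:, None] * points).sum(axis=0) / w.sum()
        if np.linalg.norm(z_new - z) < eps:
            break
        z = z_new
    return z

grads = np.vstack([np.random.randn(45, 5), 50 + np.random.randn(5, 5)])  # 10% corrupted rows
robust = geometric_median(grads)                 # barely moved by the corrupted rows
```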

LLC: Accurate, Multi-purpose Learnt Low-dimensional Binary Codes

1 code implementation NeurIPS 2021 Aditya Kusupati, Matthew Wallingford, Vivek Ramanujan, Raghav Somani, Jae Sung Park, Krishna Pillutla, Prateek Jain, Sham Kakade, Ali Farhadi

We further quantitatively measure the quality of our codes by applying them to efficient image retrieval as well as out-of-distribution (OOD) detection problems.

Image Retrieval · OOD Detection

Near-optimal Offline and Streaming Algorithms for Learning Non-Linear Dynamical Systems

no code implementations NeurIPS 2021 Prateek Jain, Suhas S Kowshik, Dheeraj Nagaraj, Praneeth Netrapalli

In this work, we improve existing results for learning nonlinear systems in a number of ways: a) we provide the first offline algorithm that can learn non-linear dynamical systems without the mixing assumption, b) we significantly improve upon the sample complexity of existing results for mixing systems, c) in the much harder one-pass, streaming setting we study an SGD with Reverse Experience Replay ($\mathsf{SGD-RER}$) method, and demonstrate that for mixing systems, it achieves the same sample complexity as our offline algorithm, d) we justify the expansivity assumption by showing that for the popular ReLU link function -- a non-expansive but easy to learn link function with i.i.d.

Sample Efficient Linear Meta-Learning by Alternating Minimization

no code implementations 18 May 2021 Kiran Koshy Thekumparampil, Prateek Jain, Praneeth Netrapalli, Sewoong Oh

We show that, for a constant subspace dimension, MLLAM obtains nearly-optimal estimation error, despite requiring only $\Omega(\log d)$ samples per task.

Meta-Learning

Streaming Linear System Identification with Reverse Experience Replay

no code implementations NeurIPS 2021 Prateek Jain, Suhas S Kowshik, Dheeraj Nagaraj, Praneeth Netrapalli

Thus, we provide the first -- to the best of our knowledge -- optimal SGD-style algorithm for the classical problem of linear system identification with a first order oracle.

Time Series · Time Series Analysis
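
A minimal sketch of the reverse-replay idea for streaming linear system identification: buffer consecutive observations, then take SGD steps over the buffer in reverse time order to counteract the correlation in Markovian data. The buffer size, step size, and the omitted inter-buffer gaps are simplifying assumptions.

```python
import numpy as np

d, T, B, lr = 3, 20000, 20, 0.1
A_true = 0.8 * np.eye(d)                       # ground-truth dynamics
A_hat = np.zeros((d, d))

x = np.random.randn(d)
buffer = []
for t in range(T):
    x_next = A_true @ x + 0.1 * np.random.randn(d)
    buffer.append((x, x_next))
    x = x_next
    if len(buffer) == B:
        for s, s_next in reversed(buffer):     # the key step: replay the buffer in reverse
            A_hat += lr * np.outer(s_next - A_hat @ s, s)
        buffer.clear()

print(np.linalg.norm(A_hat - A_true))          # estimation error should be small
```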

Do Input Gradients Highlight Discriminative Features?

1 code implementation NeurIPS 2021 Harshay Shah, Prateek Jain, Praneeth Netrapalli

We believe that the DiffROAR evaluation framework and BlockMNIST-based datasets can serve as sanity checks to audit instance-specific interpretability methods; code and data available at https://github.com/harshays/inputgradients.

Image Classification

Optimal Regret Algorithm for Pseudo-1d Bandit Convex Optimization

no code implementations 15 Feb 2021 Aadirupa Saha, Nagarajan Natarajan, Praneeth Netrapalli, Prateek Jain

We study online learning with bandit feedback (i.e., the learner has access only to a zeroth-order oracle) where the cost/reward functions $f_t$ admit a "pseudo-1d" structure, i.e., $f_t(\mathbf{w}) = \ell_t(\mathrm{pred}_t(\mathbf{w}))$, where the output of $\mathrm{pred}_t$ is one-dimensional.

Decision Making · Online Learning

Learning Accurate Decision Trees with Bandit Feedback via Quantized Gradient Descent

2 code implementations 15 Feb 2021 Ajaykrishna Karthikeyan, Naman Jain, Nagarajan Natarajan, Prateek Jain

Decision trees provide a rich family of highly non-linear but efficient models, due to which they continue to be the go-to family of predictive models by practitioners across domains.

Online Learning

Everything You Wanted to Know About Noninvasive Glucose Measurement and Control

no code implementations 22 Jan 2021 Prateek Jain, Amit M. Joshi, Saraju Mohanty

There is a requirement to develop an Internet-of-Medical-Things (IoMT) integrated, Healthcare Cyber-Physical System (H-CPS) based smart healthcare framework for glucose measurement, for the purpose of continuous health monitoring.

Medical Physics

Projection Efficient Subgradient Method and Optimal Nonsmooth Frank-Wolfe Method

no code implementations NeurIPS 2020 Kiran Koshy Thekumparampil, Prateek Jain, Praneeth Netrapalli, Sewoong Oh

Further, instead of a PO, if we only have a linear minimization oracle (LMO, à la Frank-Wolfe) to access the constraint set, an extension of our method, MOLES, finds a feasible $\epsilon$-suboptimal solution using $O(\epsilon^{-2})$ LMO calls and FO calls---both match known lower bounds, resolving a question left open since White (1993).

Programming by Rewards

no code implementations 14 Jul 2020 Nagarajan Natarajan, Ajaykrishna Karthikeyan, Prateek Jain, Ivan Radicek, Sriram Rajamani, Sumit Gulwani, Johannes Gehrke

The goal of the synthesizer is to synthesize a "decision function" $f$ which transforms the features to a decision value for the black-box component so as to maximize the expected reward $E[r \circ f (x)]$ for executing decisions $f(x)$ for various values of $x$.

Program Synthesis

Globally-convergent Iteratively Reweighted Least Squares for Robust Regression Problems

no code implementations 25 Jun 2020 Bhaskar Mukhoty, Govind Gopakumar, Prateek Jain, Purushottam Kar

We provide the first global model recovery results for the IRLS (iteratively reweighted least squares) heuristic for robust regression problems.
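
A minimal sketch of the generic IRLS loop for robust (here, roughly $\ell_1$-style) regression: alternately solve a weighted least-squares problem and re-weight points inversely to their residuals. The exact weighting scheme analyzed in the paper may differ from this textbook variant.

```python
import numpy as np

def irls(X, y, iters=50, eps=1e-6):
    n, d = X.shape
    w = np.ones(n)
    for _ in range(iters):
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))  # weighted LS solve
        r = np.abs(y - X @ beta)
        w = 1.0 / np.maximum(r, eps)          # re-weight by 1/|residual|
    return beta

n, d = 500, 5
X = np.random.randn(n, d); beta_true = np.arange(1, d + 1, dtype=float)
y = X @ beta_true
y[:50] += 100.0                               # 10% gross corruptions
print(irls(X, y))                             # close to beta_true despite corruption
```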

Least Squares Regression with Markovian Data: Fundamental Limits and Algorithms

no code implementations NeurIPS 2020 Guy Bresler, Prateek Jain, Dheeraj Nagaraj, Praneeth Netrapalli, Xian Wu

Our improved rate serves as one of the first results where an algorithm outperforms SGD-DD on an interesting Markov chain and also provides one of the first theoretical analyses to support the use of experience replay in practice.

The Pitfalls of Simplicity Bias in Neural Networks

2 code implementations NeurIPS 2020 Harshay Shah, Kaustav Tamuly, Aditi Raghunathan, Prateek Jain, Praneeth Netrapalli

Furthermore, previous settings that use SB to theoretically justify why neural networks generalize well do not simultaneously capture the non-robustness of neural networks---a widely observed phenomenon in practice [Goodfellow et al. 2014, Jo and Bengio 2017].

COVID-19: Strategies for Allocation of Test Kits

no code implementations 3 Apr 2020 Arpita Biswas, Shruthi Bannur, Prateek Jain, Srujana Merugu

Thus, it is important to allocate a separate budget of test-kits per day targeted towards preventing community spread and detecting new cases early on.

DROCC: Deep Robust One-Class Classification

1 code implementation ICML 2020 Sachin Goyal, Aditi Raghunathan, Moksh Jain, Harsha Vardhan Simhadri, Prateek Jain

Classical approaches for one-class problems such as one-class SVM and isolation forest require careful feature engineering when applied to structured domains like images.

Anomaly Detection · Classification +3

Rich-Item Recommendations for Rich-Users: Exploiting Dynamic and Static Side Information

no code implementations 28 Jan 2020 Amar Budhiraja, Gaurush Hiranandani, Darshak Chhatbar, Aditya Sinha, Navya Yarrabelly, Ayush Choure, Oluwasanmi Koyejo, Prateek Jain

In this paper, we study the problem of recommendation systems where the users and items to be recommended are rich data structures with multiple entity types and with multiple sources of side-information in the form of graphs.

TAG

Provable Non-linear Inductive Matrix Completion

no code implementations NeurIPS 2019 Kai Zhong, Zhao Song, Prateek Jain, Inderjit S. Dhillon

The inductive matrix completion (IMC) method is a standard approach to this problem, where the given query as well as the items are embedded in a common low-dimensional space.

Matrix Completion

On Scaling Data-Driven Loop Invariant Inference

no code implementations 26 Nov 2019 Sahil Bhatia, Saswat Padhi, Nagarajan Natarajan, Rahul Sharma, Prateek Jain

Automated synthesis of inductive invariants is an important problem in software verification.

Learning Functions over Sets via Permutation Adversarial Networks

1 code implementation 12 Jul 2019 Chirag Pabbaraju, Prateek Jain

In this paper, we consider the problem of learning functions over sets, i.e., functions that are invariant to permutations of input set items.

Recommendation Systems

Efficient Algorithms for Smooth Minimax Optimization

2 code implementations NeurIPS 2019 Kiran Koshy Thekumparampil, Prateek Jain, Praneeth Netrapalli, Sewoong Oh

This paper studies first order methods for solving smooth minimax optimization problems $\min_x \max_y g(x, y)$ where $g(\cdot,\cdot)$ is smooth and $g(x,\cdot)$ is concave for each $x$.

Universality Patterns in the Training of Neural Networks

no code implementations 17 May 2019 Raghav Somani, Navin Goyal, Prateek Jain, Praneeth Netrapalli

This paper proposes and demonstrates a surprising pattern in the training of neural networks: there is a one-to-one relation between the values of any pair of losses (such as cross-entropy, mean squared error, 0/1 error, etc.)

Making the Last Iterate of SGD Information Theoretically Optimal

no code implementations 29 Apr 2019 Prateek Jain, Dheeraj Nagaraj, Praneeth Netrapalli

While classical theoretical analysis of SGD for convex problems studies (suffix) \emph{averages} of iterates and obtains information theoretically optimal bounds on suboptimality, the \emph{last point} of SGD is, by far, the most preferred choice in practice.

Adaptive Hard Thresholding for Near-optimal Consistent Robust Regression

no code implementations 19 Mar 2019 Arun Sai Suggala, Kush Bhatia, Pradeep Ravikumar, Prateek Jain

We provide a nearly linear time estimator which consistently estimates the true regression vector, even with $1-o(1)$ fraction of corruptions.

SGD without Replacement: Sharper Rates for General Smooth Convex Functions

no code implementations 4 Mar 2019 Prateek Jain, Dheeraj Nagaraj, Praneeth Netrapalli

For small $K$, we show that SGD without replacement can achieve the same convergence rate as SGD for general smooth strongly-convex functions.

FastGRNN: A Fast, Accurate, Stable and Tiny Kilobyte Sized Gated Recurrent Neural Network

1 code implementation NeurIPS 2018 Aditya Kusupati, Manish Singh, Kush Bhatia, Ashish Kumar, Prateek Jain, Manik Varma

FastRNN addresses these limitations by adding a residual connection that does not constrain the range of the singular values explicitly and has only two extra scalar parameters.

Action Classification · Speech Recognition +2
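
A minimal sketch of the FastRNN update the snippet describes: a plain RNN cell plus a residual connection governed by just two extra scalar parameters; FastGRNN's gating is omitted, and all shapes and values here are illustrative.

```python
import numpy as np

def fastrnn_step(x, h, W, U, b, alpha, beta):
    h_tilde = np.tanh(W @ x + U @ h + b)   # ordinary RNN candidate state
    return alpha * h_tilde + beta * h      # residual mix with two scalar weights

d_in, d_hid = 8, 16
rng = np.random.default_rng(0)
W = 0.1 * rng.standard_normal((d_hid, d_in))
U = 0.1 * rng.standard_normal((d_hid, d_hid))
b = np.zeros(d_hid)
alpha, beta = 0.1, 0.9                     # typical regime: small alpha, beta near 1

h = np.zeros(d_hid)
for x in rng.standard_normal((100, d_in)): # run over a length-100 sequence
    h = fastrnn_step(x, h, W, U, b, alpha, beta)
```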

Support Recovery for Orthogonal Matching Pursuit: Upper and Lower bounds

no code implementations NeurIPS 2018 Raghav Somani, Chirag Gupta, Prateek Jain, Praneeth Netrapalli

This paper studies the problem of sparse regression where the goal is to learn a sparse vector that best optimizes a given objective function.

Generalization Bounds
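
A minimal sketch of Orthogonal Matching Pursuit, the greedy procedure whose support recovery the paper bounds: pick the coordinate most correlated with the current residual, refit by least squares on the chosen support, and repeat.

```python
import numpy as np

def omp(X, y, k):
    n, d = X.shape
    support, residual = [], y.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(X.T @ residual)))   # best-correlated coordinate
        support.append(j)
        beta_s, *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
        residual = y - X[:, support] @ beta_s        # refit, then update the residual
    beta = np.zeros(d)
    beta[support] = beta_s
    return beta

X = np.random.randn(200, 50)
beta_true = np.zeros(50); beta_true[[3, 17, 40]] = [2.0, -1.5, 1.0]
y = X @ beta_true + 0.01 * np.random.randn(200)
print(np.nonzero(omp(X, y, k=3))[0])                 # typically recovers {3, 17, 40}
```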

Multiple Instance Learning for Efficient Sequential Data Classification on Resource-constrained Devices

1 code implementation NeurIPS 2018 Don Dennis, Chirag Pabbaraju, Harsha Vardhan Simhadri, Prateek Jain

We propose a method, EMI-RNN, that exploits these observations by using a multiple instance learning formulation along with an early prediction technique to learn a model that achieves better accuracy compared to baseline models, while simultaneously reducing computation by a large fraction.

General Classification · Multiple Instance Learning +2

Nonlinear Inductive Matrix Completion based on One-layer Neural Networks

no code implementations 26 May 2018 Kai Zhong, Zhao Song, Prateek Jain, Inderjit S. Dhillon

A standard approach to modeling this problem is Inductive Matrix Completion where the predicted rating is modeled as an inner product of the user and the item features projected onto a latent space.

Matrix Completion · Recommendation Systems

Neural-Guided Deductive Search for Real-Time Program Synthesis from Examples

no code implementations ICLR 2018 Ashwin Kalyan, Abhishek Mohta, Oleksandr Polozov, Dhruv Batra, Prateek Jain, Sumit Gulwani

In this work, we propose Neural Guided Deductive Search (NGDS), a hybrid synthesis technique that combines the best of both symbolic logic techniques and statistical models.

Program Synthesis

Differentially Private Matrix Completion Revisited

no code implementations ICML 2018 Prateek Jain, Om Thakkar, Abhradeep Thakurta

We provide the first provably joint differentially private algorithm with formal utility guarantees for the problem of user-level privacy-preserving collaborative filtering.

Collaborative Filtering · Matrix Completion +1

Non-convex Optimization for Machine Learning

no code implementations 21 Dec 2017 Prateek Jain, Purushottam Kar

The goal of this monograph is both to introduce the rich literature in this area and to equip the reader with the tools and techniques needed to analyze these simple procedures for non-convex problems.

Consistent Robust Regression

no code implementations NeurIPS 2017 Kush Bhatia, Prateek Jain, Parameswaran Kamalaruban, Purushottam Kar

We present the first efficient and provably consistent estimator for the robust regression problem.

A Markov Chain Theory Approach to Characterizing the Minimax Optimality of Stochastic Gradient Descent (for Least Squares)

no code implementations 25 Oct 2017 Prateek Jain, Sham M. Kakade, Rahul Kidambi, Praneeth Netrapalli, Venkata Krishna Pillutla, Aaron Sidford

This work provides a simplified proof of the statistical minimax optimality of (iterate averaged) stochastic gradient descent (SGD), for the special case of least squares.

Leveraging Distributional Semantics for Multi-Label Learning

no code implementations 18 Sep 2017 Rahul Wadbude, Vivek Gupta, Piyush Rai, Nagarajan Natarajan, Harish Karnick, Prateek Jain

Our approach is novel in that it highlights interesting connections between label embedding methods used for multi-label learning and paragraph/document embedding methods commonly used for learning representations of text data.

Document Embedding · Multi-Label Learning +2

FlashProfile: A Framework for Synthesizing Data Profiles

no code implementations 17 Sep 2017 Saswat Padhi, Prateek Jain, Daniel Perelman, Oleksandr Polozov, Sumit Gulwani, Todd Millstein

However, manual inspection of data to identify the different formats is infeasible in standard big-data scenarios.

Active Heteroscedastic Regression

no code implementations ICML 2017 Kamalika Chaudhuri, Prateek Jain, Nagarajan Natarajan

In this work, we consider a theoretical analysis of the label requirement of active learning for regression under a heteroscedastic noise model, where the noise depends on the instance.

Active Learning

Nearly Optimal Robust Matrix Completion

no code implementations ICML 2017 Yeshwanth Cherapanamjeri, Kartik Gupta, Prateek Jain

Finally, an application of our result to the robust PCA problem (low-rank+sparse matrix separation) leads to nearly linear time (in matrix dimensions) algorithm for the same; existing state-of-the-art methods require quadratic time.

Low-Rank Matrix Completion

Learning Mixture of Gaussians with Streaming Data

no code implementations NeurIPS 2017 Aditi Raghunathan, Ravishankar Krishnaswamy, Prateek Jain

However, by using a streaming version of the classical (soft-thresholding-based) EM method that exploits the Gaussian distribution explicitly, we show that for a mixture of two Gaussians the true means can be estimated consistently, with estimation error decreasing at nearly optimal rate, and tending to $0$ for $N\rightarrow \infty$.

Recovery Guarantees for One-hidden-layer Neural Networks

no code implementations ICML 2017 Kai Zhong, Zhao Song, Prateek Jain, Peter L. Bartlett, Inderjit S. Dhillon

For activation functions that are also smooth, we show local linear convergence guarantees of gradient descent under a resampling rule.

Accelerating Stochastic Gradient Descent For Least Squares Regression

no code implementations 26 Apr 2017 Prateek Jain, Sham M. Kakade, Rahul Kidambi, Praneeth Netrapalli, Aaron Sidford

There is widespread sentiment that it is not possible to effectively utilize fast gradient methods (e.g., Nesterov's acceleration, conjugate gradient, heavy ball) for the purposes of stochastic optimization due to their instability and error accumulation, a notion made precise in d'Aspremont 2008 and Devolder, Glineur, and Nesterov 2014.

Stochastic Optimization

Thresholding based Efficient Outlier Robust PCA

no code implementations 18 Feb 2017 Yeshwanth Cherapanamjeri, Prateek Jain, Praneeth Netrapalli

That is, given a data matrix $M^*$, where $(1-\alpha)$ fraction of the points are noisy samples from a low-dimensional subspace while $\alpha$ fraction of the points can be arbitrary outliers, the goal is to recover the subspace accurately.

Structured Sparse Regression via Greedy Hard Thresholding

no code implementations NeurIPS 2016 Prateek Jain, Nikhil Rao, Inderjit S. Dhillon

Several learning applications require solving high-dimensional regression problems where the relevant features belong to a small number of (overlapping) groups.

Mixed Linear Regression with Multiple Components

no code implementations NeurIPS 2016 Kai Zhong, Prateek Jain, Inderjit S. Dhillon

Furthermore, our empirical results indicate that even with random initialization, our approach converges to the global optima in linear time, providing speed-up of up to two orders of magnitude.

Parallelizing Stochastic Gradient Descent for Least Squares Regression: mini-batching, averaging, and model misspecification

1 code implementation 12 Oct 2016 Prateek Jain, Sham M. Kakade, Rahul Kidambi, Praneeth Netrapalli, Aaron Sidford

In particular, this work provides a sharp analysis of: (1) mini-batching, a method of averaging many samples of a stochastic gradient to both reduce the variance of the stochastic gradient estimate and for parallelizing SGD and (2) tail-averaging, a method involving averaging the final few iterates of SGD to decrease the variance in SGD's final iterate.
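
A minimal sketch of the two ingredients on a synthetic least-squares problem: mini-batched stochastic gradients, then tail-averaging (averaging only the final iterates). The step size, batch size, and averaging window below are illustrative, not the paper's prescribed constants.

```python
import numpy as np

n, d, batch, lr = 100000, 10, 16, 0.05
w_true = np.ones(d)
w, iterates = np.zeros(d), []

rng = np.random.default_rng(0)
steps = n // batch
for t in range(steps):
    X = rng.standard_normal((batch, d))
    y = X @ w_true + 0.5 * rng.standard_normal(batch)
    grad = X.T @ (X @ w - y) / batch                 # mini-batch least-squares gradient
    w -= lr * grad
    iterates.append(w.copy())

w_tail = np.mean(iterates[steps // 2:], axis=0)      # tail-average: second half only
print(np.linalg.norm(w_tail - w_true))               # lower variance than the last iterate
```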

Efficient and Consistent Robust Time Series Analysis

no code implementations 1 Jul 2016 Kush Bhatia, Prateek Jain, Parameswaran Kamalaruban, Purushottam Kar

We illustrate our methods on synthetic datasets and show that our methods indeed are able to consistently recover the optimal parameters despite a large fraction of points being corrupted.

Time Series · Time Series Analysis

Nearly-optimal Robust Matrix Completion

no code implementations 23 Jun 2016 Yeshwanth Cherapanamjeri, Kartik Gupta, Prateek Jain

Finally, an application of our result to the robust PCA problem (low-rank+sparse matrix separation) leads to nearly linear time (in matrix dimensions) algorithm for the same; existing state-of-the-art methods require quadratic time.

Low-Rank Matrix Completion

Structured Sparse Regression via Greedy Hard-Thresholding

no code implementations 19 Feb 2016 Prateek Jain, Nikhil Rao, Inderjit Dhillon

Several learning applications require solving high-dimensional regression problems where the relevant features belong to a small number of (overlapping) groups.

Sparse Local Embeddings for Extreme Multi-label Classification

no code implementations NeurIPS 2015 Kush Bhatia, Himanshu Jain, Purushottam Kar, Manik Varma, Prateek Jain

The objective in extreme multi-label learning is to train a classifier that can automatically tag a novel data point with the most relevant subset of labels from an extremely large label set.

Classification · Extreme Multi-Label Classification +4

Alternating Minimization for Regression Problems with Vector-valued Outputs

no code implementations NeurIPS 2015 Prateek Jain, Ambuj Tewari

In regression problems involving vector-valued outputs (or equivalently, multiple responses), it is well known that the maximum likelihood estimator (MLE), which takes noise covariance structure into account, can be significantly more accurate than the ordinary least squares (OLS) estimator.

Locally Non-linear Embeddings for Extreme Multi-label Learning

no code implementations 9 Jul 2015 Kush Bhatia, Himanshu Jain, Purushottam Kar, Prateek Jain, Manik Varma

Embedding based approaches make training and prediction tractable by assuming that the training label matrix is low-rank and hence the effective number of labels can be reduced by projecting the high dimensional label vectors onto a low dimensional linear subspace.

Extreme Multi-Label Classification · General Classification +3

Robust Regression via Hard Thresholding

no code implementations NeurIPS 2015 Kush Bhatia, Prateek Jain, Purushottam Kar

In this work, we study a simple hard-thresholding algorithm called TORRENT which, under mild conditions on X, can recover w* exactly even if b corrupts the response variables in an adversarial manner, i.e., both the support and entries of b are selected adversarially after observing X and w*.
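
A minimal sketch of a TORRENT-style hard-thresholding loop: alternately fit least squares on the points currently believed clean and re-select the points with the smallest residuals; the known corruption fraction and fixed iteration count are simplifying assumptions.

```python
import numpy as np

def torrent(X, y, beta_frac, iters=20):
    n, d = X.shape
    keep = int((1 - beta_frac) * n)
    active = np.arange(n)                        # start by trusting all points
    for _ in range(iters):
        w, *_ = np.linalg.lstsq(X[active], y[active], rcond=None)
        residuals = np.abs(y - X @ w)
        active = np.argsort(residuals)[:keep]    # hard-threshold: keep the smallest residuals
    return w

n, d = 1000, 10
X = np.random.randn(n, d); w_star = np.random.randn(d)
y = X @ w_star
corrupt = np.random.choice(n, size=100, replace=False)
y[corrupt] += 20 * np.random.randn(100)          # adversarial-style response corruptions
print(np.linalg.norm(torrent(X, y, beta_frac=0.1) - w_star))
```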

Optimizing Non-decomposable Performance Measures: A Tale of Two Classes

no code implementations 26 May 2015 Harikrishna Narasimhan, Purushottam Kar, Prateek Jain

Modern classification problems frequently present mild to severe label imbalance as well as specific requirements on classification characteristics, and require optimizing performance measures that are non-decomposable over the dataset, such as F-measure.

General Classification

Surrogate Functions for Maximizing Precision at the Top

no code implementations 26 May 2015 Purushottam Kar, Harikrishna Narasimhan, Prateek Jain

At the heart of our results is a family of truly upper bounding surrogates for prec@k. These surrogates are motivated in a principled manner and enjoy attractive properties such as consistency to prec@k under various natural margin/noise conditions.

Multi-Label Classification

To Drop or Not to Drop: Robustness, Consistency and Differential Privacy Properties of Dropout

no code implementations 6 Mar 2015 Prateek Jain, Vivek Kulkarni, Abhradeep Thakurta, Oliver Williams

Moreover, using the above mentioned stability properties of dropout, we design dropout based differentially private algorithms for solving ERMs.

L2 Regularization

Provable Submodular Minimization using Wolfe's Algorithm

no code implementations NeurIPS 2014 Deeparnab Chakrabarty, Prateek Jain, Pravesh Kothari

In 1976, Wolfe proposed an algorithm to find the minimum Euclidean norm point in a polytope, and in 1980, Fujishige showed how Wolfe's algorithm can be used for SFM.

Fast Exact Matrix Completion with Finite Samples

no code implementations 4 Nov 2014 Prateek Jain, Praneeth Netrapalli

In this paper, we present a fast iterative algorithm that solves the matrix completion problem by observing $O(nr^5 \log^3 n)$ entries, which is independent of the condition number and the desired accuracy.

Matrix Completion

Non-convex Robust PCA

no code implementations NeurIPS 2014 Praneeth Netrapalli, U. N. Niranjan, Sujay Sanghavi, Animashree Anandkumar, Prateek Jain

In contrast, existing methods for robust PCA, which are based on convex optimization, have $O(m^2n)$ complexity per iteration, and take $O(1/\epsilon)$ iterations, i.e., exponentially more iterations for the same accuracy.
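
A minimal sketch of the non-convex alternating-projection template for robust PCA: project $M - S$ onto rank-$k$ matrices via a truncated SVD, and $M - L$ onto sparse matrices via entrywise hard thresholding. The paper's method additionally stages the rank and calibrates thresholds, which this sketch skips.

```python
import numpy as np

def robust_pca(M, rank, thresh, iters=30):
    S = np.zeros_like(M)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]        # low-rank projection
        S = np.where(np.abs(M - L) > thresh, M - L, 0)  # sparse projection
    return L, S

m, n, r = 60, 50, 3
L0 = np.random.randn(m, r) @ np.random.randn(r, n)
S0 = np.where(np.random.rand(m, n) < 0.05, 10.0, 0.0)   # 5% gross outliers
L, S = robust_pca(L0 + S0, rank=r, thresh=5.0)
print(np.linalg.norm(L - L0) / np.linalg.norm(L0))      # small relative error
```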

Online and Stochastic Gradient Methods for Non-decomposable Loss Functions

no code implementations NeurIPS 2014 Purushottam Kar, Harikrishna Narasimhan, Prateek Jain

In this work we initiate a study of online learning techniques for such non-decomposable loss functions with an aim to enable incremental learning as well as design scalable solvers for batch problems.

Incremental Learning · Online Learning

On Iterative Hard Thresholding Methods for High-dimensional M-Estimation

no code implementations NeurIPS 2014 Prateek Jain, Ambuj Tewari, Purushottam Kar

Our results rely on a general analysis framework that enables us to analyze several popular hard thresholding style algorithms (such as HTP, CoSaMP, SP) in the high dimensional regression setting.
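
A minimal sketch of the iterative hard thresholding template behind these algorithms: a gradient step on the least-squares loss followed by projection onto $s$-sparse vectors. The step-size rule and data below are illustrative.

```python
import numpy as np

def iht(X, y, s, lr=None, iters=200):
    n, d = X.shape
    lr = lr or 1.0 / np.linalg.norm(X, 2) ** 2   # conservative step from the spectral norm
    w = np.zeros(d)
    for _ in range(iters):
        w = w + lr * X.T @ (y - X @ w)           # gradient step on the least-squares loss
        idx = np.argsort(np.abs(w))[:-s]         # all but the s largest-magnitude entries
        w[idx] = 0.0                             # hard-threshold to an s-sparse vector
    return w

X = np.random.randn(300, 100)
w_star = np.zeros(100); w_star[[5, 42, 77]] = [3.0, -2.0, 1.5]
y = X @ w_star + 0.01 * np.random.randn(300)
print(np.nonzero(iht(X, y, s=3))[0])             # typically recovers {5, 42, 77}
```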

Tighter Low-rank Approximation via Sampling the Leveraged Element

1 code implementation 14 Oct 2014 Srinadh Bhojanapalli, Prateek Jain, Sujay Sanghavi

The first is a new method to directly compute a low-rank approximation (in efficient factored form) to the product of two given matrices; it computes a small random set of entries of the product, and then executes weighted alternating minimization (as before) on these.

Provable Tensor Factorization with Missing Data

1 code implementation NeurIPS 2014 Prateek Jain, Sewoong Oh

We show that under certain standard assumptions, our method can recover a three-mode $n\times n\times n$ dimensional rank-$r$ tensor exactly from $O(n^{3/2} r^5 \log^4 n)$ randomly sampled entries.

Universal Matrix Completion

no code implementations 10 Feb 2014 Srinadh Bhojanapalli, Prateek Jain

The problem of low-rank matrix completion has recently generated a lot of interest leading to several results that offer exact solutions to the problem.

Low-Rank Matrix Completion

Learning Mixtures of Discrete Product Distributions using Spectral Decompositions

no code implementations 12 Nov 2013 Prateek Jain, Sewoong Oh

The main challenge in learning mixtures of discrete product distributions is that these low-rank tensors cannot be obtained directly from the sample moments.

Matrix Completion · Recommendation Systems

Learning Sparsely Used Overcomplete Dictionaries via Alternating Minimization

no code implementations 30 Oct 2013 Alekh Agarwal, Animashree Anandkumar, Prateek Jain, Praneeth Netrapalli

Alternating minimization is a popular heuristic for sparse coding, where the dictionary and the coefficients are estimated in alternate steps, keeping the other fixed.

Memory Limited, Streaming PCA

no code implementations NeurIPS 2013 Ioannis Mitliagkas, Constantine Caramanis, Prateek Jain

Standard algorithms require $O(p^2)$ memory; meanwhile no algorithm can do better than $O(kp)$ memory, since this is what the output itself requires.
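
A minimal sketch of the $O(kp)$-memory regime the snippet refers to: an Oja-style streaming update that only ever stores a $p \times k$ basis, never the $p \times p$ covariance. This is the generic streaming template, not the paper's exact block schedule.

```python
import numpy as np

p, k, T, lr = 50, 3, 20000, 1e-3
rng = np.random.default_rng(0)
basis, _ = np.linalg.qr(rng.standard_normal((p, k)))  # the only O(kp) state we keep

A = rng.standard_normal((p, p)); true_cov = A @ A.T / p
Lchol = np.linalg.cholesky(true_cov + 1e-6 * np.eye(p))

for _ in range(T):
    x = Lchol @ rng.standard_normal(p)         # one streamed sample
    basis += lr * np.outer(x, x @ basis)       # Oja-style rank-one update
    basis, _ = np.linalg.qr(basis)             # re-orthonormalize the basis

# columns of `basis` now approximate the top-k eigenvectors of the covariance
```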

Provable Inductive Matrix Completion

no code implementations 4 Jun 2013 Prateek Jain, Inderjit S. Dhillon

In addition to inductive matrix completion, we show that two other low-rank estimation problems can be studied in our framework: a) general low-rank matrix sensing using rank-1 measurements, and b) multi-label regression with missing labels.

Matrix Completion

Phase Retrieval using Alternating Minimization

1 code implementation NeurIPS 2013 Praneeth Netrapalli, Prateek Jain, Sujay Sanghavi

Empirically, we demonstrate that alternating minimization performs similar to recently proposed convex techniques for this problem (which are based on "lifting" to a convex matrix problem) in sample complexity and robustness to noise.
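
A minimal sketch of alternating minimization for real-valued phase retrieval: alternate between guessing the lost signs of the linear measurements and solving the induced least-squares problem. The paper's spectral initialization and per-step sample splitting are omitted here.

```python
import numpy as np

n, d = 500, 20
rng = np.random.default_rng(1)
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
y = np.abs(A @ x_true)                       # magnitude-only measurements

x = rng.standard_normal(d)                   # random init (the paper uses a spectral init)
for _ in range(50):
    signs = np.sign(A @ x)                   # step 1: guess the lost phases/signs
    x, *_ = np.linalg.lstsq(A, signs * y, rcond=None)  # step 2: least squares

# recovery is only up to a global sign flip
print(min(np.linalg.norm(x - x_true), np.linalg.norm(x + x_true)))
```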

On the Generalization Ability of Online Learning Algorithms for Pairwise Loss Functions

no code implementations 11 May 2013 Purushottam Kar, Bharath K. Sriperumbudur, Prateek Jain, Harish C Karnick

We are also able to analyze a class of memory efficient online learning algorithms for pairwise learning problems that use only a bounded subset of past training samples to update the hypothesis at each step.

Generalization Bounds · Metric Learning +1

Multilabel Classification using Bayesian Compressed Sensing

no code implementations NeurIPS 2012 Ashish Kapoor, Raajay Viswanathan, Prateek Jain

The two key benefits of the model are that a) it can naturally handle datasets that have missing labels and b) it can also measure uncertainty in prediction.

Active Learning · Classification +2

Supervised Learning with Similarity Functions

no code implementations NeurIPS 2012 Purushottam Kar, Prateek Jain

a given supervised learning task and then adapt a well-known landmarking technique to provide efficient algorithms for supervised learning using "good" similarity functions.

General Classification

Orthogonal Matching Pursuit with Replacement

no code implementations NeurIPS 2011 Prateek Jain, Ambuj Tewari, Inderjit S. Dhillon

Our proof techniques are novel and flexible enough to also permit the tightest known analysis of popular iterative algorithms such as CoSaMP and Subspace Pursuit.

Similarity-based Learning via Data Driven Embeddings

no code implementations NeurIPS 2011 Purushottam Kar, Prateek Jain

We propose a landmarking-based approach to obtaining a classifier from such learned goodness criteria.

Inductive Regularized Learning of Kernel Functions

no code implementations NeurIPS 2010 Prateek Jain, Brian Kulis, Inderjit S. Dhillon

Our result shows that the learned kernel matrices parameterize a linear transformation kernel function and can be applied inductively to new data points.

Dimensionality Reduction · General Classification +1

Hashing Hyperplane Queries to Near Points with Applications to Large-Scale Active Learning

no code implementations NeurIPS 2010 Prateek Jain, Sudheendra Vijayanarasimhan, Kristen Grauman

Our first approach maps the data to two-bit binary keys that are locality-sensitive for the angle between the hyperplane normal and a database point.

Active Learning

Matrix Completion from Power-Law Distributed Samples

no code implementations NeurIPS 2009 Raghu Meka, Prateek Jain, Inderjit S. Dhillon

In this paper, we propose a graph theoretic approach to matrix completion that solves the problem for more realistic sampling models.

Low-Rank Matrix Completion

Guaranteed Rank Minimization via Singular Value Projection

1 code implementation NeurIPS 2010 Raghu Meka, Prateek Jain, Inderjit S. Dhillon

Minimizing the rank of a matrix subject to affine constraints is a fundamental problem with many important applications in machine learning and statistics.

Low-Rank Matrix Completion
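
A minimal sketch of Singular Value Projection for matrix completion: a gradient step on the observed entries followed by projection back onto rank-$k$ matrices via a truncated SVD; the step size, sampling rate, and iteration count are illustrative.

```python
import numpy as np

def svp(M_obs, mask, k, lr=1.0, iters=200):
    X = np.zeros_like(M_obs)
    for _ in range(iters):
        G = mask * (X - M_obs)                      # gradient on observed entries only
        U, s, Vt = np.linalg.svd(X - lr * G, full_matrices=False)
        X = (U[:, :k] * s[:k]) @ Vt[:k]             # project back onto rank-k matrices
    return X

m, n, r = 80, 60, 2
M = np.random.randn(m, r) @ np.random.randn(r, n)   # ground-truth low-rank matrix
mask = (np.random.rand(m, n) < 0.4).astype(float)   # observe 40% of the entries
X = svp(M * mask, mask, k=r)
print(np.linalg.norm(X - M) / np.linalg.norm(M))    # small relative recovery error
```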

Online Metric Learning and Fast Similarity Search

no code implementations NeurIPS 2008 Prateek Jain, Brian Kulis, Inderjit S. Dhillon, Kristen Grauman

Metric learning algorithms can provide useful distance functions for a variety of domains, and recent work has shown good accuracy for problems where the learner can access all distance constraints at once.

Metric Learning
