no code implementations • ICML 2020 • Gaurush Hiranandani, Warut Vijitbenjaronk, Sanmi Koyejo, Prateek Jain
Modern recommendation and notification systems must be robust to data imbalance, limitations on the number of recommendations/notifications, and heterogeneous engagement profiles across users.
no code implementations • 9 Jun 2023 • Anshul Nasery, Hardik Shah, Arun Sai Suggala, Prateek Jain
Our algorithm is versatile and can be used with many popular compression methods including pruning, low-rank factorization, and quantization.
1 code implementation • 30 May 2023 • Aniket Rege, Aditya Kusupati, Sharan Ranjit S, Alan Fan, Qingqing Cao, Sham Kakade, Prateek Jain, Ali Farhadi
Finally, we demonstrate that AdANNS can enable inference-time adaptivity for compute-aware search on ANNS indices built non-adaptively on matryoshka representations.
no code implementations • 15 Feb 2023 • Walid Krichene, Prateek Jain, Shuang Song, Mukund Sundararajan, Abhradeep Thakurta, Li Zhang
We study the problem of multi-task learning under user-level differential privacy, in which $n$ users contribute data to $m$ tasks, each involving a subset of users.
no code implementations • 1 Feb 2023 • Abhishek Sharma, Arpit Jain, Shubhangi Sharma, Ashutosh Gupta, Prateek Jain, Saraju P. Mohanty
In this work, multiclass classification is performed on phenotypic data using an SVM model.
no code implementations • 1 Feb 2023 • Depen Morwani, Jatin Batra, Prateek Jain, Praneeth Netrapalli
More concretely, (i) we define SB as the network essentially being a function of a low dimensional projection of the inputs, (ii) theoretically, we show that when the data is linearly separable, the network primarily depends on only the linearly separable ($1$-dimensional) subspace even in the presence of an arbitrarily large number of other, more complex features which could have led to a significantly more robust classifier, (iii) empirically, we show that models trained on real datasets such as Imagenette and Waterbirds-Landbirds indeed depend on a low dimensional projection of the inputs, thereby demonstrating SB on these datasets, and (iv) finally, we present a natural ensemble approach that encourages diversity in models by training successive models on features not used by earlier models, and demonstrate that it yields models that are significantly more robust to Gaussian noise.
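As a toy illustration of this notion of SB (a minimal sketch, not the paper's experiments, and using logistic regression rather than a neural network), gradient descent below concentrates almost all of its weight on a single linearly separable coordinate even though every coordinate carries label signal:

```python
# A minimal sketch (logistic regression, not a neural network) of the flavour
# of SB: when one coordinate is linearly separable, gradient descent puts
# almost all of its weight on that coordinate, even though the remaining
# coordinates also carry label information.
import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 20
y = rng.choice([-1.0, 1.0], size=n)

X = rng.normal(size=(n, d))
X[:, 0] = y * (1.0 + rng.uniform(size=n))   # perfectly separable coordinate
X[:, 1:] += 0.3 * y[:, None]                # weaker signal everywhere else

w = np.zeros(d)
for _ in range(500):                        # gradient descent on logistic loss
    margins = y * (X @ w)
    grad = -(X * (y / (1.0 + np.exp(margins)))[:, None]).mean(axis=0)
    w -= 0.5 * grad

print("weight on separable coordinate:", abs(w[0]))
print("largest weight elsewhere:      ", np.abs(w[1:]).max())
```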
no code implementations • 30 Jan 2023 • Xiyang Liu, Prateek Jain, Weihao Kong, Sewoong Oh, Arun Sai Suggala
Under label-corruption, this is the first efficient linear regression algorithm to guarantee both $(\varepsilon,\delta)$-DP and robustness.
no code implementations • 17 Jan 2023 • Soumyabrata Pal, Arun Sai Suggala, Karthikeyan Shanmugam, Prateek Jain
Instead, we propose LATTICE (Latent bAndiTs via maTrIx ComplEtion) which allows exploitation of the latent cluster structure to provide the minimax optimal regret of $\widetilde{O}(\sqrt{(\mathsf{M}+\mathsf{N})\mathsf{T}})$, when the number of clusters is $\widetilde{O}(1)$.
no code implementations • 15 Dec 2022 • Prateek Jain, Srinjoy Ganguly
In molecular research, simulation & design of molecules are key areas with significant implications for drug development, material science, and other fields.
no code implementations • 15 Nov 2022 • Hasan Mustafa, Sai Nandan Morapakula, Prateek Jain, Srinjoy Ganguly
A moderate protein has about 100 amino acids, and the number of combinations one needs to verify to find the stable structure is enormous.
no code implementations • 11 Oct 2022 • Naman Agarwal, Prateek Jain, Suhas Kowshik, Dheeraj Nagaraj, Praneeth Netrapalli
In this work, we consider the problem of collaborative multi-user reinforcement learning.
no code implementations • 7 Oct 2022 • Soumyabrata Pal, Prateek Varshney, Prateek Jain, Abhradeep Guha Thakurta, Gagan Madan, Gaurav Aggarwal, Pradeep Shenoy, Gaurav Srivastava
We then study the framework in the linear setting, where the problem reduces to that of estimating the sum of a rank-$r$ and a $k$-column sparse matrix using a small number of linear measurements.
no code implementations • 4 Oct 2022 • Sravanti Addepalli, Anshul Nasery, R. Venkatesh Babu, Praneeth Netrapalli, Prateek Jain
To bridge the gap between these two lines of work, we first hypothesize and verify that while SB may not altogether preclude learning complex features, it amplifies simpler features over complex ones.
no code implementations • 8 Sep 2022 • Prateek Jain, Soumyabrata Pal
In each round, the algorithm recommends one item per user, for which it gets a (noisy) reward sampled from a low-rank user-item preference matrix.
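The interaction loop described above is easy to simulate. The sketch below (illustrative names and constants; a naive explore-then-commit recommender that, unlike the paper's method, ignores the low-rank structure) just fixes the setting:

```python
# Environment only: a low-rank user-item preference matrix with noisy rewards,
# plus a naive explore-then-commit recommender. Names and constants are
# illustrative; the paper's algorithms exploit the low-rank structure instead.
import numpy as np

rng = np.random.default_rng(1)
M, N, r, T = 50, 40, 3, 200                 # users, items, rank, rounds
P = rng.normal(size=(M, r)) @ rng.normal(size=(r, N))   # low-rank preferences

def step(items):
    """Recommend one item per user; return noisy rewards."""
    return P[np.arange(M), items] + 0.1 * rng.normal(size=M)

est, counts = np.zeros((M, N)), np.zeros((M, N))
for t in range(T):
    if t < T // 2:                          # explore: one random item per user
        items = rng.integers(0, N, size=M)
    else:                                   # commit: greedy on observed estimates
        items = np.argmax(np.where(counts > 0, est, -np.inf), axis=1)
    rewards = step(items)
    counts[np.arange(M), items] += 1
    est[np.arange(M), items] += (rewards - est[np.arange(M), items]) / counts[np.arange(M), items]

print("final-round mean reward:", rewards.mean())
```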
no code implementations • 6 Sep 2022 • Abhishek Sharma, Pranjal Sharma, Darshan Pincha, Prateek Jain
Yoga has gained worldwide attention because of the increasing levels of stress in modern life, and many resources are available for learning it.
no code implementations • 19 Aug 2022 • Anshul Nasery, Sravanti Addepalli, Praneeth Netrapalli, Prateek Jain
We consider the problem of OOD generalization, where the goal is to train a model that performs well on test distributions that are different from the training distribution.
no code implementations • 18 Aug 2022 • Lovish Madaan, Srinadh Bhojanapalli, Himanshu Jain, Prateek Jain
Based on such hierarchical navigation, we design Treeformer which can use one of two efficient attention layers -- TF-Attention and TC-Attention.
no code implementations • 11 Jul 2022 • Prateek Varshney, Abhradeep Thakurta, Prateek Jain
Compared to existing $(\epsilon, \delta)$-DP techniques which have sub-optimal error bounds, DP-AMBSSGD is able to provide nearly optimal error bounds in terms of key parameters like dimensionality $d$, number of points $N$, and the standard deviation $\sigma$ of the noise in observations.
no code implementations • 17 Jun 2022 • Kushal Majmundar, Sachin Goyal, Praneeth Netrapalli, Prateek Jain
Typical contrastive learning based SSL methods require instance-wise data augmentations which are difficult to design for unstructured tabular data.
no code implementations • 27 May 2022 • Xiyang Liu, Weihao Kong, Prateek Jain, Sewoong Oh
For sub-Gaussian data, we provide nearly optimal statistical error rates even for $n=\tilde O(d)$.
1 code implementation • 26 May 2022 • Aditya Kusupati, Gantavya Bhatt, Aniket Rege, Matthew Wallingford, Aditya Sinha, Vivek Ramanujan, William Howard-Snyder, KaiFeng Chen, Sham Kakade, Prateek Jain, Ali Farhadi
The flexibility within the learned Matryoshka Representations offers: (a) up to 14x smaller embedding size for ImageNet-1K classification at the same level of accuracy; (b) up to 14x real-world speed-ups for large-scale retrieval on ImageNet-1K and 4K; and (c) up to 2% accuracy improvements for long-tail few-shot classification, all while being as robust as the original representations.
Ranked #24 on Image Classification on ObjectNet (using extra training data)
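The nested-prefix training idea behind Matryoshka Representations can be sketched as follows (a minimal sketch under stated assumptions; the encoder, prefix sizes, and classification heads are placeholders, not the paper's exact setup):

```python
# One encoder, a loss applied to nested prefixes of the embedding, so any
# prefix (e.g. the first 64 dims) remains a usable representation on its own.
# The encoder, prefix sizes, and heads below are placeholders.
import torch
import torch.nn as nn

dims = [64, 128, 256, 512]                  # nested prefix sizes
encoder = nn.Sequential(nn.Linear(784, 512), nn.ReLU(), nn.Linear(512, dims[-1]))
heads = nn.ModuleList([nn.Linear(d, 10) for d in dims])  # one classifier per prefix
loss_fn = nn.CrossEntropyLoss()

def matryoshka_loss(x, y):
    z = encoder(x)
    # sum the classification loss over all nested prefixes of the same embedding
    return sum(loss_fn(head(z[:, :d]), y) for d, head in zip(dims, heads))

x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
matryoshka_loss(x, y).backward()
```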
no code implementations • 9 Feb 2022 • Kwangjun Ahn, Prateek Jain, Ziwei Ji, Satyen Kale, Praneeth Netrapalli, Gil I. Shamir
We initiate a formal study of reproducibility in optimization.
no code implementations • 19 Jan 2022 • Abhishek Sharma, Yash Shah, Yash Agrawal, Prateek Jain
In this work, a self-assistance based yoga posture identification technique is developed, which helps users perform yoga with real-time correction feedback.
no code implementations • NeurIPS 2021 • Prateek Jain, John Rush, Adam Smith, Shuang Song, Abhradeep Guha Thakurta
We study personalization of supervised learning with user-level differential privacy.
no code implementations • NeurIPS 2021 • Kiran K. Thekumparampil, Prateek Jain, Praneeth Netrapalli, Sewoong Oh
To cope with such data scarcity, meta-representation learning methods train across many related tasks to find a shared (lower-dimensional) representation of the data where all tasks can be solved accurately.
1 code implementation • 23 Nov 2021 • Ameya Daigavane, Gagan Madan, Aditya Sinha, Abhradeep Guha Thakurta, Gaurav Aggarwal, Prateek Jain
Graph Neural Networks (GNNs) are a popular technique for modelling graph-structured data and computing node-level representations via aggregation of information from the neighborhood of each node.
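A minimal dense version of this aggregation step (illustrative sizes; real GNN libraries use sparse neighborhoods) looks like:

```python
# Each layer averages neighbor features and applies a learned linear map;
# dense adjacency is used here purely for brevity.
import torch
import torch.nn as nn

class MeanAggLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, h, adj):              # adj: (n, n) 0/1 adjacency matrix
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        return torch.relu(self.lin(adj @ h / deg))  # neighborhood mean, then transform

n, d = 6, 8
adj = (torch.rand(n, n) < 0.4).float()
h = torch.randn(n, d)
print(MeanAggLayer(d, 4)(h, adj).shape)     # torch.Size([6, 4])
```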
no code implementations • 19 Oct 2021 • Raghuram Bharadwaj Diddigi, Prateek Jain, Prabuchandran K. J., Shalabh Bhatnagar
Learning optimal behavior from existing data is one of the most important problems in Reinforcement Learning (RL).
no code implementations • ICLR 2022 • Naman Agarwal, Syomantak Chaudhuri, Prateek Jain, Dheeraj Nagaraj, Praneeth Netrapalli
The starting point of our work is the observation that in practice, Q-learning is used with two important modifications: (i) training with two networks simultaneously, an online network and a target network (online target learning, or OTL), and (ii) experience replay (ER) (Mnih et al., 2015).
1 code implementation • ICLR 2022 • S Deepak Narayanan, Aditya Sinha, Prateek Jain, Purushottam Kar, Sundararajan Sellamanickam
Training multi-layer Graph Convolution Networks (GCN) using standard SGD techniques scales poorly as each descent step ends up updating node embeddings for a large portion of the graph.
no code implementations • 20 Jul 2021 • Steve Chien, Prateek Jain, Walid Krichene, Steffen Rendle, Shuang Song, Abhradeep Thakurta, Li Zhang
We study the problem of differentially private (DP) matrix completion under user-level privacy.
2 code implementations • 16 Jun 2021 • Anish Acharya, Abolfazl Hashemi, Prateek Jain, Sujay Sanghavi, Inderjit S. Dhillon, Ufuk Topcu
Geometric median (GM) is a classical method in statistics for achieving a robust estimation of the uncorrupted data; under gross corruption, it achieves the optimal breakdown point of 0.5.
Ranked #19 on Image Classification on MNIST (Accuracy metric)
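The classical estimator itself is easy to compute with the textbook Weiszfeld iteration (standard statistics code, not the paper's buffered or streaming variant):

```python
# Textbook Weiszfeld iteration for the geometric median.
import numpy as np

def geometric_median(X, iters=100, eps=1e-8):
    """X: (n, d) points; returns an approximate geometric median."""
    z = X.mean(axis=0)                      # initialize at the coordinate-wise mean
    for _ in range(iters):
        dist = np.linalg.norm(X - z, axis=1)
        w = 1.0 / np.maximum(dist, eps)     # inverse-distance weights
        z = (w[:, None] * X).sum(axis=0) / w.sum()
    return z

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(size=(80, 2)), 100 + rng.normal(size=(20, 2))])  # 20% gross outliers
print(geometric_median(X))                  # stays near the origin despite outliers
```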
1 code implementation • NeurIPS 2021 • Aditya Kusupati, Matthew Wallingford, Vivek Ramanujan, Raghav Somani, Jae Sung Park, Krishna Pillutla, Prateek Jain, Sham Kakade, Ali Farhadi
We further quantitatively measure the quality of our codes by applying them to efficient image retrieval as well as out-of-distribution (OOD) detection problems.
no code implementations • NeurIPS 2021 • Prateek Jain, Suhas S Kowshik, Dheeraj Nagaraj, Praneeth Netrapalli
In this work, we improve existing results for learning nonlinear systems in a number of ways: a) we provide the first offline algorithm that can learn non-linear dynamical systems without the mixing assumption, b) we significantly improve upon the sample complexity of existing results for mixing systems, c) in the much harder one-pass, streaming setting we study an SGD with Reverse Experience Replay ($\mathsf{SGD-RER}$) method, and demonstrate that for mixing systems, it achieves the same sample complexity as our offline algorithm, d) we justify the expansivity assumption by showing that for the popular ReLU link function -- a non-expansive but easy to learn link function with i.i.d.
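The SGD-RER idea can be sketched for linear system identification $x_{t+1} = A x_t + \text{noise}$ as follows (a hedged sketch; the buffer size, step size, and lack of gaps between buffers are simplifying assumptions):

```python
# Stream the trajectory into small buffers and run SGD over each buffer in
# *reverse* time order, which de-biases the temporally correlated updates.
import numpy as np

rng = np.random.default_rng(3)
d, T, B, lr = 5, 20000, 50, 0.01
A = 0.9 * np.linalg.qr(rng.normal(size=(d, d)))[0]      # stable ground-truth dynamics

A_hat, x, buffer = np.zeros((d, d)), rng.normal(size=d), []
for t in range(T):
    x_next = A @ x + 0.1 * rng.normal(size=d)
    buffer.append((x, x_next))
    x = x_next
    if len(buffer) == B:
        for xs, xn in reversed(buffer):     # replay the buffer backwards
            A_hat += lr * np.outer(xn - A_hat @ xs, xs)
        buffer.clear()

print("estimation error:", np.linalg.norm(A_hat - A))
```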
no code implementations • 18 May 2021 • Kiran Koshy Thekumparampil, Prateek Jain, Praneeth Netrapalli, Sewoong Oh
We show that, for a constant subspace dimension, MLLAM obtains nearly-optimal estimation error, despite requiring only $\Omega(\log d)$ samples per task.
no code implementations • NeurIPS 2021 • Prateek Jain, Suhas S Kowshik, Dheeraj Nagaraj, Praneeth Netrapalli
Thus, we provide the first -- to the best of our knowledge -- optimal SGD-style algorithm for the classical problem of linear system identification with a first order oracle.
1 code implementation • NeurIPS 2021 • Harshay Shah, Prateek Jain, Praneeth Netrapalli
We believe that the DiffROAR evaluation framework and BlockMNIST-based datasets can serve as sanity checks to audit instance-specific interpretability methods; code and data available at https://github.com/harshays/inputgradients.
3 code implementations • 15 Feb 2021 • Ajaykrishna Karthikeyan, Naman Jain, Nagarajan Natarajan, Prateek Jain
Decision trees provide a rich family of highly non-linear but efficient models, due to which they continue to be the go-to family of predictive models by practitioners across domains.
no code implementations • 15 Feb 2021 • Aadirupa Saha, Nagarajan Natarajan, Praneeth Netrapalli, Prateek Jain
We study online learning with bandit feedback (i.e. the learner has access to only a zeroth-order oracle) where cost/reward functions $f_t$ admit a "pseudo-1d" structure, i.e. $f_t(w) = \mathrm{loss}_t(\mathrm{pred}_t(w))$ where the output of $\mathrm{pred}_t$ is one-dimensional.
no code implementations • 22 Jan 2021 • Prateek Jain, Amit M. Joshi, Saraju Mohanty
There is a need to develop an Internet-of-Medical-Things (IoMT) integrated, Healthcare Cyber-Physical System (H-CPS) based smart healthcare framework for glucose measurement, with the purpose of continuous health monitoring.
no code implementations • NeurIPS Workshop CAP 2020 • Sahil Bhatia, Saswat Padhi, Nagarajan Natarajan, Rahul Sharma, Prateek Jain
Automated synthesis of inductive invariants is an important problem in software verification.
no code implementations • NeurIPS 2020 • Kiran Koshy Thekumparampil, Prateek Jain, Praneeth Netrapalli, Sewoong Oh
Further, instead of a PO if we only have a linear minimization oracle (LMO, a la Frank-Wolfe) to access the constraint set, an extension of our method, MOLES, finds a feasible $\epsilon$-suboptimal solution using $O(\epsilon^{-2})$ LMO calls and FO calls---both match known lower bounds, resolving a question left open since White (1993).
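For context, the sketch below is the classical Frank-Wolfe loop that the LMO abstraction comes from (textbook code over the $\ell_1$ ball, not MOLES itself):

```python
# Each step queries only a linear minimization oracle (LMO) over the constraint
# set (here the l1 ball) instead of a projection.
import numpy as np

def lmo_l1(grad, radius=1.0):
    """argmin over the l1 ball of <grad, s>: a signed, scaled basis vector."""
    s = np.zeros_like(grad)
    i = np.argmax(np.abs(grad))
    s[i] = -radius * np.sign(grad[i])
    return s

rng = np.random.default_rng(4)
A, b = rng.normal(size=(30, 10)), rng.normal(size=30)
x = np.zeros(10)
for t in range(200):                        # minimize ||Ax - b||^2 over the l1 ball
    grad = 2 * A.T @ (A @ x - b)
    s = lmo_l1(grad)
    gamma = 2.0 / (t + 2)                   # classical step-size schedule
    x = (1 - gamma) * x + gamma * s
print(np.linalg.norm(A @ x - b) ** 2)
```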
no code implementations • 14 Jul 2020 • Nagarajan Natarajan, Ajaykrishna Karthikeyan, Prateek Jain, Ivan Radicek, Sriram Rajamani, Sumit Gulwani, Johannes Gehrke
The goal of the synthesizer is to synthesize a "decision function" $f$ which transforms the features to a decision value for the black-box component so as to maximize the expected reward $E[r \circ f (x)]$ for executing decisions $f(x)$ for various values of $x$.
no code implementations • 25 Jun 2020 • Bhaskar Mukhoty, Govind Gopakumar, Prateek Jain, Purushottam Kar
We provide the first global model recovery results for the IRLS (iteratively reweighted least squares) heuristic for robust regression problems.
no code implementations • NeurIPS 2020 • Guy Bresler, Prateek Jain, Dheeraj Nagaraj, Praneeth Netrapalli, Xian Wu
Our improved rate serves as one of the first results where an algorithm outperforms SGD-DD on an interesting Markov chain and also provides one of the first theoretical analyses to support the use of experience replay in practice.
2 code implementations • NeurIPS 2020 • Harshay Shah, Kaustav Tamuly, Aditi Raghunathan, Prateek Jain, Praneeth Netrapalli
Furthermore, previous settings that use SB to theoretically justify why neural networks generalize well do not simultaneously capture the non-robustness of neural networks---a widely observed phenomenon in practice [Goodfellow et al. 2014, Jo and Bengio 2017].
no code implementations • LREC 2020 • Dwaipayan Roy, Sumit Bhatia, Prateek Jain
Wikipedia is the largest web-based open encyclopedia covering more than three hundred languages.
no code implementations • 3 Apr 2020 • Arpita Biswas, Shruthi Bannur, Prateek Jain, Srujana Merugu
Thus, it is important to allocate a separate budget of test-kits per day targeted towards preventing community spread and detecting new cases early on.
1 code implementation • ICML 2020 • Sachin Goyal, Aditi Raghunathan, Moksh Jain, Harsha Vardhan Simhadri, Prateek Jain
Classical approaches for one-class problems such as one-class SVM and isolation forest require careful feature engineering when applied to structured domains like images.
Ranked #3 on Anomaly Detection on UEA time-series datasets
2 code implementations • NeurIPS 2020 • Oindrila Saha, Aditya Kusupati, Harsha Vardhan Simhadri, Manik Varma, Prateek Jain
Standard Convolutional Neural Networks (CNNs) designed for computer vision tasks tend to have large intermediate activation maps.
Ranked #26 on Face Detection on WIDER Face (Medium)
1 code implementation • ICML 2020 • Aditya Kusupati, Vivek Ramanujan, Raghav Somani, Mitchell Wortsman, Prateek Jain, Sham Kakade, Ali Farhadi
Sparsity in Deep Neural Networks (DNNs) is studied extensively with the focus of maximizing prediction accuracy given an overall parameter budget.
no code implementations • 28 Jan 2020 • Amar Budhiraja, Gaurush Hiranandani, Darshak Chhatbar, Aditya Sinha, Navya Yarrabelly, Ayush Choure, Oluwasanmi Koyejo, Prateek Jain
In this paper, we study the recommendation problem in which the users and items to be recommended are rich data structures with multiple entity types and multiple sources of side-information in the form of graphs.
2 code implementations • NeurIPS 2019 • Don Dennis, Durmus Alp Emre Acar, Vikram Mandikal, Vinu Sankar Sadasivan, Venkatesh Saligrama, Harsha Vardhan Simhadri, Prateek Jain
The second layer consumes the output of the first layer using a second RNN, thus capturing long-range dependencies.
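A minimal sketch of this two-layer layout (illustrative sizes and chunking, not the paper's exact architecture):

```python
# A first RNN summarizes short chunks; a second RNN consumes one summary per
# chunk, capturing long-range structure cheaply. Sizes are illustrative.
import torch
import torch.nn as nn

class TwoLayerChunkRNN(nn.Module):
    def __init__(self, in_dim=16, hid=32, chunk=8):
        super().__init__()
        self.chunk = chunk
        self.lower = nn.RNN(in_dim, hid, batch_first=True)  # runs within chunks
        self.upper = nn.RNN(hid, hid, batch_first=True)     # runs across chunk summaries

    def forward(self, x):                   # x: (batch, T, in_dim), T divisible by chunk
        b, T, d = x.shape
        chunks = x.reshape(b * (T // self.chunk), self.chunk, d)
        _, h = self.lower(chunks)           # one summary vector per chunk
        summaries = h[-1].reshape(b, T // self.chunk, -1)
        out, _ = self.upper(summaries)
        return out[:, -1]                   # final state summarizes the whole sequence

print(TwoLayerChunkRNN()(torch.randn(4, 64, 16)).shape)   # torch.Size([4, 32])
```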
no code implementations • NeurIPS 2019 • Kai Zhong, Zhao Song, Prateek Jain, Inderjit S. Dhillon
The inductive matrix completion (IMC) method is a standard approach for this problem, where the given query as well as the items are embedded in a common low-dimensional space.
no code implementations • 26 Nov 2019 • Sahil Bhatia, Saswat Padhi, Nagarajan Natarajan, Rahul Sharma, Prateek Jain
Automated synthesis of inductive invariants is an important problem in software verification.
1 code implementation • Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology (UIST'19) 2019 • Shishir G. Patil, Don Dennis, Chirag Pabbaraju, Nadeem Shaheer, Harsha Vardhan Simhadri, Vivek Seshadri, Manik Varma, Prateek Jain
Our in-lab study shows that GesturePod achieves 92% gesture recognition accuracy and can help perform common smartphone tasks faster.
Ranked #1 on Gesture Recognition on GesturePod
1 code implementation • 12 Jul 2019 • Chirag Pabbaraju, Prateek Jain
In this paper, we consider the problem of learning functions over sets, i.e., functions that are invariant to permutations of input set items.
2 code implementations • NeurIPS 2019 • Kiran Koshy Thekumparampil, Prateek Jain, Praneeth Netrapalli, Sewoong Oh
This paper studies first order methods for solving smooth minimax optimization problems $\min_x \max_y g(x, y)$ where $g(\cdot,\cdot)$ is smooth and $g(x,\cdot)$ is concave for each $x$.
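The simplest method for this template is plain gradient descent-ascent, sketched below on a toy objective only to fix the problem setup (the paper's contribution is a faster method than this baseline):

```python
# Plain gradient descent-ascent on g(x, y) = x*y + 0.1*x**2, which is convex
# in x and concave (linear) in y; shown only to fix the problem template.
lr, x, y = 0.05, 1.0, 1.0
for _ in range(2000):
    gx = y + 0.2 * x                        # dg/dx
    gy = x                                  # dg/dy
    x, y = x - lr * gx, y + lr * gy         # descend in x, ascend in y
print(x, y)                                 # approaches the saddle point (0, 0)
```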
no code implementations • 17 May 2019 • Raghav Somani, Navin Goyal, Prateek Jain, Praneeth Netrapalli
This paper proposes and demonstrates a surprising pattern in the training of neural networks: there is a one-to-one relation between the values of any pair of losses (such as cross entropy, mean squared error, 0/1 error, etc.)
no code implementations • 29 Apr 2019 • Prateek Jain, Dheeraj Nagaraj, Praneeth Netrapalli
While classical theoretical analysis of SGD for convex problems studies (suffix) averages of iterates and obtains information-theoretically optimal bounds on suboptimality, the last point of SGD is, by far, the most preferred choice in practice.
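The two choices being contrasted are easy to see side by side (a minimal sketch on streaming least squares; step sizes are illustrative):

```python
# Compare the last iterate of SGD with a suffix average (mean of the final
# half of the iterates) on streaming least squares.
import numpy as np

rng = np.random.default_rng(5)
d, T = 10, 5000
w_star = rng.normal(size=d)
w, iterates = np.zeros(d), []
for t in range(T):
    a = rng.normal(size=d)                  # one fresh sample per step
    y = a @ w_star + 0.5 * rng.normal()
    w -= (1.0 / (t + 10)) * (a @ w - y) * a # SGD step on the squared error
    iterates.append(w.copy())

suffix_avg = np.mean(iterates[T // 2:], axis=0)
print("last-point error:    ", np.linalg.norm(w - w_star))
print("suffix-average error:", np.linalg.norm(suffix_avg - w_star))
```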
no code implementations • 19 Mar 2019 • Arun Sai Suggala, Kush Bhatia, Pradeep Ravikumar, Prateek Jain
We provide a nearly linear time estimator which consistently estimates the true regression vector, even with $1-o(1)$ fraction of corruptions.
no code implementations • 4 Mar 2019 • Prateek Jain, Dheeraj Nagaraj, Praneeth Netrapalli
For small $K$, we show that SGD without replacement can achieve the same convergence rate as SGD for general smooth, strongly-convex functions.
1 code implementation • NeurIPS 2018 • Aditya Kusupati, Manish Singh, Kush Bhatia, Ashish Kumar, Prateek Jain, Manik Varma
FastRNN addresses these limitations by adding a residual connection that does not constrain the range of the singular values explicitly and has only two extra scalar parameters.
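Written from the abstract, the FastRNN-style update is $h_t = \alpha\,\tanh(W x_t + U h_{t-1}) + \beta\, h_{t-1}$ with trainable scalars $\alpha, \beta$ (the "two extra scalar parameters"); the sketch below treats the initialization and nonlinearity as assumptions:

```python
import torch
import torch.nn as nn

class FastRNNCell(nn.Module):
    def __init__(self, in_dim, hid):
        super().__init__()
        self.W = nn.Linear(in_dim, hid)
        self.U = nn.Linear(hid, hid, bias=False)
        self.alpha = nn.Parameter(torch.tensor(0.1))  # weight on the new update
        self.beta = nn.Parameter(torch.tensor(0.9))   # weight on the residual path

    def forward(self, x_t, h_prev):
        h_tilde = torch.tanh(self.W(x_t) + self.U(h_prev))
        return self.alpha * h_tilde + self.beta * h_prev  # residual connection

cell, h = FastRNNCell(16, 32), torch.zeros(4, 32)
for x_t in torch.randn(10, 4, 16):          # unroll over 10 time steps
    h = cell(x_t, h)
```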
1 code implementation • NeurIPS 2018 • Don Dennis, Chirag Pabbaraju, Harsha Vardhan Simhadri, Prateek Jain
We propose a method, EMI-RNN, that exploits these observations by using a multiple instance learning formulation along with an early prediction technique to learn a model that achieves better accuracy compared to baseline models, while simultaneously reducing computation by a large fraction.
no code implementations • NeurIPS 2018 • Raghav Somani, Chirag Gupta, Prateek Jain, Praneeth Netrapalli
This paper studies the problem of sparse regression where the goal is to learn a sparse vector that best optimizes a given objective function.
26 code implementations • 12 Nov 2018 • Ville Bergholm, Josh Izaac, Maria Schuld, Christian Gogolin, Shahnawaz Ahmed, Vishnu Ajith, M. Sohaib Alam, Guillermo Alonso-Linaje, B. AkashNarayanan, Ali Asadi, Juan Miguel Arrazola, Utkarsh Azad, Sam Banning, Carsten Blank, Thomas R Bromley, Benjamin A. Cordier, Jack Ceroni, Alain Delgado, Olivia Di Matteo, Amintor Dusko, Tanya Garg, Diego Guala, Anthony Hayes, Ryan Hill, Aroosa Ijaz, Theodor Isacsson, David Ittah, Soran Jahangiri, Prateek Jain, Edward Jiang, Ankit Khandelwal, Korbinian Kottmann, Robert A. Lang, Christina Lee, Thomas Loke, Angus Lowe, Keri McKiernan, Johannes Jakob Meyer, J. A. Montañez-Barrera, Romain Moyard, Zeyue Niu, Lee James O'Riordan, Steven Oud, Ashish Panigrahi, Chae-Yeun Park, Daniel Polatajko, Nicolás Quesada, Chase Roberts, Nahum Sá, Isidor Schoch, Borun Shi, Shuli Shu, Sukin Sim, Arshpreet Singh, Ingrid Strandberg, Jay Soni, Antal Száva, Slimane Thabet, Rodrigo A. Vargas-Hernández, Trevor Vincent, Nicola Vitucci, Maurice Weber, David Wierichs, Roeland Wiersema, Moritz Willmann, Vincent Wong, Shaoming Zhang, Nathan Killoran
PennyLane's core feature is the ability to compute gradients of variational quantum circuits in a way that is compatible with classical techniques such as backpropagation.
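A minimal example of that core feature, using the public PennyLane API:

```python
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def circuit(theta):
    qml.RX(theta, wires=0)                  # one parametrized rotation
    return qml.expval(qml.PauliZ(0))

theta = np.array(0.3, requires_grad=True)
print(circuit(theta))                       # cos(theta)
print(qml.grad(circuit)(theta))             # -sin(theta)
```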
no code implementations • 26 May 2018 • Kai Zhong, Zhao Song, Prateek Jain, Inderjit S. Dhillon
A standard approach to modeling this problem is Inductive Matrix Completion where the predicted rating is modeled as an inner product of the user and the item features projected onto a latent space.
no code implementations • ICLR 2018 • Ashwin Kalyan, Abhishek Mohta, Oleksandr Polozov, Dhruv Batra, Prateek Jain, Sumit Gulwani
In this work, we propose Neural Guided Deductive Search (NGDS), a hybrid synthesis technique that combines the best of both symbolic logic techniques and statistical models.
2 code implementations • ICLR 2018 • Rahul Kidambi, Praneeth Netrapalli, Prateek Jain, Sham M. Kakade
Extensive empirical results in this paper show that ASGD has performance gains over HB, NAG, and SGD.
no code implementations • 1 Mar 2018 • Srinadh Bhojanapalli, Nicolas Boumal, Prateek Jain, Praneeth Netrapalli
Semidefinite programs (SDP) are important in learning and combinatorial optimization with numerous applications.
no code implementations • ICML 2018 • Prateek Jain, Om Thakkar, Abhradeep Thakurta
We provide the first provably joint differentially private algorithm with formal utility guarantees for the problem of user-level privacy-preserving collaborative filtering.
no code implementations • 21 Dec 2017 • Prateek Jain, Purushottam Kar
The goal of this monograph is both to introduce the rich literature in this area and to equip the reader with the tools and techniques needed to analyze these simple procedures for non-convex problems.
no code implementations • NeurIPS 2017 • Kush Bhatia, Prateek Jain, Parameswaran Kamalaruban, Purushottam Kar
We present the first efficient and provably consistent estimator for the robust regression problem.
no code implementations • 25 Oct 2017 • Prateek Jain, Sham M. Kakade, Rahul Kidambi, Praneeth Netrapalli, Venkata Krishna Pillutla, Aaron Sidford
This work provides a simplified proof of the statistical minimax optimality of (iterate averaged) stochastic gradient descent (SGD), for the special case of least squares.
no code implementations • 18 Sep 2017 • Rahul Wadbude, Vivek Gupta, Piyush Rai, Nagarajan Natarajan, Harish Karnick, Prateek Jain
Our approach is novel in that it highlights interesting connections between label embedding methods used for multi-label learning and paragraph/document embedding methods commonly used for learning representations of text data.
no code implementations • 17 Sep 2017 • Saswat Padhi, Prateek Jain, Daniel Perelman, Oleksandr Polozov, Sumit Gulwani, Todd Millstein
However, manual inspection of data to identify the different formats is infeasible in standard big-data scenarios.
1 code implementation • ICML 2017 • Chirag Gupta, Arun Sai Suggala, Ankit Goyal, Harsha Vardhan Simhadri, Bhargavi Paranjape, Ashish Kumar, Saurabh Goyal, Raghavendra Udupa, Manik Varma, Prateek Jain
Such applications demand prediction models with small storage and computational complexity that do not compromise significantly on accuracy.
no code implementations • ICML 2017 • Yeshwanth Cherapanamjeri, Kartik Gupta, Prateek Jain
Finally, an application of our result to the robust PCA problem (low-rank+sparse matrix separation) leads to nearly linear time (in matrix dimensions) algorithm for the same; existing state-of-the-art methods require quadratic time.
no code implementations • ICML 2017 • Kamalika Chaudhuri, Prateek Jain, Nagarajan Natarajan
In this work, we consider a theoretical analysis of the label requirement of active learning for regression under a heteroscedastic noise model, where the noise depends on the instance.
no code implementations • NeurIPS 2017 • Aditi Raghunathan, Ravishankar Krishnaswamy, Prateek Jain
However, by using a streaming version of the classical (soft-thresholding-based) EM method that exploits the Gaussian distribution explicitly, we show that for a mixture of two Gaussians the true means can be estimated consistently, with estimation error decreasing at a nearly optimal rate, and tending to $0$ as $N\rightarrow \infty$.
no code implementations • ICML 2017 • Kai Zhong, Zhao Song, Prateek Jain, Peter L. Bartlett, Inderjit S. Dhillon
For activation functions that are also smooth, we show $\mathit{local~linear~convergence}$ guarantees of gradient descent under a resampling rule.
no code implementations • 26 Apr 2017 • Prateek Jain, Sham M. Kakade, Rahul Kidambi, Praneeth Netrapalli, Aaron Sidford
There is widespread sentiment that it is not possible to effectively utilize fast gradient methods (e.g. Nesterov's acceleration, conjugate gradient, heavy ball) for the purposes of stochastic optimization due to their instability and error accumulation, a notion made precise in d'Aspremont 2008 and Devolder, Glineur, and Nesterov 2014.
no code implementations • 18 Feb 2017 • Yeshwanth Cherapanamjeri, Prateek Jain, Praneeth Netrapalli
That is, given a data matrix $M^*$, where $(1-\alpha)$ fraction of the points are noisy samples from a low-dimensional subspace while $\alpha$ fraction of the points can be arbitrary outliers, the goal is to recover the subspace accurately.
no code implementations • NeurIPS 2016 • Kai Zhong, Prateek Jain, Inderjit S. Dhillon
Furthermore, our empirical results indicate that even with random initialization, our approach converges to the global optima in linear time, providing speed-up of up to two orders of magnitude.
no code implementations • NeurIPS 2016 • Prateek Jain, Nikhil Rao, Inderjit S. Dhillon
Several learning applications require solving high-dimensional regression problems where the relevant features belong to a small number of (overlapping) groups.
1 code implementation • 12 Oct 2016 • Prateek Jain, Sham M. Kakade, Rahul Kidambi, Praneeth Netrapalli, Aaron Sidford
In particular, this work provides a sharp analysis of: (1) mini-batching, a method of averaging many samples of a stochastic gradient to both reduce the variance of the stochastic gradient estimate and parallelize SGD, and (2) tail-averaging, a method involving averaging the final few iterates of SGD to decrease the variance in SGD's final iterate.
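Both techniques are a few lines on a least-squares toy problem (constants are illustrative, not the paper's settings):

```python
# Mini-batched SGD (average `batch` stochastic gradients per step) followed by
# tail-averaging (average the final half of the iterates) on least squares.
import numpy as np

rng = np.random.default_rng(6)
d, T, batch = 10, 2000, 8
w_star = rng.normal(size=d)

w, tail = np.zeros(d), []
for t in range(T):
    A = rng.normal(size=(batch, d))         # a mini-batch of fresh samples
    y = A @ w_star + 0.3 * rng.normal(size=batch)
    grad = A.T @ (A @ w - y) / batch        # averaged stochastic gradient
    w -= 0.1 * grad
    if t >= T // 2:
        tail.append(w.copy())               # collect the tail for averaging

print("tail-averaged error:", np.linalg.norm(np.mean(tail, axis=0) - w_star))
```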
no code implementations • 1 Jul 2016 • Kush Bhatia, Prateek Jain, Parameswaran Kamalaruban, Purushottam Kar
We illustrate our methods on synthetic datasets and show that our methods indeed are able to consistently recover the optimal parameters despite a large fraction of points being corrupted.
no code implementations • 23 Jun 2016 • Yeshwanth Cherapanamjeri, Kartik Gupta, Prateek Jain
Finally, an application of our result to the robust PCA problem (low-rank+sparse matrix separation) leads to nearly linear time (in matrix dimensions) algorithm for the same; existing state-of-the-art methods require quadratic time.
no code implementations • NeurIPS 2016 • Prateek Jain, Nagarajan Natarajan
We consider the problem of recommending relevant labels (items) for a given data point (user).
no code implementations • 22 Feb 2016 • Prateek Jain, Chi Jin, Sham M. Kakade, Praneeth Netrapalli, Aaron Sidford
This work provides improved guarantees for streaming principal component analysis (PCA).
no code implementations • 19 Feb 2016 • Prateek Jain, Nikhil Rao, Inderjit Dhillon
Several learning applications require solving high-dimensional regression problems where the relevant features belong to a small number of (overlapping) groups.
no code implementations • NeurIPS 2015 • Prateek Jain, Ambuj Tewari
In regression problems involving vector-valued outputs (or equivalently, multiple responses), it is well known that the maximum likelihood estimator (MLE), which takes noise covariance structure into account, can be significantly more accurate than the ordinary least squares (OLS) estimator.
no code implementations • NeurIPS 2015 • Kush Bhatia, Himanshu Jain, Purushottam Kar, Manik Varma, Prateek Jain
The objective in extreme multi-label learning is to train a classifier that can automatically tag a novel data point with the most relevant subset of labels from an extremely large label set.
no code implementations • NeurIPS 2015 • Prateek Jain, Nagarajan Natarajan, Ambuj Tewari
We offer a general framework to derive mistake driven online algorithms and associated loss bounds.
no code implementations • 15 Oct 2015 • Animashree Anandkumar, Prateek Jain, Yang Shi, U. N. Niranjan
Robust tensor CP decomposition involves decomposing a tensor into low rank and sparse components.
no code implementations • 9 Jul 2015 • Kush Bhatia, Himanshu Jain, Purushottam Kar, Prateek Jain, Manik Varma
Embedding based approaches make training and prediction tractable by assuming that the training label matrix is low-rank and hence the effective number of labels can be reduced by projecting the high dimensional label vectors onto a low dimensional linear subspace.
no code implementations • NeurIPS 2015 • Kush Bhatia, Prateek Jain, Purushottam Kar
In this work, we study a simple hard-thresholding algorithm called TORRENT which, under mild conditions on X, can recover w* exactly even if b corrupts the response variables in an adversarial manner, i.e. both the support and entries of b are selected adversarially after observing X and w*.
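A hedged sketch of this style of hard thresholding over data points, written from the abstract rather than the paper's pseudocode:

```python
# Alternately fit least squares on the points currently believed clean, then
# re-select the n-k points with the smallest residuals.
import numpy as np

rng = np.random.default_rng(7)
n, d, k = 200, 5, 30                        # k responses are corrupted
X, w_star = rng.normal(size=(n, d)), rng.normal(size=d)
y = X @ w_star
y[:k] += 10 * rng.normal(size=k)            # gross corruptions on k responses

S = np.arange(n)                            # start by trusting every point
for _ in range(10):
    w, *_ = np.linalg.lstsq(X[S], y[S], rcond=None)
    residuals = np.abs(y - X @ w)
    S = np.argsort(residuals)[: n - k]      # keep the n-k best-fit points

print("recovery error:", np.linalg.norm(w - w_star))
```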
no code implementations • 26 May 2015 • Harikrishna Narasimhan, Purushottam Kar, Prateek Jain
Modern classification problems frequently present mild to severe label imbalance as well as specific requirements on classification characteristics, and require optimizing performance measures that are non-decomposable over the dataset, such as F-measure.
no code implementations • 26 May 2015 • Purushottam Kar, Harikrishna Narasimhan, Prateek Jain
At the heart of our results is a family of truly upper bounding surrogates for prec@k. These surrogates are motivated in a principled manner and enjoy attractive properties such as consistency to prec@k under various natural margin/noise conditions.
no code implementations • 6 Mar 2015 • Prateek Jain, Vivek Kulkarni, Abhradeep Thakurta, Oliver Williams
Moreover, using the above mentioned stability properties of dropout, we design dropout based differentially private algorithms for solving ERMs.
no code implementations • NeurIPS 2014 • Deeparnab Chakrabarty, Prateek Jain, Pravesh Kothari
In 1976, Wolfe proposed an algorithm to find the minimum Euclidean norm point in a polytope, and in 1980, Fujishige showed how Wolfe's algorithm can be used for SFM.
no code implementations • 4 Nov 2014 • Prateek Jain, Praneeth Netrapalli
In this paper, we present a fast iterative algorithm that solves the matrix completion problem by observing $O(nr^5 \log^3 n)$ entries, which is independent of the condition number and the desired accuracy.
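For context, the generic alternating-minimization template for matrix completion can be sketched as follows (a standard template in this literature; not necessarily the paper's specific algorithm or sample scheme):

```python
# Fix one factor, solve a small least-squares problem per row/column for the
# other, and alternate, using observed entries only.
import numpy as np

rng = np.random.default_rng(8)
n, r = 100, 3
M = rng.normal(size=(n, r)) @ rng.normal(size=(r, n))   # rank-r ground truth
mask = rng.random((n, n)) < 0.3             # ~30% of entries observed

U = rng.normal(size=(n, r))
for _ in range(20):
    V = np.stack([np.linalg.lstsq(U[mask[:, j]], M[mask[:, j], j], rcond=None)[0]
                  for j in range(n)], axis=1)           # update V column by column
    U = np.stack([np.linalg.lstsq(V[:, mask[i]].T, M[i, mask[i]], rcond=None)[0]
                  for i in range(n)], axis=0)           # update U row by row

print("relative error:", np.linalg.norm(U @ V - M) / np.linalg.norm(M))
```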
no code implementations • NeurIPS 2014 • Praneeth Netrapalli, U. N. Niranjan, Sujay Sanghavi, Animashree Anandkumar, Prateek Jain
In contrast, existing methods for robust PCA, which are based on convex optimization, have $O(m^2n)$ complexity per iteration, and take $O(1/\epsilon)$ iterations, i.e., exponentially more iterations for the same accuracy.
no code implementations • NeurIPS 2014 • Purushottam Kar, Harikrishna Narasimhan, Prateek Jain
In this work we initiate a study of online learning techniques for such non-decomposable loss functions with an aim to enable incremental learning as well as design scalable solvers for batch problems.
no code implementations • NeurIPS 2014 • Prateek Jain, Ambuj Tewari, Purushottam Kar
Our results rely on a general analysis framework that enables us to analyze several popular hard thresholding style algorithms (such as HTP, CoSaMP, SP) in the high dimensional regression setting.
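A representative member of this family is textbook iterative hard thresholding (IHT), which alternates a gradient step with projection onto the set of s-sparse vectors:

```python
# Iterative hard thresholding: gradient step on the least-squares objective,
# then keep only the s largest-magnitude coefficients.
import numpy as np

def hard_threshold(w, s):
    out = np.zeros_like(w)
    idx = np.argsort(np.abs(w))[-s:]        # indices of the s largest entries
    out[idx] = w[idx]
    return out

rng = np.random.default_rng(9)
n, d, s = 100, 300, 5
X = rng.normal(size=(n, d)) / np.sqrt(n)    # roughly unit-norm columns
w_star = hard_threshold(rng.normal(size=d), s)
y = X @ w_star

w = np.zeros(d)
for _ in range(100):
    w = hard_threshold(w - X.T @ (X @ w - y), s)        # gradient step + projection
print("recovery error:", np.linalg.norm(w - w_star))
```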
1 code implementation • 14 Oct 2014 • Srinadh Bhojanapalli, Prateek Jain, Sujay Sanghavi
The first is a new method to directly compute a low-rank approximation (in efficient factored form) to the product of two given matrices; it computes a small random set of entries of the product, and then executes weighted alternating minimization (as before) on these.
1 code implementation • NeurIPS 2014 • Prateek Jain, Sewoong Oh
We show that under certain standard assumptions, our method can recover a three-mode $n\times n\times n$ dimensional rank-$r$ tensor exactly from $O(n^{3/2} r^5 \log^4 n)$ randomly sampled entries.
no code implementations • 10 Feb 2014 • Srinadh Bhojanapalli, Prateek Jain
The problem of low-rank matrix completion has recently generated a lot of interest leading to several results that offer exact solutions to the problem.
no code implementations • 12 Nov 2013 • Prateek Jain, Sewoong Oh
The main challenge in learning mixtures of discrete product distributions is that these low-rank tensors cannot be obtained directly from the sample moments.
no code implementations • 30 Oct 2013 • Alekh Agarwal, Animashree Anandkumar, Prateek Jain, Praneeth Netrapalli
Alternating minimization is a popular heuristic for sparse coding, where the dictionary and the coefficients are estimated in alternate steps, keeping the other fixed.
no code implementations • 18 Jul 2013 • Hsiang-Fu Yu, Prateek Jain, Purushottam Kar, Inderjit S. Dhillon
The multi-label classification problem has generated significant interest in recent years.
no code implementations • NeurIPS 2013 • Ioannis Mitliagkas, Constantine Caramanis, Prateek Jain
Standard algorithms require $O(p^2)$ memory; meanwhile no algorithm can do better than $O(kp)$ memory, since this is what the output itself requires.
no code implementations • 4 Jun 2013 • Prateek Jain, Inderjit S. Dhillon
In addition to inductive matrix completion, we show that two other low-rank estimation problems can be studied in our framework: a) general low-rank matrix sensing using rank-1 measurements, and b) multi-label regression with missing labels.
1 code implementation • NeurIPS 2013 • Praneeth Netrapalli, Prateek Jain, Sujay Sanghavi
Empirically, we demonstrate that alternating minimization performs similar to recently proposed convex techniques for this problem (which are based on "lifting" to a convex matrix problem) in sample complexity and robustness to noise.
no code implementations • 11 May 2013 • Purushottam Kar, Bharath K. Sriperumbudur, Prateek Jain, Harish C Karnick
We are also able to analyze a class of memory efficient online learning algorithms for pairwise learning problems that use only a bounded subset of past training samples to update the hypothesis at each step.
no code implementations • NeurIPS 2012 • Purushottam Kar, Prateek Jain
a given supervised learning task and then adapt a well-known landmarking technique to provide efficient algorithms for supervised learning using ''good'' similarity functions.
no code implementations • NeurIPS 2012 • Ashish Kapoor, Raajay Viswanathan, Prateek Jain
The two key benefits of the model are that a) it can naturally handle datasets that have missing labels and b) it can also measure uncertainty in prediction.
no code implementations • NeurIPS 2011 • Purushottam Kar, Prateek Jain
We propose a landmarking-based approach to obtaining a classifier from such learned goodness criteria.
no code implementations • NeurIPS 2011 • Prateek Jain, Ambuj Tewari, Inderjit S. Dhillon
Our proof techniques are novel and flexible enough to also permit the tightest known analysis of popular iterative algorithms such as CoSaMP and Subspace Pursuit.
no code implementations • NeurIPS 2010 • Prateek Jain, Sudheendra Vijayanarasimhan, Kristen Grauman
Our first approach maps the data to two-bit binary keys that are locality-sensitive for the angle between the hyperplane normal and a database point.
no code implementations • NeurIPS 2010 • Prateek Jain, Brian Kulis, Inderjit S. Dhillon
Our result shows that the learned kernel matrices parameterize a linear transformation kernel function and can be applied inductively to new data points.
no code implementations • NeurIPS 2009 • Raghu Meka, Prateek Jain, Inderjit S. Dhillon
In this paper, we propose a graph theoretic approach to matrix completion that solves the problem for more realistic sampling models.
1 code implementation • NeurIPS 2010 • Raghu Meka, Prateek Jain, Inderjit S. Dhillon
Minimizing the rank of a matrix subject to affine constraints is a fundamental problem with many important applications in machine learning and statistics.
no code implementations • NeurIPS 2008 • Prateek Jain, Brian Kulis, Inderjit S. Dhillon, Kristen Grauman
Metric learning algorithms can provide useful distance functions for a variety of domains, and recent work has shown good accuracy for problems where the learner can access all distance constraints at once.