Search Results for author: Ravi Kumar

Found 40 papers, 4 papers with code

Machine Learning based Framework for Robust Price-Sensitivity Estimation with Application to Airline Pricing

no code implementations4 May 2022 Ravi Kumar, Shahin Boluki, Karl Isler, Jonas Rauch, Darius Walczak

The estimation of price-sensitivity parameters of this model via direct one-stage regression techniques may lead to biased estimates.

Parsimonious Learning-Augmented Caching

no code implementations9 Feb 2022 Sungjin Im, Ravi Kumar, Aditya Petety, Manish Purohit

Learning-augmented algorithms -- in which traditional algorithms are augmented with machine-learned predictions -- have emerged as a framework to go beyond worst-case analysis.

User-Level Differentially Private Learning via Correlated Sampling

no code implementations NeurIPS 2021 Badih Ghazi, Ravi Kumar, Pasin Manurangsi

Most works in learning with differential privacy (DP) have focused on the setting where each user has a single sample.

Online Knapsack with Frequency Predictions

no code implementations NeurIPS 2021 Sungjin Im, Ravi Kumar, Mahshid Montazer Qaem, Manish Purohit

There has been recent interest in using machine-learned predictions to improve the worst-case guarantees of online algorithms.

Logarithmic Regret from Sublinear Hints

no code implementations NeurIPS 2021 Aditya Bhaskara, Ashok Cutkosky, Ravi Kumar, Manish Purohit

We consider the online linear optimization problem, where at every step the algorithm plays a point $x_t$ in the unit ball, and suffers loss $\langle c_t, x_t\rangle$ for some cost vector $c_t$ that is then revealed to the algorithm.

online learning
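The setting above (play $x_t$ in the unit ball, suffer $\langle c_t, x_t\rangle$, observe $c_t$ afterwards) can be illustrated with a minimal projected online gradient descent sketch. The function names and step size are illustrative assumptions, not the paper's hint-based algorithm:

```python
import numpy as np

def project_unit_ball(x):
    """Project x onto the Euclidean unit ball."""
    norm = np.linalg.norm(x)
    return x if norm <= 1.0 else x / norm

def online_gradient_descent(costs, eta=0.1):
    """Play x_t in the unit ball, suffer <c_t, x_t>, then take a projected gradient step."""
    x = np.zeros(len(costs[0]))
    total_loss = 0.0
    for c in costs:
        c = np.asarray(c, dtype=float)
        total_loss += float(c @ x)          # loss <c_t, x_t> is revealed after playing x_t
        x = project_unit_ball(x - eta * c)  # gradient of <c, x> w.r.t. x is c
    return total_loss
```

With a fixed cost vector, the iterate drifts toward the opposite direction of the cost and the cumulative loss becomes negative, which is the baseline behavior the paper's hints improve upon.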

User-Level Private Learning via Correlated Sampling

no code implementations21 Oct 2021 Badih Ghazi, Ravi Kumar, Pasin Manurangsi

Most works in learning with differential privacy (DP) have focused on the setting where each user has a single sample.

Large-Scale Differentially Private BERT

no code implementations3 Aug 2021 Rohan Anil, Badih Ghazi, Vineet Gupta, Ravi Kumar, Pasin Manurangsi

In this work, we study the large-scale pretraining of BERT-Large with differentially private SGD (DP-SGD).

Language Modelling

Locally Private k-Means in One Round

no code implementations20 Apr 2021 Alisa Chang, Badih Ghazi, Ravi Kumar, Pasin Manurangsi

We provide an approximation algorithm for k-means clustering in the one-round (aka non-interactive) local model of differential privacy (DP).

Performance Dependency of LSTM and NAR Beamformers With Respect to Sensor Array Properties in V2I Scenario

no code implementations17 Feb 2021 Prateek Bhadauria, Ravi Kumar, Sanjay Sharma

In this work, long short-term memory (LSTM) based deep learning and a nonlinear autoregressive (NAR) regression technique are employed to predict the angles between roadside units and user equipment. Advance prediction of transmit and receive signals enables reliable vehicle-to-infrastructure communication.

Time Series

Deep Learning with Label Differential Privacy

no code implementations NeurIPS 2021 Badih Ghazi, Noah Golowich, Ravi Kumar, Pasin Manurangsi, Chiyuan Zhang

The Randomized Response (RR) algorithm is a classical technique to improve robustness in survey aggregation, and has been widely adopted in applications with differential privacy guarantees.

Multi-class Classification
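The Randomized Response mechanism mentioned above has a standard textbook form; this sketch shows the classic $K$-ary version, which the label-DP training in the paper builds on (this is not the paper's training procedure itself):

```python
import math
import random

def randomized_response(label, num_classes, epsilon, rng=random):
    """Classic K-ary randomized response: keep the true label with
    probability e^eps / (e^eps + K - 1); otherwise output a uniformly
    random *other* label. Satisfies eps-DP for the label."""
    p_keep = math.exp(epsilon) / (math.exp(epsilon) + num_classes - 1)
    if rng.random() < p_keep:
        return label
    other = rng.randrange(num_classes - 1)  # uniform over the K-1 other labels
    return other if other < label else other + 1
```

Aggregate label counts collected this way are biased toward uniform by a known factor, so they can be debiased exactly when estimating class frequencies.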

On Avoiding the Union Bound When Answering Multiple Differentially Private Queries

no code implementations16 Dec 2020 Badih Ghazi, Ravi Kumar, Pasin Manurangsi

On the other hand, the algorithm of Dagan and Kur has a remarkable advantage that the $\ell_{\infty}$ error bound of $O(\frac{1}{\epsilon}\sqrt{k \log \frac{1}{\delta}})$ holds not only in expectation but always (i.e., with probability one), while we can only get a high probability (or expected) guarantee on the error.

Sample-efficient proper PAC learning with approximate differential privacy

no code implementations7 Dec 2020 Badih Ghazi, Noah Golowich, Ravi Kumar, Pasin Manurangsi

In this paper we prove that the sample complexity of properly learning a class of Littlestone dimension $d$ with approximate differential privacy is $\tilde O(d^6)$, ignoring privacy and accuracy parameters.

Robust and Private Learning of Halfspaces

no code implementations30 Nov 2020 Badih Ghazi, Ravi Kumar, Pasin Manurangsi, Thao Nguyen

In this work, we study the trade-off between differential privacy and adversarial robustness under L2-perturbations in the context of learning halfspaces.

Adversarial Robustness

Online Linear Optimization with Many Hints

no code implementations NeurIPS 2020 Aditya Bhaskara, Ashok Cutkosky, Ravi Kumar, Manish Purohit

We study an online linear optimization (OLO) problem in which the learner is provided access to $K$ "hint" vectors in each round prior to making a decision.

On Additive Approximate Submodularity

no code implementations6 Oct 2020 Flavio Chierichetti, Anirban Dasgupta, Ravi Kumar

We show that an approximately submodular function defined on a ground set of $n$ elements is $O(n^2)$ pointwise-close to a submodular function.

On Distributed Differential Privacy and Counting Distinct Elements

no code implementations21 Sep 2020 Lijie Chen, Badih Ghazi, Ravi Kumar, Pasin Manurangsi

We study the setup where each of $n$ users holds an element from a discrete set, and the goal is to count the number of distinct elements across all users, under the constraint of $(\epsilon, \delta)$-differential privacy: in the non-interactive local setting, we prove that the additive error of any protocol is $\Omega(n)$ for any constant $\epsilon$ and for any $\delta$ inverse polynomial in $n$.

Differentially Private Clustering: Tight Approximation Ratios

no code implementations NeurIPS 2020 Badih Ghazi, Ravi Kumar, Pasin Manurangsi

For several basic clustering problems, including Euclidean DensestBall, 1-Cluster, k-means, and k-median, we give efficient differentially private algorithms that achieve essentially the same approximation ratios as those that can be obtained by any non-private algorithm, while incurring only small additive errors.

Near-tight closure bounds for Littlestone and threshold dimensions

no code implementations7 Jul 2020 Badih Ghazi, Noah Golowich, Ravi Kumar, Pasin Manurangsi

We study closure properties for the Littlestone and threshold dimensions of binary hypothesis classes.

Fair Hierarchical Clustering

no code implementations NeurIPS 2020 Sara Ahmadian, Alessandro Epasto, Marina Knittel, Ravi Kumar, Mohammad Mahdian, Benjamin Moseley, Philip Pham, Sergei Vassilvitskii, Yuyan Wang

As machine learning has become more prevalent, researchers have begun to recognize the necessity of ensuring machine learning systems are fair.

Fairness

A Note on Double Pooling Tests

1 code implementation3 Apr 2020 Andrei Z. Broder, Ravi Kumar

We present double pooling, a simple, easy-to-implement variation on test pooling, that in certain ranges for the a priori probability of a positive test, is significantly more efficient than the standard single pooling approach (the Dorfman method).

Discrete Mathematics Information Theory Methodology
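For context on the baseline mentioned above: under the standard single-pooling (Dorfman) method, a pool of size $k$ with per-person positive probability $p$ costs $1/k + 1 - (1-p)^k$ expected tests per person (one pooled test, plus individual retests if the pool is positive). A small sketch of this baseline (double pooling itself is not shown):

```python
def dorfman_tests_per_person(p, k):
    """Expected tests per person under single (Dorfman) pooling:
    1/k for the pooled test, plus 1 - (1-p)^k for the retest round."""
    return 1.0 / k + 1.0 - (1.0 - p) ** k

def best_pool_size(p, max_k=100):
    """Pool size minimizing expected tests per person, for pools of size >= 2."""
    return min(range(2, max_k + 1), key=lambda k: dorfman_tests_per_person(p, k))
```

For $p = 0.01$ the optimal pool size is 11, at roughly 0.196 tests per person versus 1 for individual testing; double pooling aims to improve on this in certain ranges of $p$.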

Online Learning with Imperfect Hints

no code implementations ICML 2020 Aditya Bhaskara, Ashok Cutkosky, Ravi Kumar, Manish Purohit

We consider a variant of the classical online linear optimization problem in which at every step, the online player receives a "hint" vector before choosing the action for that round.

online learning

Fair Correlation Clustering

1 code implementation6 Feb 2020 Sara Ahmadian, Alessandro Epasto, Ravi Kumar, Mohammad Mahdian

We define a fairlet decomposition with cost similar to the $k$-median cost and this allows us to obtain approximation algorithms for a wide range of fairness constraints.

Combinatorial Optimization Fairness

Preventing Adversarial Use of Datasets through Fair Core-Set Construction

no code implementations24 Oct 2019 Benjamin Spector, Ravi Kumar, Andrew Tomkins

We propose improving the privacy properties of a dataset by publishing only a strategically chosen "core-set" of the data containing a subset of the instances.

On the Power of Multiple Anonymous Messages

no code implementations29 Aug 2019 Badih Ghazi, Noah Golowich, Ravi Kumar, Rasmus Pagh, Ameya Velingker

- Protocols in the multi-message shuffled model with $poly(\log{B}, \log{n})$ bits of communication per user and $poly\log{B}$ error, which provide an exponential improvement on the error compared to what is possible with single-message algorithms.

Testing Mixtures of Discrete Distributions

no code implementations6 Jul 2019 Maryam Aliakbarpour, Ravi Kumar, Ronitt Rubinfeld

In our model, the noisy distribution is a mixture of the original distribution and noise, where the latter is known to the tester either explicitly or via sample access; the form of the noise is also known a priori.

Clustering without Over-Representation

no code implementations29 May 2019 Sara Ahmadian, Alessandro Epasto, Ravi Kumar, Mohammad Mahdian

In this paper we consider clustering problems in which each point is endowed with a color.

On the Learnability of Deep Random Networks

no code implementations8 Apr 2019 Abhimanyu Das, Sreenivas Gollapudi, Ravi Kumar, Rina Panigrahy

In this paper we study the learnability of deep random networks from both theoretical and practical points of view.

Mallows Models for Top-k Lists

no code implementations NeurIPS 2018 Flavio Chierichetti, Anirban Dasgupta, Shahrzad Haddadan, Ravi Kumar, Silvio Lattanzi

The classic Mallows model is a widely-used tool to realize distributions on permutations.

Improving Online Algorithms via ML Predictions

no code implementations NeurIPS 2018 Manish Purohit, Zoya Svitkina, Ravi Kumar

In this work we study the problem of using machine-learned predictions to improve performance of online algorithms.

Learning a Mixture of Two Multinomial Logits

no code implementations ICML 2018 Flavio Chierichetti, Ravi Kumar, Andrew Tomkins

In this model, a user is offered a slate of choices (a subset of a finite universe of $n$ items), and selects exactly one item from the slate, each with probability proportional to its (positive) weight.
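The choice rule described above (selection probability proportional to positive weight within the offered slate) can be sketched directly. The weight dictionaries and mixing weight `alpha` are illustrative assumptions; this is the model, not the paper's learning algorithm:

```python
def slate_choice_probs(weights, slate):
    """Multinomial logit: P(choose i | slate) = w_i / sum of weights on the slate."""
    total = sum(weights[i] for i in slate)
    return {i: weights[i] / total for i in slate}

def mixture_choice_probs(w1, w2, alpha, slate):
    """Two-component mixture: with probability alpha the user follows w1, else w2."""
    p1 = slate_choice_probs(w1, slate)
    p2 = slate_choice_probs(w2, slate)
    return {i: alpha * p1[i] + (1 - alpha) * p2[i] for i in slate}
```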

Fair Clustering Through Fairlets

2 code implementations NeurIPS 2017 Flavio Chierichetti, Ravi Kumar, Silvio Lattanzi, Sergei Vassilvitskii

We show that any fair clustering problem can be decomposed into first finding good fairlets, and then using existing machinery for traditional clustering algorithms.

Algorithms for $\ell_p$ Low-Rank Approximation

no code implementations ICML 2017 Flavio Chierichetti, Sreenivas Gollapudi, Ravi Kumar, Silvio Lattanzi, Rina Panigrahy, David P. Woodruff

We consider the problem of approximating a given matrix by a low-rank matrix so as to minimize the entrywise $\ell_p$-approximation error, for any $p \geq 1$; the case $p = 2$ is the classical SVD problem.

Linear Additive Markov Processes

no code implementations5 Apr 2017 Ravi Kumar, Maithra Raghu, Tamas Sarlos, Andrew Tomkins

We introduce LAMP: the Linear Additive Markov Process.

On Mixtures of Markov Chains

no code implementations NeurIPS 2016 Rishi Gupta, Ravi Kumar, Sergei Vassilvitskii

We study the problem of reconstructing a mixture of Markov chains from the trajectories generated by random walks through the state space.

Conversational flow in Oxford-style debates

no code implementations NAACL 2016 Justine Zhang, Ravi Kumar, Sujith Ravi, Cristian Danescu-Niculescu-Mizil

Public debates are a common platform for presenting and juxtaposing diverging views on important issues.

Selecting Diverse Features via Spectral Regularization

no code implementations NeurIPS 2012 Abhimanyu Das, Anirban Dasgupta, Ravi Kumar

We compare our algorithms to traditional greedy and $\ell_1$-regularization schemes and show that we obtain a more diverse set of features that result in the regression problem being stable under perturbations.

Scalable K-Means++

2 code implementations29 Mar 2012 Bahman Bahmani, Benjamin Moseley, Andrea Vattani, Ravi Kumar, Sergei Vassilvitskii

The recently proposed k-means++ initialization algorithm achieves this, obtaining an initial set of centers that is provably close to the optimum solution.

Databases
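The k-means++ initialization referenced above uses $D^2$ sampling: the first center is uniform over the data, and each subsequent center is drawn with probability proportional to its squared distance to the nearest already-chosen center. A minimal 2-D sketch of this seeding step (the paper's scalable variant samples many points per round in parallel, which is not shown here):

```python
import random

def kmeans_pp_init(points, k, rng=random):
    """k-means++ seeding on 2-D points (tuples): uniform first center,
    then D^2 sampling for each subsequent center."""
    centers = [rng.choice(points)]
    while len(centers) < k:
        # Squared distance from each point to its nearest chosen center.
        d2 = [min((px - cx) ** 2 + (py - cy) ** 2 for cx, cy in centers)
              for px, py in points]
        total = sum(d2)
        r = rng.random() * total
        acc = 0.0
        chosen = points[-1]  # fallback for floating-point edge cases
        for pt, w in zip(points, d2):
            acc += w
            if acc > r:      # sample proportional to d2
                chosen = pt
                break
        centers.append(chosen)
    return centers
```

Because already-chosen centers have zero $D^2$ weight, duplicates are never re-selected, which is what makes the initial centers spread out and provably close to optimal.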
