
no code implementations • 4 May 2022 • Ravi Kumar, Shahin Boluki, Karl Isler, Jonas Rauch, Darius Walczak

The estimation of price-sensitivity parameters of this model via direct one-stage regression techniques may lead to biased estimates.

no code implementations • 9 Feb 2022 • Sungjin Im, Ravi Kumar, Aditya Petety, Manish Purohit

Learning-augmented algorithms -- in which traditional algorithms are augmented with machine-learned predictions -- have emerged as a framework to go beyond worst-case analysis.

no code implementations • NeurIPS 2021 • Badih Ghazi, Ravi Kumar, Pasin Manurangsi

Most works in learning with differential privacy (DP) have focused on the setting where each user has a single sample.

no code implementations • NeurIPS 2021 • Sungjin Im, Ravi Kumar, Mahshid Montazer Qaem, Manish Purohit

There has been recent interest in using machine-learned predictions to improve the worst-case guarantees of online algorithms.

no code implementations • NeurIPS 2021 • Aditya Bhaskara, Ashok Cutkosky, Ravi Kumar, Manish Purohit

We consider the online linear optimization problem, where at every step the algorithm plays a point $x_t$ in the unit ball, and suffers loss $\langle c_t, x_t\rangle$ for some cost vector $c_t$ that is then revealed to the algorithm.
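The setting above admits a classic baseline: online gradient descent on the unit ball, which steps against each revealed cost vector and projects back, giving $O(\sqrt{T})$ regret for linear losses. The sketch below is illustrative (a standard textbook scheme, not the paper's algorithm); the function name and step-size schedule are assumptions.

```python
import math

def ogd_unit_ball(costs, lr_scale=1.0):
    """Online gradient descent for linear losses <c_t, x_t> on the unit ball:
    play x_t, observe cost vector c_t, step against it, project back."""
    d = len(costs[0])
    x = [0.0] * d
    total_loss = 0.0
    for t, c in enumerate(costs, start=1):
        total_loss += sum(ci * xi for ci, xi in zip(c, x))
        eta = lr_scale / math.sqrt(t)            # standard 1/sqrt(t) step size
        x = [xi - eta * ci for xi, ci in zip(x, c)]
        norm = math.sqrt(sum(xi * xi for xi in x))
        if norm > 1.0:                           # project onto the unit ball
            x = [xi / norm for xi in x]
    return x, total_loss
```

With a fixed cost vector the iterate converges to the best point in hindsight on the ball boundary.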

no code implementations • 21 Oct 2021 • Badih Ghazi, Ravi Kumar, Pasin Manurangsi

Most works in learning with differential privacy (DP) have focused on the setting where each user has a single sample.

no code implementations • 3 Aug 2021 • Rohan Anil, Badih Ghazi, Vineet Gupta, Ravi Kumar, Pasin Manurangsi

In this work, we study the large-scale pretraining of BERT-Large with differentially private SGD (DP-SGD).

no code implementations • 20 Apr 2021 • Alisa Chang, Badih Ghazi, Ravi Kumar, Pasin Manurangsi

We provide an approximation algorithm for k-means clustering in the one-round (aka non-interactive) local model of differential privacy (DP).

no code implementations • 17 Feb 2021 • Prateek Bhadauria, Ravi Kumar, Sanjay Sharma

In this work, long short-term memory (LSTM) based deep learning and a nonlinear autoregressive regression technique are employed to predict the angles between roadside units and user equipment. Advance prediction of transmit and receive signals enables reliable vehicle-to-infrastructure communication.

no code implementations • NeurIPS 2021 • Badih Ghazi, Noah Golowich, Ravi Kumar, Pasin Manurangsi, Chiyuan Zhang

The Randomized Response (RR) algorithm is a classical technique to improve robustness in survey aggregation, and has been widely adopted in applications with differential privacy guarantees.
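For intuition, the classical binary randomized-response mechanism can be sketched as below; this is the textbook version, not the paper's specific construction, and the function names are illustrative.

```python
import math
import random

def randomized_response(true_bit, epsilon=1.0):
    """Classical binary randomized response: report the true bit with
    probability e^eps / (e^eps + 1), otherwise report its flip."""
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return true_bit if random.random() < p_truth else 1 - true_bit

def debiased_frequency(reports, epsilon=1.0):
    """Unbiased estimate of the true fraction of 1s from the noisy reports:
    E[report] = (1-p) + f*(2p-1), so invert that affine map."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    observed = sum(reports) / len(reports)
    return (observed - (1.0 - p)) / (2.0 * p - 1.0)
```

Each report individually satisfies $\epsilon$-DP, while the debiased aggregate recovers the population frequency up to sampling noise.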

no code implementations • 16 Dec 2020 • Badih Ghazi, Ravi Kumar, Pasin Manurangsi

On the other hand, the algorithm of Dagan and Kur has a remarkable advantage: its $\ell_{\infty}$ error bound of $O(\frac{1}{\epsilon}\sqrt{k \log \frac{1}{\delta}})$ holds not only in expectation but always (i.e., with probability one), whereas we can only obtain a high-probability (or expected) guarantee on the error.

no code implementations • 7 Dec 2020 • Badih Ghazi, Noah Golowich, Ravi Kumar, Pasin Manurangsi

In this paper we prove that the sample complexity of properly learning a class of Littlestone dimension $d$ with approximate differential privacy is $\tilde O(d^6)$, ignoring privacy and accuracy parameters.

no code implementations • 30 Nov 2020 • Badih Ghazi, Ravi Kumar, Pasin Manurangsi, Thao Nguyen

In this work, we study the trade-off between differential privacy and adversarial robustness under L2-perturbations in the context of learning halfspaces.

no code implementations • NeurIPS 2020 • Aditya Bhaskara, Ashok Cutkosky, Ravi Kumar, Manish Purohit

We study an online linear optimization (OLO) problem in which the learner is provided access to $K$ "hint" vectors in each round prior to making a decision.

no code implementations • 6 Oct 2020 • Flavio Chierichetti, Anirban Dasgupta, Ravi Kumar

We show that an approximately submodular function defined on a ground set of $n$ elements is $O(n^2)$ pointwise-close to a submodular function.
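The notion of submodularity underlying this result is the diminishing-returns condition; a brute-force checker (purely illustrative, unrelated to the paper's pointwise-closeness argument, and feasible only for tiny ground sets) might look like:

```python
from itertools import chain, combinations

def subsets(items):
    """All subsets of an iterable, as tuples."""
    items = list(items)
    return chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))

def is_submodular(f, ground):
    """Brute-force check of diminishing returns: for every S <= T and
    every x outside T, the marginal gain of x on S must be at least
    its marginal gain on T (up to a small numerical tolerance)."""
    for T in subsets(ground):
        T = frozenset(T)
        for S in subsets(T):
            S = frozenset(S)
            for x in ground:
                if x in T:
                    continue
                if f(S | {x}) - f(S) < f(T | {x}) - f(T) - 1e-12:
                    return False
    return True
```

For example, cardinality $|S|$ passes the check while $|S|^2$ fails it, since the marginal gain $2|S|+1$ grows with the set.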

no code implementations • 21 Sep 2020 • Lijie Chen, Badih Ghazi, Ravi Kumar, Pasin Manurangsi

We study the setup where each of $n$ users holds an element from a discrete set, and the goal is to count the number of distinct elements across all users under the constraint of $(\epsilon, \delta)$-differential privacy. In the non-interactive local setting, we prove that the additive error of any protocol is $\Omega(n)$ for any constant $\epsilon$ and for any $\delta$ inverse polynomial in $n$.

no code implementations • NeurIPS 2020 • Badih Ghazi, Ravi Kumar, Pasin Manurangsi

For several basic clustering problems, including Euclidean DensestBall, 1-Cluster, k-means, and k-median, we give efficient differentially private algorithms that achieve essentially the same approximation ratios as those that can be obtained by any non-private algorithm, while incurring only small additive errors.

no code implementations • 7 Jul 2020 • Badih Ghazi, Noah Golowich, Ravi Kumar, Pasin Manurangsi

We study closure properties for the Littlestone and threshold dimensions of binary hypothesis classes.

no code implementations • NeurIPS 2020 • Sara Ahmadian, Alessandro Epasto, Marina Knittel, Ravi Kumar, Mohammad Mahdian, Benjamin Moseley, Philip Pham, Sergei Vassilvitskii, Yuyan Wang

As machine learning has become more prevalent, researchers have begun to recognize the necessity of ensuring machine learning systems are fair.

1 code implementation • 3 Apr 2020 • Andrei Z. Broder, Ravi Kumar

We present double pooling, a simple, easy-to-implement variation on test pooling, that in certain ranges for the a priori probability of a positive test, is significantly more efficient than the standard single pooling approach (the Dorfman method).
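Double pooling is the paper's contribution; for reference, the expected cost of the standard single-pooling baseline (the Dorfman method) is easy to compute. The sketch below is an assumed baseline calculation, not the paper's scheme.

```python
def dorfman_tests_per_person(p, s):
    """Expected tests per individual under Dorfman single pooling:
    one pooled test per group of s, plus s individual retests whenever
    the pool is positive, which happens with probability 1 - (1-p)^s."""
    return 1.0 / s + (1.0 - (1.0 - p) ** s)

def best_pool_size(p, max_s=100):
    """Pool size minimizing the expected number of tests per person."""
    return min(range(2, max_s + 1), key=lambda s: dorfman_tests_per_person(p, s))
```

At 1% prevalence the optimal pool size is 11, needing under 0.2 tests per person; at high prevalence pooling is counterproductive, which is the regime where refinements like double pooling matter.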

Discrete Mathematics • Information Theory • Methodology

no code implementations • ICML 2020 • Aditya Bhaskara, Ashok Cutkosky, Ravi Kumar, Manish Purohit

We consider a variant of the classical online linear optimization problem in which at every step, the online player receives a "hint" vector before choosing the action for that round.

1 code implementation • 6 Feb 2020 • Sara Ahmadian, Alessandro Epasto, Ravi Kumar, Mohammad Mahdian

We define a fairlet decomposition with cost similar to the $k$-median cost and this allows us to obtain approximation algorithms for a wide range of fairness constraints.

no code implementations • NeurIPS 2019 • Ravi Kumar, Manish Purohit, Zoya Svitkina, Erik Vee, Joshua Wang

When training complex neural networks, memory usage can be an important bottleneck.

no code implementations • 24 Oct 2019 • Benjamin Spector, Ravi Kumar, Andrew Tomkins

We propose improving the privacy properties of a dataset by publishing only a strategically chosen "core-set" of the data containing a subset of the instances.

no code implementations • 29 Aug 2019 • Badih Ghazi, Noah Golowich, Ravi Kumar, Rasmus Pagh, Ameya Velingker

We give protocols in the multi-message shuffled model with $\mathrm{poly}(\log B, \log n)$ bits of communication per user and $\mathrm{poly}\log B$ error, which provide an exponential improvement in error over what is possible with single-message algorithms.

no code implementations • 6 Jul 2019 • Maryam Aliakbarpour, Ravi Kumar, Ronitt Rubinfeld

In our model, the noisy distribution is a mixture of the original distribution and noise, where the latter is known to the tester either explicitly or via sample access; the form of the noise is also known a priori.

no code implementations • 29 May 2019 • Sara Ahmadian, Alessandro Epasto, Ravi Kumar, Mohammad Mahdian

In this paper we consider clustering problems in which each point is endowed with a color.

no code implementations • 8 Apr 2019 • Abhimanyu Das, Sreenivas Gollapudi, Ravi Kumar, Rina Panigrahy

In this paper we study the learnability of deep random networks from both theoretical and practical points of view.

no code implementations • NeurIPS 2018 • Flavio Chierichetti, Anirban Dasgupta, Shahrzad Haddadan, Ravi Kumar, Silvio Lattanzi

The classic Mallows model is a widely used tool to realize distributions on permutations.

no code implementations • NeurIPS 2018 • Manish Purohit, Zoya Svitkina, Ravi Kumar

In this work we study the problem of using machine-learned predictions to improve performance of online algorithms.

no code implementations • ICML 2018 • Flavio Chierichetti, Ravi Kumar, Andrew Tomkins

In this model, a user is offered a slate of choices (a subset of a finite universe of $n$ items), and selects exactly one item from the slate, each with probability proportional to its (positive) weight.
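The choice rule described above (select one item with probability proportional to its positive weight, i.e., a conditional multinomial logit over the slate) can be sketched as follows; this is an illustrative implementation, not the paper's code.

```python
import random

def choose_from_slate(weights, rng=random):
    """Pick exactly one item from the slate, with probability proportional
    to its (positive) weight, via inverse-CDF sampling over the cumulative
    weights."""
    total = sum(weights.values())
    r = rng.random() * total
    acc = 0.0
    for item, w in weights.items():
        acc += w
        if acc >= r:
            return item
    return item  # guard against floating-point shortfall
```

For a slate `{"a": 1.0, "b": 3.0}`, item `b` is chosen about three quarters of the time.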

2 code implementations • NeurIPS 2017 • Flavio Chierichetti, Ravi Kumar, Silvio Lattanzi, Sergei Vassilvitskii

We show that any fair clustering problem can be decomposed into first finding good fairlets, and then using existing machinery for traditional clustering algorithms.

no code implementations • ICML 2017 • Flavio Chierichetti, Sreenivas Gollapudi, Ravi Kumar, Silvio Lattanzi, Rina Panigrahy, David P. Woodruff

We consider the problem of approximating a given matrix by a low-rank matrix so as to minimize the entrywise $\ell_p$-approximation error, for any $p \geq 1$; the case $p = 2$ is the classical SVD problem.

no code implementations • 5 Apr 2017 • Ravi Kumar, Maithra Raghu, Tamas Sarlos, Andrew Tomkins

We introduce LAMP: the Linear Additive Markov Process.

no code implementations • NeurIPS 2016 • Rishi Gupta, Ravi Kumar, Sergei Vassilvitskii

We study the problem of reconstructing a mixture of Markov chains from the trajectories generated by random walks through the state space.

no code implementations • NAACL 2016 • Justine Zhang, Ravi Kumar, Sujith Ravi, Cristian Danescu-Niculescu-Mizil

Public debates are a common platform for presenting and juxtaposing diverging views on important issues.

no code implementations • NeurIPS 2012 • Abhimanyu Das, Anirban Dasgupta, Ravi Kumar

We compare our algorithms to traditional greedy and $\ell_1$-regularization schemes and show that we obtain a more diverse set of features that result in the regression problem being stable under perturbations.

2 code implementations • 29 Mar 2012 • Bahman Bahmani, Benjamin Moseley, Andrea Vattani, Ravi Kumar, Sergei Vassilvitskii

The recently proposed k-means++ initialization algorithm achieves this, obtaining an initial set of centers that is provably close to the optimum solution.
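The k-means++ seeding referred to above uses $D^2$ weighting: the first center is uniform at random, and each subsequent center is drawn with probability proportional to its squared distance to the nearest center chosen so far. A minimal sketch (illustrative, sequential; the paper's contribution, k-means||, parallelizes this step):

```python
import random

def _sq_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans_pp_init(points, k, rng=None):
    """k-means++ seeding: first center uniform at random, then each new
    center drawn with probability proportional to its squared distance
    to the nearest already-chosen center (the D^2 weighting)."""
    rng = rng or random.Random()
    centers = [rng.choice(points)]
    while len(centers) < k:
        d2 = [min(_sq_dist(p, c) for c in centers) for p in points]
        r = rng.random() * sum(d2)
        acc = 0.0
        for p, w in zip(points, d2):
            acc += w
            if acc >= r:
                centers.append(p)
                break
    return centers
```

Because the weighting is sequential (each center depends on all previous ones), seeding takes $k$ passes over the data, which motivates the parallel variant studied in the paper.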

Databases

no code implementations • NeurIPS 2008 • Deepayan Chakrabarti, Ravi Kumar, Filip Radlinski, Eli Upfal

In our model, arms have (stochastic) lifetime after which they expire.
