no code implementations • 7 Dec 2021 • Pravesh K. Kothari, Pasin Manurangsi, Ameya Velingker
Prior works obtained private robust algorithms for mean estimation of subgaussian distributions with bounded covariance.
no code implementations • 1 Jan 2021 • Sreenivas Gollapudi, Kostas Kollias, Benjamin Plaut, Ameya Velingker
We consider the problem of routing users through a network with unknown congestion functions over an infinite time horizon.
no code implementations • 21 Mar 2020 • Michael Kapralov, Navid Nouri, Ilya Razenshteyn, Ameya Velingker, Amir Zandieh
Random binning features, introduced in the seminal paper of Rahimi and Recht (2007), are an efficient method for approximating a kernel matrix using locality sensitive hashing.
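No code accompanies this entry, but the construction the sentence refers to is simple to sketch. Below is a minimal, hedged illustration of random binning features for the Laplacian kernel (the case analyzed by Rahimi and Recht); the function name, the Gamma pitch distribution, and all parameter choices are assumptions made for illustration, not the paper's implementation.

```python
import numpy as np

def random_binning_features(X, n_reps=100, sigma=1.0, seed=0):
    # Hypothetical sketch of random binning features (Rahimi & Recht, 2007)
    # for the Laplacian kernel k(x, y) = exp(-||x - y||_1 / sigma).
    rng = np.random.default_rng(seed)
    n, d = X.shape
    feats = []
    for _ in range(n_reps):
        # Grid pitch per dimension drawn from a Gamma(2, sigma) distribution,
        # with a uniform random shift in [0, pitch).
        delta = rng.gamma(shape=2.0, scale=sigma, size=d)
        shift = rng.uniform(0.0, delta)
        cells = np.floor((X - shift) / delta).astype(np.int64)
        # Points mapped to the same grid cell "collide" (the LSH view).
        _, bucket = np.unique(cells, axis=0, return_inverse=True)
        bucket = bucket.ravel()
        onehot = np.zeros((n, bucket.max() + 1))
        onehot[np.arange(n), bucket] = 1.0
        feats.append(onehot)
    # Z @ Z.T averages the collision indicators over repetitions and
    # approximates the kernel matrix.
    return np.hstack(feats) / np.sqrt(n_reps)
```

The feature map simply one-hot encodes which randomly shifted grid cell each point falls into, which is why the construction can be read as a form of locality sensitive hashing.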
no code implementations • 24 Sep 2019 • Badih Ghazi, Pasin Manurangsi, Rasmus Pagh, Ameya Velingker
Using a reduction of Balle et al. (2019), our improved analysis of the protocol of Ishai et al. yields, in the same model, an $\left(\varepsilon, \delta\right)$-differentially private protocol for aggregation that, for any constant $\varepsilon > 0$ and any $\delta = \frac{1}{\mathrm{poly}(n)}$, incurs only a constant error and requires only a constant number of messages per party.
Cryptography and Security · Data Structures and Algorithms
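As a hedged illustration of the split-and-mix idea underlying the Ishai et al. protocol (each party submits additive shares of its value as separate anonymous messages, and the analyzer sums everything it receives from the shuffler), one might write something like the following; the function names, the modulus, and the share count are illustrative assumptions, not the paper's parameters.

```python
import random

def split_and_mix_shares(x, num_shares, modulus):
    # Each party encodes its value as uniformly random additive shares mod `modulus`
    # and submits each share to the shuffler as a separate anonymous message.
    shares = [random.randrange(modulus) for _ in range(num_shares - 1)]
    shares.append((x - sum(shares)) % modulus)
    return shares

def analyze(shuffled_messages, modulus):
    # The analyzer just sums every message it receives.
    return sum(shuffled_messages) % modulus

# Toy run: three parties, a constant number of shares each; the shuffle
# hides which share came from which party.
modulus, inputs = 2**16, [5, 11, 42]
messages = [m for x in inputs for m in split_and_mix_shares(x, num_shares=3, modulus=modulus)]
random.shuffle(messages)  # stand-in for the trusted shuffler
assert analyze(messages, modulus) == sum(inputs) % modulus
```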
1 code implementation • 3 Sep 2019 • Thomas D. Ahle, Michael Kapralov, Jakob B. T. Knudsen, Rasmus Pagh, Ameya Velingker, David Woodruff, Amir Zandieh
Oblivious sketching has emerged as a powerful approach to speeding up numerical linear algebra over the past decade, but our understanding of oblivious sketching solutions for kernel matrices has remained quite limited, suffering from an exponential dependence on input parameters.
Data Structures and Algorithms
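For context on what an oblivious sketch of a kernel matrix looks like, here is a hedged sketch of the classical TensorSketch construction for the degree-2 polynomial kernel; it illustrates the kind of baseline the abstract refers to rather than the paper's improved construction, and the function names and parameters below are assumptions.

```python
import numpy as np

def count_sketch(X, m, seed):
    # CountSketch each row of X into m buckets with random signs.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    h = rng.integers(0, m, size=d)
    s = rng.choice([-1.0, 1.0], size=d)
    S = np.zeros((n, m))
    for j in range(d):
        S[:, h[j]] += s[j] * X[:, j]
    return S

def tensor_sketch_degree2(X, m, seed=0):
    # TensorSketch for the degree-2 polynomial kernel k(x, y) = (x . y)^2:
    # combine two independent CountSketches via FFT-based circular convolution,
    # so Z = tensor_sketch_degree2(X, m) gives Z @ Z.T approximating the kernel
    # matrix in time roughly linear in the size of X.
    C1 = np.fft.rfft(count_sketch(X, m, seed), axis=1)
    C2 = np.fft.rfft(count_sketch(X, m, seed + 1), axis=1)
    return np.fft.irfft(C1 * C2, n=m, axis=1)
```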
no code implementations • 29 Aug 2019 • Badih Ghazi, Noah Golowich, Ravi Kumar, Rasmus Pagh, Ameya Velingker
- Protocols in the multi-message shuffled model with $\mathrm{poly}(\log B, \log n)$ bits of communication per user and $\mathrm{poly}\log B$ error, which provide an exponential improvement in error over what is possible with single-message algorithms.
no code implementations • 19 Jun 2019 • Badih Ghazi, Rasmus Pagh, Ameya Velingker
Federated learning promises to make machine learning feasible on distributed, private datasets by implementing gradient descent using secure aggregation methods.
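To make the secure-aggregation step concrete, the following is a minimal, hedged illustration of additive pairwise masking (in the spirit of Bonawitz et al.): the server only ever sees masked gradients, yet the masks cancel in the sum. It is not the protocol proposed in this paper, and the function name and parameters are assumptions.

```python
import numpy as np

def pairwise_masks(num_parties, dim, seed=0):
    # Every pair (i, j) with i < j shares a random mask that party i adds and
    # party j subtracts, so the masks cancel in the aggregate while each
    # individual masked gradient looks random to the server.
    rng = np.random.default_rng(seed)
    masks = np.zeros((num_parties, dim))
    for i in range(num_parties):
        for j in range(i + 1, num_parties):
            m = rng.normal(size=dim)
            masks[i] += m
            masks[j] -= m
    return masks

# Toy run: the server sums masked gradients and recovers the true gradient sum.
grads = np.random.randn(4, 10)          # one gradient vector per party
masked = grads + pairwise_masks(4, 10)
assert np.allclose(masked.sum(axis=0), grads.sum(axis=0))
```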
no code implementations • 20 Dec 2018 • Haim Avron, Michael Kapralov, Cameron Musco, Christopher Musco, Ameya Velingker, Amir Zandieh
We formalize this intuition by showing that, roughly, a continuous signal from a given class can be approximately reconstructed using a number of samples proportional to the *statistical dimension* of the allowed power spectrum of that class.
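For reference, the statistical (effective) dimension of an operator $\mathcal{K}$ with eigenvalues $\lambda_1 \ge \lambda_2 \ge \dots$ at regularization/noise level $\mu$ is commonly defined as $s_\mu(\mathcal{K}) = \operatorname{tr}\bigl(\mathcal{K}(\mathcal{K} + \mu I)^{-1}\bigr) = \sum_i \frac{\lambda_i}{\lambda_i + \mu}$; this is the standard definition, and the paper's precise formulation for constrained power spectra may differ in details.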
no code implementations • ICML 2017 • Haim Avron, Michael Kapralov, Cameron Musco, Christopher Musco, Ameya Velingker, Amir Zandieh
Qualitatively, our results are twofold: on the one hand, we show that random Fourier feature approximation can provably speed up kernel ridge regression under reasonable assumptions.
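A minimal, hedged sketch of the speedup in question: approximate the Gaussian kernel with random Fourier features and solve the ridge problem in feature space, replacing the $n \times n$ kernel system with an $s \times s$ one, where $s$ is the number of features. The function names, the kernel choice, and all parameters below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def rff_features(X, n_features, sigma, seed=0):
    # Random Fourier features for the Gaussian kernel
    # k(x, y) = exp(-||x - y||^2 / (2 * sigma**2)).
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=1.0 / sigma, size=(X.shape[1], n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

def rff_kernel_ridge(X, y, n_features=500, sigma=1.0, lam=1e-2):
    # Solve ridge regression in the random feature space: an
    # (n_features x n_features) system replaces the (n x n) kernel system.
    Z = rff_features(X, n_features, sigma)
    w = np.linalg.solve(Z.T @ Z + lam * np.eye(n_features), Z.T @ y)
    # The same default seed keeps train- and test-time features consistent.
    return lambda X_new: rff_features(X_new, n_features, sigma) @ w
```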