3 code implementations • NeurIPS 2019 • Hadi Salman, Greg Yang, Jerry Li, Pengchuan Zhang, Huan Zhang, Ilya Razenshteyn, Sebastien Bubeck
In this paper, we employ adversarial training to improve the performance of randomized smoothing.
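A randomized smoothing classifier predicts the class most likely under Gaussian perturbations of the input; adversarial training is then applied to the base classifier underneath. A minimal Monte-Carlo sketch of the smoothed prediction (the toy base classifier, noise level, and sample count here are illustrative assumptions, not the paper's setup):

```python
import random
from collections import Counter

def smoothed_predict(base_classifier, x, sigma=0.5, n_samples=1000, seed=0):
    # Monte-Carlo estimate of g(x) = argmax_c Pr[f(x + N(0, sigma^2 I)) = c]
    rng = random.Random(seed)
    votes = Counter()
    for _ in range(n_samples):
        noisy = [xi + rng.gauss(0.0, sigma) for xi in x]
        votes[base_classifier(noisy)] += 1
    return votes.most_common(1)[0][0]

# Toy base classifier: predicts the sign of the first coordinate.
f = lambda v: 1 if v[0] >= 0 else 0
print(smoothed_predict(f, [2.0, -1.0]))  # far from the boundary -> stable vote
```

Points well inside a class region keep their label under the noise vote, which is what makes the smoothed classifier certifiably stable.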
1 code implementation • ICML 2020 • Arturs Backurs, Yihe Dong, Piotr Indyk, Ilya Razenshteyn, Tal Wagner
Our extensive experiments, on real-world text and image datasets, show that Flowtree improves over various baselines and existing methods in either running time or accuracy.
1 code implementation • NeurIPS 2015 • Alexandr Andoni, Piotr Indyk, Thijs Laarhoven, Ilya Razenshteyn, Ludwig Schmidt
Our lower bound implies that the above LSH family exhibits a trade-off between evaluation time and quality that is close to optimal for a natural class of LSH functions.
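For intuition about what an LSH family looks like, here is the classic random-hyperplane family for angular distance (Charikar's SimHash), not the specific data-dependent scheme analyzed in the paper: nearby points fall on the same side of most random hyperplanes.

```python
import random

def hyperplane_lsh(dim, n_bits, seed=0):
    # Each bit records on which side of a random Gaussian hyperplane the point
    # falls; vectors at a small angle agree on more bits than distant ones.
    rng = random.Random(seed)
    planes = [[rng.gauss(0.0, 1.0) for _ in range(dim)] for _ in range(n_bits)]
    def h(x):
        return tuple(int(sum(p * xi for p, xi in zip(plane, x)) >= 0)
                     for plane in planes)
    return h

h = hyperplane_lsh(dim=3, n_bits=16)
print(h((1.0, 2.0, 3.0)) == h((2.0, 4.0, 6.0)))  # same direction -> same bucket
```

More bits sharpen the partition but make each hash slower to evaluate, which is exactly the evaluation-time/quality trade-off the lower bound concerns.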
1 code implementation • ICLR 2020 • Yihe Dong, Piotr Indyk, Ilya Razenshteyn, Tal Wagner
Space partitions of $\mathbb{R}^d$ underlie a vast and important class of fast nearest neighbor search (NNS) algorithms.
1 code implementation • ICML 2020 • Greg Yang, Tony Duan, J. Edward Hu, Hadi Salman, Ilya Razenshteyn, Jerry Li
Randomized smoothing is the current state-of-the-art defense with provable robustness against $\ell_2$ adversarial attacks.
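The provable guarantee takes the form of a certified $\ell_2$ radius. A sketch of the Cohen et al.-style certificate this line of work builds on (the thresholding at $p_A \le 1/2$ is the standard convention, not something specific to this paper):

```python
from statistics import NormalDist

def certified_radius(p_a, sigma):
    # Cohen et al.-style l2 certificate: if the top class of the smoothed
    # classifier has probability at least p_a > 1/2 under N(0, sigma^2 I)
    # noise, the prediction is constant in an l2 ball of radius
    # sigma * Phi^{-1}(p_a) around the input.
    if p_a <= 0.5:
        return 0.0  # no certificate without a majority class
    return sigma * NormalDist().inv_cdf(p_a)

print(certified_radius(0.9, 1.0))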
1 code implementation • 3 May 2020 • Yihe Dong, Yu Gao, Richard Peng, Ilya Razenshteyn, Saurabh Sawlani
We investigate the problem of efficiently computing optimal transport (OT) distances, which is equivalent to the node-capacitated minimum cost maximum flow problem in a bipartite graph.
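The general problem is the bipartite min-cost flow stated above; as a warm-up, the one-dimensional special case with uniform mass has a closed-form solution by sorting (this reduction is standard, not the algorithm of the paper):

```python
def ot_distance_1d(a, b):
    # 1-D optimal transport between equal-size point sets with uniform mass:
    # the optimal coupling simply matches the two sorted sequences.
    assert len(a) == len(b)
    return sum(abs(x - y) for x, y in zip(sorted(a), sorted(b))) / len(a)

print(ot_distance_1d([0.0, 2.0], [1.0, 3.0]))  # each point moves by 1 -> 1.0
```

In higher dimensions no such sorting shortcut exists, which is why efficient flow algorithms are needed.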
no code implementations • 25 May 2018 • Sébastien Bubeck, Eric Price, Ilya Razenshteyn
First, we prove that for a broad set of classification tasks, the mere existence of a robust classifier implies that it can be found by a (possibly exponential-time) algorithm with relatively few training examples.
no code implementations • 18 Nov 2016 • Alexandr Andoni, Huy L. Nguyen, Aleksandar Nikolov, Ilya Razenshteyn, Erik Waingarten
We show that every symmetric normed space admits an efficient nearest neighbor search data structure with doubly-logarithmic approximation.
no code implementations • 9 Oct 2015 • Thomas D. Ahle, Rasmus Pagh, Ilya Razenshteyn, Francesco Silvestri
We give new upper and lower bounds for (A)LSH-based algorithms.
no code implementations • 26 Jun 2018 • Alexandr Andoni, Piotr Indyk, Ilya Razenshteyn
The nearest neighbor problem is defined as follows: Given a set $P$ of $n$ points in some metric space $(X, D)$, build a data structure that, given any point $q$, returns a point in $P$ that is closest to $q$ (its "nearest neighbor" in $P$).
no code implementations • 8 Nov 2018 • Sepideh Mahabadi, Konstantin Makarychev, Yury Makarychev, Ilya Razenshteyn
We introduce and study the notion of an outer bi-Lipschitz extension of a map between Euclidean spaces.
no code implementations • 8 Nov 2018 • Konstantin Makarychev, Yury Makarychev, Ilya Razenshteyn
Moreover, the cost of every clustering is preserved to within a $(1+\varepsilon)$ factor.
no code implementations • 15 Nov 2018 • Sébastien Bubeck, Yin Tat Lee, Eric Price, Ilya Razenshteyn
In our recent work (Bubeck, Price, Razenshteyn, arXiv:1805.10204) we argued that adversarial examples in machine learning might be due to an inherent computational hardness of the problem.
no code implementations • NeurIPS 2017 • Piotr Indyk, Ilya Razenshteyn, Tal Wagner
We introduce a new distance-preserving compact representation of multi-dimensional point-sets.
no code implementations • 3 Apr 2019 • Hao Chen, Ilaria Chillotti, Yihe Dong, Oxana Poburinnaya, Ilya Razenshteyn, M. Sadegh Riazi
In this paper, we introduce SANNS, a system for secure $k$-NNS that keeps the client's query and the search result confidential.
no code implementations • 21 Mar 2020 • Michael Kapralov, Navid Nouri, Ilya Razenshteyn, Ameya Velingker, Amir Zandieh
Random binning features, introduced in the seminal paper of Rahimi and Recht (2007), are an efficient method for approximating a kernel matrix using locality sensitive hashing.
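As a rough sketch of the Rahimi-Recht construction, each random grid hashes a point to a cell, and two points collide in a grid with probability (in expectation over grids) equal to the kernel value. The pitch distribution below is the standard choice for the Laplacian kernel $k(x,y)=\exp(-\|x-y\|_1)$; the dimensions and grid counts are illustrative assumptions:

```python
import random

def make_grids(dim, n_grids, seed=0):
    # Each grid has a random pitch per coordinate, drawn from Gamma(2, 1)
    # (the Rahimi-Recht choice for the unit-width Laplacian kernel), and a
    # random shift uniform in [0, pitch).
    rng = random.Random(seed)
    grids = []
    for _ in range(n_grids):
        pitch = [rng.gammavariate(2, 1.0) for _ in range(dim)]
        shift = [rng.uniform(0.0, p) for p in pitch]
        grids.append((pitch, shift))
    return grids

def bin_ids(x, grids):
    # Hash a point to one cell id per grid.
    return [tuple(int((xi - s[j]) // p[j]) for j, xi in enumerate(x))
            for p, s in grids]

def kernel_estimate(x, y, grids):
    # Fraction of grids in which x and y land in the same cell.
    hx, hy = bin_ids(x, grids), bin_ids(y, grids)
    return sum(a == b for a, b in zip(hx, hy)) / len(grids)

grids = make_grids(dim=2, n_grids=500)
near = kernel_estimate((0.0, 0.0), (0.1, 0.1), grids)  # kernel value ~ exp(-0.2)
far = kernel_estimate((0.0, 0.0), (2.0, 2.0), grids)   # kernel value ~ exp(-4)
print(near > far)
```

Because the cell ids are discrete hashes, this is exactly the locality-sensitive-hashing view the snippet refers to.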
no code implementations • 23 Apr 2020 • Sepideh Mahabadi, Ilya Razenshteyn, David P. Woodruff, Samson Zhou
Adaptive sampling is a useful algorithmic tool for data summarization problems in the classical centralized setting, where the entire dataset is available to the single processor performing the computation.
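A canonical instance of adaptive sampling in the centralized setting is $D^2$-sampling, the k-means++ seeding rule; this sketch is that generic rule, not the distributed algorithms of the paper:

```python
import math
import random

def d2_sample(points, k, seed=0):
    # Adaptive (D^2) sampling: after a uniform first pick, each subsequent
    # point is chosen with probability proportional to its squared distance
    # to the closest point already sampled.
    rng = random.Random(seed)
    sample = [points[rng.randrange(len(points))]]
    while len(sample) < k:
        weights = [min(math.dist(p, s) ** 2 for s in sample) for p in points]
        r = rng.uniform(0.0, sum(weights))
        acc = 0.0
        for p, w in zip(points, weights):
            acc += w
            if acc >= r:
                sample.append(p)
                break
    return sample

pts = [(0.0, 0.0), (0.1, 0.0), (10.0, 10.0), (10.1, 10.0)]
print(d2_sample(pts, 2))
```

Each pick depends on all previous picks, which is precisely what makes the technique awkward to run when the data is spread across machines.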
1 code implementation • 24 Feb 2021 • Meena Jagadeesan, Ilya Razenshteyn, Suriya Gunasekar
We provide a function-space characterization of the inductive bias that results from minimizing the $\ell_2$ norm of the weights in multi-channel convolutional neural networks with linear activations, and we empirically test the resulting hypothesis on ReLU networks trained with gradient descent.