no code implementations • ICML 2020 • Michal Moshkovitz, Sanjoy Dasgupta, Cyrus Rashtchian, Nave Frost
In terms of negative results, we show that popular top-down decision tree algorithms may lead to clusterings with arbitrarily large cost, and we prove that any explainable clustering must incur an $\Omega(\log k)$ approximation compared to the optimal clustering.
1 code implementation • 30 Jan 2023 • Florian Stimberg, Ayan Chakrabarti, Chun-Ta Lu, Hussein Hazimeh, Otilia Stretcu, Wei Qiao, Yintao Liu, Merve Kaya, Cyrus Rashtchian, Ariel Fuxman, Mehmet Tek, Sven Gowal
We evaluate 33 pretrained models on the benchmark and train models with different augmentations, architectures and training methods on subsets of the obfuscations to measure generalization.
no code implementations • 8 Aug 2022 • Cenk Baykal, Nishanth Dikkala, Rina Panigrahy, Cyrus Rashtchian, Xin Wang
After representing LSH-based sparse networks with our model, we prove that sparse networks can match the approximation power of dense networks on Lipschitz functions.
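To make the routing idea concrete, here is a minimal stdlib-Python sketch of one standard LSH family (SimHash with random hyperplanes) gating which neurons of a layer fire. The constants and the bucket-per-neuron assignment are illustrative assumptions, not the paper's construction.

```python
import random

random.seed(0)

DIM, NUM_NEURONS, NUM_BITS = 8, 64, 4

# Random hyperplanes define a SimHash-style LSH code for each input.
hyperplanes = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(NUM_BITS)]

# Each neuron is assigned (arbitrarily, for illustration) to one hash bucket.
neuron_bucket = [random.randrange(2 ** NUM_BITS) for _ in range(NUM_NEURONS)]

def lsh_code(x):
    """SimHash: one sign bit per random hyperplane."""
    code = 0
    for h in hyperplanes:
        dot = sum(hi * xi for hi, xi in zip(h, x))
        code = (code << 1) | (1 if dot >= 0 else 0)
    return code

def active_neurons(x):
    """Only neurons whose bucket matches the input's hash fire."""
    bucket = lsh_code(x)
    return [j for j in range(NUM_NEURONS) if neuron_bucket[j] == bucket]

x = [random.gauss(0, 1) for _ in range(DIM)]
act = active_neurons(x)
print(len(act), "of", NUM_NEURONS, "neurons active")
```

The active set is a small, input-dependent subset of the layer, which is the sparsity pattern such a model reasons about.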
no code implementations • 2 Sep 2021 • Sami Davies, Arya Mazumdar, Soumyabrata Pal, Cyrus Rashtchian
Mixtures of high dimensional Gaussian distributions have been studied extensively in statistics and learning theory.
no code implementations • 3 Jul 2021 • Cyrus Rashtchian, David P. Woodruff, Peng Ye, Hanlin Zhu
Our motivation is to understand the statistical-computational trade-offs in streaming, sketching, and query-based models.
no code implementations • 12 Dec 2020 • Sami Davies, Miklos Z. Racz, Cyrus Rashtchian, Benjamin G. Schiffer
In the usual trace reconstruction problem, the goal is to exactly reconstruct an unknown string of length $n$ after it passes through a deletion channel many times independently, producing a set of traces (i.e., random subsequences of the string).
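A minimal sketch of the deletion channel described above, assuming each symbol is deleted independently with a fixed probability (the deletion rate of 0.2 and the example string are arbitrary):

```python
import random

random.seed(1)

def deletion_channel(s, delete_prob):
    """Each symbol of s is deleted independently with probability delete_prob."""
    return "".join(c for c in s if random.random() >= delete_prob)

unknown = "1011001110001011"

# Each trace is a random subsequence of the unknown string;
# the reconstruction task is to recover `unknown` from such traces.
traces = [deletion_channel(unknown, 0.2) for _ in range(5)]
print(traces)
```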
1 code implementation • 17 Nov 2020 • Yao-Yuan Yang, Cyrus Rashtchian, Ruslan Salakhutdinov, Kamalika Chaudhuri
Overall, adversarially robust networks resemble a nearest neighbor classifier when it comes to OOD data.
1 code implementation • 30 Oct 2020 • Jinyu Zhao, Yi Hao, Cyrus Rashtchian
To learn the embedding, we revisit the use of a variational autoencoder with a Gaussian mixture prior, and we show that rescaling the latent-space embedding and then applying Ward's linkage-based algorithm leads to improved results for both dendrogram purity and the Moseley-Wang cost function.
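As a minimal stdlib sketch of the clustering stage only (not the paper's pipeline: the VAE is replaced here by synthetic 2-D "embeddings", and the rescaling factor is arbitrary), greedy agglomeration under Ward's criterion recovers two well-separated blobs:

```python
import random

random.seed(2)

def rescale(z, alpha):
    """Shrink/stretch latent embeddings before clustering (a tunable step)."""
    return [[alpha * v for v in point] for point in z]

def ward_merge_cost(a, b):
    """Increase in within-cluster sum of squares from merging clusters a and b."""
    na, nb = len(a), len(b)
    ca = [sum(p[d] for p in a) / na for d in range(len(a[0]))]
    cb = [sum(p[d] for p in b) / nb for d in range(len(b[0]))]
    dist2 = sum((x - y) ** 2 for x, y in zip(ca, cb))
    return na * nb / (na + nb) * dist2

def ward_linkage(points, k):
    """Greedy agglomeration with Ward's criterion down to k clusters."""
    clusters = [[p] for p in points]
    while len(clusters) > k:
        i, j = min(
            ((i, j) for i in range(len(clusters)) for j in range(i + 1, len(clusters))),
            key=lambda ij: ward_merge_cost(clusters[ij[0]], clusters[ij[1]]),
        )
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters

# Two well-separated blobs standing in for latent embeddings.
z = [[random.gauss(0, 0.1), random.gauss(0, 0.1)] for _ in range(10)]
z += [[random.gauss(5, 0.1), random.gauss(5, 0.1)] for _ in range(10)]
clusters = ward_linkage(rescale(z, 0.5), k=2)
```

Running the full linkage (rather than stopping at k clusters) would yield the dendrogram whose purity the paper measures.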
no code implementations • 12 Oct 2020 • Vinnu Bhardwaj, Pavel A. Pevzner, Cyrus Rashtchian, Yana Safonova
The problem of reconstructing a string from its error-prone copies, the trace reconstruction problem, was introduced by Vladimir Levenshtein two decades ago.
no code implementations • 24 Jun 2020 • Cyrus Rashtchian, David P. Woodruff, Hanlin Zhu
We consider the general problem of learning about a matrix through vector-matrix-vector queries.
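The query model can be made concrete with a small sketch: the learner sees only values $u^\top M v$, never $M$ itself. With standard basis vectors, each query reveals a single entry, so $nm$ queries always suffice for exact learning; the example matrix is arbitrary.

```python
def make_oracle(M):
    """Vector-matrix-vector oracle: reveals only u^T M v, never M itself."""
    def query(u, v):
        return sum(u[i] * M[i][j] * v[j]
                   for i in range(len(M)) for j in range(len(M[0])))
    return query

M = [[3, 1, 4], [1, 5, 9], [2, 6, 5]]
query = make_oracle(M)
n, m = 3, 3

def e(idx, size):
    """Standard basis vector."""
    return [1 if t == idx else 0 for t in range(size)]

# One query with basis vectors reveals the single entry M[1][2] ...
entry = query(e(1, n), e(2, m))
# ... while one query with all-ones vectors reveals the sum of all entries.
total = query([1] * n, [1] * m)
```

The interesting regimes studied in such models are, of course, the problems solvable with far fewer than $nm$ queries.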
2 code implementations • 3 Jun 2020 • Nave Frost, Michal Moshkovitz, Cyrus Rashtchian
To allow flexibility, we develop a new explainable $k$-means clustering algorithm, ExKMC, that takes an additional parameter $k' \geq k$ and outputs a decision tree with $k'$ leaves.
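As a minimal 1-D illustration (a toy sketch, not the ExKMC algorithm itself) of how a threshold tree with $k' \geq k$ leaves induces a $k$-clustering: the hypothetical thresholds below give $k' = 3$ leaves, which map many-to-one onto $k = 2$ reference centers.

```python
# Hypothetical toy data: 1-D points and k = 2 reference centers.
points = [0.0, 0.2, 0.4, 4.0, 4.2, 9.0]
centers = [0.2, 4.1]

def nearest_center(p):
    return min(range(len(centers)), key=lambda c: abs(p - centers[c]))

# A threshold tree over 1-D points: two thresholds split the
# line into three leaves (intervals), i.e., k' = 3.
thresholds = [2.0, 6.0]

def leaf(p):
    for i, t in enumerate(thresholds):
        if p <= t:
            return i
    return len(thresholds)

# Each leaf is labeled with one of the k centers (many-to-one map),
# so extra leaves let the tree track the reference clustering more finely.
leaf_label = {0: 0, 1: 1, 2: 1}

tree_assign = [leaf_label[leaf(p)] for p in points]
kmeans_assign = [nearest_center(p) for p in points]
```

On this toy input the tree-induced assignment matches the nearest-center assignment exactly; in general, increasing $k'$ trades explanation size for clustering cost.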
no code implementations • 6 Mar 2020 • Cyrus Rashtchian, Aneesh Sharma, David P. Woodruff
Theoretically, we show that LSF-Join efficiently finds most close pairs, even for small similarity thresholds and for skewed input sets.
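A toy stdlib sketch of the locality-sensitive-filtering idea behind a similarity join, using MinHash buckets (the hash family and constants are illustrative assumptions, not LSF-Join's actual filters): only sets that share a bucket under some filter become candidate pairs, so most dissimilar pairs are never compared.

```python
import random
from collections import defaultdict
from itertools import combinations

random.seed(3)

sets = {
    "a": {1, 2, 3, 4, 5},
    "b": {1, 2, 3, 4, 6},   # high Jaccard similarity with "a"
    "c": {10, 11, 12, 13},  # dissimilar to both
}

NUM_FILTERS, PRIME = 16, 10**9 + 7
hash_params = [(random.randrange(1, PRIME), random.randrange(PRIME))
               for _ in range(NUM_FILTERS)]

def minhash(s, a, b):
    """MinHash signature under one random linear hash function."""
    return min((a * x + b) % PRIME for x in s)

# Each set lands in one bucket per filter; similar sets collide often.
buckets = defaultdict(list)
for name, s in sets.items():
    for f, (a, b) in enumerate(hash_params):
        buckets[(f, minhash(s, a, b))].append(name)

candidates = set()
for members in buckets.values():
    candidates.update(combinations(sorted(members), 2))
```

Two sets collide in one filter with probability equal to their Jaccard similarity, so with enough filters a close pair like ("a", "b") is found with overwhelming probability.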
1 code implementation • NeurIPS 2020 • Yao-Yuan Yang, Cyrus Rashtchian, Hongyang Zhang, Ruslan Salakhutdinov, Kamalika Chaudhuri
Current methods for training robust networks lead to a drop in test accuracy, which has led prior works to posit that a robustness-accuracy tradeoff may be inevitable in deep learning.
3 code implementations • 28 Feb 2020 • Sanjoy Dasgupta, Nave Frost, Michal Moshkovitz, Cyrus Rashtchian
In terms of negative results, we show, first, that popular top-down decision tree algorithms may lead to clusterings with arbitrarily large cost, and second, that any tree-induced clustering must in general incur an $\Omega(\log k)$ approximation factor compared to the optimal clustering.
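A toy example of the $k$-means objective at issue, showing how a badly chosen axis-aligned threshold split (the failure mode of greedy top-down algorithms) inflates the cost relative to the unrestricted optimum; the data and the split are illustrative, not the paper's construction.

```python
# Toy 2-D data whose optimal 2-means clustering is the two triples below.
points = [(0, 0), (1, 1), (2, 2), (5, 0), (6, 1), (7, 2)]

def kmeans_cost(clusters):
    """Sum of squared distances to each cluster's mean (the k-means objective)."""
    cost = 0.0
    for c in clusters:
        mx = sum(p[0] for p in c) / len(c)
        my = sum(p[1] for p in c) / len(c)
        cost += sum((p[0] - mx) ** 2 + (p[1] - my) ** 2 for p in c)
    return cost

# Unrestricted optimal clustering (by inspection for this toy set).
optimal = [points[:3], points[3:]]

# A tree-induced clustering from a poorly placed threshold x <= 1.5,
# which strands the point (2, 2) in the wrong cell.
left = [p for p in points if p[0] <= 1.5]
right = [p for p in points if p[0] > 1.5]
tree_clusters = [left, right]

ratio = kmeans_cost(tree_clusters) / kmeans_cost(optimal)
```

Here the bad split more than doubles the cost; the paper's $\Omega(\log k)$ bound says some gap is unavoidable for every tree-induced clustering, while a careful choice of thresholds keeps it to $O(\log k)$-type factors.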
1 code implementation • 7 Jun 2019 • Yao-Yuan Yang, Cyrus Rashtchian, Yizhen Wang, Kamalika Chaudhuri
To test our defense, we provide a novel attack that applies to a wide range of non-parametric classifiers.
no code implementations • NeurIPS 2017 • Cyrus Rashtchian, Konstantin Makarychev, Miklos Racz, Siena Ang, Djordje Jevdjic, Sergey Yekhanin, Luis Ceze, Karin Strauss
We provide empirical justification of the accuracy, scalability, and convergence of our algorithm on real and synthetic data.