no code implementations • 20 Oct 2023 • Jonathan Patsenker, Henry Li, Yuval Kluger
The exponential moving average (EMA) is a commonly used statistic for providing stable estimates of stochastic quantities in deep learning optimization.
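The EMA update itself is a one-line recurrence; a minimal numpy sketch (the signal, decay value, and smoothing target here are illustrative choices, not from the paper):

```python
import numpy as np

def ema_update(avg, value, decay=0.99):
    """One exponential-moving-average step: avg <- decay * avg + (1 - decay) * value."""
    return decay * avg + (1.0 - decay) * value

# Illustrative use: smoothing noisy observations of a constant signal (1.0).
rng = np.random.default_rng(0)
avg = 0.0
for _ in range(2000):
    avg = ema_update(avg, 1.0 + 0.1 * rng.standard_normal())
```

With decay 0.99 the estimate forgets its zero initialization after a few hundred steps and hovers near the true value with much less variance than the raw observations.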
1 code implementation • 16 Mar 2023 • Junchen Yang, Ofir Lindenbaum, Yuval Kluger, Ariel Jaffe
Multi-modal high-throughput biological data presents a great scientific opportunity and a significant computational challenge.
no code implementations • 19 Oct 2022 • Henry Li, Yuval Kluger
We introduce a simple modification to the standard maximum likelihood estimation (MLE) framework.
no code implementations • 18 Jul 2022 • David Cohen, Tal Shnitzer, Yuval Kluger, Ronen Talmon
This in turn allows for the extraction of the hidden manifold underlying the features and avoids overfitting, facilitating few-sample FS.
1 code implementation • 22 Jun 2022 • Henry Li, Yuval Kluger
Any explicit functional representation $f$ of a density is hampered by two main obstacles when we wish to use it as a generative model: designing $f$ so that sampling is fast, and estimating $Z = \int f$ so that $Z^{-1}f$ integrates to 1.
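The second obstacle can be made concrete in one dimension, where $Z$ is just a numerical integral; a toy sketch with an unnormalized Gaussian (this example is ours, not the paper's estimator):

```python
import numpy as np

# Unnormalized 1-D density f(x) = exp(-x^2 / 2); the true normalizer is sqrt(2*pi).
x = np.linspace(-8.0, 8.0, 10001)
dx = x[1] - x[0]
f = np.exp(-x ** 2 / 2)

Z = np.sum(f) * dx        # simple Riemann estimate of Z = \int f
p = f / Z                 # Z^{-1} f now integrates to (approximately) 1
```

In one dimension this quadrature is trivial; the difficulty the snippet refers to is that no such grid-based estimate of $Z$ is feasible in high dimensions.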
2 code implementations • NeurIPS 2021 • Ya-Wei Eileen Lin, Yuval Kluger, Ronen Talmon
Here, we take a purely geometric approach for label-free alignment of hierarchical datasets and introduce hyperbolic Procrustes analysis (HPA).
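The hyperbolic construction is beyond a short snippet, but the underlying idea, aligning two datasets by an orthogonal transformation without labels, is classical Euclidean orthogonal Procrustes analysis, which we can sketch (the data and rotation here are synthetic illustrations):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((30, 3))
R_true, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # random orthogonal matrix
Y = X @ R_true                                         # second dataset: rotated copy of the first

# Orthogonal Procrustes: the orthogonal Q minimizing ||X @ Q - Y||_F is U @ Vt,
# where U, Vt come from the SVD of X^T Y.
U, _, Vt = np.linalg.svd(X.T @ Y)
Q = U @ Vt
aligned = X @ Q
```

HPA generalizes this alignment step to hyperbolic space, which is better suited to hierarchical data.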
no code implementations • NeurIPS Workshop SVRHM 2021 • Yutaro Yamada, Yuval Kluger, Sahand Negahban, Ilker Yildirim
To tackle the problem from a new perspective, we encourage closer collaboration between the robustness and 3D vision communities.
1 code implementation • 11 Oct 2021 • Uri Shaham, Ofir Lindenbaum, Jonathan Svirsky, Yuval Kluger
Experimenting on several real-world datasets, we demonstrate that our proposed approach outperforms similar approaches designed to avoid only correlated or nuisance features, but not both.
no code implementations • 1 Oct 2021 • Ofir Lindenbaum, Yariv Aizenbud, Yuval Kluger
We first present the Robust AutoEncoder (RAE) objective as a minimization problem for splitting the data into inliers and outliers.
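To illustrate the inlier/outlier split by reconstruction error, here is a bare-bones linear analogue using PCA as a "linear autoencoder" (a sketch of the general idea only, not the RAE objective; the data and the assumed-known outlier count are synthetic choices):

```python
import numpy as np

rng = np.random.default_rng(2)
Z = rng.standard_normal((190, 3))
B = rng.standard_normal((3, 10))
X_in = Z @ B + 0.01 * rng.standard_normal((190, 10))   # inliers near a 3-dim subspace
X_out = 5.0 * rng.standard_normal((10, 10))            # outliers, full-dimensional
X = np.vstack([X_in, X_out])

# Linear "autoencoder": project onto the top-3 principal directions and back.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
recon = Xc @ Vt[:3].T @ Vt[:3]
err = ((Xc - recon) ** 2).sum(axis=1)                  # per-sample reconstruction error
outliers = np.argsort(err)[-10:]                       # flag the 10 largest errors
```

Points far from the low-dimensional structure reconstruct poorly and surface at the top of the error ranking.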
no code implementations • 29 Sep 2021 • Yutaro Yamada, Yuval Kluger, Sahand Negahban, Ilker Yildirim
To tackle the problem from a new perspective, we encourage closer collaboration between the robustness and 3D vision communities.
no code implementations • ICLR 2022 • Ofir Lindenbaum, Moshe Salhov, Amir Averbuch, Yuval Kluger
We further propose $\ell_0$-Deep CCA for solving the problem of non-linear sparse CCA by modeling the correlated representations using deep nets.
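For context, classical linear CCA, which the sparse and deep variants extend, reduces to whitening each view and taking an SVD of the cross-covariance; a minimal synthetic sketch (the shared-signal setup is our illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
s = rng.standard_normal(n)                                # shared latent signal
X = np.column_stack([s + 0.1 * rng.standard_normal(n),
                     rng.standard_normal((n, 3))])        # view 1: signal + noise columns
Y = np.column_stack([rng.standard_normal((n, 2)),
                     -s + 0.1 * rng.standard_normal(n)])  # view 2

# Classical linear CCA: whiten each view, then SVD the cross-covariance.
Xc, Yc = X - X.mean(axis=0), Y - Y.mean(axis=0)
Cxx = Xc.T @ Xc / n + 1e-8 * np.eye(X.shape[1])
Cyy = Yc.T @ Yc / n + 1e-8 * np.eye(Y.shape[1])
Cxy = Xc.T @ Yc / n
Lx = np.linalg.cholesky(Cxx)
Ly = np.linalg.cholesky(Cyy)
T = np.linalg.solve(Lx, np.linalg.solve(Ly, Cxy.T).T)    # Lx^{-1} Cxy Ly^{-T}
rho = np.linalg.svd(T, compute_uv=False)                 # canonical correlations
```

The top canonical correlation captures the shared signal; the $\ell_0$-based methods additionally gate the input coordinates so that only a sparse subset enters the transformations.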
1 code implementation • 11 Jun 2021 • Junchen Yang, Ofir Lindenbaum, Yuval Kluger
By forcing the model to select a subset of the most informative features for each sample, we reduce model overfitting in low-sample-size data and obtain an interpretable model.
1 code implementation • 26 Feb 2021 • Yariv Aizenbud, Ariel Jaffe, Meng Wang, Amber Hu, Noah Amsel, Boaz Nadler, Joseph T. Chang, Yuval Kluger
For large trees, a common approach, termed divide-and-conquer, is to recover the tree structure in two steps.
1 code implementation • 12 Oct 2020 • Ofir Lindenbaum, Moshe Salhov, Amir Averbuch, Yuval Kluger
We further propose $\ell_0$-Deep CCA for solving the problem of non-linear sparse CCA by modeling the correlated representations using deep nets.
no code implementations • 28 Sep 2020 • Ofir Lindenbaum, Moshe Salhov, Amir Averbuch, Yuval Kluger
The proposed procedure learns two non-linear transformations and simultaneously gates the input variables to identify a subset of most correlated variables.
1 code implementation • NeurIPS 2021 • Ofir Lindenbaum, Uri Shaham, Jonathan Svirsky, Erez Peterfreund, Yuval Kluger
In this paper, we present a method for unsupervised feature selection, and we demonstrate its use for the task of clustering.
no code implementations • 31 May 2020 • Boris Landa, Ronald R. Coifman, Yuval Kluger
When the data points reside in Euclidean space, a widespread approach is to form an affinity matrix from a Gaussian kernel applied to pairwise distances, followed by a certain normalization (e.g., the row-stochastic normalization or its symmetric variant).
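The construction described above fits in a few lines of numpy; the median-of-distances bandwidth below is a common heuristic, not a prescription from the paper:

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.standard_normal((50, 3))                       # 50 points in Euclidean space

# Gaussian kernel affinity from pairwise squared distances.
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
eps = np.median(sq)                                    # a common bandwidth heuristic
W = np.exp(-sq / eps)

# Row-stochastic normalization: each row of P sums to 1 (a Markov matrix).
P = W / W.sum(axis=1, keepdims=True)
```

The symmetric variant instead divides by the square roots of the row sums on both sides, preserving symmetry at the cost of stochasticity.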
3 code implementations • 28 Feb 2020 • Ariel Jaffe, Noah Amsel, Yariv Aizenbud, Boaz Nadler, Joseph T. Chang, Yuval Kluger
A common assumption in multiple scientific applications is that the distribution of observed data can be modeled by a latent tree graphical model.
no code implementations • 27 Feb 2020 • Ariel Jaffe, Yuval Kluger, Ofir Lindenbaum, Jonathan Patsenker, Erez Peterfreund, Stefan Steinerberger
word2vec, due to Mikolov et al. (2013), is a word embedding method that is widely used in natural language processing.
2 code implementations • 15 Feb 2019 • Dmitry Kobak, George Linderman, Stefan Steinerberger, Yuval Kluger, Philipp Berens
T-distributed stochastic neighbour embedding (t-SNE) is a widely used data visualisation technique.
1 code implementation • ICML 2020 • Yutaro Yamada, Ofir Lindenbaum, Sahand Negahban, Yuval Kluger
Feature selection problems have been extensively studied for linear estimation, for instance, Lasso, but less emphasis has been placed on feature selection for non-linear functions.
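As a baseline for comparison, the Lasso mentioned above selects features by driving irrelevant coefficients exactly to zero; a self-contained proximal-gradient (ISTA) sketch on synthetic data (our illustration, not the paper's non-linear method):

```python
import numpy as np

def lasso_ista(X, y, lam=0.1, iters=2000):
    """Minimize (1/2n)||Xw - y||^2 + lam * ||w||_1 via proximal gradient (ISTA)."""
    n, d = X.shape
    lr = n / (np.linalg.norm(X, 2) ** 2)       # 1/L for the smooth part
    w = np.zeros(d)
    for _ in range(iters):
        grad = X.T @ (X @ w - y) / n
        w = w - lr * grad
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)  # soft-threshold
    return w

rng = np.random.default_rng(6)
X = rng.standard_normal((200, 10))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1]              # only features 0 and 1 are informative
w = lasso_ista(X, y, lam=0.1)
selected = np.flatnonzero(np.abs(w) > 0.05)
```

Because the penalty is applied to a linear model, this baseline cannot capture non-linear feature effects, which is the gap the snippet refers to.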
1 code implementation • 28 Mar 2018 • Uri Shaham, James Garritano, Yutaro Yamada, Ethan Weinberger, Alex Cloninger, Xiuyuan Cheng, Kelly Stanton, Yuval Kluger
We study the effectiveness of various approaches that defend against adversarial attacks on deep networks via manipulations based on basis function representations of images.
1 code implementation • ICML 2018 • Ariel Jaffe, Roi Weiss, Shai Carmi, Yuval Kluger, Boaz Nadler
Latent variable models with hidden binary units appear in various applications.
3 code implementations • ICLR 2018 • Uri Shaham, Kelly Stanton, Henry Li, Boaz Nadler, Ronen Basri, Yuval Kluger
Moreover, the map learned by SpectralNet naturally generalizes the spectral embedding to unseen data points.
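The classical spectral embedding that SpectralNet learns to extend can be computed directly from Laplacian eigenvectors on small data; a minimal sketch on two synthetic blobs (kernel width and data are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(7)
# Two well-separated Gaussian blobs of 30 points each.
X = np.vstack([rng.standard_normal((30, 2)),
               rng.standard_normal((30, 2)) + np.array([10.0, 0.0])])

sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
W = np.exp(-sq / 2.0)
L = np.diag(W.sum(axis=1)) - W               # unnormalized graph Laplacian
_, vecs = np.linalg.eigh(L)
embedding = vecs[:, 1]                        # Fiedler vector: the 1-dim spectral embedding
labels = (embedding > embedding.mean()).astype(int)
```

The eigendecomposition is tied to the training points; SpectralNet's contribution is a parametric map that produces embeddings for unseen points without recomputing it.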
8 code implementations • 25 Dec 2017 • George C. Linderman, Manas Rachh, Jeremy G. Hoskins, Stefan Steinerberger, Yuval Kluger
t-distributed Stochastic Neighborhood Embedding (t-SNE) is a method for dimensionality reduction and visualization that has become widely popular in recent years.
3 code implementations • 13 Nov 2017 • George C. Linderman, Gal Mishne, Yuval Kluger, Stefan Steinerberger
If we pick $n$ random points uniformly in $[0, 1]^d$ and connect each point to its $k$-nearest neighbors, then it is well known that there exists a giant connected component with high probability.
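This setup is easy to simulate: build the undirected $k$-NN graph and measure the largest connected component by breadth-first search (the particular $n$, $d$, $k$ below are illustrative):

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(8)
n, d, k = 500, 2, 10
X = rng.random((n, d))                         # n uniform points in [0, 1]^d

# Undirected k-NN graph.
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
np.fill_diagonal(sq, np.inf)
nbrs = np.argsort(sq, axis=1)[:, :k]
adj = [set() for _ in range(n)]
for i in range(n):
    for j in nbrs[i]:
        adj[i].add(int(j))
        adj[int(j)].add(i)

# BFS to measure the largest connected component.
seen = [False] * n
largest = 0
for s in range(n):
    if seen[s]:
        continue
    queue, size = deque([s]), 0
    seen[s] = True
    while queue:
        u = queue.popleft()
        size += 1
        for v in adj[u]:
            if not seen[v]:
                seen[v] = True
                queue.append(v)
    largest = max(largest, size)
frac = largest / n
```

With $k$ comfortably above $\log n$, the simulated graph is essentially always connected, so the giant component contains (nearly) all points.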
1 code implementation • 18 Aug 2017 • Gal Mishne, Ronen Talmon, Israel Cohen, Ronald R. Coifman, Yuval Kluger
Often the data is such that the observations do not reside on a regular grid, and the given order of the features is arbitrary and does not convey a notion of locality.
no code implementations • 13 Aug 2017 • Almog Lahav, Ronen Talmon, Yuval Kluger
Specifically, we show that organizing similar coordinates in clusters can be exploited for the construction of the Mahalanobis distance between samples.
no code implementations • 8 Mar 2017 • Omer Dror, Boaz Nadler, Erhan Bilal, Yuval Kluger
Consider a regression problem where there is no labeled data and the only observations are the predictions $f_i(x_j)$ of $m$ experts $f_{i}$ over many samples $x_j$.
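A simple baseline in this setting is to score each expert by its disagreement with the ensemble consensus and weight inversely; the sketch below simulates the setup (this consensus-weighting heuristic is our illustration, not the paper's estimator, and the ground truth is used only to evaluate):

```python
import numpy as np

rng = np.random.default_rng(9)
m, n = 5, 1000
y = rng.standard_normal(n)                       # hidden ground truth (never seen by the method)
noise = np.array([0.1, 0.2, 0.5, 1.0, 2.0])      # unknown per-expert noise levels
F = y[None, :] + noise[:, None] * rng.standard_normal((m, n))   # F[i, j] = f_i(x_j)

# Score each expert by its squared disagreement with the ensemble mean,
# then weight inversely.
mean = F.mean(axis=0)
disagreement = ((F - mean) ** 2).mean(axis=1)
w = 1.0 / disagreement
w /= w.sum()
combined = w @ F

mse_mean = np.mean((mean - y) ** 2)              # evaluation only
mse_combined = np.mean((combined - y) ** 2)
```

Down-weighting the noisiest experts already beats the plain average here; the challenge the paper addresses is doing this in a principled way when disagreement with the consensus is a biased proxy for accuracy.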
1 code implementation • 13 Oct 2016 • Uri Shaham, Kelly P. Stanton, Jun Zhao, Huamin Li, Khadir Raddassi, Ruth Montgomery, Yuval Kluger
We apply our method to mass cytometry and single-cell RNA-seq datasets, and demonstrate that it effectively attenuates batch effects.
4 code implementations • 2 Jun 2016 • Jared Katzman, Uri Shaham, Jonathan Bates, Alexander Cloninger, Tingting Jiang, Yuval Kluger
We introduce DeepSurv, a deep neural network based on the Cox proportional hazards model that captures interactions between a patient's covariates and treatment effectiveness in order to provide personalized treatment recommendations.
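The loss underlying such a model is the Cox negative log partial likelihood; a numpy sketch for the untied-event-times case (the treatment of ties and the network producing the risk scores are outside this snippet):

```python
import numpy as np

def cox_neg_log_partial_likelihood(risk, time, event):
    """Negative log partial likelihood of a Cox model (assumes no tied event times).

    risk  : predicted log-hazard ratio per patient (e.g. a network's output)
    time  : observed time per patient
    event : 1 if the event was observed, 0 if the patient was censored
    """
    order = np.argsort(-time)                    # sort by decreasing time
    risk, event = risk[order], event[order]
    # After this sort, the risk set of the i-th patient is patients 0..i,
    # so a cumulative sum yields log sum_{j in risk set} exp(risk_j).
    log_risk_set = np.log(np.cumsum(np.exp(risk)))
    return -np.sum((risk - log_risk_set) * event) / max(event.sum(), 1)

# Two patients, both with events, identical risk scores.
loss = cox_neg_log_partial_likelihood(
    np.array([0.0, 0.0]), np.array([2.0, 1.0]), np.array([1, 1]))
```

Minimizing this loss over the parameters of a network that outputs `risk` is the basic training recipe for a deep Cox model.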
1 code implementation • 6 Feb 2016 • Uri Shaham, Xiuyuan Cheng, Omer Dror, Ariel Jaffe, Boaz Nadler, Joseph Chang, Yuval Kluger
We show how deep learning methods can be applied in the context of crowdsourcing and unsupervised ensemble learning.
no code implementations • 20 Oct 2015 • Ariel Jaffe, Ethan Fetaya, Boaz Nadler, Tingting Jiang, Yuval Kluger
In unsupervised ensemble learning, one obtains predictions from multiple sources or classifiers, yet without knowing the reliability and expertise of each source, and with no labeled data to assess it.
no code implementations • 29 Jul 2014 • Ariel Jaffe, Boaz Nadler, Yuval Kluger
In various situations one is given only the predictions of multiple classifiers over a large unlabeled test data.
no code implementations • 13 Mar 2013 • Fabio Parisi, Francesco Strino, Boaz Nadler, Yuval Kluger
This scenario is different from the standard supervised setting, where each classifier's accuracy can be assessed using available labeled data, and raises two questions: given only the predictions of several classifiers over a large set of unlabeled test data, is it possible to (a) reliably rank them, and (b) construct a meta-classifier more accurate than most classifiers in the ensemble?
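One route to both questions rests on a rank-one structure: if classifiers err independently given the label, the off-diagonal of their prediction covariance is rank one, and its leading eigenvector orders the classifiers by reliability. A bare-bones simulation of that idea (a sketch only, not the paper's full algorithm):

```python
import numpy as np

rng = np.random.default_rng(10)
m, n = 8, 4000
y = rng.choice([-1, 1], size=n)                        # hidden true labels
acc = np.linspace(0.6, 0.95, m)                        # unknown classifier accuracies
correct = rng.random((m, n)) < acc[:, None]
F = np.where(correct, y[None, :], -y[None, :])         # F[i, j] = prediction of classifier i on x_j

# Off-diagonal covariance entries are approximately (2*acc_i - 1)(2*acc_j - 1),
# so the leading eigenvector recovers reliabilities up to scale.
C = np.cov(F)
np.fill_diagonal(C, 0.0)                               # diagonal does not follow the rank-one model
_, vecs = np.linalg.eigh(C)
v = vecs[:, -1]
v *= np.sign(v.sum())                                  # fix the eigenvector's sign ambiguity
meta = np.sign(v @ F)                                  # reliability-weighted vote
accuracy = np.mean(meta == y)
```

The eigenvector both ranks the classifiers (question a) and supplies weights for a meta-classifier (question b), all without a single label.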
no code implementations • 9 Jan 2013 • Francesco Strino, Fabio Parisi, Mariann Micsinai, Yuval Kluger
Herein we propose a framework for deconvolving data from a single genome-wide experiment to infer the composition, abundance and evolutionary paths of the underlying cell subpopulations of a tumor.