Search Results for author: Yuval Kluger

Found 36 papers, 20 papers with code

Exponential weight averaging as damped harmonic motion

no code implementations20 Oct 2023 Jonathan Patsenker, Henry Li, Yuval Kluger

The exponential moving average (EMA) is a commonly used statistic for providing stable estimates of stochastic quantities in deep learning optimization.
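
Below is a minimal sketch of an EMA of model parameters as it is typically maintained during training; the decay value, framework (PyTorch), and variable names are illustrative assumptions, not the paper's setup.

```python
import copy
import torch

def update_ema(ema_model, model, decay=0.999):
    """In-place EMA update: ema <- decay * ema + (1 - decay) * current."""
    with torch.no_grad():
        for ema_p, p in zip(ema_model.parameters(), model.parameters()):
            ema_p.mul_(decay).add_(p, alpha=1 - decay)

# Usage: keep a frozen copy of the model and update it after every optimizer step.
model = torch.nn.Linear(10, 1)
ema_model = copy.deepcopy(model)
update_ema(ema_model, model)
```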

Multi-modal Differentiable Unsupervised Feature Selection

1 code implementation16 Mar 2023 Junchen Yang, Ofir Lindenbaum, Yuval Kluger, Ariel Jaffe

Multi-modal high throughput biological data presents a great scientific opportunity and a significant computational challenge.

feature selection

Autoregressive Generative Modeling with Noise Conditional Maximum Likelihood Estimation

no code implementations19 Oct 2022 Henry Li, Yuval Kluger

We introduce a simple modification to the standard maximum likelihood estimation (MLE) framework.

ManiFeSt: Manifold-based Feature Selection for Small Data Sets

no code implementations18 Jul 2022 David Cohen, Tal Shnitzer, Yuval Kluger, Ronen Talmon

This in turn allows for the extraction of the hidden manifold underlying the features and avoids overfitting, facilitating few-sample FS.

feature selection

Neural Inverse Transform Sampler

1 code implementation22 Jun 2022 Henry Li, Yuval Kluger

Any explicit functional representation $f$ of a density is hampered by two main obstacles when we wish to use it as a generative model: designing $f$ so that sampling is fast, and estimating $Z = \int f$ so that $Z^{-1}f$ integrates to 1.

Density Estimation
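
For context on the Neural Inverse Transform Sampler entry above, here is a sketch of classical one-dimensional inverse transform sampling, where the normalizer Z is approximated numerically on a grid; the paper's neural approach is not reproduced, and all parameters are purely illustrative.

```python
import numpy as np

def inverse_transform_sample(f, lo, hi, n_samples=1000, grid_size=10_000):
    """Sample from an unnormalized 1-D density f on [lo, hi] via inverse transform sampling.

    Z (the integral of f) is approximated on a grid so that the CDF ends at 1,
    and the inverse CDF is evaluated by interpolation.
    """
    xs = np.linspace(lo, hi, grid_size)
    pdf = f(xs)
    cdf = np.cumsum(pdf)
    cdf /= cdf[-1]                        # normalize by the (discrete) Z
    u = np.random.uniform(size=n_samples)
    return np.interp(u, cdf, xs)          # x = F^{-1}(u)

# Example: sample from an unnormalized Gaussian bump.
samples = inverse_transform_sample(lambda x: np.exp(-0.5 * x**2), -5.0, 5.0)
```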

Hyperbolic Procrustes Analysis Using Riemannian Geometry

2 code implementations NeurIPS 2021 Ya-Wei Eileen Lin, Yuval Kluger, Ronen Talmon

Here, we take a purely geometric approach for label-free alignment of hierarchical datasets and introduce hyperbolic Procrustes analysis (HPA).

Computational Efficiency · Translation

Exploiting 3D Shape Bias towards Robust Vision

no code implementations NeurIPS Workshop SVRHM 2021 Yutaro Yamada, Yuval Kluger, Sahand Negahban, Ilker Yildirim

To tackle the problem from a new perspective, we encourage closer collaboration between the robustness and 3D vision communities.

3D Reconstruction

Deep Unsupervised Feature Selection by Discarding Nuisance and Correlated Features

1 code implementation11 Oct 2021 Uri Shaham, Ofir Lindenbaum, Jonathan Svirsky, Yuval Kluger

Experimenting on several real-world datasets, we demonstrate that our proposed approach outperforms similar approaches designed to avoid only correlated or nuisance features, but not both.

feature selection

Probabilistic Robust Autoencoders for Outlier Detection

no code implementations1 Oct 2021 Ofir Lindenbaum, Yariv Aizenbud, Yuval Kluger

We first present the Robust AutoEncoder (RAE) objective as a minimization problem for splitting the data into inliers and outliers.

Anomaly Detection · Outlier Detection

Geon3D: Exploiting 3D Shape Bias towards Building Robust Machine Vision

no code implementations29 Sep 2021 Yutaro Yamada, Yuval Kluger, Sahand Negahban, Ilker Yildirim

To tackle the problem from a new perspective, we encourage closer collaboration between the robustness and 3D vision communities.

3D Reconstruction

L0-Sparse Canonical Correlation Analysis

no code implementations ICLR 2022 Ofir Lindenbaum, Moshe Salhov, Amir Averbuch, Yuval Kluger

We further propose $\ell_0$-Deep CCA for solving the problem of non-linear sparse CCA by modeling the correlated representations using deep nets.

Locally Sparse Neural Networks for Tabular Biomedical Data

1 code implementation11 Jun 2021 Junchen Yang, Ofir Lindenbaum, Yuval Kluger

By forcing the model to select a subset of the most informative features for each sample, we reduce model overfitting in low-sample-size data and obtain an interpretable model.

Survival Analysis

Spectral Top-Down Recovery of Latent Tree Models

1 code implementation26 Feb 2021 Yariv Aizenbud, Ariel Jaffe, Meng Wang, Amber Hu, Noah Amsel, Boaz Nadler, Joseph T. Chang, Yuval Kluger

For large trees, a common approach, termed divide-and-conquer, is to recover the tree structure in two steps.

$\ell_0$-based Sparse Canonical Correlation Analysis

1 code implementation12 Oct 2020 Ofir Lindenbaum, Moshe Salhov, Amir Averbuch, Yuval Kluger

We further propose $\ell_0$-Deep CCA for solving the problem of non-linear sparse CCA by modeling the correlated representations using deep nets.

Deep Gated Canonical Correlation Analysis

no code implementations28 Sep 2020 Ofir Lindenbaum, Moshe Salhov, Amir Averbuch, Yuval Kluger

The proposed procedure learns two non-linear transformations and simultaneously gates the input variables to identify a subset of most correlated variables.

Doubly-Stochastic Normalization of the Gaussian Kernel is Robust to Heteroskedastic Noise

no code implementations31 May 2020 Boris Landa, Ronald R. Coifman, Yuval Kluger

When the data points reside in Euclidean space, a widespread approach is to form an affinity matrix by applying the Gaussian kernel to pairwise distances, and to follow with a certain normalization (e.g. the row-stochastic normalization or its symmetric variant).
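
A small illustration of the normalizations this abstract refers to: forming a Gaussian-kernel affinity matrix, applying the row-stochastic normalization, and applying a plain Sinkhorn-Knopp iteration to reach an approximately doubly-stochastic matrix. The bandwidth, iteration count, and use of Sinkhorn-Knopp rather than the paper's symmetric scaling are assumptions for illustration.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def gaussian_affinity(X, sigma=1.0):
    """W_ij = exp(-||x_i - x_j||^2 / (2 sigma^2)) from pairwise distances."""
    D2 = squareform(pdist(X, metric="sqeuclidean"))
    return np.exp(-D2 / (2.0 * sigma ** 2))

def row_stochastic(W):
    """Row-stochastic normalization: each row sums to 1."""
    return W / W.sum(axis=1, keepdims=True)

def sinkhorn_doubly_stochastic(W, n_iters=50):
    """Alternate row/column normalization until W is approximately doubly stochastic."""
    K = W.copy()
    for _ in range(n_iters):
        K = K / K.sum(axis=1, keepdims=True)
        K = K / K.sum(axis=0, keepdims=True)
    return K

X = np.random.default_rng(0).normal(size=(100, 5))
W = gaussian_affinity(X)
P_row = row_stochastic(W)
P_ds = sinkhorn_doubly_stochastic(W)
```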

Spectral neighbor joining for reconstruction of latent tree models

3 code implementations28 Feb 2020 Ariel Jaffe, Noah Amsel, Yariv Aizenbud, Boaz Nadler, Joseph T. Chang, Yuval Kluger

A common assumption in multiple scientific applications is that the distribution of observed data can be modeled by a latent tree graphical model.

The Spectral Underpinning of word2vec

no code implementations27 Feb 2020 Ariel Jaffe, Yuval Kluger, Ofir Lindenbaum, Jonathan Patsenker, Erez Peterfreund, Stefan Steinerberger

word2vec, due to Mikolov et al. (2013), is a word embedding method that is widely used in natural language processing.

Open-Ended Question Answering

Heavy-tailed kernels reveal a finer cluster structure in t-SNE visualisations

2 code implementations15 Feb 2019 Dmitry Kobak, George Linderman, Stefan Steinerberger, Yuval Kluger, Philipp Berens

T-distributed stochastic neighbour embedding (t-SNE) is a widely used data visualisation technique.

Feature Selection using Stochastic Gates

1 code implementation ICML 2020 Yutaro Yamada, Ofir Lindenbaum, Sahand Negahban, Yuval Kluger

Feature selection problems have been extensively studied for linear estimation (for instance, the Lasso), but less emphasis has been placed on feature selection for non-linear functions.

feature selection
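
A hedged PyTorch sketch of the stochastic-gate mechanism behind the "Feature Selection using Stochastic Gates" entry above; the parametrization, initialization, and noise level are simplified assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class StochasticGates(nn.Module):
    """Per-feature stochastic gates (a sketch of the gating idea, not the authors' code).

    Each feature d gets a gate z_d = clamp(mu_d + eps, 0, 1) with Gaussian noise eps during
    training; an L0-like penalty encourages gates to close, selecting a sparse feature subset.
    """
    def __init__(self, n_features, sigma=0.5):
        super().__init__()
        self.mu = nn.Parameter(0.5 * torch.ones(n_features))
        self.sigma = sigma

    def forward(self, x):
        noise = self.sigma * torch.randn_like(self.mu) if self.training else 0.0
        z = torch.clamp(self.mu + noise, 0.0, 1.0)
        return x * z

    def regularization(self):
        # Expected number of open gates: sum_d P(z_d > 0) = sum_d Phi(mu_d / sigma)
        normal = torch.distributions.Normal(0.0, 1.0)
        return normal.cdf(self.mu / self.sigma).sum()
```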

Defending against Adversarial Images using Basis Functions Transformations

1 code implementation28 Mar 2018 Uri Shaham, James Garritano, Yutaro Yamada, Ethan Weinberger, Alex Cloninger, Xiuyuan Cheng, Kelly Stanton, Yuval Kluger

We study the effectiveness of various approaches that defend against adversarial attacks on deep networks via manipulations based on basis function representations of images.

Efficient Algorithms for t-distributed Stochastic Neighborhood Embedding

8 code implementations25 Dec 2017 George C. Linderman, Manas Rachh, Jeremy G. Hoskins, Stefan Steinerberger, Yuval Kluger

t-distributed Stochastic Neighborhood Embedding (t-SNE) is a method for dimensionality reduction and visualization that has become widely popular in recent years.

Dimensionality Reduction
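
For the efficient t-SNE entry above, a short usage example with scikit-learn's generic TSNE (which uses a Barnes-Hut approximation by default); the paper's FFT-accelerated interpolation scheme is a separate implementation not shown here, and the toy data and parameters are arbitrary.

```python
import numpy as np
from sklearn.manifold import TSNE

# Toy data: two well-separated Gaussian blobs in 50 dimensions.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (500, 50)), rng.normal(5, 1, (500, 50))])

# Approximate nearest-neighbor and tree-based methods make t-SNE tractable for large n.
embedding = TSNE(n_components=2, perplexity=30, init="pca").fit_transform(X)
print(embedding.shape)  # (1000, 2)
```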

Randomized Near Neighbor Graphs, Giant Components, and Applications in Data Science

3 code implementations13 Nov 2017 George C. Linderman, Gal Mishne, Yuval Kluger, Stefan Steinerberger

If we pick $n$ random points uniformly in $[0, 1]^d$ and connect each point to its $k$ nearest neighbors, then it is well known that there exists a giant connected component with high probability.
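
A small simulation of the setting in this abstract: uniform points in $[0, 1]^d$, a $k$-nearest-neighbor graph, and the size of its largest connected component. The parameters are arbitrary, and the paper's randomized variant of the construction is not reproduced.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import connected_components

# n random points in [0, 1]^d, each connected to its k nearest neighbors.
rng = np.random.default_rng(0)
n, d, k = 2000, 2, 5
X = rng.uniform(size=(n, d))

A = kneighbors_graph(X, n_neighbors=k, mode="connectivity")
A = A.maximum(A.T)  # symmetrize: treat the k-NN graph as undirected

n_comp, labels = connected_components(A, directed=False)
giant = np.bincount(labels).max()
print(f"{n_comp} components; giant component has {giant} of {n} points")
```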

Data-Driven Tree Transforms and Metrics

1 code implementation18 Aug 2017 Gal Mishne, Ronen Talmon, Israel Cohen, Ronald R. Coifman, Yuval Kluger

Often the data is such that the observations do not reside on a regular grid, and the given order of the features is arbitrary and does not convey a notion of locality.

Clustering

Mahalanobis Distance Informed by Clustering

no code implementations13 Aug 2017 Almog Lahav, Ronen Talmon, Yuval Kluger

Specifically we show that organizing similar coordinates in clusters can be exploited for the construction of the Mahalanobis distance between samples.

Clustering
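
A basic illustration of the Mahalanobis distance between samples, computed here with a plain pseudo-inverse sample covariance; the paper's cluster-informed construction of the covariance is not reproduced, and the data are synthetic.

```python
import numpy as np
from scipy.spatial.distance import mahalanobis

# Mahalanobis distance between two samples using the (pseudo-)inverse sample covariance.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
VI = np.linalg.pinv(np.cov(X, rowvar=False))
d = mahalanobis(X[0], X[1], VI)
print(d)
```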

Unsupervised Ensemble Regression

no code implementations8 Mar 2017 Omer Dror, Boaz Nadler, Erhan Bilal, Yuval Kluger

Consider a regression problem where there is no labeled data and the only observations are the predictions $f_i(x_j)$ of $m$ experts $f_{i}$ over many samples $x_j$.

regression

Removal of Batch Effects using Distribution-Matching Residual Networks

1 code implementation13 Oct 2016 Uri Shaham, Kelly P. Stanton, Jun Zhao, Huamin Li, Khadir Raddassi, Ruth Montgomery, Yuval Kluger

We apply our method to mass cytometry and single-cell RNA-seq datasets, and demonstrate that it effectively attenuates batch effects.

DeepSurv: Personalized Treatment Recommender System Using A Cox Proportional Hazards Deep Neural Network

4 code implementations2 Jun 2016 Jared Katzman, Uri Shaham, Jonathan Bates, Alexander Cloninger, Tingting Jiang, Yuval Kluger

We introduce DeepSurv, a Cox proportional hazards deep neural network and state-of-the-art survival method for modeling interactions between a patient's covariates and treatment effectiveness in order to provide personalized treatment recommendations.

Feature Engineering · Predicting Patient Outcomes · +2
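
For the DeepSurv entry above, a sketch of the negative Cox partial log-likelihood commonly used as the training loss for such networks; the tiny network, tensor names, and the no-ties assumption are illustrative, not the authors' implementation.

```python
import torch

def cox_ph_loss(risk, time, event):
    """Negative Cox partial log-likelihood (a common DeepSurv-style training loss; sketch only).

    risk:  predicted log-hazard ratios, shape (n,)
    time:  observed times, shape (n,)
    event: 1 if the event occurred, 0 if censored, shape (n,)
    Assumes no tied event times for simplicity.
    """
    order = torch.argsort(time, descending=True)        # largest times first
    risk, event = risk[order], event[order]
    log_risk_set = torch.logcumsumexp(risk, dim=0)      # log sum over {j: T_j >= T_i} of exp(risk_j)
    partial_ll = (risk - log_risk_set) * event
    return -partial_ll.sum() / event.sum().clamp(min=1)

# Usage with a tiny network producing one risk score per patient.
net = torch.nn.Sequential(torch.nn.Linear(10, 32), torch.nn.ReLU(), torch.nn.Linear(32, 1))
x = torch.randn(64, 10)
loss = cox_ph_loss(net(x).squeeze(-1), time=torch.rand(64), event=torch.randint(0, 2, (64,)).float())
```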

A Deep Learning Approach to Unsupervised Ensemble Learning

1 code implementation6 Feb 2016 Uri Shaham, Xiuyuan Cheng, Omer Dror, Ariel Jaffe, Boaz Nadler, Joseph Chang, Yuval Kluger

We show how deep learning methods can be applied in the context of crowdsourcing and unsupervised ensemble learning.

Ensemble Learning

Unsupervised Ensemble Learning with Dependent Classifiers

no code implementations20 Oct 2015 Ariel Jaffe, Ethan Fetaya, Boaz Nadler, Tingting Jiang, Yuval Kluger

In unsupervised ensemble learning, one obtains predictions from multiple sources or classifiers, yet without knowing the reliability and expertise of each source, and with no labeled data to assess it.

Ensemble Learning

Estimating the Accuracies of Multiple Classifiers Without Labeled Data

no code implementations29 Jul 2014 Ariel Jaffe, Boaz Nadler, Yuval Kluger

In various situations one is given only the predictions of multiple classifiers over a large unlabeled test set.

Ranking and combining multiple predictors without labeled data

no code implementations13 Mar 2013 Fabio Parisi, Francesco Strino, Boaz Nadler, Yuval Kluger

This scenario is different from the standard supervised setting, where each classifier accuracy can be assessed using available labeled data, and raises two questions: given only the predictions of several classifiers over a large set of unlabeled test data, is it possible to a) reliably rank them; and b) construct a meta-classifier more accurate than most classifiers in the ensemble?

Decision Making
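
A rough sketch of the rank-one/spectral idea this abstract alludes to: estimate classifier reliabilities from the leading eigenvector of the prediction covariance and use them as weights in a meta-classifier. The diagonal handling and sign convention here are simplifications; this is not the paper's exact estimator.

```python
import numpy as np

def spectral_rank_and_combine(preds):
    """Rank binary classifiers and combine them without labels (sketch of the rank-one idea).

    preds: (m, n) array of +/-1 predictions from m classifiers on n unlabeled samples.
    Under a conditional-independence assumption, the off-diagonal of the prediction covariance
    is approximately rank one; the leading eigenvector orders classifiers by reliability and
    serves as weights for a weighted majority vote.
    """
    Q = np.cov(preds)                      # m x m covariance of classifier outputs
    eigvals, eigvecs = np.linalg.eigh(Q)
    v = eigvecs[:, -1]                     # leading eigenvector
    v = v * np.sign(v.sum())               # fix the sign so most classifiers get positive weight
    ranking = np.argsort(-v)               # most reliable classifiers first
    meta = np.sign(v @ preds)              # weighted majority vote meta-classifier
    return ranking, meta

# Tiny simulation: 5 classifiers of varying accuracy on 1000 unlabeled samples.
rng = np.random.default_rng(0)
y = rng.choice([-1, 1], size=1000)
accs = np.array([0.9, 0.8, 0.7, 0.6, 0.55])
preds = np.where(rng.uniform(size=(5, 1000)) < accs[:, None], y, -y)
ranking, meta = spectral_rank_and_combine(preds)
```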

TrAp: a Tree Approach for Fingerprinting Subclonal Tumor Composition

no code implementations9 Jan 2013 Francesco Strino, Fabio Parisi, Mariann Micsinai, Yuval Kluger

Herein we propose a framework for deconvolving data from a single genome-wide experiment to infer the composition, abundance and evolutionary paths of the underlying cell subpopulations of a tumor.
