26 code implementations • 9 Feb 2016 • Matthieu Courbariaux, Itay Hubara, Daniel Soudry, Ran El-Yaniv, Yoshua Bengio
We introduce a method to train Binarized Neural Networks (BNNs) - neural networks with binary weights and activations at run-time.
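The core idea can be illustrated with a minimal sketch of one binarized layer's forward pass: latent real-valued weights are kept for gradient updates, but the forward computation uses their sign-binarized copy, and activations are binarized too. All names here are illustrative, and the straight-through gradient estimator the paper uses for training is omitted.

```python
import numpy as np

def binarize(x):
    """Deterministic binarization: map real values to {-1, +1} via sign."""
    return np.where(x >= 0, 1.0, -1.0)

def bnn_layer_forward(x, real_weights):
    """Forward pass of one binarized layer: binary weights and activations."""
    wb = binarize(real_weights)   # binary weights used at run-time
    pre_act = x @ wb
    return binarize(pre_act)      # binary activations

rng = np.random.default_rng(0)
x = binarize(rng.standard_normal(4))  # binary input vector
W = rng.standard_normal((4, 3))       # latent real-valued weights
out = bnn_layer_forward(x, W)         # every entry is -1 or +1
```

At run-time only the binarized copies are needed, which is what enables the bitwise arithmetic and memory savings the paper reports.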
5 code implementations • 22 Sep 2016 • Itay Hubara, Matthieu Courbariaux, Daniel Soudry, Ran El-Yaniv, Yoshua Bengio
Quantized recurrent neural networks were tested on the Penn Treebank dataset, and achieved accuracy comparable to their 32-bit counterparts using only 4 bits.
1 code implementation • Advances in Neural Information Processing Systems 16 2004 • Allan Borodin, Ran El-Yaniv, Vincent Gogan
A novel algorithm for actively trading stocks is presented.
2 code implementations • 4 Nov 2015 • Noam Segev, Maayan Harel, Shie Mannor, Koby Crammer, Ran El-Yaniv
We propose novel model transfer-learning methods that refine a decision forest model M learned within a "source" domain using a training set sampled from a "target" domain, assumed to be a variation of the source.
2 code implementations • NeurIPS 2018 • Izhak Golan, Ran El-Yaniv
We consider the problem of anomaly detection in images, and present a new detection technique.
Ranked #12 on Anomaly Detection on One-class CIFAR-100
4 code implementations • 26 Jan 2019 • Yonatan Geifman, Ran El-Yaniv
We consider the problem of selective prediction (also known as reject option) in deep neural networks, and introduce SelectiveNet, a deep neural architecture with an integrated reject option.
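A reject option can be sketched in its simplest form as confidence thresholding (softmax response): predict the top class only when its confidence clears a threshold, otherwise abstain. This is a hypothetical baseline sketch, not the SelectiveNet architecture itself, which instead learns an integrated selection head jointly with the classifier.

```python
import numpy as np

REJECT = -1  # sentinel value for abstention

def selective_predict(probs, threshold):
    """Predict the argmax class when its confidence clears the threshold,
    otherwise abstain (return REJECT)."""
    conf = probs.max(axis=1)
    preds = probs.argmax(axis=1)
    return np.where(conf >= threshold, preds, REJECT)

probs = np.array([[0.90, 0.10],
                  [0.55, 0.45],
                  [0.20, 0.80]])
print(selective_predict(probs, threshold=0.7))  # [ 0 -1  1]
```

Raising the threshold trades coverage (fraction of accepted examples) for lower selective risk, which is the trade-off SelectiveNet optimizes end to end.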
1 code implementation • ACL 2019 • Yair Feldman, Ran El-Yaniv
This paper is concerned with the task of multi-hop open-domain Question Answering (QA).
Ranked #55 on Question Answering on HotpotQA
1 code implementation • ICLR 2023 • Ido Galil, Mohammed Dabbah, Ran El-Yaniv
In this paper we present a novel framework to benchmark the ability of image classifiers to detect class-out-of-distribution instances (i.e., instances whose true labels do not appear in the training distribution) at various levels of detection difficulty.
1 code implementation • 26 May 2022 • Omer Belhasin, Guy Bar-Shalom, Ran El-Yaniv
This paper deals with deep transductive learning, and proposes TransBoost as a procedure for fine-tuning any deep neural model to improve its performance on any (unlabeled) test set provided at training time.
Ranked #1 on Image Classification on SUN397
1 code implementation • NeurIPS 2019 • Yonatan Geifman, Ran El-Yaniv
We consider active learning of deep neural networks.
1 code implementation • 23 Feb 2023 • Ido Galil, Mohammed Dabbah, Ran El-Yaniv
Here we examine the relationship between deep architectures and their respective training regimes, with their corresponding selective prediction and uncertainty estimation performance.
1 code implementation • NeurIPS 2021 • Ido Galil, Ran El-Yaniv
In this paper we present a novel and simple attack, which unlike adversarial attacks, does not cause incorrect predictions but instead cripples the network's capacity for uncertainty estimation.
no code implementations • ICLR 2019 • Yonatan Geifman, Guy Uziel, Ran El-Yaniv
We consider the problem of uncertainty estimation in the context of (non-Bayesian) deep neural classification.
no code implementations • 2 Nov 2017 • Yonatan Geifman, Ran El-Yaniv
This paper is concerned with pool-based active learning for deep neural networks.
no code implementations • NeurIPS 2017 • Yonatan Geifman, Ran El-Yaniv
Our method allows a user to set a desired risk level.
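The "desired risk level" idea can be sketched as threshold calibration on a held-out validation set: choose the most permissive confidence threshold whose empirical error rate among accepted examples stays within the target. This is a simplified illustration under assumed inputs, not the paper's SGR procedure with its generalization guarantee.

```python
import numpy as np

def calibrate_threshold(conf, correct, target_risk):
    """Pick the confidence threshold with the largest coverage whose empirical
    selective risk (error rate among accepted examples) is <= target_risk."""
    order = np.argsort(-conf)             # most confident first
    errors = np.cumsum(~correct[order])   # errors among the k most confident
    counts = np.arange(1, len(conf) + 1)
    risk = errors / counts                # selective risk at each coverage
    ok = np.where(risk <= target_risk)[0]
    if len(ok) == 0:
        return np.inf                     # no threshold meets the target
    k = ok.max()                          # largest coverage meeting the target
    return conf[order][k]

conf = np.array([0.99, 0.95, 0.90, 0.60, 0.55])       # validation confidences
correct = np.array([True, True, True, False, True])   # validation correctness
t = calibrate_threshold(conf, correct, target_risk=0.1)
print(t)  # 0.9: accepting the top-3 examples keeps selective risk at 0
```

At test time, predictions with confidence below `t` are rejected, so the user-specified risk level is met (empirically) at the highest attainable coverage.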
no code implementations • 27 May 2017 • Guy Uziel, Ran El-Yaniv
Online portfolio selection research has so far focused mainly on minimizing regret defined in terms of wealth growth.
no code implementations • 23 May 2017 • Ran El-Yaniv, Yonatan Geifman, Yair Wiener
We introduce the Prediction Advantage (PA), a novel performance measure for prediction functions under any loss function (e.g., classification or regression).
no code implementations • 19 Mar 2017 • Roei Gelbhart, Ran El-Yaniv
We focus on the agnostic setting, for which there is a known algorithm called LESS that learns a PCS classifier and achieves a fast rejection rate (depending on Hanneke's disagreement coefficient) under strong assumptions.
no code implementations • 4 Dec 2016 • Bar Hilleli, Ran El-Yaniv
We propose a scheme for training a computerized agent to perform complex human tasks such as highway steering.
no code implementations • NeurIPS 2017 • Guy Uziel, Ran El-Yaniv
Recently, an algorithm for dealing with several objective functions in the i.i.d. case has been introduced.
no code implementations • 3 May 2016 • Guy Uziel, Ran El-Yaniv
We present a novel online ensemble learning strategy for portfolio selection.
no code implementations • 12 Apr 2016 • Guy Uziel, Ran El-Yaniv
We consider online learning of ensembles of portfolio selection algorithms and aim to regularize risk by encouraging diversification with respect to a predefined risk-driven grouping of stocks.
no code implementations • 5 Apr 2014 • Yair Wiener, Steve Hanneke, Ran El-Yaniv
We introduce a new and improved characterization of the label complexity of disagreement-based active learning, in which the leading quantity is the version space compression set size.
no code implementations • 15 Jan 2014 • Ran El-Yaniv, Dmitry Pechyony
We develop a technique for deriving data-dependent error bounds for transductive learning algorithms based on transductive Rademacher complexity.
no code implementations • 10 Nov 2013 • Ran El-Yaniv, David Yanay
We propose and study a novel supervised approach to learning statistical semantic relatedness models from subjectively annotated training examples.
no code implementations • NeurIPS 2012 • Yair Wiener, Ran El-Yaniv
This paper examines the possibility of a `reject option' in the context of least squares regression.
no code implementations • NeurIPS 2011 • Dmitry Pidan, Ran El-Yaniv
Our results indicate that both methods are effective, and that the sHMM model is superior.
no code implementations • NeurIPS 2011 • Yair Wiener, Ran El-Yaniv
For a learning problem whose associated excess loss class is $(\beta, B)$-Bernstein, we show that it is theoretically possible to track the same classification performance of the best (unknown) hypothesis in our class, provided that we are free to abstain from prediction in some region of our choice.
no code implementations • 28 Jan 2019 • Sella Nevo, Vova Anisimov, Gal Elidan, Ran El-Yaniv, Pete Giencke, Yotam Gigi, Avinatan Hassidim, Zach Moshe, Mor Schlesinger, Guy Shalev, Ajai Tirumali, Ami Wiesel, Oleg Zlydenko, Yossi Matias
We propose to build on these strengths and develop ML systems for timely and accurate riverine flood prediction.
no code implementations • 27 Oct 2019 • Gal Sadeh Kenigsfield, Ran El-Yaniv
Our model relies on a shared text--image representation of subject-verb-object relationships appearing in the text, and object interactions in images.
no code implementations • 3 Nov 2019 • Shai Rozenberg, Gal Elidan, Ran El-Yaniv
Given a deep neural network (DNN) for a classification problem, an application of MAD optimization results in MadNet, a version of the original network, now equipped with an adversarial defense mechanism.
no code implementations • 21 Nov 2019 • Guy Shalev, Ran El-Yaniv, Daniel Klotz, Frederik Kratzert, Asher Metzger, Sella Nevo
Joint models are a common and important tool in the intersection of machine learning and the physical sciences, particularly in contexts where real-world measurements are scarce.
no code implementations • 11 Jun 2020 • Ami Abutbul, Gal Elidan, Liran Katzir, Ran El-Yaniv
A challenging open question in deep learning is how to handle tabular data.
no code implementations • 1 Jul 2020 • Zach Moshe, Asher Metzger, Gal Elidan, Frederik Kratzert, Sella Nevo, Ran El-Yaniv
In this work we present a novel family of hydrologic models, called HydroNets, which leverages river network structure.
no code implementations • ICLR 2021 • Liran Katzir, Gal Elidan, Ran El-Yaniv
A challenging open question in deep learning is how to handle tabular data.
no code implementations • 18 Jul 2021 • Shai Ben-Assayag, Ran El-Yaniv
Our ScalableAlphaZero learns to play incrementally on small boards and then advances to play on large ones.
no code implementations • 29 Sep 2021 • Ido Galil, Mohammed Dabbah, Ran El-Yaniv
Moreover, we consider some of the most popular previously proposed estimation performance metrics, including AUROC, ECE, AURC, and coverage for a selective accuracy constraint.
no code implementations • 25 Sep 2019 • Shai Rozenberg, Gal Elidan, Ran El-Yaniv
This paper is concerned with the defense of deep models against adversarial attacks.
no code implementations • 26 Nov 2021 • Mohammed Dabbah, Ran El-Yaniv
Focusing on discriminative zero-shot learning, in this work we introduce a novel mechanism that dynamically augments during training the set of seen classes to produce additional fictitious classes.
no code implementations • 5 Jun 2022 • Ido Galil, Mohammed Dabbah, Ran El-Yaniv
Due to the comprehensive nature of this paper, it has been updated and split into two separate papers: "A Framework For Benchmarking Class-out-of-distribution Detection And Its Application To ImageNet" and "What Can We Learn From The Selective Prediction And Uncertainty Estimation Performance Of 523 Imagenet Classifiers".