Our ScalableAlphaZero learns to play incrementally, starting on small boards and advancing to larger ones.
In this work we present a novel family of hydrologic models, called HydroNets, which leverages river network structure.
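Purely as a sketch of the idea of exploiting river network structure (the node representation, shared encoder, and aggregation scheme below are assumptions for illustration, not the actual HydroNets architecture): each sub-basin node can combine its local inputs with hidden states flowing in from upstream sub-basins before making a local prediction.

import torch
import torch.nn as nn

class RiverNode:
    # A sub-basin with local input features and upstream children (assumed structure).
    def __init__(self, features, children=()):
        self.features = features          # e.g., local rainfall/forcing features
        self.children = list(children)    # upstream sub-basins

class ToyRiverNet(nn.Module):
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.hid_dim = hid_dim
        self.encode = nn.Linear(in_dim + hid_dim, hid_dim)  # weights shared by all nodes
        self.head = nn.Linear(hid_dim, 1)                   # local discharge estimate

    def node_state(self, node):
        # Recursively aggregate hidden states flowing in from upstream sub-basins.
        upstream = sum((self.node_state(c) for c in node.children),
                       torch.zeros(self.hid_dim))
        return torch.relu(self.encode(torch.cat([node.features, upstream])))

    def forward(self, node):
        return self.head(self.node_state(node))

# Usage: leaf basins feed their states downstream to the outlet node.
leaf = RiverNode(torch.randn(3))
outlet = RiverNode(torch.randn(3), children=[leaf])
model = ToyRiverNet(in_dim=3, hid_dim=8)
pred = model(outlet)   # scalar prediction at the outlet

Sharing the encoder across nodes is what lets such a model transfer information along the river network rather than fitting each gauge site in isolation.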
Joint models are a common and important tool in the intersection of machine learning and the physical sciences, particularly in contexts where real-world measurements are scarce.
Given a deep neural network (DNN) for a classification problem, an application of MAD optimization results in MadNet, a version of the original network, now equipped with an adversarial defense mechanism.
Our model relies on a shared text-image representation of subject-verb-object relationships appearing in the text, and object interactions in images.
28 Jan 2019 • Sella Nevo, Vova Anisimov, Gal Elidan, Ran El-Yaniv, Pete Giencke, Yotam Gigi, Avinatan Hassidim, Zach Moshe, Mor Schlesinger, Guy Shalev, Ajai Tirumali, Ami Wiesel, Oleg Zlydenko, Yossi Matias
We propose to build on these strengths and develop ML systems for timely and accurate riverine flood prediction.
We consider the problem of selective prediction (also known as reject option) in deep neural networks, and introduce SelectiveNet, a deep neural architecture with an integrated reject option.
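As a rough illustration of the integrated-reject-option idea (the body network, layer sizes, and threshold below are assumptions; the actual SelectiveNet additionally uses an auxiliary head and a coverage-constrained selective loss): a prediction head and a selection head share one feature extractor, and the selection score gates each prediction at test time.

import torch
import torch.nn as nn

class ToySelectiveNet(nn.Module):
    def __init__(self, body, feat_dim, n_classes):
        super().__init__()
        self.body = body                               # any shared feature extractor
        self.predict = nn.Linear(feat_dim, n_classes)  # prediction head f(x)
        self.select = nn.Sequential(                   # selection head g(x) in [0, 1]
            nn.Linear(feat_dim, 1), nn.Sigmoid())

    def forward(self, x):
        z = self.body(x)
        return self.predict(z), self.select(z).squeeze(-1)

# Abstain whenever the selection score falls below a threshold tau.
def selective_predict(model, x, tau=0.5):
    logits, g = model(x)
    return logits.argmax(dim=-1), g >= tau   # class predictions plus accept mask

Sweeping tau trades coverage (how often the model answers) against selective risk (error on the answered subset).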
We introduce the Prediction Advantage (PA), a novel performance measure for prediction functions under any loss function (e.g., for classification or regression).
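A hedged sketch of how such a measure can be computed, assuming PA normalizes the model's risk by the risk of the best feature-oblivious baseline under the same loss (function and variable names here are illustrative):

import numpy as np

def prediction_advantage(loss, y_true, y_pred, y_base):
    # PA = 1 - R(model) / R(baseline), where the baseline ignores the features.
    model_risk = np.mean([loss(y, p) for y, p in zip(y_true, y_pred)])
    base_risk = np.mean([loss(y, b) for y, b in zip(y_true, y_base)])
    return 1.0 - model_risk / base_risk

# Squared-loss example: the best feature-oblivious regressor is the label mean.
y = np.array([1.0, 2.0, 3.0, 4.0])
preds = np.array([1.1, 2.2, 2.9, 3.8])
baseline = np.full_like(y, y.mean())
pa = prediction_advantage(lambda a, b: (a - b) ** 2, y, preds, baseline)
print(pa)   # close to 1 means far better than predicting without the features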
We focus on the agnostic setting, for which there is a known algorithm called LESS that learns a PCS classifier and achieves a fast rejection rate (depending on Hanneke's disagreement coefficient) under strong assumptions.
We propose a scheme for training a computerized agent to perform complex human tasks such as highway steering.
Quantized recurrent neural networks were tested on the Penn Treebank dataset and achieved accuracy comparable to their 32-bit counterparts while using only 4 bits.
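For illustration, a generic symmetric 4-bit uniform weight quantizer of the kind such experiments typically use (an assumption for concreteness, not necessarily the exact scheme evaluated on Penn Treebank):

import torch

def quantize_weights(w, bits=4):
    # Symmetric uniform quantization onto integer levels in [-qmax, qmax].
    qmax = 2 ** (bits - 1) - 1                      # 7 for 4 bits
    scale = w.abs().max().clamp(min=1e-8) / qmax    # guard against all-zero tensors
    q = torch.round(w / scale).clamp(-qmax, qmax)
    return q * scale                                # dequantized weights for the forward pass

# Usage: quantize each weight matrix of an RNN before (or during) evaluation.
w = torch.randn(256, 256)
w_q = quantize_weights(w, bits=4)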
We consider online learning of ensembles of portfolio selection algorithms and aim to regularize risk by encouraging diversification with respect to a predefined risk-driven grouping of stocks.
We introduce a method to train Binarized Neural Networks (BNNs), neural networks with binary weights and activations at run-time.
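A minimal sketch of the standard training trick for BNNs, deterministic sign binarization with a straight-through estimator for gradients; the details below are a common variant rather than the method's exact recipe.

import torch

class Binarize(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        # Deterministic binarization to {-1, +1} (zeros map to +1 here by choice).
        return torch.where(x >= 0, torch.ones_like(x), -torch.ones_like(x))

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        # Straight-through estimator: pass gradients, but cancel them where |x| > 1.
        return grad_out * (x.abs() <= 1).float()

binarize = Binarize.apply

# Usage: binarize full-precision "shadow" weights in each forward pass;
# gradients still update the real-valued weights.
w = torch.randn(128, 128, requires_grad=True)
w_b = binarize(w)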
We propose novel model transfer-learning methods that refine a decision forest model M learned within a "source" domain using a training set sampled from a "target" domain, assumed to be a variation of the source.
We introduce a new and improved characterization of the label complexity of disagreement-based active learning, in which the leading quantity is the version space compression set size.
We propose and study a novel supervised approach to learning statistical semantic relatedness models from subjectively annotated training examples.
For a learning problem whose associated excess loss class is $(\beta, B)$-Bernstein, we show that it is theoretically possible to track the same classification performance of the best (unknown) hypothesis in our class, provided that we are free to abstain from prediction in some region of our choice.
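For context, the standard form of the condition referenced here (the paper's exact variant may differ): an excess loss class $\mathcal{G}$ is $(\beta, B)$-Bernstein, with $0 < \beta \le 1$ and $B \ge 1$, if every $g \in \mathcal{G}$ satisfies

$$\mathbb{E}[g^2] \le B\,\left(\mathbb{E}[g]\right)^{\beta},$$

so the second moment of the excess loss is controlled by its mean, which is what makes fast rates, and here fast abstention-based tracking, possible.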