Search Results for author: Doron Haviv

Found 6 papers, 2 papers with code

Wasserstein Wormhole: Scalable Optimal Transport Distance with Transformers

no code implementations 15 Apr 2024 Doron Haviv, Russell Zhang Kunes, Thomas Dougherty, Cassandra Burdziak, Tal Nawy, Anna Gilbert, Dana Pe'er

Along with an encoder that maps distributions to embeddings, Wasserstein Wormhole includes a decoder that maps embeddings back to distributions, allowing operations in the embedding space, such as Wasserstein barycenter estimation and OT interpolation, to generalize to OT space.
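A minimal sketch of this encoder/decoder idea (not the authors' implementation): a set encoder pools per-point tokens from a transformer into one embedding per distribution, and a decoder maps the embedding back to a fixed-size point cloud. The class name, layer sizes, and mean pooling are illustrative assumptions.

```python
import torch
import torch.nn as nn

class WormholeSketch(nn.Module):
    def __init__(self, point_dim=2, embed_dim=64, out_points=128):
        super().__init__()
        self.lift = nn.Linear(point_dim, embed_dim)
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        # Decoder: one embedding -> fixed-size point cloud (out_points x point_dim).
        self.decoder = nn.Sequential(
            nn.Linear(embed_dim, 256), nn.ReLU(),
            nn.Linear(256, out_points * point_dim),
        )
        self.out_points, self.point_dim = out_points, point_dim

    def embed(self, cloud):                      # cloud: (batch, n_points, point_dim)
        tokens = self.encoder(self.lift(cloud))  # per-point tokens
        return tokens.mean(dim=1)                # pool to one embedding per distribution

    def decode(self, z):                         # z: (batch, embed_dim)
        flat = self.decoder(z)
        return flat.view(-1, self.out_points, self.point_dim)

# Training (not shown here) would push Euclidean distances between embeddings to
# match pairwise OT distances between the input point clouds, so that operations
# such as averaging embeddings approximate Wasserstein barycenters.
```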

Gradient Estimation for Binary Latent Variables via Gradient Variance Clipping

no code implementations 12 Aug 2022 Russell Z. Kunes, Mingzhang Yin, Max Land, Doron Haviv, Dana Pe'er, Simon Tavaré

Gradient estimation is often necessary for fitting generative models with discrete latent variables, in contexts such as reinforcement learning and variational autoencoder (VAE) training.
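As a hedged illustration of the setting only (the paper's gradient variance clipping estimator is not reproduced here), the basic score-function (REINFORCE) gradient for a Bernoulli latent looks as follows; the function names and toy objective are assumptions.

```python
import torch

def score_function_grad(logits, f, n_samples=1000):
    """Estimate d/d(logits) of E_{b ~ Bernoulli(sigmoid(logits))}[f(b)]."""
    probs = torch.sigmoid(logits)
    b = torch.bernoulli(probs.expand(n_samples, -1))   # sampled binary latents
    log_prob = (b * torch.log(probs) + (1 - b) * torch.log(1 - probs)).sum(dim=-1)
    # REINFORCE surrogate: f(b) * log p(b); its high variance is the usual difficulty.
    surrogate = (f(b).detach() * log_prob).mean()
    return torch.autograd.grad(surrogate, logits)[0]

logits = torch.zeros(3, requires_grad=True)
toy_objective = lambda b: (b - torch.tensor([1.0, 0.0, 1.0])).pow(2).sum(dim=-1)
print(score_function_grad(logits, toy_objective))
```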

The Ramanujan Machine: Automatically Generated Conjectures on Fundamental Constants

4 code implementations 29 Jun 2019 Gal Raayoni, Shahar Gottlieb, George Pisha, Yoav Harris, Yahel Manor, Uri Mendlovic, Doron Haviv, Yaron Hadad, Ido Kaminer

Fundamental mathematical constants like $e$ and $\pi$ are ubiquitous in diverse fields of science, from abstract mathematics to physics, biology and chemistry.

The effectiveness of layer-by-layer training using the information bottleneck principle

no code implementations ICLR 2019 Adar Elad, Doron Haviv, Yochai Blau, Tomer Michaeli

The recently proposed information bottleneck (IB) theory of deep nets suggests that during training, each layer attempts to maximize its mutual information (MI) with the target labels (so as to allow good prediction accuracy), while minimizing its MI with the input (leading to effective compression and thus good generalization).
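The per-layer trade-off described here is the standard information bottleneck Lagrangian; a compact statement, with notation assumed rather than taken from this listing:

```latex
% T_l is the l-th layer's representation, X the input, Y the labels,
% and beta trades compression against prediction accuracy.
\max_{T_l} \; I(T_l; Y) \;-\; \beta \, I(T_l; X)
```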

Don't Judge a Book by Its Cover - On the Dynamics of Recurrent Neural Networks

no code implementations ICLR 2019 Doron Haviv, Alexander Rivkind, Omri Barak

To be effective in sequential data processing, Recurrent Neural Networks (RNNs) are required to keep track of past events by creating memories.

Understanding and Controlling Memory in Recurrent Neural Networks

2 code implementations 19 Feb 2019 Doron Haviv, Alexander Rivkind, Omri Barak

Finally, we propose a novel regularization technique that is based on the relation between hidden state speeds and memory longevity.
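The excerpt only names the idea; as a hedged sketch (an assumption about the form of the regularizer, not the paper's code), a penalty on hidden-state speed ||h_{t+1} - h_t|| can be added to the task loss so that states carrying memories move slowly. The simple RNN and lambda_speed weight are illustrative.

```python
import torch
import torch.nn as nn

rnn = nn.RNN(input_size=8, hidden_size=32, batch_first=True)
readout = nn.Linear(32, 2)
lambda_speed = 0.1

def loss_with_speed_penalty(x, targets):
    states, _ = rnn(x)                                       # (batch, time, hidden)
    speeds = (states[:, 1:] - states[:, :-1]).norm(dim=-1)   # ||h_{t+1} - h_t||
    task_loss = nn.functional.cross_entropy(readout(states[:, -1]), targets)
    return task_loss + lambda_speed * speeds.mean()

x = torch.randn(4, 20, 8)            # batch of 4 sequences, 20 steps, 8 features
targets = torch.randint(0, 2, (4,))
print(loss_with_speed_penalty(x, targets))
```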

