Search Results for author: Guy Lorberbom

Found 7 papers, 3 papers with code

Layer Collaboration in the Forward-Forward Algorithm

no code implementations • 21 May 2023 • Guy Lorberbom, Itai Gat, Yossi Adi, Alex Schwing, Tamir Hazan

We show that the current version of the forward-forward algorithm is suboptimal in terms of information flow through the network, resulting in a lack of collaboration between its layers.
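For context, the baseline forward-forward setup trains each layer locally on a "goodness" score (commonly the sum of squared activations) that should be high for positive data and low for negative data, with no gradient arriving from later layers. A rough single-layer sketch of that baseline follows; it is illustrative only (not the paper's method), and all shapes, data, and constants are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: "positive" samples have large norm, "negative" ones are small noise.
x_pos = rng.uniform(1.0, 2.0, size=(256, 2))
x_neg = rng.normal(0.0, 0.3, size=(256, 2))

W = rng.normal(0.0, 0.5, size=(4, 2))  # one layer, trained purely locally
theta, lr = 2.0, 0.05                  # goodness threshold and step size

def goodness(W, x):
    h = np.maximum(W @ x, 0.0)         # ReLU activations
    return np.sum(h * h), h            # goodness = sum of squared activations

sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

for step in range(300):
    for x, positive in ((x_pos[step % 256], True), (x_neg[step % 256], False)):
        g, h = goodness(W, x)
        # Logistic loss on (g - theta): push goodness up for positive data, down for negative.
        dL_dg = -sigmoid(theta - g) if positive else sigmoid(g - theta)
        W -= lr * dL_dg * 2.0 * np.outer(h, x)  # dg/dW = 2 h x^T (ReLU mask folded into h)
```

After training, positive inputs score above the threshold and negative ones below it; crucially, the layer never receives any signal from layers above it, which is the information-flow limitation the paper analyzes.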

Transplantation of Conversational Speaking Style with Interjections in Sequence-to-Sequence Speech Synthesis

no code implementations • 25 Jul 2022 • Raul Fernandez, David Haws, Guy Lorberbom, Slava Shechtman, Alexander Sorin

In this work we explore one-to-many style transfer from a dedicated single-speaker conversational corpus with style nuances and interjections.

Tasks: Data Augmentation, Speech Synthesis, +2

Latent Space Explanation by Intervention

no code implementations • 9 Dec 2021 • Itai Gat, Guy Lorberbom, Idan Schwartz, Tamir Hazan

The success of deep neural nets heavily relies on their ability to encode complex relations between their input and their output.

Learning Generalized Gumbel-max Causal Mechanisms

1 code implementation NeurIPS 2021 Guy Lorberbom, Daniel D. Johnson, Chris J. Maddison, Daniel Tarlow, Tamir Hazan

To perform counterfactual reasoning in Structural Causal Models (SCMs), one needs to know the causal mechanisms, which provide factorizations of conditional distributions into noise sources and deterministic functions mapping realizations of noise to samples.
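The Gumbel-max mechanism is the standard example of such a factorization: a categorical sample is written as a deterministic argmax over log-probabilities plus independent Gumbel noise, and a counterfactual is obtained by reusing the same noise realization under intervened probabilities. A minimal illustrative sketch (not the paper's generalized mechanisms; the distributions below are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_max_sample(log_p, g):
    """Deterministic mechanism: shared noise g plus log-probs -> outcome."""
    return int(np.argmax(log_p + g))

p_factual = np.array([0.2, 0.5, 0.3])
p_intervened = np.array([0.5, 0.2, 0.3])  # an intervention on the distribution

# One shared Gumbel noise realization couples the factual and counterfactual worlds.
g = rng.gumbel(size=3)
factual = gumbel_max_sample(np.log(p_factual), g)
counterfactual = gumbel_max_sample(np.log(p_intervened), g)
```

Marginally, argmax(log p + G) is distributed as p (the Gumbel-max trick); the modeling freedom the paper exploits is that many different noise couplings share these marginals while inducing different counterfactual distributions.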

Tasks: Counterfactual Reasoning

Direct Optimization through $\arg \max$ for Discrete Variational Auto-Encoder

1 code implementation NeurIPS 2019 Guy Lorberbom, Tommi Jaakkola, Andreea Gane, Tamir Hazan

Reparameterization of variational auto-encoders with continuous random variables is an effective method for reducing the variance of their gradient estimates.
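The reparameterization trick referred to here writes a continuous sample as a deterministic function of the parameters and parameter-free noise, e.g. z = mu + sigma * eps with eps ~ N(0, 1), so gradients flow through the sample itself. A quick illustrative check (not the paper's code) on the objective E[z^2], whose exact gradient with respect to mu is 2 * mu:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n = 1.5, 0.5, 100000  # illustrative Gaussian parameters

eps = rng.normal(size=n)          # parameter-free noise
z = mu + sigma * eps              # reparameterized sample, differentiable in mu
# d(z**2)/d(mu) = 2*z, so the pathwise (reparameterized) gradient estimate is:
grad_est = np.mean(2.0 * z)       # should be close to 2 * mu
```

The pathwise estimator averages the analytic per-sample derivative, which is what gives it low variance compared with score-function estimators; for discrete latent variables no such continuous path exists, motivating the direct optimization approach of the paper.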

Direct Policy Gradients: Direct Optimization of Policies in Discrete Action Spaces

no code implementations NeurIPS 2020 Guy Lorberbom, Chris J. Maddison, Nicolas Heess, Tamir Hazan, Daniel Tarlow

A key benefit of DirPG algorithms is that they allow domain knowledge to be inserted in the form of upper bounds on return-to-go at training time, as is done in heuristic search, while still directly computing a policy gradient.

Direct Optimization through $\arg \max$ for Discrete Variational Auto-Encoder

2 code implementations ICLR 2019 Guy Lorberbom, Andreea Gane, Tommi Jaakkola, Tamir Hazan

We demonstrate empirically the effectiveness of the direct loss minimization technique in variational autoencoders with both unstructured and structured discrete latent variables.
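At its core, the direct technique perturbs the argmax with the objective itself: with Gumbel noise G, argmax(theta + G) samples from softmax(theta), and averaging (1/eps) * (onehot(argmax(theta + G + eps*f)) - onehot(argmax(theta + G))) estimates the gradient of E[f(z)] with respect to theta. A hedged numerical sketch, with sign and scaling conventions simplified relative to the paper and all numbers invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

theta = np.array([0.2, 0.5, -0.1])     # logits of a categorical latent
f = np.array([1.0, 2.0, 0.0])          # objective value per discrete outcome
eps, n = 0.1, 200000                   # perturbation size, number of samples

p = np.exp(theta) / np.exp(theta).sum()
exact_grad = p * (f - p @ f)           # d E[f] / d theta under softmax(theta)

g = rng.gumbel(size=(n, 3))
z = np.argmax(theta + g, axis=1)                 # Gumbel-max samples of the latent
z_pert = np.argmax(theta + g + eps * f, axis=1)  # objective-perturbed argmax
onehot = np.eye(3)
grad_est = (onehot[z_pert] - onehot[z]).mean(axis=0) / eps
```

The estimate approaches the exact softmax gradient as eps shrinks and the sample count grows; each per-sample difference of one-hots sums to zero, so the estimator, like the true gradient of a softmax expectation, sums to zero across components.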
