no code implementations • 21 May 2023 • Guy Lorberbom, Itai Gat, Yossi Adi, Alex Schwing, Tamir Hazan
We show that the current version of the forward-forward algorithm is suboptimal when considering information flow in the network, resulting in a lack of collaboration between its layers.
no code implementations • 25 Jul 2022 • Raul Fernandez, David Haws, Guy Lorberbom, Slava Shechtman, Alexander Sorin
In this work we explore one-to-many style transfer from a dedicated single-speaker conversational corpus with style nuances and interjections.
no code implementations • 9 Dec 2021 • Itai Gat, Guy Lorberbom, Idan Schwartz, Tamir Hazan
The success of deep neural nets heavily relies on their ability to encode complex relations between their input and their output.
1 code implementation • NeurIPS 2021 • Guy Lorberbom, Daniel D. Johnson, Chris J. Maddison, Daniel Tarlow, Tamir Hazan
To perform counterfactual reasoning in Structural Causal Models (SCMs), one needs to know the causal mechanisms, which provide factorizations of conditional distributions into noise sources and deterministic functions mapping realizations of noise to samples.
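The abduction-action-prediction recipe described above can be sketched on a toy SCM. The mechanism `Y = 2X + U` below is my own illustrative example, not the paper's model: we invert the deterministic function to recover the noise realization consistent with an observation, then push that same noise through an intervened model.

```python
# Toy SCM (illustrative, not the paper's model): Y = 2*X + U,
# where U is an exogenous noise source.

def mechanism_y(x, u):
    """Deterministic function mapping a noise realization to a sample."""
    return 2.0 * x + u

def abduct_u(x_obs, y_obs):
    """Abduction: invert the mechanism to recover the noise
    consistent with the factual observation."""
    return y_obs - 2.0 * x_obs

def counterfactual_y(x_obs, y_obs, x_new):
    u = abduct_u(x_obs, y_obs)      # 1. abduction: infer noise
    # 2. action: intervene, setting X to x_new
    return mechanism_y(x_new, u)    # 3. prediction: reuse the same noise

# Observed X=1, Y=5 implies U=3; "had X been 2", Y would have been 2*2+3.
print(counterfactual_y(1.0, 5.0, 2.0))  # -> 7.0
```

The key point is that the counterfactual reuses the *same* noise realization inferred from the factual world, which is only possible when the causal mechanisms themselves are known.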
1 code implementation • NeurIPS 2019 • Guy Lorberbom, Tommi Jaakkola, Andreea Gane, Tamir Hazan
Reparameterization of variational auto-encoders with continuous random variables is an effective method for reducing the variance of their gradient estimates.
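The variance reduction from reparameterization can be seen in a minimal sketch (my own illustrative objective, not the paper's estimator): for a Gaussian with unit variance, the pathwise gradient of E[z²] with respect to the mean has far lower variance than the score-function (REINFORCE) gradient, while both are unbiased.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, n = 1.5, 100_000
eps = rng.standard_normal(n)

# Reparameterization: z = mu + eps, so d/dmu z^2 = 2*(mu + eps).
pathwise = 2.0 * (mu + eps)

# Score function: z^2 * d/dmu log N(z; mu, 1) = z^2 * (z - mu).
z = mu + eps
score = z**2 * (z - mu)

# Both estimate the true gradient 2*mu = 3.0, but with very
# different variances.
print(pathwise.mean(), score.mean())
print(pathwise.var(), score.var())
```

Here the pathwise estimator's variance is an order of magnitude smaller, which is what makes reparameterized gradients attractive for training variational auto-encoders with continuous latents.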
no code implementations • NeurIPS 2020 • Guy Lorberbom, Chris J. Maddison, Nicolas Heess, Tamir Hazan, Daniel Tarlow
A main benefit of DirPG algorithms is that they allow the insertion of domain knowledge in the form of upper bounds on return-to-go at training time, as is used in heuristic search, while still directly computing a policy gradient.
2 code implementations • ICLR 2019 • Guy Lorberbom, Andreea Gane, Tommi Jaakkola, Tamir Hazan
We demonstrate empirically the effectiveness of the direct loss minimization technique in variational autoencoders with both unstructured and structured discrete latent variables.