2 code implementations • 21 Feb 2024 • Heli Ben-Hamu, Omri Puny, Itai Gat, Brian Karrer, Uriel Singer, Yaron Lipman
Taming the generation outcome of state-of-the-art Diffusion and Flow-Matching (FM) models without having to re-train a task-specific model unlocks a powerful tool for solving inverse problems, conditional generation, and controlled generation in general.
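One way to read this is: to solve an inverse problem, optimize the flow's *source* point by gradient descent through the generation process, with the pretrained model left untouched. The sketch below is a minimal, hypothetical stand-in — a closed-form linear "flow" replaces a real pretrained model, and the function names are illustrative, not the paper's method.

```python
import numpy as np

def flow(x0, steps=100):
    """Euler integration of the linear field dx/dt = -x, a toy stand-in
    for a pretrained flow model; closed form: flow(x0) = c * x0."""
    c = (1.0 - 1.0 / steps) ** steps
    return c * x0, c

def solve_inverse(y_obs, lr=0.5, iters=200):
    """Controlled-generation sketch: optimize the source point x0 so the
    generated sample flow(x0) matches an observation, by gradient descent
    *through* the (here, linear and hand-differentiated) flow."""
    x0 = np.zeros_like(y_obs)
    for _ in range(iters):
        y, c = flow(x0)
        grad = 2 * c * (y - y_obs)   # d/dx0 of ||c * x0 - y_obs||^2
        x0 -= lr * grad
    return x0
```

With a neural flow, the analytic gradient would be replaced by automatic differentiation through the ODE solver; only the toy linear field makes it one line here.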
no code implementations • 28 Apr 2023 • Aram-Alexandre Pooladian, Heli Ben-Hamu, Carles Domingo-Enrich, Brandon Amos, Yaron Lipman, Ricky T. Q. Chen
Simulation-free methods for training continuous-time generative models construct probability paths that go between noise distributions and individual data samples.
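The path construction can be sketched with the straight-line (optimal-transport) conditional path between a noise sample and a data sample, regressed against its constant velocity. The numpy names below are illustrative, and the hand-written `perfect` velocity stands in for a trained network.

```python
import numpy as np

def ot_path_sample(x0, x1, t):
    """Point on the straight-line conditional path from a noise sample x0
    to a data sample x1 at time t in [0, 1]."""
    return (1.0 - t) * x0 + t * x1

def ot_target_velocity(x0, x1):
    """Velocity of the straight-line path: the constant x1 - x0."""
    return x1 - x0

def cfm_loss(model, x0, x1, t):
    """Conditional flow-matching regression at one (x0, x1, t): mean squared
    error between the model velocity and the path velocity."""
    xt = ot_path_sample(x0, x1, t)
    return float(np.mean((model(xt, t) - ot_target_velocity(x0, x1)) ** 2))

rng = np.random.default_rng(0)
x0 = rng.standard_normal(2)            # noise sample
x1 = np.array([3.0, -1.0])             # "data" sample
perfect = lambda x, t: x1 - x0         # exact conditional velocity
print(cfm_loss(perfect, x0, x1, 0.5))  # → 0.0
```

Training averages this loss over random data samples, noise samples, and times t, so no ODE is ever simulated during training — hence "simulation-free".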
2 code implementations • 6 Oct 2022 • Yaron Lipman, Ricky T. Q. Chen, Heli Ben-Hamu, Maximilian Nickel, Matt Le
These paths are more efficient than diffusion paths, provide faster training and sampling, and result in better generalization.
Ranked #5 on Density Estimation on CIFAR-10
no code implementations • 11 Jul 2022 • Heli Ben-Hamu, Samuel Cohen, Joey Bose, Brandon Amos, Aditya Grover, Maximilian Nickel, Ricky T. Q. Chen, Yaron Lipman
Continuous Normalizing Flows (CNFs) are a class of generative models that transform a prior distribution to a model distribution by solving an ordinary differential equation (ODE).
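A minimal sketch of the ODE view, using fixed-step Euler integration and a hand-picked linear vector field in place of a learned one (both illustrative):

```python
import numpy as np

def euler_flow(vector_field, x0, t0=0.0, t1=1.0, steps=100):
    """Push a prior sample x0 through the ODE dx/dt = v(x, t) with
    fixed-step Euler integration; the endpoint is the model sample."""
    x, t = np.array(x0, dtype=float), t0
    h = (t1 - t0) / steps
    for _ in range(steps):
        x = x + h * vector_field(x, t)
        t += h
    return x

# Illustrative field: dx/dt = -x contracts every point toward the origin,
# so integrating to t = 1 scales x0 by exp(-1).
v = lambda x, t: -x
sample = euler_flow(v, [2.0, -2.0], steps=1000)
```

In a real CNF, `v` is a neural network and the solver is typically adaptive, but the sampling recipe — solve the ODE from a prior draw — is exactly this.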
1 code implementation • ICLR 2022 • Omri Puny, Matan Atzmon, Heli Ben-Hamu, Ishan Misra, Aditya Grover, Edward J. Smith, Yaron Lipman
For example, graph or point cloud neural networks that are invariant or equivariant to Euclidean motions.
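The underlying symmetrization idea can be sketched by averaging a function over a small finite group of planar rotations. This is plain group averaging — frame averaging replaces the full group with a small input-dependent frame so the same trick stays exact for much larger groups — and all names below are illustrative.

```python
import numpy as np

# The four 90-degree rotation matrices: a small finite symmetry group.
C4 = [np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
      for a in (0.0, np.pi / 2, np.pi, 3 * np.pi / 2)]

def symmetrize(f):
    """Average f over the group: the result is C4-invariant by construction."""
    return lambda pts: np.mean([f(pts @ R.T) for R in C4], axis=0)

# A non-invariant scalar function of a point cloud ...
f = lambda pts: float(np.sum(np.abs(pts[:, 0])))  # sum of |x-coordinates|
# ... and its invariant symmetrization.
g = symmetrize(f)

pts = np.array([[1.0, 2.0], [0.5, -1.0]])
R = C4[1]                                 # rotate the cloud by 90 degrees
```

Evaluating `g` on `pts` and on the rotated cloud `pts @ R.T` gives the same value, while `f` itself does not.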
3 code implementations • 14 Jun 2020 • Omri Puny, Heli Ben-Hamu, Yaron Lipman
This paper advocates incorporating a Low-Rank Global Attention (LRGA) module, a computation- and memory-efficient variant of dot-product attention (Vaswani et al., 2017), into Graph Neural Networks (GNNs) to improve their generalization power.
Ranked #1 on Link Property Prediction on ogbl-ddi
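The low-rank trick behind such a module can be sketched as associating the attention product so the n x n matrix is never formed. The weight names, the kernel `kappa`, and the normalization below are illustrative, not the paper's exact LRGA formulation.

```python
import numpy as np

def low_rank_global_attention(X, W1, W2, W3, kappa=np.exp):
    """Low-rank global attention in O(n * k) memory: instead of building
    the n x n matrix kappa(XW1) @ kappa(XW2).T explicitly, associate the
    product as kappa(XW1) @ (kappa(XW2).T @ (X @ W3))."""
    Q = kappa(X @ W1)                            # (n, k)
    K = kappa(X @ W2)                            # (n, k)
    V = X @ W3                                   # (n, d)
    num = Q @ (K.T @ V)                          # (n, d); no (n, n) tensor
    denom = Q @ K.sum(axis=0, keepdims=True).T   # row-wise normalizer
    return num / denom

rng = np.random.default_rng(1)
n, d, k = 6, 4, 2
X = rng.standard_normal((n, d))
W1, W2 = rng.standard_normal((d, k)), rng.standard_normal((d, k))
W3 = rng.standard_normal((d, d))
out = low_rank_global_attention(X, W1, W2, W3)

# Dense reference for checking: row-normalized A = kappa(XW1) kappa(XW2).T
# applied to XW3 — same result, but with O(n^2) memory.
A = np.exp(X @ W1) @ np.exp(X @ W2).T
dense = (A / A.sum(axis=1, keepdims=True)) @ (X @ W3)
```

The two computations agree; the factored form just changes the order of matrix multiplications, which is what makes the module cheap on large graphs.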
2 code implementations • NeurIPS 2019 • Haggai Maron, Heli Ben-Hamu, Hadar Serviansky, Yaron Lipman
It was shown that popular message-passing GNNs cannot distinguish between graphs that are indistinguishable by the 1-Weisfeiler-Lehman (1-WL) test (Morris et al., 2018; Xu et al., 2019).
Ranked #6 on Graph Classification on COLLAB
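The 1-WL test itself is easy to sketch as color refinement; the implementation below is illustrative.

```python
def wl_colors(adj, rounds=3):
    """1-WL color refinement: repeatedly relabel each node by its current
    color together with the multiset of its neighbors' colors."""
    colors = {v: 0 for v in adj}
    for _ in range(rounds):
        sig = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
               for v in adj}
        # Compress signatures to small integers for the next round.
        palette = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        colors = {v: palette[sig[v]] for v in adj}
    return colors

def wl_indistinguishable(adj1, adj2, rounds=3):
    """Identical color histograms => 1-WL indistinguishable (and a
    message-passing GNN cannot tell the graphs apart either)."""
    return (sorted(wl_colors(adj1, rounds).values())
            == sorted(wl_colors(adj2, rounds).values()))

# Classic example: a 6-cycle and two disjoint triangles are non-isomorphic
# yet 1-WL indistinguishable (every node always sees two equal colors).
cycle6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
two_triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1],
                 3: [4, 5], 4: [3, 5], 5: [3, 4]}
```

The cycle/triangles pair is exactly the kind of failure case that motivates provably more powerful architectures.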
1 code implementation • ICCV 2019 • Niv Haim, Nimrod Segol, Heli Ben-Hamu, Haggai Maron, Yaron Lipman
Specifically, for the use case of learning spherical signals, our representation provides a low distortion alternative to several popular spherical parameterizations used in deep learning.
no code implementations • ICLR 2019 • Haggai Maron, Heli Ben-Hamu, Nadav Shamir, Yaron Lipman
In this paper we provide a characterization of all permutation invariant and equivariant linear layers for (hyper-)graph data, and show that their dimension, in the case of edge-value graph data, is 2 and 15, respectively.
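For edge-value data, one concrete reading of the dimension-2 invariant space is that its basis can be taken as the sum of diagonal entries and the sum of off-diagonal entries of the n x n input. The sketch below (function names illustrative) checks that these features are unchanged under a simultaneous relabeling of rows and columns.

```python
import numpy as np

def invariant_features(A):
    """Two basis invariants for edge-value (n x n) data: the sum of the
    diagonal entries and the sum of the off-diagonal entries.  Any
    permutation invariant linear layer R^{n x n} -> R is a linear
    combination of these two."""
    diag = np.trace(A)
    off = A.sum() - diag
    return np.array([diag, off])

rng = np.random.default_rng(2)
n = 5
A = rng.standard_normal((n, n))
P = np.eye(n)[rng.permutation(n)]   # random permutation matrix
A_perm = P @ A @ P.T                # relabel rows and columns together
```

Since P A P^T permutes diagonal entries among themselves and off-diagonal entries among themselves, both sums are preserved.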
1 code implementation • 6 Jun 2018 • Heli Ben-Hamu, Haggai Maron, Itay Kezurer, Gal Avineri, Yaron Lipman
The new tensor data representation is used as input to Generative Adversarial Networks for the task of 3D shape generation.