no code implementations • 20 Apr 2024 • Moshe Eliasof, Beatrice Bevilacqua, Carola-Bibiane Schönlieb, Haggai Maron

In recent years, significant efforts have been made to refine the design of Graph Neural Network (GNN) layers, aiming to overcome diverse challenges, such as limited expressive power and oversmoothing.

no code implementations • 10 Apr 2024 • Yoni Kasten, Wuyue Lu, Haggai Maron

We tackle the long-standing challenge of reconstructing 3D structures and camera positions from videos.

no code implementations • 13 Feb 2024 • Guy Bar-Shalom, Beatrice Bevilacqua, Haggai Maron

In the realm of Graph Neural Networks (GNNs), two exciting research directions have recently emerged: Subgraph GNNs and Graph Transformers.

no code implementations • 6 Feb 2024 • Aviv Shamsian, Aviv Navon, David W. Zhang, Yan Zhang, Ethan Fetaya, Gal Chechik, Haggai Maron

Learning in deep weight spaces (DWS), where neural networks process the weights of other neural networks, is an emerging research direction, with applications to 2D and 3D neural fields (INRs, NeRFs), as well as making inferences about other types of neural networks.

no code implementations • 3 Feb 2024 • Christopher Morris, Nadav Dym, Haggai Maron, İsmail İlkan Ceylan, Fabrizio Frasca, Ron Levie, Derek Lim, Michael Bronstein, Martin Grohe, Stefanie Jegelka

Machine learning on graphs, especially using graph neural networks (GNNs), has seen a surge in interest due to the wide availability of graph data across a broad spectrum of disciplines, from the life sciences to the social and engineering sciences.

no code implementations • 7 Dec 2023 • Derek Lim, Haggai Maron, Marc T. Law, Jonathan Lorraine, James Lucas

However, those works developed architectures tailored to specific networks such as MLPs and CNNs without normalization layers, and generalizing such architectures to other types of networks can be challenging.

1 code implementation • NeurIPS 2023 • Derek Lim, Joshua Robinson, Stefanie Jegelka, Haggai Maron

In this work, we demonstrate the benefits of sign equivariance for these tasks.

no code implementations • 15 Nov 2023 • Aviv Shamsian, David W. Zhang, Aviv Navon, Yan Zhang, Miltiadis Kofinas, Idan Achituve, Riccardo Valperga, Gertjan J. Burghouts, Efstratios Gavves, Cees G. M. Snoek, Ethan Fetaya, Gal Chechik, Haggai Maron

Learning in weight spaces, where neural networks process the weights of other deep neural networks, has emerged as a promising research direction with applications in various fields, from analyzing and editing neural fields and implicit neural representations, to network pruning and quantization.

1 code implementation • 30 Oct 2023 • Beatrice Bevilacqua, Moshe Eliasof, Eli Meirom, Bruno Ribeiro, Haggai Maron

Subgraph GNNs are provably expressive neural architectures that learn graph representations from sets of subgraphs.

1 code implementation • 20 Oct 2023 • Aviv Navon, Aviv Shamsian, Ethan Fetaya, Gal Chechik, Nadav Dym, Haggai Maron

To accelerate the alignment process and improve its quality, we propose a novel framework aimed at learning to solve the weight alignment problem, which we name Deep-Align.

1 code implementation • NeurIPS 2023 • Dvir Samuel, Rami Ben-Ari, Nir Darshan, Haggai Maron, Gal Chechik

Text-to-image diffusion models show great potential in synthesizing a large variety of concepts in new compositions and scenarios.

no code implementations • 6 Apr 2023 • Ali Taghibakhshi, Mingyuan Ma, Ashwath Aithal, Onur Yilmaz, Haggai Maron, Matthew West

Cross-device user matching is a critical problem in numerous domains, including advertising, recommender systems, and cybersecurity.

no code implementations • 6 Mar 2023 • Moshe Eliasof, Fabrizio Frasca, Beatrice Bevilacqua, Eran Treister, Gal Chechik, Haggai Maron

Two main families of node feature augmentation schemes have been explored for enhancing GNNs: random features and spectral positional encoding.

no code implementations • 22 Feb 2023 • Omri Puny, Derek Lim, Bobak T. Kiani, Haggai Maron, Yaron Lipman

This paper introduces an alternative expressive power hierarchy based on the ability of GNNs to calculate equivariant polynomials of a certain degree.

1 code implementation • 30 Jan 2023 • Aviv Navon, Aviv Shamsian, Idan Achituve, Ethan Fetaya, Gal Chechik, Haggai Maron

Designing machine learning architectures for processing neural networks in their raw weight matrix form is a newly introduced research direction.

no code implementations • 28 Oct 2022 • Sohir Maskey, Ali Parviz, Maximilian Thiessen, Hannes Stärk, Ylli Sadikaj, Haggai Maron

Graph neural networks (GNNs) are the primary tool for processing graph-structured data.

2 code implementations • 22 Jun 2022 • Fabrizio Frasca, Beatrice Bevilacqua, Michael M. Bronstein, Haggai Maron

Subgraph GNNs are a recent class of expressive Graph Neural Networks (GNNs) which model graphs as collections of subgraphs.

no code implementations • 18 Apr 2022 • Eli A. Meirom, Haggai Maron, Shie Mannor, Gal Chechik

Quantum Computing (QC) stands to revolutionize computing, but is currently still limited.

1 code implementation • 2 Mar 2022 • Ben Finkelshtein, Chaim Baskin, Haggai Maron, Nadav Dym

Equivariance to permutations and rigid motions is an important inductive bias for various 3D learning problems.

2 code implementations • 25 Feb 2022 • Derek Lim, Joshua Robinson, Lingxiao Zhao, Tess Smidt, Suvrit Sra, Haggai Maron, Stefanie Jegelka

We introduce SignNet and BasisNet -- new neural architectures that are invariant to two key symmetries displayed by eigenvectors: (i) sign flips, since if $v$ is an eigenvector then so is $-v$; and (ii) more general basis symmetries, which occur in higher dimensional eigenspaces with infinitely many choices of basis eigenvectors.
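The sign-flip symmetry in (i) can be handled with a simple construction: apply a shared network to both $v$ and $-v$ and sum the results. A minimal NumPy sketch of this idea follows; the one-layer ReLU map `phi` and the helper name `sign_invariant` are illustrative stand-ins, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))  # weights of a toy one-layer "network" phi

def phi(v):
    # shared feature map applied to an eigenvector
    return np.maximum(v @ W, 0.0)

def sign_invariant(v):
    # summing phi(v) and phi(-v) makes the output invariant
    # to the sign ambiguity of eigenvectors
    return phi(v) + phi(-v)

v = rng.normal(size=4)
assert np.allclose(sign_invariant(v), sign_invariant(-v))
```

Because `phi(v) + phi(-v)` is symmetric in `v` and `-v` by construction, the invariance holds for any choice of `phi`.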

Ranked #11 on Graph Regression on ZINC-500k

2 code implementations • 2 Feb 2022 • Aviv Navon, Aviv Shamsian, Idan Achituve, Haggai Maron, Kenji Kawaguchi, Gal Chechik, Ethan Fetaya

In this paper, we propose viewing the gradients combination step as a bargaining game, where tasks negotiate to reach an agreement on a joint direction of parameter update.

Ranked #1 on Multi-Task Learning on Cityscapes test

no code implementations • 20 Jan 2022 • Or Litany, Haggai Maron, David Acuna, Jan Kautz, Gal Chechik, Sanja Fidler

Standard Federated Learning (FL) techniques are limited to clients with identical network architectures.

no code implementations • 18 Dec 2021 • Christopher Morris, Yaron Lipman, Haggai Maron, Bastian Rieck, Nils M. Kriege, Martin Grohe, Matthias Fey, Karsten Borgwardt

In recent years, algorithms and neural architectures based on the Weisfeiler--Leman algorithm, a well-known heuristic for the graph isomorphism problem, have emerged as a powerful tool for machine learning with graphs and relational data.

1 code implementation • ICLR 2022 • Beatrice Bevilacqua, Fabrizio Frasca, Derek Lim, Balasubramaniam Srinivasan, Chen Cai, Gopinath Balamurugan, Michael M. Bronstein, Haggai Maron

Thus, we propose to represent each graph as a set of subgraphs derived by some predefined policy, and to process it using a suitable equivariant architecture.
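One such predefined policy is node deletion: generate one subgraph per node by removing that node and its incident edges. A minimal sketch on adjacency-list graphs, assuming the hypothetical helper name `node_deletion_bag`:

```python
def node_deletion_bag(adj):
    """One common predefined policy: one subgraph per node, obtained by
    deleting that node (and its incident edges) from the graph."""
    return [{u: [w for w in nbrs if w != v]
             for u, nbrs in adj.items() if u != v}
            for v in adj]

triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
bag = node_deletion_bag(triangle)
# deleting node 0 leaves only the single edge between nodes 1 and 2
assert bag[0] == {1: [2], 2: [1]}
```

The resulting bag of subgraphs is then processed by an architecture equivariant to both the node ordering and the ordering of subgraphs within the bag.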

3 code implementations • 2 Aug 2021 • Rinon Gal, Or Patashnik, Haggai Maron, Gal Chechik, Daniel Cohen-Or

Can a generative model be trained to produce images from a specific domain, guided by a text prompt only, without seeing any image?

1 code implementation • ICCV 2021 • Dror Moran, Hodaya Koslowsky, Yoni Kasten, Haggai Maron, Meirav Galun, Ronen Basri

Existing deep methods produce highly accurate 3D reconstructions in stereo and multiview stereo settings, i.e., when cameras are both internally and externally calibrated.

no code implementations • 22 Oct 2020 • Yochai Yemini, Ethan Fetaya, Haggai Maron, Sharon Gannot

We use noisy and noiseless versions of a simulated reverberant dataset to test the proposed architecture.

no code implementations • 17 Oct 2020 • Gilad Yehudai, Ethan Fetaya, Eli Meirom, Gal Chechik, Haggai Maron

In this paper, we identify an important type of data where generalization from small to large graphs is challenging: graph distributions for which the local structure depends on the graph size.

no code implementations • 11 Oct 2020 • Eli A. Meirom, Haggai Maron, Shie Mannor, Gal Chechik

We consider the problem of controlling a partially-observed dynamic process on a graph by a limited number of interventions.

no code implementations • ICLR 2021 • Nadav Dym, Haggai Maron

We first derive two sufficient conditions for an equivariant architecture to have the universal approximation property, based on a novel characterization of the space of equivariant polynomials.

no code implementations • 28 Sep 2020 • Gilad Yehudai, Ethan Fetaya, Eli Meirom, Gal Chechik, Haggai Maron

We further demonstrate on several tasks, that training GNNs on small graphs results in solutions which do not generalize to larger graphs.

1 code implementation • 6 Aug 2020 • Jonathan Shlomi, Sanmay Ganguly, Eilam Gross, Kyle Cranmer, Yaron Lipman, Hadar Serviansky, Haggai Maron, Nimrod Segol

Jet classification is an important ingredient in measurements and searches for new physics at particle colliders, and secondary vertex reconstruction is a key intermediate step in building powerful jet classifiers.

High Energy Physics - Experiment • High Energy Physics - Phenomenology

1 code implementation • ICLR 2021 • Aviv Navon, Idan Achituve, Haggai Maron, Gal Chechik, Ethan Fetaya

Two main challenges arise in this multi-task learning setting: (i) designing useful auxiliary tasks; and (ii) combining auxiliary tasks into a single coherent loss.

3 code implementations • 29 Mar 2020 • Idan Achituve, Haggai Maron, Gal Chechik

Self-supervised learning (SSL) is a technique for learning useful representations from unlabeled data.

1 code implementation • ICML 2020 • Ilay Luz, Meirav Galun, Haggai Maron, Ronen Basri, Irad Yavneh

Efficient numerical solvers for sparse linear systems are crucial in science and engineering.

2 code implementations • ICML 2020 • Haggai Maron, Or Litany, Gal Chechik, Ethan Fetaya

We first characterize the space of linear layers that are equivariant both to element reordering and to the inherent symmetries of elements, like translation in the case of images.
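The resulting layer structure can be sketched as follows: a shared symmetry-preserving operation applied to each element, plus one applied to the set-sum (which is invariant to reordering). The sketch below uses 1-D circular convolution as the per-element translation-equivariant operation; the names `circ_conv` and `dss_linear` are illustrative, not the paper's API.

```python
import numpy as np

def circ_conv(x, k):
    # translation-equivariant operation on one element:
    # circular convolution of signal x with kernel k
    n = len(x)
    return np.array([sum(k[j] * x[(i + j) % n] for j in range(len(k)))
                     for i in range(n)])

def dss_linear(X, k1, k2):
    """Layer for a set of 1-D signals X (set_size x n): a shared
    element-wise equivariant op plus one applied to the set-sum."""
    s = X.sum(axis=0)
    return np.stack([circ_conv(x, k1) + circ_conv(s, k2) for x in X])

rng = np.random.default_rng(0)
X, k1, k2 = rng.normal(size=(3, 5)), rng.normal(size=2), rng.normal(size=2)
Y = dss_linear(X, k1, k2)
# equivariant to reordering the set ...
perm = [2, 0, 1]
assert np.allclose(dss_linear(X[perm], k1, k2), Y[perm])
# ... and to translating every element simultaneously
assert np.allclose(dss_linear(np.roll(X, 1, axis=1), k1, k2),
                   np.roll(Y, 1, axis=1))
```

The set-sum term lets elements exchange information while preserving both symmetries at once.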

1 code implementation • NeurIPS 2020 • Hadar Serviansky, Nimrod Segol, Jonathan Shlomi, Kyle Cranmer, Eilam Gross, Haggai Maron, Yaron Lipman

Many problems in machine learning can be cast as learning functions from sets to graphs, or more generally to hypergraphs; in short, Set2Graph functions.

2 code implementations • NeurIPS 2019 • Matan Atzmon, Niv Haim, Lior Yariv, Ofer Israelov, Haggai Maron, Yaron Lipman

In turn, the sample network can be used to incorporate the level set samples into a loss function of interest.

2 code implementations • NeurIPS 2019 • Haggai Maron, Heli Ben-Hamu, Hadar Serviansky, Yaron Lipman

It was shown that the popular message passing GNN cannot distinguish between graphs that are indistinguishable by the 1-WL test (Morris et al. 2018; Xu et al. 2019).
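The 1-WL test referenced above is classical color refinement: repeatedly recolor each node by hashing its own color together with the multiset of its neighbors' colors. A short self-contained sketch (the helper name `wl_refine` is illustrative):

```python
def wl_refine(adj, rounds=3):
    """1-dimensional Weisfeiler-Leman color refinement on an
    adjacency-list graph {node: [neighbors]}."""
    colors = {v: 0 for v in adj}  # start from a uniform coloring
    for _ in range(rounds):
        # new color = (own color, sorted multiset of neighbor colors)
        sigs = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
                for v in adj}
        palette = {s: i for i, s in enumerate(sorted(set(sigs.values())))}
        colors = {v: palette[sigs[v]] for v in adj}
    return colors

# a 6-cycle and two disjoint triangles are different graphs,
# yet 1-WL gives them identical color histograms
cycle6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1],
             3: [4, 5], 4: [3, 5], 5: [3, 4]}
assert (sorted(wl_refine(cycle6).values())
        == sorted(wl_refine(triangles).values()))
```

Both graphs are 2-regular, so every refinement round produces the same color everywhere; this is exactly the kind of pair message-passing GNNs cannot distinguish.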

Ranked #6 on Graph Classification on COLLAB

no code implementations • 27 Jan 2019 • Haggai Maron, Ethan Fetaya, Nimrod Segol, Yaron Lipman

We conclude the paper by proving a necessary condition for the universality of $G$-invariant networks that incorporate only first-order tensors.

1 code implementation • ICCV 2019 • Niv Haim, Nimrod Segol, Heli Ben-Hamu, Haggai Maron, Yaron Lipman

Specifically, for the use case of learning spherical signals, our representation provides a low distortion alternative to several popular spherical parameterizations used in deep learning.

no code implementations • ICLR 2019 • Haggai Maron, Heli Ben-Hamu, Nadav Shamir, Yaron Lipman

In this paper we provide a characterization of all permutation invariant and equivariant linear layers for (hyper-)graph data, and show that their dimension, in case of edge-value graph data, is 2 and 15, respectively.
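The 2-dimensional invariant case is easy to state concretely: every permutation-invariant linear functional of an edge-value matrix is a combination of the diagonal sum and the off-diagonal sum. A minimal numerical check, with `invariant_map` as an illustrative name:

```python
import numpy as np

def invariant_map(A, a, b):
    """A basis for the 2-dimensional space of permutation-invariant
    linear functionals on edge-value (n x n) graph data: diagonal sum
    and off-diagonal sum, with learnable coefficients a and b."""
    diag = np.trace(A)
    return a * diag + b * (A.sum() - diag)

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5))
P = np.eye(5)[rng.permutation(5)]
# unchanged under any simultaneous row/column permutation P A P^T
assert np.isclose(invariant_map(A, 2.0, -1.0),
                  invariant_map(P @ A @ P.T, 2.0, -1.0))
```

The 15-dimensional equivariant case enumerates all ways of mapping index pairs to index pairs that commute with permutations; the invariant case above is its 2-dimensional analogue.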

1 code implementation • 6 Jun 2018 • Heli Ben-Hamu, Haggai Maron, Itay Kezurer, Gal Avineri, Yaron Lipman

The new tensor data representation is used as input to Generative Adversarial Networks for the task of 3D shape generation.

1 code implementation • 27 Mar 2018 • Matan Atzmon, Haggai Maron, Yaron Lipman

This paper presents Point Convolutional Neural Networks (PCNN): a novel framework for applying convolutional neural networks to point clouds.

Ranked #80 on 3D Point Cloud Classification on ModelNet40

Papers With Code is a free resource with all data licensed under CC-BY-SA.