Search Results for author: Eran Treister

Found 35 papers, 10 papers with code

An Over Complete Deep Learning Method for Inverse Problems

no code implementations • 7 Feb 2024 • Moshe Eliasof, Eldad Haber, Eran Treister

The novelty of the work proposed is that we jointly design and learn the embedding and the regularizer for the embedding vector.

On The Temporal Domain of Differential Equation Inspired Graph Neural Networks

no code implementations • 20 Jan 2024 • Moshe Eliasof, Eldad Haber, Eran Treister, Carola-Bibiane Schönlieb

Graph Neural Networks (GNNs) have demonstrated remarkable success in modeling complex relationships in graph-structured data.

Second-order methods

Feature Transportation Improves Graph Neural Networks

no code implementations • 29 Jul 2023 • Moshe Eliasof, Eldad Haber, Eran Treister

Graph neural networks (GNNs) have shown remarkable success in learning representations for graph-structured data.

Node Classification

Multigrid-Augmented Deep Learning Preconditioners for the Helmholtz Equation using Compact Implicit Layers

1 code implementation • 30 Jun 2023 • Bar Lerer, Ido Ben-Yair, Eran Treister

We present a deep learning-based iterative approach to solve the discrete heterogeneous Helmholtz equation for high wavenumbers.
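To picture the kind of linear system such iterative solvers target, here is a one-dimensional finite-difference discretization of the Helmholtz operator in numpy. This is my own toy illustration of the problem setting (direct solve on a tiny grid), not the paper's multigrid-augmented preconditioner:

```python
import numpy as np

# 1D Helmholtz operator -u'' - k^2 u on a uniform grid with Dirichlet ends.
n, h, k = 64, 1.0 / 64, 10.0
main = 2.0 / h**2 - k**2           # diagonal of the three-point stencil
off = -1.0 / h**2                  # off-diagonal entries
A = (np.diag(np.full(n, main))
     + np.diag(np.full(n - 1, off), 1)
     + np.diag(np.full(n - 1, off), -1))

b = np.zeros(n)
b[n // 2] = 1.0                    # point source in the middle of the domain
u = np.linalg.solve(A, b)          # direct solve; at scale, iterative methods
residual = np.linalg.norm(A @ u - b)
```

At high wavenumbers the matrix becomes indefinite and much larger, which is why direct solves give way to the preconditioned iterative methods the paper studies.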

DRIP: Deep Regularizers for Inverse Problems

no code implementations • 30 Mar 2023 • Moshe Eliasof, Eldad Haber, Eran Treister

First, most techniques cannot guarantee that the solution fits the data at inference.

Deblurring Image Deblurring +1

Graph Positional Encoding via Random Feature Propagation

no code implementations • 6 Mar 2023 • Moshe Eliasof, Fabrizio Frasca, Beatrice Bevilacqua, Eran Treister, Gal Chechik, Haggai Maron

Two main families of node feature augmentation schemes have been explored for enhancing GNNs: random features and spectral positional encoding.

Graph Classification Node Classification
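One way to see how the two families can be combined is to propagate random node features with a graph operator and stack the propagation steps. The toy sketch below is my own illustration of that general idea, not the paper's RFP scheme:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy 4-node path graph adjacency matrix.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
P = A / A.sum(1, keepdims=True)    # row-normalized propagation operator

R = rng.standard_normal((4, 3))    # random initial node features
# Concatenate several propagation steps into a positional encoding.
pe = np.concatenate([R, P @ R, P @ (P @ R)], axis=1)
```

Each node's encoding then reflects both its random identity and how that identity diffuses through its neighborhood.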

Efficient Graph Laplacian Estimation by Proximal Newton

no code implementations • 13 Feb 2023 • Yakov Medvedovsky, Eran Treister, Tirza Routtenberg

The Laplacian-constrained Gaussian Markov Random Field (LGMRF) is a common multivariate statistical model for learning a weighted sparse dependency graph from given data.

Graph Learning

NeRN -- Learning Neural Representations for Neural Networks

1 code implementation • 27 Dec 2022 • Maor Ashkenazi, Zohar Rimon, Ron Vainshtein, Shir Levi, Elad Richardson, Pinchas Mintz, Eran Treister

Neural Representations have recently been shown to effectively reconstruct a wide range of signals from 3D meshes and shapes to images and videos.

Knowledge Distillation

Every Node Counts: Improving the Training of Graph Neural Networks on Node Classification

no code implementations • 29 Nov 2022 • Moshe Eliasof, Eldad Haber, Eran Treister

In this paper, we propose novel objective terms for the training of GNNs for node classification, aiming to exploit all the available data and improve accuracy.

Classification Node Classification

pathGCN: Learning General Graph Spatial Operators from Paths

no code implementations • 15 Jul 2022 • Moshe Eliasof, Eldad Haber, Eran Treister

In the context of GCNs, differently from CNNs, a pre-determined spatial operator based on the graph Laplacian is often chosen, allowing only the point-wise operations to be learnt.
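The "pre-determined spatial operator based on the graph Laplacian" can be made concrete with a small numpy example: one diffusion step applied to node features. This is a generic GCN-style smoothing sketch of my own, not pathGCN's learned path operator:

```python
import numpy as np

# Toy 3-node star graph: node 0 connected to nodes 1 and 2.
A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)
D = np.diag(A.sum(1))
L = D - A                              # combinatorial graph Laplacian

X = np.array([[1.0], [0.0], [0.0]])    # one scalar feature per node
smoothed = X - 0.5 * (L @ X)           # one fixed diffusion step
```

In a standard GCN only the point-wise channel mixing around such a fixed operator is learned; pathGCN instead learns the spatial operator itself from paths.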

Rethinking Unsupervised Neural Superpixel Segmentation

no code implementations • 21 Jun 2022 • Moshe Eliasof, Nir Ben Zikri, Eran Treister

Recently, the concept of unsupervised learning for superpixel segmentation via CNNs has been studied.

Segmentation Superpixels

Inferring Unfairness and Error from Population Statistics in Binary and Multiclass Classification

no code implementations • 7 Jun 2022 • Sivan Sabato, Eran Treister, Elad Yom-Tov

We propose a measure of unfairness with respect to this criterion, which quantifies the fraction of the population that is treated unfairly.

Fairness

Wavelet Feature Maps Compression for Image-to-Image CNNs

1 code implementation • 24 May 2022 • Shahaf E. Finder, Yair Zohav, Maor Ashkenazi, Eran Treister

Convolutional Neural Networks (CNNs) are known for requiring extensive computational resources, and quantization is among the best and most common methods for compressing them.

Depth Estimation Neural Network Compression +2
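The wavelet-compression idea can be pictured with a one-level Haar transform of a toy activation map, after which small-magnitude coefficients are dropped. This is a numpy sketch of the general principle only, not the paper's WCC method:

```python
import numpy as np

def haar2d(x):
    """One level of a 2D Haar transform on an even-sized array."""
    a = (x[0::2] + x[1::2]) / 2.0          # row averages
    d = (x[0::2] - x[1::2]) / 2.0          # row details
    rows = np.vstack([a, d])
    a2 = (rows[:, 0::2] + rows[:, 1::2]) / 2.0
    d2 = (rows[:, 0::2] - rows[:, 1::2]) / 2.0
    return np.hstack([a2, d2])

rng = np.random.default_rng(0)
fmap = rng.standard_normal((8, 8))         # stand-in for a CNN activation map
coeffs = haar2d(fmap)
# Compression: keep only large-magnitude coefficients.
mask = np.abs(coeffs) > 0.5
kept = mask.mean()                         # fraction of coefficients retained
```

Because natural feature maps are spatially smooth, most detail coefficients are small, so a large fraction can be discarded before quantization with little loss.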

Multigrid-augmented deep learning preconditioners for the Helmholtz equation

no code implementations • NeurIPS Workshop DLDE 2021 • Yael Azulay, Eran Treister

In this paper, we present a data-driven approach to iteratively solve the discrete heterogeneous Helmholtz equation at high wavenumbers.

Data Augmentation

Haar Wavelet Feature Compression for Quantized Graph Convolutional Networks

no code implementations • 10 Oct 2021 • Moshe Eliasof, Benjamin Bodner, Eran Treister

Graph Convolutional Networks (GCNs) are widely used in a variety of applications, and can be seen as an unstructured version of standard Convolutional Neural Networks (CNNs).

Feature Compression Node Classification +3

Wavelet Feature Maps Compression for Low Bandwidth Convolutional Neural Networks

no code implementations • 29 Sep 2021 • Yair Zohav, Shahaf E Finder, Maor Ashkenazi, Eran Treister

In this paper, we propose Wavelet Compressed Convolution (WCC), a novel approach for activation maps compression for $1\times1$ convolutions (the workhorse of modern CNNs).

Depth Estimation Depth Prediction +3

Quantized Convolutional Neural Networks Through the Lens of Partial Differential Equations

no code implementations • NeurIPS Workshop DLDE 2021 • Ido Ben-Yair, Gil Ben Shalom, Moshe Eliasof, Eran Treister

Quantization of Convolutional Neural Networks (CNNs) is a common approach to ease the computational burden involved in the deployment of CNNs, especially on low-resource edge devices.

Autonomous Driving Image Classification +2

PDE-GCN: Novel Architectures for Graph Neural Networks Motivated by Partial Differential Equations

1 code implementation • NeurIPS 2021 • Moshe Eliasof, Eldad Haber, Eran Treister

Moreover, as we demonstrate using an extensive set of experiments, our PDE-motivated networks can generalize and be effective for various types of problems from different fields.

Diffraction Tomography with Helmholtz Equation: Efficient and Robust Multigrid-Based Solver

no code implementations • 8 Jul 2021 • Tao Hong, Thanh-an Pham, Eran Treister, Michael Unser

In this work, we introduce instead a Helmholtz-based nonlinear model for inverse scattering.

GradFreeBits: Gradient Free Bit Allocation for Dynamic Low Precision Neural Networks

no code implementations • 18 Feb 2021 • Benjamin J. Bodner, Gil Ben Shalom, Eran Treister

Quantized neural networks (QNNs) are among the main approaches for deploying deep neural networks on low resource edge devices.

Quantization
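The baseline that low-precision networks build on is uniform weight quantization, sketched below in numpy. This is a generic illustration under my own assumptions (symmetric, per-tensor scaling), not GradFreeBits' dynamic bit-allocation scheme:

```python
import numpy as np

def quantize(w, bits):
    """Uniform symmetric quantization of a weight tensor to `bits` bits."""
    levels = 2 ** (bits - 1) - 1           # e.g. 7 levels per side at 4 bits
    scale = np.abs(w).max() / levels
    q = np.clip(np.round(w / scale), -levels, levels)
    return q * scale, scale

rng = np.random.default_rng(0)
w = rng.standard_normal(1000)
w4, _ = quantize(w, 4)
w8, _ = quantize(w, 8)
err4 = np.abs(w - w4).mean()
err8 = np.abs(w - w8).mean()               # more bits, smaller error
```

The accuracy/size trade-off visible here is what motivates allocating different bit widths to different layers, as the paper does without gradients.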

Full waveform inversion using extended and simultaneous sources

no code implementations • 11 Feb 2021 • Sagi Buchatsky, Eran Treister

This way, we have a large-but-manageable additional parameter space, which has a rather low memory footprint, and is much more suitable for solving large scale instances of the problem than the full rank additional space.

Stochastic Optimization Computational Engineering, Finance, and Science Numerical Analysis (MSC: 86A22, 86A15, 65M32, 65N22, 35Q86, 35R30)

Mimetic Neural Networks: A unified framework for Protein Design and Folding

no code implementations • 7 Feb 2021 • Moshe Eliasof, Tue Boesen, Eldad Haber, Chen Keasar, Eran Treister

Recent advancements in machine learning techniques for protein folding motivate better results in its inverse problem -- protein design.

BIG-bench Machine Learning Protein Design +1

MGIC: Multigrid-in-Channels Neural Network Architectures

1 code implementation • NeurIPS Workshop DLDE 2021 • Moshe Eliasof, Jonathan Ephrath, Lars Ruthotto, Eran Treister

We present a multigrid-in-channels (MGIC) approach that tackles the quadratic growth of the number of parameters with respect to the number of channels in standard convolutional neural networks (CNNs).

Image Classification Point Cloud Classification

Multigrid-in-Channels Architectures for Wide Convolutional Neural Networks

no code implementations • 11 Jun 2020 • Jonathan Ephrath, Lars Ruthotto, Eran Treister

We present a multigrid approach that combats the quadratic growth of the number of parameters with respect to the number of channels in standard convolutional neural networks (CNNs).

Image Classification

DiffGCN: Graph Convolutional Networks via Differential Operators and Algebraic Multigrid Pooling

1 code implementation • NeurIPS 2020 • Moshe Eliasof, Eran Treister

Graph Convolutional Networks (GCNs) have shown to be effective in handling unordered data like point clouds and meshes.

Effective Learning of a GMRF Mixture Model

1 code implementation • 18 May 2020 • Shahaf E. Finder, Eran Treister, Oren Freifeld

However, we show that even for a single Gaussian, when GLASSO is tuned to successfully estimate the sparsity pattern, it does so at the price of a substantial bias of the values of the nonzero entries of the matrix, and we show that this problem only worsens in a mixture setting.
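Why sparsity-aware estimation matters here can be seen in a tiny numpy experiment: the true precision matrix of a GMRF is sparse, but the inverse of the sample covariance is not. This is my own illustration of the problem setting, not the paper's estimator:

```python
import numpy as np

rng = np.random.default_rng(0)
# A sparse ground-truth precision (inverse covariance): tridiagonal GMRF.
d = 5
Theta = (np.eye(d) * 2.0
         + np.diag(np.full(d - 1, -0.8), 1)
         + np.diag(np.full(d - 1, -0.8), -1))
Sigma = np.linalg.inv(Theta)

# Draw samples and invert the sample covariance: the zeros are lost.
X = rng.multivariate_normal(np.zeros(d), Sigma, size=200)
S = np.cov(X, rowvar=False)
Theta_hat = np.linalg.inv(S)
corner = abs(Theta_hat[0, -1])     # the true entry Theta[0, -1] is exactly 0
```

Penalized estimators such as GLASSO recover the zero pattern, but, as the abstract notes, at the cost of biasing the nonzero entries; that bias is what the paper addresses in the mixture setting.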

LeanConvNets: Low-cost Yet Effective Convolutional Neural Networks

no code implementations • 29 Oct 2019 • Jonathan Ephrath, Moshe Eliasof, Lars Ruthotto, Eldad Haber, Eran Treister

In practice, the input data and the hidden features consist of a large number of channels, which in most CNNs are fully coupled by the convolution operators.

Image Classification Semantic Segmentation +2

Multi-modal 3D Shape Reconstruction Under Calibration Uncertainty using Parametric Level Set Methods

2 code implementations • 23 Apr 2019 • Moshe Eliasof, Andrei Sharf, Eran Treister

This method not only allows us to analytically and compactly represent the object, it also confers on us the ability to overcome calibration related noise that originates from inaccurate acquisition parameters.

3D Shape Reconstruction

LeanResNet: A Low-cost Yet Effective Convolutional Residual Networks

no code implementations • 15 Apr 2019 • Jonathan Ephrath, Lars Ruthotto, Eldad Haber, Eran Treister

Convolutional Neural Networks (CNNs) filter the input data using spatial convolution operators with compact stencils.

General Classification Image Classification

IMEXnet: A Forward Stable Deep Neural Network

1 code implementation • 6 Mar 2019 • Eldad Haber, Keegan Lensink, Eran Treister, Lars Ruthotto

Deep convolutional neural networks have revolutionized many machine learning and computer vision tasks, however, some remaining key challenges limit their wider use.

Semantic Segmentation

jInv -- a flexible Julia package for PDE parameter estimation

3 code implementations • 23 Jun 2016 • Lars Ruthotto, Eran Treister, Eldad Haber

Estimating parameters of Partial Differential Equations (PDEs) from noisy and indirect measurements often requires solving ill-posed inverse problems.

Mathematical Software
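The ill-posedness mentioned in the abstract is usually handled by regularization. The numpy sketch below shows Tikhonov regularization on a toy ill-conditioned forward operator; it is a generic illustration under my own assumptions, not jInv's Julia API:

```python
import numpy as np

rng = np.random.default_rng(0)
# Ill-conditioned forward operator (Vandermonde) and noisy indirect data.
n = 8
A = np.vander(np.linspace(0, 1, n), n, increasing=True)
x_true = np.sin(np.linspace(0, 3, n))
b = A @ x_true + 1e-6 * rng.standard_normal(n)

# Tikhonov regularization: minimize ||Ax - b||^2 + alpha * ||x||^2.
alpha = 1e-8
x_reg = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)
```

The regularization term stabilizes the normal equations; packages like jInv wrap this pattern around PDE forward simulations at much larger scale.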

A Block-Coordinate Descent Approach for Large-scale Sparse Inverse Covariance Estimation

no code implementations • NeurIPS 2014 • Eran Treister, Javier S. Turek

Numerical experiments on both synthetic and real gene expression data demonstrate that our approach outperforms the existing state of the art methods, especially for large-scale problems.
