no code implementations • 17 Feb 2024 • Neta Glazer, Aviv Navon, Aviv Shamsian, Ethan Fetaya

One of the challenges in applying reinforcement learning in a complex real-world environment lies in providing the agent with a sufficiently detailed reward function.

1 code implementation • 6 Feb 2024 • Idan Achituve, Idit Diamant, Arnon Netzer, Gal Chechik, Ethan Fetaya

Running a dedicated model for each task is computationally expensive and therefore there is a great interest in multi-task learning (MTL).

Ranked #1 on Multi-Task Learning on ChestX-ray14

no code implementations • 6 Feb 2024 • Aviv Shamsian, Aviv Navon, David W. Zhang, Yan Zhang, Ethan Fetaya, Gal Chechik, Haggai Maron

Learning in deep weight spaces (DWS), where neural networks process the weights of other neural networks, is an emerging research direction, with applications to 2D and 3D neural fields (INRs, NeRFs), as well as making inferences about other types of neural networks.

no code implementations • 15 Nov 2023 • Aviv Shamsian, David W. Zhang, Aviv Navon, Yan Zhang, Miltiadis Kofinas, Idan Achituve, Riccardo Valperga, Gertjan J. Burghouts, Efstratios Gavves, Cees G. M. Snoek, Ethan Fetaya, Gal Chechik, Haggai Maron

Learning in weight spaces, where neural networks process the weights of other deep neural networks, has emerged as a promising research direction with applications in various fields, from analyzing and editing neural fields and implicit neural representations, to network pruning and quantization.

1 code implementation • 20 Oct 2023 • Aviv Navon, Aviv Shamsian, Ethan Fetaya, Gal Chechik, Nadav Dym, Haggai Maron

To accelerate the alignment process and improve its quality, we propose a novel framework aimed at learning to solve the weight alignment problem, which we name Deep-Align.
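The weight alignment problem itself can be illustrated with a simple baseline: given two one-hidden-layer networks that differ only by a permutation of their hidden units, recover the permutation by matching rows of the weight matrices. This greedy sketch is illustrative only and is not the learned Deep-Align solver; all variable names are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=(5, 8))
w_a = w / np.linalg.norm(w, axis=1, keepdims=True)   # unit rows (hidden units)
true_perm = rng.permutation(5)
w_b = w_a[true_perm]                                 # same net, hidden units shuffled

def greedy_align(w_a, w_b):
    """Greedily match rows of w_b to rows of w_a by cosine similarity."""
    n = len(w_a)
    sim = w_a @ w_b.T                                # sim[i, j]: unit i of A vs unit j of B
    perm = np.empty(n, dtype=int)
    mask = np.zeros_like(sim, dtype=bool)
    for _ in range(n):
        # pick the best remaining (i, j) pair, then retire that row and column
        i, j = np.unravel_index(np.argmax(np.where(mask, -np.inf, sim)), sim.shape)
        perm[i] = j
        mask[i, :] = True
        mask[:, j] = True
    return perm

perm = greedy_align(w_a, w_b)                        # w_b[perm] realigns B with A
```

With exact permuted copies the greedy match recovers the permutation; on independently trained networks the problem is much harder, which is what motivates learning the alignment.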

no code implementations • 26 Sep 2023 • Hila Levi, Guy Heller, Dan Levi, Ethan Fetaya

Our approach aggregates dense embeddings extracted from CLIP into a compact representation, essentially combining the scalability of image retrieval pipelines with the object identification capabilities of dense detection methods.

no code implementations • 4 Jul 2023 • Guy Berger, Aviv Navon, Ethan Fetaya

In computer vision and machine learning, a crucial challenge is to lower the computation and memory demands for neural network inference.

1 code implementation • 19 Jun 2023 • Ariel Lapid, Idan Achituve, Lior Bracha, Ethan Fetaya

GD-VDM is based on a two-phase generation process involving generating depth videos followed by a novel diffusion Vid2Vid model that generates a coherent real-world video.

1 code implementation • 5 Jun 2023 • Yochai Yemini, Aviv Shamsian, Lior Bracha, Sharon Gannot, Ethan Fetaya

We then condition a diffusion model on the video and use the extracted text through a classifier-guidance mechanism where a pre-trained ASR serves as the classifier.

no code implementations • 30 May 2023 • Lior Bracha, Eitan Shaar, Aviv Shamsian, Ethan Fetaya, Gal Chechik

Our results highlight the potential of using pre-trained visual-semantic models for generating high-quality contextual descriptions.

1 code implementation • 19 Feb 2023 • Idan Achituve, Gal Chechik, Ethan Fetaya

Combining Gaussian processes with the expressive power of deep neural networks is commonly done through deep kernel learning (DKL).
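Structurally, DKL composes a base kernel with a learned feature extractor, k(x, x') = k_base(f(x), f(x')). The sketch below substitutes a fixed hand-written feature map for the neural network f, so it only illustrates the structure of the composition; all names and hyperparameters are illustrative.

```python
import numpy as np

def features(x):
    # stand-in for a learned neural feature extractor f(x)
    return np.stack([x, np.sin(3.0 * x)], axis=1)

def rbf(a, b, lengthscale=0.3):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior_mean(x_train, y_train, x_test, noise=1e-4):
    # "deep" kernel: base RBF kernel applied to extracted features
    f_tr, f_te = features(x_train), features(x_test)
    k_tt = rbf(f_tr, f_tr) + noise * np.eye(len(x_train))
    k_st = rbf(f_te, f_tr)
    return k_st @ np.linalg.solve(k_tt, y_train)

x = np.linspace(-2.0, 2.0, 5)
y = np.sin(3.0 * x)
mean = gp_posterior_mean(x, y, x)      # posterior mean at the training inputs
```

In actual DKL the feature extractor's parameters are trained jointly with the kernel hyperparameters by maximizing the GP marginal likelihood.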

1 code implementation • 31 Jan 2023 • Aviv Shamsian, Aviv Navon, Neta Glazer, Kenji Kawaguchi, Gal Chechik, Ethan Fetaya

Auxiliary learning is an effective method for enhancing the generalization capabilities of trained models, particularly when dealing with small datasets.

1 code implementation • 30 Jan 2023 • Aviv Navon, Aviv Shamsian, Idan Achituve, Ethan Fetaya, Gal Chechik, Haggai Maron

Designing machine learning architectures for processing neural networks in their raw weight matrix form is a newly introduced research direction.

no code implementations • 4 Sep 2022 • Idan Achituve, Wenbo Wang, Ethan Fetaya, Amir Leshem

Vertical distributed learning exploits the local features collected by multiple learning workers to form a better global model.

1 code implementation • 22 Jun 2022 • Eyal Betzalel, Coby Penso, Aviv Navon, Ethan Fetaya

In this work, we study the evaluation metrics of generative models by generating a high-quality synthetic dataset on which we can estimate classical metrics for comparison.

1 code implementation • 5 Jun 2022 • Coby Penso, Idan Achituve, Ethan Fetaya

Bayesian models have many desirable properties, most notably their ability to generalize from limited data and to properly estimate the uncertainty in their predictions.

2 code implementations • 2 Feb 2022 • Aviv Navon, Aviv Shamsian, Idan Achituve, Haggai Maron, Kenji Kawaguchi, Gal Chechik, Ethan Fetaya

In this paper, we propose viewing the gradients combination step as a bargaining game, where tasks negotiate to reach an agreement on a joint direction of parameter update.

Ranked #1 on Multi-Task Learning on Cityscapes test
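The bargaining view leads to per-task weights α satisfying (G Gᵀ)α = 1/α, where G stacks the task gradients and 1/α is elementwise. The following is a toy damped fixed-point solver for that condition, not the paper's actual optimization procedure; gradients and damping are illustrative.

```python
import numpy as np

def nash_weights(grads, damping=0.2, steps=2000):
    """Solve (G G^T) alpha = 1 / alpha for positive task weights alpha.

    grads: (num_tasks, dim) array of per-task gradients.
    """
    gram = grads @ grads.T
    alpha = np.ones(len(grads))
    for _ in range(steps):
        # damped fixed-point update toward gram @ alpha == 1 / alpha
        alpha = (1 - damping) * alpha + damping / (gram @ alpha)
    return alpha

g = np.array([[1.0, 0.0],
              [0.5, 1.0]])      # two partially conflicting task gradients
alpha = nash_weights(g)
delta = alpha @ g               # agreed joint update direction
```

A property worth noting: each task's gradient has inner product 1/αᵢ > 0 with the joint direction, so no task is moved against.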

no code implementations • 11 Oct 2021 • Guy Heller, Ethan Fetaya

Bayesian learning via Stochastic Gradient Langevin Dynamics (SGLD) has been suggested for differentially private learning.

1 code implementation • NeurIPS 2021 • Idan Achituve, Aviv Shamsian, Aviv Navon, Gal Chechik, Ethan Fetaya

A key challenge in this setting is to learn effectively across clients even though each client has unique data that is often limited in size.

Ranked #1 on Personalized Federated Learning on CIFAR-100

2 code implementations • 8 Mar 2021 • Aviv Shamsian, Aviv Navon, Ethan Fetaya, Gal Chechik

In this approach, a central hypernetwork model is trained to generate a set of models, one model for each client.

Ranked #1 on Personalized Federated Learning on CIFAR-10
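Structurally, each client has a learned embedding, and a shared hypernetwork maps that embedding to the parameters of the client's personal model. The forward-pass sketch below shows only this shape-level structure (no training loop); all dimensions and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
num_clients, embed_dim = 4, 8
in_dim, out_dim = 10, 3                       # per-client model: one linear layer

# trainable state: one embedding per client plus shared hypernetwork weights
client_embeddings = rng.normal(size=(num_clients, embed_dim))
hyper_w = rng.normal(size=(embed_dim, in_dim * out_dim + out_dim)) * 0.1

def client_model_params(client_id):
    """Hypernetwork forward: embedding -> flat parameters -> (W, b)."""
    flat = client_embeddings[client_id] @ hyper_w
    w = flat[: in_dim * out_dim].reshape(in_dim, out_dim)
    b = flat[in_dim * out_dim:]
    return w, b

def personalized_predict(client_id, x):
    w, b = client_model_params(client_id)
    return x @ w + b

w0, b0 = client_model_params(0)
w1, b1 = client_model_params(1)
```

Because only embeddings and the shared hypernetwork are trained, parameter sharing across clients happens through the hypernetwork rather than through a single shared model.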

1 code implementation • 15 Feb 2021 • Idan Achituve, Aviv Navon, Yochai Yemini, Gal Chechik, Ethan Fetaya

As a result, our method scales well with both the number of classes and data size.

no code implementations • 22 Oct 2020 • Yochai Yemini, Ethan Fetaya, Haggai Maron, Sharon Gannot

We use noisy and noiseless versions of a simulated reverberant dataset to test the proposed architecture.

no code implementations • 17 Oct 2020 • Gilad Yehudai, Ethan Fetaya, Eli Meirom, Gal Chechik, Haggai Maron

In this paper, we identify an important type of data where generalization from small to large graphs is challenging: graph distributions for which the local structure depends on the graph size.

1 code implementation • ICLR 2021 • Aviv Navon, Aviv Shamsian, Gal Chechik, Ethan Fetaya

Here, we tackle the problem of learning the entire Pareto front, with the capability of selecting a desired operating point on the front after training.
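The idea can be demonstrated on a toy bi-objective problem: condition a tiny "hypernetwork" on a preference vector r and train it so that, for every r, the produced model minimizes the r-weighted loss; sweeping r after training then traces the Pareto front. The sketch below uses toy losses ℓ₁ = θ² and ℓ₂ = (θ−1)² with a linear hypernetwork; all of these choices are illustrative, not the paper's setup.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 21)
prefs = np.stack([1 - t, t], axis=1)         # preference rays (r1, r2) on the simplex

w = np.zeros(2)                              # linear "hypernetwork": theta(r) = w @ r
lr = 0.5
for _ in range(3000):
    theta = prefs @ w                        # one model parameter per preference
    # scalarized toy losses: r1 * theta^2 + r2 * (theta - 1)^2
    grad_theta = 2 * (prefs[:, 0] * theta + prefs[:, 1] * (theta - 1))
    w -= lr * (grad_theta[:, None] * prefs).mean(axis=0)

# after training, any operating point on the front is one forward pass away
theta_balanced = np.array([0.5, 0.5]) @ w    # balanced trade-off, near 0.5
```

For these losses the exact front is θ ∈ [0, 1] with the r-optimal point at θ = r₂, which the trained map recovers.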

no code implementations • 28 Sep 2020 • Gilad Yehudai, Ethan Fetaya, Eli Meirom, Gal Chechik, Haggai Maron

We further demonstrate on several tasks, that training GNNs on small graphs results in solutions which do not generalize to larger graphs.

1 code implementation • ICLR 2021 • Aviv Navon, Idan Achituve, Haggai Maron, Gal Chechik, Ethan Fetaya

Two main challenges arise in this multi-task learning setting: (i) designing useful auxiliary tasks; and (ii) combining auxiliary tasks into a single coherent loss.

no code implementations • 4 Mar 2020 • Ethan Fetaya, Yonatan Lifshitz, Elad Aaron, Shai Gordin

The main source of information regarding ancient Mesopotamian history and culture is clay cuneiform tablets.

2 code implementations • ICML 2020 • Haggai Maron, Or Litany, Gal Chechik, Ethan Fetaya

We first characterize the space of linear layers that are equivariant both to element reordering and to the inherent symmetries of elements, like translation in the case of images.
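For plain sets, such a characterization yields layers of the familiar form L(X) = XW₁ + (1/n)11ᵀXW₂, which commute with any reordering of the n elements; the paper extends the analysis to elements with internal symmetries such as translation. The set case can be checked directly — this is the standard permutation-equivariant linear layer, not the paper's full construction:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d_in, d_out = 6, 4, 5
w1 = rng.normal(size=(d_in, d_out))
w2 = rng.normal(size=(d_in, d_out))

def equivariant_layer(x):
    # per-element term + aggregated (mean) term; both commute with row permutations
    return x @ w1 + x.mean(axis=0, keepdims=True) @ w2

x = rng.normal(size=(n, d_in))
perm = rng.permutation(n)
out_permuted_input = equivariant_layer(x[perm])
permuted_output = equivariant_layer(x)[perm]
```

Permuting the input rows and then applying the layer gives the same result as applying the layer and then permuting, which is the defining equivariance property.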

no code implementations • ICLR 2020 • Ethan Fetaya, Jörn-Henrik Jacobsen, Will Grathwohl, Richard Zemel

Class-conditional generative models hold promise to overcome the shortcomings of their discriminative counterparts.

no code implementations • 28 May 2019 • Dan Levi, Liran Gispan, Niv Giladi, Ethan Fetaya

Predicting not only the target but also an accurate measure of uncertainty is important for many machine learning applications and in particular safety-critical ones.

no code implementations • 27 Jan 2019 • Haggai Maron, Ethan Fetaya, Nimrod Segol, Yaron Lipman

We conclude the paper by proving a necessary condition for the universality of $G$-invariant networks that incorporate only first-order tensors.

1 code implementation • NeurIPS 2019 • Mengye Ren, Renjie Liao, Ethan Fetaya, Richard S. Zemel

This paper addresses this problem, incremental few-shot learning, where a regular classification network has already been trained to recognize a set of base classes, and several extra novel classes are being considered, each with only a few labeled examples.

1 code implementation • NeurIPS 2018 • Lisa Zhang, Gregory Rosenblatt, Ethan Fetaya, Renjie Liao, William E. Byrd, Matthew Might, Raquel Urtasun, Richard Zemel

Synthesizing programs from input/output examples is a classic problem in artificial intelligence.

1 code implementation • 21 Mar 2018 • KiJung Yoon, Renjie Liao, Yuwen Xiong, Lisa Zhang, Ethan Fetaya, Raquel Urtasun, Richard Zemel, Xaq Pitkow

Message-passing algorithms, such as belief propagation, are a natural way to disseminate evidence amongst correlated variables while exploiting the graph structure, but these algorithms can struggle when the conditional dependency graphs contain loops.
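On a tree, belief propagation is exact; the caveat about loops is that on cyclic graphs the same message updates may fail to converge or may yield biased marginals. A minimal sum-product sketch on a three-variable chain, checked against brute-force enumeration (potentials chosen arbitrarily for illustration):

```python
import numpy as np

# chain x1 - x2 - x3 of binary variables
phi = np.array([[0.6, 0.4],      # unary potentials phi_i(x_i)
                [0.5, 0.5],
                [0.3, 0.7]])
psi = np.array([[1.2, 0.4],      # shared pairwise potential psi(x_i, x_j)
                [0.4, 1.2]])

# sum-product messages into x2 from both neighbors
m12 = psi.T @ phi[0]             # m[b] = sum_a phi1(a) psi(a, b)
m32 = psi @ phi[2]               # m[b] = sum_c psi(b, c) phi3(c)
belief2 = phi[1] * m12 * m32
belief2 /= belief2.sum()

# brute-force marginal of x2 for comparison
joint = np.einsum('a,b,c,ab,bc->abc', phi[0], phi[1], phi[2], psi, psi)
marg2 = joint.sum(axis=(0, 2))
marg2 /= marg2.sum()
```

On this chain the two computations agree exactly; with a loop added, the iterated message updates would only approximate the true marginal.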

1 code implementation • ICML 2018 • Renjie Liao, Yuwen Xiong, Ethan Fetaya, Lisa Zhang, KiJung Yoon, Xaq Pitkow, Raquel Urtasun, Richard Zemel

We examine all RBP variants along with BPTT and TBPTT in three different application domains: associative memory with continuous Hopfield networks, document classification in citation networks using graph neural networks and hyperparameter optimization for fully connected networks.

9 code implementations • ICML 2018 • Thomas Kipf, Ethan Fetaya, Kuan-Chieh Wang, Max Welling, Richard Zemel

Interacting systems are prevalent in nature, from dynamical systems in physics to complex societal dynamics.

no code implementations • ICLR 2018 • Oran Shayer, Dan Levi, Ethan Fetaya

We show how a simple modification to the local reparameterization trick, previously used to train Gaussian distributed weights, enables the training of discrete weights.
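The trick: keep a continuous probability per binary weight and, instead of sampling discrete weights, sample the layer's pre-activations from their Gaussian (central-limit) approximation, which is differentiable in the probabilities. A sketch for weights w ∈ {−1, +1} with P(w = +1) = p; shapes are illustrative and no training loop is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_binary_linear(x, p, rng):
    """Forward pass with weights w_ij in {-1, +1}, P(w_ij = +1) = p_ij.

    Rather than sampling discrete weights, sample the pre-activation from
    its CLT Gaussian approximation (the local reparameterization trick),
    which keeps the forward pass differentiable with respect to p.
    """
    mean_w = 2 * p - 1                  # E[w]
    var_w = 1 - mean_w**2               # Var[w], since w^2 = 1
    mean_a = x @ mean_w
    var_a = (x**2) @ var_w
    eps = rng.standard_normal(mean_a.shape)
    return mean_a + np.sqrt(var_a) * eps

x = rng.normal(size=(32, 100))
p = rng.uniform(0.1, 0.9, size=(100, 10))
a = stochastic_binary_linear(x, p, rng)
```

When every p reaches 0 or 1 the weight variance vanishes and the layer becomes a deterministic binary linear layer, which is the deployment regime.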

no code implementations • 27 Mar 2016 • Ita Lifshitz, Ethan Fetaya, Shimon Ullman

In this paper we consider the problem of human pose estimation from a single still image.

Ranked #37 on Pose Estimation on MPII Human Pose

no code implementations • 20 Oct 2015 • Ariel Jaffe, Ethan Fetaya, Boaz Nadler, Tingting Jiang, Yuval Kluger

In unsupervised ensemble learning, one obtains predictions from multiple sources or classifiers, yet without knowing the reliability and expertise of each source, and with no labeled data to assess it.

no code implementations • 4 Feb 2015 • Ethan Fetaya, Shimon Ullman

For many tasks and data types, there are natural transformations to which the data should be invariant or insensitive.

no code implementations • 10 Jun 2014 • Ethan Fetaya, Ohad Shamir, Shimon Ullman

We consider the problem of learning from a similarity matrix (as in spectral clustering and low-dimensional embedding) when computing pairwise similarities is costly and only a limited number of entries can be observed.
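A standard way to exploit a limited budget of similarity entries — related to, though not identical to, this paper's setting — is Nyström-style column sampling: observe a few full columns of the similarity matrix and reconstruct the rest. When the matrix is (near) low rank, the reconstruction is (near) exact:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 50, 4
a = rng.normal(size=(n, r))
s = a @ a.T                          # rank-r PSD similarity matrix (2500 entries)

idx = np.arange(r + 2)               # observe only a few full columns (~300 entries)
c = s[:, idx]                        # n x k observed column block
w = s[np.ix_(idx, idx)]              # k x k intersection block
s_hat = c @ np.linalg.pinv(w) @ c.T  # Nystrom reconstruction of the full matrix
```

Here roughly 12% of the entries suffice to recover the rank-4 matrix exactly; the paper studies this entry-budget regime more generally.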
