no code implementations • 9 Feb 2025 • Idan Achituve, Hai Victor Habi, Amir Rosenfeld, Arnon Netzer, Idit Diamant, Ethan Fetaya
Here, we suggest a novel sampling method based on sequential Monte Carlo (SMC) in the latent space of diffusion models.
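For illustration, here is a minimal sketch of a generic sequential Monte Carlo step of this kind, assuming a population of latent particles and a user-supplied `potential` function (e.g., a measurement log-likelihood); the shapes, the potential, and the simple perturbation step are assumptions for the sketch, not the paper's exact sampler.

```python
import torch

def smc_step(particles, potential, noise_scale=0.05):
    """One generic SMC iteration over latent particles.

    particles: (N, D) tensor of latent samples.
    potential: callable mapping (N, D) -> (N,) unnormalized log-weights
               (e.g., a measurement log-likelihood); hypothetical here.
    """
    # 1) Reweight: evaluate the log-potential of each particle.
    log_w = potential(particles)                      # (N,)
    w = torch.softmax(log_w, dim=0)                   # normalized weights

    # 2) Resample: systematic resampling concentrates the population on
    #    high-potential regions while keeping the estimator unbiased.
    n = particles.shape[0]
    positions = (torch.rand(1) + torch.arange(n)) / n
    cum_w = torch.cumsum(w, dim=0)
    idx = torch.searchsorted(cum_w, positions).clamp(max=n - 1)
    resampled = particles[idx]

    # 3) Propagate: perturb the survivors (stand-in for a diffusion
    #    denoising / transition step in latent space).
    return resampled + noise_scale * torch.randn_like(resampled)

# Toy usage: particles in a 4-D latent space, potential favouring the origin.
particles = torch.randn(128, 4)
for _ in range(10):
    particles = smc_step(particles, potential=lambda z: -(z ** 2).sum(dim=1))
print(particles.mean(dim=0))  # drifts toward zero
```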
no code implementations • 5 Feb 2025 • Elad Cohen, Idan Achituve, Idit Diamant, Arnon Netzer, Hai Victor Habi
ELIR operates in latent space by first predicting the latent representation of the minimum mean square error (MMSE) estimator and then transporting this estimate to high-quality images using a latent consistency flow-based model.
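A minimal sketch of that two-stage structure, with hypothetical stand-in modules (a toy encoder, MMSE predictor, latent refinement step, and decoder); only the overall flow (predict a latent MMSE estimate, then iteratively refine it in latent space before decoding) mirrors the description above.

```python
import torch
import torch.nn as nn

latent_dim = 64  # hypothetical latent size

# Stand-in modules; in practice these would be a pretrained autoencoder,
# an MMSE regressor, and a latent consistency/flow network.
encoder        = nn.Linear(3 * 32 * 32, latent_dim)      # degraded image -> latent
mmse_predictor = nn.Linear(latent_dim, latent_dim)        # latent MMSE estimate
latent_flow    = nn.Linear(latent_dim * 2, latent_dim)    # refinement step
decoder        = nn.Linear(latent_dim, 3 * 32 * 32)       # latent -> restored image

def restore(degraded, num_flow_steps=4):
    """Two-stage latent restoration sketch: MMSE estimate first, then
    iterative refinement of that estimate in latent space, finally decoded."""
    z = encoder(degraded.flatten(1))          # latent of the degraded input
    z_hat = mmse_predictor(z)                 # stage 1: latent MMSE estimate
    for _ in range(num_flow_steps):           # stage 2: transport the estimate
        z_hat = z_hat + latent_flow(torch.cat([z_hat, z], dim=1))
    return decoder(z_hat).view(-1, 3, 32, 32)

restored = restore(torch.randn(2, 3, 32, 32))
print(restored.shape)  # torch.Size([2, 3, 32, 32])
```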
1 code implementation • 2 Jun 2024 • Ohad Rahamim, Hilit Segev, Idan Achituve, Yuval Atzmon, Yoni Kasten, Gal Chechik
Then, we describe how to infer the 3D poses and arrangement of objects from a 2D generated image by finding a consistent projection of objects onto the 2D scene.
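A minimal sketch of the "consistent projection" idea: given 2D object centers taken from a generated image and a simple pinhole camera, optimize 3D positions so that their projections match the 2D evidence. The camera model, target coordinates, and optimizer settings are illustrative assumptions, not the paper's implementation.

```python
import torch

def project(points_3d, focal=1.0):
    """Simple pinhole projection of (N, 3) camera-space points to (N, 2)."""
    return focal * points_3d[:, :2] / points_3d[:, 2:3]

# Hypothetical 2D object centers extracted from a generated image
# (normalized image coordinates).
targets_2d = torch.tensor([[0.1, 0.2], [-0.3, 0.05], [0.25, -0.15]])

# 3D object positions to optimize; initialized in front of the camera.
positions_3d = torch.tensor([[0.0, 0.0, 3.0],
                             [0.0, 0.0, 3.0],
                             [0.0, 0.0, 3.0]], requires_grad=True)

opt = torch.optim.Adam([positions_3d], lr=0.05)
for step in range(500):
    opt.zero_grad()
    # Consistency loss: projected 3D positions should match the 2D evidence.
    loss = ((project(positions_3d) - targets_2d) ** 2).mean()
    loss.backward()
    opt.step()

print(project(positions_3d).detach())  # close to targets_2d
```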
1 code implementation • 6 Feb 2024 • Idan Achituve, Idit Diamant, Arnon Netzer, Gal Chechik, Ethan Fetaya
Running a dedicated model for each task is computationally expensive, and therefore there is great interest in multi-task learning (MTL).
Ranked #1 on Multi-Task Learning on ChestX-ray14
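A minimal hard-parameter-sharing sketch of the multi-task setup described above: one shared backbone, a lightweight head per task, and a summed loss. The layer sizes and the two toy tasks (classification and regression) are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class SharedMultiTaskNet(nn.Module):
    """One shared backbone with a small per-task head, the standard
    hard-parameter-sharing layout used in multi-task learning."""
    def __init__(self, in_dim=32, hidden=64, task_dims=(10, 1)):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                      nn.Linear(hidden, hidden), nn.ReLU())
        self.heads = nn.ModuleList([nn.Linear(hidden, d) for d in task_dims])

    def forward(self, x):
        feats = self.backbone(x)
        return [head(feats) for head in self.heads]

model = SharedMultiTaskNet()
x = torch.randn(8, 32)
cls_target, reg_target = torch.randint(0, 10, (8,)), torch.randn(8, 1)

cls_out, reg_out = model(x)
# Naive combination: unweighted sum of the per-task losses.
loss = nn.functional.cross_entropy(cls_out, cls_target) \
     + nn.functional.mse_loss(reg_out, reg_target)
loss.backward()
```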
1 code implementation • 3 Jan 2024 • Idit Diamant, Amir Rosenfeld, Idan Achituve, Jacob Goldberger, Arnon Netzer
In this paper, we introduce a novel noise-learning approach tailored to the noise distribution encountered in domain adaptation settings, and we learn to de-confuse the pseudo-labels.
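A minimal sketch in the spirit of noise-adaptation layers: a learned label-noise "transition" matrix is applied on top of the classifier's probabilities when fitting noisy pseudo-labels, while the clean classifier is what one would use at test time. The shapes and the single training step are illustrative, and this generic construction is not claimed to be the paper's exact formulation.

```python
import torch
import torch.nn as nn

num_classes = 5
classifier = nn.Linear(16, num_classes)             # toy feature classifier

# Learned noise parameters: after a row-wise softmax, row i is the distribution
# of the noisy pseudo-label given that the true class is i.
noise_logits = nn.Parameter(torch.eye(num_classes) * 3.0)

def noisy_prediction(features):
    """Clean class probabilities pushed through the learned noise model."""
    p_clean = classifier(features).softmax(dim=1)          # (B, C)
    transition = noise_logits.softmax(dim=1)               # (C, C), rows sum to 1
    return p_clean @ transition                            # (B, C) noisy probs

features = torch.randn(32, 16)
pseudo_labels = torch.randint(0, num_classes, (32,))       # noisy targets

opt = torch.optim.Adam(list(classifier.parameters()) + [noise_logits], lr=1e-2)
opt.zero_grad()
# Fit the *noisy* predictions to the noisy pseudo-labels; at test time one
# would read off the clean classifier probabilities directly.
loss = nn.functional.nll_loss(noisy_prediction(features).clamp_min(1e-8).log(),
                              pseudo_labels)
loss.backward()
opt.step()
```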
no code implementations • 15 Nov 2023 • Aviv Shamsian, David W. Zhang, Aviv Navon, Yan Zhang, Miltiadis Kofinas, Idan Achituve, Riccardo Valperga, Gertjan J. Burghouts, Efstratios Gavves, Cees G. M. Snoek, Ethan Fetaya, Gal Chechik, Haggai Maron
Learning in weight spaces, where neural networks process the weights of other deep neural networks, has emerged as a promising research direction with applications in various fields, from analyzing and editing neural fields and implicit neural representations, to network pruning and quantization.
1 code implementation • 19 Jun 2023 • Ariel Lapid, Idan Achituve, Lior Bracha, Ethan Fetaya
GD-VDM is based on a two-phase generation process involving generating depth videos followed by a novel diffusion Vid2Vid model that generates a coherent real-world video.
1 code implementation • 19 Feb 2023 • Idan Achituve, Gal Chechik, Ethan Fetaya
Combining Gaussian processes with the expressive power of deep neural networks is now commonly done through deep kernel learning (DKL).
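A minimal deep kernel learning sketch: an RBF kernel evaluated on features produced by a neural network, with the network weights and kernel hyperparameters trained jointly by minimizing the GP's negative log marginal likelihood. The layer sizes, data, and optimizer are illustrative assumptions.

```python
import torch
import torch.nn as nn

feature_net = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
log_lengthscale = nn.Parameter(torch.zeros(()))
log_noise = nn.Parameter(torch.tensor(-2.0))

def deep_rbf_kernel(x1, x2):
    """Deep kernel: an RBF kernel evaluated on learned network features."""
    z1, z2 = feature_net(x1), feature_net(x2)
    dist2 = (z1.unsqueeze(1) - z2.unsqueeze(0)).pow(2).sum(-1)   # squared distances
    return torch.exp(-0.5 * dist2 / torch.exp(log_lengthscale) ** 2)

# Toy regression data.
X, y = torch.randn(50, 10), torch.randn(50)

params = list(feature_net.parameters()) + [log_lengthscale, log_noise]
opt = torch.optim.Adam(params, lr=1e-2)
for _ in range(100):
    opt.zero_grad()
    K = deep_rbf_kernel(X, X) + torch.exp(log_noise) * torch.eye(len(X))
    L = torch.linalg.cholesky(K)
    alpha = torch.cholesky_solve(y.unsqueeze(1), L)
    # Negative log marginal likelihood of the GP (up to a constant); optimizing
    # it end-to-end updates both the kernel and the feature network.
    nll = 0.5 * (y.unsqueeze(1) * alpha).sum() + torch.log(torch.diag(L)).sum()
    nll.backward()
    opt.step()
```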
2 code implementations • 30 Jan 2023 • Aviv Navon, Aviv Shamsian, Idan Achituve, Ethan Fetaya, Gal Chechik, Haggai Maron
Designing machine learning architectures for processing neural networks in their raw weight matrix form is a newly introduced research direction.
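A minimal sketch of one building block such weight-space architectures rely on: a linear layer acting on a weight matrix that is equivariant to permuting its rows and its columns (it mixes only the matrix itself, its row means, its column means, and its global mean). This simplified layer is for illustration and is not the full architecture from the paper.

```python
import torch
import torch.nn as nn

class RowColEquivariantLinear(nn.Module):
    """Linear map on a matrix W that commutes with permutations of W's rows
    and columns: out = a*W + b*row_mean + c*col_mean + d*global_mean + bias."""
    def __init__(self):
        super().__init__()
        self.coef = nn.Parameter(torch.randn(4) * 0.1)
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, W):                       # W: (rows, cols)
        row_mean = W.mean(dim=1, keepdim=True)  # (rows, 1), broadcast over cols
        col_mean = W.mean(dim=0, keepdim=True)  # (1, cols), broadcast over rows
        glob_mean = W.mean()
        a, b, c, d = self.coef
        return a * W + b * row_mean + c * col_mean + d * glob_mean + self.bias

layer = RowColEquivariantLinear()
W = torch.randn(6, 4)                 # e.g. one weight matrix of a small MLP
perm_rows = torch.randperm(6)
# Equivariance check: permuting the input rows permutes the output rows.
out_then_perm = layer(W)[perm_rows]
perm_then_out = layer(W[perm_rows])
print(torch.allclose(out_then_perm, perm_then_out, atol=1e-6))  # True
```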
no code implementations • 4 Sep 2022 • Idan Achituve, Wenbo Wang, Ethan Fetaya, Amir Leshem
Vertical distributed learning exploits the local features collected by multiple learning workers to form a better global model.
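A minimal sketch of the vertical setup described above: each worker holds a disjoint subset of the features for the same samples, computes a local embedding, and a server combines the embeddings to produce the prediction. The split sizes, modules, and single training step are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Each worker sees a different slice of the same samples' features.
worker_feature_dims = [8, 12, 4]
workers = nn.ModuleList([nn.Linear(d, 6) for d in worker_feature_dims])  # local encoders
server_head = nn.Linear(6 * len(workers), 2)     # fuses the workers' embeddings

def vertical_forward(feature_slices):
    """feature_slices[i]: worker i's view of the batch, (B, worker_feature_dims[i])."""
    embeddings = [enc(x_i) for enc, x_i in zip(workers, feature_slices)]
    return server_head(torch.cat(embeddings, dim=1))

# Toy batch of 16 samples split column-wise across the three workers.
full_x = torch.randn(16, sum(worker_feature_dims))
slices = torch.split(full_x, worker_feature_dims, dim=1)
labels = torch.randint(0, 2, (16,))

loss = nn.functional.cross_entropy(vertical_forward(slices), labels)
loss.backward()   # in a real deployment, the gradient of each local encoder
                  # would be sent back to the corresponding worker
```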
1 code implementation • 5 Jun 2022 • Coby Penso, Idan Achituve, Ethan Fetaya
Bayesian models have many desirable properties, most notably the ability to generalize from limited data and to properly estimate the uncertainty in their predictions.
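A minimal sketch of how such uncertainty estimates are typically formed in practice: average the predictive distributions of several posterior samples (here a toy ensemble stands in for the posterior over weights) and use the entropy of the average as a per-example uncertainty score. The ensemble size and shapes are illustrative.

```python
import torch
import torch.nn as nn

# A toy "posterior": a few independently initialized classifiers standing in
# for samples from a Bayesian posterior over the weights.
posterior_samples = [nn.Linear(20, 3) for _ in range(10)]

def predictive(x):
    """Bayesian model averaging: mean of the per-sample class probabilities."""
    probs = torch.stack([m(x).softmax(dim=1) for m in posterior_samples])  # (S, B, C)
    return probs.mean(dim=0)                                               # (B, C)

x = torch.randn(4, 20)
p = predictive(x)
# Predictive entropy: a simple per-example uncertainty score.
entropy = -(p * p.clamp_min(1e-12).log()).sum(dim=1)
print(p.argmax(dim=1), entropy)
```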
4 code implementations • 2 Feb 2022 • Aviv Navon, Aviv Shamsian, Idan Achituve, Haggai Maron, Kenji Kawaguchi, Gal Chechik, Ethan Fetaya
In this paper, we propose viewing the gradients combination step as a bargaining game, where tasks negotiate to reach an agreement on a joint direction of parameter update.
Ranked #2 on Multi-Task Learning on Cityscapes test
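A minimal sketch of the gradient-combination step that the bargaining view operates on: collect one gradient of the shared parameters per task and combine them with a weighted sum. The uniform weights used here are only a placeholder for the bargaining (Nash) weights the paper derives; the toy tasks and sizes are assumptions.

```python
import torch
import torch.nn as nn

shared = nn.Linear(16, 16)                                     # shared parameters
heads = nn.ModuleList([nn.Linear(16, 1) for _ in range(3)])    # one head per task

x = torch.randn(8, 16)
targets = [torch.randn(8, 1) for _ in heads]

# Collect one flattened gradient of the shared parameters per task.
task_grads = []
for head, t in zip(heads, targets):
    loss = nn.functional.mse_loss(head(shared(x)), t)
    grads = torch.autograd.grad(loss, list(shared.parameters()))
    task_grads.append(torch.cat([g.flatten() for g in grads]))
G = torch.stack(task_grads)               # (num_tasks, num_shared_params)

# Placeholder weights; the paper instead solves a bargaining game over the
# rows of G to decide how much each task contributes to the joint update.
alpha = torch.ones(len(heads)) / len(heads)
update_direction = alpha @ G              # combined update for the shared params
print(update_direction.shape)
```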
1 code implementation • NeurIPS 2021 • Idan Achituve, Aviv Shamsian, Aviv Navon, Gal Chechik, Ethan Fetaya
A key challenge in this setting is to learn effectively across clients even though each client has unique data that is often limited in size.
Ranked #1 on Personalized Federated Learning on CIFAR-100
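A minimal sketch of the personalized federated setup: clients share a feature extractor that is averaged across clients each round, while each client keeps a personal classification head fitted to its own limited data. The plain linear heads here only stand in for the per-client predictors (the paper places Gaussian-process classifiers on top of the shared features); the data, round counts, and sizes are illustrative.

```python
import copy
import torch
import torch.nn as nn

num_clients, feat_dim = 4, 16
global_backbone = nn.Linear(8, feat_dim)                                # shared
personal_heads = [nn.Linear(feat_dim, 3) for _ in range(num_clients)]  # kept local

# Toy local datasets (each client has its own limited data).
client_data = [(torch.randn(20, 8), torch.randint(0, 3, (20,)))
               for _ in range(num_clients)]

for rnd in range(5):                                   # federated rounds
    client_states = []
    for c in range(num_clients):
        backbone = copy.deepcopy(global_backbone)      # client's local copy
        opt = torch.optim.SGD(list(backbone.parameters())
                              + list(personal_heads[c].parameters()), lr=0.1)
        x, y = client_data[c]
        for _ in range(5):                             # local steps
            opt.zero_grad()
            loss = nn.functional.cross_entropy(personal_heads[c](backbone(x)), y)
            loss.backward()
            opt.step()
        client_states.append(backbone.state_dict())
    # FedAvg: average only the shared backbone; personal heads never leave clients.
    avg = {k: torch.stack([s[k] for s in client_states]).mean(dim=0)
           for k in client_states[0]}
    global_backbone.load_state_dict(avg)
```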
1 code implementation • 15 Feb 2021 • Idan Achituve, Aviv Navon, Yochai Yemini, Gal Chechik, Ethan Fetaya
As a result, our method scales well with both the number of classes and data size.
1 code implementation • ICLR 2021 • Aviv Navon, Idan Achituve, Haggai Maron, Gal Chechik, Ethan Fetaya
Two main challenges arise in this multi-task learning setting: (i) designing useful auxiliary tasks; and (ii) combining auxiliary tasks into a single coherent loss.
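A minimal sketch of challenge (ii): fold several auxiliary losses into a single objective through weights. The fixed weights used here are only a stand-in for the quantities the paper learns (via implicit differentiation through the main task's performance); the toy tasks and sizes are assumptions.

```python
import torch
import torch.nn as nn

backbone = nn.Linear(12, 32)
main_head = nn.Linear(32, 5)                                      # main task
aux_heads = nn.ModuleList([nn.Linear(32, 5) for _ in range(3)])   # auxiliary tasks

# Weights on the auxiliary losses; fixed here for illustration, learned in the paper.
aux_weights = torch.tensor([0.5, 0.3, 0.2])

x = torch.randn(16, 12)
y_main = torch.randint(0, 5, (16,))
y_aux = [torch.randint(0, 5, (16,)) for _ in aux_heads]

feats = backbone(x)
main_loss = nn.functional.cross_entropy(main_head(feats), y_main)
aux_losses = torch.stack([nn.functional.cross_entropy(h(feats), t)
                          for h, t in zip(aux_heads, y_aux)])
# Single coherent loss: main objective plus the weighted auxiliary objectives.
total_loss = main_loss + (aux_weights * aux_losses).sum()
total_loss.backward()
```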
3 code implementations • 29 Mar 2020 • Idan Achituve, Haggai Maron, Gal Chechik
Self-supervised learning (SSL) is a technique for learning useful representations from unlabeled data.
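A minimal self-supervised pretext-task sketch on unlabeled point clouds: rotate each cloud by one of four known angles and train a small network to predict which rotation was applied, so labels come for free from the data itself. This generic rotation-prediction task and the tiny PointNet-style encoder are only an illustration, not the specific pretext task used in the paper.

```python
import math
import torch
import torch.nn as nn

def rotate_z(points, angle):
    """Rotate an (N, 3) point cloud about the z-axis by `angle` radians."""
    c, s = math.cos(angle), math.sin(angle)
    R = torch.tensor([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return points @ R.T

# Tiny PointNet-style encoder + head predicting which of 4 rotations was applied.
point_mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 64))
head = nn.Linear(64, 4)

def predict_rotation(cloud):
    feats = point_mlp(cloud)                  # (N, 64) per-point features
    global_feat = feats.max(dim=0).values     # order-invariant pooling
    return head(global_feat)                  # logits over the 4 candidate rotations

angles = torch.tensor([0.0, 0.5, 1.0, 1.5]) * torch.pi / 2

cloud = torch.randn(256, 3)                   # one unlabeled point cloud
rot_id = torch.randint(0, 4, ())              # the "free" label we create ourselves
rotated = rotate_z(cloud, angles[rot_id])

loss = nn.functional.cross_entropy(predict_rotation(rotated).unsqueeze(0),
                                   rot_id.unsqueeze(0))
loss.backward()
```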