no code implementations • 13 Jul 2023 • Moab Arar, Rinon Gal, Yuval Atzmon, Gal Chechik, Daniel Cohen-Or, Ariel Shamir, Amit H. Bermano
Text-to-image (T2I) personalization allows users to guide the creative image generation process by combining their own visual concepts in natural language prompts.
no code implementations • 18 Jun 2023 • Yoni Kasten, Ohad Rahamim, Gal Chechik
Point-cloud data collected in real-world applications are often incomplete.
1 code implementation • 15 Jun 2023 • Royi Rassin, Eran Hirsch, Daniel Glickman, Shauli Ravfogel, Yoav Goldberg, Gal Chechik
This reflects an impaired mapping between linguistic binding of entities and modifiers in the prompt and visual binding of the corresponding elements in the generated image.
1 code implementation • 14 Jun 2023 • Dvir Samuel, Rami Ben-Ari, Nir Darshan, Haggai Maron, Gal Chechik
We describe a simple yet efficient algorithm for approximating this metric and use it to further define centroids in the latent seed space.
no code implementations • 30 May 2023 • Lior Bracha, Eitan Shaar, Aviv Shamsian, Ethan Fetaya, Gal Chechik
Our results highlight the potential of using pre-trained visual-semantic models for generating high-quality contextual descriptions.
no code implementations • 2 May 2023 • Chen Tessler, Yoni Kasten, Yunrong Guo, Shie Mannor, Gal Chechik, Xue Bin Peng
In this work, we present Conditional Adversarial Latent Models (CALM), an approach for generating diverse and directable behaviors for user-controlled interactive virtual characters.
no code implementations • 2 May 2023 • Yoad Tewel, Rinon Gal, Gal Chechik, Yuval Atzmon
The task of T2I personalization poses multiple hard challenges, such as maintaining high visual fidelity while allowing creative control, combining multiple personalized concepts in a single image, and keeping a small model size.
1 code implementation • 27 Apr 2023 • Dvir Samuel, Rami Ben-Ari, Simon Raviv, Nir Darshan, Gal Chechik
We further evaluate SeedSelect on correcting images of hands, a well-known pitfall of current diffusion models, and show that it improves hand generation substantially.
no code implementations • 6 Mar 2023 • Moshe Eliasof, Fabrizio Frasca, Beatrice Bevilacqua, Eran Treister, Gal Chechik, Haggai Maron
Two main families of node feature augmentation schemes have been explored for enhancing GNNs: random features and spectral positional encoding.
no code implementations • 23 Feb 2023 • Rinon Gal, Moab Arar, Yuval Atzmon, Amit H. Bermano, Gal Chechik, Daniel Cohen-Or
Specifically, we employ two components: First, an encoder that takes as input a single image of a target concept from a given domain, e.g., a specific face, and learns to map it into a word embedding representing the concept.
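A minimal sketch of this encoder idea, assuming a frozen CLIP-style image backbone; the class and head below are hypothetical and the paper's architecture is richer:

```python
import torch
import torch.nn as nn

class ConceptEncoder(nn.Module):
    """Sketch: map a single concept image to a pseudo-word embedding that
    can be injected into a text encoder's token-embedding space.
    (Hypothetical architecture, not the paper's exact design.)"""
    def __init__(self, backbone: nn.Module, feat_dim: int = 768, token_dim: int = 768):
        super().__init__()
        self.backbone = backbone            # frozen image feature extractor
        self.to_token = nn.Sequential(      # small head -> word embedding
            nn.Linear(feat_dim, token_dim), nn.GELU(),
            nn.Linear(token_dim, token_dim),
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():               # keep the backbone frozen
            feats = self.backbone(image)    # (B, feat_dim)
        return self.to_token(feats)         # (B, token_dim): embedding for the concept token
```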
1 code implementation • 19 Feb 2023 • Idan Achituve, Gal Chechik, Ethan Fetaya
Combining Gaussian processes with the expressive power of deep neural networks is commonly done nowadays through deep kernel learning (DKL).
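The core DKL construction is a standard kernel applied to learned features, k(x, x') = k_base(f_theta(x), f_theta(x')). A minimal runnable sketch:

```python
import torch
import torch.nn as nn

def deep_rbf_kernel(x1, x2, net: nn.Module, lengthscale: float = 1.0):
    """Deep kernel: an RBF kernel applied to neural-network features,
    k(x, x') = exp(-||f(x) - f(x')||^2 / (2 * l^2))."""
    z1, z2 = net(x1), net(x2)                       # embed inputs with a neural net
    d2 = torch.cdist(z1, z2).pow(2)                 # pairwise squared distances
    return torch.exp(-0.5 * d2 / lengthscale ** 2)  # Gram matrix (N1, N2)

net = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 8))
K = deep_rbf_kernel(torch.randn(5, 10), torch.randn(7, 10), net)  # (5, 7)
```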
1 code implementation • 31 Jan 2023 • Aviv Shamsian, Aviv Navon, Neta Glazer, Kenji Kawaguchi, Gal Chechik, Ethan Fetaya
Auxiliary learning is an effective method for enhancing the generalization capabilities of trained models, particularly when dealing with small datasets.
no code implementations • 30 Jan 2023 • Gal Dalal, Assaf Hallak, Gugan Thoppe, Shie Mannor, Gal Chechik
We prove that the resulting variance decays exponentially with the planning horizon as a function of the expansion policy.
1 code implementation • 30 Jan 2023 • Aviv Navon, Aviv Shamsian, Idan Achituve, Ethan Fetaya, Gal Chechik, Haggai Maron
Designing machine learning architectures for processing neural networks in their raw weight matrix form is a newly introduced research direction.
3 code implementations • 26 Jan 2023 • Ido Greenberg, Shie Mannor, Gal Chechik, Eli Meirom
We prove that the former disappears in MRL, and address the latter via the novel Robust Meta RL algorithm (RoML).
no code implementations • 27 Oct 2022 • Ohad Amosy, Tomer Volk, Eyal Ben-David, Roi Reichart, Gal Chechik
We study the problem of generating a training-free task-dependent visual classifier from text descriptions without visual samples.
no code implementations • 28 Sep 2022 • Gal Dalal, Assaf Hallak, Shie Mannor, Gal Chechik
This allows us to reduce the variance of gradients by three orders of magnitude and to benefit from better sample complexity compared with standard policy gradient.
6 code implementations • 2 Aug 2022 • Rinon Gal, Yuval Alaluf, Yuval Atzmon, Or Patashnik, Amit H. Bermano, Gal Chechik, Daniel Cohen-Or
Yet, it is unclear how such freedom can be exercised to generate images of specific unique concepts, modify their appearance, or compose them in new roles and novel scenes.
1 code implementation • 5 Jul 2022 • Benjamin Fuhrer, Yuval Shpigelman, Chen Tessler, Shie Mannor, Gal Chechik, Eitan Zahavi, Gal Dalal
As communication protocols evolve, datacenter network utilization increases.
1 code implementation • 30 May 2022 • Guy Tennenholtz, Nadav Merlis, Lior Shani, Shie Mannor, Uri Shalit, Gal Chechik, Assaf Hallak, Gal Dalal
We learn the parameters of the TerMDP and leverage the structure of the estimation problem to provide state-wise confidence bounds.
no code implementations • 18 Apr 2022 • Eli A. Meirom, Haggai Maron, Shie Mannor, Gal Chechik
Quantum Computing (QC) stands to revolutionize computing, but is currently still limited.
2 code implementations • 4 Apr 2022 • Niv Cohen, Rinon Gal, Eli A. Meirom, Gal Chechik, Yuval Atzmon
We propose an architecture for solving PerVL that operates by extending the input vocabulary of a pretrained model with new word embeddings for the new personalized concepts.
Ranked #4 on Zero-Shot Composed Image Retrieval (ZS-CIR) on CIRCO
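A minimal sketch of the vocabulary-extension idea above: add one learnable embedding for the personalized concept while the pretrained model stays frozen (dimensions and names are illustrative, not the paper's exact recipe):

```python
import torch
import torch.nn as nn

embed = nn.Embedding(49408, 512)          # pretrained token-embedding table (frozen)
new_token = nn.Parameter(embed.weight.mean(0, keepdim=True).clone())  # init near vocab mean

# Extended table: the new concept gets token id 49408; rebuild each step in practice.
extended = torch.cat([embed.weight.detach(), new_token], dim=0)       # (vocab + 1, 512)
opt = torch.optim.Adam([new_token], lr=1e-3)  # optimize only the new embedding
```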
1 code implementation • 27 Mar 2022 • Tomer Volk, Eyal Ben-David, Ohad Amosy, Gal Chechik, Roi Reichart
While Natural Language Processing (NLP) algorithms keep reaching unprecedented milestones, out-of-distribution generalization is still challenging.
2 code implementations • 2 Feb 2022 • Aviv Navon, Aviv Shamsian, Idan Achituve, Haggai Maron, Kenji Kawaguchi, Gal Chechik, Ethan Fetaya
In this paper, we propose viewing the gradients combination step as a bargaining game, where tasks negotiate to reach an agreement on a joint direction of parameter update.
Ranked #1 on Multi-Task Learning on QM9
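The bargaining solution can be characterized by a first-order condition coupling the task-gradient Gram matrix and the weights. A toy sketch, assuming the condition (G G^T) alpha = 1/alpha from the paper and a generic root solver; a robust implementation would explicitly enforce positivity of alpha:

```python
import numpy as np
from scipy.optimize import fsolve

def nash_mtl_weights(grads: np.ndarray) -> np.ndarray:
    """grads: (T, d) matrix of per-task gradients. Solves K @ alpha = 1/alpha
    (K = G G^T), the first-order condition of the bargaining objective."""
    K = grads @ grads.T                         # (T, T) Gram matrix
    f = lambda a: K @ a - 1.0 / a               # root of this is the solution
    alpha = fsolve(f, x0=np.ones(K.shape[0]))   # start from uniform weights
    return np.abs(alpha)                        # crude positivity fix for the sketch

g = np.random.randn(3, 100)                     # three tasks, 100 parameters
update = nash_mtl_weights(g) @ g                # joint update direction sum_i alpha_i g_i
```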
no code implementations • 2 Feb 2022 • Yuval Atzmon, Eli A. Meirom, Shie Mannor, Gal Chechik
Reasoning and interacting with dynamic environments is a fundamental problem in AI, but it becomes extremely challenging when actions can trigger cascades of cross-dependent events.
no code implementations • 28 Jan 2022 • Aviv Rosenberg, Assaf Hallak, Shie Mannor, Gal Chechik, Gal Dalal
Some of the most powerful reinforcement learning frameworks use planning for action selection.
no code implementations • 20 Jan 2022 • Or Litany, Haggai Maron, David Acuna, Jan Kautz, Gal Chechik, Sanja Fidler
Standard Federated Learning (FL) techniques are limited to clients with identical network architectures.
no code implementations • 16 Nov 2021 • Ohad Amosy, Gal Eyal, Gal Chechik
In both FL and PFL, all clients participate in the training process and their labeled data are used for training.
no code implementations • ICLR 2022 • Guy Tennenholtz, Assaf Hallak, Gal Dalal, Shie Mannor, Gal Chechik, Uri Shalit
We analyze the limitations of learning from such data with and without external reward, and propose an adjustment of standard imitation learning algorithms to fit this setup.
1 code implementation • CVPR 2022 • Roei Herzig, Elad Ben-Avraham, Karttikeya Mangalam, Amir Bar, Gal Chechik, Anna Rohrbach, Trevor Darrell, Amir Globerson
In this work, we present Object-Region Video Transformers (ORViT), an \emph{object-centric} approach that extends video transformer layers with a block that directly incorporates object representations.
Ranked #3 on Action Recognition on Diving-48
no code implementations • 29 Sep 2021 • Ohad Amosy, Gal Eyal, Gal Chechik
That client representation is fed to a hypernetwork that generates a personalized model for that client.
3 code implementations • 2 Aug 2021 • Rinon Gal, Or Patashnik, Haggai Maron, Gal Chechik, Daniel Cohen-Or
Can a generative model be trained to produce images from a specific domain, guided by a text prompt only, without seeing any image?
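At the heart of this kind of text-guided domain adaptation is a directional CLIP loss: the image-space edit direction is aligned with the text-space direction between a source and a target prompt. A hedged sketch, where the encoder callables are assumptions rather than a specific API:

```python
import torch
import torch.nn.functional as F

def directional_clip_loss(clip_image, clip_text, img_src, img_gen,
                          prompt_src: str, prompt_tgt: str):
    """Directional loss: cosine-align the image direction (generated minus
    source) with the text direction (target prompt minus source prompt).
    clip_image / clip_text return (B, D) / (1, D) embeddings (assumed)."""
    dT = clip_text(prompt_tgt) - clip_text(prompt_src)   # text direction
    dI = clip_image(img_gen) - clip_image(img_src)       # image direction
    return (1 - F.cosine_similarity(dI, dT)).mean()
```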
1 code implementation • NeurIPS 2021 • Assaf Hallak, Gal Dalal, Steven Dalton, Iuri Frosio, Shie Mannor, Gal Chechik
We first discover and analyze a counter-intuitive phenomenon: action selection through TS and a pre-trained value function often leads to lower performance compared to the original pre-trained agent, even when having access to the exact state and reward in future steps.
1 code implementation • NeurIPS 2021 • Idan Achituve, Aviv Shamsian, Aviv Navon, Gal Chechik, Ethan Fetaya
A key challenge in this setting is to learn effectively across clients even though each client has unique data that is often limited in size.
Ranked #1 on Personalized Federated Learning on CIFAR-100
1 code implementation • CVPR 2022 • Amir Bar, Xin Wang, Vadim Kantorov, Colorado J Reed, Roei Herzig, Gal Chechik, Anna Rohrbach, Trevor Darrell, Amir Globerson
Recent self-supervised pretraining methods for object detection largely focus on pretraining the backbone of the object detector, neglecting key parts of detection architecture.
Ranked #1 on Few-Shot Object Detection on COCO 2017
no code implementations • ICCV 2021 • Dvir Samuel, Gal Chechik
The new robustness loss can be combined with various classifier balancing techniques and can be applied to representations at several layers of the deep model.
Ranked #15 on Long-tail Learning on CIFAR-100-LT (ρ=10)
2 code implementations • 8 Mar 2021 • Aviv Shamsian, Aviv Navon, Ethan Fetaya, Gal Chechik
In this approach, a central hypernetwork model is trained to generate a set of models, one model for each client.
Ranked #1 on Personalized Federated Learning on CIFAR-10
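A minimal sketch of the hypernetwork mechanism described above: a learnable per-client embedding is mapped to the flat parameter vector of a small target model. Dimensions are illustrative:

```python
import torch
import torch.nn as nn

class ClientHyperNet(nn.Module):
    """Sketch: per-client embedding -> parameters of a 784->10 linear classifier."""
    def __init__(self, n_clients: int, emb_dim: int = 32, target_params: int = 7850):
        super().__init__()
        self.client_emb = nn.Embedding(n_clients, emb_dim)
        self.mlp = nn.Sequential(
            nn.Linear(emb_dim, 128), nn.ReLU(),
            nn.Linear(128, target_params),   # weights + biases, flattened
        )

    def forward(self, client_id: torch.Tensor) -> torch.Tensor:
        return self.mlp(self.client_emb(client_id))  # flat parameter vector

hnet = ClientHyperNet(n_clients=50)
theta = hnet(torch.tensor([3]))                      # personalized parameters for client 3
W, b = theta[:, :7840].view(10, 784), theta[:, 7840:].view(10)
```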
no code implementations • 18 Feb 2021 • Chen Tessler, Yuval Shpigelman, Gal Dalal, Amit Mandelbaum, Doron Haritan Kazakov, Benjamin Fuhrer, Gal Chechik, Shie Mannor
We approach the task of network congestion control in datacenters using Reinforcement Learning (RL).
1 code implementation • 15 Feb 2021 • Idan Achituve, Aviv Navon, Yochai Yemini, Gal Chechik, Ethan Fetaya
As a result, our method scales well with both the number of classes and data size.
1 code implementation • ICCV 2021 • Sangho Lee, Jiwan Chung, Youngjae Yu, Gunhee Kim, Thomas Breuel, Gal Chechik, Yale Song
We demonstrate that our approach finds videos with high audio-visual correspondence and show that self-supervised models trained on our data achieve competitive performances compared to models trained on existing manually curated datasets.
1 code implementation • 20 Oct 2020 • Ohad Amosy, Gal Chechik
Then, we train a student network using the pseudo labels and regularize the teacher to fit the student predictions.
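A sketch of one such training step; the KL regularizer and its weight are illustrative, and the paper's exact losses and schedule may differ:

```python
import torch
import torch.nn.functional as F

def mutual_step(teacher, student, x_unlabeled, opt_t, opt_s, lam=0.1):
    """One step: student fits the teacher's pseudo labels, then the teacher
    is regularized toward the student's predictions."""
    with torch.no_grad():
        pseudo = teacher(x_unlabeled).argmax(dim=1)         # hard pseudo labels
    loss_s = F.cross_entropy(student(x_unlabeled), pseudo)  # student <- teacher
    opt_s.zero_grad(); loss_s.backward(); opt_s.step()

    with torch.no_grad():
        s_prob = F.softmax(student(x_unlabeled), dim=1)
    loss_t = lam * F.kl_div(F.log_softmax(teacher(x_unlabeled), dim=1),
                            s_prob, reduction='batchmean')  # teacher <- student
    opt_t.zero_grad(); loss_t.backward(); opt_t.step()
```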
no code implementations • 17 Oct 2020 • Gilad Yehudai, Ethan Fetaya, Eli Meirom, Gal Chechik, Haggai Maron
In this paper, we identify an important type of data where generalization from small to large graphs is challenging: graph distributions for which the local structure depends on the graph size.
no code implementations • 11 Oct 2020 • Eli A. Meirom, Haggai Maron, Shie Mannor, Gal Chechik
We consider the problem of controlling a partially-observed dynamic process on a graph by a limited number of interventions.
1 code implementation • ICLR 2021 • Aviv Navon, Aviv Shamsian, Gal Chechik, Ethan Fetaya
Here, we tackle the problem of learning the entire Pareto front, with the capability of selecting a desired operating point on the front after training.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Tzuf Paz-Argaman, Yuval Atzmon, Gal Chechik, Reut Tsarfaty
Specifically, given bird images with free-text descriptions of their species, we learn to classify images of previously-unseen species based on species descriptions.
no code implementations • 30 Sep 2020 • Achiya Jerbi, Roei Herzig, Jonathan Berant, Gal Chechik, Amir Globerson
In this work, we argue that captions contain much richer information about the image, including attributes of objects and their relations.
no code implementations • 28 Sep 2020 • Gilad Yehudai, Ethan Fetaya, Eli Meirom, Gal Chechik, Haggai Maron
We further demonstrate on several tasks, that training GNNs on small graphs results in solutions which do not generalize to larger graphs.
1 code implementation • 27 Jun 2020 • Amir Bar, Roei Herzig, Xiaolong Wang, Anna Rohrbach, Gal Chechik, Trevor Darrell, Amir Globerson
Our generative model for this task (AG2Vid) disentangles motion and appearance features and, by incorporating a scheduling mechanism for actions, facilitates timely and coordinated video generation.
1 code implementation • NeurIPS 2020 • Yuval Atzmon, Felix Kreuk, Uri Shalit, Gal Chechik
This leads to consistent misclassification of samples from a new distribution, like new combinations of known components.
1 code implementation • ICLR 2021 • Aviv Navon, Idan Achituve, Haggai Maron, Gal Chechik, Ethan Fetaya
Two main challenges arise in this multi-task learning setting: (i) designing useful auxiliary tasks; and (ii) combining auxiliary tasks into a single coherent loss.
1 code implementation • ECCV 2020 • Tanmay Gupta, Arash Vahdat, Gal Chechik, Xiaodong Yang, Jan Kautz, Derek Hoiem
Given pairs of images and captions, we maximize compatibility of the attention-weighted regions and the words in the corresponding caption, compared to non-corresponding pairs of images and captions.
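A minimal sketch of attention-weighted region-word compatibility with a hinge-style contrastive objective, as a simplified stand-in for the paper's exact loss:

```python
import torch
import torch.nn.functional as F

def region_word_score(regions, words):
    """Each word attends over image regions; the score is the mean cosine
    similarity between each word and its attended region summary.
    regions: (R, D), words: (W, D)."""
    att = F.softmax(words @ regions.t(), dim=-1)         # (W, R) word->region attention
    attended = att @ regions                             # (W, D) per-word region summary
    return F.cosine_similarity(attended, words).mean()   # scalar compatibility

def contrastive_loss(regions, pos_words, neg_words, margin=0.2):
    """Rank the matching caption above a non-matching one."""
    return F.relu(margin - region_word_score(regions, pos_words)
                         + region_word_score(regions, neg_words))
```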
1 code implementation • 5 Apr 2020 • Dvir Samuel, Yuval Atzmon, Gal Chechik
Real-world data is predominantly unbalanced and long-tailed, but deep models struggle to recognize rare classes in the presence of frequent classes.
Ranked #1 on Long-tail learning with class descriptors on CUB-LT
3 code implementations • 29 Mar 2020 • Idan Achituve, Haggai Maron, Gal Chechik
Self-supervised learning (SSL) is a technique for learning useful representations from unlabeled data.
1 code implementation • ECCV 2020 • Aviv Shamsian, Ofri Kleinfeld, Amir Globerson, Gal Chechik
The fourth subtask, where a target object is carried by a containing object, is particularly challenging because it requires a system to reason about a moving location of an invisible object.
Ranked #3 on Video Object Tracking on CATER
2 code implementations • ICML 2020 • Haggai Maron, Or Litany, Gal Chechik, Ethan Fetaya
We first characterize the space of linear layers that are equivariant both to element reordering and to the inherent symmetries of elements, like translation in the case of images.
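A minimal sketch of such a layer for sets of signals: a Siamese symmetry-equivariant map applied per element plus another equivariant map applied to the aggregated set. Here 1D convolutions stand in for the inner translation-equivariant maps, and dimensions are illustrative:

```python
import torch
import torch.nn as nn

class DSSConv1d(nn.Module):
    """Layer equivariant to element reordering and to per-element translation."""
    def __init__(self, c_in: int, c_out: int):
        super().__init__()
        self.siamese = nn.Conv1d(c_in, c_out, kernel_size=3, padding=1)
        self.aggregate = nn.Conv1d(c_in, c_out, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_elements, channels, length)
        b, n, c, l = x.shape
        per_elem = self.siamese(x.reshape(b * n, c, l)).reshape(b, n, -1, l)
        pooled = self.aggregate(x.sum(dim=1))            # (b, c_out, l), order-invariant
        return per_elem + pooled.unsqueeze(1)            # broadcast over elements
```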
2 code implementations • ECCV 2020 • Roei Herzig, Amir Bar, Huijuan Xu, Gal Chechik, Trevor Darrell, Amir Globerson
Generating realistic images of complex visual scenes becomes challenging when one wishes to control the structure of the generated images.
Ranked #3 on Layout-to-Image Generation on Visual Genome 256x256
no code implementations • IJCNLP 2019 • Hagai Taitelbaum, Gal Chechik, Jacob Goldberger
In this paper we present a novel approach to simultaneously representing multiple languages in a common space.
no code implementations • IJCNLP 2019 • Hagai Taitelbaum, Gal Chechik, Jacob Goldberger
For each source word, we first search for the most relevant auxiliary languages.
no code implementations • 26 Jul 2019 • Gilad Vered, Gal Oren, Yuval Atzmon, Gal Chechik
Second, we show that the generated descriptions can be kept close to natural by constraining them to be similar to human descriptions.
no code implementations • 10 Jun 2019 • Roman Visotsky, Yuval Atzmon, Gal Chechik
Here we describe a new approach to learn with fewer samples, by using additional information that is provided per sample.
1 code implementation • 26 Feb 2019 • Moshiko Raboh, Roei Herzig, Gal Chechik, Jonathan Berant, Amir Globerson
In many domains, it is preferable to train systems jointly in an end-to-end manner, but SGs are not commonly used as intermediate components in visual reasoning systems because, being discrete and sparse, scene-graph representations are non-differentiable and difficult to optimize.
1 code implementation • CVPR 2019 • Lior Bracha, Gal Chechik
Capturing the interesting components of an image is a key aspect of image understanding.
no code implementations • CVPR 2019 • Yuval Atzmon, Gal Chechik
Specifically, our model consists of three classifiers: A "gating" model that makes soft decisions if a sample is from a "seen" class, and two experts: a ZSL expert, and an expert model for seen classes.
no code implementations • 20 Sep 2018 • Yair Lakretz, Gal Chechik, Evan-Gary Cohen, Alessandro Treves, Naama Friedmann
This study presents a new framework for learning a metric function for perceptual distances among pairs of phonemes.
1 code implementation • 7 Jun 2018 • Yuval Atzmon, Gal Chechik
The soft group structure can be learned from data jointly as part of the model, and can also readily incorporate prior knowledge about groups if available.
1 code implementation • NeurIPS 2018 • Roei Herzig, Moshiko Raboh, Gal Chechik, Jonathan Berant, Amir Globerson
Machine understanding of complex images is a key goal of artificial intelligence.
no code implementations • 27 Nov 2017 • Ido Cohen, Eli David, Nathan S. Netanyahu, Noa Liscovitch, Gal Chechik
This paper presents a novel deep learning-based method for learning a functional representation of mammalian neural images.
1 code implementation • CVPR 2017 • Ramakrishna Vedantam, Samy Bengio, Kevin Murphy, Devi Parikh, Gal Chechik
We introduce an inference technique to produce discriminative context-aware image captions (captions that describe differences between images or visual concepts) using only generic context-agnostic training data (captions that describe a concept or an image in isolation).
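A simplified reranking sketch of this "describe the target, not the distractor" idea; the paper applies the principle during decoding, and the scoring callables below are hypothetical:

```python
def discriminative_rerank(captions, logp_target, logp_distractor, lam=0.7):
    """Prefer captions likely under the target image but unlikely under the
    distractor. logp_* map a caption to a log-likelihood under a pretrained
    captioner conditioned on that image (assumed callables)."""
    def score(c):
        return lam * logp_target(c) + (1 - lam) * (logp_target(c) - logp_distractor(c))
    return max(captions, key=score)
```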
no code implementations • CVPR 2017 • Andreas Veit, Neil Alldrin, Gal Chechik, Ivan Krasin, Abhinav Gupta, Serge Belongie
For the small clean set of annotations, we use a quarter of the validation set with ~40k images.
no code implementations • 27 Aug 2016 • Yuval Atzmon, Jonathan Berant, Vahid Kezami, Amir Globerson, Gal Chechik
Recurrent neural networks have recently been used for learning to describe images using natural language.
no code implementations • 11 Apr 2015 • Alexander Kalmanovich, Gal Chechik
Stacked denoising auto encoders (DAEs) are well known to learn useful deep representations, which can be used to improve supervised training by initializing a deep network.
no code implementations • 2 Dec 2013 • Uri Shalit, Gal Chechik
Optimizing over the set of orthogonal matrices is a central component in problems like sparse-PCA or tensor decomposition.
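A generic sketch of one way to stay on the orthogonal manifold during optimization, via a Cayley-transform retraction (a standard technique, not necessarily the paper's algorithm):

```python
import numpy as np

def cayley_step(Q, G, lr=0.1):
    """One retraction step: project the Euclidean gradient G to a
    skew-symmetric direction A, then Q <- (I + lr/2 A)^{-1} (I - lr/2 A) Q,
    which keeps Q exactly orthogonal."""
    A = G @ Q.T - Q @ G.T                       # skew-symmetric: A = -A.T
    I = np.eye(Q.shape[0])
    return np.linalg.solve(I + lr / 2 * A, I - lr / 2 * A) @ Q

Q = np.linalg.qr(np.random.randn(5, 5))[0]      # random orthogonal start
Q = cayley_step(Q, np.random.randn(5, 5))
print(np.allclose(Q @ Q.T, np.eye(5)))          # True: orthogonality preserved
```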
no code implementations • NeurIPS 2010 • Uri Shalit, Daphna Weinshall, Gal Chechik
When learning models that are represented in matrix forms, enforcing a low-rank constraint can dramatically improve the memory and run time complexity, while providing a natural regularization of the model.
no code implementations • NeurIPS 2009 • Gal Chechik, Uri Shalit, Varun Sharma, Samy Bengio
We describe OASIS, a method for learning pairwise similarity that is fast and scales linearly with the number of objects and the number of non-zero features.
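A sketch of the OASIS-style update: a bilinear similarity S(p, q) = p^T W q trained with a passive-aggressive step on a triplet hinge loss, which touches only the features active in the triplet and hence scales with the number of non-zero features:

```python
import numpy as np

def oasis_step(W, p, q_pos, q_neg, C=0.1):
    """One update: push the relevant q_pos to outrank q_neg by a margin."""
    loss = max(0.0, 1.0 - p @ W @ q_pos + p @ W @ q_neg)
    if loss > 0:
        V = np.outer(p, q_pos - q_neg)          # gradient of the margin term
        tau = min(C, loss / (V * V).sum())      # PA-I step size, capped by C
        W = W + tau * V
    return W

d = 16
W = np.eye(d)                                   # identity initialization
W = oasis_step(W, np.random.rand(d), np.random.rand(d), np.random.rand(d))
```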