Search Results for author: Gal Chechik

Found 81 papers, 39 papers with code

Improved Generalization of Weight Space Networks via Augmentations

no code implementations 6 Feb 2024 Aviv Shamsian, Aviv Navon, David W. Zhang, Yan Zhang, Ethan Fetaya, Gal Chechik, Haggai Maron

Learning in deep weight spaces (DWS), where neural networks process the weights of other neural networks, is an emerging research direction, with applications to 2D and 3D neural fields (INRs, NeRFs), as well as making inferences about other types of neural networks.

Contrastive Learning Data Augmentation

Bayesian Uncertainty for Gradient Aggregation in Multi-Task Learning

1 code implementation 6 Feb 2024 Idan Achituve, Idit Diamant, Arnon Netzer, Gal Chechik, Ethan Fetaya

Running a dedicated model for each task is computationally expensive and therefore there is a great interest in multi-task learning (MTL).

Bayesian Inference Multi-Task Learning

Training-Free Consistent Text-to-Image Generation

1 code implementation 5 Feb 2024 Yoad Tewel, Omri Kaduri, Rinon Gal, Yoni Kasten, Lior Wolf, Gal Chechik, Yuval Atzmon

Text-to-image models offer a new level of creative flexibility by allowing users to guide the image generation process through natural language.

Story Visualization Text-to-Image Generation

Fixed-point Inversion for Text-to-image diffusion models

no code implementations 19 Dec 2023 Barak Meiri, Dvir Samuel, Nir Darshan, Gal Chechik, Shai Avidan, Rami Ben-Ari

Several applications of these models, including image editing, interpolation, and semantic augmentation, require diffusion inversion.

Breathing Life Into Sketches Using Text-to-Video Priors

no code implementations 21 Nov 2023 Rinon Gal, Yael Vinker, Yuval Alaluf, Amit H. Bermano, Daniel Cohen-Or, Ariel Shamir, Gal Chechik

A sketch is one of the most intuitive and versatile tools humans use to convey their ideas visually.

Data Augmentations in Deep Weight Spaces

no code implementations 15 Nov 2023 Aviv Shamsian, David W. Zhang, Aviv Navon, Yan Zhang, Miltiadis Kofinas, Idan Achituve, Riccardo Valperga, Gertjan J. Burghouts, Efstratios Gavves, Cees G. M. Snoek, Ethan Fetaya, Gal Chechik, Haggai Maron

Learning in weight spaces, where neural networks process the weights of other deep neural networks, has emerged as a promising research direction with applications in various fields, from analyzing and editing neural fields and implicit neural representations, to network pruning and quantization.

Data Augmentation Network Pruning +1

Equivariant Deep Weight Space Alignment

no code implementations 20 Oct 2023 Aviv Navon, Aviv Shamsian, Ethan Fetaya, Gal Chechik, Nadav Dym, Haggai Maron

To accelerate the alignment process and improve its quality, we propose a novel framework aimed at learning to solve the weight alignment problem, which we name Deep-Align.

Domain-Agnostic Tuning-Encoder for Fast Personalization of Text-To-Image Models

no code implementations 13 Jul 2023 Moab Arar, Rinon Gal, Yuval Atzmon, Gal Chechik, Daniel Cohen-Or, Ariel Shamir, Amit H. Bermano

Text-to-image (T2I) personalization allows users to guide the creative image generation process by combining their own visual concepts in natural language prompts.

Image Generation

Linguistic Binding in Diffusion Models: Enhancing Attribute Correspondence through Attention Map Alignment

1 code implementation NeurIPS 2023 Royi Rassin, Eran Hirsch, Daniel Glickman, Shauli Ravfogel, Yoav Goldberg, Gal Chechik

This reflects an impaired mapping between linguistic binding of entities and modifiers in the prompt and visual binding of the corresponding elements in the generated image.

Attribute Sentence +1

Norm-guided latent space exploration for text-to-image generation

1 code implementation NeurIPS 2023 Dvir Samuel, Rami Ben-Ari, Nir Darshan, Haggai Maron, Gal Chechik

Text-to-image diffusion models show great potential in synthesizing a large variety of concepts in new compositions and scenarios.

Long-tail Learning Text-to-Image Generation

DisCLIP: Open-Vocabulary Referring Expression Generation

no code implementations 30 May 2023 Lior Bracha, Eitan Shaar, Aviv Shamsian, Ethan Fetaya, Gal Chechik

Our results highlight the potential of using pre-trained visual-semantic models for generating high-quality contextual descriptions.

Referring Expression Referring Expression Generation

Key-Locked Rank One Editing for Text-to-Image Personalization

no code implementations 2 May 2023 Yoad Tewel, Rinon Gal, Gal Chechik, Yuval Atzmon

The task of T2I personalization poses multiple hard challenges, such as maintaining high visual fidelity while allowing creative control, combining multiple personalized concepts in a single image, and keeping a small model size.

CALM: Conditional Adversarial Latent Models for Directable Virtual Characters

no code implementations 2 May 2023 Chen Tessler, Yoni Kasten, Yunrong Guo, Shie Mannor, Gal Chechik, Xue Bin Peng

In this work, we present Conditional Adversarial Latent Models (CALM), an approach for generating diverse and directable behaviors for user-controlled interactive virtual characters.

Imitation Learning

Generating images of rare concepts using pre-trained diffusion models

1 code implementation 27 Apr 2023 Dvir Samuel, Rami Ben-Ari, Simon Raviv, Nir Darshan, Gal Chechik

We show that their limitation is partly due to the long-tail nature of their training data: web-crawled data sets are strongly unbalanced, causing models to under-represent concepts from the tail of the distribution.

Data Augmentation Text-to-Image Generation

Graph Positional Encoding via Random Feature Propagation

no code implementations 6 Mar 2023 Moshe Eliasof, Fabrizio Frasca, Beatrice Bevilacqua, Eran Treister, Gal Chechik, Haggai Maron

Two main families of node feature augmentation schemes have been explored for enhancing GNNs: random features and spectral positional encoding.

Graph Classification Node Classification

Encoder-based Domain Tuning for Fast Personalization of Text-to-Image Models

no code implementations 23 Feb 2023 Rinon Gal, Moab Arar, Yuval Atzmon, Amit H. Bermano, Gal Chechik, Daniel Cohen-Or

Specifically, we employ two components: First, an encoder that takes as input a single image of a target concept from a given domain, e.g., a specific face, and learns to map it into a word embedding representing the concept.

Novel Concepts

Guided Deep Kernel Learning

1 code implementation 19 Feb 2023 Idan Achituve, Gal Chechik, Ethan Fetaya

Combining Gaussian processes with the expressive power of deep neural networks is commonly done nowadays through deep kernel learning (DKL).

Gaussian Processes

Auxiliary Learning as an Asymmetric Bargaining Game

1 code implementation 31 Jan 2023 Aviv Shamsian, Aviv Navon, Neta Glazer, Kenji Kawaguchi, Gal Chechik, Ethan Fetaya

Auxiliary learning is an effective method for enhancing the generalization capabilities of trained models, particularly when dealing with small datasets.

Auxiliary Learning

SoftTreeMax: Exponential Variance Reduction in Policy Gradient via Tree Search

no code implementations 30 Jan 2023 Gal Dalal, Assaf Hallak, Gugan Thoppe, Shie Mannor, Gal Chechik

We prove that the resulting variance decays exponentially with the planning horizon as a function of the expansion policy.

Policy Gradient Methods

Equivariant Architectures for Learning in Deep Weight Spaces

1 code implementation 30 Jan 2023 Aviv Navon, Aviv Shamsian, Idan Achituve, Ethan Fetaya, Gal Chechik, Haggai Maron

Designing machine learning architectures for processing neural networks in their raw weight matrix form is a newly introduced research direction.

SoftTreeMax: Policy Gradient with Tree Search

no code implementations 28 Sep 2022 Gal Dalal, Assaf Hallak, Shie Mannor, Gal Chechik

This allows us to reduce the variance of gradients by three orders of magnitude and to benefit from better sample complexity compared with standard policy gradient.

Policy Gradient Methods

An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion

7 code implementations 2 Aug 2022 Rinon Gal, Yuval Alaluf, Yuval Atzmon, Or Patashnik, Amit H. Bermano, Gal Chechik, Daniel Cohen-Or

Yet, it is unclear how such freedom can be exercised to generate images of specific unique concepts, modify their appearance, or compose them in new roles and novel scenes.

Text-to-Image Generation

Reinforcement Learning with a Terminator

1 code implementation 30 May 2022 Guy Tennenholtz, Nadav Merlis, Lior Shani, Shie Mannor, Uri Shalit, Gal Chechik, Assaf Hallak, Gal Dalal

We learn the parameters of the TerMDP and leverage the structure of the estimation problem to provide state-wise confidence bounds.

Autonomous Driving reinforcement-learning +1

"This is my unicorn, Fluffy": Personalizing frozen vision-language representations

2 code implementations 4 Apr 2022 Niv Cohen, Rinon Gal, Eli A. Meirom, Gal Chechik, Yuval Atzmon

We propose an architecture for solving PerVL that operates by extending the input vocabulary of a pretrained model with new word embeddings for the new personalized concepts.

Image Retrieval Retrieval +5

Example-based Hypernetworks for Out-of-Distribution Generalization

1 code implementation 27 Mar 2022 Tomer Volk, Eyal Ben-David, Ohad Amosy, Gal Chechik, Roi Reichart

Our innovative framework employs example-based Hypernetwork adaptation: a T5 encoder-decoder initially generates a unique signature from an input example, embedding it within the source domains' semantic space.

Domain Adaptation Natural Language Inference +3

Multi-Task Learning as a Bargaining Game

2 code implementations 2 Feb 2022 Aviv Navon, Aviv Shamsian, Idan Achituve, Haggai Maron, Kenji Kawaguchi, Gal Chechik, Ethan Fetaya

In this paper, we propose viewing the gradients combination step as a bargaining game, where tasks negotiate to reach an agreement on a joint direction of parameter update.

Multi-Task Learning
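The snippet above casts gradient combination as a bargaining game among tasks. A minimal numpy sketch of one such bargained combination, under the (hypothetical) equal-utility condition that the joint direction d gives every task g_i . d = 1/alpha_i — the damped fixed-point solver here is illustrative, not the paper's exact algorithm:

```python
import numpy as np

def bargained_direction(grads, n_iter=500):
    """Toy 'bargaining' combination of per-task gradients: find weights
    alpha so the joint direction d = sum_i alpha_i g_i satisfies the
    equal-utility condition g_i . d = 1/alpha_i for every task.
    Illustrative solver only, not the paper's Nash-MTL algorithm."""
    K = grads @ grads.T                          # pairwise gradient inner products
    alpha = np.ones(len(grads))
    for _ in range(n_iter):
        util = np.clip(K @ alpha, 1e-8, None)    # g_i . d under current alpha
        alpha = 0.5 * alpha + 0.5 / util         # damped fixed-point step
    return grads.T @ alpha                       # joint parameter-update direction
```

For two orthogonal unit gradients the solver returns their plain sum, and in general the equal-utility condition forces every task's inner product with the joint direction to stay positive, so no task is sacrificed for another.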

Learning to reason about and to act on physical cascading events

no code implementations 2 Feb 2022 Yuval Atzmon, Eli A. Meirom, Shie Mannor, Gal Chechik

Reasoning and interacting with dynamic environments is a fundamental problem in AI, but it becomes extremely challenging when actions can trigger cascades of cross-dependent events.

counterfactual

Planning and Learning with Adaptive Lookahead

no code implementations 28 Jan 2022 Aviv Rosenberg, Assaf Hallak, Shie Mannor, Gal Chechik, Gal Dalal

Some of the most powerful reinforcement learning frameworks use planning for action selection.

Federated Learning with Heterogeneous Architectures using Graph HyperNetworks

no code implementations 20 Jan 2022 Or Litany, Haggai Maron, David Acuna, Jan Kautz, Gal Chechik, Sanja Fidler

Standard Federated Learning (FL) techniques are limited to clients with identical network architectures.

Federated Learning

On-Demand Unlabeled Personalized Federated Learning

no code implementations 16 Nov 2021 Ohad Amosy, Gal Eyal, Gal Chechik

In both FL and PFL, all clients participate in the training process and their labeled data are used for training.

Domain Adaptation Multi-Task Learning +1

Object-Region Video Transformers

1 code implementation CVPR 2022 Roei Herzig, Elad Ben-Avraham, Karttikeya Mangalam, Amir Bar, Gal Chechik, Anna Rohrbach, Trevor Darrell, Amir Globerson

In this work, we present Object-Region Video Transformers (ORViT), an object-centric approach that extends video transformer layers with a block that directly incorporates object representations.

Action Detection Few-Shot action recognition +3

On Covariate Shift of Latent Confounders in Imitation and Reinforcement Learning

no code implementations ICLR 2022 Guy Tennenholtz, Assaf Hallak, Gal Dalal, Shie Mannor, Gal Chechik, Uri Shalit

We analyze the limitations of learning from such data with and without external reward, and propose an adjustment of standard imitation learning algorithms to fit this setup.

Imitation Learning Recommendation Systems +2

Inference-Time Personalized Federated Learning

no code implementations 29 Sep 2021 Ohad Amosy, Gal Eyal, Gal Chechik

That client representation is fed to a hypernetwork that generates a personalized model for that client.

Domain Adaptation Multi-Task Learning +1

StyleGAN-NADA: CLIP-Guided Domain Adaptation of Image Generators

3 code implementations 2 Aug 2021 Rinon Gal, Or Patashnik, Haggai Maron, Gal Chechik, Daniel Cohen-Or

Can a generative model be trained to produce images from a specific domain, guided by a text prompt only, without seeing any image?

Domain Adaptation Image Manipulation

Improve Agents without Retraining: Parallel Tree Search with Off-Policy Correction

1 code implementation NeurIPS 2021 Assaf Hallak, Gal Dalal, Steven Dalton, Iuri Frosio, Shie Mannor, Gal Chechik

We first discover and analyze a counter-intuitive phenomenon: action selection through TS and a pre-trained value function often leads to lower performance compared to the original pre-trained agent, even when having access to the exact state and reward in future steps.

Atari Games

DETReg: Unsupervised Pretraining with Region Priors for Object Detection

1 code implementation CVPR 2022 Amir Bar, Xin Wang, Vadim Kantorov, Colorado J Reed, Roei Herzig, Gal Chechik, Anna Rohrbach, Trevor Darrell, Amir Globerson

Recent self-supervised pretraining methods for object detection largely focus on pretraining the backbone of the object detector, neglecting key parts of detection architecture.

Few-Shot Learning Few-Shot Object Detection +6

Distributional Robustness Loss for Long-tail Learning

no code implementations ICCV 2021 Dvir Samuel, Gal Chechik

The new robustness loss can be combined with various classifier balancing techniques and can be applied to representations at several layers of the deep model.

Long-tail Learning

Personalized Federated Learning using Hypernetworks

2 code implementations 8 Mar 2021 Aviv Shamsian, Aviv Navon, Ethan Fetaya, Gal Chechik

In this approach, a central hypernetwork model is trained to generate a set of models, one model for each client.

Personalized Federated Learning
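The snippet above describes a central hypernetwork that generates one model per client. A minimal numpy sketch of that idea — class and parameter names are illustrative, not the paper's pFedHN implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

class ClientHyperNet:
    """Toy central hypernetwork: a shared projection maps a per-client
    embedding to the weights of a small linear classifier, so each
    client receives a personalized model (illustrative sketch only)."""
    def __init__(self, embed_dim=4, in_dim=8, n_classes=3):
        self.in_dim, self.n_classes = in_dim, n_classes
        self.proj = rng.normal(0.0, 0.1, (embed_dim, in_dim * n_classes))
        self.embeddings = {}                       # client id -> embedding

    def generate(self, client_id):
        e = self.embeddings.setdefault(
            client_id, rng.normal(0.0, 1.0, self.proj.shape[0]))
        return (e @ self.proj).reshape(self.in_dim, self.n_classes)
```

In training, each client's loss would be backpropagated through `generate` into both the shared projection and that client's embedding; only the small embedding is client-specific, which is what makes the scheme communication-efficient and personalized at once.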

ACAV100M: Automatic Curation of Large-Scale Datasets for Audio-Visual Video Representation Learning

1 code implementation ICCV 2021 Sangho Lee, Jiwan Chung, Youngjae Yu, Gunhee Kim, Thomas Breuel, Gal Chechik, Yale Song

We demonstrate that our approach finds videos with high audio-visual correspondence and show that self-supervised models trained on our data achieve competitive performances compared to models trained on existing manually curated datasets.

Representation Learning

Teacher-Student Consistency For Multi-Source Domain Adaptation

1 code implementation 20 Oct 2020 Ohad Amosy, Gal Chechik

Then, we train a student network using the pseudo labels and regularize the teacher to fit the student predictions.

Domain Adaptation Object Recognition +1
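The snippet above pairs a pseudo-label loss for the student with a consistency term pulling the teacher toward the student. A small numpy sketch of those two terms — function and variable names are illustrative, not the paper's code:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)       # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def teacher_student_losses(teacher_logits, student_logits):
    """Sketch of the two objectives: the student is trained with
    cross-entropy on the teacher's hard pseudo labels, while a
    consistency term regularizes the teacher toward the student's
    predicted distribution (illustrative, not the paper's exact losses)."""
    pseudo = teacher_logits.argmax(axis=1)             # hard pseudo labels
    p_student = softmax(student_logits)
    student_loss = -np.mean(
        np.log(p_student[np.arange(len(pseudo)), pseudo] + 1e-12))
    consistency = np.mean((softmax(teacher_logits) - p_student) ** 2)
    return student_loss, consistency
```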

From Local Structures to Size Generalization in Graph Neural Networks

no code implementations 17 Oct 2020 Gilad Yehudai, Ethan Fetaya, Eli Meirom, Gal Chechik, Haggai Maron

In this paper, we identify an important type of data where generalization from small to large graphs is challenging: graph distributions for which the local structure depends on the graph size.

Combinatorial Optimization Domain Adaptation +2

Controlling Graph Dynamics with Reinforcement Learning and Graph Neural Networks

no code implementations 11 Oct 2020 Eli A. Meirom, Haggai Maron, Shie Mannor, Gal Chechik

We consider the problem of controlling a partially-observed dynamic process on a graph by a limited number of interventions.

Marketing reinforcement-learning +2

Learning the Pareto Front with Hypernetworks

1 code implementation ICLR 2021 Aviv Navon, Aviv Shamsian, Gal Chechik, Ethan Fetaya

Here, we tackle the problem of learning the entire Pareto front, with the capability of selecting a desired operating point on the front after training.

Fairness Multiobjective Optimization +3

ZEST: Zero-shot Learning from Text Descriptions using Textual Similarity and Visual Summarization

1 code implementation Findings of the Association for Computational Linguistics 2020 Tzuf Paz-Argaman, Yuval Atzmon, Gal Chechik, Reut Tsarfaty

Specifically, given birds' images with free-text descriptions of their species, we learn to classify images of previously-unseen species based on species descriptions.

Zero-Shot Learning

Learning Object Detection from Captions via Textual Scene Attributes

no code implementations 30 Sep 2020 Achiya Jerbi, Roei Herzig, Jonathan Berant, Gal Chechik, Amir Globerson

In this work, we argue that captions contain much richer information about the image, including attributes of objects and their relations.

Image Captioning Object +2

On Size Generalization in Graph Neural Networks

no code implementations 28 Sep 2020 Gilad Yehudai, Ethan Fetaya, Eli Meirom, Gal Chechik, Haggai Maron

We further demonstrate on several tasks, that training GNNs on small graphs results in solutions which do not generalize to larger graphs.

Combinatorial Optimization Domain Adaptation +1

Compositional Video Synthesis with Action Graphs

1 code implementation 27 Jun 2020 Amir Bar, Roei Herzig, Xiaolong Wang, Anna Rohrbach, Gal Chechik, Trevor Darrell, Amir Globerson

Our generative model for this task (AG2Vid) disentangles motion and appearance features, and by incorporating a scheduling mechanism for actions facilitates a timely and coordinated video generation.

Scheduling Video Generation +2

A causal view of compositional zero-shot recognition

1 code implementation NeurIPS 2020 Yuval Atzmon, Felix Kreuk, Uri Shalit, Gal Chechik

This leads to consistent misclassification of samples from a new distribution, like new combinations of known components.

Attribute Compositional Zero-Shot Learning

Auxiliary Learning by Implicit Differentiation

1 code implementation ICLR 2021 Aviv Navon, Idan Achituve, Haggai Maron, Gal Chechik, Ethan Fetaya

Two main challenges arise in this multi-task learning setting: (i) designing useful auxiliary tasks; and (ii) combining auxiliary tasks into a single coherent loss.

Auxiliary Learning Image Segmentation +3

Contrastive Learning for Weakly Supervised Phrase Grounding

1 code implementation ECCV 2020 Tanmay Gupta, Arash Vahdat, Gal Chechik, Xiaodong Yang, Jan Kautz, Derek Hoiem

Given pairs of images and captions, we maximize compatibility of the attention-weighted regions and the words in the corresponding caption, compared to non-corresponding pairs of images and captions.

Contrastive Learning Language Modelling +1
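The snippet above maximizes compatibility of corresponding image-caption pairs against non-corresponding ones. A generic in-batch InfoNCE sketch of that contrast — a standard formulation, not the paper's exact attention-weighted region-word objective:

```python
import numpy as np

def info_nce(img_emb, cap_emb, temp=0.1):
    """In-batch contrastive loss: matching image/caption pairs sit on
    the diagonal of the score matrix; all other pairs in the batch act
    as negatives. Generic sketch, not the paper's region-word loss."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    cap = cap_emb / np.linalg.norm(cap_emb, axis=1, keepdims=True)
    logits = img @ cap.T / temp                       # cosine similarities
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_p = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_p))                   # cross-entropy on matches
```

The loss is low when each image scores highest against its own caption and high under wrong pairings, which is the property the weakly supervised grounding objective exploits.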

From Generalized zero-shot learning to long-tail with class descriptors

1 code implementation 5 Apr 2020 Dvir Samuel, Yuval Atzmon, Gal Chechik

Real-world data is predominantly unbalanced and long-tailed, but deep models struggle to recognize rare classes in the presence of frequent classes.

Few-Shot Learning Generalized Zero-Shot Learning +1

Self-Supervised Learning for Domain Adaptation on Point-Clouds

3 code implementations 29 Mar 2020 Idan Achituve, Haggai Maron, Gal Chechik

Self-supervised learning (SSL) is a technique for learning useful representations from unlabeled data.

Domain Adaptation Self-Supervised Learning

Learning Object Permanence from Video

1 code implementation ECCV 2020 Aviv Shamsian, Ofri Kleinfeld, Amir Globerson, Gal Chechik

The fourth subtask, where a target object is carried by a containing object, is particularly challenging because it requires a system to reason about a moving location of an invisible object.

Object Video Object Tracking

On Learning Sets of Symmetric Elements

2 code implementations ICML 2020 Haggai Maron, Or Litany, Gal Chechik, Ethan Fetaya

We first characterize the space of linear layers that are equivariant both to element reordering and to the inherent symmetries of elements, like translation in the case of images.

3D Shape Recognition Deblurring +1
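The snippet above characterizes linear layers equivariant to reordering the set's elements. A minimal numpy sketch of the reordering part — a per-element transform plus a shared transform of the set mean, in the spirit of DeepSets; the paper additionally handles each element's internal symmetries, which this sketch omits:

```python
import numpy as np

def set_equivariant_linear(X, A, B):
    """One linear layer over a set X of shape (n_elements, d) that is
    equivariant to permuting the elements: each element gets its own
    transform A plus a transform B of the set mean shared by all
    elements (minimal sketch of the reordering symmetry only)."""
    return X @ A + X.mean(axis=0) @ B   # broadcast shared term to every row
```

Permuting the rows of X permutes the output rows identically, since the mean term is permutation-invariant and the per-element term acts row-wise.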

Cooperative image captioning

no code implementations 26 Jul 2019 Gilad Vered, Gal Oren, Yuval Atzmon, Gal Chechik

Second, we show that the generated descriptions can be kept close to natural by constraining them to be similar to human descriptions.

Image Captioning

Few-Shot Learning with Per-Sample Rich Supervision

no code implementations 10 Jun 2019 Roman Visotsky, Yuval Atzmon, Gal Chechik

Here we describe a new approach to learn with fewer samples, by using additional information that is provided per sample.

Few-Shot Learning General Classification +1

Differentiable Scene Graphs

1 code implementation 26 Feb 2019 Moshiko Raboh, Roei Herzig, Gal Chechik, Jonathan Berant, Amir Globerson

In many domains, it is preferable to train systems jointly in an end-to-end manner, but SGs are not commonly used as intermediate components in visual reasoning systems because, being discrete and sparse, scene-graph representations are non-differentiable and difficult to optimize.

Visual Reasoning

Adaptive Confidence Smoothing for Generalized Zero-Shot Learning

no code implementations CVPR 2019 Yuval Atzmon, Gal Chechik

Specifically, our model consists of three classifiers: a "gating" model that makes soft decisions about whether a sample is from a "seen" class, and two experts: a ZSL expert and an expert model for seen classes.

Generalized Zero-Shot Learning

Metric Learning for Phoneme Perception

no code implementations 20 Sep 2018 Yair Lakretz, Gal Chechik, Evan-Gary Cohen, Alessandro Treves, Naama Friedmann

This study presents a new framework for learning a metric function for perceptual distances among pairs of phonemes.

Metric Learning

Probabilistic AND-OR Attribute Grouping for Zero-Shot Learning

1 code implementation 7 Jun 2018 Yuval Atzmon, Gal Chechik

The soft group structure can be learned from data jointly as part of the model, and can also readily incorporate prior knowledge about groups if available.

Attribute Zero-Shot Learning

Context-aware Captions from Context-agnostic Supervision

1 code implementation CVPR 2017 Ramakrishna Vedantam, Samy Bengio, Kevin Murphy, Devi Parikh, Gal Chechik

We introduce an inference technique to produce discriminative context-aware image captions (captions that describe differences between images or visual concepts) using only generic context-agnostic training data (captions that describe a concept or an image in isolation).

Image Captioning Language Modelling

Gradual Training Method for Denoising Auto Encoders

no code implementations 11 Apr 2015 Alexander Kalmanovich, Gal Chechik

Stacked denoising auto encoders (DAEs) are well known to learn useful deep representations, which can be used to improve supervised training by initializing a deep network.

Denoising General Classification

Gradual training of deep denoising auto encoders

no code implementations 19 Dec 2014 Alexander Kalmanovich, Gal Chechik

Stacked denoising auto encoders (DAEs) are well known to learn useful deep representations, which can be used to improve supervised training by initializing a deep network.

Denoising General Classification

Efficient coordinate-descent for orthogonal matrices through Givens rotations

no code implementations 2 Dec 2013 Uri Shalit, Gal Chechik

Optimizing over the set of orthogonal matrices is a central component in problems like sparse-PCA or tensor decomposition.

Tensor Decomposition

Online Learning in The Manifold of Low-Rank Matrices

no code implementations NeurIPS 2010 Uri Shalit, Daphna Weinshall, Gal Chechik

When learning models that are represented in matrix forms, enforcing a low-rank constraint can dramatically improve the memory and run time complexity, while providing a natural regularization of the model.

Multi-Label Image Classification

An Online Algorithm for Large Scale Image Similarity Learning

no code implementations NeurIPS 2009 Gal Chechik, Uri Shalit, Varun Sharma, Samy Bengio

We describe OASIS, a method for learning pairwise similarity that is fast and scales linearly with the number of objects and the number of non-zero features.
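The snippet above describes OASIS, which learns a bilinear similarity S(p, q) = p W q from triplets. A single passive-aggressive update sketched after the published method — the step-size rule follows the standard PA-I form, and the margin of 1 is the usual hinge convention:

```python
import numpy as np

def oasis_step(W, p, q_pos, q_neg, C=0.1):
    """One passive-aggressive update of a bilinear similarity
    S(p, q) = p @ W @ q on a triplet (p, q_pos, q_neg), sketched after
    OASIS: if the positive does not beat the negative by a margin of 1,
    move W just enough (capped by aggressiveness C) to close the gap."""
    loss = max(0.0, 1.0 - p @ W @ q_pos + p @ W @ q_neg)
    if loss > 0.0:
        V = np.outer(p, q_pos - q_neg)        # gradient of the hinge term
        tau = min(C, loss / (V * V).sum())    # PA-I step size
        W = W + tau * V
    return W
```

Because each update touches only the outer product of two sparse vectors, the cost per triplet scales with the number of non-zero features, which is what makes the method fast at web scale.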
