Compositional Zero-Shot Learning

24 papers with code • 4 benchmarks • 6 datasets

Compositional Zero-Shot Learning (CZSL) is a computer vision task whose goal is to recognize unseen compositions formed from states and objects seen during training. The key challenge in CZSL is the inherent entanglement between state and object within the context of an image. Example benchmarks for this task are MIT-States, UT-Zappos, and C-GQA. Models are usually evaluated with accuracy on both seen and unseen compositions, as well as their Harmonic Mean (HM).
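The harmonic mean reported above can be computed directly from the two accuracies; a minimal sketch (function name is illustrative, not from any specific benchmark toolkit):

```python
def harmonic_mean(seen_acc: float, unseen_acc: float) -> float:
    """Harmonic mean of seen- and unseen-composition accuracy.

    Returns 0 when either accuracy is 0, so a model must do well on
    both seen and unseen compositions to score highly.
    """
    if seen_acc + unseen_acc == 0:
        return 0.0
    return 2 * seen_acc * unseen_acc / (seen_acc + unseen_acc)
```

For example, a model with 60% seen and 30% unseen accuracy gets an HM of 40%, which penalizes the imbalance relative to the 45% arithmetic mean.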

(Image credit: Heosuab)



Most implemented papers

Open World Compositional Zero-Shot Learning

ExplainableML/czsl CVPR 2021

After estimating the feasibility score of each composition, we use these scores to either directly mask the output space or as a margin for the cosine similarity between visual features and compositional embeddings during training.
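The first option described above, masking the output space with feasibility scores, can be sketched as follows. This is a hypothetical illustration, not the repository's actual code; the names, threshold, and array shapes are assumptions:

```python
import numpy as np

def mask_predictions(logits: np.ndarray,
                     feasibility: np.ndarray,
                     threshold: float = 0.0) -> np.ndarray:
    """Mask the compositional output space by feasibility.

    Compositions whose feasibility score falls below `threshold`
    are assigned -inf logits, so they can never be predicted.
    """
    masked = logits.copy()
    masked[feasibility < threshold] = -np.inf
    return masked

# Toy usage: three candidate compositions, one deemed infeasible.
logits = np.array([2.0, 1.5, 3.0])
feas = np.array([0.4, -0.2, 0.1])
pred = int(np.argmax(mask_predictions(logits, feas)))
```

The second option, using the scores as a margin on the cosine similarity during training, shifts the decision boundary instead of hard-masking it.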

Learning Graph Embeddings for Open World Compositional Zero-Shot Learning

ExplainableML/co-cge 3 May 2021

In this work, we drop this assumption by operating in the open-world setting, where no limit is imposed on the compositional space at test time and the search space contains a large number of unseen compositions.

CAILA: Concept-Aware Intra-Layer Adapters for Compositional Zero-Shot Learning

zhaohengz/caila 26 May 2023

In this paper, we study the problem of Compositional Zero-Shot Learning (CZSL), which is to recognize novel attribute-object combinations with pre-existing concepts.

Attributes as Operators: Factorizing Unseen Attribute-Object Compositions

Tushar-N/attributes-as-operators ECCV 2018

In addition, we show that not only can our model recognize unseen compositions robustly in an open-world setting, but it can also generalize to compositions where the objects themselves were unseen during training.

Symmetry and Group in Attribute-Object Compositions

DirtyHarryLYL/SymNet CVPR 2020

To model the compositional nature of these general concepts, it is a good choice to learn them through transformations, such as coupling and decoupling.

A causal view of compositional zero-shot recognition

nv-research-israel/causal_comp NeurIPS 2020

This leads to consistent misclassification of samples from a new distribution, like new combinations of known components.

Learning Graph Embeddings for Compositional Zero-shot Learning

ExplainableML/czsl CVPR 2021

In compositional zero-shot learning, the goal is to recognize unseen compositions (e.g., old dog) of visual primitives observed in the training set: states (e.g., old, cute) and objects (e.g., car, dog).

Independent Prototype Propagation for Zero-Shot Compositionality

FrankRuis/ProtoProp NeurIPS 2021

Next we propagate the independent prototypes through a compositional graph, to learn compositional prototypes of novel attribute-object combinations that reflect the dependencies of the target distribution.

Relation-aware Compositional Zero-shot Learning for Attribute-Object Pair Recognition

daoyuan98/relation-czsl 10 Aug 2021

The concept module generates semantically meaningful features for primitive concepts, whereas the visual module extracts visual features for attributes and objects from input images.

Learning Single/Multi-Attribute of Object with Symmetry and Group

DirtyHarryLYL/SymNet 9 Oct 2021

To model the compositional nature of these concepts, it is a good choice to learn them as transformations, e.g., coupling and decoupling.