Compositional Zero-Shot Learning

24 papers with code • 4 benchmarks • 6 datasets

Compositional Zero-Shot Learning (CZSL) is a computer vision task in which the goal is to recognize unseen compositions formed from states and objects that were seen during training. The key challenge in CZSL is the inherent entanglement between the state and the object within the context of an image. Example benchmarks for this task are MIT-States, UT-Zappos, and C-GQA. Models are usually evaluated by their accuracy on both seen and unseen compositions, as well as the Harmonic Mean (HM) of the two.
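The HM metric summarizes the trade-off between seen and unseen performance; a minimal sketch of how it is computed (the function name is illustrative, not from any specific benchmark toolkit):

```python
def harmonic_mean(seen_acc: float, unseen_acc: float) -> float:
    """Harmonic mean of seen- and unseen-composition accuracies.

    Returns 0.0 if either accuracy is 0, since the harmonic mean
    is dominated by the smaller of the two values.
    """
    if seen_acc == 0 or unseen_acc == 0:
        return 0.0
    return 2 * seen_acc * unseen_acc / (seen_acc + unseen_acc)

# e.g. a model with 60% seen accuracy and 30% unseen accuracy
print(round(harmonic_mean(0.6, 0.3), 3))  # 0.4
```

Because the harmonic mean penalizes imbalance, a model cannot score well on HM by overfitting to seen compositions alone.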

(Image credit: Heosuab)


Latest papers with no code

CSCNET: Class-Specified Cascaded Network for Compositional Zero-Shot Learning

no code yet • 9 Mar 2024

Inspired by this, we propose a novel A-O disentangled framework for CZSL, namely Class-specified Cascaded Network (CSCNet).

Context-based and Diversity-driven Specificity in Compositional Zero-Shot Learning

no code yet • 27 Feb 2024

Our framework evaluates the specificity of attributes by considering the diversity of objects they apply to and their related context.

Revealing the Proximate Long-Tail Distribution in Compositional Zero-Shot Learning

no code yet • 26 Dec 2023

Building upon this insight, we incorporate visual bias caused by compositions into the classifier's training and inference by estimating it as a proximate class prior.

Compositional Zero-Shot Learning for Attribute-Based Object Reference in Human-Robot Interaction

no code yet • 21 Dec 2023

However, visual observations of an object may not be available when it is referred to, and the number of objects and attributes may also be unbounded in open worlds.

Prompt Tuning for Zero-shot Compositional Learning

no code yet • 2 Dec 2023

In order to achieve this goal, a model has to be "smart" and "knowledgeable".

Compositional Zero-shot Learning via Progressive Language-based Observations

no code yet • 23 Nov 2023

Compositional zero-shot learning aims to recognize unseen state-object compositions by leveraging known primitives (state and object) during training.

HOMOE: A Memory-Based and Composition-Aware Framework for Zero-Shot Learning with Hopfield Network and Soft Mixture of Experts

no code yet • 23 Nov 2023

In our paper, we propose a novel framework that, for the first time, combines the Modern Hopfield Network with a Mixture of Experts (HOMOE) to classify the compositions of previously unseen objects.

Prompting Language-Informed Distribution for Compositional Zero-Shot Learning

no code yet • 23 May 2023

Orthogonal to the existing literature of soft, hard, or distributional prompts, our method advocates prompting the LLM-supported class distribution that leads to a better zero-shot generalization.

DRPT: Disentangled and Recurrent Prompt Tuning for Compositional Zero-Shot Learning

no code yet • 2 May 2023

Compositional Zero-shot Learning (CZSL) aims to recognize novel concepts composed of known knowledge without training samples.

Distilled Reverse Attention Network for Open-world Compositional Zero-Shot Learning

no code yet • ICCV 2023

Open-World Compositional Zero-Shot Learning (OW-CZSL) aims to recognize new compositions of seen attributes and objects.