Novel Concepts

42 papers with code • 0 benchmarks • 0 datasets

Measures the ability of models to uncover an underlying concept that unites several ostensibly disparate entities, chosen so that they rarely co-occur. This provides a limited test of a model's ability to creatively construct the abstraction needed to make sense of a situation it cannot have memorized during training.

Source: BIG-bench
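
As an illustration, an item in this style might pair several unrelated-seeming entities with candidate uniting concepts and score the model by exact match. The example below is hypothetical, not drawn from BIG-bench:

```python
# Hypothetical item in the style of the task (not an actual BIG-bench example).
item = {
    "question": "What do a zipper, a DNA helix, and a railroad track have in common?",
    "choices": [
        "they are all made of metal",
        "they all consist of two parallel strands joined by regular cross-links",
        "they are all used for transportation",
    ],
    "answer": 1,
}

def score(model_choice: int, item: dict) -> bool:
    """Exact-match scoring: credit only the uniting abstraction."""
    return model_choice == item["answer"]

print(score(1, item))  # True
```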

Most implemented papers

PaLM: Scaling Language Modeling with Pathways

lucidrains/PaLM-pytorch Google Research 2022

To further our understanding of the impact of scale on few-shot learning, we trained a 540-billion parameter, densely activated, Transformer language model, which we call Pathways Language Model (PaLM).

Dynamic Few-Shot Visual Learning without Forgetting

gidariss/FewShotWithoutForgetting CVPR 2018

In this context, the goal of our work is to devise a few-shot visual learning system that, at test time, can efficiently learn novel categories from only a few training examples while not forgetting the initial categories on which it was trained (here called base categories).
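
A minimal sketch of this mechanism, assuming PyTorch; the paper's attention-based weight generator is simplified here to averaging support features, and all names are mine:

```python
import torch
import torch.nn.functional as F

def extend_classifier(base_weights, support_feats, support_labels, num_novel):
    """Append per-novel-class weights (here: the mean of L2-normalized support
    features) to the frozen base-class weights, then classify by cosine
    similarity so base and novel scores stay on a comparable scale."""
    support_feats = F.normalize(support_feats, dim=1)
    novel_weights = torch.stack([
        support_feats[support_labels == c].mean(dim=0) for c in range(num_novel)
    ])
    return torch.cat([base_weights, novel_weights], dim=0)

def cosine_logits(features, weights, scale=10.0):
    return scale * F.normalize(features, dim=1) @ F.normalize(weights, dim=1).T

# Toy usage: 64-dim features, 5 base classes, 2 novel classes with 5 shots each.
base_w = torch.randn(5, 64)
shots = torch.randn(10, 64)
labels = torch.arange(2).repeat_interleave(5)
w = extend_classifier(base_w, shots, labels, num_novel=2)
print(cosine_logits(torch.randn(3, 64), w).shape)  # torch.Size([3, 7])
```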

Revisit Systematic Generalization via Meaningful Learning

shininglab/systematic-generalization-via-meaningful-learning 14 Mar 2020

Humans can systematically generalize to novel compositions of existing concepts.

DER: Dynamically Expandable Representation for Class Incremental Learning

Rhyssiyan/DER-ClassIL.pytorch CVPR 2021

We address the problem of class incremental learning, which is a core step towards achieving adaptive vision intelligence.
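
A minimal sketch of the expansion step, assuming PyTorch; the paper's channel-level pruning and auxiliary loss are omitted, and all names are mine:

```python
import torch
import torch.nn as nn

class ExpandableNet(nn.Module):
    """At each incremental step, freeze the existing feature extractors, add
    a new one, and retrain a classifier on the concatenated features."""
    def __init__(self, make_extractor, feat_dim, num_classes):
        super().__init__()
        self.make_extractor = make_extractor
        self.feat_dim = feat_dim
        self.extractors = nn.ModuleList([make_extractor()])
        self.classifier = nn.Linear(feat_dim, num_classes)

    def expand(self, num_new_classes):
        for p in self.extractors.parameters():
            p.requires_grad_(False)          # freeze old representations
        self.extractors.append(self.make_extractor())
        old = self.classifier
        self.classifier = nn.Linear(self.feat_dim * len(self.extractors),
                                    old.out_features + num_new_classes)

    def forward(self, x):
        feats = torch.cat([e(x) for e in self.extractors], dim=1)
        return self.classifier(feats)

# Toy usage: tiny MLP extractors over 32-dim inputs.
net = ExpandableNet(lambda: nn.Sequential(nn.Linear(32, 16), nn.ReLU()), 16, 10)
net.expand(num_new_classes=10)
print(net(torch.randn(4, 32)).shape)  # torch.Size([4, 20])
```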

Learning like a Child: Fast Novel Visual Concept Learning from Sentence Descriptions of Images

mjhucla/TF-mRNN ICCV 2015

In particular, we propose a transposed weight sharing scheme, which not only improves performance on image captioning, but also makes the model more suitable for the novel concept learning task.
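
The transposed weight sharing scheme ties the output projection to the transpose of the word-embedding matrix, so enrolling a novel word adds only one new embedding row. A rough PyTorch sketch under that reading, with all names mine:

```python
import torch
import torch.nn as nn

class TiedDecoder(nn.Module):
    """Word embedding whose transpose also serves as the output projection,
    so adding a novel word introduces a single new row of parameters."""
    def __init__(self, vocab_size, dim):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)

    def logits(self, hidden):                 # hidden: (batch, dim)
        return hidden @ self.embed.weight.T   # (batch, vocab_size)

    def add_words(self, n_new):
        old = self.embed.weight.data
        self.embed = nn.Embedding(old.size(0) + n_new, old.size(1))
        self.embed.weight.data[: old.size(0)] = old   # keep learned rows

dec = TiedDecoder(vocab_size=1000, dim=256)
dec.add_words(5)                               # enroll 5 novel concept words
print(dec.logits(torch.randn(2, 256)).shape)   # torch.Size([2, 1005])
```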

Deep Compositional Captioning: Describing Novel Object Categories without Paired Training Data

LisaAnne/DCC CVPR 2016

Current deep caption models can only describe objects contained in paired image-sentence corpora, despite the fact that they are pre-trained with large object recognition datasets, namely ImageNet.

Zero-Shot Object Detection: Learning to Simultaneously Recognize and Localize Novel Concepts

salman-h-khan/ZSD_Release 16 Mar 2018

We hypothesize that this setting is ill-suited for real-world applications where unseen objects appear only as a part of a complex scene, warranting both the 'recognition' and 'localization' of an unseen category.

Decoupled Novel Object Captioner

Pranav21091996/Semantic_Fidelity-and-Egoshots 11 Apr 2018

Thus, the sequence model can be decoupled from the novel object descriptions.
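
One way to picture the decoupling: the sequence model emits a placeholder token wherever an object word belongs, and a separate detection-driven lookup substitutes the actual word afterwards. A toy sketch of that substitution step (the token name and helper are hypothetical):

```python
def fill_placeholders(caption_tokens, detected_objects, placeholder="<PL>"):
    """Replace placeholder tokens with detected object names, leaving the
    sequence model itself untouched by novel-object vocabulary."""
    objects = iter(detected_objects)
    return [next(objects, placeholder) if t == placeholder else t
            for t in caption_tokens]

tokens = ["a", "<PL>", "sitting", "on", "a", "<PL>"]
print(" ".join(fill_placeholders(tokens, ["zebra", "suitcase"])))
# a zebra sitting on a suitcase
```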

Multi-level Semantic Feature Augmentation for One-shot Learning

tankche1/Semantic-Feature-Augmentation-in-Few-shot-Learning 15 Apr 2018

In semantic space, we search for related concepts, which are then projected back into the image feature spaces by the decoder portion of the TriNet.
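
A rough sketch of that round trip, assuming an encoder/decoder pair and a table of concept embeddings; all names, shapes, and the nearest-neighbor choice of "related concepts" are mine:

```python
import torch
import torch.nn.functional as F

def augment_features(img_feat, encoder, decoder, concept_embs, k=3):
    """Map an image feature into semantic space, pick the k nearest concept
    embeddings, and decode them back into extra (synthetic) image features."""
    z = encoder(img_feat)                                  # (dim_sem,)
    sims = F.cosine_similarity(z.unsqueeze(0), concept_embs)
    neighbors = concept_embs[sims.topk(k).indices]         # (k, dim_sem)
    return decoder(neighbors)                              # (k, dim_img)

# Toy usage with linear encoder/decoder and random concept embeddings.
enc = torch.nn.Linear(64, 32)
dec = torch.nn.Linear(32, 64)
concepts = torch.randn(100, 32)
extra = augment_features(torch.randn(64), enc, dec, concepts)
print(extra.shape)  # torch.Size([3, 64])
```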

Understanding MCMC Dynamics as Flows on the Wasserstein Space

chang-ml-thu/FGH-flow 1 Feb 2019

It is known that the Langevin dynamics used in MCMC is the gradient flow of the KL divergence on the Wasserstein space, which helps convergence analysis and inspires recent particle-based variational inference methods (ParVIs).
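
Concretely, the Langevin dynamics in question simulate dx_t = ∇log p(x_t) dt + √2 dW_t, whose marginal density descends KL(q‖p) along the Wasserstein gradient flow. A minimal unadjusted-Langevin sampler for a standard normal target (step size and names are mine):

```python
import numpy as np

rng = np.random.default_rng(0)

def langevin_sample(grad_log_p, x0, step=1e-2, n_steps=5000):
    """Unadjusted Langevin dynamics: x <- x + step * grad log p(x)
    + sqrt(2 * step) * N(0, I). The particle distribution drifts toward p,
    following the Wasserstein gradient flow of KL(q || p)."""
    x = x0.copy()
    for _ in range(n_steps):
        x += step * grad_log_p(x) + np.sqrt(2 * step) * rng.standard_normal(x.shape)
    return x

# Target: standard normal, so grad log p(x) = -x.
samples = langevin_sample(lambda x: -x, np.zeros(2000))
print(round(samples.mean(), 2), round(samples.std(), 2))  # approx 0.0 and 1.0
```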