Novel Concepts

51 papers with code • 0 benchmarks • 0 datasets

Measures the ability of models to uncover an underlying concept that unites several ostensibly disparate entities which, ideally, do not co-occur frequently. This provides a limited test of a model's ability to creatively construct the abstraction needed to make sense of a situation it cannot have memorized during training.

Source: BIG-bench

Most implemented papers

Zero-Shot Object Detection: Learning to Simultaneously Recognize and Localize Novel Concepts

salman-h-khan/ZSD_Release 16 Mar 2018

We hypothesize that this setting is ill-suited for real-world applications where unseen objects appear only as a part of a complex scene, warranting both the 'recognition' and 'localization' of an unseen category.

Decoupled Novel Object Captioner

Pranav21091996/Semantic_Fidelity-and-Egoshots 11 Apr 2018

Thus, the sequence model can be decoupled from the novel object descriptions.

Multi-level Semantic Feature Augmentation for One-shot Learning

tankche1/Semantic-Feature-Augmentation-in-Few-shot-Learning 15 Apr 2018

In semantic space, we search for related concepts, which are then projected back into the image feature spaces by the decoder portion of the TriNet.
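The retrieve-and-decode idea above can be sketched in a few lines. This is a minimal illustration, not the paper's TriNet: the linear encoder/decoder, the random concept bank, and the `augment` helper are all hypothetical stand-ins for the learned components.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for TriNet's encoder/decoder: simple linear maps
# between a 64-d image-feature space and a 16-d semantic space.
W_enc = rng.standard_normal((16, 64)) * 0.1   # image features -> semantic space
W_dec = rng.standard_normal((64, 16)) * 0.1   # semantic space -> image features

# A bank of embeddings for known concepts (random here, learned in practice).
concept_bank = rng.standard_normal((100, 16))

def augment(image_feat, k=5):
    """Project a one-shot image feature into semantic space, retrieve the
    k nearest known concepts, and decode them back as synthetic features."""
    z = W_enc @ image_feat                         # encode
    dists = np.linalg.norm(concept_bank - z, axis=1)
    neighbors = concept_bank[np.argsort(dists)[:k]]  # related concepts
    return neighbors @ W_dec.T                     # decode to feature space

feat = rng.standard_normal(64)
synthetic = augment(feat)
print(synthetic.shape)  # (5, 64): five synthetic features for one-shot training
```

The synthetic features would then be added to the one-shot class's training set, which is the augmentation step the snippet describes.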

Understanding MCMC Dynamics as Flows on the Wasserstein Space

chang-ml-thu/FGH-flow 1 Feb 2019

It is known that the Langevin dynamics used in MCMC is the gradient flow of the KL divergence on the Wasserstein space, which helps convergence analysis and inspires recent particle-based variational inference methods (ParVIs).
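The gradient-flow view referenced here underlies plain Langevin sampling: discretizing the flow of the KL divergence gives the update x ← x + (ε/2)∇log p(x) + √ε ξ. Below is a minimal sketch with a standard Gaussian target; the step size and particle count are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_log_p(x):
    """Score of a standard 1-D Gaussian target: grad log p(x) = -x."""
    return -x

# Unadjusted Langevin dynamics: a discretization of the gradient flow
# of KL(q || p) on the Wasserstein space.
eps = 0.05
particles = rng.standard_normal(2000) * 3.0 + 5.0  # start far from the target
for _ in range(2000):
    noise = rng.standard_normal(particles.shape)
    particles += 0.5 * eps * grad_log_p(particles) + np.sqrt(eps) * noise

# The particle cloud drifts toward mean 0, std 1 (the target distribution).
print(particles.mean(), particles.std())
```

ParVI methods replace the injected noise with deterministic particle interactions, but they approximate the same underlying flow.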

A Provable Defense for Deep Residual Networks

eth-sri/diffai 29 Mar 2019

We present a training system, which can provably defend significantly larger neural networks than previously possible, including ResNet-34 and DenseNet-100.

Task-Driven Modular Networks for Zero-Shot Compositional Learning

facebookresearch/taskmodularnets ICCV 2019

When extending the evaluation to the generalized setting, which also accounts for pairs seen during training, we discover that naive baseline methods perform similarly to or better than current approaches.

Variational Prototype Replays for Continual Learning

kreimanlab/VariationalPrototypeReplaysCL 23 May 2019

In each classification task, our method learns a set of variational prototypes with their means and variances: embeddings of samples from the same class are represented by a prototypical distribution, and class-representative prototypes are kept well separated.
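Classifying against mean-and-variance prototypes can be sketched as picking the diagonal-Gaussian prototype under which a sample is most likely. The two hand-set prototypes and the `classify` helper below are hypothetical illustrations, not the paper's learned model.

```python
import numpy as np

# Hypothetical variational prototypes: a (mean, variance) pair per class
# in a 2-d embedding space, standing in for the learned prototypes.
prototypes = {
    0: (np.array([0.0, 0.0]), np.array([1.0, 1.0])),
    1: (np.array([5.0, 5.0]), np.array([0.5, 0.5])),
}

def log_likelihood(x, mean, var):
    """Diagonal-Gaussian log-density, up to an additive constant."""
    return -0.5 * np.sum((x - mean) ** 2 / var + np.log(var))

def classify(x):
    """Assign x to the prototype distribution with the highest likelihood."""
    return max(prototypes, key=lambda c: log_likelihood(x, *prototypes[c]))

print(classify(np.array([0.2, -0.1])))  # 0: nearest the class-0 prototype
print(classify(np.array([4.8, 5.3])))   # 1: nearest the class-1 prototype
```

Replaying only the prototypes (rather than raw samples) is what keeps the continual-learning memory footprint small.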

Task-Aware Feature Generation for Zero-Shot Compositional Learning

ucbdrive/tafe-net 11 Jun 2019

In this work, we propose a task-aware feature generation (TFG) framework for compositional learning, which generates features of novel visual concepts by transferring knowledge from previously seen concepts.

Knowledge Graph Transfer Network for Few-Shot Recognition

MyChocer/KGTN 21 Nov 2019

In this work, we represent semantic correlations in the form of a structured knowledge graph and integrate this graph into deep neural networks to promote few-shot learning via a novel Knowledge Graph Transfer Network (KGTN).
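One common way such graph integration works is to propagate classifier weights from data-rich classes to few-shot classes along graph edges. The tiny adjacency matrix and single propagation step below are a hypothetical sketch of that idea, not KGTN's actual architecture.

```python
import numpy as np

# Hypothetical semantic-correlation graph over 3 classes (row-normalized
# adjacency): class 2 is a few-shot class correlated with class 1.
A = np.array([
    [1.0, 0.8, 0.0],
    [0.8, 1.0, 0.3],
    [0.0, 0.3, 1.0],
])
A = A / A.sum(axis=1, keepdims=True)

# Classifier weight vectors (2-d for illustration); the few-shot class
# starts with no reliable weights of its own.
W = np.array([
    [0.9, 0.1],   # well-trained base class
    [0.7, 0.4],   # another base class
    [0.0, 0.0],   # few-shot class
])

# One step of graph propagation mixes in weights from correlated classes,
# giving the few-shot class a nonzero, semantically informed classifier.
W_transferred = A @ W
print(W_transferred[2])
```

In a full model this propagation would be learned end-to-end and interleaved with feature extraction, rather than applied once to fixed weights.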

Simple and Lightweight Human Pose Estimation

zhang943/lpn-pytorch 23 Nov 2019

Specifically, our LPN-50 achieves 68.7 AP on the COCO test-dev set with only 2.7M parameters and 1.0 GFLOPs, while running at 17 FPS on an Intel i7-8700K CPU.