# Novel Concepts

28 papers with code • 1 benchmark • 1 dataset

Measures the ability of models to uncover an underlying concept that unites several ostensibly disparate entities, ones that would not be expected to co-occur frequently. This provides a limited test of a model's ability to creatively construct the abstraction needed to make sense of a situation it cannot have memorized in training.

Source: BIG-bench
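To make the task description concrete, here is a hypothetical illustration of what a Novel Concepts-style item might look like. The prompt wording, the entities, and the multiple-choice format are assumptions for illustration, not copied from BIG-bench:

```python
# Hypothetical sketch of a Novel Concepts-style task item.
# The prompt format and answer options below are illustrative
# assumptions, not the actual BIG-bench item format.

def format_item(entities, options):
    """Build a multiple-choice prompt asking what unites the entities."""
    lines = ["What do the following have in common: " + ", ".join(entities) + "?"]
    for label, option in zip("ABCD", options):
        lines.append(f"{label}) {option}")
    return "\n".join(lines)

prompt = format_item(
    ["a snake", "a river", "a queue at the post office"],
    ["they are long and thin", "they are wet",
     "they are alive", "they are man-made"],
)
print(prompt)
```

The point of the task is that the correct unifying concept ("long and thin") is unlikely to be memorizable from co-occurrence statistics, since the entities rarely appear together in training text.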

# Dynamic Few-Shot Visual Learning without Forgetting

In this context, the goal of our work is to devise a few-shot visual learning system that, at test time, can efficiently learn novel categories from only a few training examples without forgetting the initial categories on which it was trained (here called base categories).


# PaLM: Scaling Language Modeling with Pathways

To further our understanding of the impact of scale on few-shot learning, we trained a 540-billion-parameter, densely activated Transformer language model, which we call the Pathways Language Model (PaLM).


# DER: Dynamically Expandable Representation for Class Incremental Learning

We address the problem of class incremental learning, which is a core step towards achieving adaptive vision intelligence.


# Learning like a Child: Fast Novel Visual Concept Learning from Sentence Descriptions of Images

In particular, we propose a transposed weight sharing scheme, which not only improves performance on image captioning, but also makes the model more suitable for the novel concept learning task.


# Deep Compositional Captioning: Describing Novel Object Categories without Paired Training Data

Current deep caption models can only describe objects contained in paired image-sentence corpora, despite being pre-trained on large object recognition datasets such as ImageNet.


# Zero-Shot Object Detection: Learning to Simultaneously Recognize and Localize Novel Concepts

16 Mar 2018

We hypothesize that this setting is ill-suited for real-world applications where unseen objects appear only as part of a complex scene, warranting both the 'recognition' and 'localization' of an unseen category.


# Decoupled Novel Object Captioner

11 Apr 2018

Thus, the sequence model can be decoupled from the novel object descriptions.


# Multi-level Semantic Feature Augmentation for One-shot Learning

In semantic space, we search for related concepts, which are then projected back into the image feature space by the decoder portion of the TriNet.


# Understanding MCMC Dynamics as Flows on the Wasserstein Space

1 Feb 2019

It is known that the Langevin dynamics used in MCMC is the gradient flow of the KL divergence on the Wasserstein space, which helps convergence analysis and inspires recent particle-based variational inference methods (ParVIs).
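The statement above can be made concrete with a standard derivation (using common conventions, not taken from the paper). The overdamped Langevin dynamics targeting a density $\pi$ is the SDE

```latex
\mathrm{d}X_t = \nabla \log \pi(X_t)\,\mathrm{d}t + \sqrt{2}\,\mathrm{d}W_t ,
```

whose marginal density $\rho_t$ evolves by the Fokker–Planck equation

```latex
\partial_t \rho_t
  = \nabla \cdot \bigl( \rho_t \, \nabla \log \tfrac{\rho_t}{\pi} \bigr)
  = \Delta \rho_t - \nabla \cdot \bigl( \rho_t \, \nabla \log \pi \bigr).
```

This is exactly the Wasserstein gradient flow of $\mathrm{KL}(\rho \,\|\, \pi) = \int \rho \log \tfrac{\rho}{\pi}$, since a Wasserstein gradient flow of a functional $F$ takes the form $\partial_t \rho = \nabla \cdot ( \rho \, \nabla \tfrac{\delta F}{\delta \rho} )$ and here $\tfrac{\delta}{\delta \rho}\mathrm{KL}(\rho \,\|\, \pi) = \log \tfrac{\rho}{\pi} + 1$, whose gradient is $\nabla \log \tfrac{\rho}{\pi}$.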


# A Provable Defense for Deep Residual Networks

29 Mar 2019

We present a training system, which can provably defend significantly larger neural networks than previously possible, including ResNet-34 and DenseNet-100.
