Few-shot Learning with Retrieval Augmented Language Models

no code yet • 5 Aug 2022

Retrieval-augmented models are known to excel at knowledge-intensive tasks while requiring fewer parameters than purely parametric models, but it is unclear whether they work in few-shot settings.
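
To make the setting concrete, here is a minimal retrieve-then-read sketch of retrieval-augmented few-shot prompting; the toy corpus, word-overlap retriever, and prompt template are illustrative assumptions, not the paper's architecture.

```python
# Minimal retrieve-then-read sketch of retrieval-augmented few-shot prompting.
# The corpus, scorer, and prompt template are toy stand-ins.

def retrieve(query, corpus, k=2):
    """Rank passages by word overlap with the query (a toy retriever)."""
    query_words = set(query.lower().split())
    def score(passage):
        return len(query_words & set(passage.lower().split()))
    return sorted(corpus, key=score, reverse=True)[:k]

corpus = [
    "The Eiffel Tower is located in Paris, France.",
    "Mount Everest is the highest mountain above sea level.",
    "The Nile is the longest river in Africa.",
]

few_shot_examples = [("Where is the Statue of Liberty?", "New York City")]

query = "Where is the Eiffel Tower?"
passages = retrieve(query, corpus)

# Retrieved passages plus a few demonstrations are packed into one prompt;
# a language model (not shown here) conditions on all of it to answer.
prompt = "\n\n".join(f"Q: {q}\nA: {a}" for q, a in few_shot_examples)
prompt += "\n\n" + "\n".join(f"Context: {p}" for p in passages)
prompt += f"\n\nQ: {query}\nA:"
print(prompt)
```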

Fact Checking, Few-Shot Learning, +3

An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion

rinongal/textual_inversion • 2 Aug 2022

Yet, it is unclear how such freedom can be exercised to generate images of specific unique concepts, modify their appearance, or compose them in new roles and novel scenes.

Text-to-Image Generation

Meaning without reference in large language models

no code yet • 5 Aug 2022

The widespread success of large language models (LLMs) has been met with skepticism that they possess anything like human concepts or meanings.

Branch-Train-Merge: Embarrassingly Parallel Training of Expert Language Models

hadasah/btm • 5 Aug 2022

New ELMs are learned by branching from (mixtures of) ELMs in the current set, further training the parameters on data for the new domain, and then merging the resulting model back into the set for future use.
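
Read literally, that loop is easy to sketch. In the sketch below, each expert LM is reduced to a flat parameter vector, branching is a weighted average of existing experts, and "further training" is a toy least-squares fit; the function names are illustrative, not the hadasah/btm API.

```python
import numpy as np

def branch_init(expert_set, weights):
    """Branch: initialize a new expert from a mixture (weighted average)
    of experts already in the set."""
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()
    return sum(w * params for w, params in zip(weights, expert_set))

def train_on_domain(params, domain_data, lr=0.1, steps=200):
    """Train: toy 'further training' that pulls the parameters toward the
    new domain's data mean (gradient descent on 0.5 * ||p - mean||^2)."""
    target = domain_data.mean(axis=0)
    for _ in range(steps):
        params = params - lr * (params - target)
    return params

# Seed set: a single expert trained elsewhere.
expert_set = [np.zeros(4)]

# Branch from the current set, train on new-domain data, then merge the
# resulting model back into the set so future experts can branch from it.
new_domain_data = np.random.default_rng(0).normal(loc=2.0, size=(64, 4))
new_expert = branch_init(expert_set, weights=[1.0])
new_expert = train_on_domain(new_expert, new_domain_data)
expert_set.append(new_expert)

print(len(expert_set), new_expert.round(2))
```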

Prompt-to-Prompt Image Editing with Cross Attention Control

no code yet • 2 Aug 2022

Editing is challenging for these generative models, since an innate property of an editing technique is to preserve most of the original image, while in text-based models even a small modification of the text prompt often leads to a completely different outcome.

Image Generation

Open-world Contrastive Learning

no code yet • 4 Aug 2022

Recent advances in contrastive learning have shown remarkable performance.

Contrastive Learning, Representation Learning

AlexaTM 20B: Few-Shot Learning Using a Large-Scale Multilingual Seq2Seq Model

no code yet • 2 Aug 2022

In this work, we demonstrate that multilingual large-scale sequence-to-sequence (seq2seq) models, pre-trained on a mixture of denoising and Causal Language Modeling (CLM) tasks, are more efficient few-shot learners than decoder-only models on various tasks.
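
The pre-training mixture in that sentence can be pictured with the sketch below, which turns each token sequence into either a span-denoising example or a causal-LM (prefix-continuation) example; the 20% CLM ratio, masking scheme, and helper names are assumptions for illustration rather than the paper's exact recipe.

```python
import random

def make_denoising_example(tokens, mask_frac=0.15):
    """Denoising: hide a contiguous span; the target is the full sequence."""
    span = max(1, int(len(tokens) * mask_frac))
    start = random.randrange(0, len(tokens) - span + 1)
    corrupted = tokens[:start] + ["<mask>"] + tokens[start + span:]
    return {"input": corrupted, "target": tokens}

def make_clm_example(tokens, prefix_frac=0.2):
    """CLM: condition on a short prefix; the target is the continuation."""
    cut = max(1, int(len(tokens) * prefix_frac))
    return {"input": tokens[:cut], "target": tokens[cut:]}

def sample_pretraining_example(tokens, clm_prob=0.2):
    """Mix the two objectives, drawing a CLM example clm_prob of the time."""
    if random.random() < clm_prob:
        return make_clm_example(tokens)
    return make_denoising_example(tokens)

tokens = "the quick brown fox jumps over the lazy dog".split()
print(sample_pretraining_example(tokens))
```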

Causal Language Modeling, Denoising, +3

Conformal Risk Control

aangelopoulos/conformal-risk • 4 Aug 2022

We extend conformal prediction to control the expected value of any monotone loss function.
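
A sketch of the resulting selection rule, assuming the per-example losses $L_i(\lambda)$ are non-increasing in $\lambda$ and bounded above by $B$: choose the smallest $\lambda$ whose inflated empirical risk on a calibration set stays below the target level $\alpha$. The threshold loss below is a toy choice, not the paper's example.

```python
import numpy as np

def conformal_risk_control(loss_fn, calib_data, lambdas, alpha, B=1.0):
    """Return the smallest lambda whose inflated empirical risk is <= alpha.
    Assumes loss_fn(x, lam) is non-increasing in lam and bounded by B."""
    n = len(calib_data)
    for lam in sorted(lambdas):
        risk = float(np.mean([loss_fn(x, lam) for x in calib_data]))
        if (n / (n + 1)) * risk + B / (n + 1) <= alpha:
            return lam
    return max(lambdas)  # fall back to the most conservative setting

# Toy example: calibration scores in [0, 1]; the loss is 1 when a score
# exceeds the threshold lambda, so the controlled risk is an exceedance rate.
rng = np.random.default_rng(0)
scores = rng.uniform(size=200)
lam_hat = conformal_risk_control(
    loss_fn=lambda s, lam: float(s > lam),
    calib_data=scores,
    lambdas=np.linspace(0.0, 1.0, 101),
    alpha=0.1,
)
print(lam_hat)
```

With monotone, bounded losses and exchangeable data, the chosen $\hat\lambda$ keeps the expected loss on a fresh example at or below $\alpha$.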

Natural Language Processing

Flow Annealed Importance Sampling Bootstrap

lollcat/fab-torch • 3 Aug 2022

We use AIS to target the minimum-variance distribution for estimating the $\alpha$-divergence via importance sampling.
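
For context, estimating the $\alpha$-divergence by importance sampling reduces to estimating an integral of the form $\int p(x)^{\alpha} q(x)^{1-\alpha}\,dx$, and the standard minimum-variance result says the best proposal is proportional to that non-negative integrand. The block below records this, with the $\alpha = 2$ case spelled out; normalization constants of the divergence are omitted, and this is background rather than the paper's derivation.

```latex
% Importance-sampling estimation of the alpha-divergence reduces to
% estimating  I_alpha = \int p(x)^alpha q(x)^{1-alpha} dx.
% With proposal g, an unbiased estimator and its minimum-variance proposal are:
\[
  \hat{I}_\alpha \;=\; \frac{p(X)^{\alpha}\, q(X)^{1-\alpha}}{g(X)},
  \qquad X \sim g,
\]
\[
  g^{\star}(x) \;\propto\; p(x)^{\alpha}\, q(x)^{1-\alpha}
  \;\overset{\alpha\,=\,2}{=}\; \frac{p(x)^{2}}{q(x)},
\]
% i.e. the minimum-variance distribution that AIS is used to target above.
```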

Pyramidal Denoising Diffusion Probabilistic Models

no code yet • 3 Aug 2022

Diffusion models have demonstrated impressive image generation performance, and have been used in various computer vision tasks.

Denoising, Image Generation, +1
