Diversity

Diversity in data sampling is crucial across use cases such as search and recommendation. Diverse samples capture a wider range of variations and perspectives, which leads to more robust, less biased, and more comprehensive models. In search, for instance, diversity helps avoid redundancy, so users see a broader set of relevant results rather than many near-duplicates.
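
Diversity-aware selection is often implemented as a greedy re-ranking step. Below is a minimal sketch of maximal-marginal-relevance (MMR) style selection over precomputed embeddings; the function name `mmr_select`, the trade-off weight `lam`, and the toy data are illustrative assumptions, not code from any paper listed here.

```python
import numpy as np

def mmr_select(query_vec, doc_vecs, k=5, lam=0.7):
    """Greedily pick k documents, trading off relevance to the query
    against similarity to documents already selected (MMR-style)."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

    candidates = list(range(len(doc_vecs)))
    selected = []
    while candidates and len(selected) < k:
        def score(i):
            relevance = cos(query_vec, doc_vecs[i])
            redundancy = max((cos(doc_vecs[i], doc_vecs[j]) for j in selected), default=0.0)
            return lam * relevance - (1 - lam) * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected

# Toy usage with random embeddings standing in for document vectors.
rng = np.random.default_rng(0)
doc_vecs = rng.normal(size=(20, 64))
query_vec = rng.normal(size=64)
print(mmr_select(query_vec, doc_vecs, k=5))
```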

Most implemented papers

Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer

huggingface/transformers arXiv 2019

Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP).
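
A minimal sketch of running a pre-trained T5 checkpoint through huggingface/transformers (the repository listed above); the "t5-small" checkpoint and the translation prefix follow the library's documented text-to-text usage and are assumptions, not the paper's exact experimental setup.

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

# Load a small public T5 checkpoint (pre-trained, not fine-tuned for a downstream task).
tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# T5 casts every task as text-to-text and selects the task via a prefix.
inputs = tokenizer("translate English to German: The house is wonderful.",
                   return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```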

Colorful Image Colorization

richzhang/colorization 28 Mar 2016

We embrace the underlying uncertainty of the problem by posing it as a classification task and use class-rebalancing at training time to increase the diversity of colors in the result.
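
The class-rebalancing mentioned here up-weights rare color bins so the loss does not collapse onto desaturated predictions. Below is a hedged sketch of the weighting scheme described in the paper (smooth the empirical bin distribution toward uniform, invert, and normalize); the variable names and the toy histogram are illustrative.

```python
import numpy as np

def rebalancing_weights(bin_counts, lam=0.5):
    """Per-bin weights that up-weight rare color bins.

    Mix the empirical bin distribution with a uniform prior, take the
    inverse, and normalize so the expected weight under the empirical
    distribution is 1.
    """
    p = bin_counts / bin_counts.sum()      # empirical distribution over ab bins
    q = len(bin_counts)                    # number of quantized color bins
    mixed = (1 - lam) * p + lam / q        # smooth toward the uniform prior
    w = 1.0 / mixed
    w = w / (p * w).sum()                  # normalize so E_p[w] = 1
    return w

counts = np.array([5000.0, 1200.0, 300.0, 20.0])   # toy bin histogram
print(rebalancing_weights(counts))
```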

Conditional Image Synthesis With Auxiliary Classifier GANs

eriklindernoren/PyTorch-GAN ICML 2017

We expand on previous work for image quality assessment to provide two new analyses for assessing the discriminability and diversity of samples from class-conditional image synthesis models.
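
In an AC-GAN the discriminator carries an auxiliary classification head alongside the real/fake head, and both losses are optimized jointly. A minimal PyTorch sketch of that two-headed objective; the module name, feature dimension, and toy tensors are assumptions, not the repository's exact code.

```python
import torch
import torch.nn as nn

class ACDiscriminatorHead(nn.Module):
    """Shared features feed two heads: a real/fake score and class logits."""
    def __init__(self, feat_dim, n_classes):
        super().__init__()
        self.adv = nn.Linear(feat_dim, 1)          # source head (real vs. fake)
        self.cls = nn.Linear(feat_dim, n_classes)  # auxiliary classifier head

    def forward(self, features):
        return self.adv(features), self.cls(features)

adv_criterion = nn.BCEWithLogitsLoss()
cls_criterion = nn.CrossEntropyLoss()

head = ACDiscriminatorHead(feat_dim=128, n_classes=10)
features = torch.randn(8, 128)            # stand-in for convolutional features
labels = torch.randint(0, 10, (8,))       # class labels of the (real) batch
real_target = torch.ones(8, 1)

adv_logit, cls_logit = head(features)
# The discriminator learns to identify both the source and the class label;
# the generator is trained to fool the source head while matching the label.
loss_d = adv_criterion(adv_logit, real_target) + cls_criterion(cls_logit, labels)
loss_d.backward()
print(loss_d.item())
```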

Diverse Beam Search: Decoding Diverse Solutions from Neural Sequence Models

ashwinkalyan/dbs 7 Oct 2016

We observe that our method consistently outperforms BS and previously proposed techniques for diverse decoding from neural sequence models.
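
huggingface/transformers exposes a group ("diverse") beam search based on this decoding scheme through generate(); a minimal sketch below, with the t5-small checkpoint, the prompt, and the penalty value chosen purely for illustration.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

inputs = tokenizer("summarize: Diverse beam search decodes several distinct "
                   "hypotheses by penalizing tokens already used by other groups.",
                   return_tensors="pt")
outputs = model.generate(
    **inputs,
    num_beams=6,
    num_beam_groups=3,        # beams are split into groups decoded in turn
    diversity_penalty=1.0,    # penalize repeating tokens chosen by other groups
    num_return_sequences=3,
    max_new_tokens=30,
)
for seq in outputs:
    print(tokenizer.decode(seq, skip_special_tokens=True))
```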

The Pile: An 800GB Dataset of Diverse Text for Language Modeling

EleutherAI/The-Pile 31 Dec 2020

Recent work has demonstrated that increased training dataset diversity improves general cross-domain knowledge and downstream generalization capability for large-scale language models.

Diffusion Models Beat GANs on Image Synthesis

openai/guided-diffusion NeurIPS 2021

Finally, we find that classifier guidance combines well with upsampling diffusion models, further improving FID to 3.94 on ImageNet 256$\times$256 and 3.85 on ImageNet 512$\times$512.
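
Classifier guidance shifts the predicted reverse-process mean by the gradient of a noisy-image classifier's class log-probability, scaled by the predicted variance. A minimal sketch of that update step; the classifier interface and the guidance scale are placeholders, not the repository's exact API.

```python
import torch

def classifier_guided_mean(mean, variance, x_t, t, y, classifier, scale=1.0):
    """Shift the reverse-process mean toward higher classifier confidence.

    mean, variance : predicted Gaussian parameters for p(x_{t-1} | x_t)
    classifier     : callable returning class logits for a noisy image x_t at step t
    scale          : guidance scale (larger = more class-faithful, less diverse samples)
    """
    x = x_t.detach().requires_grad_(True)
    logits = classifier(x, t)
    log_prob = torch.log_softmax(logits, dim=-1)[range(len(y)), y].sum()
    grad = torch.autograd.grad(log_prob, x)[0]
    return mean + scale * variance * grad
```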

BEGAN: Boundary Equilibrium Generative Adversarial Networks

eriklindernoren/PyTorch-GAN 31 Mar 2017

We propose a new equilibrium enforcing method paired with a loss derived from the Wasserstein distance for training auto-encoder based Generative Adversarial Networks.
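
The equilibrium term in BEGAN is a proportional-control variable k_t that balances the autoencoder-based discriminator against the generator. A minimal sketch of the loss terms and the k update, using placeholder reconstruction losses; names and constants mirror the paper's formulation but are written here only for illustration.

```python
import torch

def began_step(loss_real, loss_fake, k, gamma=0.5, lambda_k=0.001):
    """One BEGAN equilibrium update.

    loss_real, loss_fake : autoencoder reconstruction losses (0-d tensors)
    k                    : running balance term, kept in [0, 1]
    gamma                : target ratio E[loss_fake] / E[loss_real]; lower gamma
                           trades sample diversity for visual quality
    """
    loss_d = loss_real - k * loss_fake               # discriminator objective
    loss_g = loss_fake                               # generator objective
    balance = (gamma * loss_real - loss_fake).item()
    k = min(max(k + lambda_k * balance, 0.0), 1.0)   # proportional control of k
    m_global = loss_real.item() + abs(balance)       # convergence measure
    return loss_d, loss_g, k, m_global

# Toy usage with placeholder reconstruction losses.
loss_d, loss_g, k, m = began_step(torch.tensor(0.8), torch.tensor(0.3), k=0.0)
print(k, m)
```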

The Curious Case of Neural Text Degeneration

ari-holtzman/degen ICLR 2020

Despite considerable advancements with deep neural language models, the enigma of neural text degeneration persists when these models are tested as text generators.
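
The remedy this paper proposes is nucleus (top-p) sampling, which samples from the smallest set of tokens whose cumulative probability exceeds p. A minimal sketch of one sampling step; the vocabulary size and threshold below are illustrative, not the repository's code.

```python
import torch

def nucleus_sample(logits, p=0.9):
    """Sample one token from the smallest token set with cumulative probability > p."""
    probs = torch.softmax(logits, dim=-1)
    sorted_probs, sorted_idx = torch.sort(probs, descending=True)
    cumulative = torch.cumsum(sorted_probs, dim=-1)
    # Drop tokens past the threshold, always keeping the first (most likely) token.
    cutoff = cumulative > p
    cutoff[..., 1:] = cutoff[..., :-1].clone()
    cutoff[..., 0] = False
    sorted_probs[cutoff] = 0.0
    sorted_probs = sorted_probs / sorted_probs.sum(dim=-1, keepdim=True)
    choice = torch.multinomial(sorted_probs, num_samples=1)
    return sorted_idx.gather(-1, choice)

print(nucleus_sample(torch.randn(1, 50257), p=0.9))
```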

Generating Diverse High-Fidelity Images with VQ-VAE-2

deepmind/sonnet NeurIPS 2019

We explore the use of Vector Quantized Variational AutoEncoder (VQ-VAE) models for large scale image generation.
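
The core VQ-VAE operation replaces each continuous encoder vector with its nearest learned codebook entry and passes gradients straight through the quantization step. A minimal sketch of that lookup; the shapes and codebook size are illustrative, not the Sonnet implementation.

```python
import torch

def vector_quantize(z_e, codebook):
    """Map each encoder vector to its nearest codebook entry.

    z_e      : (batch, dim) continuous encoder outputs
    codebook : (num_codes, dim) learned embedding table
    Returns quantized vectors with a straight-through gradient estimator.
    """
    distances = torch.cdist(z_e, codebook)    # (batch, num_codes)
    indices = distances.argmin(dim=-1)        # nearest code per vector
    z_q = codebook[indices]
    # Straight-through: forward pass uses z_q, backward passes gradients to z_e.
    z_q_st = z_e + (z_q - z_e).detach()
    return z_q_st, indices

codebook = torch.randn(512, 64, requires_grad=True)   # toy codebook
z_e = torch.randn(8, 64, requires_grad=True)
z_q, idx = vector_quantize(z_e, codebook)
print(idx.shape, z_q.shape)
```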

A Diversity-Promoting Objective Function for Neural Conversation Models

pender/chatbot-rnn NAACL 2016

Sequence-to-sequence neural network models for generation of conversational responses tend to generate safe, commonplace responses (e.g., "I don't know") regardless of the input.
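
The diversity-promoting objective replaces plain likelihood with maximum mutual information; one practical form reranks candidate responses by log p(T|S) - lambda * log p(T), demoting generic high-frequency replies. A minimal sketch with placeholder log-probabilities; the data structure and weight are illustrative.

```python
def mmi_rerank(candidates, lam=0.5):
    """Rerank responses with the MMI-style score log p(T|S) - lambda * log p(T).

    candidates : list of dicts with precomputed log-probabilities
                 {"text": str, "logp_t_given_s": float, "logp_t": float}
    lam        : penalty on generic, high-frequency responses
    """
    def score(c):
        return c["logp_t_given_s"] - lam * c["logp_t"]
    return sorted(candidates, key=score, reverse=True)

# Toy usage: the generic response has high unconditional probability and is demoted.
candidates = [
    {"text": "I don't know.",              "logp_t_given_s": -4.0, "logp_t": -2.0},
    {"text": "It opens at nine tomorrow.", "logp_t_given_s": -5.0, "logp_t": -9.0},
]
for c in mmi_rerank(candidates):
    print(c["text"])
```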