Diversity
3156 papers with code • 0 benchmarks • 0 datasets
Diversity in data sampling is crucial across many use cases, including search and recommendation systems. Diverse samples capture a wider range of variations and perspectives, which leads to more robust, less biased, and more comprehensive models. In search, for instance, diversity helps avoid redundancy, exposing users to a broader set of relevant information rather than repeated, near-identical results.
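A standard way to realize this relevance-diversity trade-off in retrieval is Maximal Marginal Relevance (MMR). Below is a minimal sketch; the function and parameter names are illustrative, and it assumes precomputed relevance scores and a pairwise similarity matrix.

```python
# Maximal Marginal Relevance (MMR): greedily pick items that are
# relevant to the query but dissimilar to what was already picked.
import numpy as np

def mmr_select(relevance, similarity, k, lam=0.7):
    """relevance: (n,) query-relevance scores; similarity: (n, n) pairwise
    similarities; lam: weight on relevance (1.0 = ignore diversity)."""
    selected = [int(np.argmax(relevance))]            # seed with the top hit
    candidates = set(range(len(relevance))) - set(selected)
    while len(selected) < k and candidates:
        best, best_score = None, -np.inf
        for i in candidates:
            redundancy = max(similarity[i][j] for j in selected)
            score = lam * relevance[i] - (1.0 - lam) * redundancy
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
        candidates.remove(best)
    return selected
```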
Most implemented papers
Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP).
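As a concrete illustration of that recipe, here is a minimal sketch of fine-tuning a pre-trained T5 checkpoint with the Hugging Face transformers library; the two training pairs are toy placeholders rather than a real downstream dataset.

```python
# Fine-tuning a pre-trained T5 checkpoint on a (toy) downstream task.
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")  # pre-trained weights
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)

pairs = [("translate English to German: Hello.", "Hallo."),
         ("translate English to German: Thank you.", "Danke.")]

model.train()
for src, tgt in pairs:
    inputs = tokenizer(src, return_tensors="pt")
    labels = tokenizer(tgt, return_tensors="pt").input_ids
    loss = model(**inputs, labels=labels).loss   # seq2seq cross-entropy
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```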
Colorful Image Colorization
We embrace the underlying uncertainty of the problem by posing it as a classification task and use class-rebalancing at training time to increase the diversity of colors in the result.
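A sketch of that class-rebalancing idea, assuming the color space has been quantized into bins with known empirical frequencies; the scheme below mirrors inverse-frequency reweighting smoothed toward uniform, with illustrative constants.

```python
# Rare color bins get larger loss weights, so the model is not pushed
# toward desaturated, averaged colors.
import torch
import torch.nn.functional as F

def rebalancing_weights(bin_frequencies, mix=0.5):
    """Weights ~ inverse of a smoothed empirical color-bin distribution."""
    p = bin_frequencies / bin_frequencies.sum()
    smoothed = (1.0 - mix) * p + mix / len(p)     # blend with uniform
    w = 1.0 / smoothed
    return w / (w * p).sum()                      # normalize so E_p[w] = 1

def colorization_loss(logits, target_bins, weights):
    # logits: (N, num_bins) per-pixel color-class scores
    # target_bins: (N,) quantized color-bin label per pixel
    return F.cross_entropy(logits, target_bins, weight=weights)
```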
Conditional Image Synthesis With Auxiliary Classifier GANs
We expand on previous work for image quality assessment to provide two new analyses for assessing the discriminability and diversity of samples from class-conditional image synthesis models.
Diverse Beam Search: Decoding Diverse Solutions from Neural Sequence Models
We observe that our method consistently outperforms standard beam search (BS) and previously proposed techniques for diverse decoding from neural sequence models.
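Diverse beam search is exposed in the Hugging Face transformers generate() API as group beam search; a minimal sketch, where the checkpoint, prompt, and penalty value are arbitrary examples.

```python
# Group beam search: beams are split into groups, and tokens already
# chosen by earlier groups at the same step are penalized.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

inputs = tokenizer("summarize: Diverse decoding avoids returning "
                   "near-duplicate outputs from a sequence model.",
                   return_tensors="pt")
outputs = model.generate(
    **inputs,
    num_beams=6,
    num_beam_groups=3,        # 3 groups of 2 beams each
    diversity_penalty=0.8,    # strength of the inter-group penalty
    num_return_sequences=3,
)
for seq in outputs:
    print(tokenizer.decode(seq, skip_special_tokens=True))
```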
The Pile: An 800GB Dataset of Diverse Text for Language Modeling
Recent work has demonstrated that increased training dataset diversity improves general cross-domain knowledge and downstream generalization capability for large-scale language models.
Diffusion Models Beat GANs on Image Synthesis
Finally, we find that classifier guidance combines well with upsampling diffusion models, further improving FID to 3.94 on ImageNet 256$\times$256 and 3.85 on ImageNet 512$\times$512.
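Classifier guidance shifts each reverse-diffusion mean along the gradient of a noisy classifier's log p(y | x_t); below is a sketch of one guided step, with the predicted mean and variance and the classifier network assumed as stand-ins.

```python
# One classifier-guided reverse step: nudge the denoising mean toward
# inputs the classifier assigns to the target class y.
import torch

def guided_mean(mean, variance, classifier, x_t, t, y, scale=1.0):
    x_in = x_t.detach().requires_grad_(True)
    log_probs = torch.log_softmax(classifier(x_in, t), dim=-1)
    selected = log_probs[range(len(y)), y].sum()
    grad = torch.autograd.grad(selected, x_in)[0]  # grad of log p(y | x_t)
    return mean + scale * variance * grad          # guided sampling mean
```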
BEGAN: Boundary Equilibrium Generative Adversarial Networks
We propose a new equilibrium enforcing method paired with a loss derived from the Wasserstein distance for training auto-encoder based Generative Adversarial Networks.
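The equilibrium mechanism balances the autoencoder discriminator's reconstruction losses on real and generated images through a control variable k; a sketch of one training step, with the generator and discriminator networks assumed as stand-ins.

```python
# BEGAN losses: D is an autoencoder scored by L1 reconstruction error,
# and k tracks the real/fake loss balance via proportional control.
import torch

def recon_loss(D, x):
    return (x - D(x)).abs().mean()                # L(x) = |x - D(x)|

def began_step(D, G, x_real, z, k, gamma=0.5, lambda_k=0.001):
    x_fake = G(z)
    loss_real = recon_loss(D, x_real)
    loss_fake = recon_loss(D, x_fake.detach())    # no generator gradient here
    loss_D = loss_real - k * loss_fake            # discriminator objective
    loss_G = recon_loss(D, x_fake)                # generator objective
    # keep L(G(z)) near gamma * L(x); gamma trades diversity for quality
    k = min(max(k + lambda_k * (gamma * loss_real.item() - loss_fake.item()), 0.0), 1.0)
    return loss_D, loss_G, k
```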
The Curious Case of Neural Text Degeneration
Despite considerable advancements with deep neural language models, the enigma of neural text degeneration persists when these models are tested as text generators.
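The remedy this paper proposes is nucleus (top-p) sampling: sample only from the smallest set of tokens whose cumulative probability exceeds p. A minimal NumPy sketch over a single next-token distribution:

```python
import numpy as np

def nucleus_sample(probs, p=0.9, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    order = np.argsort(probs)[::-1]               # tokens by descending prob
    cutoff = np.searchsorted(np.cumsum(probs[order]), p) + 1
    nucleus = order[:cutoff]                      # smallest set with mass >= p
    renorm = probs[nucleus] / probs[nucleus].sum()
    return rng.choice(nucleus, p=renorm)
```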
Generating Diverse High-Fidelity Images with VQ-VAE-2
We explore the use of Vector Quantized Variational AutoEncoder (VQ-VAE) models for large scale image generation.
A Diversity-Promoting Objective Function for Neural Conversation Models
Sequence-to-sequence neural network models for generation of conversational responses tend to generate safe, commonplace responses (e.g., "I don't know") regardless of the input.
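The objective proposed here is Maximum Mutual Information (MMI); below is a sketch of the MMI-antiLM reranking variant, where the two log-probability scorers are assumed stand-ins for a trained seq2seq model and a language model.

```python
# Rescore candidate responses T for source S by
#   log p(T | S) - lam * log p(T),
# penalizing responses that are generically likely regardless of input.
def mmi_rerank(candidates, log_p_t_given_s, log_p_t, lam=0.5):
    def score(t):
        return log_p_t_given_s(t) - lam * log_p_t(t)
    return sorted(candidates, key=score, reverse=True)
```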