A Flexible Measurement of Diversity in Datasets with Random Network Distillation

29 Sep 2021 · Liam H Fowl, Micah Goldblum, Arjun Gupta, Amr Sharaf, Tom Goldstein

Generative models are increasingly able to produce remarkably high-quality images and text. The community has developed numerous evaluation metrics for comparing generative models; however, these metrics do not always effectively quantify data diversity. We develop a new, more flexible diversity metric that can readily be applied to data of any type, both synthetic and natural. Our method employs random network distillation, a technique introduced in reinforcement learning. We validate and deploy this metric on both images and text. We further explore diversity in few-shot image generation, a setting that was previously difficult to evaluate.
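
For concreteness, the sketch below shows one way random network distillation (RND, from Burda et al.'s reinforcement-learning work) can be turned into a diversity score: a predictor network is trained to imitate a frozen, randomly initialized target network on the data, and the residual prediction error indicates how hard the data distribution is to fit. This is a minimal PyTorch sketch under assumed choices; the architectures, feature dimension, training budget, and the use of mean residual error as the score are illustrative assumptions, not necessarily the paper's exact protocol.

```python
import torch
import torch.nn as nn

def make_net(in_dim, out_dim):
    # Small MLP; the architecture is an arbitrary illustrative choice.
    return nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                         nn.Linear(256, out_dim))

def rnd_diversity_score(data, feat_dim=64, steps=1000, batch_size=128, lr=1e-3):
    """Train a predictor to imitate a frozen random target network on
    `data` (an (N, d) float tensor); return the mean residual error,
    read here as a proxy for diversity: more varied data is harder for
    the predictor to fit within a fixed training budget."""
    in_dim = data.shape[1]
    target = make_net(in_dim, feat_dim)
    for p in target.parameters():
        p.requires_grad_(False)  # target stays fixed: random features
    predictor = make_net(in_dim, feat_dim)
    opt = torch.optim.Adam(predictor.parameters(), lr=lr)
    for _ in range(steps):
        idx = torch.randint(len(data), (batch_size,))
        batch = data[idx]
        loss = (predictor(batch) - target(batch)).pow(2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return (predictor(data) - target(data)).pow(2).mean().item()

# Toy check: a broad Gaussian cloud should plausibly score higher
# (be harder to fit) than a tightly concentrated one of the same size.
narrow = torch.randn(2000, 32) * 0.1
broad = torch.randn(2000, 32) * 2.0
print(rnd_diversity_score(narrow), rnd_diversity_score(broad))
```

Because the target and predictor operate on arbitrary feature vectors, the same recipe applies to images or text once they are embedded, which is what makes the metric flexible across data types.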
