Distance in Latent Space as Novelty Measure

31 Mar 2020 · Mark Philip Philipsen, Thomas Baltzer Moeslund

Deep Learning performs well when training data densely covers the experience space, but for complex problems this makes data collection prohibitively expensive. We propose to intelligently select samples when constructing data sets in order to make the best use of the available labeling budget. The selection methodology rests on the premise that two dissimilar samples contribute more to a data set than two similar samples. Similarity is measured as the Euclidean distance between samples in the latent space produced by a DNN. Constructing the latent space with a self-supervised method ensures that the space fits the data well and that no upfront labeling effort is required. The result is a more efficient, diverse, and balanced data set, which produces equal or superior results with fewer labeled examples.
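As a rough illustration of the selection idea (not necessarily the authors' exact procedure), the sketch below greedily builds a labeling set by repeatedly picking the sample whose Euclidean distance to its nearest already-selected sample in latent space is largest. The `latents` array, the `select_diverse_samples` name, and the farthest-point heuristic are assumptions for illustration; the abstract only states that similarity is measured as Euclidean distance in a latent space produced by a self-supervised DNN.

```python
import numpy as np

def select_diverse_samples(latents: np.ndarray, budget: int) -> list:
    """Greedy farthest-point selection over latent embeddings.

    latents: (N, D) array of latent vectors from a self-supervised encoder
             (hypothetical input; any embedding model could produce it).
    budget:  number of samples the labeling budget allows.
    Returns the indices of the selected samples.
    """
    # Start from an arbitrary sample (here: index 0).
    selected = [0]
    # Distance from every sample to its nearest selected sample.
    min_dist = np.linalg.norm(latents - latents[0], axis=1)
    for _ in range(budget - 1):
        # Pick the sample farthest from everything chosen so far.
        idx = int(np.argmax(min_dist))
        selected.append(idx)
        # Update nearest-selected distances with the new pick.
        new_dist = np.linalg.norm(latents - latents[idx], axis=1)
        min_dist = np.minimum(min_dist, new_dist)
    return selected
```

A caller would embed the unlabeled pool with the self-supervised encoder, run the selection over the resulting latent vectors, and send only the returned indices for labeling.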
