Unsupervised Learning via Meta-Learning

ICLR 2019 · Kyle Hsu, Sergey Levine, Chelsea Finn

A central goal of unsupervised learning is to acquire representations from unlabeled data or experience that can be used for more effective learning of downstream tasks from modest amounts of labeled data. Many prior unsupervised learning works aim to do so by developing proxy objectives based on reconstruction, disentanglement, prediction, and other metrics. Instead, we develop an unsupervised meta-learning method that explicitly optimizes for the ability to learn a variety of tasks from small amounts of data. To do so, we construct tasks from unlabeled data in an automatic way and run meta-learning over the constructed tasks. Surprisingly, we find that, when integrated with meta-learning, relatively simple task construction mechanisms, such as clustering embeddings, lead to good performance on a variety of downstream, human-specified tasks. Our experiments across four image datasets indicate that our unsupervised meta-learning approach acquires a learning algorithm without any labeled data that is applicable to a wide range of downstream classification tasks, improving upon the embedding learned by four prior unsupervised learning methods.
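The task-construction idea described in the abstract can be illustrated with a small sketch: cluster unsupervised embeddings into pseudo-classes, then sample N-way, K-shot episodes whose labels are the cluster assignments, and feed those episodes to a standard meta-learner. This is a minimal illustration under simplifying assumptions, not the paper's code: the function names (`cluster_embeddings`, `sample_task`), the use of scikit-learn's KMeans, and the single clustering partition are all choices made here for brevity, whereas the paper's CACTUs procedure is more involved (e.g., it uses multiple clusterings).

```python
# Minimal sketch of clustering-based task construction for unsupervised meta-learning.
# Assumptions: a single k-means partition, illustrative function names, and
# randomly generated stand-in embeddings. Not the authors' implementation.

import numpy as np
from sklearn.cluster import KMeans

def cluster_embeddings(embeddings, num_clusters=50, seed=0):
    """Partition unsupervised embeddings into clusters that act as pseudo-classes."""
    km = KMeans(n_clusters=num_clusters, random_state=seed).fit(embeddings)
    clusters = {}
    for idx, c in enumerate(km.labels_):
        clusters.setdefault(c, []).append(idx)
    return clusters

def sample_task(clusters, n_way=5, k_shot=1, k_query=15, rng=None):
    """Sample one N-way classification task (support + query) from cluster assignments."""
    rng = rng or np.random.default_rng()
    # Keep only clusters large enough to supply both support and query examples.
    eligible = [c for c, idxs in clusters.items() if len(idxs) >= k_shot + k_query]
    chosen = rng.choice(eligible, size=n_way, replace=False)
    support, query = [], []
    for label, c in enumerate(chosen):
        idxs = rng.choice(clusters[c], size=k_shot + k_query, replace=False)
        support += [(i, label) for i in idxs[:k_shot]]   # (example index, pseudo-label)
        query += [(i, label) for i in idxs[k_shot:]]
    return support, query

# Usage: embed unlabeled images with any unsupervised encoder, cluster the
# embeddings, then meta-train on a stream of sampled tasks.
embeddings = np.random.randn(1000, 64)   # stand-in for learned embeddings
clusters = cluster_embeddings(embeddings, num_clusters=50)
support, query = sample_task(clusters, n_way=5, k_shot=1)
```

In the paper, tasks constructed in this spirit are used to meta-train standard few-shot learners such as MAML and prototypical networks, with the unsupervised embeddings supplied by prior representation-learning methods.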

Results

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|------|---------|-------|-------------|--------------|-------------|
| Unsupervised Few-Shot Image Classification | Mini-Imagenet 5-way (1-shot) | CACTUs | Accuracy | 39.90 | # 27 |
| Unsupervised Few-Shot Image Classification | Mini-Imagenet 5-way (5-shot) | CACTUs | Accuracy | 53.97 | # 26 |

Methods


No methods listed for this paper.