Enabling Efficient On-Device Self-supervised Contrastive Learning by Data Selection

1 Jan 2021  ·  Yawen Wu, Zhepeng Wang, Dewen Zeng, Yiyu Shi, Jingtong Hu

This work aims to enable efficient on-device contrastive learning from streaming input data after a model is deployed on edge devices such as robots or unmanned aerial vehicles (UAVs), so that they can adapt to dynamic new environments for higher accuracy. Such data usually carries no labels, calling for unsupervised learning. Recently, contrastive learning has demonstrated great potential for learning visual representations from unlabeled data. However, directly applying it to streaming data requires storing a large dataset on the fly, which quickly drains the storage resources of edge devices. In this paper, we propose a framework that automatically selects the most representative data from the unlabeled input stream on the fly, requiring only a small data buffer for dynamic learning. Moreover, since the streaming data are not independent and identically distributed (iid) as in conventional training, we score new data as they arrive by measuring the quality of their representations, without requiring any label information, and update the buffer based on these scores. Extensive experiments show that both learning speed and accuracy are greatly improved compared with approaches without data selection.
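The abstract describes scoring unlabeled streaming samples by representation quality and keeping only the best-scoring ones in a small fixed-size buffer. The sketch below illustrates that idea in a minimal, hedged form: the encoder is a stand-in random projection (not the paper's contrastive model), and the score (cosine agreement between embeddings of two augmented views) is an assumed label-free proxy for representation quality, not the paper's actual criterion.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical encoder: a fixed random projection standing in for the
# on-device representation model (illustration only).
W = rng.normal(size=(32, 8))

def embed(x):
    z = x @ W
    return z / (np.linalg.norm(z) + 1e-8)

def augment(x):
    # Light Gaussian noise as a stand-in for data augmentation.
    return x + 0.05 * rng.normal(size=x.shape)

def score(x):
    # Assumed label-free quality proxy: cosine agreement between the
    # embeddings of two augmented views of the same sample.
    return float(embed(augment(x)) @ embed(augment(x)))

BUFFER_SIZE = 16  # small on-device buffer replaces full dataset storage
buffer, scores = [], []

def maybe_admit(x):
    """Keep the buffer filled with the highest-scoring samples seen so far."""
    s = score(x)
    if len(buffer) < BUFFER_SIZE:
        buffer.append(x)
        scores.append(s)
        return True
    worst = int(np.argmin(scores))
    if s > scores[worst]:
        buffer[worst], scores[worst] = x, s
        return True
    return False

# Simulated unlabeled input stream.
for _ in range(200):
    maybe_admit(rng.normal(size=32))

print(len(buffer))  # the buffer never exceeds its fixed capacity
```

The design point is that each arriving sample is scored once and either replaces the current worst buffer entry or is discarded, so memory stays constant regardless of stream length.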
