Self-Supervised Features Improve Open-World Learning

This paper identifies flaws in existing open-world learning approaches and aims to provide a complete picture in the form of True Open-World Learning. We accomplish this by proposing a comprehensive, generalizable open-world learning protocol capable of evaluating the various components of open-world learning in an operational setting. We argue that in true open-world learning, the underlying feature representation should be learned in a self-supervised manner. Under such a self-supervised feature representation, we introduce the problem of detecting unknowns as samples belonging to the Out-of-Label space. We differentiate between Out-of-Label space detection and conventional Out-of-Distribution detection according to whether the unknowns being detected belong to the native world (the same world as the feature representation) or to a new world, respectively. Our unifying open-world learning framework combines three research dimensions that have typically been explored independently: Incremental Learning, Out-of-Distribution detection, and Open-World Learning. Starting from a self-supervised feature space, an open-world learner can adapt and specialize its feature space to the classes in each incremental phase and therefore perform better without incurring significant overhead, as demonstrated by our experimental results. The incremental learning component of our pipeline sets a new state of the art on the established ImageNet-100 protocol. We also demonstrate the adaptability of our approach by showing that it can be used as a plug-in with any self-supervised feature representation method.
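To make the pipeline described above concrete, the following is a minimal sketch (not the authors' implementation) of an open-world incremental loop over a fixed self-supervised feature space: known classes are modeled by class-mean prototypes added phase by phase, and samples whose best similarity to any known prototype falls below a threshold are rejected as Out-of-Label space unknowns. The class name `PrototypeOpenWorldLearner`, the threshold `tau`, and the prototype scoring rule are illustrative assumptions, not details from the paper.

```python
import numpy as np

class PrototypeOpenWorldLearner:
    """Toy open-world learner over precomputed self-supervised features."""

    def __init__(self, tau: float = 0.5):
        self.tau = tau        # similarity threshold below which a sample is "unknown"
        self.prototypes = {}  # class id -> unit-norm mean feature vector

    @staticmethod
    def _normalize(x):
        return x / (np.linalg.norm(x, axis=-1, keepdims=True) + 1e-12)

    def incremental_fit(self, feats, labels):
        """Add prototypes for the classes introduced in the current incremental phase."""
        feats = self._normalize(np.asarray(feats, dtype=np.float64))
        labels = np.asarray(labels)
        for c in np.unique(labels):
            self.prototypes[int(c)] = self._normalize(feats[labels == c].mean(axis=0))

    def predict(self, feats):
        """Return a class id per sample, or -1 for Out-of-Label space unknowns."""
        feats = self._normalize(np.asarray(feats, dtype=np.float64))
        classes = list(self.prototypes)
        proto = np.stack([self.prototypes[c] for c in classes])  # (C, D)
        sims = feats @ proto.T                                   # cosine similarities (N, C)
        preds = np.array([classes[i] for i in sims.argmax(axis=1)])
        preds[sims.max(axis=1) < self.tau] = -1                  # reject low-similarity samples
        return preds

# Usage with random stand-in features; a real pipeline would instead use
# embeddings from a self-supervised backbone (e.g., a SimCLR/MoCo-style encoder).
rng = np.random.default_rng(0)
phase1_feats = rng.normal(size=(100, 128))
phase1_labels = rng.integers(0, 5, size=100)
learner = PrototypeOpenWorldLearner(tau=0.3)
learner.incremental_fit(phase1_feats, phase1_labels)
print(learner.predict(rng.normal(size=(10, 128))))
```

Because the backbone features are fixed and only prototypes are added per phase, this kind of sketch incurs little overhead as new classes arrive; the paper's contribution is a full protocol around this idea, including adapting the feature space itself in each incremental phase.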
