SITA: Single Image Test-time Adaptation

4 Dec 2021 · Ansh Khurana, Sujoy Paul, Piyush Rai, Soma Biswas, Gaurav Aggarwal

In Test-time Adaptation (TTA), given a source model, the goal is to adapt it to make better predictions for test instances drawn from a distribution different from the source. Crucially, TTA assumes no access to the source data, or even to any additional labeled or unlabeled samples from the target distribution, for finetuning the source model. In this work, we consider TTA in a more pragmatic setting which we refer to as SITA (Single Image Test-time Adaptation). Here, when making a prediction, the model has access only to the given single test instance, rather than a batch of instances, as has typically been considered in the literature. This is motivated by realistic scenarios where inference is needed on demand rather than being delayed for an incoming batch, or where inference happens on an edge device (such as a mobile phone) with no scope for batching. The entire adaptation process in SITA must be extremely fast, as it happens at inference time. To address this, we propose a novel approach, AugBN, that requires only a single forward pass. It can be applied to any off-the-shelf trained model for single-instance inference in both classification and segmentation tasks. AugBN estimates the normalization statistics of the unseen test distribution from the given test image using only one forward pass with label-preserving transformations. Since AugBN involves no back-propagation, it is significantly faster than recent test-time adaptation methods. We further extend AugBN to make the algorithm hyperparameter-free. Rigorous experiments show that our simple algorithm achieves significant performance gains across a variety of datasets, tasks, and network architectures.
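The core idea, estimating BatchNorm statistics from label-preserving augmentations of a single test image and blending them with the stored source statistics, can be sketched in a few lines of PyTorch. The sketch below is an illustration under assumptions, not the paper's implementation: the specific augmentations, the number of copies `n_aug`, and the blending weight `alpha` are hypothetical placeholders (the paper's hyperparameter-free extension removes the need to choose such values).

```python
import torch
import torchvision.transforms as T


def augbn_predict(model, image, n_aug=32, alpha=0.5):
    """AugBN-style single-image prediction (illustrative sketch).

    `image` is a CHW float tensor. `n_aug` (number of augmented
    copies) and `alpha` (weight on the single-image statistics)
    are hypothetical hyperparameters, not values from the paper.
    """
    # Label-preserving transformations (illustrative choices).
    augment = T.Compose([
        T.RandomHorizontalFlip(),
        T.RandomResizedCrop(list(image.shape[-2:]), scale=(0.8, 1.0)),
    ])
    batch = torch.stack([augment(image) for _ in range(n_aug)])

    model.eval()  # keep dropout etc. in inference mode
    saved = {}
    for m in model.modules():
        if isinstance(m, torch.nn.modules.batchnorm._BatchNorm):
            saved[m] = (m.running_mean.clone(),
                        m.running_var.clone(),
                        m.momentum)
            m.momentum = alpha  # blend: (1 - alpha) * source + alpha * batch
            m.train()           # let this forward pass update running stats

    with torch.no_grad():
        model(batch)  # the single forward pass that estimates test statistics
        for m in saved:
            m.eval()  # now predict with the blended statistics
        logits = model(image.unsqueeze(0))

    # Restore the source statistics so the model remains reusable.
    for m, (mean, var, momentum) in saved.items():
        m.running_mean.copy_(mean)
        m.running_var.copy_(var)
        m.momentum = momentum
    return logits
```

Setting the BatchNorm momentum to `alpha` makes the single train-mode forward pass compute `running = (1 - alpha) * source + alpha * batch_estimate`, which is one simple way to realize the blending; the weighting scheme in the paper may differ.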
