Test-time Adaptation
205 papers with code • 1 benchmark • 2 datasets
Libraries
Use these libraries to find Test-time Adaptation models and implementations.
Most implemented papers
Benchmarking Robustness of 3D Point Cloud Recognition Against Common Corruptions
Deep neural networks on 3D point cloud data have been widely used in the real world, especially in safety-critical applications.
Advances in Multimodal Adaptation and Generalization: From Traditional Approaches to Foundation Models
The recent advent of large-scale pre-trained multimodal foundation models, such as CLIP, has inspired works that leverage these models to improve adaptation and generalization performance, or that adapt them to downstream tasks.
Continual Test-Time Domain Adaptation
However, real-world machine perception systems are running in non-stationary and continually changing environments where the target domain distribution can change over time.
Test-Time Adaptable Neural Networks for Robust Medical Image Segmentation
In medical image segmentation, this premise is violated when there is a mismatch between training and test images in terms of their acquisition details, such as the scanner model or the protocol.
Tent: Fully Test-time Adaptation by Entropy Minimization
A model must adapt itself to generalize to new and different data during testing.
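Tent's core idea, fully test-time adaptation by entropy minimization, can be sketched in a few lines: the model's prediction entropy on an unlabeled test batch serves as the loss, and only the affine (scale/shift) parameters of the normalization layer are updated. The toy classifier, dimensions, and finite-difference gradients below are illustrative stand-ins, not the paper's implementation (which uses backpropagation through batch-norm layers of a real network).

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def entropy(p):
    # Mean Shannon entropy of the predicted class distributions.
    return -(p * np.log(p + 1e-12)).sum(axis=-1).mean()

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 3))       # frozen toy classifier (feature_dim=8, 3 classes)
x = rng.normal(size=(16, 8))      # an unlabeled batch of test features

# Tent adapts only the affine parameters applied after normalization.
gamma, beta = np.ones(8), np.zeros(8)

def loss(gamma, beta):
    x_norm = (x - x.mean(0)) / (x.std(0) + 1e-5)  # normalization stats from the test batch
    return entropy(softmax((gamma * x_norm + beta) @ W))

# One adaptation step; gradients via finite differences for the sketch.
lr, eps = 0.05, 1e-4
before = loss(gamma, beta)
grad_g = np.array([(loss(gamma + eps * np.eye(8)[i], beta) - before) / eps for i in range(8)])
grad_b = np.array([(loss(gamma, beta + eps * np.eye(8)[i]) - before) / eps for i in range(8)])
gamma -= lr * grad_g
beta -= lr * grad_b
after = loss(gamma, beta)
```

After the update, `after < before`: the model's predictions on the test batch have become more confident, which is the adaptation signal Tent relies on in the absence of labels.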
MEMO: Test Time Robustness via Adaptation and Augmentation
We study the problem of test time robustification, i.e., using the test input to improve model robustness.
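MEMO's objective is the entropy of the marginal output distribution, i.e., the prediction averaged over many augmented copies of a single test input. A minimal sketch, with a toy linear classifier and additive-noise "augmentations" standing in for the paper's image augmentations:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(1)
W = rng.normal(size=(8, 3))        # frozen toy classifier
x = rng.normal(size=(8,))          # a single test input

# Illustrative "augmentations": noisy copies (stand-in for crops, flips, etc.).
augs = [x + 0.1 * rng.normal(size=8) for _ in range(32)]
probs = softmax(np.stack([a @ W for a in augs]))   # (32, 3) predictions

# Marginal output distribution: average prediction across augmentations.
p_marginal = probs.mean(axis=0)
marginal_entropy = -(p_marginal * np.log(p_marginal + 1e-12)).sum()

# MEMO adapts the model parameters to minimize marginal_entropy, which
# encourages both confident and augmentation-consistent predictions,
# then classifies the original input with the adapted model.
pred = int(np.argmax(p_marginal))
```

Minimizing this single scalar couples two effects: predictions become more confident and more invariant across augmentations, without needing any label.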
Listen, Adapt, Better WER: Source-free Single-utterance Test-time Adaptation for Automatic Speech Recognition
Although deep learning-based end-to-end Automatic Speech Recognition (ASR) has shown remarkable performance in recent years, it suffers severe performance regression on test samples drawn from different data distributions.
Test-Time Adaptation via Self-Training with Nearest Neighbor Information
To overcome this limitation, we propose a novel test-time adaptation method, called Test-time Adaptation via Self-Training with nearest neighbor information (TAST), which consists of the following procedures: (1) add trainable adaptation modules on top of the trained feature extractor; (2) define a pseudo-label distribution for the test data using nearest neighbor information; (3) train these modules for only a few steps at test time to match the nearest neighbor-based pseudo-label distribution and a prototype-based class distribution for the test data; and (4) predict the label of test data using the average predicted class distribution from these modules.
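The two distributions that TAST matches can be sketched concretely. Below, the support set, prototypes, and dimensions are hypothetical toys; step (3), training the adaptation modules to align the two distributions, is noted in a comment rather than implemented.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(2)
C, D, k = 3, 8, 5                  # classes, feature dim, neighbors (toy values)

# Hypothetical support set of features; labels come from a frozen classifier
# whose class weight vectors we also treat as prototypes.
support_feats = rng.normal(size=(50, D))
prototypes = rng.normal(size=(C, D))
support_labels = np.argmax(support_feats @ prototypes.T, axis=1)

test_feat = rng.normal(size=(D,))

# (2) Nearest-neighbor pseudo-label distribution: average the one-hot labels
# of the k nearest support features.
dists = np.linalg.norm(support_feats - test_feat, axis=1)
nn_idx = np.argsort(dists)[:k]
p_nn = np.bincount(support_labels[nn_idx], minlength=C) / k

# Prototype-based class distribution: softmax over negative distances to prototypes.
p_proto = softmax(-np.linalg.norm(prototypes - test_feat, axis=1))

# (3) TAST trains the adaptation modules so p_proto matches p_nn; (4) the final
# label averages the modules' predicted distributions. As a stand-in:
pred = int(np.argmax((p_nn + p_proto) / 2))
```

The intuition is that neighborhood structure in feature space is more robust to distribution shift than raw classifier confidence, so the nearest-neighbor distribution serves as the self-training target.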
MECTA: Memory-Economic Continual Test-Time Model Adaptation
The proposed MECTA is efficient and can be seamlessly plugged into state-of-the-art CTA algorithms with negligible computation and memory overhead.
ViDA: Homeostatic Visual Domain Adapter for Continual Test Time Adaptation
Note that our method can be regarded as a novel transfer paradigm for large-scale models, delivering promising results in adaptation to continually changing distributions.