no code implementations • 26 Mar 2024 • Shohei Enomoto, Naoya Hasegawa, Kazuki Adachi, Taku Sasaki, Shin'ya Yamaguchi, Satoshi Suzuki, Takeharu Eda
We hypothesize that enhancing the input image reduces the uncertainty of predictions and increases the accuracy of TTA methods.
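A standard way to quantify the prediction uncertainty referred to here is the Shannon entropy of the softmax output: lower entropy means a sharper, more confident prediction. A minimal sketch (the function name and example logits are illustrative, not from the paper):

```python
import numpy as np

def predictive_entropy(logits):
    """Shannon entropy of the softmax distribution; lower = more confident."""
    z = logits - logits.max(axis=-1, keepdims=True)  # stabilize exp
    p = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    return -(p * np.log(p + 1e-12)).sum(axis=-1)

sharp = predictive_entropy(np.array([5.0, 0.0, 0.0]))  # confident prediction
flat = predictive_entropy(np.array([1.0, 1.0, 1.0]))   # uniform prediction
print(sharp < flat)  # prints True
```

Under the paper's hypothesis, an enhanced input would move the model's output toward the low-entropy (confident) regime.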
no code implementations • 21 Mar 2024 • Kazuki Adachi, Shohei Enomoto, Taku Sasaki, Shin'ya Yamaguchi
However, the uncertainty cannot be computed in re-id in the same way as in classification, since re-id is an open-set task in which person labels are not shared between training and testing.
no code implementations • 15 Mar 2024 • Shin'ya Yamaguchi, Sekitoshi Kanai, Kazuki Adachi, Daiki Chijiwa
To this end, AdaRand minimizes the gap between feature vectors and random reference vectors that are sampled from class conditional Gaussian distributions.
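The sentence above can be sketched as a loss term: sample a random reference vector from each sample's class-conditional Gaussian and penalize the distance between the feature vector and that reference. This is a hedged illustration of the stated idea, with assumed names (`reference_gap_loss`, unit-variance Gaussians), not AdaRand's exact formulation:

```python
import numpy as np

def reference_gap_loss(features, labels, class_means, sigma=1.0, rng=None):
    """Mean squared gap between features (B, D) and random reference
    vectors drawn from N(class_means[label], sigma^2 I)."""
    rng = np.random.default_rng(0) if rng is None else rng
    mu = class_means[labels]                       # (B, D) per-sample class mean
    ref = mu + sigma * rng.standard_normal(mu.shape)  # sample reference vectors
    return np.mean(np.sum((features - ref) ** 2, axis=1))

rng = np.random.default_rng(42)
feats = rng.standard_normal((8, 16))     # toy feature batch
labels = rng.integers(0, 3, size=8)      # toy class labels
means = np.zeros((3, 16))                # toy class-conditional means
loss = reference_gap_loss(feats, labels, means, rng=rng)
```

In training, this term would be minimized alongside the task loss so that features stay spread around the class-conditional distributions rather than collapsing.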
no code implementations • 14 Mar 2023 • Yasutoshi Ida, Sekitoshi Kanai, Kazuki Adachi, Atsutoshi Kumagai, Yasuhiro Fujiwara
Regularized discrete optimal transport (OT) is a powerful tool to measure the distance between two discrete distributions that have been constructed from data samples on two different domains.
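The standard solver for entropy-regularized discrete OT is Sinkhorn's algorithm, which alternately rescales the rows and columns of a Gibbs kernel until the transport plan matches both marginals. A minimal sketch (a generic Sinkhorn implementation, not the paper's specific method):

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, n_iters=200):
    """Entropy-regularized OT cost between histograms a (n,) and b (m,)
    with cost matrix C (n, m); eps is the regularization strength."""
    K = np.exp(-C / eps)          # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):      # alternating marginal projections
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]   # transport plan with marginals a, b
    return np.sum(P * C)              # transport cost under the plan

a = np.array([0.5, 0.5])              # source histogram
b = np.array([0.5, 0.5])              # target histogram
C = np.array([[0.0, 1.0], [1.0, 0.0]])  # pairwise costs
cost = sinkhorn(a, b, C)
```

For identical histograms with zero diagonal cost, the regularized cost is close to zero, converging to the exact OT cost as `eps` shrinks.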
no code implementations • 28 Apr 2022 • Kazuki Adachi, Shin'ya Yamaguchi, Atsutoshi Kumagai
Test-time adaptation (TTA), which aims to adapt models without accessing the training dataset, is one of the settings that can address this problem.
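One common TTA strategy that fits this setting (and is only a hedged illustration, not necessarily this paper's method) is to re-estimate normalization statistics from the incoming test batch, since that requires no access to the training dataset:

```python
import numpy as np

def tta_normalize(x_test, train_mean, train_var, momentum=0.9, eps=1e-5):
    """Normalize a test batch (B, D) by blending stored training statistics
    with statistics estimated from the test batch itself."""
    batch_mean = x_test.mean(axis=0)
    batch_var = x_test.var(axis=0)
    mean = momentum * train_mean + (1 - momentum) * batch_mean
    var = momentum * train_var + (1 - momentum) * batch_var
    return (x_test - mean) / np.sqrt(var + eps)

rng = np.random.default_rng(0)
x = rng.standard_normal((32, 4)) + 2.0   # test batch with a distribution shift
out = tta_normalize(x, np.zeros(4), np.ones(4))
```

The `momentum` blend between training and test statistics is an assumed design choice; methods differ in how much they trust the test batch.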
no code implementations • 9 Feb 2022 • Kazuki Adachi, Shin'ya Yamaguchi
Under this type of distribution shift, CNNs learn to focus on task-irrelevant features, such as backgrounds in the training data, which degrades their accuracy on the test data.