1 code implementation • 1 Dec 2023 • Maorong Wang, Nicolas Michel, Ling Xiao, Toshihiko Yamasaki
To this end, we propose Collaborative Continual Learning (CCL), a collaborative-learning-based strategy to improve the model's capability in acquiring new concepts.
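The snippet above does not spell out CCL's actual mechanics; as a point of reference only, below is a minimal sketch of a generic collaborative-learning step in which two peer networks teach each other through mutual distillation. The pairing, the weighting `beta`, and all names are illustrative assumptions, not the paper's design.

```python
import torch
import torch.nn.functional as F

def collaborative_step(model_a, model_b, x, y, beta=0.5):
    """One generic collaborative-learning step: each peer is trained on
    the ground-truth labels plus a KL term toward the other peer's
    predictions. Illustrative sketch, not the CCL objective itself."""
    logits_a, logits_b = model_a(x), model_b(x)
    ce = F.cross_entropy(logits_a, y) + F.cross_entropy(logits_b, y)
    # Mutual distillation: peers teach each other (targets detached).
    kl_a = F.kl_div(F.log_softmax(logits_a, dim=1),
                    F.softmax(logits_b.detach(), dim=1),
                    reduction="batchmean")
    kl_b = F.kl_div(F.log_softmax(logits_b, dim=1),
                    F.softmax(logits_a.detach(), dim=1),
                    reduction="batchmean")
    return ce + beta * (kl_a + kl_b)
```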
no code implementations • 6 Sep 2023 • Nicolas Michel, Maorong Wang, Ling Xiao, Toshihiko Yamasaki
While Knowledge Distillation (KD) has been extensively used in offline Continual Learning, it remains under-exploited in Online Continual Learning (OCL), despite its potential.
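For readers unfamiliar with KD, here is a minimal sketch of the standard soft-label distillation objective (Hinton et al., 2015); the temperature `T` and mixing weight `alpha` are illustrative defaults, and the paper's OCL-specific formulation may differ.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Soft-label knowledge distillation: cross-entropy on the labels
    plus a KL term matching the student's softened predictions to the
    teacher's. T and alpha are illustrative, not the paper's values."""
    # The T*T factor keeps gradient magnitudes comparable across
    # temperatures, as in the original KD formulation.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```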
no code implementations • 23 May 2023 • Zerun Wang, Ling Xiao, Liuyu Xiang, Zhaotian Weng, Toshihiko Yamasaki
To alleviate these issues, this paper proposes an end-to-end online open-set semi-supervised object detection (OSSOD) framework that improves both performance and efficiency: 1) We propose a semi-supervised outlier filtering method that filters out-of-distribution (OOD) instances more effectively by using both labeled and unlabeled data.
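The abstract does not detail the filtering rule itself; the sketch below shows a generic confidence-based stand-in that keeps only high-confidence pseudo-labeled instances. The `threshold` value is an assumed hyperparameter, and this is not the paper's semi-supervised filtering method.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def filter_ood_instances(logits, threshold=0.7):
    """Illustrative OOD filter: keep unlabeled detections whose maximum
    softmax probability exceeds a confidence threshold. A generic
    stand-in for demonstration, not the paper's filtering method."""
    probs = F.softmax(logits, dim=1)
    conf, pseudo_labels = probs.max(dim=1)
    keep = conf >= threshold  # treat high-confidence boxes as in-distribution
    return keep, pseudo_labels[keep]
```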
no code implementations • 14 Mar 2023 • Maorong Wang, Ling Xiao, Toshihiko Yamasaki
Online knowledge distillation (KD) has received increasing attention in recent years.
1 code implementation • 27 Dec 2022 • Ling Xiao, Toshihiko Yamasaki
Our model consistently outperforms existing attention-based methods when assessed on the FashionAI (62.8788% in MAP), DeepFashion (8.9804% in MAP), and Zappos50k (93.32% in prediction accuracy) datasets.
no code implementations • 27 Dec 2022 • Ling Xiao, Toshihiko Yamasaki
In this paper, we propose a general color distortion prediction task that forces the baseline to recognize low-level image information and thereby learn more discriminative representations for fashion compatibility prediction.
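As a rough illustration of such a task (not the paper's implementation), one can distort an image with known color factors and train a small head to regress them; the factor ranges, the three-factor parameterization, and the head architecture below are all assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.transforms.functional as TF

class ColorDistortionPretext(nn.Module):
    """Illustrative sketch: distort an image batch with known
    brightness/contrast/saturation factors and regress them back,
    so the backbone must encode low-level color statistics."""

    def __init__(self, backbone, feat_dim):
        super().__init__()
        self.backbone = backbone
        self.head = nn.Linear(feat_dim, 3)  # predict the 3 factors

    def forward(self, img):
        # Sample distortion factors in [0.5, 1.5) as the target.
        factors = 0.5 + torch.rand(3)
        distorted = TF.adjust_brightness(img, factors[0].item())
        distorted = TF.adjust_contrast(distorted, factors[1].item())
        distorted = TF.adjust_saturation(distorted, factors[2].item())
        pred = self.head(self.backbone(distorted))
        return F.mse_loss(pred, factors.to(pred).expand_as(pred))
```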
1 code implementation • 25 Jun 2022 • Ling Xiao, Toshihiko Yamasaki
Then, we propose a self-adaptive triplet loss (SATL) in which the DS of the outfit is taken into account.
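Since the snippet leaves DS undefined, the sketch below treats it as a placeholder per-outfit score in [0, 1] that scales the triplet margin; `base_margin` and the scaling rule are illustrative assumptions, not the paper's SATL formulation.

```python
import torch
import torch.nn.functional as F

def self_adaptive_triplet_loss(anchor, positive, negative, ds, base_margin=0.3):
    """Sketch of a triplet loss whose margin adapts to a per-outfit
    score `ds` (a stand-in for the snippet's undefined "DS", assumed
    to lie in [0, 1]). `base_margin` is an illustrative constant."""
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    margin = base_margin * (1.0 + ds)  # harder outfits get a larger margin
    return F.relu(d_pos - d_neg + margin).mean()
```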