no code implementations • 7 Oct 2024 • Kazumoto Nakamura, Yuji Nozawa, Yu-Chieh Lin, Kengo Nakata, Youyang Ng
The goal of this paper is to improve the performance of pretrained Vision Transformer (ViT) models, particularly DINOv2, on image clustering tasks without requiring re-training or fine-tuning.
Ranked #1 on Image Clustering on Tiny-ImageNet (using extra training data)
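The setting above — clustering images from frozen pretrained ViT features — can be sketched in a few lines. This is a minimal illustration, not the paper's method: it assumes DINOv2 embeddings have already been extracted, stands in toy Gaussian blobs for them, and uses a plain k-means on L2-normalized vectors (the normalization and the farthest-point initialization are illustrative choices).

```python
import numpy as np

def l2_normalize(x):
    # Row-wise unit normalization, so Euclidean k-means behaves like
    # cosine clustering -- a common choice for ViT/DINOv2 embeddings.
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def kmeans(x, k, iters=20):
    # Plain Lloyd's k-means with a deterministic farthest-point init.
    centers = [x[0]]
    for _ in range(k - 1):
        d = np.min([((x - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(x[int(d.argmax())])  # farthest point from chosen centers
    centers = np.array(centers)
    for _ in range(iters):
        d = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = x[labels == j].mean(0)
    return labels

# Toy stand-in for precomputed DINOv2 image embeddings (hypothetical data:
# two well-separated 8-D Gaussian blobs, 20 "images" each).
rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(0.0, 0.05, (20, 8)) + np.eye(8)[i] for i in (0, 1)])
labels = kmeans(l2_normalize(feats), k=2)
```

In practice the `feats` matrix would come from a frozen DINOv2 backbone; the point of the sketch is that clustering then needs no gradient updates to the ViT at all.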
no code implementations • 25 Apr 2024 • Ryoya Nara, Yu-Chieh Lin, Yuji Nozawa, Youyang Ng, Goh Itoh, Osamu Torii, Yusuke Matsui
However, metric learning cannot handle differences in users' preferences and requires data to train an image encoder.
no code implementations • 3 Apr 2022 • Kengo Nakata, Youyang Ng, Daisuke Miyashita, Asuka Maki, Yu-Chieh Lin, Jun Deguchi
Moreover, users cannot verify the validity of inference results or assess how the knowledge contributed to those results.
Ranked #1 on Incremental Learning on ImageNet - 10 steps (using extra training data)
2 code implementations • 27 Apr 2020 • Cheng-Ming Chiang, Yu Tseng, Yu-Syuan Xu, Hsien-Kai Kuo, Yi-Min Tsai, Guan-Yu Chen, Koan-Sin Tan, Wei-Ting Wang, Yu-Chieh Lin, Shou-Yao Roy Tseng, Wei-Shiang Lin, Chia-Lin Yu, BY Shen, Kloze Kao, Chia-Ming Cheng, Hung-Jen Chen
To the best of our knowledge, this is the first paper that addresses all the deployment issues of the image deblurring task across mobile devices.