no code implementations • 26 Dec 2023 • Zeqiang Wei, Kai Jin, Xiuzhuang Zhou
However, the significant differences between the inputs of the two proxy tasks cause task competition and information interference, which limits the effectiveness of representation learning for intra-modal and cross-modal features.
no code implementations • 5 Jun 2023 • Tengjin Weng, Yang Shen, Kai Jin, Zhiming Cheng, Yunxiang Li, Gewen Zhang, Shuai Wang, Yaqi Wang
Specifically, we use points to annotate fluid regions in unlabeled OCT images and the Superpixel-Guided Pseudo-Label Generation (SGPLG) module generates pseudo-labels and pixel-level label trust maps from the point annotations.
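The propagation step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it substitutes a simple grid partition for a real superpixel segmentation (e.g. SLIC), and the function names and the binary trust map are hypothetical simplifications of the paper's pixel-level label trust maps.

```python
import numpy as np

def grid_superpixels(h, w, cell=4):
    """Toy stand-in for a superpixel segmentation (e.g. SLIC):
    partition the image into square cells, one integer id per cell."""
    rows = np.arange(h) // cell
    cols = np.arange(w) // cell
    return rows[:, None] * ((w + cell - 1) // cell) + cols[None, :]

def pseudo_labels_from_points(seg, points):
    """Propagate sparse point annotations to whole superpixels.

    seg    : (H, W) int array of superpixel ids
    points : iterable of (row, col, class_id) point annotations
    Returns a (H, W) pseudo-label map (-1 = unlabeled) and a
    (H, W) trust map (1.0 inside annotated superpixels, else 0.0).
    """
    pseudo = np.full(seg.shape, -1, dtype=np.int64)
    trust = np.zeros(seg.shape, dtype=np.float64)
    for r, c, cls in points:
        mask = seg == seg[r, c]   # all pixels sharing the point's superpixel
        pseudo[mask] = cls
        trust[mask] = 1.0
    return pseudo, trust

# One point annotation in the top-left 4x4 cell labels that whole cell.
seg = grid_superpixels(8, 8, cell=4)
pseudo, trust = pseudo_labels_from_points(seg, [(1, 1, 1)])
```

A real pipeline would additionally down-weight the trust map near superpixel boundaries, where label propagation is least reliable.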
no code implementations • 28 Oct 2021 • Yunxiang Li, Jingxiong Li, Ruilong Dan, Shuai Wang, Kai Jin, Guodong Zeng, Jun Wang, Xiangji Pan, Qianni Zhang, Huiyu Zhou, Qun Jin, Li Wang, Yaqi Wang
To mitigate this problem, a novel unsupervised domain adaptation (UDA) method named dispensed Transformer network (DTNet) is introduced in this paper.
no code implementations • 26 Aug 2018 • Dongyu Liu, Weiwei Cui, Kai Jin, YuXiao Guo, Huamin Qu
To bridge this gap and help domain experts with their training tasks in a practical environment, we propose a visual analytics system, DeepTracker, to facilitate the exploration of the rich dynamics of CNN training processes and to identify the unusual patterns hidden in the huge volume of training logs.