Search Results for author: Xiangli Yang

Found 7 papers, 3 papers with code

Generalized Category Discovery with Clustering Assignment Consistency

no code implementations • 30 Oct 2023 • Xiangli Yang, Xinglin Pan, Irwin King, Zenglin Xu

To address GCD without knowing the number of classes in the unlabeled dataset, we propose a co-training-based framework that encourages clustering consistency.

Clustering · Community Detection · +2
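
The "clustering assignment consistency" idea lends itself to a short illustration: two augmented views of the same batch are pushed to agree on their soft cluster assignments. The sketch below is an assumption about what such a consistency term could look like (a symmetric KL between the two assignment distributions); the names `logits_view1`/`logits_view2` and the loss form are illustrative, not the paper's actual objective.

```python
import torch
import torch.nn.functional as F

def assignment_consistency_loss(logits_view1, logits_view2):
    """Encourage two augmented views of the same batch to receive similar
    soft cluster assignments (symmetric KL between the two assignment
    distributions). Illustrative sketch, not the paper's objective."""
    p = F.softmax(logits_view1, dim=-1)
    q = F.softmax(logits_view2, dim=-1)
    kl_pq = F.kl_div(q.log(), p, reduction="batchmean")  # KL(p || q)
    kl_qp = F.kl_div(p.log(), q, reduction="batchmean")  # KL(q || p)
    return 0.5 * (kl_pq + kl_qp)

# toy usage: 8 samples, 5 candidate clusters
logits_a = torch.randn(8, 5)
logits_b = logits_a + 0.1 * torch.randn(8, 5)  # a slightly perturbed "view"
print(assignment_consistency_loss(logits_a, logits_b))
```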

Tensor Networks Meet Neural Networks: A Survey and Future Perspectives

1 code implementation • 22 Jan 2023 • Maolin Wang, Yu Pan, Zenglin Xu, Xiangli Yang, Guangxi Li, Andrzej Cichocki

Interestingly, although these two types of networks originate from different observations, they are inherently linked through the multilinear structure that underlies both TNs and NNs, which has motivated a significant number of developments combining the two.

Tensor Networks
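
As a deliberately minimal illustration of the multilinearity link the survey discusses: the dense weight matrix of a linear layer can be replaced by a contraction of small cores, the simplest tensor-network-style parameterisation. The sketch below uses a two-core (low-rank) factorisation; real tensor-network layers such as tensor-train layers use more cores and reshape the input into a higher-order tensor. This is illustrative code, not code from the survey.

```python
import torch
import torch.nn as nn

class LowRankLinear(nn.Module):
    """The dense weight W (out x in) is never materialised; it is the
    contraction of two small cores A (out x r) and B (r x in)."""
    def __init__(self, in_features, out_features, rank):
        super().__init__()
        self.A = nn.Parameter(torch.randn(out_features, rank) * 0.02)
        self.B = nn.Parameter(torch.randn(rank, in_features) * 0.02)

    def forward(self, x):
        # contract with B first, then A; parameters scale as r*(in+out)
        return (x @ self.B.t()) @ self.A.t()

layer = LowRankLinear(in_features=1024, out_features=1024, rank=16)
print(sum(p.numel() for p in layer.parameters()))  # 32768 vs 1048576 dense
```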

AutoFT: Automatic Fine-Tune for Parameters Transfer Learning in Click-Through Rate Prediction

no code implementations • 9 Jun 2021 • Xiangli Yang, Qing Liu, Rong Su, Ruiming Tang, Zhirong Liu, Xiuqiang He

The field-wise transfer policy decides how the pre-trained embedding representations are frozen or fine-tuned based on the given instance from the target domain.

Click-Through Rate Prediction · Recommendation Systems · +1
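
The field-wise freeze-or-fine-tune decision can be pictured with a small sketch: keep each field's pre-trained embedding table frozen, keep a trainable copy, and let a per-field gate mix the two. The class and parameter names below are assumptions made for illustration only; in particular, the paper decides per target-domain instance via a learned transfer policy, whereas the toy gate here is a single learnable scalar per field.

```python
import torch
import torch.nn as nn

class FieldwiseTransfer(nn.Module):
    """Toy field-wise freeze-vs-fine-tune mixing; illustrative only."""
    def __init__(self, num_fields, vocab_sizes, dim, pretrained_tables):
        super().__init__()
        self.frozen = nn.ModuleList(pretrained_tables)
        for table in self.frozen:
            table.weight.requires_grad_(False)           # keep source weights intact
        self.tuned = nn.ModuleList(
            [nn.Embedding(v, dim) for v in vocab_sizes]  # fine-tunable copies
        )
        for f, t in zip(self.frozen, self.tuned):
            t.weight.data.copy_(f.weight.data)           # start from pre-trained values
        self.gate_logits = nn.Parameter(torch.zeros(num_fields))

    def forward(self, ids):                              # ids: (batch, num_fields)
        outs = []
        for i, (f, t) in enumerate(zip(self.frozen, self.tuned)):
            g = torch.sigmoid(self.gate_logits[i])       # 0 = frozen, 1 = fine-tuned
            outs.append(g * t(ids[:, i]) + (1 - g) * f(ids[:, i]))
        return torch.stack(outs, dim=1)                  # (batch, num_fields, dim)

vocab = [1000, 500, 200]
pre = [nn.Embedding(v, 16) for v in vocab]               # stand-in for source-domain tables
model = FieldwiseTransfer(3, vocab, 16, pre)
print(model(torch.randint(0, 200, (4, 3))).shape)        # torch.Size([4, 3, 16])
```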

A Survey on Deep Semi-supervised Learning

no code implementations • 28 Feb 2021 • Xiangli Yang, Zixing Song, Irwin King, Zenglin Xu

Deep semi-supervised learning is a fast-growing field with a range of practical applications.

Graph-based Semi-supervised Learning: A Comprehensive Review

1 code implementation • 26 Feb 2021 • Zixing Song, Xiangli Yang, Zenglin Xu, Irwin King

An important class of SSL methods represents data naturally as graphs so that the label information of unlabelled samples can be inferred from the graph; these are graph-based semi-supervised learning (GSSL) methods.

Graph Embedding
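
For readers unfamiliar with the GSSL family, the classic label-propagation baseline makes the "infer labels of unlabelled samples from the graph" idea concrete: seed labels are diffused over a normalised adjacency matrix until they stabilise. The sketch below is a minimal generic example of this family (the local-and-global-consistency style iteration), not code from the review.

```python
import numpy as np

def label_propagation(W, y_labeled, labeled_idx, num_classes, alpha=0.9, iters=50):
    """Iterate F <- alpha * S @ F + (1 - alpha) * Y on the symmetrically
    normalised adjacency S; illustrative GSSL baseline."""
    n = W.shape[0]
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(W.sum(axis=1), 1e-12))
    S = W * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]    # D^-1/2 W D^-1/2
    Y = np.zeros((n, num_classes))
    Y[labeled_idx, y_labeled] = 1.0                       # one-hot seed labels
    F = Y.copy()
    for _ in range(iters):
        F = alpha * S @ F + (1 - alpha) * Y
    return F.argmax(axis=1)                               # predicted class per node

# toy graph: two triangles joined by one weak edge
W = np.zeros((6, 6))
for a, b in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    W[a, b] = W[b, a] = 1.0
W[2, 3] = W[3, 2] = 0.1
print(label_propagation(W, y_labeled=[0, 1], labeled_idx=[0, 5], num_classes=2))
```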
