Search Results for author: Mouxing Yang

Found 9 papers, 8 papers with code

Cross-modal Retrieval with Noisy Correspondence via Consistency Refining and Mining

1 code implementation · IEEE Transactions on Image Processing 2024 · Xinran Ma, Mouxing Yang, Yunfan Li, Peng Hu, Jiancheng Lv, Xi Peng

Thanks to the consistency refining and mining strategy of CREAM, overfitting on false positives can be prevented and the consistency rooted in false negatives can be exploited, leading to a robust CMR method.

Cross-modal Retrieval with Noisy Correspondence · Graph Matching +1

An Empirical Study of Parameter Efficient Fine-tuning on Vision-Language Pre-train Model

no code implementations · 13 Mar 2024 · Yuxin Tian, Mouxing Yang, Yunfan Li, Dayiheng Liu, Xingzhang Ren, Xi Peng, Jiancheng Lv

A natural expectation for PEFTs is that their performance is positively related to the data size and the number of fine-tunable parameters.

Decoupled Contrastive Multi-View Clustering with High-Order Random Walks

1 code implementation · 22 Aug 2023 · Yiding Lu, Yijie Lin, Mouxing Yang, Dezhong Peng, Peng Hu, Xi Peng

Recently, some robust contrastive multi-view clustering (MvC) methods have been proposed that construct data pairs from neighborhoods to alleviate the false-negative issue, i.e., some intra-cluster samples being wrongly treated as negative pairs.

Clustering · Contrastive Learning

Semantic Invariant Multi-view Clustering with Fully Incomplete Information

1 code implementation · 22 May 2023 · Pengxin Zeng, Mouxing Yang, Yiding Lu, Changqing Zhang, Peng Hu, Xi Peng

To address this problem, we present a novel framework called SeMantic Invariance LEarning (SMILE) for multi-view clustering with incomplete information that does not require any paired samples.


Incomplete Multi-view Clustering via Prototype-based Imputation

1 code implementation · 26 Jan 2023 · Haobin Li, Yunfan Li, Mouxing Yang, Peng Hu, Dezhong Peng, Xi Peng

Thanks to our dual-stream model, both cluster- and view-specific information can be captured, and thus the instance commonality and view versatility can be preserved to facilitate IMvC.

Clustering · Contrastive Learning +2

Graph Matching with Bi-level Noisy Correspondence

3 code implementations · ICCV 2023 · Yijie Lin, Mouxing Yang, Jun Yu, Peng Hu, Changqing Zhang, Xi Peng

In this paper, we study a novel and widely existing problem in graph matching (GM), namely Bi-level Noisy Correspondence (BNC), which comprises node-level noisy correspondence (NNC) and edge-level noisy correspondence (ENC).

Contrastive Learning · Graph Learning +1

Twin Contrastive Learning for Online Clustering

2 code implementations · 21 Oct 2022 · Yunfan Li, Mouxing Yang, Dezhong Peng, Taihao Li, Jiantao Huang, Xi Peng

Specifically, we find that when the data is projected into a feature space whose dimensionality equals the target cluster number, the rows and columns of its feature matrix correspond to the instance and cluster representations, respectively.

Clustering · Contrastive Learning +3
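The row/column observation above can be sketched numerically. This is a hypothetical NumPy illustration, not the authors' code: the random projection `W` and the softmax head are assumptions, chosen only to produce a feature matrix whose width equals the cluster number.

```python
import numpy as np

# Project n instances into a feature space whose dimensionality equals
# the target cluster number k (hypothetical sketch, not the TCL model).
rng = np.random.default_rng(0)
n, d, k = 6, 10, 3

X = rng.normal(size=(n, d))   # raw instance features
W = rng.normal(size=(d, k))   # assumed linear projection head

# A softmax over the k dimensions turns each row into a soft cluster assignment.
logits = X @ W
F = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

rows = F    # each row: one instance's representation over the k clusters
cols = F.T  # each row of F.T: one cluster's representation over the n instances

assert rows.shape == (n, k) and cols.shape == (k, n)
assert np.allclose(F.sum(axis=1), 1.0)  # every row is a probability distribution
```

Under this view, contrasting rows of `F` yields instance-level learning while contrasting its columns yields cluster-level learning, which is the intuition the abstract describes.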

Partially View-aligned Representation Learning with Noise-robust Contrastive Loss

1 code implementation · CVPR 2021 · Mouxing Yang, Yunfan Li, Zhenyu Huang, Zitao Liu, Peng Hu, Xi Peng

To solve such a less-touched problem without the help of labels, we propose simultaneously learning representation and aligning data using a noise-robust contrastive loss.

Clustering · Contrastive Learning +2
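For intuition on the contrastive component, here is a generic InfoNCE-style loss over two views. This is NOT the paper's noise-robust loss; the function name, the temperature value, and the "diagonal pairs are positives" setup are all illustrative assumptions.

```python
import numpy as np

def contrastive_loss(za, zb, temperature=0.5):
    """Generic InfoNCE-style loss (illustrative, not the paper's variant).

    za, zb: (n, d) L2-normalized embeddings of two views; za[i] and zb[i]
    form a positive pair, and all other cross-view pairs act as negatives.
    """
    sim = za @ zb.T / temperature               # (n, n) similarity matrix
    sim = sim - sim.max(axis=1, keepdims=True)  # numerical stability
    log_softmax = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_softmax))       # pull positives to the diagonal

rng = np.random.default_rng(1)
z = rng.normal(size=(4, 8))
z /= np.linalg.norm(z, axis=1, keepdims=True)

# Correctly aligned views should incur a lower loss than misaligned ones,
# which is exactly what makes noisy (misaligned) pairs harmful to train on.
aligned = contrastive_loss(z, z)
misaligned = contrastive_loss(z, z[::-1].copy())
```

The gap between `aligned` and `misaligned` illustrates why a plain contrastive loss overfits wrongly aligned pairs in the partially view-aligned setting, motivating a noise-robust formulation.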
