1 code implementation • 15 Jan 2025 • Baoming Zhang, Mingcai Chen, Jianqing Song, Shuangjie Li, Jie Zhang, Chongjun Wang
In this paper, we first analyze the restrictions on the generalization of GNNs from the perspective of supervision signals in the context of few-shot semi-supervised node classification.
no code implementations • 18 Nov 2024 • Zhendong Liu, Yuanbi Nie, Yingshui Tan, Jiaheng Liu, Xiangyu Yue, Qiushi Cui, Chongjun Wang, Xiaoyong Zhu, Bo Zheng
However, recent research shows that the visual modality in VLMs is highly vulnerable, allowing attackers to bypass safety alignment in LLMs through visually transmitted content, launching harmful attacks.
no code implementations • 7 Nov 2024 • Shuangjie Li, Jianqing Song, Baoming Zhang, Gaoli Ruan, Junyuan Xie, Chongjun Wang
The key idea behind GaGSL is to learn a compact and informative graph structure for node classification tasks.
no code implementations • 6 Nov 2024 • Shuangjie Li, Baoming Zhang, Jianqing Song, Gaoli Ruan, Chongjun Wang, Junyuan Xie
Next, we propose a clean labels oriented link that connects unlabeled nodes to cleanly labeled nodes, aimed at mitigating label sparsity and promoting supervision propagation.
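A minimal sketch of the kind of clean-labels-oriented link described above, assuming cosine similarity in feature space and a dense adjacency matrix; the paper's actual link construction and its criterion for identifying cleanly labeled nodes may differ.

```python
# Hedged sketch: connect each unlabeled node to its most similar cleanly
# labeled node so that supervision can propagate. Similarity measure and
# clean-node identification are assumptions for illustration only.
import numpy as np

def add_clean_links(adj, features, clean_idx, unlabeled_idx):
    """adj: (n, n) dense adjacency; returns a copy with extra edges added."""
    adj = adj.copy()
    feat = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-8)
    sim = feat[unlabeled_idx] @ feat[clean_idx].T          # cosine similarity
    nearest = np.asarray(clean_idx)[sim.argmax(axis=1)]    # best clean node for each unlabeled node
    adj[unlabeled_idx, nearest] = 1.0
    adj[nearest, unlabeled_idx] = 1.0                      # keep the graph symmetric
    return adj

# Toy usage: 6 nodes, nodes 0-1 cleanly labeled, nodes 4-5 unlabeled.
A = np.eye(6)
X = np.random.default_rng(0).normal(size=(6, 4))
print(add_clean_links(A, X, clean_idx=[0, 1], unlabeled_idx=[4, 5]))
```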
no code implementations • 14 Oct 2024 • Changfeng Ma, Pengxiao Guo, Shuangyu Yang, Yinuo Chen, Jie Guo, Chongjun Wang, Yanwen Guo, Wenping Wang
Extensive evaluations demonstrate the superiority of our method on reconstruction from point cloud, generation, and interpolation.
1 code implementation • 23 May 2024 • Jianqing Song, Jianguo Huang, Wenyu Jiang, Baoming Zhang, Shuangjie Li, Chongjun Wang
In this paper, we empirically show that for each node, aggregating the non-conformity scores of nodes with the same label can improve the efficiency of conformal prediction sets while maintaining valid marginal coverage.
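A minimal split-conformal sketch of this idea in NumPy. The label-wise aggregation here is an assumed simple blend of each calibration node's non-conformity score with the mean score of calibration nodes sharing its label; the paper's aggregation rule and its graph-based variant are not reproduced.

```python
# Minimal split-conformal sketch for node classification (NumPy only).
# Illustrates smoothing non-conformity scores with same-label scores
# before computing the quantile threshold; not the paper's exact algorithm.
import numpy as np

def conformal_sets(probs_cal, y_cal, probs_test, alpha=0.1, mix=0.5):
    """probs_*: (n, C) softmax outputs; y_cal: (n,) true labels of calibration nodes."""
    n, C = probs_cal.shape
    # Standard non-conformity score: 1 - probability assigned to the true class.
    raw = 1.0 - probs_cal[np.arange(n), y_cal]
    # Assumed label-wise aggregation: blend each score with the mean score
    # of calibration nodes carrying the same label.
    label_mean = np.array([raw[y_cal == c].mean() if np.any(y_cal == c) else 0.0
                           for c in range(C)])
    scores = mix * raw + (1.0 - mix) * label_mean[y_cal]
    # Conformal quantile with finite-sample correction.
    q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")
    # Prediction set: all classes whose non-conformity falls below the threshold.
    return (1.0 - probs_test) <= q

# Toy usage with random softmax outputs.
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(5), size=200)
sets = conformal_sets(probs[:100], rng.integers(0, 5, 100), probs[100:])
print(sets.shape, sets.sum(axis=1).mean())  # (100, 5), average set size
```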
no code implementations • 22 May 2024 • Zhendong Liu, Yuanbi Nie, Yingshui Tan, Xiangyu Yue, Qiushi Cui, Chongjun Wang, Xiaoyong Zhu, Bo Zheng
To address this issue, we enhance the existing VLMs' visual modality safety alignment by adding safety modules, including a safety projector, safety tokens, and a safety head, through a two-stage training process, effectively improving the model's defense against risky images.
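A hedged PyTorch sketch of what such add-on safety modules could look like (a safety projector, learnable safety tokens, and a binary safety head); the dimensions, wiring, and two-stage training schedule below are assumptions for illustration, not the paper's specification.

```python
# Hedged sketch of add-on safety modules for a VLM's visual pathway.
# All sizes and the fusion scheme are assumed, not taken from the paper.
import torch
import torch.nn as nn

class SafetyModules(nn.Module):
    def __init__(self, vision_dim=1024, llm_dim=4096, num_safety_tokens=8):
        super().__init__()
        self.safety_projector = nn.Linear(vision_dim, llm_dim)   # project visual features
        self.safety_tokens = nn.Parameter(torch.randn(num_safety_tokens, llm_dim) * 0.02)
        self.safety_head = nn.Linear(llm_dim, 2)                 # safe / unsafe logits

    def forward(self, visual_features):                          # (B, N, vision_dim)
        projected = self.safety_projector(visual_features)
        tokens = self.safety_tokens.expand(projected.size(0), -1, -1)
        fused = torch.cat([tokens, projected], dim=1)
        return self.safety_head(fused.mean(dim=1))               # per-image safety logits

# Usage: score a batch of visual features from a frozen vision encoder.
logits = SafetyModules()(torch.randn(2, 256, 1024))
```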
no code implementations • 18 Dec 2023 • Shanli Tan, Hao Cheng, Xiaohu Wu, Han Yu, Tiantian He, Yew-Soon Ong, Chongjun Wang, Xiaofeng Tao
Federated learning (FL) provides a privacy-preserving approach for collaborative training of machine learning models.
no code implementations • 7 Sep 2023 • Zhendong Liu, Jie Zhang, Qiangqiang He, Chongjun Wang
In the realm of visual recognition, data augmentation stands out as a pivotal technique to amplify model robustness.
no code implementations • 31 Jul 2023 • Mingcai Chen, Yuntao Du, Wei Tang, Baoming Zhang, Hao Cheng, Shuwei Qian, Chongjun Wang
We introduce LaplaceConfidence, a method that obtains label confidence (i.e., clean probabilities) by utilizing the Laplacian energy.
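A hedged sketch of the underlying intuition, assuming a kNN feature-similarity graph and a per-sample Laplacian energy of the noisy one-hot labels; the exact LaplaceConfidence formulation differs.

```python
# Hedged sketch: labels that disagree with a feature-similarity graph's
# smoothness get lower "clean" probability. The kNN graph and the per-sample
# energy definition are illustrative assumptions.
import numpy as np
from sklearn.neighbors import kneighbors_graph

def laplace_confidence(features, noisy_labels, num_classes, k=10):
    n = features.shape[0]
    W = kneighbors_graph(features, k, mode="connectivity", include_self=False)
    W = 0.5 * (W + W.T)                      # symmetrize the kNN graph
    Y = np.eye(num_classes)[noisy_labels]    # one-hot noisy labels, (n, C)
    deg = np.asarray(W.sum(axis=1)).ravel()
    # Per-sample Laplacian energy: sum_j w_ij * ||y_i - y_j||^2
    energy = deg * (Y ** 2).sum(1) + np.asarray(W @ (Y ** 2)).sum(1) \
             - 2.0 * (Y * np.asarray(W @ Y)).sum(1)
    # Low energy (label agrees with neighbors) -> high confidence.
    conf = np.exp(-energy / (energy.mean() + 1e-8))
    return conf / conf.max()

# Toy usage on random features and labels.
rng = np.random.default_rng(0)
print(laplace_confidence(rng.normal(size=(300, 16)), rng.integers(0, 3, 300), 3)[:5])
```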
2 code implementations • 3 Jun 2023 • Wenyu Jiang, Hao Cheng, Mingcai Chen, Chongjun Wang, Hongxin Wei
Modern neural networks are known to give overconfident predictions for out-of-distribution inputs when deployed in the open world.
no code implementations • CVPR 2023 • Changfeng Ma, Yinuo Chen, Pengxiao Guo, Jie Guo, Chongjun Wang, Yanwen Guo
Extensive experiments and comparisons demonstrate the superiority and generalization ability of our method, showing that it achieves state-of-the-art performance on unsupervised completion of real-scene objects.
no code implementations • 8 Dec 2022 • Zhendong Liu, Wenyu Jiang, Min Guo, Chongjun Wang
Based on the analysis of the internal mechanisms, we develop a mask-based boosting method for data augmentation that comprehensively improves several robustness measures of AI models and beats state-of-the-art data augmentation approaches.
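A minimal sketch of a mask-based augmentation in this spirit (randomly zeroing square patches of each image); the paper's masking strategy and boosting schedule are not reproduced here.

```python
# Hedged sketch of mask-based image augmentation: occlude random patches
# before the forward pass. Patch count and size are illustrative defaults.
import torch

def random_mask(images: torch.Tensor, num_patches: int = 4, patch_size: int = 16):
    """images: (B, C, H, W) batch; zero out `num_patches` random square patches per image."""
    B, _, H, W = images.shape
    out = images.clone()
    for b in range(B):
        for _ in range(num_patches):
            y = torch.randint(0, max(H - patch_size, 1), (1,)).item()
            x = torch.randint(0, max(W - patch_size, 1), (1,)).item()
            out[b, :, y:y + patch_size, x:x + patch_size] = 0.0
    return out

# Usage: augment a batch before training the model on it.
augmented = random_mask(torch.randn(8, 3, 224, 224))
```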
1 code implementation • 6 Oct 2022 • Le Zhao, Mingcai Chen, Yuntao Du, Haiyang Yang, Chongjun Wang
We design an attention module to capture long-term dependency by mining periodic information in traffic data.
no code implementations • 22 Jun 2022 • Zhendong Liu, Wenyu Jiang, Yi Zhang, Chongjun Wang
With the rapid development of eXplainable Artificial Intelligence (XAI), a long line of past work has raised concerns that perturbation-based post-hoc XAI models suffer from the Out-of-Distribution (OOD) problem and that their explanations are socially misaligned.
no code implementations • 15 Jun 2022 • Wenyu Jiang, Yuxin Ge, Hao Cheng, Mingcai Chen, Shuai Feng, Chongjun Wang
We propose a novel method, READ (Reconstruction Error Aggregated Detector), to unify inconsistencies from classifier and autoencoder.
Out-of-Distribution Detection
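A hedged sketch of how classifier and autoencoder signals can be fused into one OOD score by standardizing both on in-distribution data; READ's actual aggregation differs, and all names below are illustrative.

```python
# Hedged sketch of fusing a classifier signal with an autoencoder signal
# into a single OOD score: standardize both detectors on in-distribution
# (ID) data, then sum them. Not the paper's exact aggregation.
import numpy as np

def fuse_ood_scores(msp_id, recon_id, msp_test, recon_test):
    """msp_*: max softmax probability; recon_*: autoencoder reconstruction error."""
    def zscore(x, ref):
        return (x - ref.mean()) / (ref.std() + 1e-8)
    # Higher score should mean "more OOD": negate MSP, keep reconstruction error.
    score_cls = zscore(-msp_test, -msp_id)
    score_ae = zscore(recon_test, recon_id)
    return score_cls + score_ae   # larger = more likely out-of-distribution

# Toy usage with synthetic detector outputs.
rng = np.random.default_rng(0)
print(fuse_ood_scores(rng.uniform(0.7, 1.0, 100), rng.uniform(0.0, 0.1, 100),
                      rng.uniform(0.3, 1.0, 50), rng.uniform(0.0, 0.5, 50))[:5])
```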
no code implementations • 18 Mar 2022 • Changfeng Ma, Yang Yang, Jie Guo, Chongjun Wang, Yanwen Guo
We propose in this paper an end-to-end network, named CS-Net, to complete point clouds contaminated by noise or containing outliers.
1 code implementation • 15 Jan 2022 • Yi Zhang, Mingyuan Chen, Jundong Shen, Chongjun Wang
Previous methods mainly focus on projecting multiple modalities into a common latent space and learning an identical representation for all labels, which neglects the diversity of each modality and fails to capture richer semantic information for each label from different perspectives.
no code implementations • 6 Dec 2021 • Mingcai Chen, Hao Cheng, Yuntao Du, Ming Xu, Wenyu Jiang, Chongjun Wang
We show that our method successfully alleviates the damage of both label noise and confirmation bias.
Ranked #2 on Image Classification on mini WebVision 1.0
no code implementations • 9 Sep 2021 • Yuntao Du, Haiyang Yang, Mingcai Chen, Juan Jiang, Hongtao Luo, Chongjun Wang
The proposed method first generates and augments the pseudo-source domain, and then employs distribution alignment with four novel losses based on a pseudo-labeling strategy.
2 code implementations • 10 Aug 2021 • Yuntao Du, Jindong Wang, Wenjie Feng, Sinno Pan, Tao Qin, Renjun Xu, Chongjun Wang
This paper proposes Adaptive RNNs (AdaRNN) to tackle the TCS problem by building an adaptive model that generalizes well on the unseen test data.
1 code implementation • 10 Jul 2021 • Mingcai Chen, Yuntao Du, Yi Zhang, Shuwei Qian, Chongjun Wang
Co-training, an extension of self-training, is one of the frameworks for semi-supervised learning.
1 code implementation • 29 Jun 2021 • Yuntao Du, Yinghao Chen, Fengli Cui, Xiaowen Zhang, Chongjun Wang
Unsupervised domain adaptation aims to transfer knowledge from a labeled source domain to an unlabeled target domain.
no code implementations • 26 Mar 2020 • Yuntao Du, Ruiting Zhang, Xiaowen Zhang, Yirong Yao, Hengyang Lu, Chongjun Wang
In this paper, a novel method called learning TransFerable and Discriminative Features for unsupervised domain adaptation (TFDF) is proposed to optimize these two objectives simultaneously.
1 code implementation • 1 Jan 2020 • Yuntao Du, Zhiwen Tan, Qian Chen, Xiaowen Zhang, Yirong Yao, Chongjun Wang
Recent experiments have shown that when the discriminator is provided with domain information in both domains and label information in the source domain, it is able to preserve the complex multimodal information and high-level semantic information in both domains.
Ranked #6 on Domain Adaptation on ImageCLEF-DA
1 code implementation • 31 Dec 2019 • Yuntao Du, Zhiwen Tan, Qian Chen, Yi Zhang, Chongjun Wang
In this paper, we propose a novel online transfer learning method which seeks to find a new feature representation, so that the marginal and conditional distribution discrepancies can be reduced simultaneously in an online manner.
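A minimal sketch of measuring marginal and conditional distribution discrepancy jointly with a linear-kernel MMD, as a stand-in for the quantities the method reduces; the online feature-representation update itself is not shown.

```python
# Hedged sketch: joint marginal + conditional discrepancy with a linear-kernel
# MMD, using pseudo-labels on the target domain. Illustrative only.
import numpy as np

def mmd(Xs, Xt):
    """Linear-kernel MMD between two feature matrices."""
    return np.sum((Xs.mean(axis=0) - Xt.mean(axis=0)) ** 2)

def joint_discrepancy(Xs, ys, Xt, yt_pseudo, num_classes):
    marginal = mmd(Xs, Xt)
    conditional = 0.0
    for c in range(num_classes):
        s, t = Xs[ys == c], Xt[yt_pseudo == c]
        if len(s) and len(t):
            conditional += mmd(s, t)   # class-conditional discrepancy
    return marginal + conditional / num_classes

# Toy usage with random source/target features and target pseudo-labels.
rng = np.random.default_rng(0)
Xs, Xt = rng.normal(0, 1, (100, 8)), rng.normal(0.5, 1, (80, 8))
print(joint_discrepancy(Xs, rng.integers(0, 3, 100), Xt, rng.integers(0, 3, 80), 3))
```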
no code implementations • 27 Jul 2019 • Yi Zhang, Cheng Zeng, Hao Cheng, Chongjun Wang, Lei Zhang
The quality of data collected from different channels is inconsistent, and some channels may not benefit prediction.