1 code implementation • 13 Sep 2023 • Jiayu Lei, Lisong Dai, Haoyun Jiang, Chaoyi Wu, Xiaoman Zhang, Yao Zhang, Jiangchao Yao, Weidi Xie, Yanyong Zhang, Yuehua Li, Ya Zhang, Yanfeng Wang
Magnetic resonance imaging (MRI) has played a crucial role in brain disease diagnosis, for which a range of computer-aided artificial intelligence methods have been proposed.
no code implementations • 17 Aug 2023 • Feng Hong, Tianjie Dai, Jiangchao Yao, Ya Zhang, Yanfeng Wang
Clinical classification of chest radiography is particularly challenging for standard machine learning algorithms due to its inherent long-tailed and multi-label nature.
1 code implementation • 3 Aug 2023 • YuHang Zhou, Jiangchao Yao, Feng Hong, Ya Zhang, Yanfeng Wang
By dynamically manipulating the gradient during training based on these factors, BDR can effectively alleviate knowledge destruction and improve knowledge reconstruction.
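The snippet names gradient manipulation without giving its exact form. Below is a minimal sketch of the general idea under that reading: hypothetical per-sample factors reweight gradient contributions before the update. The factor values and the update rule are illustrative assumptions, not the paper's method.

```python
import numpy as np

def reweighted_gradient_step(params, per_sample_grads, factors, lr=0.1):
    # Damp updates that would destroy consolidated knowledge and amplify
    # those that help reconstruct it, via a factor-weighted gradient mean.
    # The factors are assumed inputs here; BDR derives them during training.
    weighted_grad = np.average(per_sample_grads, axis=0, weights=factors)
    return params - lr * weighted_grad

# Toy usage: four per-sample gradients for a three-parameter model.
rng = np.random.default_rng(0)
params = np.zeros(3)
grads = rng.normal(size=(4, 3))
factors = np.array([0.5, 1.0, 1.5, 1.0])  # hypothetical per-sample factors
params = reweighted_gradient_step(params, grads, factors)
```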
1 code implementation • 15 Jun 2023 • Zhanke Zhou, Chenyu Zhou, Xuan Li, Jiangchao Yao, Quanming Yao, Bo Han
Although powerful graph neural networks (GNNs) have boosted numerous real-world applications, the potential privacy risk is still underexplored.
1 code implementation • 12 Jun 2023 • Yikun Liu, Jiangchao Yao, Ya Zhang, Yanfeng Wang, Weidi Xie
In this paper, we consider the problem of composed image retrieval (CIR), which aims to train a model that can fuse multi-modal information, e.g., text and images, to accurately retrieve images that match the query, extending the user's expression ability.
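As a rough illustration of the setting only (not the paper's model), a composed query can be formed by fusing reference-image and text embeddings and ranking the gallery by cosine similarity. The convex-combination fusion and the weight `alpha` below are assumptions made for the sketch.

```python
import numpy as np

def cir_retrieve(img_emb, txt_emb, gallery, alpha=0.5):
    # Fuse the two modalities into one query vector, then rank gallery
    # images by cosine similarity (all embeddings assumed precomputed).
    fused = alpha * img_emb + (1.0 - alpha) * txt_emb
    fused = fused / np.linalg.norm(fused)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    return np.argsort(-(g @ fused))  # best-matching gallery indices first

rng = np.random.default_rng(0)
ranking = cir_retrieve(rng.normal(size=64), rng.normal(size=64),
                       rng.normal(size=(100, 64)))
```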
1 code implementation • 6 Jun 2023 • Jianing Zhu, Hengzhuang Li, Jiangchao Yao, Tongliang Liu, Jianliang Xu, Bo Han
Based on such insights, we propose a novel method, Unleashing Mask, which aims to restore the OOD discriminative capabilities of the well-trained model with ID data.
Out-of-Distribution Detection
1 code implementation • 6 Jun 2023 • Jianing Zhu, Xiawei Guo, Jiangchao Yao, Chao Du, Li He, Shuo Yuan, Tongliang Liu, Liang Wang, Bo Han
In this paper, we dive into the perspective of model dynamics and propose a novel information measure, namely, Memorization Discrepancy, to explore the defense via the model-level information.
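The measure itself is not specified in this snippet; one plausible reading, sketched below under that assumption, compares a model's predictive distributions on the same inputs at two nearby training states and flags inputs whose predictions shift abnormally.

```python
import numpy as np

def prediction_shift(probs_before, probs_after, eps=1e-12):
    # Per-sample KL divergence between predictions at two training states;
    # large shifts can flag suspicious (e.g., poisoned) inputs. This is an
    # illustrative stand-in, not the paper's exact definition.
    p = probs_before + eps
    q = probs_after + eps
    return np.sum(p * np.log(p / q), axis=1)

before = np.array([[0.80, 0.20], [0.50, 0.50]])
after = np.array([[0.75, 0.25], [0.05, 0.95]])  # second sample shifts a lot
scores = prediction_shift(before, after)
```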
1 code implementation • CVPR 2023 • Yiming Qin, Huangjie Zheng, Jiangchao Yao, Mingyuan Zhou, Ya Zhang
To tackle this problem, we start from the hypothesis that the data distribution is not class-balanced, and propose Class-Balancing Diffusion Models (CBDM), which are trained with a distribution adjustment regularizer.
no code implementations • 5 Apr 2023 • Shoukai Xu, Jiangchao Yao, Ran Luo, Shuhai Zhang, Zihao Lian, Mingkui Tan, Bo Han, YaoWei Wang
Moreover, the data used for pretraining foundation models are usually inaccessible and very different from the target data of downstream tasks.
1 code implementation • 1 Mar 2023 • Jianing Zhu, Jiangchao Yao, Tongliang Liu, Quanming Yao, Jianliang Xu, Bo Han
Privacy and security concerns in real-world applications have led to the development of adversarially robust federated models.
1 code implementation • 19 Feb 2023 • Jiangchao Yao, Bo Han, Zhihan Zhou, Ya Zhang, Ivor W. Tsang
We solve this problem by introducing a Latent Class-Conditional Noise model (LCCN) to parameterize the noise transition under a Bayesian framework.
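For context, the standard way to train through a noise transition is forward correction: push the model's clean-class posterior through a transition matrix T and take the likelihood of the observed noisy labels. The sketch below shows only this generic step; LCCN's Bayesian treatment of T (e.g., its prior) is omitted.

```python
import numpy as np

def forward_corrected_nll(clean_probs, noisy_labels, T):
    # T[i, j] = p(noisy label j | clean label i); rows sum to 1.
    noisy_probs = clean_probs @ T  # model's distribution over noisy labels
    picked = noisy_probs[np.arange(len(noisy_labels)), noisy_labels]
    return -np.log(picked + 1e-12).mean()

T = np.array([[0.9, 0.1],
              [0.2, 0.8]])                      # toy 2-class transition
clean_probs = np.array([[0.95, 0.05],
                        [0.30, 0.70]])          # model posteriors p(clean | x)
loss = forward_corrected_nll(clean_probs, np.array([0, 1]), T)
```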
1 code implementation • 10 Feb 2023 • Feng Hong, Jiangchao Yao, Zhihan Zhou, Ya Zhang, Yanfeng Wang
The straightforward combination of LT and PLL, i.e., LT-PLL, suffers from a fundamental dilemma: LT methods build upon a given class distribution that is unavailable in PLL, and the performance of PLL is severely affected in the long-tailed context.
1 code implementation • CVPR 2023 • Ruipeng Zhang, Qinwei Xu, Jiangchao Yao, Ya Zhang, Qi Tian, Yanfeng Wang
Federated Domain Generalization (FedDG) attempts to learn a global model in a privacy-preserving manner that generalizes well to new clients possibly with domain shift.
1 code implementation • 14 Dec 2022 • Ziqing Fan, Yanfeng Wang, Jiangchao Yao, Lingjuan Lyu, Ya Zhang, Qi Tian
However, in addition to previous explorations for improvement in federated averaging, our analysis shows that another critical bottleneck is the poorer optima reached by client models under more heterogeneous conditions.
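For reference, the federated-averaging step the sentence refers to is just a dataset-size-weighted average of client parameters; the paper's contribution concerns what happens on the clients before this step, which the sketch below does not show.

```python
import numpy as np

def fed_avg(client_params, client_sizes):
    # Aggregate client models in proportion to their local dataset sizes.
    sizes = np.asarray(client_sizes, dtype=float)
    coeffs = sizes / sizes.sum()
    return sum(c * p for c, p in zip(coeffs, client_params))

global_params = fed_avg([np.ones(4), 3.0 * np.ones(4)], client_sizes=[10, 30])
```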
1 code implementation • 23 Nov 2022 • Xin He, Jiangchao Yao, Yuxin Wang, Zhenheng Tang, Ka Chu Cheung, Simon See, Bo Han, Xiaowen Chu
One-shot neural architecture search (NAS) substantially improves the search efficiency by training one supernet to estimate the performance of every possible child architecture (i.e., subnet).
no code implementations • 7 Jul 2022 • Jiangchao Yao, Feng Wang, Xichen Ding, Shaohu Chen, Bo Han, Jingren Zhou, Hongxia Yang
To overcome this issue, we propose a meta controller to dynamically manage the collaboration between the on-device recommender and the cloud-based recommender, and introduce a novel, efficient sample construction from the causal perspective to solve the dataset-absence issue of the meta controller.
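As a toy illustration of the routing idea only (the scorer, threshold, and recommender names below are all hypothetical), the meta controller can be read as a per-request decision about which recommender to invoke:

```python
def route_request(features, controller_score, threshold=0.5):
    # A learned scorer decides, per request, whether the on-device
    # recommender suffices or the cloud-based one should be invoked.
    return "on_device" if controller_score(features) < threshold else "cloud"

# Hypothetical usage with a stand-in scorer.
choice = route_request({"recent_clicks": 3},
                       controller_score=lambda f: 0.2 * f["recent_clicks"])
```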
1 code implementation • 25 May 2022 • Zhihan Zhou, Jiangchao Yao, Yanfeng Wang, Bo Han, Ya Zhang
Different from previous works, we explore this direction from an alternative perspective, i.e., the data perspective, and propose a novel Boosted Contrastive Learning (BCL) method.
1 code implementation • 11 Nov 2021 • Jiangchao Yao, Shengyu Zhang, Yang Yao, Feng Wang, Jianxin Ma, Jianwei Zhang, Yunfei Chu, Luo Ji, Kunyang Jia, Tao Shen, Anpeng Wu, Fengda Zhang, Ziqi Tan, Kun Kuang, Chao Wu, Fei Wu, Jingren Zhou, Hongxia Yang
However, edge computing, especially edge-cloud collaborative computing, is still in its infancy and has yet to prove successful, owing to resource-constrained IoT scenarios in which only very limited algorithms can be deployed.
no code implementations • 29 Sep 2021 • Jianing Zhu, Jiangchao Yao, Tongliang Liu, Kunyang Jia, Jingren Zhou, Bo Han, Hongxia Yang
Federated Adversarial Training (FAT) helps us address data privacy and governance issues while maintaining model robustness to adversarial attacks.
no code implementations • 27 Sep 2021 • Yujie Pan, Jiangchao Yao, Bo Han, Kunyang Jia, Ya Zhang, Hongxia Yang
Click-through rate (CTR) prediction has become indispensable in ubiquitous web recommendation applications.
no code implementations • 25 Sep 2021 • Zeyuan Chen, Jiangchao Yao, Feng Wang, Kunyang Jia, Bo Han, Wei Zhang, Hongxia Yang
With the development of mobile hardware, it has become possible to build recommendation models on the mobile side to utilize fine-grained features and real-time feedback.
no code implementations • 11 Aug 2021 • Hao Wu, Jiangchao Yao, Ya Zhang, Yanfeng Wang
Learning with noisy labels has gained enormous interest in the area of robust deep learning.
2 code implementations • ICLR 2022 • Jianing Zhu, Jiangchao Yao, Bo Han, Jingfeng Zhang, Tongliang Liu, Gang Niu, Jingren Zhou, Jianliang Xu, Hongxia Yang
However, when considering adversarial robustness, teachers may become unreliable and adversarial distillation may not work: teachers are pretrained on their own adversarial data, and it is too demanding to require that they also perform well on every adversarial example queried by students.
1 code implementation • 8 May 2021 • Huangjie Zheng, Xu Chen, Jiangchao Yao, Hongxia Yang, Chunyuan Li, Ya Zhang, Hao Zhang, Ivor Tsang, Jingren Zhou, Mingyuan Zhou
We realize this strategy with contrastive attraction and contrastive repulsion (CACR), which makes the query exert a greater force not only to attract more distant positive samples but also to repel closer negative samples.
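A minimal sketch of that weighting scheme follows; the softmax-over-distance weights and squared-distance energies are illustrative choices, not necessarily the paper's exact formulation.

```python
import numpy as np

def cacr_style_loss(query, positives, negatives, tau=0.5):
    # Up-weight farther positives in the attraction term and closer
    # negatives in the repulsion term, then combine the two.
    d_pos = np.linalg.norm(positives - query, axis=1)
    d_neg = np.linalg.norm(negatives - query, axis=1)
    w_pos = np.exp(d_pos / tau)
    w_pos /= w_pos.sum()          # farther positives get larger weight
    w_neg = np.exp(-d_neg / tau)
    w_neg /= w_neg.sum()          # closer negatives get larger weight
    return (w_pos * d_pos**2).sum() - (w_neg * d_neg**2).sum()

rng = np.random.default_rng(1)
loss = cacr_style_loss(rng.normal(size=8),
                       rng.normal(size=(4, 8)),
                       rng.normal(size=(16, 8)))
```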
no code implementations • 14 Apr 2021 • Jiangchao Yao, Feng Wang, Kunyang Jia, Bo Han, Jingren Zhou, Hongxia Yang
With the rapid development of storage and computing power on mobile devices, it has become critical and popular to deploy models on devices to avoid onerous communication latencies and to capture real-time features.
no code implementations • 31 Mar 2021 • Hao Wu, Jiangchao Yao, Jiajie Wang, Yinru Chen, Ya Zhang, Yanfeng Wang
Deep neural networks (DNNs) have the capacity to fit extremely noisy labels; nonetheless, they tend to learn data with clean labels first and then memorize those with noisy labels.
no code implementations • 17 Mar 2021 • Qizhou Wang, Jiangchao Yao, Chen Gong, Tongliang Liu, Mingming Gong, Hongxia Yang, Bo Han
Most previous approaches in this area focus on the pairwise relation (causal or correlational relationship) with noise, such as learning with noisy labels.
1 code implementation • 18 Feb 2021 • Qiaoyu Tan, Jianwei Zhang, Jiangchao Yao, Ninghao Liu, Jingren Zhou, Hongxia Yang, Xia Hu
Our sparse-interest module can adaptively infer a sparse set of concepts for each user from the large concept pool and output multiple embeddings accordingly.
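A bare-bones sketch of such a readout follows, assuming precomputed user-to-concept affinity scores; the scoring and pooling details of the actual model are omitted.

```python
import numpy as np

def sparse_interest_readout(affinities, concept_pool, k=4):
    # Activate only each user's top-k concepts from the shared pool and
    # return one embedding per activated concept.
    top_k = np.argsort(-affinities)[:k]
    return top_k, concept_pool[top_k]

rng = np.random.default_rng(0)
pool = rng.normal(size=(1000, 32))      # 1000 concepts, 32-d embeddings
affinities = rng.random(1000)           # hypothetical user-concept scores
concept_ids, user_embs = sparse_interest_readout(affinities, pool)
```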
3 code implementations • 3 Nov 2020 • Xu Chen, Siheng Chen, Jiangchao Yao, Huangjie Zheng, Ya Zhang, Ivor W. Tsang
Designing a new GNN for these graphs is therefore a pressing issue for the graph learning community.
1 code implementation • 28 Aug 2020 • Xu Chen, Jiangchao Yao, Maosen Li, Ya Zhang, Yanfeng Wang
Comprehensive results on both link sign prediction and node recommendation tasks demonstrate the effectiveness of DVE.
1 code implementation • 28 Aug 2019 • Yuting Ye, Xuwu Wang, Jiangchao Yao, Kunyang Jia, Jingren Zhou, Yanghua Xiao, Hongxia Yang
Low-dimensional embeddings of knowledge graphs and behavior graphs have proved remarkably powerful in a variety of tasks, from predicting unobserved edges between entities to content recommendation.
3 code implementations • 23 Jul 2019 • Xu Chen, Siheng Chen, Huangjie Zheng, Jiangchao Yao, Kenan Cui, Ya Zhang, Ivor W. Tsang
NANG learns a unifying latent representation which is shared by both node attributes and graph structures and can be translated to different modalities.
1 code implementation • 6 Mar 2019 • Jiangchao Yao, Ya Zhang, Ivor W. Tsang, Jun Sun
We further generalize LCCN for open-set noisy labels and the semi-supervised setting.
Ranked #33 on Image Classification on Clothing1M (using extra training data)
3 code implementations • 14 Jan 2019 • Xingrui Yu, Bo Han, Jiangchao Yao, Gang Niu, Ivor W. Tsang, Masashi Sugiyama
Learning with noisy labels is one of the hottest problems in weakly-supervised learning.
Ranked #13 on Learning with noisy labels on CIFAR-10N-Worst
no code implementations • 27 Sep 2018 • Bo Han, Gang Niu, Jiangchao Yao, Xingrui Yu, Miao Xu, Ivor Tsang, Masashi Sugiyama
To handle these issues, by exploiting the memorization effect of deep neural networks, we may train them on the whole dataset only for the first few iterations.
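The memorization effect is commonly operationalized as small-loss selection: because clean labels are fit before noisy ones, low-loss samples are treated as probably clean. The sketch below shows only that generic criterion, with a keep ratio that would in practice be scheduled over training (fixed here for illustration).

```python
import numpy as np

def small_loss_selection(losses, keep_ratio):
    # Keep the fraction of samples with the smallest current loss;
    # these are the ones most likely to carry clean labels.
    k = int(len(losses) * keep_ratio)
    return np.argsort(losses)[:k]

batch_losses = np.array([0.1, 2.3, 0.4, 1.9, 0.2, 0.3])
likely_clean = small_loss_selection(batch_losses, keep_ratio=0.5)
```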
no code implementations • 22 Sep 2018 • Kenan Cui, Xu Chen, Jiangchao Yao, Ya Zhang
Conventional CF-based methods use the user-item interaction data as the sole information source to recommend items to users.
no code implementations • 10 Jul 2018 • Huangjie Zheng, Jiangchao Yao, Ya Zhang, Ivor W. Tsang, Jia Wang
In information theory, Fisher information and Shannon information (entropy) are used, respectively, to quantify the uncertainty associated with distribution modeling and the uncertainty in specifying the outcome of given variables.
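For concreteness, the two quantities are defined (in their standard textbook forms, not paper-specific notation) as:

```latex
% Fisher information of a parametric family p(x;\theta):
I(\theta) = \mathbb{E}_{x \sim p(x;\theta)}\!\left[
    \left(\frac{\partial}{\partial \theta} \log p(x;\theta)\right)^{2}
\right]

% Shannon entropy of a random variable X with density p(x):
H(X) = -\,\mathbb{E}_{x \sim p}\!\left[\log p(x)\right]
```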
2 code implementations • NeurIPS 2018 • Bo Han, Jiangchao Yao, Gang Niu, Mingyuan Zhou, Ivor Tsang, Ya Zhang, Masashi Sugiyama
It is important to learn various types of classifiers given training data with noisy labels.
Ranked #39 on Image Classification on Clothing1M (using extra training data)
no code implementations • 12 Apr 2018 • Jiangchao Yao, Ivor Tsang, Ya Zhang
Learning in latent variable models is challenging in the presence of complex data structures or intractable latent variables.
no code implementations • 19 Feb 2018 • Huangjie Zheng, Jiangchao Yao, Ya Zhang, Ivor W. Tsang
While enormous progress has been made on Variational Autoencoders (VAEs) in recent years, VAEs built on deep networks, like other deep models, suffer from degeneration, which seriously weakens the correlation between the input and the corresponding latent codes and deviates from the goal of representation learning.
no code implementations • 10 Feb 2018 • Jiajie Wang, Jiangchao Yao, Ya Zhang, Rui Zhang
For object detection, taking a WSDDN-like architecture as the weakly supervised detector sub-network and a Faster-RCNN-like architecture as the strongly supervised detector sub-network, we propose an end-to-end Weakly Supervised Collaborative Detection Network.
no code implementations • 2 Nov 2017 • Jiangchao Yao, Jiajie Wang, Ivor Tsang, Ya Zhang, Jun Sun, Chengqi Zhang, Rui Zhang
However, the label noise among the datasets severely degrades the performance of deep learning approaches.