no code implementations • NeurIPS 2015 • Weiwei Liu, Ivor Tsang
Based on our results, we propose a dynamic programming based classifier chain (CC-DP) algorithm to search the globally optimal label order for CC and a greedy classifier chain (CC-Greedy) algorithm to find a locally optimal CC.
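The greedy variant can be sketched abstractly: at each step, pick the remaining label whose base classifier scores best given the labels already placed in the chain. A minimal sketch (the `score` callable is a hypothetical stand-in for training and evaluating a base classifier with the chosen prefix labels as extra features; it is not the paper's actual procedure):

```python
def greedy_chain_order(labels, score):
    """Greedy classifier-chain-style label ordering.

    labels : list of label indices
    score  : callable(next_label, chosen_prefix) -> float, e.g. the
             training accuracy of a base classifier for next_label
             when the labels in chosen_prefix are used as extra features.
    """
    order, remaining = [], set(labels)
    while remaining:
        # Greedily append the label that scores best given the current prefix.
        best = max(remaining, key=lambda l: score(l, tuple(order)))
        order.append(best)
        remaining.remove(best)
    return order
```

The DP variant would instead tabulate scores over prefixes to recover a globally optimal order, at higher cost.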
no code implementations • 2 Nov 2017 • Jiangchao Yao, Jiajie Wang, Ivor Tsang, Ya Zhang, Jun Sun, Chengqi Zhang, Rui Zhang
However, the label noise among the datasets severely degrades the performance of deep learning approaches.
no code implementations • NeurIPS 2017 • Weiwei Liu, Xiaobo Shen, Ivor Tsang
For example, compared to the advanced singular value decomposition based feature extraction approach, [1] reduces the running time by a factor of $\min\{n, d\}\epsilon^2 \log(d)/k$ for a data matrix $X \in \mathbb{R}^{n\times d}$ with $n$ data points and $d$ features, while losing only a factor of one in approximation accuracy.
no code implementations • 12 Apr 2018 • Jiangchao Yao, Ivor Tsang, Ya zhang
Learning in the latent variable model is challenging in the presence of the complex data structure or the intractable latent variable.
5 code implementations • NeurIPS 2018 • Bo Han, Quanming Yao, Xingrui Yu, Gang Niu, Miao Xu, Weihua Hu, Ivor Tsang, Masashi Sugiyama
Deep learning with noisy labels is practically challenging, as the capacity of deep models is so high that they can totally memorize these noisy labels sooner or later during training.
Ranked #8 on Learning with noisy labels on CIFAR-10N-Random3
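The core co-teaching idea in this entry — two networks that each select their small-loss (likely clean) samples and feed them to the other — can be sketched in a few lines. This is an illustrative simplification, not the authors' implementation; the function names are hypothetical:

```python
def small_loss_selection(losses, keep_ratio):
    """Indices of the keep_ratio fraction of samples with smallest loss."""
    k = max(1, int(len(losses) * keep_ratio))
    return sorted(range(len(losses)), key=lambda i: losses[i])[:k]

def co_teaching_step(losses_a, losses_b, keep_ratio):
    """Each network selects its small-loss samples and hands them to its peer."""
    feed_to_b = small_loss_selection(losses_a, keep_ratio)  # A teaches B
    feed_to_a = small_loss_selection(losses_b, keep_ratio)  # B teaches A
    return feed_to_a, feed_to_b
```

In practice the keep ratio starts near 1 and is annealed down as training proceeds, since deep models memorize noisy labels only in later epochs.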
2 code implementations • NeurIPS 2018 • Bo Han, Jiangchao Yao, Gang Niu, Mingyuan Zhou, Ivor Tsang, Ya Zhang, Masashi Sugiyama
It is important to learn various types of classifiers given training data with noisy labels.
Ranked #42 on Image Classification on Clothing1M (using extra training data)
no code implementations • 27 Sep 2018 • Yaxin Shi, Donna Xu, Yuangang Pan, Ivor Tsang
Based on this objective, we present an implicit probabilistic formulation for CCA, named Implicit CCA (ICCA), which provides a flexible framework to design CCA extensions with implicit distributions.
no code implementations • 27 Sep 2018 • Bo Han, Gang Niu, Jiangchao Yao, Xingrui Yu, Miao Xu, Ivor Tsang, Masashi Sugiyama
To handle these issues, by exploiting the memorization effect of deep neural networks, we may train deep neural networks on the whole dataset only during the first few iterations.
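The schedule implied here — fit everything during a short warmup, then restrict training to a shrinking (small-loss) subset — can be expressed as a keep-rate function. The parameter names and the linear ramp are illustrative assumptions, not the paper's exact schedule:

```python
def keep_schedule(epoch, warmup=5, tau=0.2, ramp=10):
    """Fraction of (small-loss) samples to keep at each epoch.

    Train on the whole dataset for the first `warmup` epochs, then
    linearly lower the keep-rate to 1 - tau over `ramp` epochs, on the
    assumption that noisy labels are memorized only in later epochs.
    """
    if epoch < warmup:
        return 1.0
    t = min(1.0, (epoch - warmup) / ramp)
    return 1.0 - tau * t
```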
1 code implementation • ICML 2020 • Bo Han, Gang Niu, Xingrui Yu, Quanming Yao, Miao Xu, Ivor Tsang, Masashi Sugiyama
Given data with noisy labels, over-parameterized deep networks can gradually memorize the data, and fit everything in the end.
no code implementations • 4 Jul 2019 • Yaxin Shi, Yuangang Pan, Donna Xu, Ivor Tsang
Although some works have studied probabilistic interpretations of CCA, these models still require an explicit form of the distributions to obtain a tractable solution for inference.
1 code implementation • 15 Aug 2019 • Huiting Hong, Xin Li, Yuangang Pan, Ivor Tsang
Network alignment is a critical task to a wide variety of fields.
no code implementations • 25 Sep 2019 • Tao Zheng, Ivor Tsang, Xin Yao
We propose an extensible, end-to-end deep representation approach for metric learning on multi-label data sets, based on neural networks that can operate on feature data or directly on raw image data.
no code implementations • CVPR 2020 • Yang Zhang, Ivor Tsang, Yawei Luo, Changhui Hu, Xiaobo Lu, Xin Yu
This paper proposes a Copy and Paste Generative Adversarial Network (CPGAN) to recover authentic high-resolution (HR) face images while compensating for low and non-uniform illumination.
1 code implementation • 26 Aug 2020 • Xu Chen, Yuangang Pan, Ivor Tsang, Ya Zhang
In this paper, we study how to learn node representations that are robust to perturbations in GNNs.
1 code implementation • 15 Sep 2020 • Xu Chen, Ya Zhang, Ivor Tsang, Yuangang Pan, Jingchao Su
In this paper, we attempt to learn both features of user preferences in a more principled way.
no code implementations • 1 Jan 2021 • Xingrui Yu, Yueming Lyu, Ivor Tsang
Our method learns useful planning computations with a meaningful reward function that focuses on the region that results from an agent executing an action.
no code implementations • 1 Jan 2021 • Xiaofeng Cao, Ivor Tsang
To guarantee the improvements, our generalization analysis proves that, compared to the typical Bayesian spherical interpretation, geodesic search with an ellipsoid can derive a tighter lower error bound and achieve a higher probability of obtaining nearly zero error.
no code implementations • 1 Jan 2021 • Yueming Lyu, Xingrui Yu, Ivor Tsang
In this work, we take an initial step to designing a simple robust layer as a lightweight plug-in for vanilla deep models.
no code implementations • 1 Jan 2021 • Yuangang Pan, Ivor Tsang
We present a new deep neural network architecture, named EDGaM, for deep clustering.
no code implementations • 5 Mar 2021 • Xiaowei Zhou, Jie Yin, Ivor Tsang, Chen Wang
Deep neural networks are now in widespread use and have achieved substantial success in many tasks.
no code implementations • 9 Mar 2021 • Yaxin Shi, Xiaowei Zhou, Ping Liu, Ivor Tsang
To improve the generalization ability of the translation model, we propose transition encoding to facilitate explicit regularization of these two kinds of consistencies on unseen transitions.
1 code implementation • 8 May 2021 • Huangjie Zheng, Xu Chen, Jiangchao Yao, Hongxia Yang, Chunyuan Li, Ya Zhang, Hao Zhang, Ivor Tsang, Jingren Zhou, Mingyuan Zhou
We realize this strategy with contrastive attraction and contrastive repulsion (CACR), which makes the query not only exert a greater force to attract more distant positive samples but also a greater force to repel closer negative samples.
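The distance-dependent weighting described here can be sketched with a softmax over distances: positives are weighted by how far they are (farther positives attract harder), negatives by how close they are (closer negatives repel harder). This is a toy illustration of the weighting idea under those assumptions, not the paper's actual loss:

```python
import math

def cacr_style_weights(pos_dists, neg_dists, temperature=1.0):
    """Per-sample weights for positive and negative pairs of a query.

    Positives: larger distance -> larger weight (stronger attraction).
    Negatives: smaller distance -> larger weight (stronger repulsion).
    Each group is normalized with a softmax.
    """
    def softmax(xs):
        m = max(xs)
        e = [math.exp((x - m) / temperature) for x in xs]
        s = sum(e)
        return [v / s for v in e]

    pos_w = softmax(pos_dists)                # far positives dominate
    neg_w = softmax([-d for d in neg_dists])  # close negatives dominate
    return pos_w, neg_w
```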
no code implementations • 11 Jun 2021 • Yueming Lyu, Ivor Tsang
We further establish a new generalization bound of our deep structured approximated NOK architecture.
no code implementations • 23 Jun 2021 • Boyuan Zheng, Sunny Verma, Jianlong Zhou, Ivor Tsang, Fang Chen
Imitation learning aims to extract knowledge from human experts' demonstrations or artificially created agents in order to replicate their behaviors.
no code implementations • 29 Sep 2021 • Jing Li, Yuangang Pan, Yueming Lyu, Yinghua Yao, Ivor Tsang
Instead of learning from scratch, fine-tuning a pre-trained model to fit a related target dataset of interest or downstream tasks has been a handy trick to achieve the desired performance.
no code implementations • 4 Jan 2022 • Bowen Xing, Ivor Tsang
In aspect-level sentiment classification (ASC), state-of-the-art models encode either syntax graph or relation graph to capture the local syntactic information or global relational information.
1 code implementation • ICCV 2023 • Xiaoguang Li, Qing Guo, Rabab Abdelfattah, Di Lin, Wei Feng, Ivor Tsang, Song Wang
In this work, we find that pretraining shadow removal networks on the image inpainting dataset can reduce the shadow remnants significantly: a naive encoder-decoder network gets competitive restoration quality w.r.t.
no code implementations • 18 May 2023 • Xiaoguang Li, Qing Guo, Pingping Cai, Wei Feng, Ivor Tsang, Song Wang
State-of-the-art shadow removal methods train deep neural networks on collected shadow & shadow-free image pairs, which are desired to complete two distinct tasks via shared weights, i.e., data restoration for shadow regions and identical mapping for non-shadow regions.
no code implementations • 23 May 2023 • Jiachang Liu, Qi Zhang, Chongyang Shi, Usman Naseem, Shoujin Wang, Ivor Tsang
Abstractive related work generation has attracted increasing attention for generating coherent related-work sections that help readers grasp the background of the current research.
1 code implementation • 5 Jun 2023 • Chen Zhang, Xiaofeng Cao, Weiyang Liu, Ivor Tsang, James Kwok
In this paper, we consider the problem of Iterative Machine Teaching (IMT), where the teacher provides examples to the learner iteratively such that the learner can achieve fast convergence to a target model.
no code implementations • 26 Jul 2023 • Canyu Zhang, Qing Guo, Xiaoguang Li, Renjie Wan, Hongkai Yu, Ivor Tsang, Song Wang
Given the coordinates of a pixel we want to reconstruct, we first collect its neighboring pixels in the input image and extract their detail-enhanced semantic embeddings, unmask-attentional semantic embeddings, importance values, and spatial distances to the desired pixel.
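The aggregation step described here can be sketched as a weighted average of neighbor embeddings, where each neighbor's contribution grows with its importance value and decays with its spatial distance to the target pixel. The exponential decay and the function name are illustrative assumptions (the paper uses learned attention, not this fixed rule):

```python
import math

def reconstruct_pixel(neighbors):
    """Aggregate neighboring pixels' embeddings into one target embedding.

    neighbors: list of (embedding, importance, distance) triples for the
    pixels collected around the coordinate being reconstructed.
    """
    # Weight = importance, damped by spatial distance to the target pixel.
    weights = [imp * math.exp(-dist) for _, imp, dist in neighbors]
    total = sum(weights)
    dim = len(neighbors[0][0])
    out = [0.0] * dim
    for (emb, _, _), w in zip(neighbors, weights):
        for j in range(dim):
            out[j] += (w / total) * emb[j]
    return out
```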
1 code implementation • 18 Oct 2023 • Yue Cao, Tianlin Li, Xiaofeng Cao, Ivor Tsang, Yang Liu, Qing Guo
The underlying rationale behind our idea is that image resampling can alleviate the influence of adversarial perturbations while preserving essential semantic information, thereby conferring an inherent advantage in defending against adversarial attacks.
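A minimal sketch of the rationale — resampling smooths away high-frequency adversarial perturbations while coarse semantics survive — using naive average-pooling followed by nearest-neighbor upsampling on a grayscale grid. This is a deliberately simple stand-in for the resampling strategies the paper studies:

```python
def defend_by_resampling(img, factor=2):
    """Average-pool `img` by `factor`, then upsample back with
    nearest-neighbor. High-frequency perturbations are averaged out;
    the coarse content of each block is preserved.

    img: 2D list of pixel intensities, dimensions divisible by factor.
    """
    h, w = len(img), len(img[0])
    small = [[sum(img[y * factor + dy][x * factor + dx]
                  for dy in range(factor) for dx in range(factor)) / factor**2
              for x in range(w // factor)]
             for y in range(h // factor)]
    # Nearest-neighbor upsample back to the original resolution.
    return [[small[y // factor][x // factor] for x in range(w)]
            for y in range(h)]
```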
no code implementations • 17 Jan 2024 • Feiyang Ye, Baijiong Lin, Xiaofeng Cao, Yu Zhang, Ivor Tsang
In this paper, we study the Multi-Objective Bi-Level Optimization (MOBLO) problem, where the upper-level subproblem is a multi-objective optimization problem and the lower-level subproblem is a scalar optimization problem.
1 code implementation • 10 Mar 2024 • Yang He, Lingao Xiao, Joey Tianyi Zhou, Ivor Tsang
These two challenges connect to the "subset degradation problem" in traditional dataset condensation: a subset from a larger condensed dataset is often unrepresentative compared to directly condensing the whole dataset to that smaller size.
no code implementations • 19 Mar 2024 • Sensen Gao, Xiaojun Jia, Xuhong Ren, Ivor Tsang, Qing Guo
Vision-language pre-training (VLP) models exhibit remarkable capabilities in comprehending both images and text, yet they remain susceptible to multimodal adversarial examples (AEs).