Search Results for author: Ivor Tsang

Found 36 papers, 12 papers with code

Cross-Context Backdoor Attacks against Graph Prompt Learning

1 code implementation • 28 May 2024 • Xiaoting Lyu, Yufei Han, Wei Wang, Hangwei Qian, Ivor Tsang, Xiangliang Zhang

Graph Prompt Learning (GPL) bridges significant disparities between pretraining and downstream applications to alleviate the knowledge transfer bottleneck in real-world graph learning.

Backdoor Attack Computational Efficiency +3

Boosting Transferability in Vision-Language Attacks via Diversification along the Intersection Region of Adversarial Trajectory

no code implementations • 19 Mar 2024 • Sensen Gao, Xiaojun Jia, Xuhong Ren, Ivor Tsang, Qing Guo

Vision-language pre-training (VLP) models exhibit remarkable capabilities in comprehending both images and text, yet they remain susceptible to multimodal adversarial examples (AEs).

Adversarial Text Image Captioning +2

Multisize Dataset Condensation

1 code implementation • 10 Mar 2024 • Yang He, Lingao Xiao, Joey Tianyi Zhou, Ivor Tsang

These two challenges connect to the "subset degradation problem" in traditional dataset condensation: a subset from a larger condensed dataset is often unrepresentative compared to directly condensing the whole dataset to that smaller size.

Dataset Condensation

A First-Order Multi-Gradient Algorithm for Multi-Objective Bi-Level Optimization

no code implementations • 17 Jan 2024 • Feiyang Ye, Baijiong Lin, Xiaofeng Cao, Yu Zhang, Ivor Tsang

In this paper, we study the Multi-Objective Bi-Level Optimization (MOBLO) problem, where the upper-level subproblem is a multi-objective optimization problem and the lower-level subproblem is for scalar optimization.

Multi-Task Learning

IRAD: Implicit Representation-driven Image Resampling against Adversarial Attacks

1 code implementation • 18 Oct 2023 • Yue Cao, Tianlin Li, Xiaofeng Cao, Ivor Tsang, Yang Liu, Qing Guo

The underlying rationale behind our idea is that image resampling can alleviate the influence of adversarial perturbations while preserving essential semantic information, thereby conferring an inherent advantage in defending against adversarial attacks.

Adversarial Robustness
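The resampling rationale above can be illustrated with a small numpy sketch. This is not the paper's implicit-representation method (which learns a continuous image representation); it simply re-renders an image at randomly jittered sub-pixel coordinates via bilinear interpolation, which smooths high-frequency adversarial perturbations while keeping low-frequency semantic content. Function names and the jitter parameter are illustrative:

```python
import numpy as np

def bilinear_sample(img, ys, xs):
    """Bilinear interpolation of a grayscale img (H, W) at float coords."""
    h, w = img.shape
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 2)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 2)
    dy, dx = ys - y0, xs - x0
    top = img[y0, x0] * (1 - dx) + img[y0, x0 + 1] * dx
    bot = img[y0 + 1, x0] * (1 - dx) + img[y0 + 1, x0 + 1] * dx
    return top * (1 - dy) + bot * dy

def resample_defense(img, jitter=0.5, seed=0):
    """Re-render the image at slightly jittered sub-pixel coordinates.

    High-frequency adversarial noise tends to be averaged out by the
    interpolation, while smooth semantic content survives.
    """
    rng = np.random.default_rng(seed)
    h, w = img.shape
    ys, xs = np.meshgrid(np.arange(h, dtype=float),
                         np.arange(w, dtype=float), indexing="ij")
    ys = np.clip(ys + rng.uniform(-jitter, jitter, ys.shape), 0, h - 1)
    xs = np.clip(xs + rng.uniform(-jitter, jitter, xs.shape), 0, w - 1)
    return bilinear_sample(img, ys, xs)
```

On a smooth image corrupted by a high-frequency (e.g. checkerboard) perturbation, the resampled output lands measurably closer to the clean image than the perturbed input does.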

SuperInpaint: Learning Detail-Enhanced Attentional Implicit Representation for Super-resolutional Image Inpainting

no code implementations • 26 Jul 2023 • Canyu Zhang, Qing Guo, Xiaoguang Li, Renjie Wan, Hongkai Yu, Ivor Tsang, Song Wang

Given the coordinates of a pixel we want to reconstruct, we first collect its neighboring pixels in the input image and extract their detail-enhanced semantic embeddings, unmask-attentional semantic embeddings, importance values, and spatial distances to the desired pixel.

Image Inpainting Image Reconstruction +2

Nonparametric Iterative Machine Teaching

1 code implementation • 5 Jun 2023 • Chen Zhang, Xiaofeng Cao, Weiyang Liu, Ivor Tsang, James Kwok

In this paper, we consider the problem of Iterative Machine Teaching (IMT), where the teacher provides examples to the learner iteratively such that the learner can achieve fast convergence to a target model.
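The IMT loop can be sketched in its simplest parametric form: a linear least-squares learner and an omniscient teacher that, each round, greedily picks from a candidate pool the single example whose gradient step moves the learner's weights closest to the target model. This is a deliberately simplified stand-in for the paper's nonparametric setting; all names and hyperparameters are illustrative:

```python
import numpy as np

def teach(X, w_star, w0, lr=0.1, steps=30):
    """Omniscient greedy teacher for a linear least-squares learner.

    Each round, the teacher scans the candidate pool X, labels each
    candidate with the target model w_star, and hands the learner the
    one example whose SGD step lands the weights closest to w_star.
    """
    w = w0.astype(float).copy()
    for _ in range(steps):
        best_w, best_d = w, np.inf
        for x in X:
            y = x @ w_star                     # teacher labels with the target model
            cand = w - lr * (x @ w - y) * x    # one SGD step on the single example (x, y)
            d = np.linalg.norm(cand - w_star)
            if d < best_d:
                best_d, best_w = d, cand
        w = best_w
    return w
```

Because the teacher always picks the most useful example rather than a random one, the learner converges to the target model in far fewer rounds than plain SGD on random samples.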

Causal Intervention for Abstractive Related Work Generation

no code implementations • 23 May 2023 • Jiachang Liu, Qi Zhang, Chongyang Shi, Usman Naseem, Shoujin Wang, Ivor Tsang

Abstractive related work generation has attracted increasing attention because it can produce a coherent related-work section that helps readers grasp the background of the current research.

Sentence

Learning Restoration is Not Enough: Transferring Identical Mapping for Single-Image Shadow Removal

no code implementations • 18 May 2023 • Xiaoguang Li, Qing Guo, Pingping Cai, Wei Feng, Ivor Tsang, Song Wang

State-of-the-art shadow removal methods train deep neural networks on collected shadow & shadow-free image pairs, which are expected to accomplish two distinct tasks via shared weights, i.e., data restoration for shadow regions and identical mapping for non-shadow regions.

Image Shadow Removal Shadow Removal

Leveraging Inpainting for Single-Image Shadow Removal

1 code implementation • ICCV 2023 • Xiaoguang Li, Qing Guo, Rabab Abdelfattah, Di Lin, Wei Feng, Ivor Tsang, Song Wang

In this work, we find that pretraining shadow removal networks on the image inpainting dataset can reduce the shadow remnants significantly: a naive encoder-decoder network gets competitive restoration quality w.r.t.

Decoder Image Inpainting +2

DigNet: Digging Clues from Local-Global Interactive Graph for Aspect-level Sentiment Classification

no code implementations • 4 Jan 2022 • Bowen Xing, Ivor Tsang

In aspect-level sentiment classification (ASC), state-of-the-art models encode either syntax graph or relation graph to capture the local syntactic information or global relational information.

Relation Sentiment Analysis +1

Fine-Tuning from Limited Feedbacks

no code implementations • 29 Sep 2021 • Jing Li, Yuangang Pan, Yueming Lyu, Yinghua Yao, Ivor Tsang

Instead of learning from scratch, fine-tuning a pre-trained model to fit a related target dataset of interest or downstream tasks has been a handy trick to achieve the desired performance.

Fairness

Imitation Learning: Progress, Taxonomies and Challenges

no code implementations • 23 Jun 2021 • Boyuan Zheng, Sunny Verma, Jianlong Zhou, Ivor Tsang, Fang Chen

Imitation learning aims to extract knowledge from human experts' demonstrations or artificially created agents in order to replicate their behaviors.

Autonomous Driving Imitation Learning

Neural Optimization Kernel: Towards Robust Deep Learning

no code implementations • 11 Jun 2021 • Yueming Lyu, Ivor Tsang

We further establish a new generalization bound of our deep structured approximated NOK architecture.

Generalization Bounds

Contrastive Attraction and Contrastive Repulsion for Representation Learning

1 code implementation • 8 May 2021 • Huangjie Zheng, Xu Chen, Jiangchao Yao, Hongxia Yang, Chunyuan Li, Ya Zhang, Hao Zhang, Ivor Tsang, Jingren Zhou, Mingyuan Zhou

We realize this strategy with contrastive attraction and contrastive repulsion (CACR), which makes the query not only exert a greater force to attract more distant positive samples but also do so to repel closer negative samples.

Contrastive Learning Representation Learning
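The attraction/repulsion asymmetry described above can be mimicked in a few lines of numpy: weight positives more heavily the farther they sit from the query, and negatives more heavily the closer they come. The softmax weighting and loss terms below are a toy rendering of the stated intuition, not the paper's exact CACR objective:

```python
import numpy as np

def cacr_loss(q, positives, negatives, tau=0.5):
    """Toy rendering of the CACR intuition.

    Positives get attraction weights that grow with their distance to
    the query; negatives get repulsion weights that grow as they come
    closer. (Illustrative weighting, not the paper's exact loss.)
    """
    def softmax_weights(dists, sign):
        e = np.exp(sign * dists / tau)
        return e / e.sum()

    d_pos = np.linalg.norm(positives - q, axis=1)
    d_neg = np.linalg.norm(negatives - q, axis=1)
    # pull far positives harder
    attraction = (softmax_weights(d_pos, +1) * d_pos ** 2).sum()
    # push near negatives harder (hinge at margin 1)
    repulsion = (softmax_weights(d_neg, -1)
                 * np.maximum(0.0, 1.0 - d_neg) ** 2).sum()
    return attraction + repulsion
```

A quick check of the intended behaviour: the loss rises when a positive drifts far from the query, and when a negative drifts close to it.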

Generative Transition Mechanism to Image-to-Image Translation via Encoded Transformation

no code implementations • 9 Mar 2021 • Yaxin Shi, Xiaowei Zhou, Ping Liu, Ivor Tsang

To benefit the generalization ability of the translation model, we propose transition encoding to facilitate explicit regularization of these two kinds of consistencies on unseen transitions.

Attribute Image Reconstruction +2

Human-Understandable Decision Making for Visual Recognition

no code implementations • 5 Mar 2021 • Xiaowei Zhou, Jie Yin, Ivor Tsang, Chen Wang

The widespread use of deep neural networks has achieved substantial success in many tasks.

Decision Making

Streamlining EM into Auto-Encoder Networks

no code implementations • 1 Jan 2021 • Yuangang Pan, Ivor Tsang

We present a new deep neural network architecture, named EDGaM, for deep clustering.

Clustering Decoder +1

A Simple Sparse Denoising Layer for Robust Deep Learning

no code implementations • 1 Jan 2021 • Yueming Lyu, Xingrui Yu, Ivor Tsang

In this work, we take an initial step to designing a simple robust layer as a lightweight plug-in for vanilla deep models.

Denoising Dictionary Learning +1

Learning Efficient Planning-based Rewards for Imitation Learning

no code implementations • 1 Jan 2021 • Xingrui Yu, Yueming Lyu, Ivor Tsang

Our method learns useful planning computations with a meaningful reward function that focuses on the resulting region of an agent executing an action.

Atari Games Continuous Control +2

On the Geometry of Deep Bayesian Active Learning

no code implementations • 1 Jan 2021 • Xiaofeng Cao, Ivor Tsang

To guarantee the improvements, our generalization analysis proves that, compared to the typical Bayesian spherical interpretation, geodesic search with an ellipsoid derives a tighter lower error bound and achieves a higher probability of obtaining nearly zero error.

Active Learning

Learning Node Representations against Perturbations

1 code implementation • 26 Aug 2020 • Xu Chen, Yuangang Pan, Ivor Tsang, Ya Zhang

In this paper, we study how to learn node representations against perturbations in GNNs.

Contrastive Learning Node Classification +1

Copy and Paste GAN: Face Hallucination from Shaded Thumbnails

no code implementations • CVPR 2020 • Yang Zhang, Ivor Tsang, Yawei Luo, Changhui Hu, Xiaobo Lu, Xin Yu

This paper proposes a Copy and Paste Generative Adversarial Network (CPGAN) to recover authentic high-resolution (HR) face images while compensating for low and non-uniform illumination.

Face Hallucination Generative Adversarial Network +1

Multi-Label Metric Learning with Bidirectional Representation Deep Neural Networks

no code implementations • 25 Sep 2019 • Tao Zheng, Ivor Tsang, Xin Yao

We propose an extendable, end-to-end deep representation approach for metric learning on multi-label data sets, based on neural networks that can operate on feature data or directly on raw image data.

Metric Learning Multi-Label Learning +1

Domain-adversarial Network Alignment

1 code implementation • 15 Aug 2019 • Huiting Hong, Xin Li, Yuangang Pan, Ivor Tsang

Network alignment is a critical task to a wide variety of fields.

Network Embedding

Probabilistic CCA with Implicit Distributions

no code implementations • 4 Jul 2019 • Yaxin Shi, Yuangang Pan, Donna Xu, Ivor Tsang

Although some works have studied probabilistic interpretation for CCA, these models still require the explicit form of the distributions to achieve a tractable solution for the inference.

Bayesian Inference Multi-view Learning

Pumpout: A Meta Approach for Robustly Training Deep Neural Networks with Noisy Labels

no code implementations • 27 Sep 2018 • Bo Han, Gang Niu, Jiangchao Yao, Xingrui Yu, Miao Xu, Ivor Tsang, Masashi Sugiyama

To handle these issues, by using the memorization effects of deep neural networks, we may train deep neural networks on the whole dataset for only the first few iterations.

Memorization

Canonical Correlation Analysis with Implicit Distributions

no code implementations • 27 Sep 2018 • Yaxin Shi, Donna Xu, Yuangang Pan, Ivor Tsang

Based on this objective, we present an implicit probabilistic formulation for CCA, named Implicit CCA (ICCA), which provides a flexible framework to design CCA extensions with implicit distributions.

Multi-view Learning

Masking: A New Perspective of Noisy Supervision

2 code implementations • NeurIPS 2018 • Bo Han, Jiangchao Yao, Gang Niu, Mingyuan Zhou, Ivor Tsang, Ya Zhang, Masashi Sugiyama

It is important to learn various types of classifiers given training data with noisy labels.

Ranked #42 on Image Classification on Clothing1M (using extra training data)

Image Classification

Co-teaching: Robust Training of Deep Neural Networks with Extremely Noisy Labels

5 code implementations • NeurIPS 2018 • Bo Han, Quanming Yao, Xingrui Yu, Gang Niu, Miao Xu, Weihua Hu, Ivor Tsang, Masashi Sugiyama

Deep learning with noisy labels is practically challenging, as the capacity of deep models is so high that they can totally memorize these noisy labels sooner or later during training.

Learning with noisy labels Memorization
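The core co-teaching loop — two learners each select their small-loss (likely clean) samples, and the *peer* updates on that selection — can be sketched with two logistic-regression learners in numpy. The paper uses deep networks and anneals the keep ratio over epochs; the version below is a fixed-ratio toy:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

def co_teaching_round(w_a, w_b, X, y, keep_ratio=0.7, lr=0.1):
    """One co-teaching round with two logistic learners.

    Each learner ranks the batch by its own loss and keeps the
    small-loss fraction (the likely-clean samples); its peer then
    takes a gradient step on that selection (cross-update).
    """
    k = int(keep_ratio * len(y))

    def small_loss_idx(w):
        p = sigmoid(X @ w)
        loss = -(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
        return np.argsort(loss)[:k]

    def sgd_step(w, idx):
        p = sigmoid(X[idx] @ w)
        return w - lr * X[idx].T @ (p - y[idx]) / len(idx)

    idx_a, idx_b = small_loss_idx(w_a), small_loss_idx(w_b)
    # cross-update: A learns from B's selection and vice versa
    return sgd_step(w_a, idx_b), sgd_step(w_b, idx_a)
```

On linearly separable data with 30% uniformly flipped labels, iterating this round still recovers a decision boundary that scores well against the clean labels, because the small-loss selection filters out much of the noise.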

Variational Composite Autoencoders

no code implementations • 12 Apr 2018 • Jiangchao Yao, Ivor Tsang, Ya Zhang

Learning in the latent variable model is challenging in the presence of complex data structures or intractable latent variables.

Decoder

Sparse Embedded k-Means Clustering

no code implementations • NeurIPS 2017 • Weiwei Liu, Xiaobo Shen, Ivor Tsang

For example, compared to the advanced singular value decomposition based feature extraction approach, [1] reduces the running time by a factor of $\min\{n, d\}\epsilon^2 \log(d)/k$ for a data matrix $X \in \mathbb{R}^{n\times d}$ with $n$ data points and $d$ features, while losing only a factor of one in approximation accuracy.

Clustering Dimensionality Reduction
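The general recipe — embed the data with a sparse random matrix, then run k-means in the low-dimensional space — can be sketched as follows. The Achlioptas-style ±1 sparse projection and the plain Lloyd's solver below are illustrative stand-ins, not the paper's exact construction:

```python
import numpy as np

def sparse_random_projection(d, k, density=0.1, seed=0):
    """Sparse random embedding matrix R (d x k): most entries are zero,
    nonzeros are +/- 1/sqrt(density * k) with equal probability."""
    rng = np.random.default_rng(seed)
    mask = rng.random((d, k)) < density
    signs = rng.choice([-1.0, 1.0], size=(d, k))
    return mask * signs / np.sqrt(density * k)

def kmeans(X, n_clusters, iters=50, seed=0):
    """Plain Lloyd's algorithm with farthest-point seeding."""
    rng = np.random.default_rng(seed)
    idx = [int(rng.integers(len(X)))]
    for _ in range(n_clusters - 1):
        d2 = ((X[:, None, :] - X[idx][None]) ** 2).sum(-1).min(1)
        idx.append(int(d2.argmax()))
    centers = X[idx].astype(float).copy()
    for _ in range(iters):
        labels = ((X[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
        for c in range(n_clusters):
            if (labels == c).any():
                centers[c] = X[labels == c].mean(0)
    return labels
```

Clustering `X @ R` instead of `X` cuts the per-iteration cost from O(ndK) to O(nkK) after a cheap sparse projection, while well-separated clusters remain separated in the embedded space.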

Deep Learning from Noisy Image Labels with Quality Embedding

no code implementations • 2 Nov 2017 • Jiangchao Yao, Jiajie Wang, Ivor Tsang, Ya Zhang, Jun Sun, Chengqi Zhang, Rui Zhang

However, the label noise among the datasets severely degenerates the performance of deep learning approaches.

On the Optimality of Classifier Chain for Multi-label Classification

no code implementations • NeurIPS 2015 • Weiwei Liu, Ivor Tsang

Based on our results, we propose a dynamic programming based classifier chain (CC-DP) algorithm to search the globally optimal label order for CC and a greedy classifier chain (CC-Greedy) algorithm to find a locally optimal CC.

Classification General Classification +1
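A classifier chain feeds each label's classifier the input features plus the labels that come earlier in the chain, which is exactly why the label order that CC-DP and CC-Greedy search over matters. A minimal numpy sketch with logistic base classifiers (teacher forcing at training time; all names are illustrative):

```python
import numpy as np

def fit_logistic(X, y, lr=0.5, steps=500):
    """Batch gradient descent for a logistic base classifier."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-np.clip(X @ w, -30, 30)))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def fit_chain(X, Y, order):
    """Train a classifier chain: the classifier for label order[j] sees
    the input features plus the true earlier labels (teacher forcing)."""
    ws, Z = [], X
    for j in order:
        ws.append(fit_logistic(Z, Y[:, j]))
        Z = np.hstack([Z, Y[:, [j]]])
    return ws

def predict_chain(X, ws, order):
    """Predict labels in chain order, feeding each prediction forward."""
    Z, out = X, np.zeros((len(X), len(order)))
    for w, j in zip(ws, order):
        p = 1.0 / (1.0 + np.exp(-np.clip(Z @ w, -30, 30)))
        out[:, j] = (p > 0.5).astype(float)
        Z = np.hstack([Z, out[:, [j]]])
    return out
```

When a label depends on an earlier one (e.g. label 1 is only positive when label 0 is), placing the parent label first in the chain makes the child linearly learnable from the augmented features.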
