Search Results for author: Shu-Tao Xia

Found 63 papers, 26 papers with code

A Comparative Study of Feature Expansion Unit for 3D Point Cloud Upsampling

no code implementations19 May 2022 Qiang Li, Tao Dai, Shu-Tao Xia

Recently, deep learning methods have shown great success in 3D point cloud upsampling.

Image Super-Resolution

Improving Vision Transformers by Revisiting High-frequency Components

no code implementations3 Apr 2022 Jiawang Bai, Li Yuan, Shu-Tao Xia, Shuicheng Yan, Zhifeng Li, Wei Liu

Transformer models have shown promising effectiveness in dealing with various vision tasks.

Adaptive Frequency Learning in Two-branch Face Forgery Detection

no code implementations27 Mar 2022 Neng Wang, Yang Bai, Kun Yu, Yong Jiang, Shu-Tao Xia, Yan Wang

Face forgery has attracted increasing attention in recent applications of computer vision.

On the Effectiveness of Adversarial Training against Backdoor Attacks

no code implementations22 Feb 2022 Yinghua Gao, Dongxian Wu, Jingfeng Zhang, Guanhao Gan, Shu-Tao Xia, Gang Niu, Masashi Sugiyama

To explore whether adversarial training could defend against backdoor attacks or not, we conduct extensive experiments across different threat models and perturbation budgets, and find the threat model in adversarial training matters.

Hybrid Contrastive Quantization for Efficient Cross-View Video Retrieval

1 code implementation7 Feb 2022 Jinpeng Wang, Bin Chen, Dongliang Liao, Ziyun Zeng, Gongfu Li, Shu-Tao Xia, Jin Xu

By performing Asymmetric-Quantized Contrastive Learning (AQ-CL) across views, HCQ aligns texts and videos at coarse-grained and multiple fine-grained levels.
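A minimal sketch of what such an asymmetric-quantized contrastive objective could look like, assuming an InfoNCE-style loss in which the text side stays continuous while the video side is represented by its quantized (codebook-reconstructed) embedding; the tensor names and temperature below are illustrative assumptions, not the released implementation:

```python
import torch
import torch.nn.functional as F

def aq_contrastive_loss(text_emb, video_emb_q, tau=0.07):
    """InfoNCE-style loss in which one side of every positive pair is quantized.

    text_emb:    (B, D) continuous text embeddings
    video_emb_q: (B, D) quantized (codebook-reconstructed) video embeddings
    """
    t = F.normalize(text_emb, dim=-1)
    vq = F.normalize(video_emb_q, dim=-1)
    labels = torch.arange(t.size(0), device=t.device)   # matched pairs on the diagonal
    logits = t @ vq.t() / tau
    # symmetric InfoNCE over the asymmetric (continuous text vs. quantized video) pairs
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))
```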

Contrastive Learning, Quantization +2

Few-Shot Backdoor Attacks on Visual Object Tracking

1 code implementation ICLR 2022 Yiming Li, Haoxiang Zhong, Xingjun Ma, Yong Jiang, Shu-Tao Xia

Visual object tracking (VOT) has been widely adopted in mission-critical applications, such as autonomous driving and intelligent surveillance systems.

Autonomous Driving, Backdoor Attack +1

Defending against Model Stealing via Verifying Embedded External Features

1 code implementation ICML Workshop AML 2021 Yiming Li, Linghui Zhu, Xiaojun Jia, Yong Jiang, Shu-Tao Xia, Xiaochun Cao

In this paper, we explore the defense from another angle by verifying whether a suspicious model contains the knowledge of defender-specified \emph{external features}.

Style Transfer

Clustering Effect of (Linearized) Adversarial Robust Models

1 code implementation25 Nov 2021 Yang Bai, Xin Yan, Yong Jiang, Shu-Tao Xia, Yisen Wang

Adversarial robustness has received increasing attention along with the study of adversarial examples.

Adversarial Robustness, Domain Adaptation

Does Adversarial Robustness Really Imply Backdoor Vulnerability?

no code implementations29 Sep 2021 Yinghua Gao, Dongxian Wu, Jingfeng Zhang, Shu-Tao Xia, Gang Niu, Masashi Sugiyama

Based on thorough experiments, we find that such trade-off ignores the interactions between the perturbation budget of adversarial training and the magnitude of the backdoor trigger.

Adversarial Robustness

Deep Dirichlet Process Mixture Models

no code implementations29 Sep 2021 Naiqi Li, Wenjie Li, Yong Jiang, Shu-Tao Xia

In this paper we propose the deep Dirichlet process mixture (DDPM) model, which is an unsupervised method that simultaneously performs clustering and feature learning.

Clean-label Backdoor Attack against Deep Hashing based Retrieval

no code implementations18 Sep 2021 Kuofeng Gao, Jiawang Bai, Bin Chen, Dongxian Wu, Shu-Tao Xia

To the best of our knowledge, this is the first attempt at the backdoor attack against deep hashing models.

Backdoor Attack, Data Poisoning +1

Contrastive Quantization with Code Memory for Unsupervised Image Retrieval

1 code implementation11 Sep 2021 Jinpeng Wang, Ziyun Zeng, Bin Chen, Tao Dai, Shu-Tao Xia

The high efficiency in computation and storage makes hashing (including binary hashing and quantization) a common strategy in large-scale retrieval systems.

Contrastive Learning, Image Retrieval

Pyramid Hybrid Pooling Quantization for Efficient Fine-Grained Image Retrieval

no code implementations11 Sep 2021 Ziyun Zeng, Jinpeng Wang, Bin Chen, Tao Dai, Shu-Tao Xia

Deep hashing approaches, including deep quantization and deep binary hashing, have become a common solution to large-scale image retrieval due to high computation and storage efficiency.

Image Retrieval, Quantization

Universal Adversarial Head: Practical Protection against Video Data Leakage

no code implementations ICML Workshop AML 2021 Jiawang Bai, Bin Chen, Dongxian Wu, Chaoning Zhang, Shu-Tao Xia

We propose \emph{universal adversarial head} (UAH), which crafts adversarial query videos by prepending the original videos with a sequence of adversarial frames to perturb the normal hash codes in the Hamming space.
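A minimal sketch of the prepending idea, under the assumptions that the hashing model is differentiable, accepts clips of varying length, and outputs continuous codes whose signs are the hash bits; the interface, loss, and hyper-parameters below are illustrative, not the paper's implementation:

```python
import torch

def craft_universal_head(hash_model, videos, n_head_frames=4, steps=100, lr=0.01):
    """Optimize one sequence of adversarial frames that, prepended to any query
    video, pushes its hash code away from the original code in Hamming space."""
    B, T, C, H, W = videos.shape
    head = torch.zeros(1, n_head_frames, C, H, W, requires_grad=True)
    opt = torch.optim.Adam([head], lr=lr)
    with torch.no_grad():
        clean_codes = torch.sign(hash_model(videos))       # original hash codes

    for _ in range(steps):
        perturbed = torch.cat([head.expand(B, -1, -1, -1, -1), videos], dim=1)
        codes = torch.tanh(hash_model(perturbed))          # relaxed (continuous) codes
        loss = (codes * clean_codes).mean()                # agreement with clean codes
        opt.zero_grad()
        loss.backward()                                    # minimize agreement
        opt.step()
        head.data.clamp_(0.0, 1.0)                         # keep frames valid images
    return head.detach()
```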

Video Retrieval

Learnable Hypergraph Laplacian for Hypergraph Learning

no code implementations12 Jun 2021 Jiying Zhang, Yuzhao Chen, Xi Xiao, Runiu Lu, Shu-Tao Xia

Hypergraph Convolutional Neural Networks (HGCNNs) have demonstrated their potential in modeling high-order relations preserved in graph-structured data.

Graph Classification, Node Classification

TokenPose: Learning Keypoint Tokens for Human Pose Estimation

1 code implementation ICCV 2021 YanJie Li, Shoukui Zhang, Zhicheng Wang, Sen yang, Wankou Yang, Shu-Tao Xia, Erjin Zhou

Most existing CNN-based methods do well in visual representation; however, they lack the ability to explicitly learn the constraint relationships between keypoints.

Pose Estimation

Backdoor Attack in the Physical World

no code implementations6 Apr 2021 Yiming Li, Tongqing Zhai, Yong Jiang, Zhifeng Li, Shu-Tao Xia

We demonstrate that this attack paradigm is vulnerable when the trigger in testing images is not consistent with the one used for training.

Backdoor Attack

Improving Adversarial Robustness via Channel-wise Activation Suppressing

1 code implementation ICLR 2021 Yang Bai, Yuyuan Zeng, Yong Jiang, Shu-Tao Xia, Xingjun Ma, Yisen Wang

The study of adversarial examples and their activation has attracted significant attention for secure and robust learning with deep neural networks (DNNs).

Adversarial Robustness

Hidden Backdoor Attack against Semantic Segmentation Models

no code implementations6 Mar 2021 Yiming Li, YanJie Li, Yalei Lv, Yong Jiang, Shu-Tao Xia

Deep neural networks (DNNs) are vulnerable to the \emph{backdoor attack}, which intends to embed hidden backdoors in DNNs by poisoning training data.

Autonomous Driving, Backdoor Attack +1

Targeted Attack against Deep Neural Networks via Flipping Limited Weight Bits

1 code implementation ICLR 2021 Jiawang Bai, Baoyuan Wu, Yong Zhang, Yiming Li, Zhifeng Li, Shu-Tao Xia

By utilizing the latest technique in integer programming, we equivalently reformulate this BIP problem as a continuous optimization problem, which can be effectively and efficiently solved using the alternating direction method of multipliers (ADMM) method.
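For intuition, a standard way to make a binary-constrained problem continuous (the $\ell_p$-box trick often paired with ADMM; whether this is exactly the reformulation used here is an assumption on our part) is the equivalence

$$ b \in \{0,1\}^n \iff b \in [0,1]^n \,\cap\, \Big\{ b : \big\|b - \tfrac{1}{2}\mathbf{1}\big\|_2^2 = \tfrac{n}{4} \Big\}, $$

so the binary integer program becomes a continuous problem with a box constraint and a sphere constraint, each of which can be handled in its own ADMM block.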

Backdoor Attack

Stochastic Deep Gaussian Processes over Graphs

1 code implementation NeurIPS 2020 Naiqi Li, Wenjie Li, Jifeng Sun, Yinghua Gao, Yong Jiang, Shu-Tao Xia

In this paper we propose Stochastic Deep Gaussian Processes over Graphs (DGPG), which are deep structure models that learn the mappings between input and output signals in graph domains.

Gaussian Processes, Variational Inference

Backdoor Attack against Speaker Verification

1 code implementation22 Oct 2020 Tongqing Zhai, Yiming Li, Ziqi Zhang, Baoyuan Wu, Yong Jiang, Shu-Tao Xia

We also demonstrate that existing backdoor attacks cannot be directly adopted in attacking speaker verification.

Backdoor Attack, Speaker Verification

JSRT: James-Stein Regression Tree

no code implementations18 Oct 2020 Xingchun Xiang, Qingtao Tang, Huaixuan Zhang, Tao Dai, Jiawei Li, Shu-Tao Xia

To address this issue, we propose a novel regression tree, named James-Stein Regression Tree (JSRT) by considering global information from different nodes.

DPAttack: Diffused Patch Attacks against Universal Object Detection

no code implementations16 Oct 2020 Shudeng Wu, Tao Dai, Shu-Tao Xia

Recently, deep neural networks (DNNs) have been widely and successfully used in object detection, e.g.

Object Detection

Open-sourced Dataset Protection via Backdoor Watermarking

1 code implementation12 Oct 2020 Yiming Li, Ziqi Zhang, Jiawang Bai, Baoyuan Wu, Yong Jiang, Shu-Tao Xia

Based on the proposed backdoor-based watermarking, we use a hypothesis-test-guided method for dataset verification, based on the posterior probabilities on the target class that the suspicious third-party model produces for benign samples and their correspondingly watermarked samples (i.e., images with the trigger).
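A minimal sketch of such a verification step, assuming we already have the suspicious model's posterior probability on the target class for each benign image and for its watermarked counterpart; the one-sided paired t-test and the significance level are illustrative choices:

```python
from scipy import stats

def verify_dataset_use(p_benign, p_watermarked, alpha=0.05):
    """p_benign / p_watermarked: arrays of the suspicious model's posterior
    probability on the target class, for benign images and for the same images
    with the trigger stamped on. If the model was trained on the watermarked
    dataset, p_watermarked should be significantly larger."""
    _, p_value = stats.ttest_rel(p_watermarked, p_benign, alternative="greater")
    return p_value < alpha, p_value
```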

Image Classification

Improving Query Efficiency of Black-box Adversarial Attack

1 code implementation ECCV 2020 Yang Bai, Yuyuan Zeng, Yong Jiang, Yisen Wang, Shu-Tao Xia, Weiwei Guo

Deep neural networks (DNNs) have demonstrated excellent performance on various tasks, however they are under the risk of adversarial examples that can be easily generated when the target model is accessible to an attacker (white-box setting).

Adversarial Attack

Rectified Decision Trees: Exploring the Landscape of Interpretable and Effective Machine Learning

no code implementations21 Aug 2020 Yiming Li, Jiawang Bai, Jiawei Li, Xue Yang, Yong Jiang, Shu-Tao Xia

Interpretability and effectiveness are two essential and indispensable requirements for adopting machine learning methods in reality.

Knowledge Distillation

Neural Network-based Automatic Factor Construction

no code implementations14 Aug 2020 Jie Fang, Jian-Wu Lin, Shu-Tao Xia, Yong Jiang, Zhikang Xia, Xiang Liu

This paper proposes Neural Network-based Automatic Factor Construction (NNAFC), a tailored neural network framework that can automatically construct diversified financial factors based on financial domain knowledge and a variety of neural network structures.

Time Series

Backdoor Learning: A Survey

1 code implementation17 Jul 2020 Yiming Li, Yong Jiang, Zhifeng Li, Shu-Tao Xia

A backdoor attack intends to embed a hidden backdoor into deep neural networks (DNNs), so that the attacked models perform well on benign samples, whereas their predictions will be maliciously changed if the hidden backdoor is activated by attacker-specified triggers.
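A minimal BadNets-style sketch of that threat model on the data side, assuming an image dataset stored as float arrays of shape (N, H, W, C) in [0, 1]; the trigger shape, poisoning rate, and target label are illustrative:

```python
import numpy as np

def poison_dataset(images, labels, target_label=0, rate=0.05, patch=3):
    """Stamp a small white square (the trigger) onto a random subset of the
    training images and relabel those images to the attacker-chosen class."""
    images, labels = images.copy(), labels.copy()
    n_poison = int(rate * len(images))
    idx = np.random.choice(len(images), n_poison, replace=False)
    images[idx, -patch:, -patch:, :] = 1.0   # bottom-right corner trigger
    labels[idx] = target_label
    return images, labels
```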

Backdoor Attack, Data Poisoning

Training Interpretable Convolutional Neural Networks by Differentiating Class-specific Filters

1 code implementation ECCV 2020 Haoyu Liang, Zhihao Ouyang, Yuyuan Zeng, Hang Su, Zihao He, Shu-Tao Xia, Jun Zhu, Bo Zhang

Most existing works attempt post-hoc interpretation on a pre-trained model, while neglecting to reduce the entanglement underlying the model.

Object Localization

Temporal Calibrated Regularization for Robust Noisy Label Learning

no code implementations1 Jul 2020 Dongxian Wu, Yisen Wang, Zhuobin Zheng, Shu-Tao Xia

Deep neural networks (DNNs) exhibit great success on many tasks with the help of large-scale well annotated datasets.

Targeted Attack for Deep Hashing based Retrieval

2 code implementations ECCV 2020 Jiawang Bai, Bin Chen, Yiming Li, Dongxian Wu, Weiwei Guo, Shu-Tao Xia, En-hui Yang

In this paper, we propose a novel method, dubbed deep hashing targeted attack (DHTA), to study the targeted attack on such retrieval.

Image Retrieval, Video Retrieval

Adversarial Weight Perturbation Helps Robust Generalization

3 code implementations NeurIPS 2020 Dongxian Wu, Shu-Tao Xia, Yisen Wang

The study of improving the robustness of deep neural networks against adversarial examples has grown rapidly in recent years.

Adversarial Robustness

Rethinking the Trigger of Backdoor Attack

no code implementations9 Apr 2020 Yiming Li, Tongqing Zhai, Baoyuan Wu, Yong Jiang, Zhifeng Li, Shu-Tao Xia

A backdoor attack intends to inject a hidden backdoor into deep neural networks (DNNs), such that the prediction of the infected model will be maliciously changed if the hidden backdoor is activated by the attacker-defined trigger, while the model performs well on benign samples.

Backdoor Attack

Matrix Smoothing: A Regularization for DNN with Transition Matrix under Noisy Labels

no code implementations26 Mar 2020 Xianbin Lv, Dongxian Wu, Shu-Tao Xia

Probabilistic modeling, which consists of a classifier and a transition matrix, depicts the transformation from true labels to noisy labels and is a promising approach.
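In symbols, with $T_{ij} = P(\tilde{y}=j \mid y=i)$ the transition matrix and $P(y=i \mid x)$ the classifier's clean-label posterior, this probabilistic model of label noise reads

$$ P(\tilde{y}=j \mid x) = \sum_{i=1}^{K} T_{ij}\, P(y=i \mid x). $$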

Toward Adversarial Robustness via Semi-supervised Robust Training

1 code implementation16 Mar 2020 Yiming Li, Baoyuan Wu, Yan Feng, Yanbo Fan, Yong Jiang, Zhifeng Li, Shu-Tao Xia

In this work, we propose a novel defense method, robust training (RT), which jointly minimizes two separated risks ($R_{stand}$ and $R_{rob}$), defined with respect to the benign example and its neighborhood, respectively.
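A minimal sketch of a joint objective of this form, assuming $R_{rob}$ is instantiated as a classification loss on a perturbed point from the benign example's neighborhood; the perturbation routine and weighting are illustrative assumptions, not the paper's exact risks:

```python
import torch.nn.functional as F

def robust_training_loss(model, x, y, perturb, lam=1.0):
    """Jointly minimize a standard risk on the benign example and a robust
    risk on a point drawn from its neighborhood."""
    r_stand = F.cross_entropy(model(x), y)      # R_stand: benign example
    x_neigh = perturb(model, x, y)              # e.g. a PGD step inside an eps-ball
    r_rob = F.cross_entropy(model(x_neigh), y)  # R_rob: neighborhood of x
    return r_stand + lam * r_rob
```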

Adversarial Defense, Adversarial Robustness

Adversarial Attack on Deep Product Quantization Network for Image Retrieval

no code implementations26 Feb 2020 Yan Feng, Bin Chen, Tao Dai, Shu-Tao Xia

Deep product quantization network (DPQN) has recently received much attention in fast image retrieval tasks due to its efficiency of encoding high-dimensional visual features especially when dealing with large-scale datasets.

Adversarial Attack, Image Retrieval +1

An Accuracy-Lossless Perturbation Method for Defending Privacy Attacks in Federated Learning

no code implementations23 Feb 2020 Xue Yang, Yan Feng, Weijun Fang, Jun Shao, Xiaohu Tang, Shu-Tao Xia, Rongxing Lu

However, these schemes cannot ensure strong defence ability and high learning accuracy at the same time, which impedes the wide application of FL in practice (especially for medical or financial institutions that require both high accuracy and strong privacy guarantees).

Federated Learning

Skip Connections Matter: On the Transferability of Adversarial Examples Generated with ResNets

2 code implementations ICLR 2020 Dongxian Wu, Yisen Wang, Shu-Tao Xia, James Bailey, Xingjun Ma

We find that using more gradients from the skip connections rather than the residual modules according to a decay factor, allows one to craft adversarial examples with high transferability.
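A minimal sketch of the decay idea on a single residual block, assuming the usual out = x + f(x) structure: the forward value is unchanged, and only the gradient flowing back through the residual module is scaled (the block interface and the value of gamma are illustrative):

```python
import torch

class _ScaleGrad(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, gamma):
        ctx.gamma = gamma
        return x.clone()

    @staticmethod
    def backward(ctx, grad_output):
        # decay the gradient of the residual branch; the skip path is untouched
        return ctx.gamma * grad_output, None

def skip_gradient_residual(block, x, gamma=0.5):
    """Forward pass is the usual x + block(x); in the backward pass the gradient
    flowing through block(x) is multiplied by gamma < 1, so relatively more
    gradient travels along the skip connection."""
    return x + _ScaleGrad.apply(block(x), gamma)
```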

Alpha Discovery Neural Network based on Prior Knowledge

no code implementations26 Dec 2019 Jie Fang, Shu-Tao Xia, Jian-Wu Lin, Zhikang Xia, Xiang Liu, Yong Jiang

This paper proposes Alpha Discovery Neural Network (ADNN), a tailored neural network structure which can automatically construct diversified financial technical indicators based on prior knowledge.

Time Series

Automatic Financial Feature Construction

no code implementations8 Dec 2019 Jie Fang, Shu-Tao Xia, Jian-Wu Lin, Yong Jiang

According to the universal approximation theorem for neural networks, pre-training can make the evolution process more effective and explainable.

Data Augmentation, Time Series

Visual Privacy Protection via Mapping Distortion

1 code implementation5 Nov 2019 Yiming Li, Peidong Liu, Yong Jiang, Shu-Tao Xia

The privacy of visual classification data lies largely in the mapping between an image and its corresponding label, since this relation provides a great amount of information and can be used in other scenarios.

Deep Flow Collaborative Network for Online Visual Tracking

no code implementations5 Nov 2019 Peidong Liu, Xiyu Yan, Yong Jiang, Shu-Tao Xia

Deep learning-based visual tracking algorithms such as MDNet achieve high performance by leveraging the feature extraction ability of deep neural networks.

Frame, Optical Flow Estimation +1

Adversarial Defense via Local Flatness Regularization

no code implementations27 Oct 2019 Jia Xu, Yiming Li, Yong Jiang, Shu-Tao Xia

In this paper, we define the local flatness of the loss surface as the maximum value of the chosen norm of the gradient with respect to the input within a neighborhood centered on the benign sample, and discuss the relationship between local flatness and adversarial vulnerability.
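Written out, with $\mathcal{B}_\epsilon(x)$ the neighborhood of a benign sample $(x, y)$ and $\|\cdot\|$ the chosen norm, the definition above reads

$$ \mathrm{LFR}(x, y) = \max_{x' \in \mathcal{B}_\epsilon(x)} \big\| \nabla_{x'} \mathcal{L}\big(f(x'), y\big) \big\|, $$

and the natural regularized training objective (stated here as an assumption about how a defense would use it) is $\min_f \mathbb{E}_{(x,y)} \big[ \mathcal{L}(f(x), y) + \lambda \cdot \mathrm{LFR}(x, y) \big]$.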

Adversarial Defense

Training Interpretable Convolutional Neural Networks towards Class-specific Filters

no code implementations25 Sep 2019 Haoyu Liang, Zhihao Ouyang, Hang Su, Yuyuan Zeng, Zihao He, Shu-Tao Xia, Jun Zhu, Bo Zhang

Convolutional neural networks (CNNs) have often been treated as “black-box” and successfully used in a range of tasks.

AdaCompress: Adaptive Compression for Online Computer Vision Services

1 code implementation17 Sep 2019 Hongshan Li, Yu Guo, Zhi Wang, Shu-Tao Xia, Wenwu Zhu

We then train the agent with reinforcement learning to adapt it to different deep learning cloud services, which act as the \emph{interactive training environment} and feed back a reward that jointly considers accuracy and data size.
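A toy version of such a reward, assuming the agent observes whether the cloud service's prediction is still correct and the compressed size relative to the original; the linear trade-off and its weight are illustrative assumptions, not the paper's exact reward:

```python
def compression_reward(correct, compressed_bytes, original_bytes, alpha=1.0):
    """correct: 1.0 if the cloud service's prediction is unchanged/right, else 0.0.
    The reward trades recognition accuracy off against upload size."""
    return correct - alpha * (compressed_bytes / original_bytes)
```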

Multimedia, Image and Video Processing

Adaptive Regularization of Labels

no code implementations15 Aug 2019 Qianggang Ding, Sifan Wu, Hao Sun, Jiadong Guo, Shu-Tao Xia

In addition, label regularization techniques such as label smoothing and label disturbance have also been proposed with the motivation of adding a stochastic perturbation to labels.
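For reference, plain label smoothing over $K$ classes replaces the one-hot target with a mixture of the one-hot vector and the uniform distribution; a minimal sketch of the classical technique only, not the adaptive scheme proposed here:

```python
import numpy as np

def smooth_labels(y, num_classes, eps=0.1):
    """y: integer class labels of shape (N,). Returns soft targets of shape
    (N, num_classes): weight 1 - eps on the true class, eps spread uniformly."""
    one_hot = np.eye(num_classes)[y]
    return (1.0 - eps) * one_hot + eps / num_classes
```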

Data Augmentation, Knowledge Distillation +1

$t$-$k$-means: A Robust and Stable $k$-means Variant

1 code implementation17 Jul 2019 Yiming Li, Yang Zhang, Qingtao Tang, Weipeng Huang, Yong Jiang, Shu-Tao Xia

The $k$-means algorithm is one of the most classical clustering methods, and it has been widely and successfully used in signal processing.

Rectified Decision Trees: Towards Interpretability, Compression and Empirical Soundness

no code implementations14 Mar 2019 Jiawang Bai, Yiming Li, Jiawei Li, Yong Jiang, Shu-Tao Xia

How to obtain a model with good interpretability and performance has always been an important research topic.

Knowledge Distillation

Multinomial Random Forest: Toward Consistency and Privacy-Preservation

no code implementations10 Mar 2019 Yiming Li, Jiawang Bai, Jiawei Li, Xue Yang, Yong Jiang, Chun Li, Shu-Tao Xia

Despite the impressive performance of random forests (RF), their theoretical properties have not been thoroughly understood.

General Classification

BML: A High-performance, Low-cost Gradient Synchronization Algorithm for DML Training

no code implementations NeurIPS 2018 Songtao Wang, Dan Li, Yang Cheng, Jinkun Geng, Yanshu Wang, Shuai Wang, Shu-Tao Xia, Jianping Wu

In distributed machine learning (DML), the network performance between machines significantly impacts the speed of iterative training.

Exploiting Common Characters in Chinese and Japanese to Learn Cross-Lingual Word Embeddings via Matrix Factorization

no code implementations WS 2018 Jilei Wang, Shiying Luo, Weiyan Shi, Tao Dai, Shu-Tao Xia

Learning vector space representations of words (i.e., word embeddings) has recently attracted wide research interest and has been extended to the cross-lingual scenario.

Cross-Lingual Word Embeddings, Machine Translation +2

Iterative Learning with Open-set Noisy Labels

1 code implementation CVPR 2018 Yisen Wang, Weiyang Liu, Xingjun Ma, James Bailey, Hongyuan Zha, Le Song, Shu-Tao Xia

We refer to this more complex scenario as the \textbf{open-set noisy label} problem and show that making accurate predictions in this setting is nontrivial.

Nonextensive information theoretical machine

no code implementations21 Apr 2016 Chaobing Song, Shu-Tao Xia

In this paper, we propose a new discriminative model named \emph{nonextensive information theoretical machine (NITM)} based on nonextensive generalization of Shannon information theory.

Bayesian linear regression with Student-t assumptions

no code implementations15 Apr 2016 Chaobing Song, Shu-Tao Xia

In this paper, we propose a Bayesian linear regression model with Student-t assumptions (BLRS), which can be inferred exactly.

Unifying Decision Trees Split Criteria Using Tsallis Entropy

no code implementations25 Nov 2015 Yisen Wang, Chaobing Song, Shu-Tao Xia

In this paper, a Tsallis Entropy Criterion (TEC) algorithm is proposed to unify Shannon entropy, Gain Ratio and Gini index, which generalizes the split criteria of decision trees.
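Concretely, the Tsallis entropy of a class distribution $p = (p_1, \dots, p_K)$ with parameter $q$ is

$$ S_q(p) = \frac{1}{q-1}\Big(1 - \sum_{i=1}^{K} p_i^{q}\Big), $$

which recovers Shannon entropy $-\sum_i p_i \ln p_i$ in the limit $q \to 1$ and the Gini index $1 - \sum_i p_i^2$ at $q = 2$, so a single tunable $q$ interpolates between the classical split criteria.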
