Search Results for author: Tao Wei

Found 18 papers, 4 papers with code

Towards Sample-specific Backdoor Attack with Clean Labels via Attribute Trigger

no code implementations3 Dec 2023 Yiming Li, Mingyan Zhu, Junfeng Guo, Tao Wei, Shu-Tao Xia, Zhan Qin

We argue that the intensity constraint of existing SSBAs arises mostly because their trigger patterns are 'content-irrelevant' and therefore act as 'noises' for both humans and DNNs.

Attribute Backdoor Attack

DASA: Difficulty-Aware Semantic Augmentation for Speaker Verification

no code implementations18 Oct 2023 Yuanyuan Wang, Yang Zhang, Zhiyong Wu, Zhihan Yang, Tao Wei, Kun Zou, Helen Meng

Existing augmentation methods for speaker verification manipulate the raw signal, which is time-consuming, and the augmented samples lack diversity.

Data Augmentation Speaker Verification

Human labeling errors and their impact on ConvNets for satellite image scene classification

no code implementations20 May 2023 Longkang Peng, Tao Wei, Xuehong Chen, Xiaobei Chen, Rui Sun, Luoma Wan, Xiaolin Zhu

However, the distribution of human labeling errors on satellite images and their impact on ConvNets have not been investigated.

Scene Classification

Counterfactual-based Saliency Map: Towards Visual Contrastive Explanations for Neural Networks

no code implementations ICCV 2023 Xue Wang, Zhibo Wang, Haiqin Weng, Hengchang Guo, Zhifei Zhang, Lu Jin, Tao Wei, Kui Ren

Considering the insufficient study on such complex causal questions, we make the first attempt to explain different causal questions by contrastive explanations in a unified framework, i.e., Counterfactual Contrastive Explanation (CCE), which visually and intuitively explains the aforementioned questions via a novel positive-negative saliency-based explanation scheme.

counterfactual
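
As a rough illustration of a positive-negative, gradient-based contrastive saliency map, the sketch below shows a generic technique, not the authors' CCE scheme, which is only described at a high level in the abstract:

```python
import torch

def contrastive_saliency(model, x, class_p, class_q):
    """Generic gradient-based contrastive saliency (not the paper's CCE
    scheme): attribute each pixel by how much it raises the score of
    class_p relative to class_q; positive values support class_p,
    negative values support class_q."""
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)
    margin = logits[0, class_p] - logits[0, class_q]
    margin.backward()
    return x.grad.detach()
```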

Black-box Dataset Ownership Verification via Backdoor Watermarking

1 code implementation4 Aug 2022 Yiming Li, Mingyan Zhu, Xue Yang, Yong Jiang, Tao Wei, Shu-Tao Xia

The rapid development of DNNs has benefited from the existence of some high-quality datasets (e.g., ImageNet), which allow researchers and developers to easily verify the performance of their methods.

Fairness-aware Adversarial Perturbation Towards Bias Mitigation for Deployed Deep Models

no code implementations CVPR 2022 Zhibo Wang, Xiaowei Dong, Henry Xue, Zhifei Zhang, Weifeng Chiu, Tao Wei, Kui Ren

Prioritizing fairness is of central importance in artificial intelligence (AI) systems, especially for societal applications, e.g., hiring systems should recommend applicants equally from different demographic groups, and risk assessment systems must eliminate racism in criminal justice.

Fairness

Optimized Separable Convolution: Yet Another Efficient Convolution Operator

no code implementations29 Sep 2021 Tao Wei, Yonghong Tian, YaoWei Wang, Yun Liang, Chang Wen Chen

In this research, we propose a novel and principled operator called optimized separable convolution, which, by optimally designing the internal number of groups and kernel sizes for general separable convolutions, achieves a complexity of $O(C^{\frac{3}{2}}K)$.
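
A minimal PyTorch sketch of a separable convolution built from a grouped spatial convolution plus a pointwise convolution follows; here `groups` is left as a free hyperparameter, whereas the paper derives the optimal group count and kernel sizes to reach the stated complexity:

```python
import torch
import torch.nn as nn

class SeparableConv2d(nn.Module):
    """Illustrative separable convolution: a grouped KxK convolution
    followed by a pointwise (1x1) convolution. `groups` is a free
    hyperparameter here; the paper instead derives the optimal number
    of groups and kernel sizes."""
    def __init__(self, in_ch, out_ch, kernel_size=3, groups=8):
        super().__init__()
        # Grouped spatial convolution: ~ (C^2 / groups) * K^2 multiplies per pixel
        self.spatial = nn.Conv2d(in_ch, in_ch, kernel_size,
                                 padding=kernel_size // 2, groups=groups)
        # Pointwise convolution: ~ C^2 multiplies per pixel
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.spatial(x))

x = torch.randn(1, 64, 32, 32)
print(SeparableConv2d(64, 128)(x).shape)  # torch.Size([1, 128, 32, 32])
```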

A High-Performance, Reconfigurable, Fully Integrated Time-Domain Reflectometry Architecture Using Digital I/Os

no code implementations1 May 2021 Zhenyu Xu, Thomas Mauldin, Zheyi Yao, Gerald Hefferman, Tao Wei

These results demonstrate that a fully reconfigurable and highly integrated TDR (iTDR) can be implemented on a field-programmable gate array (FPGA) chip without using any external circuit components.

Rethinking Convolution: Towards an Optimal Efficiency

no code implementations1 Jan 2021 Tao Wei, Yonghong Tian, Chang Wen Chen

In this research, we propose a novel operator called \emph{optimal separable convolution}, which can be computed at $O(C^{\frac{3}{2}}KHW)$ by optimally designing the internal number of groups and kernel sizes for general separable convolutions.

Computational Efficiency
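
As context for the stated complexity, the per-layer multiply counts for $C$ input/output channels, $K \times K$ kernels, and an $H \times W$ feature map compare as follows; the first two lines are standard results, the third is the abstract's claim:

```latex
\begin{align*}
\text{standard convolution:} \quad & O(C^2 K^2 HW) \\
\text{depthwise separable convolution:} \quad & O\big((C K^2 + C^2)\, HW\big) \\
\text{optimal separable (this paper):} \quad & O\big(C^{3/2} K\, HW\big)
\end{align*}
```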

Fooling Detection Alone is Not Enough: Adversarial Attack against Multiple Object Tracking

1 code implementation ICLR 2020 Yunhan Jia, Yantao Lu, Junjie Shen, Qi Alfred Chen, Hao Chen, Zhenyu Zhong, Tao Wei

Recent work in adversarial machine learning started to focus on the visual perception in autonomous driving and studied Adversarial Examples (AEs) for object detection models.

Adversarial Attack Autonomous Driving +5

Improving Adversarial Robustness via Attention and Adversarial Logit Pairing

no code implementations23 Aug 2019 Dou Goodman, Xingjian Li, Ji Liu, Dejing Dou, Tao Wei

Finally, we conduct extensive experiments using a wide range of datasets and the experiment results show that our AT+ALP achieves the state of the art defense performance.

Adversarial Robustness
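
Adversarial logit pairing (ALP) itself is a known objective; below is a minimal sketch of an AT+ALP-style loss, omitting the paper's attention component and treating the weight `lam` as a hypothetical hyperparameter:

```python
import torch.nn.functional as F

def at_alp_loss(model, x_clean, x_adv, y, lam=0.5):
    """Sketch of adversarial training with logit pairing: cross-entropy
    on adversarial inputs plus a term pulling clean and adversarial
    logits together. `lam` is a hypothetical weight; the paper's
    attention mechanism is not reproduced here."""
    logits_clean = model(x_clean)
    logits_adv = model(x_adv)
    ce = F.cross_entropy(logits_adv, y)              # adversarial training term
    pairing = F.mse_loss(logits_adv, logits_clean)   # logit pairing term
    return ce + lam * pairing
```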

Cloud-based Image Classification Service Is Not Robust To Simple Transformations: A Forgotten Battlefield

no code implementations19 Jun 2019 Dou Goodman, Tao Wei

Many recent works have demonstrated that deep learning models are vulnerable to adversarial examples. Fortunately, generating adversarial examples usually requires white-box access to the victim model, whereas an attacker of a cloud platform can only access the APIs it exposes.

Classification General Classification +1

Fooling Detection Alone is Not Enough: First Adversarial Attack against Multiple Object Tracking

1 code implementation27 May 2019 Yunhan Jia, Yantao Lu, Junjie Shen, Qi Alfred Chen, Zhenyu Zhong, Tao Wei

Recent work in adversarial machine learning started to focus on the visual perception in autonomous driving and studied Adversarial Examples (AEs) for object detection models.

Adversarial Attack Autonomous Driving +5

Enhancing Cross-task Transferability of Adversarial Examples with Dispersion Reduction

1 code implementation8 May 2019 Yunhan Jia, Yantao Lu, Senem Velipasalar, Zhenyu Zhong, Tao Wei

Neural networks are known to be vulnerable to carefully crafted adversarial examples, and these malicious samples often transfer, i.e., they maintain their effectiveness even against other models.

Image Classification object-detection +3
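
The attack's core idea, as the title suggests, is to reduce the dispersion of an intermediate feature map. A PGD-style sketch under that reading follows, with the step size, budget, and layer choice as common defaults rather than the paper's settings:

```python
import torch

def dispersion_reduction(x, feature_extractor, steps=10, alpha=2/255, eps=8/255):
    """PGD-style sketch: perturb the input to shrink the standard
    deviation ("dispersion") of an intermediate feature map, degrading
    features shared across downstream tasks. Hyperparameters and the
    choice of layer are assumptions, not the paper's settings."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = feature_extractor(x_adv).std()   # dispersion of features
        loss.backward()
        with torch.no_grad():
            x_adv = x_adv - alpha * x_adv.grad.sign()         # reduce dispersion
            x_adv = x + (x_adv - x).clamp(-eps, eps)          # L_inf projection
            x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()
```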

Network Iterative Learning for Dynamic Deep Neural Networks via Morphism

no code implementations ICLR 2018 Tao Wei, Changhu Wang, Chang Wen Chen

In this research, we present a novel learning scheme called network iterative learning for deep neural networks.

Modularized Morphing of Neural Networks

no code implementations12 Jan 2017 Tao Wei, Changhu Wang, Chang Wen Chen

Different from existing work, where basic morphing types at the layer level were addressed, we target the central problem of network morphism at a higher level, i.e., how a convolutional layer can be morphed into an arbitrary module of a neural network.

MORPH

Network Morphism

no code implementations5 Mar 2016 Tao Wei, Changhu Wang, Yong Rui, Chang Wen Chen

The second requirement for this network morphism is its ability to deal with non-linearity in a network.

MORPH
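
One standard way to morph a network while preserving its function, in the spirit of this line of work though not necessarily the paper's exact construction, is to insert an identity-initialized convolution followed by an activation that is initially the identity, e.g., a PReLU with slope 1:

```python
import torch
import torch.nn as nn

def identity_conv(channels, kernel_size=3):
    """A KxK convolution initialized to the identity map, so inserting
    it leaves the parent network's function unchanged until training
    resumes. Illustrative only, not necessarily the paper's scheme."""
    conv = nn.Conv2d(channels, channels, kernel_size,
                     padding=kernel_size // 2, bias=False)
    nn.init.zeros_(conv.weight)
    centre = kernel_size // 2
    for c in range(channels):
        conv.weight.data[c, c, centre, centre] = 1.0
    return conv

# A PReLU with slope 1 is the identity at initialization, one way to
# address the non-linearity requirement mentioned in the abstract.
morph = nn.Sequential(identity_conv(16), nn.PReLU(16, init=1.0))
x = torch.randn(1, 16, 8, 8)
assert torch.allclose(morph(x), x, atol=1e-6)
```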
