Search Results for author: Tao Wei

Found 13 papers, 3 papers with code

Fairness-aware Adversarial Perturbation Towards Bias Mitigation for Deployed Deep Models

no code implementations CVPR 2022 Zhibo Wang, Xiaowei Dong, Henry Xue, Zhifei Zhang, Weifeng Chiu, Tao Wei, Kui Ren

Prioritizing fairness is of central importance in artificial intelligence (AI) systems, especially in societal applications: hiring systems, for example, should recommend applicants equally across demographic groups, and risk assessment systems must avoid racial bias in criminal justice.

Fairness

Optimized Separable Convolution: Yet Another Efficient Convolution Operator

no code implementations 29 Sep 2021 Tao Wei, Yonghong Tian, YaoWei Wang, Yun Liang, Chang Wen Chen

In this research, we propose a novel and principled operator called optimized separable convolution which, by optimally designing the internal number of groups and kernel sizes for general separable convolutions, achieves a complexity of $O(C^{\frac{3}{2}}K)$.

A High-Performance, Reconfigurable, Fully Integrated Time-Domain Reflectometry Architecture Using Digital I/Os

no code implementations 1 May 2021 Zhenyu Xu, Thomas Mauldin, Zheyi Yao, Gerald Hefferman, Tao Wei

These results demonstrate that a fully reconfigurable and highly integrated TDR (iTDR) can be implemented on a field-programmable gate array (FPGA) chip without using any external circuit components.

Rethinking Convolution: Towards an Optimal Efficiency

no code implementations 1 Jan 2021 Tao Wei, Yonghong Tian, Chang Wen Chen

In this research, we propose a novel operator called \emph{optimal separable convolution} which can be computed in $O(C^{\frac{3}{2}}KHW)$ by optimally designing the internal number of groups and kernel sizes for general separable convolutions.
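The complexity claim above can be illustrated with a rough FLOP comparison. This is an illustrative sketch only: the standard and depthwise-separable formulas are textbook estimates, and the $O(C^{\frac{3}{2}}KHW)$ figure is taken from the abstract's asymptotic claim, not from the paper's actual operator design.

```python
# Illustrative FLOP estimates (constants omitted) for a square feature
# map of size HxW with C input/output channels and kernel size K.

def standard_conv_flops(C, K, H, W):
    # Dense convolution: every output channel sees every input channel.
    return C * C * K * K * H * W

def depthwise_separable_flops(C, K, H, W):
    # Depthwise KxK pass followed by a 1x1 pointwise pass.
    return C * K * K * H * W + C * C * H * W

def optimal_separable_flops(C, K, H, W):
    # Asymptotic cost claimed in the abstract for the proposed operator.
    return C ** 1.5 * K * H * W

if __name__ == "__main__":
    C, K, H, W = 256, 3, 56, 56
    for name, fn in [("standard", standard_conv_flops),
                     ("depthwise separable", depthwise_separable_flops),
                     ("optimal separable (claimed)", optimal_separable_flops)]:
        print(f"{name:28s} ~{fn(C, K, H, W):,.0f} FLOPs")
```

For a typical layer (C=256, K=3) the claimed cost is well below both the dense and the depthwise-separable estimates, which is the gap the paper targets.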

Fooling Detection Alone is Not Enough: Adversarial Attack against Multiple Object Tracking

1 code implementation ICLR 2020 Yunhan Jia, Yantao Lu, Junjie Shen, Qi Alfred Chen, Hao Chen, Zhenyu Zhong, Tao Wei

Recent work in adversarial machine learning started to focus on the visual perception in autonomous driving and studied Adversarial Examples (AEs) for object detection models.

Adversarial Attack Autonomous Driving +4

Improving Adversarial Robustness via Attention and Adversarial Logit Pairing

no code implementations 23 Aug 2019 Dou Goodman, Xingjian Li, Ji Liu, Dejing Dou, Tao Wei

Finally, we conduct extensive experiments on a wide range of datasets, and the results show that our AT+ALP achieves state-of-the-art defense performance.

Adversarial Robustness

Cloud-based Image Classification Service Is Not Robust To Simple Transformations: A Forgotten Battlefield

no code implementations 19 Jun 2019 Dou Goodman, Tao Wei

Many recent works have demonstrated that deep learning models are vulnerable to adversarial examples. Fortunately, generating adversarial examples usually requires white-box access to the victim model, whereas an attacker can typically only access the APIs exposed by cloud platforms.

Classification General Classification +1
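The simple, content-preserving transformations this paper evaluates against cloud classifiers can be sketched as follows. This is an illustrative toy: `toy_score` is a hypothetical stand-in for a remote classification API (the paper's targets are real commercial services), and the specific transforms shown are common examples of the kind studied, not the paper's exact set.

```python
import numpy as np

def toy_score(img):
    # Hypothetical classifier "confidence" that is sensitive to pixel
    # layout; stands in for an opaque cloud API in this sketch.
    h, w = img.shape
    weights = np.linspace(0.0, 1.0, h * w).reshape(h, w)
    return float((img * weights).mean())

def simple_transforms(img):
    # Content-preserving edits: horizontal flip, 90-degree rotation,
    # and a crude 3x3 box blur on the interior pixels.
    yield "hflip", img[:, ::-1]
    yield "rot90", np.rot90(img)
    blurred = img.astype(float).copy()
    blurred[1:-1, 1:-1] = (img[:-2, 1:-1] + img[2:, 1:-1] +
                           img[1:-1, :-2] + img[1:-1, 2:] +
                           img[1:-1, 1:-1]) / 5.0
    yield "box_blur", blurred

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((8, 8))
    base = toy_score(img)
    for name, t in simple_transforms(img):
        print(f"{name:8s} score shift: {toy_score(t) - base:+.4f}")
```

Even these trivial edits shift the toy model's score, mirroring the paper's observation that cloud classifiers can be unstable under transformations that leave image content intact.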

Fooling Detection Alone is Not Enough: First Adversarial Attack against Multiple Object Tracking

1 code implementation 27 May 2019 Yunhan Jia, Yantao Lu, Junjie Shen, Qi Alfred Chen, Zhenyu Zhong, Tao Wei

Recent work in adversarial machine learning started to focus on the visual perception in autonomous driving and studied Adversarial Examples (AEs) for object detection models.

Adversarial Attack Autonomous Driving +4

Enhancing Cross-task Transferability of Adversarial Examples with Dispersion Reduction

1 code implementation 8 May 2019 Yunhan Jia, Yantao Lu, Senem Velipasalar, Zhenyu Zhong, Tao Wei

Neural networks are known to be vulnerable to carefully crafted adversarial examples, and these malicious samples often transfer, i.e., they remain effective even against other models.

Image Classification object-detection +2
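The dispersion-reduction idea behind this paper, perturbing an input so that the "dispersion" (standard deviation) of an intermediate feature map shrinks, can be sketched on a toy model. This is an assumption-laden illustration: a fixed random linear map stands in for a CNN feature extractor, whereas the actual method backpropagates through a pretrained network.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 32))   # hypothetical feature extractor
x = rng.standard_normal(32)         # toy input "image"

def dispersion(x):
    # Dispersion of the feature map: std of f = W x.
    return (W @ x).std()

def dispersion_grad(x):
    # Analytic gradient: d std(f)/d f_i = (f_i - mean(f)) / (n * std(f)),
    # chained through the linear map W.
    f = W @ x
    return W.T @ ((f - f.mean()) / (f.size * f.std()))

before = dispersion(x)
for _ in range(50):
    x = x - 0.05 * dispersion_grad(x)   # gradient descent on dispersion
after = dispersion(x)
print(f"dispersion: {before:.3f} -> {after:.3f}")
```

Descending on this single model-agnostic objective is what lets the resulting perturbation transfer across tasks in the paper's setting, since no task-specific loss (classification, detection) is ever used.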

Network Iterative Learning for Dynamic Deep Neural Networks via Morphism

no code implementations ICLR 2018 Tao Wei, Changhu Wang, Chang Wen Chen

In this research, we present a novel learning scheme called network iterative learning for deep neural networks.

Modularized Morphing of Neural Networks

no code implementations 12 Jan 2017 Tao Wei, Changhu Wang, Chang Wen Chen

Unlike existing work, which addressed basic morphing types at the layer level, we target the central problem of network morphism at a higher level, i.e., how a convolutional layer can be morphed into an arbitrary module of a neural network.

Network Morphism

no code implementations 5 Mar 2016 Tao Wei, Changhu Wang, Yong Rui, Chang Wen Chen

The second requirement for this network morphism is its ability to deal with non-linearity in a network.
