Search Results for author: Tianyi Yan

Found 6 papers, 3 papers with code

Robust Natural Language Understanding with Residual Attention Debiasing

1 code implementation • 28 May 2023 Fei Wang, James Y. Huang, Tianyi Yan, Wenxuan Zhou, Muhao Chen

However, previous ensemble-based debiasing methods typically apply debiasing on top-level logits without directly addressing biased attention patterns.

Natural Language Understanding
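
The excerpt above contrasts this paper with conventional ensemble-based debiasing applied to top-level logits. For reference, a minimal product-of-experts sketch of that logits-level approach (all tensor names are illustrative; this is not the paper's code):

```python
import torch
import torch.nn.functional as F

def poe_debiased_logits(main_logits: torch.Tensor,
                        bias_logits: torch.Tensor) -> torch.Tensor:
    """Product-of-experts ensemble: combine the main model with a
    (typically frozen) bias-only model in log space, so training
    down-weights examples the bias model already classifies well."""
    return F.log_softmax(main_logits, dim=-1) + F.log_softmax(bias_logits, dim=-1)

# Illustrative usage: train with the ensembled logits, but predict with
# main_logits alone so the bias term drops out at test time.
main_logits = torch.randn(8, 3)   # batch of 8, e.g. 3 NLI classes
bias_logits = torch.randn(8, 3)   # from a shallow bias-only model
labels = torch.randint(0, 3, (8,))
loss = F.cross_entropy(poe_debiased_logits(main_logits, bias_logits), labels)
```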

Plug-and-Play Pseudo Label Correction Network for Unsupervised Person Re-identification

no code implementations • 14 Jun 2022 Tianyi Yan, Kuan Zhu, Haiyun Guo, Guibo Zhu, Ming Tang, Jinqiao Wang

Clustering-based methods, which alternate between the generation of pseudo labels and the optimization of the feature extraction network, play a dominant role in both unsupervised learning (USL) and unsupervised domain adaptive (UDA) person re-identification (Re-ID).

Pseudo Label • Unsupervised Person Re-Identification
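
To make the clustering-based alternation concrete, here is a minimal sketch of the pseudo-label generation step; DBSCAN is a common choice in this line of work, and the parameters below are illustrative rather than the paper's:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def assign_pseudo_labels(features: np.ndarray) -> np.ndarray:
    """Cluster L2-normalized embeddings; each cluster id becomes a
    pseudo identity, and DBSCAN outliers (label -1) are excluded."""
    features = features / np.linalg.norm(features, axis=1, keepdims=True)
    return DBSCAN(eps=0.6, min_samples=4, metric="cosine").fit_predict(features)

# Toy usage: 100 random embeddings standing in for extracted CNN features.
feats = np.random.default_rng(0).normal(size=(100, 128)).astype(np.float32)
labels = assign_pseudo_labels(feats)
train_mask = labels != -1   # fine-tune the feature extractor on these
                            # samples, re-extract features, and repeat
```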

Towards Efficient Full 8-bit Integer DNN Online Training on Resource-limited Devices without Batch Normalization

no code implementations • 27 May 2021 Yukuan Yang, Xiaowei Chi, Lei Deng, Tianyi Yan, Feng Gao, Guoqi Li

In summary, the EOQ framework is specially designed to reduce the high cost of convolution and BN in network training, demonstrating the broad application prospects of online training on resource-limited devices.

Model Compression • Quantization
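
For context on why BN is singled out: at inference time a trained BN layer can be folded into the preceding convolution, but during online training its floating-point statistics must be recomputed at every step, which is the cost EOQ is built to avoid. A sketch of the standard folding identity (a well-known inference trick, not the paper's method):

```python
import numpy as np

def fold_batchnorm(w, b, gamma, beta, mean, var, eps=1e-5):
    """Fold a trained BatchNorm into the preceding conv so inference
    needs no separate BN pass:
    gamma * (conv(x) + b - mean) / sqrt(var + eps) + beta
    == conv'(x) + b'  with rescaled weights and a shifted bias."""
    scale = gamma / np.sqrt(var + eps)         # per output channel
    w_fold = w * scale.reshape(-1, 1, 1, 1)    # w: (out_ch, in_ch, kH, kW)
    b_fold = beta + (b - mean) * scale
    return w_fold, b_fold

# Example shapes: a 3x3 conv with 4 output channels.
w, b = np.ones((4, 3, 3, 3)), np.zeros(4)
g, bt, mu, var = np.ones(4), np.zeros(4), np.zeros(4), np.ones(4)
w_f, b_f = fold_batchnorm(w, b, g, bt, mu, var)
```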

Kronecker CP Decomposition with Fast Multiplication for Compressing RNNs

no code implementations • 21 Aug 2020 Dingheng Wang, Bijiao Wu, Guangshe Zhao, Man Yao, Hengnu Chen, Lei Deng, Tianyi Yan, Guoqi Li

Recurrent neural networks (RNNs) are powerful for tasks involving sequential data, such as natural language processing and video recognition.

Tensor Decomposition • Video Recognition
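
The "fast multiplication" in the title rests on the fact that a Kronecker-structured weight matrix never needs to be materialized. A minimal sketch of that standard identity (the paper's KCP format generalizes this to sums of Kronecker factors; this is not its implementation):

```python
import numpy as np

def kron_matvec(A: np.ndarray, B: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Compute (A kron B) @ x without forming the Kronecker product,
    via the identity (A ⊗ B) vec(X) = vec(B X A^T), column-major vec.
    Shapes: A (m, n), B (p, q), x (n*q,) -> result (m*p,)."""
    n, q = A.shape[1], B.shape[1]
    X = x.reshape(q, n, order="F")           # un-vec, column-major
    return (B @ X @ A.T).flatten(order="F")

# Check against the explicit Kronecker product on a small example.
rng = np.random.default_rng(0)
A, B = rng.normal(size=(3, 4)), rng.normal(size=(5, 2))
x = rng.normal(size=(8,))                    # 8 == 4 * 2
assert np.allclose(kron_matvec(A, B, x), np.kron(A, B) @ x)
```

This replaces the O(mnpq) dense mat-vec with two small matrix multiplies costing O(pn(q + m)), which is where the compression-time speedup comes from.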

Training High-Performance and Large-Scale Deep Neural Networks with Full 8-bit Integers

2 code implementations • 5 Sep 2019 Yukuan Yang, Shuang Wu, Lei Deng, Tianyi Yan, Yuan Xie, Guoqi Li

In this way, all the operations in the training and inference can be bit-wise operations, pushing towards faster processing speed, decreased memory cost, and higher energy efficiency.

Quantization
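
As a rough illustration of the 8-bit constraint on every tensor in training (weights, activations, gradients, errors), here is a generic symmetric int8 quantizer; the paper defines its own quantization functions per tensor type, so treat this only as a sketch:

```python
import numpy as np

def quantize_int8(x: np.ndarray):
    """Symmetric uniform quantization to signed 8-bit integers.
    Returns the int8 tensor and the scale needed to dequantize."""
    scale = float(np.max(np.abs(x))) / 127.0 + 1e-12
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

x = np.random.default_rng(0).normal(size=(4, 4)).astype(np.float32)
q, s = quantize_int8(x)
x_hat = q.astype(np.float32) * s   # dequantize; per-element error <= s/2
```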
