Search Results for author: Zhida Feng

Found 4 papers, 1 paper with code

ERNIE-SPARSE: Learning Hierarchical Efficient Transformer Through Regularized Self-Attention

no code implementations • 23 Mar 2022 • Yang Liu, Jiaxiang Liu, Li Chen, Yuxiang Lu, Shikun Feng, Zhida Feng, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang

We argue that two factors, information bottleneck sensitivity and inconsistency between different attention topologies, could affect the performance of the Sparse Transformer.

Sparse Learning • Text Classification • +1

ERNIE-SPARSE: Robust Efficient Transformer Through Hierarchically Unifying Isolated Information

no code implementations • 29 Sep 2021 • Yang Liu, Jiaxiang Liu, Yuxiang Lu, Shikun Feng, Yu Sun, Zhida Feng, Li Chen, Hao Tian, Hua Wu, Haifeng Wang

The first factor is information bottleneck sensitivity, which is caused by a key feature of the Sparse Transformer: only a small number of global tokens can attend to all other tokens.
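
The snippet below is an illustrative sketch of this attention pattern, not the ERNIE-SPARSE implementation: a boolean mask in which a few assumed "global" tokens attend to (and are attended by) every position, while the remaining tokens only see a local window, so information between distant positions must pass through the global tokens. The function name and the values of `seq_len`, `num_global`, and `window` are hypothetical.

```python
import numpy as np

def sparse_attention_mask(seq_len=16, num_global=2, window=2):
    """Build a sparse attention mask: local band + a few global tokens."""
    mask = np.zeros((seq_len, seq_len), dtype=bool)
    # Local band: each token attends to neighbours within the window.
    for i in range(seq_len):
        lo, hi = max(0, i - window), min(seq_len, i + window + 1)
        mask[i, lo:hi] = True
    # Global tokens: full rows and columns; these form the "bottleneck"
    # through which long-range information must flow.
    mask[:num_global, :] = True
    mask[:, :num_global] = True
    return mask

if __name__ == "__main__":
    m = sparse_attention_mask()
    # Far fewer attended entries than the seq_len**2 of full attention.
    print(f"attended entries: {m.sum()} / {m.size}")
```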

Text Classification

Alpha at SemEval-2021 Task 6: Transformer Based Propaganda Classification

no code implementations • SemEval 2021 • Zhida Feng, Jiji Tang, Jiaxiang Liu, Weichong Yin, Shikun Feng, Yu Sun, Li Chen

This paper describes the system we submitted to Task 6 of SemEval-2021, which focuses on multimodal propaganda technique classification and aims to classify a given image and its text into 22 classes.

Classification
