1 code implementation • 18 Apr 2020 • Li Chen, Zewei Xu, Yongjian Fu, Haozhe Huang, Shaowen Wang, Haifeng Li
Incorporating the double self-attention module yields an average improvement of 7% in per-class accuracy.
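For illustration, a minimal sketch of what a "double" (dual) self-attention block on convolutional feature maps can look like, assuming it combines a position-wise and a channel-wise attention branch; the class names, layer sizes, and combination by summation are assumptions for this example, not the paper's published code.

```python
# Illustrative dual self-attention block: position attention + channel attention.
import torch
import torch.nn as nn


class PositionAttention(nn.Module):
    """Self-attention over the spatial positions of a feature map."""

    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (B, HW, C//8)
        k = self.key(x).flatten(2)                      # (B, C//8, HW)
        attn = torch.softmax(q @ k, dim=-1)             # (B, HW, HW)
        v = self.value(x).flatten(2)                    # (B, C, HW)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x


class ChannelAttention(nn.Module):
    """Self-attention over the channels of a feature map."""

    def __init__(self):
        super().__init__()
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        flat = x.flatten(2)                                         # (B, C, HW)
        attn = torch.softmax(flat @ flat.transpose(1, 2), dim=-1)   # (B, C, C)
        out = (attn @ flat).view(b, c, h, w)
        return self.gamma * out + x


class DoubleSelfAttention(nn.Module):
    """Applies both attention branches and sums their outputs (an assumption)."""

    def __init__(self, channels: int):
        super().__init__()
        self.position = PositionAttention(channels)
        self.channel = ChannelAttention()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.position(x) + self.channel(x)


if __name__ == "__main__":
    feats = torch.randn(2, 64, 32, 32)           # toy batch of feature maps
    print(DoubleSelfAttention(64)(feats).shape)  # torch.Size([2, 64, 32, 32])
```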
1 code implementation • 14 Aug 2019 • Liyuan Liu, Zihan Wang, Jingbo Shang, Dandong Yin, Heng Ji, Xiang Ren, Shaowen Wang, Jiawei Han
Our model neither requires the conversion from character sequences to word sequences, nor assumes that a tokenizer can correctly detect all word boundaries.
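To make the tokenizer-free claim concrete, here is a minimal sketch of sequence labeling applied directly to raw characters, with no word segmentation step; the specific architecture (byte-level embeddings, a BiLSTM encoder, per-character tag scores) and all names are illustrative assumptions, not the paper's exact model.

```python
# Illustrative character-level tagger: labels each raw character directly,
# so no tokenizer or word-boundary detection is needed.
import torch
import torch.nn as nn


class CharTagger(nn.Module):
    def __init__(self, vocab_size: int, num_tags: int, dim: int = 64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.encoder = nn.LSTM(dim, dim, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * dim, num_tags)

    def forward(self, char_ids: torch.Tensor) -> torch.Tensor:
        # char_ids: (batch, sequence_length) of raw character indices.
        hidden, _ = self.encoder(self.embed(char_ids))
        return self.classifier(hidden)  # one tag distribution per character


if __name__ == "__main__":
    text = "Shaowen Wang works at UIUC"
    char_ids = torch.tensor([[ord(c) % 256 for c in text]])  # toy byte-level ids
    tagger = CharTagger(vocab_size=256, num_tags=9)          # e.g. BIO-style tags
    print(tagger(char_ids).shape)  # torch.Size([1, 26, 9])
```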