no code implementations • 29 Sep 2021 • Hanxu Liu, Nianmin Yao
We train a discriminator to distinguish human-written from machine-generated text and use it to score the sentences produced by the model.
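A minimal sketch of the idea, assuming a simple bag-of-words logistic-regression discriminator; the vocabulary, training data, and scoring rule here are illustrative stand-ins, not the paper's actual model:

```python
import numpy as np

# Hypothetical sketch: a bag-of-words logistic-regression discriminator
# that scores sentences by P(human-written). Vocabulary and data are
# illustrative, not from the paper.

VOCAB = ["the", "cat", "sat", "mat", "on", "xzqv", "asdf"]

def featurize(sentence):
    """Bag-of-words count vector over a fixed toy vocabulary."""
    words = sentence.lower().split()
    return np.array([words.count(w) for w in VOCAB], dtype=float)

def train_discriminator(human, machine, lr=0.5, steps=200):
    """Fit logistic regression: label 1 = human, 0 = machine."""
    X = np.stack([featurize(s) for s in human + machine])
    y = np.array([1.0] * len(human) + [0.0] * len(machine))
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid
        w -= lr * (X.T @ (p - y)) / len(y)        # gradient step
        b -= lr * np.mean(p - y)
    return w, b

def score(sentence, w, b):
    """Discriminator score in [0, 1]; higher = more human-like."""
    return float(1.0 / (1.0 + np.exp(-(featurize(sentence) @ w + b))))

human = ["the cat sat on the mat"]
machine = ["xzqv asdf asdf xzqv"]
w, b = train_discriminator(human, machine)
print(score("the cat sat", w, b) > score("xzqv asdf", w, b))  # True
```

In this setup the discriminator's probability output doubles as the sentence score used to rank generations.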
no code implementations • 29 Sep 2021 • Qibin Li, Nianmin Yao, Jian Zhao, Yanan Zhang
Building on the standard attention mechanism, multi-scale fusion self-attention extracts phrase information at multiple scales by applying convolution kernels of different sizes and computing a separate attention matrix at each scale, allowing the model to better capture phrase-level information.
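The mechanism can be sketched as follows; the kernel sizes, the mean-pooling stand-in for learned convolutions, and the averaging fusion rule are all assumptions for illustration, not the paper's exact design:

```python
import numpy as np

# Hypothetical sketch of multi-scale self-attention: phrase features are
# extracted at several window sizes, an attention matrix is computed per
# scale, and the scales are fused by averaging. All details are
# illustrative assumptions.

def conv1d(x, kernel_size):
    """Same-padded mean-pooling 'convolution': each position becomes the
    average of a kernel_size window (a stand-in for learned filters)."""
    seq_len, _ = x.shape
    pad = kernel_size // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))
    return np.stack([xp[i:i + kernel_size].mean(axis=0)
                     for i in range(seq_len)])

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def multi_scale_self_attention(x, kernel_sizes=(1, 3, 5)):
    """One attention matrix per scale, fused, then applied to the values."""
    d = x.shape[1]
    attns = []
    for k in kernel_sizes:
        phrases = conv1d(x, k)                   # phrase features at scale k
        scores = phrases @ phrases.T / np.sqrt(d)
        attns.append(softmax(scores))            # attention matrix at scale k
    fused = np.mean(attns, axis=0)               # simple fusion across scales
    return fused @ x                             # attended representation

x = np.random.default_rng(0).normal(size=(6, 8))  # 6 tokens, dim 8
out = multi_scale_self_attention(x)
print(out.shape)  # (6, 8)
```

Kernel size 1 recovers ordinary token-level self-attention, while the larger kernels attend over phrase-level windows.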
no code implementations • 29 Sep 2021 • Ning Gong, Nianmin Yao
Concretely, we propose a generalized decoding framework that can describe and connect existing popular decoding algorithms.
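One common way to express such a framework, sketched here as an assumption about its shape rather than the paper's actual formulation, is a single decoding step parameterized by a logit transform, with greedy, temperature, and top-k decoding as instances:

```python
import numpy as np

# Hypothetical sketch of a generalized decoding step: every algorithm is
# a logit transform ("warp") followed by sampling from the renormalized
# distribution. The framework details are illustrative, not from the paper.

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def decode_step(logits, warp, rng):
    """One generalized decoding step: warp logits, then sample."""
    probs = softmax(warp(np.asarray(logits, dtype=float)))
    return int(rng.choice(len(probs), p=probs))

# Instances of the framework, expressed as logit warps:
def greedy(logits):
    out = np.full_like(logits, -np.inf)
    out[np.argmax(logits)] = 0.0       # all probability mass on the argmax
    return out

def temperature(t):
    return lambda logits: logits / t   # flatten (t > 1) or sharpen (t < 1)

def top_k(k):
    def warp(logits):
        cutoff = np.sort(logits)[-k]   # keep only the k largest logits
        return np.where(logits >= cutoff, logits, -np.inf)
    return warp

rng = np.random.default_rng(0)
logits = [2.0, 1.0, 0.5, -1.0]
print(decode_step(logits, greedy, rng))    # 0 (the argmax)
print(decode_step(logits, top_k(2), rng))  # 0 or 1 only
```

Writing each algorithm as a warp makes their relationships explicit: greedy decoding is the zero-temperature limit, and top-k is a hard truncation before sampling.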