no code implementations • CCL 2022 • Shuang Nie, Zheng Ye, Jun Qin, Jing Liu
"Common data augmentation methods for machine reading comprehension, such as back-translation, augment the passage or the question in isolation and ignore the relations within the passage–question–option triple. This paper therefore explores a data augmentation method that exploits these triple relations to filter passage sentences: by comparing the similarity of the passage with the question and with the options, it selects the passage sentences most closely tied to both. In addition, to widen the gap between the triples of different options, we adopt a regularized Dropout (R-Drop) strategy. Experimental results show an accuracy improvement of 3.8% on the RACE dataset."
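The sentence-filtering idea described above can be sketched with a simple similarity heuristic. This is not the paper's implementation; it is a minimal illustration assuming bag-of-words cosine similarity, whereas the paper's actual similarity measure and hyperparameters are unspecified here:

```python
from collections import Counter
import math

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    common = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in common)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def select_sentences(passage_sents, question, option, k=2):
    """Keep the k passage sentences most similar to question + option."""
    query = Counter((question + " " + option).lower().split())
    scored = [(cosine(Counter(s.lower().split()), query), s)
              for s in passage_sents]
    scored.sort(key=lambda x: x[0], reverse=True)
    return [s for _, s in scored[:k]]

sents = ["The cat sat on the mat.", "Dogs bark loudly.", "The mat was red."]
print(select_sentences(sents, "Where did the cat sit?", "on the mat", k=1))
```

In the paper, the selected sentences form augmented passages whose question–option pairing stays intact, which is what distinguishes this from passage-only or question-only augmentation.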
no code implementations • 3 May 2023 • Xuanang Chen, Ben He, Zheng Ye, Le Sun, Yingfei Sun
Additionally, current methods rely heavily on the use of a well-imitated surrogate NRM to guarantee the attack effect, which makes them difficult to use in practice.
no code implementations • 9 May 2022 • Ying Zhou, Xuanang Chen, Ben He, Zheng Ye, Le Sun
Knowledge graph completion (KGC) aims to infer missing knowledge triples based on known facts in a knowledge graph.
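As a point of reference for the task definition above, KGC models score candidate triples (head, relation, tail) and rank the missing element. The sketch below uses TransE, a classic baseline (not this paper's method), with toy untrained embeddings purely for illustration:

```python
import math

def transe_score(h, r, t):
    """TransE score: negative L2 distance ||h + r - t||; higher = more plausible."""
    return -math.sqrt(sum((hi + ri - ti) ** 2 for hi, ri, ti in zip(h, r, t)))

# Toy 2-d embeddings (illustrative values, not trained).
emb = {
    "Paris":      [0.9, 0.1],
    "France":     [1.0, 0.5],
    "Berlin":     [0.2, 0.8],
    "capital_of": [0.1, 0.4],
}

# Rank candidate tails for the incomplete triple (Paris, capital_of, ?).
for tail in ("France", "Berlin"):
    print(tail, transe_score(emb["Paris"], emb["capital_of"], emb[tail]))
```

A trained model would learn these embeddings so that known facts score highly, then complete the graph by ranking all entities as candidate tails.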
1 code implementation • ACL 2021 • Zheng Ye, Liucun Lu, Lishan Huang, Liang Lin, Xiaodan Liang
To address these limitations, we propose Quantifiable Dialogue Coherence Evaluation (QuantiDCE), a novel framework aiming to train a quantifiable dialogue coherence metric that can reflect the actual human rating standards.
no code implementations • 17 Apr 2021 • Xiaoyang Chen, Kai Hui, Ben He, Xianpei Han, Le Sun, Zheng Ye
BERT-based text ranking models have dramatically advanced the state-of-the-art in ad-hoc retrieval, wherein most models tend to consider individual query-document pairs independently.
1 code implementation • EMNLP 2020 • Lishan Huang, Zheng Ye, Jinghui Qin, Liang Lin, Xiaodan Liang
Capitalizing on the topic-level dialogue graph, we propose a new evaluation metric, GRADE, which stands for Graph-enhanced Representations for Automatic Dialogue Evaluation.
1 code implementation • 4 Feb 2020 • Jinghui Qin, Zheng Ye, Jianheng Tang, Xiaodan Liang
Target-guided open-domain conversation aims to proactively and naturally guide a dialogue agent or human to achieve specific goals, topics or keywords during open-ended conversations.
no code implementations • EMNLP 2016 • Adam Trischler, Zheng Ye, Xingdi Yuan, Kaheer Suleman
We present the EpiReader, a novel model for machine comprehension of text.
Ranked #7 on Question Answering on Children's Book Test
1 code implementation • ACL 2016 • Adam Trischler, Zheng Ye, Xingdi Yuan, Jing He, Phillip Bachman, Kaheer Suleman
The parallel hierarchy enables our model to compare the passage, question, and answer from a variety of trainable perspectives, as opposed to using a manually designed, rigid feature set.
Ranked #1 on Question Answering on MCTest-160