Search Results for author: Yuan Ni

Found 6 papers, 2 papers with code

Discovering Better Model Architectures for Medical Query Understanding

no code implementations • NAACL 2021 • Wei Zhu, Yuan Ni, Xiaoling Wang, Guotong Xie

In developing an online question-answering system for the medical domain, natural language inference (NLI) models play a central role in question matching and intention detection.

Natural Language Inference • Neural Architecture Search • +1
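
The abstract centers on NLI for question matching: given a user query and a candidate question, the model scores entailment between the pair. Below is a minimal sketch of that setup, assuming the Hugging Face transformers library and the public roberta-large-mnli checkpoint as a stand-in; the paper's medical-domain model and data are not reproduced here.

```python
# Sketch: sentence-pair NLI scoring for question matching.
# Assumes the public "roberta-large-mnli" checkpoint, not the paper's model.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")
model.eval()

premise = "What are the side effects of ibuprofen?"      # illustrative query
hypothesis = "Does ibuprofen cause stomach pain?"        # candidate question

# Encode the question pair and score contradiction/neutral/entailment.
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
probs = logits.softmax(dim=-1).squeeze()
for label, p in zip(model.config.id2label.values(), probs.tolist()):
    print(f"{label}: {p:.3f}")
```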

AutoTrans: Automating Transformer Design via Reinforced Architecture Search

3 code implementations • 4 Sep 2020 • Wei Zhu, Xiaoling Wang, Xipeng Qiu, Yuan Ni, Guotong Xie

Though transformer architectures have shown dominance in many natural language understanding tasks, there are still unsolved issues in training transformer models, notably the need for a principled warm-up schedule, which has proven important for stable training, and the question of whether a given task prefers a scaled attention product or not.

Natural Language Understanding
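
The two design choices the abstract highlights are concrete and easy to isolate. The following sketch (not the AutoTrans search procedure itself, which the paper's code implements) shows attention with the 1/sqrt(d_k) scaling made an explicit option, alongside a linear warm-up learning-rate schedule; all names and hyperparameter values here are illustrative.

```python
# Sketch: the two knobs the abstract discusses, in isolation.
import math
import torch

def dot_product_attention(q, k, v, scale: bool = True):
    """Attention with the 1/sqrt(d_k) scaling exposed as a choice."""
    scores = q @ k.transpose(-2, -1)
    if scale:
        scores = scores / math.sqrt(q.size(-1))
    return scores.softmax(dim=-1) @ v

def warmup_lr(step: int, base_lr: float = 1e-3, warmup_steps: int = 4000):
    """Linearly ramp the learning rate over the first warmup_steps updates."""
    return base_lr * min(1.0, step / warmup_steps)

q = k = v = torch.randn(2, 8, 16, 64)  # (batch, heads, seq, head_dim)
out_scaled = dot_product_attention(q, k, v, scale=True)
out_raw = dot_product_attention(q, k, v, scale=False)
print(out_scaled.shape, warmup_lr(step=100))
```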

Pingan Smart Health and SJTU at COIN - Shared Task: utilizing Pre-trained Language Models and Common-sense Knowledge in Machine Reading Tasks

no code implementations • WS 2019 • Xiepeng Li, Zhexi Zhang, Wei Zhu, Zheng Li, Yuan Ni, Peng Gao, Junchi Yan, Guotong Xie

We have experimented with both (a) improving the fine-tuning of pre-trained language models on a task with a small dataset, by leveraging datasets of similar tasks; and (b) incorporating the distributional representations of a knowledge graph (KG) into the representations of pre-trained language models, via simple concatenation or multi-head attention.

Common Sense Reasoning • Machine Reading Comprehension • +1
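
The abstract names two fusion strategies for KG representations: simple concatenation and multi-head attention. Here is a minimal sketch of both under one module; the dimensions, pooling choice, and class name are illustrative assumptions, not the paper's implementation.

```python
# Sketch: fusing KG entity embeddings with LM token representations,
# either by concatenation or by multi-head attention. Dims are illustrative.
import torch
import torch.nn as nn

class KGFusion(nn.Module):
    def __init__(self, lm_dim=768, kg_dim=100, mode="concat"):
        super().__init__()
        self.mode = mode
        if mode == "concat":
            # Project the concatenated vector back to the LM hidden size.
            self.proj = nn.Linear(lm_dim + kg_dim, lm_dim)
        else:
            # Let LM tokens attend over the KG entity embeddings.
            self.kg_proj = nn.Linear(kg_dim, lm_dim)
            self.attn = nn.MultiheadAttention(lm_dim, num_heads=8,
                                              batch_first=True)

    def forward(self, lm_hidden, kg_emb):
        # lm_hidden: (batch, seq, lm_dim); kg_emb: (batch, n_entities, kg_dim)
        if self.mode == "concat":
            # Pool KG embeddings and append them to every token.
            kg = kg_emb.mean(dim=1, keepdim=True)
            kg = kg.expand(-1, lm_hidden.size(1), -1)
            return self.proj(torch.cat([lm_hidden, kg], dim=-1))
        kg = self.kg_proj(kg_emb)
        fused, _ = self.attn(lm_hidden, kg, kg)
        return lm_hidden + fused

fusion = KGFusion(mode="attention")
out = fusion(torch.randn(2, 32, 768), torch.randn(2, 5, 100))
print(out.shape)  # torch.Size([2, 32, 768])
```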

PANLP at MEDIQA 2019: Pre-trained Language Models, Transfer Learning and Knowledge Distillation

no code implementations • WS 2019 • Wei Zhu, Xiaofeng Zhou, Keqiang Wang, Xun Luo, Xiepeng Li, Yuan Ni, Guotong Xie

Transfer learning from the NLI task to the RQE task is also explored, and proves useful in improving the results of fine-tuning MT-DNN-large.

Knowledge Distillation • Re-Ranking • +1
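
Knowledge distillation, named in the title, has a standard objective: a student matches the teacher's temperature-softened logits while also fitting the gold labels. The sketch below follows the canonical Hinton et al. formulation; the temperature and weighting are assumptions, and the paper's exact setup may differ.

```python
# Sketch: standard knowledge-distillation loss (soft + hard targets).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    # Soft targets: KL between temperature-softened distributions,
    # scaled by T^2 to keep gradient magnitudes comparable.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    # Hard targets: ordinary cross-entropy on the gold labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

loss = distillation_loss(torch.randn(4, 3), torch.randn(4, 3),
                         torch.tensor([0, 2, 1, 0]))
print(loss.item())
```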
