no code implementations • ACL 2021 • Dongqin Xu, Junhui Li, Muhua Zhu, Min Zhang, Guodong Zhou
We hope that knowledge gained from learning English AMR parsing and text generation can be transferred to the same tasks in other languages.
no code implementations • 16 Jul 2021 • Pengju Zhang, Yonghui Jia, Muhua Zhu, Wenliang Chen, Min Zhang
Previous work on encoding questions has mainly focused on word sequences and seldom considered the information in syntactic trees. In this paper, we propose an approach to learning syntax-based representations for KBQA.
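For illustration, a minimal sketch (not the paper's method) of one way to expose dependency-tree structure to a question encoder: each token is paired with its head word and dependency relation, so a standard sequence encoder sees syntax alongside the raw words. The question and its parse are hand-built here.

```python
# (token, head_index, relation); head_index -1 marks the root.
question = [
    ("who", 1, "nsubj"),
    ("directed", -1, "root"),
    ("titanic", 1, "obj"),
]

def syntax_augmented_tokens(parsed):
    """Turn each token into a word|head_word|relation feature string."""
    tokens = []
    for word, head, rel in parsed:
        head_word = parsed[head][0] if head >= 0 else "<ROOT>"
        tokens.append(f"{word}|{head_word}|{rel}")
    return tokens

print(syntax_augmented_tokens(question))
# ['who|directed|nsubj', 'directed|<ROOT>|root', 'titanic|directed|obj']
```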
1 code implementation • EMNLP 2020 • Dongqin Xu, Junhui Li, Muhua Zhu, Min Zhang, Guodong Zhou
In the literature, research on abstract meaning representation (AMR) parsing has been much restricted by the size of the human-curated datasets that are critical to building an AMR parser with good performance.
Ranked #15 on AMR Parsing on LDC2017T10 (using extra training data)
1 code implementation • ACL 2020 • Ning Ding, Dingkun Long, Guangwei Xu, Muhua Zhu, Pengjun Xie, Xiaobin Wang, Hai-Tao Zheng
To alleviate these two issues simultaneously, this paper proposes coupling distant annotation with adversarial training for cross-domain Chinese word segmentation (CWS).
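For illustration, a minimal sketch of adversarial training via a gradient reversal layer (the standard DANN trick); the paper's exact adversarial setup for cross-domain CWS may differ, and all module names below are illustrative.

```python
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse gradients so the shared encoder learns domain-invariant
        # features that fool the domain classifier.
        return -ctx.lam * grad_output, None

features = torch.randn(4, 128, requires_grad=True)  # shared encoder output
domain_clf = torch.nn.Linear(128, 2)                # source vs. target domain
logits = domain_clf(GradReverse.apply(features, 1.0))
loss = torch.nn.functional.cross_entropy(logits, torch.tensor([0, 0, 1, 1]))
loss.backward()  # encoder gradients are reversed; the classifier's are not
```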
1 code implementation • IJCNLP 2019 • Jie Zhu, Junhui Li, Muhua Zhu, Longhua Qian, Min Zhang, Guodong Zhou
Recent studies on AMR-to-text generation often formalize the task as a sequence-to-sequence (seq2seq) learning problem by converting an Abstract Meaning Representation (AMR) graph into a word sequence.
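For illustration, a minimal sketch of the graph-to-sequence preprocessing this snippet refers to: a depth-first linearization of an AMR graph in PENMAN notation into a token sequence a seq2seq model can consume. The exact linearization scheme is an assumption, not taken from the paper.

```python
import re

amr = "(w / want-01 :ARG0 (b / boy) :ARG1 (g / go-02 :ARG0 b))"

def linearize(penman):
    """Strip variables ('w /') and tokenize brackets, roles, concepts."""
    no_vars = re.sub(r"\b\w+\s*/\s*", "", penman)   # drop 'x / '
    return re.findall(r"\(|\)|:[\w-]+|[\w-]+", no_vars)

print(linearize(amr))
# ['(', 'want-01', ':ARG0', '(', 'boy', ')', ':ARG1',
#  '(', 'go-02', ':ARG0', 'b', ')', ')']
# Note: the re-entrant variable 'b' survives as a plain token.
```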
1 code implementation • 30 Mar 2018 • Yu Gong, Xusheng Luo, Yu Zhu, Wenwu Ou, Zhao Li, Muhua Zhu, Kenny Q. Zhu, Lu Duan, Xi Chen
Slot filling is a critical task in natural language understanding (NLU) for dialog systems.
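For illustration, a minimal sketch of what slot filling produces: one BIO tag per token, from which slot values are recovered. The utterance and slot names are invented, not drawn from the paper's data.

```python
utterance = ["show", "me", "red", "nike", "running", "shoes"]
bio_tags  = ["O", "O", "B-color", "B-brand", "B-category", "I-category"]

slots = {}
for token, tag in zip(utterance, bio_tags):
    if tag.startswith("B-"):            # begin a new slot value
        slots.setdefault(tag[2:], []).append(token)
    elif tag.startswith("I-"):          # continue the previous value
        slots[tag[2:]][-1] += " " + token

print(slots)
# {'color': ['red'], 'brand': ['nike'], 'category': ['running shoes']}
```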
no code implementations • 31 May 2017 • Junhui Li, Muhua Zhu
In the past few years, attention mechanisms have become an indispensable component of end-to-end neural machine translation models.
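For illustration, a minimal NumPy sketch of the attention computation such models rely on: a softmax over scores between the current decoder state and the encoder states yields a context vector. Scaled dot-product scoring is used here; the paper may target a different variant.

```python
import numpy as np

def attention(query, keys, values):
    """query: (d,); keys/values: (T, d) encoder states."""
    scores = keys @ query / np.sqrt(query.shape[0])   # (T,) alignment scores
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                          # softmax over source
    return weights @ values                           # context vector (d,)

rng = np.random.default_rng(0)
enc_states = rng.normal(size=(5, 8))   # 5 source tokens, dim 8
dec_state = rng.normal(size=8)         # current decoder state
context = attention(dec_state, enc_states, enc_states)
print(context.shape)  # (8,)
```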
no code implementations • ACL 2017 • Junhui Li, Deyi Xiong, Zhaopeng Tu, Muhua Zhu, Min Zhang, Guodong Zhou
Even though a linguistics-free sequence-to-sequence model for neural machine translation (NMT) has some capability to implicitly learn syntactic information about source sentences, this paper shows that source syntax can be explicitly and effectively incorporated into NMT to yield further improvements.
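For illustration, a minimal sketch of one common way to inject source syntax into a seq2seq NMT model: linearize the constituency parse and interleave bracketed labels with the words, so the encoder reads syntax as extra tokens. The tree and scheme below are illustrative; the paper evaluates its own integration strategies.

```python
# Hand-built constituency tree: (label, children), leaves are strings.
tree = ("S", [("NP", [("PRP", ["I"])]),
              ("VP", [("VBP", ["love"]), ("NP", [("NN", ["music"])])])])

def linearize(node):
    """Depth-first traversal emitting '(LABEL ... )' bracket tokens."""
    label, children = node
    tokens = [f"({label}"]
    for child in children:
        tokens += [child] if isinstance(child, str) else linearize(child)
    return tokens + [")"]

print(" ".join(linearize(tree)))
# (S (NP (PRP I ) ) (VP (VBP love ) (NP (NN music ) ) ) )
```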