no code implementations • 17 Dec 2022 • Jifan Chen, Yuhao Zhang, Lan Liu, Rui Dong, Xinchi Chen, Patrick Ng, William Yang Wang, Zhiheng Huang
There has been great progress in unifying various table-to-text tasks using a single encoder-decoder model trained via multi-task learning (Xie et al., 2022).
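Unified models in this line of work typically cast every table task as text-to-text by flattening the table into a single input string. As a rough, hypothetical sketch of such an input linearization (the exact format used by any particular model may differ):

```python
def linearize_table(task, question, header, rows):
    """Flatten a table-to-text example into one seq2seq input string.
    The 'task: ... question: ... table: ...' layout is illustrative,
    not the format of any specific published model."""
    cells = " ".join("row {} : {}".format(i + 1, " | ".join(r))
                     for i, r in enumerate(rows))
    return "{} question: {} table: col : {} {}".format(
        task, question, " | ".join(header), cells)
```

A single encoder-decoder can then be trained on many such tasks at once, distinguished only by the leading task token.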
1 code implementation • 12 Oct 2022 • Xiyang Hu, Xinchi Chen, Peng Qi, Deguang Kong, Kunlun Liu, William Yang Wang, Zhiheng Huang
Multilingual information retrieval (IR) is challenging since annotated training data is costly to obtain in many languages.
1 code implementation • Findings (NAACL) 2022 • Danilo Ribeiro, Shen Wang, Xiaofei Ma, Rui Dong, Xiaokai Wei, Henry Zhu, Xinchi Chen, Zhiheng Huang, Peng Xu, Andrew Arnold, Dan Roth
Our model is able to explain a given hypothesis by systematically generating a step-by-step explanation from textual premises.
no code implementations • Findings (EMNLP) 2021 • Peng Xu, Xinchi Chen, Xiaofei Ma, Zhiheng Huang, Bing Xiang
In this work, we propose to use a graph attention network on top of available pretrained Transformer models to learn document embeddings.
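The core operation of a graph attention layer — each node recomputing its feature as a softmax-weighted average of its neighbours' features — can be sketched in a few lines of pure Python (the paper's actual layer operates on Transformer representations and is considerably more elaborate):

```python
import math

def graph_attention_layer(node_feats, edges):
    """One simplified graph-attention step with dot-product scores.
    Illustrative only: real GAT layers use learned projections and
    multiple attention heads."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    updated = []
    for i, feat in enumerate(node_feats):
        neighbours = [b if a == i else a for a, b in edges if i in (a, b)]
        if not neighbours:
            updated.append(list(feat))  # isolated node keeps its feature
            continue
        scores = [dot(feat, node_feats[j]) for j in neighbours]
        m = max(scores)  # subtract max for numerical stability
        weights = [math.exp(s - m) for s in scores]
        total = sum(weights)
        updated.append([
            sum(w / total * node_feats[j][k]
                for w, j in zip(weights, neighbours))
            for k in range(len(feat))
        ])
    return updated
```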
1 code implementation • IJCNLP 2019 • Xinchi Chen, Chunchuan Lyu, Ivan Titov
In every network layer, the capsules interact with each other and with representations of words in the sentence.
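Capsule interaction is usually implemented as routing-by-agreement: coupling weights between lower and higher capsules are refined over a few iterations so that predictions agreeing with the output are weighted up. A generic sketch of that loop for a single output capsule (not the paper's exact formulation):

```python
import math

def squash(v):
    """Scale a vector so its norm lies in [0, 1), preserving direction."""
    n2 = sum(x * x for x in v)
    scale = n2 / (1.0 + n2) / math.sqrt(n2) if n2 > 0 else 0.0
    return [scale * x for x in v]

def route(predictions, n_iters=3):
    """Routing-by-agreement: predictions[i] is lower capsule i's
    prediction vector for the output capsule. Illustrative sketch."""
    logits = [0.0] * len(predictions)
    out = None
    for _ in range(n_iters):
        m = max(logits)
        exps = [math.exp(b - m) for b in logits]
        z = sum(exps)
        coupling = [e / z for e in exps]          # softmax over logits
        s = [sum(c * p[k] for c, p in zip(coupling, predictions))
             for k in range(len(predictions[0]))]
        out = squash(s)
        # Agreement step: predictions aligned with the output gain weight
        logits = [b + sum(o * pk for o, pk in zip(out, p))
                  for b, p in zip(logits, predictions)]
    return out
```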
no code implementations • 19 Dec 2018 • Jingjing Gong, Xinchi Chen, Tao Gui, Xipeng Qiu
With these auto-switched LSTMs, our model provides a more flexible solution for multi-criteria CWS and also makes it easy to transfer the learned knowledge to new criteria.
no code implementations • EMNLP 2018 • Jingjing Gong, Xipeng Qiu, Xinchi Chen, Dong Liang, Xuanjing Huang
Attention-based neural models have achieved great success in natural language inference (NLI).
no code implementations • 23 Aug 2018 • Junkun Chen, Kaiyu Chen, Xinchi Chen, Xipeng Qiu, Xuanjing Huang
Designing a shared neural architecture plays an important role in multi-task learning.
3 code implementations • 30 Apr 2018 • Zhan Shi, Xinchi Chen, Xipeng Qiu, Xuanjing Huang
Similar to the adversarial models, the reward and policy function in IRL are optimized alternately.
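That alternation can be pictured on a toy single-state problem: the policy is derived from the current reward, and the reward is then nudged toward the expert's action frequencies and away from the policy's own action probabilities (a max-entropy-style update; all names and details here are illustrative, not the paper's algorithm):

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    z = sum(es)
    return [e / z for e in es]

def irl_alternate(expert_actions, n_actions, steps=200, lr=0.1):
    """Toy alternating IRL loop over a single state:
    - policy step: policy = softmax of the current reward;
    - reward step: move the reward toward expert frequencies and
      away from the current policy's probabilities."""
    expert_freq = [0.0] * n_actions
    for a in expert_actions:
        expert_freq[a] += 1.0 / len(expert_actions)
    reward = [0.0] * n_actions
    for _ in range(steps):
        policy = softmax(reward)                          # policy step
        reward = [r + lr * (e - p)                        # reward step
                  for r, e, p in zip(reward, expert_freq, policy)]
    return softmax(reward)
```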
no code implementations • 2 Jul 2017 • Xinchi Chen, Zhan Shi, Xipeng Qiu, Xuanjing Huang
In this paper, we propose a new neural model to incorporate the word-level information for Chinese word segmentation.
no code implementations • ACL 2017 • Xinchi Chen, Zhan Shi, Xipeng Qiu, Xuanjing Huang
Different linguistic perspectives lead to many diverse segmentation criteria for Chinese word segmentation (CWS).
no code implementations • 16 Nov 2016 • Xinchi Chen, Xipeng Qiu, Xuanjing Huang
Recently, neural network models for natural language processing tasks have received increasing attention for their ability to alleviate the burden of manual feature engineering.
no code implementations • 15 Nov 2016 • Jingjing Gong, Xinchi Chen, Xipeng Qiu, Xuanjing Huang
However, it is nontrivial for pair-wise models to incorporate contextual sentence information.
no code implementations • 23 Jul 2016 • Xinchi Chen, Xipeng Qiu, Xuanjing Huang
Sentence ordering is a general and critical task for natural language generation applications.
no code implementations • 19 Nov 2015 • Xinchi Chen, Xipeng Qiu, Jingxiang Jiang, Xuanjing Huang
In this paper, we propose the Gaussian mixture skip-gram (GMSG) model to learn Gaussian mixture embeddings for words based on the skip-gram framework.
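To give a feel for Gaussian mixture embeddings, here is a sketch of words represented as mixtures of diagonal Gaussians and compared via the closed-form expected-likelihood kernel (a similarity commonly paired with such embeddings in the literature; the paper's exact objective may differ):

```python
import math

def gaussian_overlap(mu1, var1, mu2, var2):
    """Closed-form inner product of two diagonal Gaussians:
    the integral of N(x; mu1, var1) * N(x; mu2, var2) dx."""
    out = 1.0
    for m1, v1, m2, v2 in zip(mu1, var1, mu2, var2):
        v = v1 + v2
        out *= math.exp(-(m1 - m2) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)
    return out

def mixture_similarity(word_a, word_b):
    """Expected-likelihood similarity between two words, each given as a
    list of (weight, mean, variance) mixture components. A sketch of the
    idea behind Gaussian mixture embeddings, not the GMSG objective."""
    return sum(wa * wb * gaussian_overlap(ma, va, mb, vb)
               for wa, ma, va in word_a
               for wb, mb, vb in word_b)
```

A multi-component mixture lets one word cover several senses: a component near one neighbour and another component elsewhere both contribute to the similarity.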
no code implementations • IJCNLP 2015 • Chenxi Zhu, Xipeng Qiu, Xinchi Chen, Xuanjing Huang
In this work, we address the problem of modeling all the nodes (words or phrases) in a dependency tree with dense representations.