no code implementations • 17 Jun 2022 • Bin Wang, Jiangzhou Ju, Yunlin Mao, Xin-yu Dai, ShuJian Huang, Jiajun Chen
Here, we propose a numerical reasoning question answering system to answer numerical reasoning questions among financial text and table data sources, consisting of a retriever module, a generator module, and an ensemble module.
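The retriever–generator–ensemble pipeline described above can be sketched as follows. The module implementations here (keyword-overlap retrieval, a trivial "generator" that adds the last number in each retrieved fact, and majority-vote ensembling) are illustrative stand-ins, not the paper's actual models:

```python
from collections import Counter

def retrieve(question, facts, k=2):
    """Stand-in retriever: rank text/table facts by word overlap with the question."""
    q = set(question.lower().split())
    scored = sorted(facts, key=lambda f: -len(q & set(f.lower().split())))
    return scored[:k]

def generate(question, evidence):
    """Stand-in generator: add the last number mentioned in each evidence fact."""
    total = 0.0
    for f in evidence:
        nums = [float(t) for t in f.replace(",", "").split()
                if t.replace(".", "", 1).isdigit()]
        if nums:
            total += nums[-1]
    return total

def ensemble(answers):
    """Ensemble module: majority vote over answers from several generator runs."""
    return Counter(answers).most_common(1)[0][0]

facts = ["revenue in 2020 was 120", "revenue in 2021 was 150", "the CEO resigned"]
question = "what was the total revenue in 2020 and 2021"
evidence = retrieve(question, facts)
answer = ensemble([generate(question, evidence) for _ in range(3)])
```

In the real system each stage is a learned model; the point of the sketch is only the data flow between the three modules.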
1 code implementation • 29 Mar 2022 • Zhifang Fan, Dan Ou, Yulong Gu, Bairan Fu, Xiang Li, Wentian Bao, Xin-yu Dai, Xiaoyi Zeng, Tao Zhuang, Qingwen Liu
In this paper, we propose a new perspective for context-aware users' behavior modeling by including the whole page-wisely exposed products and the corresponding feedback as contextualized page-wise feedback sequence.
1 code implementation • COLING 2020 • Siyu Long, Ran Wang, Kun Tao, Jiali Zeng, Xin-yu Dai
Machine reading comprehension (MRC) is the task that asks a machine to answer questions based on a given context.
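The task format can be illustrated with a toy extractive reader that just returns the context sentence with the most question-word overlap (a deliberately naive baseline, not the paper's model):

```python
def answer(question, context):
    """Toy extractive reader: return the context sentence that best matches the question."""
    q = set(question.lower().rstrip("?").split())
    sentences = [s.strip() for s in context.split(".") if s.strip()]
    return max(sentences, key=lambda s: len(q & set(s.lower().split())))

context = ("The Loire is the longest river in France. "
           "It flows into the Atlantic Ocean.")
result = answer("Which river is the longest in France?", context)
```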
no code implementations • COLING 2022 • Zewei Sun, Shu-Jian Huang, Xin-yu Dai, Jia-Jun Chen
Recent studies show that the attention heads in Transformer are not equal.
Ranked #1 on Machine Translation on WMT2017 Turkish-English
1 code implementation • 4 Aug 2020 • Robert Ridley, Liang He, Xin-yu Dai, Shu-Jian Huang, Jia-Jun Chen
Cross-prompt automated essay scoring (AES) requires the system to use non-target-prompt essays to award scores to target-prompt essays.
1 code implementation • ACL 2020 • Yawen Ouyang, Moxin Chen, Xin-yu Dai, Yinggong Zhao, Shu-Jian Huang, Jia-Jun Chen
Recent proposed approaches have made promising progress in dialogue state tracking (DST).
no code implementations • ACL 2020 • Jiahuan Li, Yu Bao, Shu-Jian Huang, Xin-yu Dai, Jia-Jun Chen
Definition generation, which aims to automatically generate dictionary definitions for words, has recently been proposed to assist the construction of dictionaries and help people understand unfamiliar texts.
no code implementations • ICLR 2020 • Zaixiang Zheng, Hao Zhou, Shu-Jian Huang, Lei LI, Xin-yu Dai, Jia-Jun Chen
Training neural machine translation models (NMT) requires a large amount of parallel corpus, which is scarce for many language pairs.
1 code implementation • 5 Apr 2020 • Xingyuan Chen, Ping Cai, Peng Jin, Hongjun Wang, Xin-yu Dai, Jia-Jun Chen
To alleviate exposure bias, generative adversarial networks (GANs) use the discriminator to update the generator's parameters directly, but they fall short because generated text cannot be evaluated precisely.
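The idea of updating a generator directly from discriminator feedback can be shown with a tiny REINFORCE loop. This is a toy: the "generator" is a single categorical distribution and the "discriminator" is a fixed scorer that rewards one target token, not the models from the paper:

```python
import math
import random

random.seed(0)
vocab = ["a", "b", "c"]
logits = [0.0, 0.0, 0.0]   # generator parameters: one categorical over the vocab

def probs():
    z = [math.exp(l) for l in logits]
    s = sum(z)
    return [x / s for x in z]

def discriminator(token):
    """Toy discriminator: only the token 'b' looks 'real'."""
    return 1.0 if token == "b" else 0.0

lr = 0.5
for _ in range(200):
    p = probs()
    i = random.choices(range(3), weights=p)[0]    # sample a token
    reward = discriminator(vocab[i])              # discriminator scores the sample
    for j in range(3):                            # REINFORCE: reward * grad log p
        grad = (1.0 if j == i else 0.0) - p[j]
        logits[j] += lr * reward * grad
```

After training, the generator concentrates its probability mass on the rewarded token; the imprecision the abstract refers to arises because real discriminators give noisy, sequence-level rewards rather than this clean per-token signal.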
no code implementations • 2 Apr 2020 • Ran Wang, Kun Tao, Dingjie Song, Zhilong Zhang, Xiao Ma, Xi'ao Su, Xin-yu Dai
Existing question answering systems can only predict answers without explicit reasoning processes, which hinders their explainability and may lead us to overestimate their ability to understand and reason over natural language.
1 code implementation • 7 Jan 2020 • Zhen Wu, Fei Zhao, Xin-yu Dai, Shu-Jian Huang, Jia-Jun Chen
In this paper, we propose a novel model to transfer these opinions knowledge from resource-rich review sentiment classification datasets to low-resource task TOWE.
Aspect-oriented Opinion Extraction • General Classification +3
no code implementations • 21 Nov 2019 • Zewei Sun, Shu-Jian Huang, Hao-Ran Wei, Xin-yu Dai, Jia-Jun Chen
Experiments also show that back-translation with these diverse translations can bring significant performance improvements on translation tasks.
1 code implementation • ACL 2020 • Wei Zou, Shu-Jian Huang, Jun Xie, Xin-yu Dai, Jia-Jun Chen
Despite their significant efficacy, neural machine translation systems tend to fail on less well-formed inputs, which may seriously harm their credibility; fathoming how and when neural-based systems fail in such cases is critical for industrial maintenance.
no code implementations • 9 Nov 2019 • Zhen Cheng, Zaixiang Zheng, Xin-yu Dai, Shu-Jian Huang, Jia-Jun Chen
Intuitively, NLI should rely more on multiple perspectives to form a holistic view to eliminate bias.
1 code implementation • 27 Oct 2019 • Danhao Zhu, Xin-yu Dai, Jia-Jun Chen
Graph neural networks (GNNs) have shown great power in learning on attributed graphs.
no code implementations • IJCNLP 2019 • Huiyun Yang, Shu-Jian Huang, Xin-yu Dai, Jia-Jun Chen
In sequence labeling, previous domain adaptation methods focus on the adaptation from the source domain to the entire target domain without considering the diversity of individual target domain samples, which may lead to negative transfer results for certain samples.
no code implementations • IJCNLP 2019 • Zixian Huang, Yulin Shen, Xiao Li, Yuang Wei, Gong Cheng, Lin Zhou, Xin-yu Dai, Yuzhong Qu
Scenario-based question answering (SQA) has attracted increasing research attention.
1 code implementation • ACL 2019 • Yu Bao, Hao Zhou, Shu-Jian Huang, Lei LI, Lili Mou, Olga Vechtomova, Xin-yu Dai, Jia-Jun Chen
In this paper, we propose to generate sentences from disentangled syntactic and semantic spaces.
1 code implementation • NAACL 2019 • Zhifang Fan, Zhen Wu, Xin-yu Dai, Shu-Jian Huang, Jia-Jun Chen
In this paper, we propose a novel sequence labeling subtask for ABSA named TOWE (Target-oriented Opinion Words Extraction), which aims at extracting the corresponding opinion words for a given opinion target.
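As an illustration of the task format, TOWE can be cast as BIO tagging relative to a given target. The tags below are hand-written for the example sentence, not model output:

```python
def decode_opinion_words(tokens, tags):
    """Collect opinion-word spans from BIO tags produced w.r.t. a given opinion target."""
    spans, current = [], []
    for tok, tag in zip(tokens, tags):
        if tag == "B":
            if current:
                spans.append(" ".join(current))
            current = [tok]
        elif tag == "I" and current:
            current.append(tok)
        else:
            if current:
                spans.append(" ".join(current))
            current = []
    if current:
        spans.append(" ".join(current))
    return spans

# Sentence: "The screen is very clear but the battery life is short."
# With opinion target "screen", only "very clear" is tagged as its opinion span.
tokens = ["The", "screen", "is", "very", "clear", "but", "the",
          "battery", "life", "is", "short"]
tags_for_screen = ["O", "O", "O", "B", "I", "O", "O", "O", "O", "O", "O"]
opinions = decode_opinion_words(tokens, tags_for_screen)
```

Note that the tag sequence is target-dependent: for the target "battery life" the same sentence would instead mark "short".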
Aspect-Based Sentiment Analysis • Aspect-oriented Opinion Extraction +2
no code implementations • NAACL 2019 • Hao-Ran Wei, Shu-Jian Huang, Ran Wang, Xin-yu Dai, Jia-Jun Chen
Our method generates a teacher model on the fly from checkpoints, guiding the training process toward better performance.
no code implementations • NAACL 2019 • Kaijia Yang, Liang He, Xin-yu Dai, Shu-Jian Huang, Jia-Jun Chen
Distant supervision has made great progress on the relation classification task.
1 code implementation • 30 May 2019 • Xingyuan Chen, Yanzhe Li, Peng Jin, Jiuhua Zhang, Xin-yu Dai, Jia-Jun Chen, Gang Song
It is easy to improve the existing GAN-based models with this mechanism.
1 code implementation • IJCNLP 2019 • Zaixiang Zheng, Shu-Jian Huang, Zhaopeng Tu, Xin-yu Dai, Jia-Jun Chen
Previous studies have shown that neural machine translation (NMT) models can benefit from explicitly separating translated (Past) and untranslated (Future) contents, which can be modeled by assigning source contents to the two groups through parts-to-wholes assignment.
no code implementations • 24 Oct 2018 • Zaixiang Zheng, Shu-Jian Huang, Zewei Sun, Rongxiang Weng, Xin-yu Dai, Jia-Jun Chen
Previous studies show that incorporating external information could improve the translation quality of Neural Machine Translation (NMT) systems.
no code implementations • NAACL 2018 • Huadong Chen, Shu-Jian Huang, David Chiang, Xin-yu Dai, Jia-Jun Chen
Natural language sentences, being hierarchical, can be represented at different levels of granularity, like words, subwords, or characters.
no code implementations • 26 Mar 2018 • Guang-Neng Hu, Xin-yu Dai, Feng-Yu Qiu, Rui Xia, Tao Li, Shu-Jian Huang, Jia-Jun Chen
First, we propose a novel model MR3 to jointly model three sources of information (i.e., ratings, item reviews, and social relations) effectively for rating prediction by aligning the latent factors and hidden topics.
1 code implementation • 24 Jan 2018 • Zhen Wu, Xin-yu Dai, Cunyan Yin, Shu-Jian Huang, Jia-Jun Chen
Recently, some works have achieved improvements by incorporating user and product information to generate review representations.
Ranked #3 on Sentiment Analysis on User and product information
1 code implementation • TACL 2018 • Zaixiang Zheng, Hao Zhou, Shu-Jian Huang, Lili Mou, Xin-yu Dai, Jia-Jun Chen, Zhaopeng Tu
The Past and Future contents are fed to both the attention model and the decoder states, which offers NMT systems the knowledge of translated and untranslated contents.
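The bookkeeping behind Past and Future contents can be sketched with plain vectors. This is a simplified subtractive variant (the paper learns RNN-style updates); the shapes and the fixed source summary here are illustrative assumptions:

```python
def update_past_future(source_summary, past, attn_context):
    """After a decoding step, fold the attended context into Past and
    treat the remainder of the source summary as Future."""
    new_past = [p + c for p, c in zip(past, attn_context)]
    future = [s - p for s, p in zip(source_summary, new_past)]
    return new_past, future

source_summary = [1.0, 1.0]     # summary of the full source sentence
past = [0.0, 0.0]               # nothing translated yet
# one decoding step attends mostly to the first source dimension
past, future = update_past_future(source_summary, past, [0.4, 0.1])
```

Feeding `past` and `future` to the attention model and decoder states is what gives the system explicit knowledge of what has and has not been translated.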
no code implementations • LREC 2018 • Zi-Yi Dou, Hao Zhou, Shu-Jian Huang, Xin-yu Dai, Jia-Jun Chen
However, there are certain limitations in Scheduled Sampling and we propose two dynamic oracle-based methods to improve it.
no code implementations • EMNLP 2017 • Hao Zhou, Zhenting Yu, Yue Zhang, Shu-Jian Huang, Xin-yu Dai, Jia-Jun Chen
Neural parsers have benefited from automatically labeled data via dependency-context word embeddings.
no code implementations • EMNLP 2017 • Rongxiang Weng, Shu-Jian Huang, Zaixiang Zheng, Xin-yu Dai, Jia-Jun Chen
In the encoder-decoder architecture for neural machine translation (NMT), the hidden states of the recurrent structures in the encoder and decoder carry the crucial information about the sentence. These vectors are generated by parameters which are updated by back-propagation of translation errors through time.
no code implementations • CONLL 2017 • Huadong Chen, Shu-Jian Huang, David Chiang, Xin-yu Dai, Jia-Jun Chen
We propose a listwise learning framework for structure prediction problems such as machine translation.
no code implementations • 3 May 2017 • Danhao Zhu, Si Shen, Xin-yu Dai, Jia-Jun Chen
Recurrent Neural Network (RNN) has been widely applied for sequence modeling.
no code implementations • 31 Jan 2017 • Guang-Neng Hu, Xin-yu Dai
On top of text features, we uncover the review dimensions that explain the variation in users' feedback; these review factors represent a prior preference of users.
no code implementations • LREC 2016 • Hao Zhou, Yue Zhang, Shu-Jian Huang, Xin-yu Dai, Jia-Jun Chen
Greedy transition-based parsers are appealing for their very fast speed, with reasonably high accuracies.
no code implementations • 11 Jan 2016 • Guang-Neng Hu, Xin-yu Dai, Yunya Song, Shu-Jian Huang, Jia-Jun Chen
Recommender systems (RSs) provide an effective way of alleviating the information overload problem by selecting personalized choices.
no code implementations • 28 Jun 2015 • Li-Qiang Niu, Xin-yu Dai
Latent Dirichlet Allocation (LDA), which mines the thematic structure of documents, plays an important role in natural language processing and machine learning.
no code implementations • IJCNLP 2015 • Shujian Huang, Huadong Chen, Xin-yu Dai, Jia-Jun Chen
The linear combination assumes that all the features are in a linear relationship and constrains each feature to interact with the rest in a linear manner, which might limit the expressive power of the model and lead to an under-fit model on the current data.
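The linear combination being critiqued is the standard log-linear scoring rule of statistical machine translation, where feature functions enter only through a weighted sum:

```latex
\mathrm{score}(e \mid f) \;=\; \sum_{i=1}^{N} \lambda_i \, h_i(f, e)
```

A non-linear model replaces this sum with a learned function of the features, e.g. $s\bigl(h_1(f,e), \dots, h_N(f,e)\bigr)$, allowing feature interactions beyond the linear form.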