no code implementations • COLING 2022 • Zewei Sun, Shu-Jian Huang, Xin-yu Dai, Jia-Jun Chen
Recent studies show that the attention heads in the Transformer are not equally important.
Ranked #1 on Machine Translation on WMT2017 Turkish-English
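The inequality noted above is often probed by ablating heads one at a time and measuring the effect; below is a minimal, self-contained sketch of that probe (a toy attention layer, not the paper's model or method).

```python
# Minimal per-head ablation probe on a toy multi-head attention layer.
# Illustration only; not the paper's method.
import torch

def multi_head_attention(x, wq, wk, wv, n_heads, drop_head=None):
    # x: (seq, d_model); weight matrices: (d_model, d_model)
    seq, d_model = x.shape
    d_head = d_model // n_heads
    q = (x @ wq).view(seq, n_heads, d_head)
    k = (x @ wk).view(seq, n_heads, d_head)
    v = (x @ wv).view(seq, n_heads, d_head)
    outs = []
    for h in range(n_heads):
        att = torch.softmax(q[:, h] @ k[:, h].T / d_head ** 0.5, dim=-1)
        out = att @ v[:, h]
        if h == drop_head:          # ablate this head entirely
            out = torch.zeros_like(out)
        outs.append(out)
    return torch.cat(outs, dim=-1)  # (seq, d_model)

torch.manual_seed(0)
x = torch.randn(5, 8)
w = [torch.randn(8, 8) for _ in range(3)]
full = multi_head_attention(x, *w, n_heads=2)
for h in range(2):
    ablated = multi_head_attention(x, *w, n_heads=2, drop_head=h)
    print(h, torch.norm(full - ablated).item())  # crude importance proxy per head
```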
no code implementations • 4 Aug 2020 • Robert Ridley, Liang He, Xin-yu Dai, Shu-Jian Huang, Jia-Jun Chen
Cross-prompt automated essay scoring (AES) requires the system to use non-target-prompt essays to award scores to a target-prompt essay.
no code implementations • ACL 2020 • Jiahuan Li, Yu Bao, Shu-Jian Huang, Xin-yu Dai, Jia-Jun Chen
Definition generation, which aims to automatically generate dictionary definitions for words, has recently been proposed to assist the construction of dictionaries and help people understand unfamiliar texts.
1 code implementation • ACL 2020 • Yawen Ouyang, Moxin Chen, Xin-yu Dai, Yinggong Zhao, Shu-Jian Huang, Jia-Jun Chen
Recently proposed approaches have made promising progress in dialogue state tracking (DST).
no code implementations • ACL 2020 • Xuhui Zhou, Zaixiang Zheng, Shu-Jian Huang
Based on the properties of RPD, we study the relations of word embeddings of different algorithms systematically and investigate the influence of different training processes and corpora.
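One rotation-invariant way to compare two embedding spaces, in the spirit of RPD, goes through their Gram matrices; the sketch below is an assumed simplification, and the paper's exact definition and normalization may differ.

```python
# Hedged sketch: compare two embedding matrices over the SAME, row-aligned
# vocabulary via their Gram matrices (invariant to rotations of the space).
# The exact RPD formula in the paper may differ; this is an assumed variant.
import numpy as np

def gram_distance(E1, E2):
    G1 = E1 @ E1.T
    G2 = E2 @ E2.T
    return np.linalg.norm(G1 - G2) / np.sqrt(np.linalg.norm(G1) * np.linalg.norm(G2))

rng = np.random.default_rng(0)
E = rng.normal(size=(100, 50))
Q, _ = np.linalg.qr(rng.normal(size=(50, 50)))       # random rotation
print(gram_distance(E, E @ Q))                       # ~0: rotations don't matter
print(gram_distance(E, rng.normal(size=(100, 50))))  # large for unrelated spaces
```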
no code implementations • ICLR 2020 • Zaixiang Zheng, Hao Zhou, Shu-Jian Huang, Lei LI, Xin-yu Dai, Jia-Jun Chen
Training neural machine translation (NMT) models requires large parallel corpora, which are scarce for many language pairs.
no code implementations • 24 Feb 2020 • Rongxiang Weng, Hao-Ran Wei, Shu-Jian Huang, Heng Yu, Lidong Bing, Weihua Luo, Jia-Jun Chen
The encoder maps the words in the input sentence into a sequence of hidden states, which are then fed into the decoder to generate the output sentence.
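As a concrete illustration of this encoder-decoder flow, here is a minimal GRU sketch with toy dimensions (not any specific system from the paper):

```python
# Minimal GRU encoder-decoder forward pass (toy sizes, illustration only).
import torch
import torch.nn as nn

vocab, emb, hid = 1000, 32, 64
embed = nn.Embedding(vocab, emb)
encoder = nn.GRU(emb, hid, batch_first=True)
decoder = nn.GRU(emb, hid, batch_first=True)
project = nn.Linear(hid, vocab)

src = torch.randint(0, vocab, (1, 7))             # source token ids
enc_states, enc_last = encoder(embed(src))        # (1, 7, hid): one hidden state per word

tgt_in = torch.randint(0, vocab, (1, 5))          # shifted target ids
dec_states, _ = decoder(embed(tgt_in), enc_last)  # decoder seeded with encoder state
logits = project(dec_states)                      # (1, 5, vocab): next-word scores
print(logits.shape)
```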
1 code implementation • 19 Feb 2020 • Zaixiang Zheng, Xiang Yue, Shu-Jian Huang, Jia-Jun Chen, Alexandra Birch
Document-level machine translation manages to outperform sentence-level models by a small margin, but has failed to be widely adopted.
1 code implementation • 7 Jan 2020 • Zhen Wu, Fei Zhao, Xin-yu Dai, Shu-Jian Huang, Jia-Jun Chen
In this paper, we propose a novel model to transfer this opinion knowledge from resource-rich review sentiment classification datasets to the low-resource TOWE task.
Aspect-oriented Opinion Extraction
General Classification
no code implementations • 4 Dec 2019 • Rongxiang Weng, Heng Yu, Shu-Jian Huang, Shanbo Cheng, Weihua Luo
The standard paradigm for exploiting them includes two steps: first, pre-training a model, e.g. BERT, on large-scale unlabeled monolingual data.
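A minimal sketch of this two-step paradigm, assuming the `transformers` package is available (the checkpoint name is illustrative and loading it requires network access):

```python
# Step 1: load an encoder pre-trained on unlabeled monolingual text.
# Step 2: put a task head on top and fine-tune. Sketch only.
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
encoder = BertModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("A sentence to encode.", return_tensors="pt")
with torch.no_grad():
    hidden = encoder(**inputs).last_hidden_state   # (1, seq, 768)

head = torch.nn.Linear(hidden.size(-1), 2)         # toy classifier head
print(head(hidden[:, 0]).shape)                    # classify from the [CLS] state
```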
no code implementations • 25 Nov 2019 • Yu Bao, Hao Zhou, Jiangtao Feng, Mingxuan Wang, Shu-Jian Huang, Jia-Jun Chen, Lei LI
Non-autoregressive models are promising on various text generation tasks.
no code implementations • 21 Nov 2019 • Zewei Sun, Shu-Jian Huang, Hao-Ran Wei, Xin-yu Dai, Jia-Jun Chen
Experiments also show that back-translation with these diverse translations can bring significant performance improvements on translation tasks.
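Schematically, back-translation with diverse translations looks like the sketch below; `sample_translations` is a hypothetical placeholder for a reverse (target-to-source) NMT model decoded with sampling rather than beam search.

```python
# Schematic back-translation pipeline with sampled (diverse) translations.
import random

def sample_translations(target_sentence, k=3):
    # Hypothetical placeholder: a real system would sample k source-side
    # translations from a trained reverse NMT model.
    return [f"{target_sentence} [sampled source #{i}]" for i in range(k)]

monolingual_targets = ["ein Beispielsatz", "noch ein Satz"]
synthetic_parallel = []
for tgt in monolingual_targets:
    for src in sample_translations(tgt, k=3):    # several diverse sources per target
        synthetic_parallel.append((src, tgt))    # forward model trains on these pairs

random.shuffle(synthetic_parallel)
print(len(synthetic_parallel), synthetic_parallel[0])
```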
1 code implementation • ACL 2020 • Wei Zou, Shu-Jian Huang, Jun Xie, Xin-yu Dai, Jia-Jun Chen
Despite their significant efficacy, neural machine translation systems tend to fail on less well-formed inputs, which may significantly harm their credibility; fathoming how and when neural-based systems fail in such cases is critical for industrial maintenance.
no code implementations • 9 Nov 2019 • Zhen Cheng, Zaixiang Zheng, Xin-yu Dai, Shu-Jian Huang, Jia-Jun Chen
Intuitively, NLI should rely more on multiple perspectives to form a holistic view to eliminate bias.
no code implementations • IJCNLP 2019 • Huiyun Yang, Shu-Jian Huang, Xin-yu Dai, Jia-Jun Chen
In sequence labeling, previous domain adaptation methods focus on the adaptation from the source domain to the entire target domain without considering the diversity of individual target domain samples, which may lead to negative transfer results for certain samples.
no code implementations • 21 Aug 2019 • Rongxiang Weng, Heng Yu, Shu-Jian Huang, Weihua Luo, Jia-Jun Chen
Then, we design a framework for integrating both source and target sentence-level representations into the NMT model to improve translation quality.
1 code implementation • ACL 2019 • Peng Wu, Shu-Jian Huang, Rongxiang Weng, Zaixiang Zheng, Jianbing Zhang, Xiaohui Yan, Jia-Jun Chen
However, one critical problem is that current approaches only get high accuracy for questions whose relations have been seen in the training data.
no code implementations • 8 Jul 2019 • Rongxiang Weng, Hao Zhou, Shu-Jian Huang, Lei LI, Yifan Xia, Jia-Jun Chen
Experiments in both ideal and real interactive translation settings demonstrate that our proposed method enhances machine translation results significantly while requiring fewer revision instructions from humans compared to previous methods.
1 code implementation • ACL 2019 • Yu Bao, Hao Zhou, Shu-Jian Huang, Lei LI, Lili Mou, Olga Vechtomova, Xin-yu Dai, Jia-Jun Chen
In this paper, we propose to generate sentences from disentangled syntactic and semantic spaces.
no code implementations • NAACL 2019 • Hao-Ran Wei, Shu-Jian Huang, Ran Wang, Xin-yu Dai, Jia-Jun Chen
Our method generates a teacher model on the fly from checkpoints, guiding the training process to obtain better performance.
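A simplified sketch of the idea: average recent checkpoints into a teacher and add a distillation term to the student loss. The paper's actual checkpoint selection and training schedule may differ.

```python
# Simplified checkpoint-distillation sketch: average recent checkpoint weights
# into a teacher, then add a KL term to the student loss.
import copy
import torch
import torch.nn.functional as F

def average_checkpoints(state_dicts):
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        avg[key] = sum(sd[key] for sd in state_dicts) / len(state_dicts)
    return avg

student = torch.nn.Linear(16, 10)
checkpoints = [copy.deepcopy(student.state_dict()) for _ in range(3)]  # stand-ins

teacher = torch.nn.Linear(16, 10)
teacher.load_state_dict(average_checkpoints(checkpoints))
teacher.eval()

x, y = torch.randn(4, 16), torch.randint(0, 10, (4,))
s_logits = student(x)
with torch.no_grad():
    t_logits = teacher(x)
kd = F.kl_div(F.log_softmax(s_logits, -1), F.softmax(t_logits, -1),
              reduction="batchmean")
loss = F.cross_entropy(s_logits, y) + 0.5 * kd   # 0.5: illustrative KD weight
loss.backward()
```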
no code implementations • NAACL 2019 • Kaijia Yang, Liang He, Xin-yu Dai, Shu-Jian Huang, Jia-Jun Chen
Distant supervision has made great progress on the relation classification task.
1 code implementation • NAACL 2019 • Zhifang Fan, Zhen Wu, Xin-yu Dai, Shu-Jian Huang, Jia-Jun Chen
In this paper, we propose a novel sequence labeling subtask for ABSA named TOWE (Target-oriented Opinion Words Extraction), which aims at extracting the corresponding opinion words for a given opinion target.
Aspect-oriented Opinion Extraction
Target-oriented Opinion Words Extraction
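The TOWE formulation above can be made concrete with a toy BIO-tagged example (illustration only, not the model):

```python
# TOWE as sequence labeling: given a sentence and a marked opinion target,
# label the corresponding opinion words with BIO tags. Toy example.
sentence = ["the", "screen", "is", "bright", "and", "very", "clear"]
target = "screen"
opinion_spans = [(3, 4), (5, 7)]   # token spans of "bright" and "very clear"

tags = ["O"] * len(sentence)
for start, end in opinion_spans:
    tags[start] = "B"
    for i in range(start + 1, end):
        tags[i] = "I"

for tok, tag in zip(sentence, tags):
    marker = "<target>" if tok == target else ""
    print(f"{tok:8s} {tag} {marker}")
```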
1 code implementation • IJCNLP 2019 • Zaixiang Zheng, Shu-Jian Huang, Zhaopeng Tu, Xin-yu Dai, Jia-Jun Chen
Previous studies have shown that neural machine translation (NMT) models can benefit from explicitly modeling translated (Past) and untranslated (Future) source contents, which can be separated into groups through parts-to-wholes assignment.
no code implementations • 24 Oct 2018 • Zaixiang Zheng, Shu-Jian Huang, Zewei Sun, Rongxiang Weng, Xin-yu Dai, Jia-Jun Chen
Previous studies show that incorporating external information could improve the translation quality of Neural Machine Translation (NMT) systems.
no code implementations • EMNLP 2018 • Zi-Yi Dou, Zhi-Hao Zhou, Shu-Jian Huang
Bilingual lexicon extraction has been studied for decades and most previous methods have relied on parallel corpora or bilingual dictionaries.
no code implementations • NAACL 2018 • Huadong Chen, Shu-Jian Huang, David Chiang, Xin-yu Dai, Jia-Jun Chen
Natural language sentences, being hierarchical, can be represented at different levels of granularity, like words, subwords, or characters.
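For example, the same sentence at the three granularities (the subword split below is an assumed BPE-style segmentation, not the output of a trained model):

```python
# Three granularities of the same sentence; subword pieces are illustrative.
sentence = "unbelievable results"

words = sentence.split()
subwords = ["un@@", "believ@@", "able", "results"]   # assumed BPE-style pieces
characters = list(sentence.replace(" ", "_"))

print(words)
print(subwords)
print(characters)
```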
no code implementations • 26 Mar 2018 • Guang-Neng Hu, Xin-yu Dai, Feng-Yu Qiu, Rui Xia, Tao Li, Shu-Jian Huang, Jia-Jun Chen
First, we propose a novel model MR3 to jointly model three sources of information (i.e., ratings, item reviews, and social relations) effectively for rating prediction by aligning the latent factors and hidden topics.
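A hedged sketch of the joint-modeling idea: share user factors between a rating-prediction term and a social-closeness term. This is a simplified stand-in for MR3, not its actual objective (the review-topic term is omitted).

```python
# Simplified joint factorization: shared user factors tie together a rating
# loss and a social regularizer. Stand-in for MR3's idea, not its objective.
import torch

n_users, n_items, k = 50, 80, 8
U = torch.randn(n_users, k, requires_grad=True)   # shared user factors
V = torch.randn(n_items, k, requires_grad=True)   # item factors

ratings = [(0, 1, 4.0), (2, 3, 5.0)]              # toy (user, item, rating) triples
social = [(0, 2)]                                 # toy (user, friend) pairs

rating_loss = sum((U[u] @ V[i] - r) ** 2 for u, i, r in ratings)
social_loss = sum(((U[u] - U[f]) ** 2).sum() for u, f in social)  # friends stay close
# A review term would align V with hidden topics of item reviews (omitted here).
loss = rating_loss + 0.1 * social_loss
loss.backward()
print(float(loss))
```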
1 code implementation • 24 Jan 2018 • Zhen Wu, Xin-yu Dai, Cunyan Yin, Shu-Jian Huang, Jia-Jun Chen
Recently, some works have achieved improvements by incorporating user and product information to generate a review representation.
Ranked #3 on Sentiment Analysis on User and product information
1 code implementation • TACL 2018 • Zaixiang Zheng, Hao Zhou, Shu-Jian Huang, Lili Mou, Xin-yu Dai, Jia-Jun Chen, Zhaopeng Tu
The Past and Future contents are fed to both the attention model and the decoder states, which offers NMT systems the knowledge of translated and untranslated contents.
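A deliberately simplified picture of that bookkeeping: Past accumulates attended source content while Future gives it up. The paper uses gated RNN-style updates; this additive version only illustrates the information flow.

```python
# Simplified Past/Future bookkeeping during decoding (additive version;
# the paper's updates are gated RNN-style, not plain addition/subtraction).
import torch

src_contexts = torch.randn(6, 32)        # one attended context vector per step
past = torch.zeros(32)
future = src_contexts.sum(dim=0)         # at the start, everything is untranslated

for t, c_t in enumerate(src_contexts):
    past = past + c_t                    # translated content grows
    future = future - c_t                # untranslated content shrinks
    # both vectors would be fed to the attention model and decoder state here
    print(t, past.norm().item(), future.norm().item())
```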
no code implementations • LREC 2018 • Zi-Yi Dou, Hao Zhou, Shu-Jian Huang, Xin-yu Dai, Jia-Jun Chen
However, there are certain limitations in Scheduled Sampling and we propose two dynamic oracle-based methods to improve it.
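For context, Scheduled Sampling feeds the model's own previous prediction instead of the gold token with a probability that grows over training; a toy sketch follows (the decay schedule is illustrative, not the paper's):

```python
# Scheduled Sampling in a nutshell: mix gold and model-predicted previous
# tokens during training, with the gold probability decaying over steps.
import math
import random

def next_input(gold_prev, model_prev, step, k=50.0):
    p_gold = k / (k + math.exp(step / k))   # inverse-sigmoid-style decay
    return gold_prev if random.random() < p_gold else model_prev

gold = ["<s>", "a", "b", "c"]
model_preds = ["<s>", "a'", "b'", "c'"]     # stand-ins for argmax outputs
for step in range(1, len(gold)):
    fed = next_input(gold[step - 1], model_preds[step - 1], step)
    print(step, "input:", fed)
```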
no code implementations • EMNLP 2017 • Hao Zhou, Zhenting Yu, Yue Zhang, Shu-Jian Huang, Xin-yu Dai, Jia-Jun Chen
Neural parsers have benefited from automatically labeled data via dependency-context word embeddings.
no code implementations • WS 2017 • Ondřej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Shu-Jian Huang, Matthias Huck, Philipp Koehn, Qun Liu, Varvara Logacheva, Christof Monz, Matteo Negri, Matt Post, Raphael Rubino, Lucia Specia, Marco Turchi
no code implementations • EMNLP 2017 • Rongxiang Weng, Shu-Jian Huang, Zaixiang Zheng, Xin-yu Dai, Jia-Jun Chen
In the encoder-decoder architecture for neural machine translation (NMT), the hidden states of the recurrent structures in the encoder and decoder carry the crucial information about the sentence. These vectors are generated by parameters which are updated by back-propagation of translation errors through time.
1 code implementation • ACL 2017 • Huadong Chen, Shu-Jian Huang, David Chiang, Jia-Jun Chen
Most neural machine translation (NMT) models are based on the sequential encoder-decoder framework, which makes no use of syntactic information.
no code implementations • CONLL 2017 • Huadong Chen, Shu-Jian Huang, David Chiang, Xin-yu Dai, Jia-Jun Chen
We propose a listwise learning framework for structure prediction problems such as machine translation.
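One concrete instance of a listwise objective over an n-best list is the ListNet-style cross-entropy below; this is a generic formulation, not necessarily the paper's exact loss.

```python
# ListNet-style listwise loss over an n-best list: match the model's score
# distribution to the evaluation metric's. Generic instance, sketch only.
import torch
import torch.nn.functional as F

model_scores = torch.randn(5, requires_grad=True)             # 5 candidate outputs
metric_scores = torch.tensor([0.31, 0.28, 0.35, 0.12, 0.22])  # e.g. BLEU per candidate

p_model = F.log_softmax(model_scores, dim=0)
p_metric = F.softmax(metric_scores / 0.1, dim=0)   # 0.1: sharpening temperature
loss = -(p_metric * p_model).sum()                 # cross-entropy between the lists
loss.backward()
print(float(loss))
```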
1 code implementation • ACL 2017 • Hao Zhou, Zhaopeng Tu, Shu-Jian Huang, Xiaohua Liu, Hang Li, Jia-Jun Chen
In typical neural machine translation (NMT), the decoder generates a sentence word by word, packing all linguistic granularities into the same time-scale of the RNN.
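That word-by-word, single-time-scale regime is visible in a bare greedy decoding loop (random logits stand in for a trained decoder):

```python
# Word-by-word greedy decoding at a single time-scale (toy stand-in model).
import torch

vocab, eos = 20, 0
torch.manual_seed(1)
decoder_step = torch.nn.Linear(8, vocab)   # stand-in for one RNN decoder step

state = torch.randn(1, 8)
output = []
for _ in range(10):                        # one token per step, same time-scale
    token = int(decoder_step(state).argmax(dim=-1))
    if token == eos:
        break
    output.append(token)
    state = torch.tanh(state + token / vocab)   # toy state update
print(output)
```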
no code implementations • LREC 2016 • Hao Zhou, Yue Zhang, Shu-Jian Huang, Xin-yu Dai, Jia-Jun Chen
Greedy transition-based parsers are appealing for their very fast speed and reasonably high accuracy.
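The speed comes from making one local decision per step. The toy arc-standard mechanics below use a fixed action sequence where a real parser would predict each action with a classifier:

```python
# Arc-standard transition mechanics (toy): SHIFT moves a word onto the stack,
# LEFT/RIGHT attach the appropriate stack word as a dependent.
def parse(words, actions):
    stack, buffer, arcs = [], list(range(len(words))), []
    for act in actions:
        if act == "SHIFT":
            stack.append(buffer.pop(0))
        elif act == "LEFT":               # second-from-top becomes dependent of top
            dep = stack.pop(-2)
            arcs.append((stack[-1], dep))
        elif act == "RIGHT":              # top becomes dependent of second-from-top
            dep = stack.pop()
            arcs.append((stack[-1], dep))
    return arcs

words = ["she", "ate", "fish"]
print(parse(words, ["SHIFT", "SHIFT", "LEFT", "SHIFT", "RIGHT"]))
# -> [(1, 0), (1, 2)]: "ate" heads both "she" and "fish"
```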
no code implementations • 11 Jan 2016 • Guang-Neng Hu, Xin-yu Dai, Yunya Song, Shu-Jian Huang, Jia-Jun Chen
Recommender systems (RSs) provide an effective way of alleviating the information overload problem by selecting personalized choices.