Search Results for author: Shu-Jian Huang

Found 44 papers, 13 papers with code

Prompt Agnostic Essay Scorer: A Domain Generalization Approach to Cross-prompt Automated Essay Scoring

1 code implementation 4 Aug 2020 Robert Ridley, Liang He, Xin-yu Dai, Shu-Jian Huang, Jia-Jun Chen

Cross-prompt automated essay scoring (AES) requires the system to use non-target-prompt essays to award scores to a target-prompt essay.

Automated Essay Scoring, Domain Generalization +1

Explicit Semantic Decomposition for Definition Generation

no code implementations ACL 2020 Jiahuan Li, Yu Bao, Shu-Jian Huang, Xin-yu Dai, Jia-Jun Chen

Definition generation, which aims to automatically generate dictionary definitions for words, has recently been proposed to assist the construction of dictionaries and help people understand unfamiliar texts.

RPD: A Distance Function Between Word Embeddings

no code implementations ACL 2020 Xuhui Zhou, Zaixiang Zheng, Shu-Jian Huang

Based on the properties of RPD, we study the relations of word embeddings of different algorithms systematically and investigate the influence of different training processes and corpora.

Word Embeddings

Mirror-Generative Neural Machine Translation

no code implementations ICLR 2020 Zaixiang Zheng, Hao Zhou, Shu-Jian Huang, Lei LI, Xin-yu Dai, Jia-Jun Chen

Training neural machine translation (NMT) models requires a large amount of parallel data, which is scarce for many language pairs.

Machine Translation, NMT +1

GRET: Global Representation Enhanced Transformer

no code implementations 24 Feb 2020 Rongxiang Weng, Hao-Ran Wei, Shu-Jian Huang, Heng Yu, Lidong Bing, Weihua Luo, Jia-Jun Chen

The encoder maps the words in the input sentence into a sequence of hidden states, which are then fed into the decoder to generate the output sentence.

Decoder, Machine Translation +4
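
The encoder-decoder pipeline this abstract describes, words mapped to a sequence of hidden states that the decoder then consumes, can be pictured with a toy recurrent encoder in NumPy. This is an illustrative sketch only: GRET itself augments a Transformer with a global sentence representation, which is not implemented here, and all dimensions and weights below are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

def rnn_encoder(embeddings, W, U):
    """Map a sequence of word embeddings to a sequence of hidden states."""
    h = np.zeros(W.shape[0])
    states = []
    for x in embeddings:
        h = np.tanh(W @ h + U @ x)  # simple recurrent update per word
        states.append(h)
    return np.stack(states)

d_hid, d_emb, seq_len = 4, 3, 5
W = rng.normal(size=(d_hid, d_hid)) * 0.1   # hidden-to-hidden weights
U = rng.normal(size=(d_hid, d_emb)) * 0.1   # input-to-hidden weights
src = rng.normal(size=(seq_len, d_emb))     # toy source "sentence"

states = rnn_encoder(src, W, U)
print(states.shape)  # (5, 4): one hidden state per source word
```

A decoder would attend over `states` to generate the output sentence; GRET's point is that such per-word states alone lack an explicit global summary of the sentence.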

Towards Making the Most of Context in Neural Machine Translation

1 code implementation 19 Feb 2020 Zaixiang Zheng, Xiang Yue, Shu-Jian Huang, Jia-Jun Chen, Alexandra Birch

Document-level machine translation manages to outperform sentence-level models by a small margin, but has failed to be widely adopted.

Document-Level Machine Translation, Machine Translation +3

Latent Opinions Transfer Network for Target-Oriented Opinion Words Extraction

1 code implementation 7 Jan 2020 Zhen Wu, Fei Zhao, Xin-yu Dai, Shu-Jian Huang, Jia-Jun Chen

In this paper, we propose a novel model to transfer this opinion knowledge from resource-rich review sentiment classification datasets to the low-resource task of TOWE.

Aspect-oriented Opinion Extraction, General Classification +3

Acquiring Knowledge from Pre-trained Model to Neural Machine Translation

no code implementations 4 Dec 2019 Rongxiang Weng, Heng Yu, Shu-Jian Huang, Shanbo Cheng, Weihua Luo

The standard paradigm of exploiting them includes two steps: first, pre-training a model, e.g., BERT, with large-scale unlabeled monolingual data.

General Knowledge, Knowledge Distillation +3

Generating Diverse Translation by Manipulating Multi-Head Attention

no code implementations 21 Nov 2019 Zewei Sun, Shu-Jian Huang, Hao-Ran Wei, Xin-yu Dai, Jia-Jun Chen

Experiments also show that back-translation with these diverse translations could bring significant improvement on performance on translation tasks.

Data Augmentation, Decoder +5
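
Manipulating multi-head attention to obtain diverse translations can be illustrated with a minimal NumPy sketch: dropping one head perturbs the attention output, which can steer decoding down a different path. The two-head layer, random inputs, and mask values below are hypothetical, not taken from the paper.

```python
import numpy as np

def multi_head_attention(Q, K, V, n_heads, head_mask):
    """Scaled dot-product attention with per-head masking.

    head_mask[i] = 0 silences head i, changing the combined output.
    """
    d = Q.shape[-1] // n_heads
    outs = []
    for i in range(n_heads):
        s = slice(i * d, (i + 1) * d)
        scores = Q[:, s] @ K[:, s].T / np.sqrt(d)
        weights = np.exp(scores) / np.exp(scores).sum(-1, keepdims=True)  # softmax
        outs.append(head_mask[i] * (weights @ V[:, s]))
    return np.concatenate(outs, axis=-1)

rng = np.random.default_rng(1)
Q = rng.normal(size=(3, 8))  # 3 target positions, model dim 8
K = rng.normal(size=(4, 8))  # 4 source positions
V = rng.normal(size=(4, 8))
full = multi_head_attention(Q, K, V, n_heads=2, head_mask=[1, 1])
masked = multi_head_attention(Q, K, V, n_heads=2, head_mask=[1, 0])
print(np.allclose(full, masked))  # False: masking a head changes the representation
```

Generating with several different masks yields several different output distributions, which is the source of diversity the abstract exploits for back-translation.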

A Reinforced Generation of Adversarial Examples for Neural Machine Translation

1 code implementation ACL 2020 Wei Zou, Shu-Jian Huang, Jun Xie, Xin-yu Dai, Jia-Jun Chen

Neural machine translation systems tend to fail on less decent inputs despite their significant efficacy, which may significantly harm their credibility; fathoming how and when neural-based systems fail in such cases is critical for industrial maintenance.

Machine Translation, Translation

Fine-grained Knowledge Fusion for Sequence Labeling Domain Adaptation

no code implementations IJCNLP 2019 Huiyun Yang, Shu-Jian Huang, Xin-yu Dai, Jia-Jun Chen

In sequence labeling, previous domain adaptation methods focus on the adaptation from the source domain to the entire target domain without considering the diversity of individual target domain samples, which may lead to negative transfer results for certain samples.

Diversity, Domain Adaptation

Improving Neural Machine Translation with Pre-trained Representation

no code implementations 21 Aug 2019 Rongxiang Weng, Heng Yu, Shu-Jian Huang, Weihua Luo, Jia-Jun Chen

Then, we design a framework for integrating both source and target sentence-level representations into the NMT model to improve translation quality.

Machine Translation, NMT +3

Correct-and-Memorize: Learning to Translate from Interactive Revisions

no code implementations 8 Jul 2019 Rongxiang Weng, Hao Zhou, Shu-Jian Huang, Lei LI, Yifan Xia, Jia-Jun Chen

Experiments in both ideal and real interactive translation settings demonstrate that our proposed method enhances machine translation results significantly while requiring fewer revision instructions from humans compared to previous methods.

Machine Translation, Translation

Target-oriented Opinion Words Extraction with Target-fused Neural Sequence Labeling

1 code implementation NAACL 2019 Zhifang Fan, Zhen Wu, Xin-yu Dai, Shu-Jian Huang, Jia-Jun Chen

In this paper, we propose a novel sequence labeling subtask for ABSA named TOWE (Target-oriented Opinion Words Extraction), which aims at extracting the corresponding opinion words for a given opinion target.

Aspect-Based Sentiment Analysis, Aspect-oriented Opinion Extraction +2
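
The TOWE formulation described above, extracting the opinion words that correspond to a given opinion target, is naturally cast as BIO sequence tagging conditioned on the target. A minimal sketch with a hypothetical example sentence (the tags here are hand-written stand-ins for model predictions):

```python
# Toy illustration of target-oriented opinion words extraction (TOWE):
# the same sentence gets different BIO tags depending on the target.
sentence = ["the", "screen", "is", "bright", "but", "the", "battery", "drains", "fast"]

def extract_opinion_words(tokens, tags):
    """Recover opinion-word spans from a BIO tag sequence."""
    spans, current = [], []
    for tok, tag in zip(tokens, tags):
        if tag == "B":                      # begin a new opinion span
            if current:
                spans.append(" ".join(current))
            current = [tok]
        elif tag == "I" and current:        # continue the current span
            current.append(tok)
        else:                               # outside any opinion span
            if current:
                spans.append(" ".join(current))
            current = []
    if current:
        spans.append(" ".join(current))
    return spans

tags_for_screen  = ["O", "O", "O", "B", "O", "O", "O", "O", "O"]
tags_for_battery = ["O", "O", "O", "O", "O", "O", "O", "B", "I"]
print(extract_opinion_words(sentence, tags_for_screen))   # ['bright']
print(extract_opinion_words(sentence, tags_for_battery))  # ['drains fast']
```

The paper's contribution is the target-fused neural tagger that produces such tags; this sketch only shows the input/output contract of the subtask.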

Dynamic Past and Future for Neural Machine Translation

1 code implementation IJCNLP 2019 Zaixiang Zheng, Shu-Jian Huang, Zhaopeng Tu, Xin-yu Dai, Jia-Jun Chen

Previous studies have shown that neural machine translation (NMT) models can benefit from explicitly modeling translated (Past) and untranslated (Future) source contents; this work dynamically assigns source contents to translated and untranslated groups through parts-to-wholes assignment.

Machine Translation, NMT +1

Learning to Discriminate Noises for Incorporating External Information in Neural Machine Translation

no code implementations 24 Oct 2018 Zaixiang Zheng, Shu-Jian Huang, Zewei Sun, Rongxiang Weng, Xin-yu Dai, Jia-Jun Chen

Previous studies show that incorporating external information could improve the translation quality of Neural Machine Translation (NMT) systems.

Machine Translation, NMT +2

Unsupervised Bilingual Lexicon Induction via Latent Variable Models

no code implementations EMNLP 2018 Zi-Yi Dou, Zhi-Hao Zhou, Shu-Jian Huang

Bilingual lexicon extraction has been studied for decades and most previous methods have relied on parallel corpora or bilingual dictionaries.

Bilingual Lexicon Induction, Word Embeddings

Collaborative Filtering with Topic and Social Latent Factors Incorporating Implicit Feedback

no code implementations 26 Mar 2018 Guang-Neng Hu, Xin-yu Dai, Feng-Yu Qiu, Rui Xia, Tao Li, Shu-Jian Huang, Jia-Jun Chen

First, we propose a novel model MR3 to jointly model three sources of information (i.e., ratings, item reviews, and social relations) effectively for rating prediction by aligning the latent factors and hidden topics.

Collaborative Filtering, Recommendation Systems
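
The rating-prediction core that MR3 builds on can be sketched as plain matrix factorization over user and item latent factors. Aligning those factors with review topics and social relations, which is the paper's actual contribution, is omitted here, and all sizes, ratings, and hyperparameters below are made up.

```python
import numpy as np

def sgd_step(R, U, V, lr=0.01, reg=0.1):
    """One full-batch gradient step on the observed entries of R.

    R holds ratings with np.nan for unobserved user-item pairs;
    U and V are the user and item latent-factor matrices.
    """
    mask = ~np.isnan(R)
    err = np.where(mask, np.nan_to_num(R) - U @ V.T, 0.0)  # residual on observed cells
    U += lr * (err @ V - reg * U)
    V += lr * (err.T @ U - reg * V)
    return np.sqrt((err[mask] ** 2).mean())  # RMSE on observed ratings

rng = np.random.default_rng(3)
R = np.array([[5.0, 3.0, np.nan],
              [4.0, np.nan, 1.0],
              [np.nan, 2.0, 5.0]])          # toy 3x3 rating matrix
U = rng.normal(scale=0.1, size=(3, 2))      # user factors, rank 2
V = rng.normal(scale=0.1, size=(3, 2))      # item factors, rank 2
losses = [sgd_step(R, U, V) for _ in range(500)]
print(losses[-1] < losses[0])  # True: training error decreases
```

In MR3, the same latent factors would additionally be tied to review hidden topics and social-relation factors, so all three signals shape the learned `U` and `V`.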

Modeling Past and Future for Neural Machine Translation

1 code implementation TACL 2018 Zaixiang Zheng, Hao Zhou, Shu-Jian Huang, Lili Mou, Xin-yu Dai, Jia-Jun Chen, Zhaopeng Tu

The Past and Future contents are fed to both the attention model and the decoder states, which offers NMT systems the knowledge of translated and untranslated contents.

Decoder, Machine Translation +2
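
A simplified way to picture the Past and Future contents is as accumulated attention-weighted source vectors: whatever has been consumed by attention so far is Past, and the remainder of the source is Future. The sketch below assumes this purely additive view; the paper instead models both with dedicated recurrent states fed to the attention model and decoder.

```python
import numpy as np

def past_future(source_states, attention_weights):
    """Track translated (Past) and untranslated (Future) source content.

    At each decoding step, add the attention-weighted source vector to
    Past; Future is defined as the total source content minus Past.
    """
    total = source_states.sum(axis=0)
    past = np.zeros_like(total)
    history = []
    for w in attention_weights:              # w: attention over source words
        past = past + w @ source_states      # accumulate consumed content
        history.append((past.copy(), total - past))
    return history

rng = np.random.default_rng(2)
S = rng.normal(size=(4, 5))                  # 4 source words, dim 5
A = np.array([[1.0, 0, 0, 0],                # step 1 attends word 0
              [0, 1.0, 0, 0],                # step 2 attends word 1
              [0, 0, 0.5, 0.5]])             # step 3 splits over words 2, 3
hist = past_future(S, A)
past3, future3 = hist[-1]
print(np.allclose(past3 + future3, S.sum(axis=0)))  # True: Past + Future = whole source
```

The invariant Past + Future = total source content is what lets the decoder be told, at every step, both what has been translated and what remains.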

Dynamic Oracle for Neural Machine Translation in Decoding Phase

no code implementations LREC 2018 Zi-Yi Dou, Hao Zhou, Shu-Jian Huang, Xin-yu Dai, Jia-Jun Chen

However, there are certain limitations in Scheduled Sampling, and we propose two dynamic oracle-based methods to improve it.

Machine Translation, NMT +1

Neural Machine Translation with Word Predictions

no code implementations EMNLP 2017 Rongxiang Weng, Shu-Jian Huang, Zaixiang Zheng, Xin-yu Dai, Jia-Jun Chen

In the encoder-decoder architecture for neural machine translation (NMT), the hidden states of the recurrent structures in the encoder and decoder carry the crucial information about the sentence. These vectors are generated by parameters which are updated by back-propagation of translation errors through time.

Decoder, Machine Translation +3

Improved Neural Machine Translation with a Syntax-Aware Encoder and Decoder

1 code implementation ACL 2017 Huadong Chen, Shu-Jian Huang, David Chiang, Jia-Jun Chen

Most neural machine translation (NMT) models are based on the sequential encoder-decoder framework, which makes no use of syntactic information.

Decoder, Machine Translation +2

Chunk-Based Bi-Scale Decoder for Neural Machine Translation

1 code implementation ACL 2017 Hao Zhou, Zhaopeng Tu, Shu-Jian Huang, Xiaohua Liu, Hang Li, Jia-Jun Chen

In typical neural machine translation (NMT), the decoder generates a sentence word by word, packing all linguistic granularities into the same RNN time-scale.

Decoder, Machine Translation +3
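
The word-by-word decoding that the abstract contrasts against can be sketched as a greedy loop over a per-step scoring function. The toy "model" below is a hypothetical stand-in; the paper's chunk-level (bi-scale) decoder, which adds a slower chunk time-scale on top of this loop, is not implemented.

```python
import numpy as np

def greedy_decode(step_logits_fn, bos, eos, max_len=10):
    """Word-by-word greedy decoding: at each step, score the vocabulary
    given the prefix generated so far and emit the argmax token."""
    ys = [bos]
    for _ in range(max_len):
        y = int(np.argmax(step_logits_fn(ys)))
        ys.append(y)
        if y == eos:                # stop once the end token is emitted
            break
    return ys

# Hypothetical toy "model": continue the sequence 1, 2, 3, then emit EOS (0).
def toy_logits(prefix, vocab=5):
    logits = np.zeros(vocab)
    logits[prefix[-1] + 1 if prefix[-1] < 3 else 0] = 1.0
    return logits

print(greedy_decode(toy_logits, bos=0, eos=0))  # [0, 1, 2, 3, 0]
```

A bi-scale decoder would run a second, slower recurrence that updates once per chunk rather than once per word, and condition `step_logits_fn` on both scales.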

A Synthetic Approach for Recommendation: Combining Ratings, Social Relations, and Reviews

no code implementations 11 Jan 2016 Guang-Neng Hu, Xin-yu Dai, Yunya Song, Shu-Jian Huang, Jia-Jun Chen

Recommender systems (RSs) provide an effective way of alleviating the information overload problem by selecting personalized choices.

Recommendation Systems
