Search Results for author: ShuJian Huang

Found 91 papers, 62 papers with code

Data Augmentation for Low-resource Word Segmentation and POS Tagging of Ancient Chinese Texts

no code implementations LT4HALA (LREC) 2022 Yutong Shen, Jiahuan Li, ShuJian Huang, Yi Zhou, Xiaopeng Xie, Qinxin Zhao

Although SikuRoberta significantly boosts performance on WSG and POS tasks on ancient Chinese texts, the lack of labeled data still limits the performance of the model.

Data Augmentation Language Modeling +4

Learning from Adjective-Noun Pairs: A Knowledge-enhanced Framework for Target-Oriented Multimodal Sentiment Classification

1 code implementation COLING 2022 Fei Zhao, Zhen Wu, Siyu Long, Xinyu Dai, ShuJian Huang, Jiajun Chen

Target-oriented multimodal sentiment classification (TMSC) is a new subtask of aspect-based sentiment analysis, which aims to determine the sentiment polarity of the opinion target mentioned in a (sentence, image) pair.

Aspect-Based Sentiment Analysis Aspect-Based Sentiment Analysis (ABSA) +2

Meta-LMTC: Meta-Learning for Large-Scale Multi-Label Text Classification

no code implementations EMNLP 2021 Ran Wang, Xi’ao Su, Siyu Long, Xinyu Dai, ShuJian Huang, Jiajun Chen

However, the simple extension of meta-learning approaches to multi-label classification is sub-optimal for LMTC tasks due to the long-tailed label distribution and the coexistence of few- and zero-shot scenarios.

Meta-Learning Multi-Label Classification +4

GLAT: Glancing at Latent Variables for Parallel Text Generation

1 code implementation ACL 2022 Yu Bao, Hao Zhou, ShuJian Huang, Dongqi Wang, Lihua Qian, Xinyu Dai, Jiajun Chen, Lei LI

Recently, parallel text generation has received widespread attention due to its success in generation efficiency.

Text Generation

Towards Multi-label Unknown Intent Detection

1 code implementation COLING 2022 Yawen Ouyang, Zhen Wu, Xinyu Dai, ShuJian Huang, Jiajun Chen

In this paper, we propose a more desirable task, multi-label unknown intent detection, to detect whether the utterance contains the unknown intent, in which each utterance may contain multiple intents.

Intent Detection

PATS: Process-Level Adaptive Thinking Mode Switching

1 code implementation 25 May 2025 Yi Wang, Junxiao Liu, Shimao Zhang, Jiajun Chen, ShuJian Huang

Current large language models (LLMs) typically adopt a fixed reasoning strategy, either simple or complex, for all questions, regardless of their difficulty.

Computational Efficiency

Internal Bias in Reasoning Models leads to Overthinking

no code implementations 22 May 2025 Renfei Dang, ShuJian Huang, Jiajun Chen

Through further interpretability experiments, we find that this behavior is largely driven by the model's excessive attention to the input section, which amplifies the influence of internal bias on its decision-making process.

Why Not Act on What You Know? Unleashing Safety Potential of LLMs via Self-Aware Guard Enhancement

1 code implementation 17 May 2025 Peng Ding, Jun Kuang, ZongYu Wang, Xuezhi Cao, Xunliang Cai, Jiajun Chen, ShuJian Huang

Large Language Models (LLMs) have shown impressive capabilities across various tasks but remain vulnerable to meticulously crafted jailbreak attacks.

Trans-Zero: Self-Play Incentivizes Large Language Models for Multilingual Translation Without Parallel Data

1 code implementation 20 Apr 2025 Wei Zou, Sen yang, Yu Bao, ShuJian Huang, Jiajun Chen, Shanbo Cheng

The rise of Large Language Models (LLMs) has reshaped machine translation (MT), but multilingual MT still relies heavily on parallel data for supervised fine-tuning (SFT), facing challenges like data scarcity for low-resource languages and catastrophic forgetting.

Machine Translation Translation

Could Thinking Multilingually Empower LLM Reasoning?

1 code implementation 16 Apr 2025 Changjiang Gao, Xu Huang, Wenhao Zhu, ShuJian Huang, Lei LI, Fei Yuan

In this paper, we explore the upper bound of harnessing multilingualism in reasoning tasks, suggesting that multilingual reasoning promises significantly higher upper bounds than English-only reasoning (by nearly 10 Acc@k points), and does so robustly (tolerating variations in translation quality and language choice).

Answer Selection

Elucidating the Design Space of Multimodal Protein Language Models

1 code implementation 15 Apr 2025 Cheng-Yen Hsieh, Xinyou Wang, Daiheng Zhang, Dongyu Xue, Fei Ye, ShuJian Huang, Zaixiang Zheng, Quanquan Gu

Multimodal protein language models (PLMs) integrate sequence and token-based structural information, serving as a powerful foundation for protein modeling, generation, and design.

Diversity Representation Learning

Understanding LLMs' Cross-Lingual Context Retrieval: How Good It Is And Where It Comes From

no code implementations 15 Apr 2025 Changjiang Gao, Hankun Lin, ShuJian Huang, Xin Huang, Xue Han, Junlan Feng, Chao Deng, Jiajun Chen

The ability of cross-lingual context retrieval is a fundamental aspect of cross-lingual alignment of large language models (LLMs), where the model extracts context information in one language based on requests in another language.

Machine Reading Comprehension Retrieval

R-PRM: Reasoning-Driven Process Reward Modeling

1 code implementation 27 Mar 2025 Shuaijie She, Junxiao Liu, Yifeng Liu, Jiajun Chen, Xin Huang, ShuJian Huang

Large language models (LLMs) inevitably make mistakes when performing step-by-step mathematical reasoning.

Mathematical Reasoning

Process-based Self-Rewarding Language Models

1 code implementation 5 Mar 2025 Shimao Zhang, Xiao Liu, Xin Zhang, Junxiao Liu, Zheheng Luo, ShuJian Huang, Yeyun Gong

Human-annotated preference data is used for training to further improve LLMs' performance, an approach that is constrained by the upper limit of human performance.

Mathematical Reasoning

Alleviating Distribution Shift in Synthetic Data for Machine Translation Quality Estimation

no code implementations 27 Feb 2025 Xiang Geng, Zhejian Lai, Jiajun Chen, Hao Yang, ShuJian Huang

Quality Estimation (QE) models evaluate the quality of machine translations without reference translations, serving as the reward models for the translation task.

Machine Translation Synthetic Data Generation +1

Generalizing From Short to Long: Effective Data Synthesis for Long-Context Instruction Tuning

1 code implementation 21 Feb 2025 Wenhao Zhu, Pinzhen Chen, Hanxu Hu, ShuJian Huang, Fei Yuan, Jiajun Chen, Alexandra Birch

The focus of research into modelling long context has been on how to model position, and there has been little investigation into other important aspects of language modelling such as instruction tuning.

Language Modelling

BenchMAX: A Comprehensive Multilingual Evaluation Suite for Large Language Models

1 code implementation 11 Feb 2025 Xu Huang, Wenhao Zhu, Hanxu Hu, Conghui He, Lei LI, ShuJian Huang, Fei Yuan

Previous multilingual benchmarks focus primarily on simple understanding tasks, but for large language models (LLMs), we emphasize proficiency in instruction following, reasoning, long context understanding, code generation, and so on.

Code Generation Instruction Following +1

Extend Adversarial Policy Against Neural Machine Translation via Unknown Token

no code implementations 21 Jan 2025 Wei Zou, ShuJian Huang, Jiajun Chen

Generating adversarial examples contributes to mainstream neural machine translation (NMT) robustness.

Machine Translation NMT +2

SLAM: Towards Efficient Multilingual Reasoning via Selective Language Alignment

1 code implementation 7 Jan 2025 Yuchun Fan, Yongyu Mu, Yilin Wang, Lei Huang, Junhao Ruan, Bei Li, Tong Xiao, ShuJian Huang, Xiaocheng Feng, Jingbo Zhu

Despite the significant improvements achieved by large language models (LLMs) in English reasoning tasks, these models continue to struggle with multilingual reasoning.

Representation Learning

Self-Evolution Knowledge Distillation for LLM-based Machine Translation

no code implementations 19 Dec 2024 Yuncheng Song, Liang Ding, Changtong Zan, ShuJian Huang

Knowledge distillation (KD) has shown great promise in transferring knowledge from larger teacher models to smaller student models.

Knowledge Distillation Machine Translation +2

DPLM-2: A Multimodal Diffusion Protein Language Model

1 code implementation 17 Oct 2024 Xinyou Wang, Zaixiang Zheng, Fei Ye, Dongyu Xue, ShuJian Huang, Quanquan Gu

In this paper, we introduce DPLM-2, a multimodal protein foundation model that extends the discrete diffusion protein language model (DPLM) to accommodate both sequences and structures.

Language Modeling model +2

Formality is Favored: Unraveling the Learning Preferences of Large Language Models on Data with Conflicting Knowledge

1 code implementation 7 Oct 2024 Jiahuan Li, Yiqing Cao, ShuJian Huang, Jiajun Chen

We find that pretrained LLMs establish learning preferences similar to humans, i.e., preferences towards formal texts and texts with fewer spelling errors, resulting in faster learning and more favorable treatment of knowledge in data with such features when facing conflicts.

MoE-LPR: Multilingual Extension of Large Language Models through Mixture-of-Experts with Language Priors Routing

2 code implementations 21 Aug 2024 Hao Zhou, Zhijun Wang, ShuJian Huang, Xin Huang, Xue Han, Junlan Feng, Chao Deng, Weihua Luo, Jiajun Chen

Then, the model reviews the knowledge of the original languages with replay data amounting to less than 1% of post-pretraining, where we incorporate language priors routing to better recover the abilities of the original languages.

Mixture-of-Experts

Hallu-PI: Evaluating Hallucination in Multi-modal Large Language Models within Perturbed Inputs

1 code implementation 2 Aug 2024 Peng Ding, Jingyu Wu, Jun Kuang, Dan Ma, Xuezhi Cao, Xunliang Cai, Shi Chen, Jiajun Chen, ShuJian Huang

Extensive experiments on 12 mainstream MLLMs, such as GPT-4V and Gemini-Pro Vision, demonstrate that these models exhibit significant hallucinations on Hallu-PI, which is not observed in unperturbed scenarios.

Attribute Hallucination +1

PreAlign: Boosting Cross-Lingual Transfer by Early Establishment of Multilingual Alignment

1 code implementation 23 Jul 2024 Jiahuan Li, ShuJian Huang, Aarron Ching, Xinyu Dai, Jiajun Chen

In this paper, we propose PreAlign, a framework that establishes multilingual alignment prior to language model pretraining.

Language Modeling Language Modelling +1

Multilingual Contrastive Decoding via Language-Agnostic Layers Skipping

1 code implementation 15 Jul 2024 Wenhao Zhu, Sizhe Liu, ShuJian Huang, Shuaijie She, Chris Wendler, Jiajun Chen

Decoding by contrasting layers (DoLa) is designed to improve the generation quality of large language models (LLMs) by contrasting the prediction probabilities between an early exit output (amateur logits) and the final output (expert logits).
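
A minimal sketch of this contrast step, assuming a LLaMA-style HuggingFace model whose `lm_head` and final norm can be reused as an early exit (DoLa additionally selects the amateur layer dynamically; that selection is not shown):

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def dola_next_token(model, input_ids, amateur_layer=8):
    """Contrast an early-exit ("amateur") distribution with the final
    ("expert") distribution. The layer index is illustrative; the
    unembedding path below assumes a LLaMA-style module layout."""
    out = model(input_ids, output_hidden_states=True)
    expert_logits = out.logits[:, -1, :]
    hidden = out.hidden_states[amateur_layer][:, -1, :]
    amateur_logits = model.lm_head(model.model.norm(hidden))  # logit-lens early exit
    # Tokens favored by the mature layers over the premature ones win.
    contrast = F.log_softmax(expert_logits, dim=-1) - F.log_softmax(amateur_logits, dim=-1)
    return contrast.argmax(dim=-1)
```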

Large Language Models are Limited in Out-of-Context Knowledge Reasoning

1 code implementation 11 Jun 2024 Peng Hu, Changjiang Gao, Ruiqi Gao, Jiajun Chen, ShuJian Huang

Using this dataset, we evaluated several LLMs and discovered that their proficiency in this aspect is limited, regardless of whether the knowledge is trained in separate or adjacent training settings.

Attribute Logical Reasoning +2

Extroversion or Introversion? Controlling The Personality of Your Large Language Models

1 code implementation 7 Jun 2024 Yanquan Chen, Zhen Wu, Junjie Guo, ShuJian Huang, Xinyu Dai

Our investigation revealed a hierarchy of effectiveness in control: Prompt > SFT > RLHF > Continual Pre-train.

Text Generation

Why Not Transform Chat Large Language Models to Non-English?

1 code implementation 22 May 2024 Xiang Geng, Ming Zhu, Jiahuan Li, Zhejian Lai, Wei Zou, Shuaijie She, Jiaxin Guo, Xiaofeng Zhao, Yinglu Li, Yuang Li, Chang Su, Yanqing Zhao, Xinglin Lyu, Min Zhang, Jiajun Chen, Hao Yang, ShuJian Huang

For the second issue, we propose a method comprising two synergistic components: low-rank adaptation for training to maintain the original LLM parameters, and recovery KD, which utilizes data generated by the chat LLM itself to recover the original knowledge from the frozen parameters.

Knowledge Distillation

Improved Paraphrase Generation via Controllable Latent Diffusion

1 code implementation 13 Apr 2024 Wei Zou, Ziyuan Zhuang, Xiang Geng, ShuJian Huang, Jia Liu, Jiajun Chen

Paraphrase generation strives to generate high-quality and diverse expressions of a given text, a domain where diffusion models excel.

Diversity Paraphrase Generation

Multilingual Pretraining and Instruction Tuning Improve Cross-Lingual Knowledge Alignment, But Only Shallowly

1 code implementation 6 Apr 2024 Changjiang Gao, Hongda Hu, Peng Hu, Jiajun Chen, Jixing Li, ShuJian Huang

In this paper, we propose CLiKA, a systematic framework to assess the cross-lingual knowledge alignment of LLMs at the Performance, Consistency and Conductivity levels, and explore the effect of multilingual pretraining and instruction tuning on the degree of alignment.

EDT: Improving Large Language Models' Generation by Entropy-based Dynamic Temperature Sampling

1 code implementation 21 Mar 2024 Shimao Zhang, Yu Bao, ShuJian Huang

However, a fixed temperature parameter is used in most cases, which may not always be an optimal choice for balancing generation quality and diversity.

Diversity
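
As an illustration of the dynamic-temperature idea: tie the sampling temperature to the entropy of the next-token distribution. The mapping and its constants below are assumptions, not EDT's published schedule:

```python
import torch
import torch.nn.functional as F

def entropy_guided_sample(logits, base_temp=1.0, alpha=0.5, floor=0.1):
    """Sample with a temperature that scales with predictive entropy:
    confident (low-entropy) steps sample near-greedily, uncertain steps
    keep more diversity. `alpha` and `floor` are illustrative."""
    probs = F.softmax(logits, dim=-1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
    max_entropy = torch.log(torch.tensor(float(logits.shape[-1])))
    temp = (base_temp * (entropy / max_entropy) ** alpha).clamp_min(floor)
    return torch.multinomial(F.softmax(logits / temp.unsqueeze(-1), dim=-1), 1)
```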

MT-PATCHER: Selective and Extendable Knowledge Distillation from Large Language Models for Machine Translation

1 code implementation 14 Mar 2024 Jiahuan Li, Shanbo Cheng, ShuJian Huang, Jiajun Chen

Large Language Models (LLMs) have demonstrated their strong ability in the field of machine translation (MT), yet they suffer from high computational cost and latency.

Knowledge Distillation Machine Translation +1

Measuring Meaning Composition in the Human Brain with Composition Scores from Large Language Models

1 code implementation 7 Mar 2024 Changjiang Gao, Jixing Li, Jiajun Chen, ShuJian Huang

Drawing on the key-value memory interpretation of transformer feed-forward network blocks, we introduce the Composition Score, a novel model-based metric designed to quantify the degree of meaning composition during sentence comprehension.

Sentence
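
For reference, the key-value memory reading of transformer feed-forward blocks that the Composition Score builds on (following Geva et al., 2021; the metric's own definition is in the paper and not reproduced here):

```latex
% Feed-forward block as a key-value memory (Geva et al., 2021):
% rows of K act as keys matched against the input x, and their
% activations f(x K^T) mix the corresponding value rows of V.
\mathrm{FFN}(x) = f\!\left(x K^{\top}\right) V
```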

Diffusion Language Models Are Versatile Protein Learners

1 code implementation 28 Feb 2024 Xinyou Wang, Zaixiang Zheng, Fei Ye, Dongyu Xue, ShuJian Huang, Quanquan Gu

This paper introduces the diffusion protein language model (DPLM), a versatile protein language model that demonstrates strong generative and predictive capabilities for protein sequences.

Language Modeling Protein Language Model

Cobra Effect in Reference-Free Image Captioning Metrics

no code implementations 18 Feb 2024 Zheng Ma, Changxin Wang, Yawen Ouyang, Fei Zhao, Jianbing Zhang, ShuJian Huang, Jiajun Chen

If a certain metric has flaws, it will be exploited by the model and reflected in the generated sentences.

Image Captioning

Question Translation Training for Better Multilingual Reasoning

1 code implementation 15 Jan 2024 Wenhao Zhu, ShuJian Huang, Fei Yuan, Shuaijie She, Jiajun Chen, Alexandra Birch

A typical solution is to translate instruction data into all languages of interest, and then train on the resulting multilingual data, which is called translate-training.

Mathematical Reasoning Translation

Multi-Candidate Speculative Decoding

1 code implementation 12 Jan 2024 Sen yang, ShuJian Huang, Xinyu Dai, Jiajun Chen

One way to speed them up is speculative decoding, which generates candidate segments (a sequence of tokens) from a fast draft model that is then verified in parallel by the target model.
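
For illustration, a minimal greedy, single-candidate version of this draft-then-verify loop (the paper's contribution, sampling multiple candidates from the draft model, is not shown; model calls assume a HuggingFace-style `.logits` output):

```python
import torch

@torch.no_grad()
def speculative_step(draft_model, target_model, prefix, k=4):
    """One draft-then-verify step: the small draft model proposes k
    tokens autoregressively, the large target model scores the whole
    candidate in a single parallel pass, and the longest agreeing
    prefix is kept. (The usual bonus token on full acceptance is
    omitted for brevity.)"""
    candidate = prefix
    for _ in range(k):  # cheap sequential drafting
        logits = draft_model(candidate).logits[:, -1, :]
        candidate = torch.cat([candidate, logits.argmax(-1, keepdim=True)], dim=-1)

    target_logits = target_model(candidate).logits  # one parallel verify pass
    accepted = prefix
    for i in range(prefix.shape[1], candidate.shape[1]):
        target_tok = target_logits[:, i - 1, :].argmax(-1, keepdim=True)
        accepted = torch.cat([accepted, target_tok], dim=-1)
        if not torch.equal(target_tok, candidate[:, i:i + 1]):
            break  # first disagreement: keep the target's token, discard the rest
    return accepted
```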

MAPO: Advancing Multilingual Reasoning through Multilingual Alignment-as-Preference Optimization

1 code implementation 12 Jan 2024 Shuaijie She, Wei Zou, ShuJian Huang, Wenhao Zhu, Xiang Liu, Xiang Geng, Jiajun Chen

To enhance reasoning abilities in non-dominant languages, we propose a Multilingual-Alignment-as-Preference Optimization framework (MAPO), aiming to align the reasoning processes in other languages with the dominant language.

Mathematical Reasoning

Lost in the Source Language: How Large Language Models Evaluate the Quality of Machine Translation

1 code implementation 12 Jan 2024 Xu Huang, Zhirui Zhang, Xiang Geng, Yichao Du, Jiajun Chen, ShuJian Huang

This study investigates how Large Language Models (LLMs) leverage source and reference data in machine translation evaluation task, aiming to better understand the mechanisms behind their remarkable performance in this task.

Machine Translation Translation

A Wolf in Sheep's Clothing: Generalized Nested Jailbreak Prompts can Fool Large Language Models Easily

1 code implementation 14 Nov 2023 Peng Ding, Jun Kuang, Dan Ma, Xuezhi Cao, Yunsen Xian, Jiajun Chen, ShuJian Huang

Finally, we analyze the failure of LLMs' defenses from the perspective of prompt execution priority, and propose corresponding defense strategies.

Exploring the Factual Consistency in Dialogue Comprehension of Large Language Models

no code implementations 13 Nov 2023 Shuaijie She, ShuJian Huang, Xingyun Wang, Yanke Zhou, Jiajun Chen

For answering the factual questions, which is more challenging, the average error rate of all evaluated LLMs is 36.1%.

Roles of Scaling and Instruction Tuning in Language Perception: Model vs. Human Attention

1 code implementation 29 Oct 2023 Changjiang Gao, ShuJian Huang, Jixing Li, Jiajun Chen

Recent large language models (LLMs) have revealed strong abilities to understand natural language.

IMTLab: An Open-Source Platform for Building, Evaluating, and Diagnosing Interactive Machine Translation Systems

1 code implementation 17 Oct 2023 Xu Huang, Zhirui Zhang, Ruize Gao, Yichao Du, Lemao Liu, Gouping Huang, Shuming Shi, Jiajun Chen, ShuJian Huang

We present IMTLab, an open-source end-to-end interactive machine translation (IMT) system platform that enables researchers to quickly build IMT systems with state-of-the-art models, perform an end-to-end evaluation, and diagnose the weaknesses of systems.

Machine Translation Translation

Dynamic Demonstrations Controller for In-Context Learning

1 code implementation 30 Sep 2023 Fei Zhao, Taotian Pang, Zhen Wu, Zheng Ma, ShuJian Huang, Xinyu Dai

Previous studies have revealed that ICL is sensitive to the selection and the ordering of demonstrations.

In-Context Learning Language Modeling +2

Only 5% Attention Is All You Need: Efficient Long-range Document-level Neural Machine Translation

no code implementations 25 Sep 2023 Zihan Liu, Zewei Sun, Shanbo Cheng, ShuJian Huang, Mingxuan Wang

Document-level Neural Machine Translation (DocNMT) has been proven crucial for handling discourse phenomena by introducing document-level context information.

Dimensionality Reduction +2

Extrapolating Large Language Models to Non-English by Aligning Languages

2 code implementations 9 Aug 2023 Wenhao Zhu, Yunzhe Lv, Qingxiu Dong, Fei Yuan, Jingjing Xu, ShuJian Huang, Lingpeng Kong, Jiajun Chen, Lei LI

We start from targeting individual languages by performing cross-lingual instruction-tuning (CoIT) on LLaMA, i.e., tuning it with translation task data and cross-lingual general task data to obtain cross-lingual models (x-LLaMAs), and formulate underlying scaling laws to investigate the advantages of using scalable translation data.

Translation

Food-500 Cap: A Fine-Grained Food Caption Benchmark for Evaluating Vision-Language Models

1 code implementation 6 Aug 2023 Zheng Ma, Mianzhi Pan, Wenhan Wu, Kanzhi Cheng, Jianbing Zhang, ShuJian Huang, Jiajun Chen

Experiments on our proposed datasets demonstrate that popular VLMs underperform in the food domain compared with their performance in the general domain.

BLEURT Has Universal Translations: An Analysis of Automatic Metrics by Minimum Risk Training

no code implementations 6 Jul 2023 Yiming Yan, Tao Wang, Chengqi Zhao, ShuJian Huang, Jiajun Chen, Mingxuan Wang

In this study, we systematically analyze and compare various mainstream and cutting-edge automatic metrics from the perspective of their guidance for training machine translation systems.

Machine Translation Sentence +1

INK: Injecting kNN Knowledge in Nearest Neighbor Machine Translation

1 code implementation 10 Jun 2023 Wenhao Zhu, Jingjing Xu, ShuJian Huang, Lingpeng Kong, Jiajun Chen

We propose an effective training framework INK to directly smooth the representation space via adjusting representations of kNN neighbors with a small number of new parameters.

Machine Translation Translation

Eliciting the Translation Ability of Large Language Models via Multilingual Finetuning with Translation Instructions

no code implementations 24 May 2023 Jiahuan Li, Hao Zhou, ShuJian Huang, Shanbo Cheng, Jiajun Chen

Secondly, we find that LLMs' ability to carry out translation instructions relies on the understanding of translation instructions and the alignment among different languages.

Language Modeling Language Modelling +1

Selective Knowledge Distillation for Non-Autoregressive Neural Machine Translation

no code implementations 31 Mar 2023 Min Liu, Yu Bao, Chengqi Zhao, ShuJian Huang

Benefiting from the sequence-level knowledge distillation, the Non-Autoregressive Transformer (NAT) achieves great success in neural machine translation tasks.

Knowledge Distillation Machine Translation +1
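
For context, sequence-level knowledge distillation in the NAT setting amounts to re-labeling the training corpus with the autoregressive teacher's outputs; a minimal sketch, with `translate` as an assumed beam-search helper:

```python
def distill_corpus(teacher, sources, translate):
    """Sequence-level KD (Kim & Rush, 2016): pair every source sentence
    with the teacher's beam-search translation instead of the original
    reference, yielding a less multimodal target distribution that
    parallel NAT decoding can fit more easily."""
    return [(src, translate(teacher, src)) for src in sources]
```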

kNN-BOX: A Unified Framework for Nearest Neighbor Generation

1 code implementation 27 Feb 2023 Wenhao Zhu, Qianfeng Zhao, Yunzhe Lv, ShuJian Huang, Siheng Zhao, Sizhe Liu, Jiajun Chen

Augmenting the base neural model with a token-level symbolic datastore is a novel generation paradigm and has achieved promising results in machine translation (MT).

Machine Translation Paraphrase Generation +4

CoP: Factual Inconsistency Detection by Controlling the Preference

1 code implementation 3 Dec 2022 Shuaijie She, Xiang Geng, ShuJian Huang, Jiajun Chen

To separate the preference for factual consistency, we propose an unsupervised framework named CoP by controlling the preference of the generation model with the help of prompts.

Abstractive Text Summarization

Helping the Weak Makes You Strong: Simple Multi-Task Learning Improves Non-Autoregressive Translators

1 code implementation 11 Nov 2022 Xinyou Wang, Zaixiang Zheng, ShuJian Huang

Recently, non-autoregressive (NAR) neural machine translation models have received increasing attention due to their efficient parallel decoding.

Decoder Machine Translation +1

What Knowledge Is Needed? Towards Explainable Memory for kNN-MT Domain Adaptation

1 code implementation 8 Nov 2022 Wenhao Zhu, ShuJian Huang, Yunzhe Lv, Xin Zheng, Jiajun Chen

kNN-MT presents a new paradigm for domain adaptation by building an external datastore, which usually saves all target language token occurrences in the parallel corpus.

Domain Adaptation NMT +1

Structure-Unified M-Tree Coding Solver for Math Word Problem

1 code implementation 22 Oct 2022 Bin Wang, Jiangzhou Ju, Yang Fan, Xinyu Dai, ShuJian Huang, Jiajun Chen

As one of the challenging NLP tasks, designing math word problem (MWP) solvers has attracted increasing research attention over the past few years.

Math

Probing Cross-modal Semantics Alignment Capability from the Textual Perspective

no code implementations 18 Oct 2022 Zheng Ma, Shi Zong, Mianzhi Pan, Jianbing Zhang, ShuJian Huang, Xinyu Dai, Jiajun Chen

In recent years, vision and language pre-training (VLP) models have advanced the state-of-the-art results in a variety of cross-modal downstream tasks.

Image Captioning Sentence

A Numerical Reasoning Question Answering System with Fine-grained Retriever and the Ensemble of Multiple Generators for FinQA

no code implementations 17 Jun 2022 Bin Wang, Jiangzhou Ju, Yunlin Mao, Xin-yu Dai, ShuJian Huang, Jiajun Chen

Here, we propose a numerical reasoning question answering system to answer numerical reasoning questions among financial text and table data sources, consisting of a retriever module, a generator module, and an ensemble module.

Question Answering

Analyzing the Intensity of Complaints on Social Media

1 code implementation Findings (NAACL) 2022 Ming Fang, Shi Zong, Jing Li, Xinyu Dai, ShuJian Huang, Jiajun Chen

Furthermore, we conduct a comprehensive linguistic analysis around complaints, including the connections between complaints and sentiment, and a cross-lingual comparison for complaints expressions used by Chinese and English speakers.

latent-GLAT: Glancing at Latent Variables for Parallel Text Generation

1 code implementation 5 Apr 2022 Yu Bao, Hao Zhou, ShuJian Huang, Dongqi Wang, Lihua Qian, Xinyu Dai, Jiajun Chen, Lei LI

Recently, parallel text generation has received widespread attention due to its success in generation efficiency.

Text Generation

Non-Parametric Online Learning from Human Feedback for Neural Machine Translation

1 code implementation 23 Sep 2021 Dongqi Wang, Haoran Wei, Zhirui Zhang, ShuJian Huang, Jun Xie, Jiajun Chen

We study the problem of online learning with human feedback in the human-in-the-loop machine translation, in which the human translators revise the machine-generated translations and then the corrected translations are used to improve the neural machine translation (NMT) system.

Machine Translation NMT +1

Non-Parametric Unsupervised Domain Adaptation for Neural Machine Translation

1 code implementation Findings (EMNLP) 2021 Xin Zheng, Zhirui Zhang, ShuJian Huang, Boxing Chen, Jun Xie, Weihua Luo, Jiajun Chen

Recently, kNN-MT has shown the promising capability of directly incorporating the pre-trained neural machine translation (NMT) model with domain-specific token-level k-nearest-neighbor (kNN) retrieval to achieve domain adaptation without retraining.

Machine Translation NMT +3
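
As a rough sketch of the retrieval-and-interpolation step shared by these kNN-MT variants, assuming the datastore is a tensor of decoder hidden states (keys) paired with target-token ids (values); the distance metric and constants are illustrative:

```python
import torch

def knn_mt_probs(nmt_probs, query, keys, values, k=8, tau=10.0, lam=0.5):
    """Mix the base NMT next-token distribution with a retrieval
    distribution built from the k nearest datastore entries
    (hidden state -> target token). A fixed interpolation weight
    `lam` is used here; adaptive kNN-MT instead predicts it per step.
    Shapes: nmt_probs [V], query [d], keys [N, d], values [N] (long)."""
    dists = torch.cdist(query.unsqueeze(0), keys).squeeze(0)  # L2 to all keys
    knn_dists, knn_idx = dists.topk(k, largest=False)         # k nearest
    weights = torch.softmax(-knn_dists / tau, dim=-1)         # retrieval weights
    knn_probs = torch.zeros_like(nmt_probs)
    knn_probs.scatter_add_(0, values[knn_idx], weights)       # aggregate by token id
    return lam * knn_probs + (1.0 - lam) * nmt_probs
```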

Energy-based Unknown Intent Detection with Data Manipulation

2 code implementations Findings (ACL) 2021 Yawen Ouyang, Jiasheng Ye, Yu Chen, Xinyu Dai, ShuJian Huang, Jiajun Chen

Unknown intent detection aims to identify the out-of-distribution (OOD) utterance whose intent has never appeared in the training set.

Intent Detection
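
A minimal sketch of an energy score for flagging OOD utterances, following the standard formulation of Liu et al. (2020); whether this matches the paper's exact formulation and its data-manipulation components is an assumption:

```python
import torch

def energy_score(logits, temperature=1.0):
    """Energy-based OOD score: E(x) = -T * logsumexp(logits / T).
    In-distribution utterances tend to receive lower energy, so
    utterances whose score exceeds a validation-tuned threshold
    are flagged as unknown intents."""
    return -temperature * torch.logsumexp(logits / temperature, dim=-1)
```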

Adaptive Nearest Neighbor Machine Translation

3 code implementations ACL 2021 Xin Zheng, Zhirui Zhang, Junliang Guo, ShuJian Huang, Boxing Chen, Weihua Luo, Jiajun Chen

On four benchmark machine translation datasets, we demonstrate that the proposed method is able to effectively filter out the noises in retrieval results and significantly outperforms the vanilla kNN-MT model.

Machine Translation NMT +2

DirectQE: Direct Pretraining for Machine Translation Quality Estimation

no code implementations 15 May 2021 Qu Cui, ShuJian Huang, Jiahuan Li, Xiang Geng, Zaixiang Zheng, Guoping Huang, Jiajun Chen

However, we argue that there are gaps between the predictor and the estimator in both data quality and training objectives, which preclude QE models from benefiting more directly from the large number of parallel corpora.

Machine Translation Translation

Dual Side Deep Context-aware Modulation for Social Recommendation

1 code implementation 16 Mar 2021 Bairan Fu, Wenming Zhang, GuangNeng Hu, Xinyu Dai, ShuJian Huang, Jiajun Chen

Specifically, we first propose a novel graph neural network to model the social relation and collaborative relation, and on top of high-order relations, a dual side deep context-aware modulation is introduced to capture the friends' information and item attraction.

Graph Neural Network Relation

FGraDA: A Dataset and Benchmark for Fine-Grained Domain Adaptation in Machine Translation

1 code implementation LREC 2022 Wenhao Zhu, ShuJian Huang, Tong Pu, Pingxuan Huang, Xu Zhang, Jian Yu, Wei Chen, Yanfeng Wang, Jiajun Chen

Previous research for adapting a general neural machine translation (NMT) model into a specific domain usually neglects the diversity in translation within the same domain, which is a core problem for domain adaptation in real-world scenarios.

Autonomous Vehicles Diversity +4

A Simple and Effective Approach to Robust Unsupervised Bilingual Dictionary Induction

no code implementations COLING 2020 Yanyang Li, Yingfeng Luo, Ye Lin, Quan Du, Huizhen Wang, ShuJian Huang, Tong Xiao, Jingbo Zhu

Our experiments show that this simple method does not hamper the performance of similar language pairs and achieves an accuracy of 13.64–55.53% between English and four distant languages, i.e., Chinese, Japanese, Vietnamese and Thai.

Dimensionality Reduction Self-Learning

Opinion Transmission Network for Jointly Improving Aspect-oriented Opinion Words Extraction and Sentiment Classification

no code implementations 1 Nov 2020 Chengcan Ying, Zhen Wu, Xinyu Dai, ShuJian Huang, Jiajun Chen

In this paper, we propose a novel joint model, Opinion Transmission Network (OTN), to exploit the potential bridge between ALSC and AOWE to achieve the goal of facilitating them simultaneously.

Aspect-Based Sentiment Analysis Aspect-Based Sentiment Analysis (ABSA) +3

Non-linear Learning for Statistical Machine Translation

no code implementations IJCNLP 2015 Shujian Huang, Huadong Chen, Xin-yu Dai, Jia-Jun Chen

The linear combination assumes that all the features are in a linear relationship and constrains each feature to interact with the remaining features in a linear manner, which might limit the expressive power of the model and lead to an under-fit model on the current data.

Machine Translation Translation
