no code implementations • WMT (EMNLP) 2020 • Qu Cui, Xiang Geng, ShuJian Huang, Jiajun Chen
This paper describes our system for the sentence-level and word-level Quality Estimation shared tasks of WMT20.
1 code implementation • COLING 2022 • Yawen Ouyang, Zhen Wu, Xinyu Dai, ShuJian Huang, Jiajun Chen
In this paper, we propose a more desirable task, multi-label unknown intent detection, which detects whether an utterance contains any unknown intent, where each utterance may contain multiple intents.
1 code implementation • COLING 2022 • Fei Zhao, Zhen Wu, Siyu Long, Xinyu Dai, ShuJian Huang, Jiajun Chen
Target-oriented multimodal sentiment classification (TMSC) is a new subtask of aspect-based sentiment analysis, which aims to determine the sentiment polarity of the opinion target mentioned in a (sentence, image) pair.
Aspect-Based Sentiment Analysis (ABSA)
1 code implementation • ACL 2022 • Yu Bao, Hao Zhou, ShuJian Huang, Dongqi Wang, Lihua Qian, Xinyu Dai, Jiajun Chen, Lei LI
Recently, parallel text generation has received widespread attention due to its gains in generation efficiency.
no code implementations • ACL 2022 • Yanling Xiao, Lemao Liu, Guoping Huang, Qu Cui, ShuJian Huang, Shuming Shi, Jiajun Chen
In this work, we propose BiTIIMT, a novel Bilingual Text-Infilling system for Interactive neural Machine Translation.
no code implementations • EMNLP 2021 • Ran Wang, Xi’ao Su, Siyu Long, Xinyu Dai, ShuJian Huang, Jiajun Chen
However, the simple extension of meta-learning approaches to multi-label classification is sub-optimal for LMTC tasks due to the long-tailed label distribution and the coexistence of few- and zero-shot scenarios.
1 code implementation • 25 May 2025 • Yi Wang, Junxiao Liu, Shimao Zhang, Jiajun Chen, ShuJian Huang
Current large language models (LLMs) typically adopt a fixed reasoning strategy, either simple or complex, for all questions regardless of their difficulty.
no code implementations • 22 May 2025 • Renfei Dang, ShuJian Huang, Jiajun Chen
Through further interpretability experiments, we find that this behavior is largely driven by the model's excessive attention to the input section, which amplifies the influence of internal bias on its decision-making process.
1 code implementation • 17 May 2025 • Peng Ding, Jun Kuang, ZongYu Wang, Xuezhi Cao, Xunliang Cai, Jiajun Chen, ShuJian Huang
Large Language Models (LLMs) have shown impressive capabilities across various tasks but remain vulnerable to meticulously crafted jailbreak attacks.
1 code implementation • 20 Apr 2025 • Wei Zou, Sen yang, Yu Bao, ShuJian Huang, Jiajun Chen, Shanbo Cheng
The rise of Large Language Models (LLMs) has reshaped machine translation (MT), but multilingual MT still relies heavily on parallel data for supervised fine-tuning (SFT), facing challenges like data scarcity for low-resource languages and catastrophic forgetting.
no code implementations • 15 Apr 2025 • Changjiang Gao, Hankun Lin, ShuJian Huang, Xin Huang, Xue Han, Junlan Feng, Chao Deng, Jiajun Chen
The ability to retrieve context across languages is a fundamental aspect of cross-lingual alignment in large language models (LLMs): the model extracts context information in one language based on requests in another language.
1 code implementation • 9 Apr 2025 • Yuxin Wang, Yiran Guo, Yining Zheng, Zhangyue Yin, Shuo Chen, Jie Yang, Jiajun Chen, Xuanjing Huang, Xipeng Qiu
To bridge this gap, we introduce FamilyTool, a novel benchmark grounded in a family-based knowledge graph (KG) that simulates personalized, multi-hop tool use scenarios.
no code implementations • 9 Apr 2025 • Jiajun Chen, Hongpeng Yin, Yifu Yang
Finally, the approach is applied at test time, rapidly adapting to domain variations through meta-training tasks on support sets and thereby improving the model's ability to transfer domain knowledge.
1 code implementation • 27 Mar 2025 • Shuaijie She, Junxiao Liu, Yifeng Liu, Jiajun Chen, Xin Huang, ShuJian Huang
Large language models (LLMs) inevitably make mistakes when performing step-by-step mathematical reasoning.
no code implementations • 16 Mar 2025 • Kanzhi Cheng, Wenpo Song, Jiaxin Fan, Zheng Ma, Qiushi Sun, Fangzhi Xu, Chenyang Yan, Nuo Chen, Jianbing Zhang, Jiajun Chen
Image captioning has been a longstanding challenge in vision-language research.
no code implementations • 27 Feb 2025 • Xiang Geng, Zhejian Lai, Jiajun Chen, Hao Yang, ShuJian Huang
Quality Estimation (QE) models evaluate the quality of machine translations without reference translations, serving as the reward models for the translation task.
1 code implementation • 21 Feb 2025 • Wenhao Zhu, Pinzhen Chen, Hanxu Hu, ShuJian Huang, Fei Yuan, Jiajun Chen, Alexandra Birch
Research into modelling long context has focused on how to model position, with little investigation into other important aspects of language modelling, such as instruction tuning.
no code implementations • 21 Jan 2025 • Wei Zou, ShuJian Huang, Jiajun Chen
Generating adversarial examples contributes to the robustness of mainstream neural machine translation (NMT) models.
no code implementations • 2 Jan 2025 • Liang He, Yougang Chu, Zhen Wu, Jianbing Zhang, Xinyu Dai, Jiajun Chen
This paper addresses the issue of entity bias in relation extraction tasks, where models tend to rely on entity mentions rather than context.
no code implementations • 5 Dec 2024 • Jiajun Chen, Yik-Cheung Tam
For each mathematical problem, we develop a Prolog solution that includes problem-specific predicates and intermediate predicates derived from these background operators, ensuring that each solution adheres to the defined operator set.
1 code implementation • 7 Oct 2024 • Jiahuan Li, Yiqing Cao, ShuJian Huang, Jiajun Chen
We find that pretrained LLMs establish learning preferences similar to humans, i.e., preferences for formal texts and texts with fewer spelling errors, resulting in faster learning and more favorable treatment of knowledge in data with such features when facing conflicts.
2 code implementations • 21 Aug 2024 • Hao Zhou, Zhijun Wang, ShuJian Huang, Xin Huang, Xue Han, Junlan Feng, Chao Deng, Weihua Luo, Jiajun Chen
Then, the model reviews the knowledge of the original languages with replay data amounting to less than 1% of post-pretraining, where we incorporate language-prior routing to better recover the abilities of the original languages.
1 code implementation • 2 Aug 2024 • Peng Ding, Jingyu Wu, Jun Kuang, Dan Ma, Xuezhi Cao, Xunliang Cai, Shi Chen, Jiajun Chen, ShuJian Huang
Extensive experiments on 12 mainstream MLLMs, such as GPT-4V and Gemini-Pro Vision, demonstrate that these models exhibit significant hallucinations on Hallu-PI that are not observed in unperturbed scenarios.
1 code implementation • 23 Jul 2024 • Jiahuan Li, ShuJian Huang, Aarron Ching, Xinyu Dai, Jiajun Chen
In this paper, we propose PreAlign, a framework that establishes multilingual alignment prior to language model pretraining.
1 code implementation • 15 Jul 2024 • Wenhao Zhu, Sizhe Liu, ShuJian Huang, Shuaijie She, Chris Wendler, Jiajun Chen
Decoding by contrasting layers (DoLa) is designed to improve the generation quality of large language models (LLMs) by contrasting the prediction probabilities of an early-exit output (amateur logits) and the final output (expert logits).
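A minimal sketch of that contrast step over toy NumPy logits; the plausibility threshold `alpha` and the toy numbers are illustrative assumptions, not the paper's exact layer-selection procedure:

```python
import numpy as np

def log_softmax(x):
    x = x - x.max()
    return x - np.log(np.exp(x).sum())

def dola_scores(expert_logits, amateur_logits, alpha=0.1):
    """Contrast final-layer (expert) and early-exit (amateur) predictions.

    Tokens whose expert probability falls below alpha * max expert
    probability are masked out (a plausibility constraint); remaining
    tokens are ranked by the log-probability difference."""
    expert_lp = log_softmax(expert_logits)
    amateur_lp = log_softmax(amateur_logits)
    plausible = expert_lp >= np.log(alpha) + expert_lp.max()
    return np.where(plausible, expert_lp - amateur_lp, -np.inf)

# Toy vocabulary of 5 tokens (illustrative numbers only).
expert = np.array([2.0, 1.5, 0.3, -1.0, -2.0])
amateur = np.array([2.0, 0.2, 0.5, -1.0, -2.0])
print(dola_scores(expert, amateur))  # token 1 gets the largest boost
```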
1 code implementation • 11 Jun 2024 • Peng Hu, Changjiang Gao, Ruiqi Gao, Jiajun Chen, ShuJian Huang
Using this dataset, we evaluated several LLMs and discovered that their proficiency in this aspect is limited, regardless of whether the knowledge is trained in separate or adjacent training settings.
2 code implementations • 22 May 2024 • Shimao Zhang, Changjiang Gao, Wenhao Zhu, Jiajun Chen, Xin Huang, Xue Han, Junlan Feng, Chao Deng, ShuJian Huang
Recently, Large Language Models (LLMs) have shown impressive language capabilities.
1 code implementation • 22 May 2024 • Xiang Geng, Ming Zhu, Jiahuan Li, Zhejian Lai, Wei Zou, Shuaijie She, Jiaxin Guo, Xiaofeng Zhao, Yinglu Li, Yuang Li, Chang Su, Yanqing Zhao, Xinglin Lyu, Min Zhang, Jiajun Chen, Hao Yang, ShuJian Huang
For the second issue, we propose a method comprising two synergistic components: low-rank adaptation for training to maintain the original LLM parameters, and recovery KD, which utilizes data generated by the chat LLM itself to recover the original knowledge from the frozen parameters.
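A minimal NumPy sketch of the first component, a low-rank-adapted linear layer whose frozen weight keeps the original parameters intact; the shapes, rank, and scaling here are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

class LoRALinear:
    """y = W x + scale * B (A x), with W frozen and only the low-rank
    factors A and B trainable, so the original LLM parameters survive
    fine-tuning untouched."""
    def __init__(self, d_in, d_out, r=4, scale=1.0):
        self.W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight
        self.A = rng.normal(size=(r, d_in)) * 0.01   # trainable down-projection
        self.B = np.zeros((d_out, r))                # trainable up-projection, zero-init
        self.scale = scale

    def __call__(self, x):
        return self.W @ x + self.scale * (self.B @ (self.A @ x))

layer = LoRALinear(d_in=8, d_out=8)
x = rng.normal(size=8)
# Zero-initialised B means the adapted layer starts identical to the frozen one.
assert np.allclose(layer(x), layer.W @ x)
```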
no code implementations • 2 May 2024 • Wenhao Zhu, ShuJian Huang, Fei Yuan, Cheng Chen, Jiajun Chen, Alexandra Birch
Bridging the significant gap between large language models' English and non-English performance presents a great challenge.
1 code implementation • 13 Apr 2024 • Wei Zou, Ziyuan Zhuang, Xiang Geng, ShuJian Huang, Jia Liu, Jiajun Chen
Paraphrase generation strives to generate high-quality and diverse expressions of a given text, a domain where diffusion models excel.
1 code implementation • 6 Apr 2024 • Changjiang Gao, Hongda Hu, Peng Hu, Jiajun Chen, Jixing Li, ShuJian Huang
In this paper, we propose CLiKA, a systematic framework to assess the cross-lingual knowledge alignment of LLMs at the performance, consistency, and conductivity levels, and explore the effect of multilingual pretraining and instruction tuning on the degree of alignment.
1 code implementation • 23 Mar 2024 • Lingxing Kong, Yougang Chu, Zheng Ma, Jianbing Zhang, Liang He, Jiajun Chen
Relation extraction is a critical task in the field of natural language processing with numerous real-world applications.
1 code implementation • 14 Mar 2024 • Jiahuan Li, Shanbo Cheng, ShuJian Huang, Jiajun Chen
Large Language Models (LLMs) have demonstrated their strong ability in the field of machine translation (MT), yet they suffer from high computational cost and latency.
1 code implementation • 7 Mar 2024 • Changjiang Gao, Jixing Li, Jiajun Chen, ShuJian Huang
Drawing on the key-value memory interpretation of transformer feed-forward network blocks, we introduce the Composition Score, a novel model-based metric designed to quantify the degree of meaning composition during sentence comprehension.
2 code implementations • 28 Feb 2024 • Jiacheng Lin, Jiajun Chen, Kunyu Peng, Xuan He, Zhiyong Li, Rainer Stiefelhagen, Kailun Yang
This paper introduces the task of Auditory Referring Multi-Object Tracking (AR-MOT), which dynamically tracks specific objects in a video sequence based on audio expressions, and poses a challenging problem in autonomous driving.
no code implementations • 18 Feb 2024 • Zheng Ma, Changxin Wang, Yawen Ouyang, Fei Zhao, Jianbing Zhang, ShuJian Huang, Jiajun Chen
If a certain metric has flaws, it will be exploited by the model and reflected in the generated sentences.
1 code implementation • 15 Jan 2024 • Wenhao Zhu, ShuJian Huang, Fei Yuan, Shuaijie She, Jiajun Chen, Alexandra Birch
A typical solution is to translate instruction data into all languages of interest, and then train on the resulting multilingual data, which is called translate-training.
1 code implementation • 12 Jan 2024 • Sen yang, ShuJian Huang, Xinyu Dai, Jiajun Chen
One way to speed them up is speculative decoding, which generates candidate segments (a sequence of tokens) from a fast draft model that is then verified in parallel by the target model.
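The draft-then-verify loop can be sketched with toy stand-in models; `draft_model` and `target_model` are hypothetical functions, and a real system verifies all draft positions in one parallel forward pass of the target model:

```python
import random

random.seed(0)

def draft_model(ctx):    # fast, approximate next-token stand-in
    return (sum(ctx) + 1) % 10

def target_model(ctx):   # slow, authoritative stand-in (disagrees sometimes)
    return (sum(ctx) + 1) % 10 if random.random() < 0.8 else 0

def speculative_step(ctx, k=4):
    """Draft k candidate tokens, then keep the longest prefix the target
    agrees with plus one corrected token, so every step emits at least
    one target-approved token (greedy-matching sketch)."""
    draft, c = [], list(ctx)
    for _ in range(k):
        t = draft_model(tuple(c))
        draft.append(t)
        c.append(t)
    accepted, c = [], list(ctx)
    for t in draft:
        v = target_model(tuple(c))
        if v != t:
            accepted.append(v)   # target's correction ends the step
            break
        accepted.append(t)
        c.append(t)
    return accepted

print(speculative_step((1, 2, 3)))
```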
1 code implementation • 12 Jan 2024 • Shuaijie She, Wei Zou, ShuJian Huang, Wenhao Zhu, Xiang Liu, Xiang Geng, Jiajun Chen
To enhance reasoning abilities in non-dominant languages, we propose a Multilingual-Alignment-as-Preference Optimization framework (MAPO), aiming to align the reasoning processes in other languages with the dominant language.
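A sketch of the preference step assuming a DPO-style objective, where the "chosen" reasoning path is the one better aligned with the dominant language (e.g., as scored by a translation model); this illustrates the general recipe rather than MAPO's exact loss:

```python
import math

def preference_loss(logp_chosen, logp_rejected, ref_chosen, ref_rejected, beta=0.1):
    """-log sigmoid(beta * margin): increase the policy's preference for
    the aligned reasoning path relative to a frozen reference model."""
    margin = (logp_chosen - ref_chosen) - (logp_rejected - ref_rejected)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# Toy log-probabilities: the aligned answer is already slightly preferred.
print(round(preference_loss(-3.0, -3.5, -3.2, -3.3, beta=0.5), 4))
```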
1 code implementation • 12 Jan 2024 • Xu Huang, Zhirui Zhang, Xiang Geng, Yichao Du, Jiajun Chen, ShuJian Huang
This study investigates how Large Language Models (LLMs) leverage source and reference data in the machine translation evaluation task, aiming to better understand the mechanisms behind their remarkable performance in this task.
1 code implementation • 14 Nov 2023 • Peng Ding, Jun Kuang, Dan Ma, Xuezhi Cao, Yunsen Xian, Jiajun Chen, ShuJian Huang
Finally, we analyze the failure of LLMs defense from the perspective of prompt execution priority, and propose corresponding defense strategies.
no code implementations • 13 Nov 2023 • Shuaijie She, ShuJian Huang, Xingyun Wang, Yanke Zhou, Jiajun Chen
For the more challenging factual questions, the average error rate across all evaluated LLMs is 36.1%.
1 code implementation • 29 Oct 2023 • Changjiang Gao, ShuJian Huang, Jixing Li, Jiajun Chen
Recent large language models (LLMs) have revealed strong abilities to understand natural language.
1 code implementation • 17 Oct 2023 • Xu Huang, Zhirui Zhang, Ruize Gao, Yichao Du, Lemao Liu, Guoping Huang, Shuming Shi, Jiajun Chen, ShuJian Huang
We present IMTLab, an open-source end-to-end interactive machine translation (IMT) system platform that enables researchers to quickly build IMT systems with state-of-the-art models, perform an end-to-end evaluation, and diagnose the weakness of systems.
no code implementations • 8 Oct 2023 • Zihan Yu, Liang He, Zhen Wu, Xinyu Dai, Jiajun Chen
Chain-of-Thought (CoT), a step-wise and coherent reasoning chain, shows its impressive strength when used as a prompting strategy for large language models (LLMs).
1 code implementation • 23 Sep 2023 • Xiang Geng, Zhejian Lai, Yu Zhang, Shimin Tao, Hao Yang, Jiajun Chen, ShuJian Huang
We generate pseudo MQM data using parallel data from the WMT translation task.
no code implementations • 20 Sep 2023 • Jie Wang, Hanzhu Chen, Qitan Lv, Zhihao Shi, Jiajun Chen, Huarui He, Hongtao Xie, Defu Lian, Enhong Chen, Feng Wu
This implies that semantic correlations hold great potential for the entity-independent inductive link prediction task.
2 code implementations • 9 Aug 2023 • Wenhao Zhu, Yunzhe Lv, Qingxiu Dong, Fei Yuan, Jingjing Xu, ShuJian Huang, Lingpeng Kong, Jiajun Chen, Lei LI
We start from targeting individual languages by performing cross-lingual instruction-tuning (CoIT) on LLaMA, i.e., tuning it with translation task data and cross-lingual general task data to obtain cross-lingual models (x-LLaMAs), and formulate underlying scaling laws to investigate the advantages of using scalable translation data.
no code implementations • 8 Aug 2023 • Jiajun Chen, Jiacheng Lin, Guojin Zhong, Haolong Fu, Ke Nai, Kailun Yang, Zhiyong Li
Next, we propose an Expression Alignment (EA) mechanism for audio and text.
1 code implementation • 6 Aug 2023 • Zheng Ma, Mianzhi Pan, Wenhan Wu, Kanzhi Cheng, Jianbing Zhang, ShuJian Huang, Jiajun Chen
Experiments on our proposed datasets demonstrate that popular VLMs underperform in the food domain compared with their performance in the general domain.
1 code implementation • 2 Aug 2023 • Kanzhi Cheng, Zheng Ma, Shi Zong, Jianbing Zhang, Xinyu Dai, Jiajun Chen
Generating visually grounded image captions with specific linguistic styles using unpaired stylistic corpora is a challenging task, especially since we expect stylized captions with a wide variety of stylistic patterns.
no code implementations • 6 Jul 2023 • Yiming Yan, Tao Wang, Chengqi Zhao, ShuJian Huang, Jiajun Chen, Mingxuan Wang
In this study, we systematically analyze and compare various mainstream and cutting-edge automatic metrics from the perspective of their guidance for training machine translation systems.
1 code implementation • 10 Jun 2023 • Wenhao Zhu, Jingjing Xu, ShuJian Huang, Lingpeng Kong, Jiajun Chen
We propose an effective training framework INK to directly smooth the representation space via adjusting representations of kNN neighbors with a small number of new parameters.
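As a rough illustration, the sketch below trains one small matrix to pull retrieved-neighbor representations toward their query, smoothing the local space; the objective and update rule are hypothetical simplifications, not INK's actual training recipe:

```python
import numpy as np

rng = np.random.default_rng(0)

def smooth_step(W, query, neighbors, lr=0.05):
    """One gradient step on mean ||W @ n - query||^2 over the retrieved
    neighbors, nudging their adjusted representations toward the query."""
    diff = neighbors @ W.T - query                  # rows: W @ n_i - query
    grad = 2.0 * diff.T @ neighbors / len(neighbors)
    return W - lr * grad

d = 4
W = np.eye(d)                                       # the few new parameters
query = rng.normal(size=d)
neighbors = query + 0.5 * rng.normal(size=(8, d))   # noisy nearby datastore entries

before = np.linalg.norm(neighbors @ W.T - query, axis=1).mean()
for _ in range(20):
    W = smooth_step(W, query, neighbors)
after = np.linalg.norm(neighbors @ W.T - query, axis=1).mean()
print(round(float(before), 3), "->", round(float(after), 3))  # distance shrinks
```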
no code implementations • 24 May 2023 • Jiahuan Li, Hao Zhou, ShuJian Huang, Shanbo Cheng, Jiajun Chen
Secondly, we find that LLMs' ability to carry out translation instructions relies on their understanding of those instructions and on the alignment among different languages.
1 code implementation • 7 May 2023 • Jiacheng Lin, Jiajun Chen, Kailun Yang, Alina Roitberg, Siyu Li, Zhiyong Li, Shutao Li
Interactive Image Segmentation (IIS) has emerged as a promising technique for decreasing annotation time.
2 code implementations • 10 Apr 2023 • Wenhao Zhu, Hongyi Liu, Qingxiu Dong, Jingjing Xu, ShuJian Huang, Lingpeng Kong, Jiajun Chen, Lei LI
Large language models (LLMs) have demonstrated remarkable potential in handling multilingual machine translation (MMT).
1 code implementation • 27 Feb 2023 • Wenhao Zhu, Qianfeng Zhao, Yunzhe Lv, ShuJian Huang, Siheng Zhao, Sizhe Liu, Jiajun Chen
Augmenting the base neural model with a token-level symbolic datastore is a novel generation paradigm and has achieved promising results in machine translation (MT).
1 code implementation • 3 Dec 2022 • Shuaijie She, Xiang Geng, ShuJian Huang, Jiajun Chen
To separate the preference for factual consistency, we propose an unsupervised framework named CoP that controls the preference of the generation model with the help of prompts.
1 code implementation • 8 Nov 2022 • Wenhao Zhu, ShuJian Huang, Yunzhe Lv, Xin Zheng, Jiajun Chen
kNN-MT presents a new paradigm for domain adaptation by building an external datastore, which usually saves all target language token occurrences in the parallel corpus.
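A toy version of that pipeline, building a (hidden state, target token) datastore and interpolating the retrieval distribution with the NMT distribution; the dimensions, temperature, and interpolation weight are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Datastore: one (hidden state -> target token) entry per token occurrence
# in the parallel corpus. Toy 4-dim states over a 6-token vocabulary.
keys = rng.normal(size=(100, 4))
values = rng.integers(0, 6, size=100)

def knn_distribution(query, k=8, temperature=10.0, vocab=6):
    """Softmax over negative L2 distances of the k nearest entries,
    aggregated per target token."""
    dist = np.linalg.norm(keys - query, axis=1)
    idx = np.argsort(dist)[:k]
    w = np.exp(-dist[idx] / temperature)
    p = np.zeros(vocab)
    for i, wi in zip(idx, w):
        p[values[i]] += wi
    return p / p.sum()

query = rng.normal(size=4)                 # decoder hidden state at this step
p_nmt = np.full(6, 1 / 6)                  # stand-in NMT output distribution
lam = 0.5                                  # interpolation weight
p_final = lam * knn_distribution(query) + (1 - lam) * p_nmt
print(p_final.round(3))
```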
1 code implementation • 22 Oct 2022 • Bin Wang, Jiangzhou Ju, Yang Fan, Xinyu Dai, ShuJian Huang, Jiajun Chen
As one of the challenging NLP tasks, designing math word problem (MWP) solvers has attracted increasing research attention for the past few years.
no code implementations • 18 Oct 2022 • Zheng Ma, Shi Zong, Mianzhi Pan, Jianbing Zhang, ShuJian Huang, Xinyu Dai, Jiajun Chen
In recent years, vision and language pre-training (VLP) models have advanced the state-of-the-art results in a variety of cross-modal downstream tasks.
no code implementations • 2 Oct 2022 • Zhihuan Kuang, Shi Zong, Jianbing Zhang, Jiajun Chen, Hongfu Liu
In this paper, we consider a novel research problem: music-to-text synaesthesia.
1 code implementation • 7 Jul 2022 • Kaiming Kuang, Li Zhang, Jingyu Li, Hongwei Li, Jiajun Chen, Bo Du, Jiancheng Yang
The automatic reconstruction of pulmonary segments by ImPulSe is quantitatively accurate and visually appealing.
no code implementations • 17 Jun 2022 • Bin Wang, Jiangzhou Ju, Yunlin Mao, Xin-yu Dai, ShuJian Huang, Jiajun Chen
Here, we propose a numerical reasoning question answering system that answers numerical reasoning questions over financial text and table data sources, consisting of a retriever module, a generator module, and an ensemble module.
no code implementations • 16 Jun 2022 • Xueliang Wang, Jiajun Chen, Feng Wu, Jie Wang
By enforcing entities' embeddings to stay close to their associated prototypes' embeddings, our approach effectively encourages global semantic similarity among entities connected by the same relation, even when they are far apart in the KG.
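A hypothetical hinge-style version of that constraint, pulling entity embeddings toward the prototype of their shared relation; the margin and the mean-vector prototype are assumptions for illustration, not the paper's exact objective:

```python
import numpy as np

rng = np.random.default_rng(0)

def prototype_loss(entity_emb, proto_emb, margin=0.2):
    """Penalise entities that drift farther than `margin` from the
    prototype of a relation they participate in, so entities sharing a
    relation stay globally close even when far apart in the KG."""
    d = np.linalg.norm(entity_emb - proto_emb, axis=1)
    return np.maximum(0.0, d - margin).mean()

entities = rng.normal(size=(5, 8))     # 5 entities linked by one relation
prototype = entities.mean(axis=0)      # e.g., the relation's mean embedding
print(round(float(prototype_loss(entities, prototype)), 4))
```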
1 code implementation • Findings (NAACL) 2022 • Ming Fang, Shi Zong, Jing Li, Xinyu Dai, ShuJian Huang, Jiajun Chen
Furthermore, we conduct a comprehensive linguistic analysis around complaints, including the connections between complaints and sentiment, and a cross-lingual comparison for complaints expressions used by Chinese and English speakers.
1 code implementation • NeurIPS 2021 • Zhanqiu Zhang, Jie Wang, Jiajun Chen, Shuiwang Ji, Feng Wu
To address this challenge, we propose a novel query embedding model, namely Cone Embeddings (ConE), which is the first geometry-based QE model that can handle all the FOL operations, including conjunction, disjunction, and negation.
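The negation operator has a clean geometric reading under a sector-cone parameterisation (a per-dimension axis angle and aperture): the complement cone flips the axis and takes the complementary aperture. A sketch of just that piece; the learned conjunction and disjunction operators are omitted:

```python
import numpy as np

def cone_negation(axis, aperture):
    """Complement of a sector cone: rotate each axis angle by pi and
    replace each aperture psi with 2*pi - psi."""
    return np.mod(axis + np.pi, 2 * np.pi), 2 * np.pi - aperture

axis = np.array([0.5, 1.0])        # per-dimension axis angles (radians)
aperture = np.array([0.4, 3.0])    # per-dimension apertures (radians)
print(cone_negation(axis, aperture))
```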
1 code implementation • 23 Sep 2021 • Dongqi Wang, Haoran Wei, Zhirui Zhang, ShuJian Huang, Jun Xie, Jiajun Chen
We study the problem of online learning with human feedback in human-in-the-loop machine translation, in which human translators revise the machine-generated translations and the corrected translations are then used to improve the neural machine translation (NMT) system.
1 code implementation • Findings (EMNLP) 2021 • Xin Zheng, Zhirui Zhang, ShuJian Huang, Boxing Chen, Jun Xie, Weihua Luo, Jiajun Chen
Recently, kNN-MT has shown the promising capability of directly incorporating the pre-trained neural machine translation (NMT) model with domain-specific token-level k-nearest-neighbor (kNN) retrieval to achieve domain adaptation without retraining.
no code implementations • ACL 2021 • Jiahuan Li, Yutong Shen, ShuJian Huang, Xinyu Dai, Jiajun Chen
Subword segmentation algorithms have been a de facto choice when building neural machine translation systems.
2 code implementations • Findings (ACL) 2021 • Yawen Ouyang, Jiasheng Ye, Yu Chen, Xinyu Dai, ShuJian Huang, Jiajun Chen
Unknown intent detection aims to identify the out-of-distribution (OOD) utterance whose intent has never appeared in the training set.
no code implementations • 12 Jul 2021 • Jianyu Cai, Jiajun Chen, Taoxing Pan, Zhanqiu Zhang, Jie Wang
To address this challenge, we propose a framework that integrates three components -- a basic model ComplEx-CMRC, a rule miner AMIE 3, and an inference model to predict missing links.
3 code implementations • ACL 2021 • Xin Zheng, Zhirui Zhang, Junliang Guo, ShuJian Huang, Boxing Chen, Weihua Luo, Jiajun Chen
On four benchmark machine translation datasets, we demonstrate that the proposed method is able to effectively filter out the noises in retrieval results and significantly outperforms the vanilla kNN-MT model.
no code implementations • 15 May 2021 • Qu Cui, ShuJian Huang, Jiahuan Li, Xiang Geng, Zaixiang Zheng, Guoping Huang, Jiajun Chen
However, we argue that there are gaps between the predictor and the estimator in both data quality and training objectives, which prevent QE models from benefiting more directly from large parallel corpora.
1 code implementation • NeurIPS 2021 • Zaixiang Zheng, Hao Zhou, ShuJian Huang, Jiajun Chen, Jingjing Xu, Lei LI
Thus REDER enables reversible machine translation by simply flipping the input and output ends.
1 code implementation • NAACL 2021 • Yu Bao, ShuJian Huang, Tong Xiao, Dongqi Wang, Xinyu Dai, Jiajun Chen
Non-autoregressive Transformer is a promising text generation model.
Ranked #7 on Machine Translation on WMT2014 German-English
1 code implementation • 16 Mar 2021 • Bairan Fu, Wenming Zhang, GuangNeng Hu, Xinyu Dai, ShuJian Huang, Jiajun Chen
Specifically, we first propose a novel graph neural network to model the social relation and the collaborative relation; on top of these high-order relations, a dual-side deep context-aware modulation is introduced to capture friends' information and item attraction.
1 code implementation • 5 Mar 2021 • Jiajun Chen, Huarui He, Feng Wu, Jie Wang
TACT is inspired by the observation that the semantic correlation between two relations is highly correlated to their topological structure in knowledge graphs.
1 code implementation • LREC 2022 • Wenhao Zhu, ShuJian Huang, Tong Pu, Pingxuan Huang, Xu Zhang, Jian Yu, Wei Chen, Yanfeng Wang, Jiajun Chen
Previous research for adapting a general neural machine translation (NMT) model into a specific domain usually neglects the diversity in translation within the same domain, which is a core problem for domain adaptation in real-world scenarios.
no code implementations • 1 Nov 2020 • Chengcan Ying, Zhen Wu, Xinyu Dai, ShuJian Huang, Jiajun Chen
In this paper, we propose a novel joint model, Opinion Transmission Network (OTN), to exploit the potential bridge between ALSC and AOWE to achieve the goal of facilitating them simultaneously.
Aspect-Based Sentiment Analysis (ABSA)
no code implementations • 1 Nov 2020 • Zhen Wu, Chengcan Ying, Xinyu Dai, ShuJian Huang, Jiajun Chen
To facilitate the research of ABSA, NLPCC 2020 Shared Task 2 releases a new large-scale Multi-Aspect Multi-Sentiment (MAMS) dataset.
Aspect-Based Sentiment Analysis (ABSA)
1 code implementation • Findings (ACL) 2022 • Zewei Sun, Mingxuan Wang, Hao Zhou, Chengqi Zhao, ShuJian Huang, Jiajun Chen, Lei LI
This paper does not aim at introducing a novel model for document-level neural machine translation.
1 code implementation • 8 Oct 2020 • Jiancheng Yang, Jiajun Chen, Kaiming Kuang, Tiancheng Lin, Junjun He, Bingbing Ni
Furthermore, we evaluate the proposed method on an in-house, retrospective dataset of real-world non-small cell lung cancer patients under anti-PD-1 immunotherapy.
no code implementations • 25 Sep 2019 • Yu Bao, Hao Zhou, Jiangtao Feng, Mingxuan Wang, ShuJian Huang, Jiajun Chen, Lei LI
However, position modeling of output words is an essential problem in non-autoregressive text generation.