no code implementations • Findings (EMNLP) 2021 • Xin Huang, Jiajun Zhang, Chengqing Zong
Inspired by the findings of (CITATION) that entities are the most informative elements in an image, we propose an explicit entity-level cross-modal learning approach that aims to augment the entity representation.
no code implementations • COLING 2022 • Qian Wang, Jiajun Zhang
However, the existing clustering methods based on language similarity cannot handle the asymmetric problem in multilingual NMT, i.e., one translation task A can benefit from another translation task B, but task B will be harmed by task A.
1 code implementation • Findings (ACL) 2022 • Shuxian Zou, Shaonan Wang, Jiajun Zhang, Chengqing Zong
More importantly, it demonstrates that it is feasible to decode a certain word within a large vocabulary from its neural brain activity.
1 code implementation • 29 May 2023 • Wen Yang, Chong Li, Jiajun Zhang, Chengqing Zong
Second, we continue training the model with a large-scale parallel dataset that covers 102 natural languages.
no code implementations • 12 Jan 2023 • Shaonan Wang, Nai Ding, Nan Lin, Jiajun Zhang, Chengqing Zong
Language understanding is a key scientific issue in the fields of cognitive and computer science.
no code implementations • 6 Dec 2022 • Yang Zhao, Junnan Zhu, Lu Xiang, Jiajun Zhang, Yu Zhou, FeiFei Zhai, Chengqing Zong
To alleviate catastrophic forgetting (CF), we investigate knowledge distillation based life-long learning methods.
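As a rough illustration of the distillation idea (a hypothetical sketch, not the paper's exact objective), the new model can be trained on new-domain data while a KL term keeps its output distribution close to that of the frozen old model, which limits forgetting:

import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_lifelong_loss(student_logits, teacher_logits, targets, alpha=0.5, T=2.0):
    # Cross-entropy on the new-domain data plus a distillation term toward the old model.
    # Shapes (assumed): logits are (num_tokens, vocab_size); targets is (num_tokens,).
    p_new = softmax(student_logits)
    ce = -np.log(p_new[np.arange(len(targets)), targets] + 1e-9).mean()
    p_teacher = softmax(teacher_logits / T)
    p_student = softmax(student_logits / T)
    kl = (p_teacher * (np.log(p_teacher + 1e-9) - np.log(p_student + 1e-9))).sum(-1).mean()
    return (1 - alpha) * ce + alpha * (T ** 2) * kl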
1 code implementation • 18 Oct 2022 • Jiajun Zhang, BoYu Chen, Zhilong Ji, Jinfeng Bai, Zonghai Hu
This paper describes the approach we have taken in the challenge.
1 code implementation • 18 Oct 2022 • Chen Wang, Yuchen Liu, Boxing Chen, Jiajun Zhang, Wei Luo, Zhongqiang Huang, Chengqing Zong
Existing zero-shot methods fail to align the two modalities of speech and text into a shared semantic space, resulting in much worse performance compared to the supervised ST methods.
Automatic Speech Recognition (ASR)
2 code implementations • ACL 2022 • Haitao Lin, Junnan Zhu, Lu Xiang, Yu Zhou, Jiajun Zhang, Chengqing Zong
Therefore, we propose a novel role interaction enhanced method for role-oriented dialogue summarization.
no code implementations • 26 Mar 2022 • Sha Yuan, Hanyu Zhao, Shuai Zhao, Jiahong Leng, Yangxiao Liang, Xiaozhi Wang, Jifan Yu, Xin Lv, Zhou Shao, Jiaao He, Yankai Lin, Xu Han, Zhenghao Liu, Ning Ding, Yongming Rao, Yizhao Gao, Liang Zhang, Ming Ding, Cong Fang, Yisen Wang, Mingsheng Long, Jing Zhang, Yinpeng Dong, Tianyu Pang, Peng Cui, Lingxiao Huang, Zheng Liang, HuaWei Shen, HUI ZHANG, Quanshi Zhang, Qingxiu Dong, Zhixing Tan, Mingxuan Wang, Shuo Wang, Long Zhou, Haoran Li, Junwei Bao, Yingwei Pan, Weinan Zhang, Zhou Yu, Rui Yan, Chence Shi, Minghao Xu, Zuobai Zhang, Guoqiang Wang, Xiang Pan, Mengjie Li, Xiaoyu Chu, Zijun Yao, Fangwei Zhu, Shulin Cao, Weicheng Xue, Zixuan Ma, Zhengyan Zhang, Shengding Hu, Yujia Qin, Chaojun Xiao, Zheni Zeng, Ganqu Cui, Weize Chen, Weilin Zhao, Yuan YAO, Peng Li, Wenzhao Zheng, Wenliang Zhao, Ziyi Wang, Borui Zhang, Nanyi Fei, Anwen Hu, Zenan Ling, Haoyang Li, Boxi Cao, Xianpei Han, Weidong Zhan, Baobao Chang, Hao Sun, Jiawen Deng, Chujie Zheng, Juanzi Li, Lei Hou, Xigang Cao, Jidong Zhai, Zhiyuan Liu, Maosong Sun, Jiwen Lu, Zhiwu Lu, Qin Jin, Ruihua Song, Ji-Rong Wen, Zhouchen Lin, LiWei Wang, Hang Su, Jun Zhu, Zhifang Sui, Jiajun Zhang, Yang Liu, Xiaodong He, Minlie Huang, Jian Tang, Jie Tang
With the rapid development of deep learning, training Big Models (BMs) for multiple downstream tasks becomes a popular paradigm.
1 code implementation • ACL 2022 • Yu Lu, Jiali Zeng, Jiajun Zhang, Shuangzhi Wu, Mu Li
Confidence estimation aims to quantify the confidence of the model prediction, providing an expectation of success.
1 code implementation • 18 Jan 2022 • Feihu Jin, Jinliang Lu, Jiajun Zhang, Chengqing Zong
Specifically, we suppose that each learnable prompt token has a different contribution to different instances, and we learn the contribution by calculating the relevance score between an instance and each prompt token.
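A minimal numpy sketch of that idea under assumed shapes (the relevance function and scaling here are illustrative, not the paper's exact design):

import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def instance_aware_prompt(prompt_tokens, instance_vec):
    # prompt_tokens: (num_prompt, dim) learnable prompt embeddings
    # instance_vec:  (dim,) pooled representation of one input instance
    scores = prompt_tokens @ instance_vec        # relevance of each prompt token to this instance
    weights = softmax(scores)                    # per-token contribution, normalized
    return weights[:, None] * prompt_tokens      # instance-specific prompt fed to the model

rng = np.random.default_rng(0)
print(instance_aware_prompt(rng.normal(size=(5, 8)), rng.normal(size=8)).shape)  # (5, 8)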
no code implementations • 27 Dec 2021 • Yuan YAO, Qingxiu Dong, Jian Guan, Boxi Cao, Zhengyan Zhang, Chaojun Xiao, Xiaozhi Wang, Fanchao Qi, Junwei Bao, Jinran Nie, Zheni Zeng, Yuxian Gu, Kun Zhou, Xuancheng Huang, Wenhao Li, Shuhuai Ren, Jinliang Lu, Chengqiang Xu, Huadong Wang, Guoyang Zeng, Zile Zhou, Jiajun Zhang, Juanzi Li, Minlie Huang, Rui Yan, Xiaodong He, Xiaojun Wan, Xin Zhao, Xu sun, Yang Liu, Zhiyuan Liu, Xianpei Han, Erhong Yang, Zhifang Sui, Maosong Sun
We argue that for general-purpose language intelligence evaluation, the benchmark itself needs to be comprehensive and systematic.
2 code implementations • 27 Dec 2021 • Qian Wang, Jiajun Zhang
Further analyses reveal that the parameter sharing configuration obtained by our method correlates well with the linguistic proximities.
no code implementations • 26 Oct 2021 • Meng Chen, Chen Geng, Dongdong Wang, Jiajun Zhang, Ruoyu Di, Fengmei Li, Zhiyong Zhou, Sirong Piao, Yuxin Li, Yaikang Dai
The segmentation metrics we used include the Dice similarity coefficient (DSC), Hausdorff distance (HD), and volumetric similarity (VS).
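For reference, these metrics can be computed from binary masks roughly as follows (a sketch assuming the standard definitions of DSC, HD, and VS):

import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dsc(pred, gt):
    # Dice similarity coefficient between two binary masks.
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + 1e-9)

def vs(pred, gt):
    # Volumetric similarity: 1 - |V_pred - V_gt| / (V_pred + V_gt).
    vp, vg = pred.sum(), gt.sum()
    return 1.0 - abs(vp - vg) / (vp + vg + 1e-9)

def hd(pred, gt):
    # Symmetric Hausdorff distance between foreground voxel coordinates.
    p, g = np.argwhere(pred), np.argwhere(gt)
    return max(directed_hausdorff(p, g)[0], directed_hausdorff(g, p)[0])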
no code implementations • NeurIPS Workshop AI4Scien 2021 • Shuxian Zou, Shaonan Wang, Jiajun Zhang, Chengqing Zong
However, most of the existing studies have focused on discriminating which one in two stimuli corresponds to the given brain image, which is far from directly generating text from neural activities.
1 code implementation • Findings (EMNLP) 2021 • Jinliang Lu, Jiajun Zhang
Back-translation (BT) has become one of the de facto components in unsupervised neural machine translation (UNMT), and it explicitly makes UNMT have translation ability.
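Conceptually, one BT round looks like the sketch below (the model objects and their translate / train_step methods are placeholders, not a real API):

def back_translation_round(model_t2s, model_s2t, mono_target):
    # Translate target-side monolingual sentences back into the source language,
    # then update the source->target model on the resulting pseudo-parallel pairs.
    # `translate` and `train_step` stand in for real decoding and optimization steps.
    synthetic_pairs = [(model_t2s.translate(y), y) for y in mono_target]
    for x_hat, y in synthetic_pairs:
        model_s2t.train_step(source=x_hat, target=y)
    return model_s2t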
2 code implementations • EMNLP 2021 • Haitao Lin, Liqun Ma, Junnan Zhu, Lu Xiang, Yu Zhou, Jiajun Zhang, Chengqing Zong
Therefore, in this paper, we introduce a novel Chinese dataset for Customer Service Dialogue Summarization (CSDS).
1 code implementation • 19 Aug 2021 • Haitao Lin, Lu Xiang, Yu Zhou, Jiajun Zhang, Chengqing Zong
We propose two strategies for the fine-tuning process: value-based and context-based augmentation.
no code implementations • ACL 2021 • Yu Lu, Jiali Zeng, Jiajun Zhang, Shuangzhi Wu, Mu Li
Attention mechanisms have achieved substantial improvements in neural machine translation by dynamically selecting relevant inputs for different predictions.
1 code implementation • 5 Jul 2021 • Xin Cai, BoYu Chen, Jiabei Zeng, Jiajun Zhang, Yunjia Sun, Xiao Wang, Zhilong Ji, Xiao Liu, Xilin Chen, Shiguang Shan
This paper presents a method for gaze estimation from face images.
2 code implementations • 1 Jul 2021 • Jing Liu, Xinxin Zhu, Fei Liu, Longteng Guo, Zijia Zhao, Mingzhen Sun, Weining Wang, Hanqing Lu, Shiyu Zhou, Jiajun Zhang, Jinqiao Wang
In this paper, we propose an Omni-perception Pre-Trainer (OPT) for cross-modal understanding and generation, by jointly modeling visual, text and audio resources.
Ranked #1 on Image Retrieval on Localized Narratives
1 code implementation • ACL 2021 • Yangyifan Xu, Yijin Liu, Fandong Meng, Jiajun Zhang, Jinan Xu, Jie zhou
Recently, token-level adaptive training has achieved promising improvement in machine translation, where the cross-entropy loss function is adjusted by assigning different training weights to different tokens, in order to alleviate the token imbalance problem.
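In essence the loss is a per-token weighted cross-entropy, roughly as in this sketch (the weighting scheme is assumed, e.g. up-weighting rare tokens):

import numpy as np

def weighted_token_cross_entropy(logits, targets, token_weights):
    # logits: (T, V); targets, token_weights: (T,). Each target token contributes
    # its negative log-likelihood scaled by its own training weight.
    z = logits - logits.max(axis=-1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    nll = -log_probs[np.arange(len(targets)), targets]
    return (token_weights * nll).sum() / (token_weights.sum() + 1e-9)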
1 code implementation • 24 Feb 2021 • Ke-Jia Chen, Jiajun Zhang, Linpu Jiang, Yunyun Wang, Yuxuan Dai
This paper proposes a pre-training method on dynamic graph neural networks (PT-DGNN), which uses dynamic attributed graph generation tasks to simultaneously learn the structure, semantics, and evolution features of the graph.
no code implementations • COLING 2020 • Jingyuan Sun, Shaonan Wang, Jiajun Zhang, Chengqing Zong
The framework is based on language models and can be smoothly built with different language model architectures.
no code implementations • COLING 2020 • Yang Zhao, Lu Xiang, Junnan Zhu, Jiajun Zhang, Yu Zhou, Chengqing Zong
Previous studies combining knowledge graph (KG) with neural machine translation (NMT) have two problems: i) Knowledge under-utilization: they only focus on the entities that appear in both KG and training sentence pairs, making much knowledge in KG unable to be fully utilized.
no code implementations • COLING 2020 • Haoran Li, Junnan Zhu, Jiajun Zhang, Xiaodong He, Chengqing Zong
Thus, we propose a multimodal selective gate network that considers reciprocal relationships between textual and multi-level visual features, including global image descriptor, activation grids, and object proposals, to select highlights of the event when encoding the source sentence.
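The core of such a gate can be sketched as below (parameter shapes and the fusion form are assumptions for illustration, not the paper's exact network):

import numpy as np

def selective_gate(h_text, v_global, W_t, W_v, b):
    # h_text: (T, d) textual hidden states; v_global: (d,) pooled visual feature.
    # The sigmoid gate keeps image-relevant words and suppresses the rest.
    gate = 1.0 / (1.0 + np.exp(-(h_text @ W_t + v_global @ W_v + b)))  # (T, d)
    return gate * h_text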
no code implementations • Asian Chapter of the Association for Computational Linguistics 2020 • Qian Wang, Jiajun Zhang, Lemao Liu, Guoping Huang, Chengqing Zong
We propose a touch-based editing method for translation, which is more flexible than traditional keyboard-and-mouse-based translation post-editing.
no code implementations • 13 Nov 2020 • Jiajun Zhang, Pengyuan Ren, Jianmin Li
Pedestrian Attribute Recognition (PAR) has aroused extensive attention due to its important role in video surveillance scenarios.
no code implementations • 28 Oct 2020 • Yuchen Liu, Junnan Zhu, Jiajun Zhang, Chengqing Zong
End-to-end speech translation aims to translate speech in one language into text in another language in an end-to-end manner.
no code implementations • EMNLP 2020 • Xiaomian Kang, Yang Zhao, Jiajun Zhang, Chengqing Zong
Specifically, we introduce a selection module that is independent of the translation module to score each candidate context sentence.
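A toy version of such a selection module (cosine similarity here stands in for the learned scorer; the names are illustrative):

import numpy as np

def select_context(current_vec, candidate_vecs, k=2):
    # Score every candidate context sentence against the current sentence
    # and keep the top-k for the translation module.
    cur = current_vec / (np.linalg.norm(current_vec) + 1e-9)
    cands = candidate_vecs / (np.linalg.norm(candidate_vecs, axis=1, keepdims=True) + 1e-9)
    scores = cands @ cur
    top = np.argsort(-scores)[:k]
    return sorted(top.tolist()), scores[top]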
no code implementations • WS 2020 • Long Zhou, Jiajun Zhang, Cheng-qing Zong
In this work, we propose a novel Encoder-NAD-AD framework for NMT, aiming at boosting AT with global information produced by NAT model.
no code implementations • WS 2020 • Qian Wang, Yuchen Liu, Cong Ma, Yu Lu, Yining Wang, Long Zhou, Yang Zhao, Jiajun Zhang, Cheng-qing Zong
This paper describes CASIA's system for the IWSLT 2020 open domain translation task.
no code implementations • ACL 2020 • Junnan Zhu, Yu Zhou, Jiajun Zhang, Cheng-qing Zong
Cross-lingual summarization aims at summarizing a document in one language (e.g., Chinese) into another language (e.g., English).
1 code implementation • 13 Apr 2020 • Jiajun Zhang, Cheng-qing Zong
Machine translation (MT) is a technique that leverages computers to translate human languages automatically.
1 code implementation • 16 Dec 2019 • Yuchen Liu, Jiajun Zhang, Hao Xiong, Long Zhou, Zhongjun He, Hua Wu, Haifeng Wang, Cheng-qing Zong
Speech-to-text translation (ST), which translates source language speech into target language text, has attracted intensive attention in recent years.
Automatic Speech Recognition (ASR)
1 code implementation • 1 Dec 2019 • Dongchao Zheng, Weitian Li, Zhenghao Zhu, Chenxi Shan, Jiajun Zhang, Linfeng Xiao, Xiaoli Lian, Dan Hu
Cosmic ray electron (CRE) acceleration and cooling are important physical processes in astrophysics.
High Energy Astrophysical Phenomena • Cosmology and Nongalactic Astrophysics
no code implementations • 25 Nov 2019 • Hao Wang, Bing Wang, Jianyong Duan, Jiajun Zhang
Spelling error detection serves as a crucial preprocessing step in many natural language processing applications.
no code implementations • IJCNLP 2019 • Yining Wang, Jiajun Zhang, Long Zhou, Yuchen Liu, Cheng-qing Zong
In this paper, we introduce a novel interactive approach to translate a source language into two different languages simultaneously and interactively.
1 code implementation • IJCNLP 2019 • Junnan Zhu, Qian Wang, Yining Wang, Yu Zhou, Jiajun Zhang, Shaonan Wang, Cheng-qing Zong
Moreover, we propose to further improve NCLS by incorporating two related tasks, monolingual summarization and machine translation, into the training process of CLS under multi-task learning.
1 code implementation • IJCNLP 2019 • Weikang Wang, Jiajun Zhang, Qian Li, Cheng-qing Zong, Zhifei Li
In this paper, we focus on identity fraud detection in loan applications and propose to solve this problem with a novel interactive dialogue system which consists of two modules.
no code implementations • 1 Jul 2019 • Kexin Wang, Yu Zhou, Shaonan Wang, Jiajun Zhang, Cheng-qing Zong
Recent work has shown that memory modules are crucial for the generalization ability of neural networks on learning simple algorithms.
no code implementations • ACL 2019 • Yining Wang, Long Zhou, Jiajun Zhang, FeiFei Zhai, Jingfang Xu, Cheng-qing Zong
We verify our methods on various translation scenarios, including one-to-many, many-to-many and zero-shot.
no code implementations • 23 Jun 2019 • Long Zhou, Jiajun Zhang, Cheng-qing Zong, Heng Yu
The encoder-decoder framework has achieved promising progress for many sequence generation tasks, such as neural machine translation and text summarization.
1 code implementation • ACL 2019 • Weikang Wang, Jiajun Zhang, Qian Li, Mei-Yuh Hwang, Cheng-qing Zong, Zhifei Li
Clarifying user needs is essential for existing task-oriented dialogue systems.
no code implementations • ACL 2019 • He Bai, Yu Zhou, Jiajun Zhang, Cheng-qing Zong
Dialogue contexts are proven helpful in the spoken language understanding (SLU) system and they are typically encoded with explicit memory representations.
2 code implementations • TACL 2019 • Long Zhou, Jiajun Zhang, Cheng-qing Zong
In this paper, we introduce a synchronous bidirectional neural machine translation (SB-NMT) that predicts its outputs using left-to-right and right-to-left decoding simultaneously and interactively, in order to leverage both of the history and future information at the same time.
Ranked #27 on Machine Translation on WMT2014 English-German
no code implementations • 17 Apr 2019 • Yuchen Liu, Hao Xiong, Zhongjun He, Jiajun Zhang, Hua Wu, Haifeng Wang, Cheng-qing Zong
End-to-end speech translation (ST), which directly translates from source language speech into target language text, has attracted intensive attention in recent years.
1 code implementation • 24 Feb 2019 • Jiajun Zhang, Long Zhou, Yang Zhao, Cheng-qing Zong
In this work, we propose a synchronous bidirectional inference model to generate outputs using both left-to-right and right-to-left decoding simultaneously and interactively.
no code implementations • 1 Nov 2018 • Long Zhou, Yuchen Liu, Jiajun Zhang, Cheng-qing Zong, Guoping Huang
Current Neural Machine Translation (NMT) employs a language-specific encoder to represent the source sentence and adopts a language-specific decoder to generate target translation.
no code implementations • EMNLP 2018 • Junnan Zhu, Haoran Li, Tianshang Liu, Yu Zhou, Jiajun Zhang, Cheng-qing Zong
In this paper, we propose a novel task, multimodal summarization with multimodal output (MSMO).
1 code implementation • EMNLP 2018 • Shaonan Wang, Jiajun Zhang, Cheng-qing Zong
In this paper we address the problem of learning multimodal word representations by integrating textual, visual and auditory inputs.
no code implementations • EMNLP 2018 • Yang Zhao, Jiajun Zhang, Zhongjun He, Cheng-qing Zong, Hua Wu
One of the weaknesses of Neural Machine Translation (NMT) is in handling low-frequency and ambiguous words, which we refer to as troublesome words.
no code implementations • EMNLP 2018 • Yining Wang, Jiajun Zhang, FeiFei Zhai, Jingfang Xu, Cheng-qing Zong
However, previous studies show that one-to-many translation based on this framework cannot perform on par with the individually trained models.
no code implementations • EMNLP 2018 • Weikang Wang, Jiajun Zhang, Han Zhang, Mei-Yuh Hwang, Cheng-qing Zong, Zhifei Li
Specifically, the "student" is an extended dialog manager based on a new ontology, and the "teacher" consists of existing resources used to guide the learning process of the "student".
no code implementations • 19 Aug 2018 • He Bai, Yu Zhou, Jiajun Zhang, Liang Zhao, Mei-Yuh Hwang, Cheng-qing Zong
This paper focuses on the language transferring task given a tiny in-domain parallel SLU corpus.
Cultural Vocal Bursts Intensity Prediction
General Classification
no code implementations • COLING 2018 • Haoran Li, Junnan Zhu, Jiajun Zhang, Cheng-qing Zong
In this paper, we investigate the sentence summarization task that produces a summary from a source sentence.
Ranked #7 on Text Summarization on DUC 2004 Task 1
no code implementations • COLING 2018 • He Bai, Yu Zhou, Jiajun Zhang, Liang Zhao, Mei-Yuh Hwang, Cheng-qing Zong
An SLU corpus is a monolingual corpus with domain/intent/slot labels.
Cultural Vocal Bursts Intensity Prediction
General Classification
no code implementations • 25 May 2018 • Yang Zhao, Yining Wang, Jiajun Zhang, Cheng-qing Zong
Neural Machine Translation (NMT) has recently drawn much attention due to its promising translation performance.
no code implementations • 2 Jan 2018 • Shaonan Wang, Jiajun Zhang, Cheng-qing Zong
Multimodal models have been proven to outperform text-based models on learning semantic word representations.
no code implementations • WS 2017 • Guoping Huang, Jiajun Zhang, Yu Zhou, Cheng-qing Zong
Terms extensively exist in specific domains, and term translation plays a critical role in domain-specific machine translation (MT) tasks.
no code implementations • 15 Nov 2017 • Shaonan Wang, Jiajun Zhang, Nan Lin, Cheng-qing Zong
Considering that multimodal models are originally motivated by human concept representations, we assume that correlating multimodal representations with brain-based semantics would interpret their inner properties to answer the above questions.
Learning Semantic Representations
Natural Language Understanding
1 code implementation • 13 Nov 2017 • Yining Wang, Long Zhou, Jiajun Zhang, Cheng-qing Zong
Our experiments show that the subword model performs best for Chinese-to-English translation with a relatively small vocabulary, while the hybrid word-character model is most suitable for English-to-Chinese translation.
no code implementations • 7 Nov 2017 • Jiajun Zhang, Jinkun Tao, Jiangtao Huangfu, Zhiguo Shi
In this paper, a Doppler Radar based hand gesture recognition system using convolutional neural networks is proposed.
no code implementations • 6 Nov 2017 • Jiajun Zhang, Zhiguo Shi
Traditional vision-based hand gesture recognition systems are limited in dark environments.
no code implementations • IJCNLP 2017 • Yining Wang, Yang Zhao, Jiajun Zhang, Cheng-qing Zong, Zhengshan Xue
While neural machine translation (NMT) has become the new paradigm, the parameter optimization requires large-scale parallel data which is scarce in many domains and language pairs.
no code implementations • EMNLP 2017 • Haoran Li, Junnan Zhu, Cong Ma, Jiajun Zhang, Cheng-qing Zong
In this work, we propose an extractive Multi-modal Summarization (MMS) method which can automatically generate a textual summary given a set of documents, images, audios and videos related to a specific topic.
Automatic Speech Recognition (ASR)
Document Summarization
no code implementations • EMNLP 2017 • Shaonan Wang, Jiajun Zhang, Cheng-qing Zong
We introduce a novel mixed character-word architecture to improve Chinese sentence representations, by utilizing rich semantic information of word internal structures.
no code implementations • 30 Aug 2017 • Long Zhou, Jiajun Zhang, Cheng-qing Zong
The attention model has become a standard component in neural machine translation (NMT) and it guides translation process by selectively focusing on parts of the source sentence when predicting each target word.
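The mechanism reduces to computing a normalized weight for every source position, as in this minimal dot-product sketch (a simplification of the attention variants the paper studies):

import numpy as np

def attention(decoder_state, encoder_states):
    # encoder_states: (src_len, d); decoder_state: (d,).
    scores = encoder_states @ decoder_state              # relevance of each source position
    scores -= scores.max()
    weights = np.exp(scores) / np.exp(scores).sum()      # attention distribution
    context = weights @ encoder_states                   # weighted sum of encoder states
    return context, weights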
no code implementations • ACL 2017 • Long Zhou, Wenpeng Hu, Jiajun Zhang, Cheng-qing Zong
Neural machine translation (NMT) has become a new approach to machine translation and generates much more fluent results compared to statistical machine translation (SMT).
no code implementations • 3 Jan 2017 • Huijia Wu, Jiajun Zhang, Cheng-qing Zong
To simplify the stacked architecture, we propose a framework called shortcut block, which is a marriage of the gating mechanism and shortcuts, while discarding the self-connected part in the LSTM cell.
no code implementations • COLING 2016 • Wenpeng Hu, Jiajun Zhang, Nan Zheng
Recent work on learning word representations has been applied successfully to many NLP applications, such as sentiment analysis and question answering.
1 code implementation • 3 Nov 2016 • Jiajun Zhang, Yue-Lin Sming Tsai, Jui-Lin Kuo, Kingman Cheung, Ming-Chung Chu
The existence of the solitonic core reveals the non-linear effect of quantum pressure and impacts the structure formation in the FDM model.
Cosmology and Nongalactic Astrophysics • Astrophysics of Galaxies • High Energy Physics - Phenomenology
no code implementations • 24 Oct 2016 • Jiajun Zhang, Cheng-qing Zong
Neural Machine Translation (NMT) has become the new state-of-the-art in several language pairs.
no code implementations • COLING 2016 • Huijia Wu, Jiajun Zhang, Cheng-qing Zong
In this paper, we empirically explore the effects of various kinds of skip connections in stacked bidirectional LSTMs for sequential tagging.
no code implementations • 10 Oct 2016 • Huijia Wu, Jiajun Zhang, Cheng-qing Zong
These motivate us to build a supertagger with a dynamic window approach, which can be treated as an attention mechanism on the local contexts.
no code implementations • 29 Sep 2016 • Shaonan Wang, Jiajun Zhang, Cheng-qing Zong
Recently, much progress has been made in learning general-purpose sentence representations that can be used across domains.
no code implementations • LREC 2018 • Xiao-Qing Li, Jiajun Zhang, Cheng-qing Zong
Neural machine translation (NMT) has become the new state of the art and achieves promising translation results using a simple encoder-decoder neural network.
no code implementations • 7 Jul 2016 • Xiao-Qing Li, Jiajun Zhang, Cheng-qing Zong
In order to control computational complexity, neural machine translation (NMT) systems convert all rare words outside the vocabulary into a single unk symbol.
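That preprocessing step amounts to truncating the vocabulary and mapping everything else to a single symbol, e.g.:

from collections import Counter

def replace_rare_words(sentences, vocab_size=30000, unk="<unk>"):
    # Keep only the most frequent words; all out-of-vocabulary words become <unk>.
    counts = Counter(w for s in sentences for w in s.split())
    vocab = {w for w, _ in counts.most_common(vocab_size)}
    return [" ".join(w if w in vocab else unk for w in s.split()) for s in sentences]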
no code implementations • LREC 2016 • Yang Liu, Jiajun Zhang, Cheng-qing Zong, Yating Yang, Xi Zhou
Existing discourse research only focuses on the monolingual languages and the inconsistency between languages limits the power of the discourse theory in multilingual applications such as machine translation.
no code implementations • 27 Feb 2015 • Jiajun Zhang
With the sentence-level feature representation, we further design a feed-forward neural network to better predict translations using both local and global information.
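Such a scorer can be pictured as a small feed-forward network over concatenated local and global features (the shapes and single hidden layer here are assumptions):

import numpy as np

def translation_score(local_feats, global_feats, W1, b1, w2, b2):
    # Concatenate phrase-level (local) and sentence-level (global) features
    # and map them to a scalar translation score with one hidden layer.
    x = np.concatenate([local_feats, global_feats])
    h = np.tanh(W1 @ x + b1)
    return float(w2 @ h + b2)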
no code implementations • 5 Feb 2015 • Jiajun Zhang, Shujie Liu, Mu Li, Ming Zhou, Cheng-qing Zong
The language model is one of the most important modules in statistical machine translation, and currently the word-based language model dominates this community.
no code implementations • TACL 2013 • Feifei Zhai, Jiajun Zhang, Yu Zhou, Cheng-qing Zong
In current research, most tree-based translation models are built directly from parse trees.