Search Results for author: Qi Ju

Found 20 papers, 7 papers with code

CSP: Code-Switching Pre-training for Neural Machine Translation

no code implementations · EMNLP 2020 · Zhen Yang, Bojie Hu, Ambyera Han, Shen Huang, Qi Ju

Unlike traditional pre-training methods, which randomly mask some fragments of the input sentence, the proposed CSP randomly replaces some words in the source sentence with their translations in the target language.

Machine Translation · NMT +2
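To make the described replacement step concrete, here is a minimal sketch. The toy lexicon, replacement probability, and function name are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of code-switching replacement: instead of masking tokens,
# some source words are swapped for their target-language translations.
import random

# Hypothetical source-to-target translation lexicon (e.g., En -> De).
LEXICON = {"house": "Haus", "cat": "Katze", "drinks": "trinkt", "milk": "Milch"}

def code_switch(tokens, replace_prob=0.15, rng=random):
    """Replace some source words with their target-language translations."""
    switched = []
    for tok in tokens:
        if tok in LEXICON and rng.random() < replace_prob:
            switched.append(LEXICON[tok])  # substitute the translation word
        else:
            switched.append(tok)  # keep the original source word
    return switched

print(code_switch("the cat drinks milk".split(), replace_prob=0.5))
```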

Recouple Event Field via Probabilistic Bias for Event Extraction

no code implementations · 19 May 2023 · Xingyu Bai, Taiqiang Wu, Han Guo, Zhe Zhao, Xuefeng Yang, Jiayi Li, Weijie Liu, Qi Ju, Weigang Guo, Yujiu Yang

Event Extraction (EE), aiming to identify and classify event triggers and arguments from event mentions, has benefited from pre-trained language models (PLMs).

Event Extraction
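For readers unfamiliar with the task, the toy example below shows the kind of structure EE produces; the sentence, event type, and role names are invented for illustration and are not from the paper.

```python
# Illustrative Event Extraction output: a trigger identifies the event,
# and arguments fill typed roles around it.
sentence = "Acme hired John Smith as CTO in July."

extraction = {
    "trigger": {"text": "hired", "event_type": "Personnel.Hire"},
    "arguments": [
        {"text": "Acme", "role": "Employer"},
        {"text": "John Smith", "role": "Employee"},
        {"text": "CTO", "role": "Position"},
        {"text": "July", "role": "Time"},
    ],
}
```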

Multi-stage Distillation Framework for Cross-Lingual Semantic Similarity Matching

1 code implementation · Findings (NAACL) 2022 · Kunbo Ding, Weijie Liu, Yuejian Fang, Zhe Zhao, Qi Ju, Xuefeng Yang

Previous studies have shown that cross-lingual knowledge distillation can significantly improve the performance of pre-trained models on cross-lingual similarity matching tasks.

Contrastive Learning · Knowledge Distillation +3
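One common formulation of such cross-lingual distillation trains a multilingual student to reproduce a monolingual teacher's sentence embeddings for both a sentence and its translation. The sketch below illustrates that general idea with toy encoders; it is not the paper's multi-stage framework.

```python
# Sketch of cross-lingual sentence-embedding distillation.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Stand-in sentence encoder; a real system uses a pre-trained model."""
    def __init__(self, vocab=1000, dim=128):
        super().__init__()
        self.emb = nn.EmbeddingBag(vocab, dim)  # mean-pools token embeddings

    def forward(self, token_ids):
        return self.emb(token_ids)

teacher = Encoder()  # monolingual teacher, kept frozen
student = Encoder()  # multilingual student, trained
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
mse = nn.MSELoss()

src_ids = torch.randint(0, 1000, (8, 16))  # source sentences (token ids)
tgt_ids = torch.randint(0, 1000, (8, 16))  # their translations

with torch.no_grad():
    target_emb = teacher(src_ids)  # teacher embedding of the source sentence

# The student maps both the source sentence and its translation onto the
# teacher's embedding, aligning the two languages in one space.
loss = mse(student(src_ids), target_emb) + mse(student(tgt_ids), target_emb)
opt.zero_grad()
loss.backward()
opt.step()
```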

Fully Self-Supervised Learning for Semantic Segmentation

no code implementations · 24 Feb 2022 · Yuan Wang, Wei Zhuo, Yucong Li, Zhi Wang, Qi Ju, Wenwu Zhu

To solve this problem, we propose a bootstrapped training scheme for semantic segmentation that fully leverages global semantic knowledge for self-supervision through the proposed PGG strategy and CAE module.

Clustering · Segmentation +2

Semantic Matching from Different Perspectives

1 code implementation · 14 Feb 2022 · Weijie Liu, Tao Zhu, Weiquan Mao, Zhe Zhao, Weigang Guo, Xuefeng Yang, Qi Ju

In this paper, we pay attention to an issue that is usually overlooked, i.e., that similarity should be determined from different perspectives.

Sentence · Text Matching +1
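One way to read this idea is to score a sentence pair under several learned projections, one per perspective. The sketch below is a hypothetical illustration of that reading, not the paper's actual model; the head count and pooling are invented.

```python
# Sketch of multi-perspective similarity: one projection head per perspective,
# each yielding its own cosine-similarity score for a sentence pair.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiPerspectiveSimilarity(nn.Module):
    def __init__(self, dim=128, n_perspectives=3):
        super().__init__()
        # One linear projection per perspective (e.g., topic, intent, style).
        self.heads = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_perspectives))

    def forward(self, emb_a, emb_b):
        # Returns one similarity score per perspective.
        return torch.stack(
            [F.cosine_similarity(h(emb_a), h(emb_b), dim=-1) for h in self.heads],
            dim=-1)

model = MultiPerspectiveSimilarity()
a, b = torch.randn(4, 128), torch.randn(4, 128)
print(model(a, b).shape)  # torch.Size([4, 3]) -- one score per perspective
```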

Energy Aligning for Biased Models

no code implementations · 7 Jun 2021 · Bowen Zhao, Chen Chen, Qi Ju, Shutao Xia

Training on class-imbalanced data usually results in biased models that tend to classify samples into the majority classes, a common and notorious problem.

Class Incremental Learning · Incremental Learning

Stacked Acoustic-and-Textual Encoding: Integrating the Pre-trained Models into Speech Translation Encoders

no code implementations · ACL 2021 · Chen Xu, Bojie Hu, Yanyang Li, Yuhao Zhang, Shen Huang, Qi Ju, Tong Xiao, Jingbo Zhu

To our knowledge, we are the first to develop an end-to-end ST system that achieves comparable or even better BLEU performance than its cascaded ST counterpart when large-scale ASR and MT data are available.

Automatic Speech Recognition · Automatic Speech Recognition (ASR) +4
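The snippet reports results rather than the architecture, but the title suggests stacking a pre-trained acoustic (ASR) encoder beneath a pre-trained textual (MT) encoder. The sketch below illustrates that stacking with toy stand-in modules; real systems initialize these from pre-trained models and typically insert an adapter between the two stages.

```python
# Sketch of a stacked acoustic-and-textual speech-translation encoder.
import torch
import torch.nn as nn

class StackedSTEncoder(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        # Stand-ins for encoders initialized from pre-trained ASR / MT models.
        self.acoustic = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True), 2)
        self.textual = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True), 2)

    def forward(self, speech_feats):      # (batch, frames, dim)
        h = self.acoustic(speech_feats)   # acoustic representation
        return self.textual(h)            # refined, MT-style representation

enc = StackedSTEncoder()
out = enc(torch.randn(2, 50, 256))
print(out.shape)  # torch.Size([2, 50, 256])
```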

Code-switching pre-training for neural machine translation

no code implementations · 17 Sep 2020 · Zhen Yang, Bojie Hu, Ambyera Han, Shen Huang, Qi Ju

Unlike traditional pre-training methods, which randomly mask some fragments of the input sentence, the proposed CSP randomly replaces some words in the source sentence with their translations in the target language.

Machine Translation · NMT +2

K-BERT: Enabling Language Representation with Knowledge Graph

2 code implementations · arXiv 2019 · Weijie Liu, Peng Zhou, Zhe Zhao, Zhiruo Wang, Qi Ju, Haotang Deng, Ping Wang

For machines to achieve this capability, we propose a knowledge-enabled language representation model (K-BERT) with knowledge graphs (KGs), in which triples are injected into the sentences as domain knowledge.

Knowledge Graphs · Sentence
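A minimal sketch of the injection step described above: entities found in a sentence get their KG triples attached as in-line branches. The toy KG is invented, and K-BERT's soft-position embeddings and visible matrix, which keep injected branches from disturbing the original sentence structure, are omitted here.

```python
# Sketch of KG triple injection into a token sequence.

# Hypothetical toy knowledge graph: head -> list of (relation, tail).
KG = {
    "Beijing": [("capital_of", "China")],
    "Apple": [("is_a", "company")],
}

def inject_triples(tokens):
    """Append matching KG triples after each entity mention."""
    out = []
    for tok in tokens:
        out.append(tok)
        for rel, tail in KG.get(tok, []):
            out.extend([rel, tail])  # injected branch, e.g. 'capital_of China'
    return out

print(inject_triples("Beijing is a beautiful city".split()))
# ['Beijing', 'capital_of', 'China', 'is', 'a', 'beautiful', 'city']
```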

UER: An Open-Source Toolkit for Pre-training Models

1 code implementation · IJCNLP 2019 · Zhe Zhao, Hui Chen, Jinbin Zhang, Xin Zhao, Tao Liu, Wei Lu, Xi Chen, Haotang Deng, Qi Ju, Xiaoyong Du

Existing works, including ELMo and BERT, have revealed the importance of pre-training for NLP tasks.

Improving Image Captioning with Conditional Generative Adversarial Nets

1 code implementation · 18 May 2018 · Chen Chen, Shuai Mu, Wanpeng Xiao, Zexiong Ye, Liesi Wu, Qi Ju

In this paper, we propose a novel conditional-generative-adversarial-nets-based image captioning framework as an extension of the traditional reinforcement-learning (RL)-based encoder-decoder architecture.

Image Captioning · Reinforcement Learning (RL)
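A minimal sketch of the core loop this description suggests: a conditional discriminator scores an (image, caption) pair, and that score serves as the reward in a REINFORCE-style update of the caption generator. All modules, shapes, and values are toy stand-ins, not the paper's implementation.

```python
# Sketch of discriminator-as-reward policy-gradient training for captioning.
import torch
import torch.nn as nn

img_feat = torch.randn(1, 512)                     # image feature (e.g., CNN output)
caption_emb = torch.randn(1, 512)                  # embedding of a sampled caption
log_prob = torch.tensor(-4.2, requires_grad=True)  # log p(caption | image)

# Conditional discriminator: judges whether the caption fits the image.
discriminator = nn.Sequential(nn.Linear(1024, 256), nn.ReLU(),
                              nn.Linear(256, 1), nn.Sigmoid())

with torch.no_grad():
    reward = discriminator(torch.cat([img_feat, caption_emb], dim=-1)).item()

# REINFORCE-style generator loss: raise the log-likelihood of captions the
# discriminator finds realistic for this image.
generator_loss = -log_prob * reward
generator_loss.backward()
```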
