Search Results for author: Junliang Guo

Found 28 papers, 12 papers with code

Hierarchical Multi-label Text Classification with Horizontal and Vertical Category Correlations

no code implementations • EMNLP 2021 • Linli Xu, Sijie Teng, Ruoyu Zhao, Junliang Guo, Chi Xiao, Deqiang Jiang, Bo Ren

Hierarchical multi-label text classification (HMTC) deals with the challenging task where an instance can be assigned to multiple hierarchically structured categories at the same time.

Multi-Label Text Classification · +1

Mitigating Reversal Curse in Large Language Models via Semantic-aware Permutation Training

no code implementations • 1 Mar 2024 • Qingyan Guo, Rui Wang, Junliang Guo, Xu Tan, Jiang Bian, Yujiu Yang

Accordingly, permuting the training data is considered a potential solution, since this can make the model predict antecedent words or tokens.

Language Modelling
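The permutation idea above lends itself to a tiny illustration. Below is a minimal sketch assuming fixed-length segment shuffling; the segment_len parameter and helper name are illustrative stand-ins, not the paper's semantic-aware segmentation procedure:

    import random

    def permute_segments(tokens, segment_len=4, seed=0):
        """Shuffle fixed-length segments of a training sequence so a
        left-to-right LM is also trained to predict 'antecedent' content.
        A stand-in for the paper's semantic-aware segmentation."""
        rng = random.Random(seed)
        segments = [tokens[i:i + segment_len] for i in range(0, len(tokens), segment_len)]
        rng.shuffle(segments)
        return [tok for seg in segments for tok in seg]

    tokens = "the capital of France is Paris and the river is the Seine".split()
    print(permute_segments(tokens))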

UniEdit: A Unified Tuning-Free Framework for Video Motion and Appearance Editing

no code implementations • 20 Feb 2024 • Jianhong Bai, Tianyu He, Yuchi Wang, Junliang Guo, Haoji Hu, Zuozhu Liu, Jiang Bian

Recent advances in text-guided video editing have showcased promising results in appearance editing (e.g., stylization).

Video Editing

Bi-directional Adapter for Multi-modal Tracking

1 code implementation • 17 Dec 2023 • Bing Cao, Junliang Guo, Pengfei Zhu, Qinghua Hu

To handle this problem, we propose a novel multi-modal visual prompt tracking model based on a universal bi-directional adapter that cross-prompts between modalities.

Object Tracking · RGB-T Tracking

GAIA: Zero-shot Talking Avatar Generation

no code implementations • 26 Nov 2023 • Tianyu He, Junliang Guo, Runyi Yu, Yuchi Wang, Jialiang Zhu, Kaikai An, Leyi Li, Xu Tan, Chunyu Wang, Han Hu, HsiangTao Wu, Sheng Zhao, Jiang Bian

Zero-shot talking avatar generation aims at synthesizing natural talking videos from speech and a single portrait image.

Connecting Large Language Models with Evolutionary Algorithms Yields Powerful Prompt Optimizers

1 code implementation • 15 Sep 2023 • Qingyan Guo, Rui Wang, Junliang Guo, Bei Li, Kaitao Song, Xu Tan, Guoqing Liu, Jiang Bian, Yujiu Yang

Large Language Models (LLMs) excel in various tasks, but they rely on carefully crafted prompts that often demand substantial human effort.

Evolutionary Algorithms
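To make the idea above concrete, here is a minimal sketch of an evolutionary prompt-optimization loop in which the LLM itself acts as the crossover/mutation operator. The `llm` and `score` callables are assumptions supplied by the user (needing at least two seed prompts), and the loop is far simpler than the paper's actual operators:

    import random

    def evolve_prompts(llm, score, seeds, generations=5, pop_size=8, seed=0):
        """Minimal evolutionary loop over prompts. `llm` maps a prompt string
        to a generated string; `score` evaluates a prompt on a dev set."""
        rng = random.Random(seed)
        population = list(seeds)  # requires len(seeds) >= 2
        for _ in range(generations):
            population.sort(key=score, reverse=True)
            population = population[:pop_size]  # keep the fittest prompts
            p1, p2 = rng.sample(population[:max(2, len(population) // 2)], 2)
            child = llm("Combine the two prompts below into one improved prompt.\n"
                        f"Prompt 1: {p1}\nPrompt 2: {p2}\nImproved prompt:")
            population.append(child)
        return max(population, key=score)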

Retrosynthesis Prediction with Local Template Retrieval

no code implementations • 7 Jun 2023 • Shufang Xie, Rui Yan, Junliang Guo, Yingce Xia, Lijun Wu, Tao Qin

Furthermore, we propose a lightweight adapter to adjust the weights when combining neural network and kNN predictions, conditioned on the hidden representation and the retrieved templates.

Drug Discovery · Retrieval · +1
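The combination step described above reduces to interpolating two distributions. A minimal sketch, where the scalar `lam` is a placeholder for the adapter's predicted weight (in the paper it is produced from the hidden state and the retrieved templates):

    import numpy as np

    def combine_predictions(p_model, p_knn, lam):
        """Interpolate the neural model's distribution with the
        retrieval-based one, then renormalize."""
        p = lam * np.asarray(p_knn) + (1.0 - lam) * np.asarray(p_model)
        return p / p.sum()

    print(combine_predictions([0.7, 0.2, 0.1], [0.1, 0.8, 0.1], lam=0.4))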

Extract and Attend: Improving Entity Translation in Neural Machine Translation

no code implementations • 4 Jun 2023 • Zixin Zeng, Rui Wang, Yichong Leng, Junliang Guo, Xu Tan, Tao Qin, Tie-Yan Liu

Inspired by this translation process, we propose an Extract-and-Attend approach to enhance entity translation in NMT, where the translation candidates of source entities are first extracted from a dictionary and then attended to by the NMT model to generate the target sentence.

Machine Translation · NMT · +2
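A minimal sketch of the "extract" step above, assuming a toy entity dictionary; the "attend" step, in which the NMT decoder attends to the extracted candidates while generating the target sentence, is model-specific and omitted:

    def extract_entity_candidates(source_tokens, entity_dict):
        """Look up translation candidates for source entities in a
        dictionary; these candidates would then be fed to the NMT model."""
        return {tok: entity_dict[tok] for tok in source_tokens if tok in entity_dict}

    entity_dict = {"Paris": "巴黎", "Seine": "塞纳河"}  # toy dictionary
    print(extract_entity_candidates("The Seine flows through Paris".split(), entity_dict))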

Deliberate then Generate: Enhanced Prompting Framework for Text Generation

no code implementations • 31 May 2023 • Bei Li, Rui Wang, Junliang Guo, Kaitao Song, Xu Tan, Hany Hassan, Arul Menezes, Tong Xiao, Jiang Bian, Jingbo Zhu

Large language models (LLMs) have shown remarkable success across a wide range of natural language generation tasks, where proper prompt design has a great impact.

Text Generation

ResiDual: Transformer with Dual Residual Connections

1 code implementation • 28 Apr 2023 • Shufang Xie, Huishuai Zhang, Junliang Guo, Xu Tan, Jiang Bian, Hany Hassan Awadalla, Arul Menezes, Tao Qin, Rui Yan

In this paper, we propose ResiDual, a novel Transformer architecture with Pre-Post-LN (PPLN), which fuses the connections in Post-LN and Pre-LN together and inherits their advantages while avoiding their limitations.

Machine Translation
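A minimal PyTorch sketch of one reading of the dual-residual idea: stream `x` follows the Post-LN recurrence, stream `y` accumulates raw sublayer outputs as in Pre-LN, and the two are fused at the output. The linear sublayer is a stand-in for attention/FFN blocks, not the paper's full architecture:

    import torch
    import torch.nn as nn

    class ResiDualBlock(nn.Module):
        """One sublayer with dual residual streams."""
        def __init__(self, d_model):
            super().__init__()
            self.f = nn.Linear(d_model, d_model)  # stand-in for attention/FFN
            self.norm = nn.LayerNorm(d_model)

        def forward(self, x, y):
            out = self.f(x)
            return self.norm(x + out), y + out  # Post-LN stream, raw accumulator

    d_model, x = 16, torch.randn(2, 5, 16)
    y = x.clone()
    for blk in [ResiDualBlock(d_model) for _ in range(3)]:
        x, y = blk(x, y)
    final = x + nn.LayerNorm(d_model)(y)  # fuse the two streams at the top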

A Study on ReLU and Softmax in Transformer

no code implementations • 13 Feb 2023 • Kai Shen, Junliang Guo, Xu Tan, Siliang Tang, Rui Wang, Jiang Bian

This paper sheds light on the following points: 1) Softmax and ReLU use different normalization methods over elements, which leads to different variances of results, and ReLU is good at dealing with a large number of key-value slots; 2) FFN and key-value memory are equivalent, and thus the Transformer can be viewed as a memory network where FFNs and self-attention networks are both key-value memories.

Document Translation
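The first point can be checked numerically: softmax entries must sum to one, so each slot's weight vanishes as the number of key-value slots grows, whereas ReLU's scale is independent of slot count. A small NumPy check (slot counts are arbitrary):

    import numpy as np

    rng = np.random.default_rng(0)
    for slots in (64, 4096):
        scores = rng.normal(size=(1000, slots))
        softmax = np.exp(scores) / np.exp(scores).sum(-1, keepdims=True)
        relu = np.maximum(scores, 0.0)
        # softmax entries average exactly 1/slots; ReLU's mean stays ~0.4
        print(slots, softmax.mean(), relu.mean())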

N-Gram Nearest Neighbor Machine Translation

no code implementations • 30 Jan 2023 • Rui Lv, Junliang Guo, Rui Wang, Xu Tan, Qi Liu, Tao Qin

Nearest neighbor machine translation augments the Autoregressive Translation~(AT) with $k$-nearest-neighbor retrieval, by comparing the similarity between the token-level context representations of the target tokens in the query and the datastore.

Domain Adaptation · Machine Translation · +2
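A minimal sketch of the token-level retrieval that kNN-MT performs (the n-gram variant above matches spans rather than single tokens); the datastore contents here are random placeholders:

    import numpy as np

    def knn_retrieve(query, keys, values, k=4):
        """Return the k target tokens whose stored context representations
        are closest to the query representation (L2 distance)."""
        dists = np.linalg.norm(keys - query, axis=1)
        idx = np.argsort(dists)[:k]
        return [values[i] for i in idx]

    rng = np.random.default_rng(0)
    keys = rng.normal(size=(100, 8))          # datastore: context representations
    values = [f"tok{i}" for i in range(100)]  # ...and the tokens that followed them
    print(knn_retrieve(keys[3] + 0.01, keys, values))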

Empowering Diffusion Models on the Embedding Space for Text Generation

1 code implementation • 19 Dec 2022 • Zhujin Gao, Junliang Guo, Xu Tan, Yongxin Zhu, Fang Zhang, Jiang Bian, Linli Xu

Diffusion models have achieved state-of-the-art synthesis quality on both visual and audio tasks, and recent works further adapt them to textual data by diffusing on the embedding space.

Denoising · Machine Translation · +2
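The forward (noising) process on embeddings can be sketched in a few lines; the linear beta schedule and dimensions below are illustrative choices, not the paper's settings:

    import numpy as np

    def diffuse_embeddings(x0, t, betas, rng):
        """Sample x_t ~ q(x_t | x_0): corrupt clean token embeddings with
        Gaussian noise according to the variance schedule `betas`."""
        alpha_bar = np.prod(1.0 - betas[:t + 1])
        noise = rng.normal(size=x0.shape)
        return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise

    betas = np.linspace(1e-4, 0.02, 1000)               # illustrative schedule
    x0 = np.random.default_rng(1).normal(size=(5, 16))  # 5 tokens, dim 16
    print(diffuse_embeddings(x0, t=500, betas=betas, rng=np.random.default_rng(2)).shape)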

VideoDubber: Machine Translation with Speech-Aware Length Control for Video Dubbing

1 code implementation • 30 Nov 2022 • Yihan Wu, Junliang Guo, Xu Tan, Chen Zhang, Bohan Li, Ruihua Song, Lei He, Sheng Zhao, Arul Menezes, Jiang Bian

In this paper, we propose a machine translation system tailored for the task of video dubbing, which directly considers the speech duration of each token in translation, to match the length of source and target speech.

Machine Translation · Sentence · +4
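The core constraint above, matching total speech duration between source and translation, can be illustrated with a toy scorer. In practice per-token durations would come from a duration model; the numbers below are made up:

    def duration_gap(src_durations, tgt_durations):
        """Absolute difference in total speech duration (seconds) between
        the source utterance and a candidate translation."""
        return abs(sum(src_durations) - sum(tgt_durations))

    src = [0.30, 0.25, 0.40, 0.20]     # per-token durations of the source speech
    cand_a = [0.35, 0.30, 0.25]        # shorter candidate: larger gap
    cand_b = [0.30, 0.25, 0.35, 0.25]  # candidate matching the source length
    print(duration_gap(src, cand_a), duration_gap(src, cand_b))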

A Study of Syntactic Multi-Modality in Non-Autoregressive Machine Translation

no code implementations • NAACL 2022 • Kexun Zhang, Rui Wang, Xu Tan, Junliang Guo, Yi Ren, Tao Qin, Tie-Yan Liu

Furthermore, we take the best of both and design a new loss function to better handle the complicated syntactic multi-modality in real-world datasets.

Machine Translation · Translation

BinauralGrad: A Two-Stage Conditional Diffusion Probabilistic Model for Binaural Audio Synthesis

1 code implementation • 30 May 2022 • Yichong Leng, Zehua Chen, Junliang Guo, Haohe Liu, Jiawei Chen, Xu Tan, Danilo Mandic, Lei He, Xiang-Yang Li, Tao Qin, Sheng Zhao, Tie-Yan Liu

Combining this novel perspective of two-stage synthesis with advanced generative models (i.e., diffusion models), the proposed BinauralGrad is able to generate accurate and high-fidelity binaural audio samples.

Audio Synthesis

Sequence-to-Action: Grammatical Error Correction with Action Guided Sequence Generation

no code implementations • 22 May 2022 • Jiquan Li, Junliang Guo, Yongxin Zhu, Xin Sheng, Deqiang Jiang, Bo Ren, Linli Xu

The task of Grammatical Error Correction (GEC) has received remarkable attention with wide applications in Natural Language Processing (NLP) in recent years.

Grammatical Error Correction · Sentence

A Survey on Non-Autoregressive Generation for Neural Machine Translation and Beyond

1 code implementation • 20 Apr 2022 • Yisheng Xiao, Lijun Wu, Junliang Guo, Juntao Li, Min Zhang, Tao Qin, Tie-Yan Liu

While NAR generation can significantly accelerate inference speed for machine translation, the speedup comes at the cost of sacrificed translation accuracy compared to its counterpart, autoregressive (AR) generation.

Automatic Speech Recognition (ASR) · +11

Task-Oriented Dialogue System as Natural Language Generation

1 code implementation • 31 Aug 2021 • Weizhi Wang, Zhirui Zhang, Junliang Guo, Yinpei Dai, Boxing Chen, Weihua Luo

In this paper, we propose to formulate the task-oriented dialogue system as a purely natural language generation task, so as to fully leverage large-scale pre-trained models like GPT-2 and simplify complicated delexicalization preprocessing.

Text Generation · Transfer Learning

Adaptive Nearest Neighbor Machine Translation

3 code implementations • ACL 2021 • Xin Zheng, Zhirui Zhang, Junliang Guo, Shujian Huang, Boxing Chen, Weihua Luo, Jiajun Chen

On four benchmark machine translation datasets, we demonstrate that the proposed method is able to effectively filter out the noises in retrieval results and significantly outperforms the vanilla kNN-MT model.

Machine Translation · NMT · +2

Towards Variable-Length Textual Adversarial Attacks

no code implementations • 16 Apr 2021 • Junliang Guo, Zhirui Zhang, Linlin Zhang, Linli Xu, Boxing Chen, Enhong Chen, Weihua Luo

In this way, our approach is able to more comprehensively find adversarial examples around the decision boundary and effectively conduct adversarial attacks.

Machine Translation · Translation

Incorporating BERT into Parallel Sequence Decoding with Adapters

1 code implementation • NeurIPS 2020 • Junliang Guo, Zhirui Zhang, Linli Xu, Hao-Ran Wei, Boxing Chen, Enhong Chen

Our framework is based on a parallel sequence decoding algorithm named Mask-Predict, considering the bi-directional and conditionally independent nature of BERT, and can easily be adapted to traditional autoregressive decoding.

Machine Translation · Natural Language Understanding · +2
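Since the framework above builds on Mask-Predict, a skeleton of that decoding loop is useful: all positions start masked, the model predicts every token in parallel, and the least confident predictions are re-masked for the next iteration, with the masked count decaying linearly. `predict` is an assumed callable returning tokens and per-position confidences:

    def mask_predict(predict, length, iterations=3, mask="<mask>"):
        """Skeleton of Mask-Predict iterative parallel decoding."""
        tokens = [mask] * length
        for it in range(iterations, 0, -1):
            tokens, conf = predict(tokens)               # parallel prediction
            n_mask = int(length * (it - 1) / iterations)  # linear mask decay
            for i in sorted(range(length), key=lambda i: conf[i])[:n_mask]:
                tokens[i] = mask                          # re-mask least confident
        return tokens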

Jointly Masked Sequence-to-Sequence Model for Non-Autoregressive Neural Machine Translation

no code implementations • ACL 2020 • Junliang Guo, Linli Xu, Enhong Chen

In this work, we introduce a jointly masked sequence-to-sequence model and explore its application on non-autoregressive neural machine translation (NAT).

Language Modelling · Machine Translation · +1

Fine-Tuning by Curriculum Learning for Non-Autoregressive Neural Machine Translation

2 code implementations • 20 Nov 2019 • Junliang Guo, Xu Tan, Linli Xu, Tao Qin, Enhong Chen, Tie-Yan Liu

Non-autoregressive translation (NAT) models remove the dependence on previous target tokens and generate all target tokens in parallel, resulting in significant inference speedup but at the cost of inferior translation accuracy compared to autoregressive translation (AT) models.

Machine Translation · Translation

Non-Autoregressive Neural Machine Translation with Enhanced Decoder Input

no code implementations • 23 Dec 2018 • Junliang Guo, Xu Tan, Di He, Tao Qin, Linli Xu, Tie-Yan Liu

Non-autoregressive translation (NAT) models, which remove the dependence on previous target tokens from the inputs of the decoder, achieve significant inference speedup but at the cost of inferior accuracy compared to autoregressive translation (AT) models.

Machine Translation · Sentence · +2

Asynchronous Stochastic Composition Optimization with Variance Reduction

no code implementations • 15 Nov 2018 • Shuheng Shen, Linli Xu, Jingchang Liu, Junliang Guo, Qing Ling

Composition optimization has drawn a lot of attention in a wide variety of machine learning domains from risk management to reinforcement learning.

Management
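For reference, the problem this line of work targets is the two-level stochastic composition objective, which in standard notation (a generic form, not necessarily the paper's exact formulation) reads:

    \min_{x} \; f(g(x)), \qquad
    f(y) = \mathbb{E}_i\,[\,f_i(y)\,], \quad
    g(x) = \mathbb{E}_j\,[\,g_j(x)\,]

The inner expectation makes unbiased gradients of f(g(x)) hard to obtain from single samples, which is the difficulty that variance reduction (and, here, asynchrony) is meant to address.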

Enhancing Network Embedding with Auxiliary Information: An Explicit Matrix Factorization Perspective

2 code implementations • 11 Nov 2017 • Junliang Guo, Linli Xu, Xunpeng Huang, Enhong Chen

In this paper, we take a matrix factorization perspective of network embedding, and incorporate structure, content and label information of the network simultaneously.

Link Prediction · Network Embedding · +1
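A minimal sketch of the matrix-factorization view above, using a truncated SVD of the raw adjacency matrix as the factorized target; the paper's actual target matrix additionally folds in content and label information:

    import numpy as np

    def embed_network(adj, dim=2):
        """Factorize a similarity matrix (here, the adjacency matrix itself)
        with a truncated SVD and use the scaled left factors as embeddings."""
        u, s, _ = np.linalg.svd(adj, full_matrices=False)
        return u[:, :dim] * np.sqrt(s[:dim])

    adj = np.array([[0, 1, 1, 0],
                    [1, 0, 1, 0],
                    [1, 1, 0, 1],
                    [0, 0, 1, 0]], dtype=float)
    print(embed_network(adj))  # one 2-d embedding per node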
