no code implementations • Findings (NAACL) 2022 • Xin Sheng, Linli Xu, Yinlong Xu, Deqiang Jiang, Bo Ren
We propose a novel Siamese generative adversarial network for abstractive text summarization (SSPGAN), which can preserve the main semantics of the source text.
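As a rough illustration of the Siamese idea, a weight-shared encoder can score how well a summary preserves the source semantics. This is a minimal sketch, not the authors' SSPGAN; the GRU encoder and cosine scoring are illustrative assumptions.

```python
# Minimal sketch: a weight-shared (Siamese) discriminator that scores
# semantic preservation between a source text and a candidate summary.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseDiscriminator(nn.Module):
    def __init__(self, vocab_size, dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)

    def encode(self, tokens):
        _, h = self.encoder(self.embed(tokens))   # same weights for both inputs
        return h.squeeze(0)                       # (batch, dim) sentence vectors

    def forward(self, source, summary):
        # Cosine similarity in [-1, 1] acts as a semantic-preservation score.
        return F.cosine_similarity(self.encode(source), self.encode(summary))
```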
no code implementations • EMNLP 2021 • Linli Xu, Sijie Teng, Ruoyu Zhao, Junliang Guo, Chi Xiao, Deqiang Jiang, Bo Ren
Hierarchical multi-label text classification (HMTC) deals with the challenging task where an instance can be assigned to multiple hierarchically structured categories at the same time.
Multi-Label Text Classification +1
no code implementations • COLING 2022 • Xin Sheng, Linli Xu, Yinlong Xu, Changcun Bao, Huang Chen, Bo Ren
The discriminator of CoCGAN judges the authenticity of given samples and optimizes a contrastive learning objective to capture more flexible data-to-class relations as well as data-to-data relations among training samples.
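A supervised contrastive term of the kind described can be sketched as follows. This is an illustrative stand-in for CoCGAN's objective, not its actual loss; the temperature and feature normalization are assumptions.

```python
# Minimal sketch: a supervised contrastive loss over discriminator features.
# Class labels define the positive sets (data-to-class), while anchors are
# pulled toward same-class samples (data-to-data).
import torch
import torch.nn.functional as F

def supervised_contrastive(features, labels, tau=0.1):
    z = F.normalize(features, dim=1)              # (N, d) unit-norm features
    sim = z @ z.t() / tau                         # pairwise similarities
    eye = torch.eye(len(z), dtype=torch.bool)
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye
    # Log-softmax over each row, excluding the self-pair from the denominator.
    log_prob = sim - torch.logsumexp(sim.masked_fill(eye, float('-inf')), 1, keepdim=True)
    # Average log-likelihood of same-class (positive) pairs per anchor.
    return -(log_prob * pos).sum(1).div(pos.sum(1).clamp(min=1)).mean()
```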
no code implementations • 10 Apr 2024 • Chaohu Liu, Kun Yin, Haoyu Cao, Xinghua Jiang, Xin Li, Yinsong Liu, Deqiang Jiang, Xing Sun, Linli Xu
In addition, we construct a document-oriented visual instruction tuning dataset and apply a multi-stage training strategy to enhance the model's document modeling capabilities.
no code implementations • 19 Feb 2024 • Yifei Cheng, Li Shen, Linli Xu, Xun Qian, Shiwei Wu, Yiming Zhou, Tie Zhang, DaCheng Tao, Enhong Chen
However, existing compression methods either perform only unidirectional compression within an iteration, incurring higher communication cost, or perform bidirectional compression at a slower convergence rate.
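To make the unidirectional/bidirectional distinction concrete, here is a minimal top-k sparsification sketch that compresses both the uplink and the downlink. It illustrates bidirectional compression in general, not the specific algorithm of this paper.

```python
# Minimal sketch: top-k gradient sparsification in both directions.
import numpy as np

def topk_compress(vec, k):
    idx = np.argpartition(np.abs(vec), -k)[-k:]   # k largest-magnitude entries
    out = np.zeros_like(vec)
    out[idx] = vec[idx]
    return out

def one_round(worker_grads, k):
    # Uplink: each worker sends a compressed gradient to the server.
    uplink = [topk_compress(g, k) for g in worker_grads]
    averaged = np.mean(uplink, axis=0)
    # Downlink: the server broadcasts a compressed average back.
    return topk_compress(averaged, k)
```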
no code implementations • 18 Jan 2024 • Yichao Du, Zhirui Zhang, Linan Yue, Xu Huang, Yuqing Zhang, Tong Xu, Linli Xu, Enhong Chen
To protect privacy and meet legal regulations, federated learning (FL) has gained significant attention for training speech-to-text (S2T) systems, including automatic speech recognition (ASR) and speech translation (ST).
Automatic Speech Recognition (ASR) +2
no code implementations • 26 Oct 2023 • Yongxin Zhu, Zhujin Gao, Xinyuan Zhou, Zhongyi Ye, Linli Xu
While Diffusion Generative Models have achieved great success on image generation tasks, how to efficiently and effectively incorporate them into speech generation, especially translation tasks, remains a non-trivial problem.
1 code implementation • 19 Jul 2023 • Pengfei Luo, Tong Xu, Shiwei Wu, Chen Zhu, Linli Xu, Enhong Chen
Then, to derive the similarity matching score for each mention-entity pair, we devise three interaction units to comprehensively explore the intra-modal interactions and inter-modal fusion among the features of entities and mentions.
no code implementations • 5 Jun 2023 • Yukang Liang, Kaitao Song, Shaoguang Mao, Huiqiang Jiang, Luna Qiu, Yuqing Yang, Dongsheng Li, Linli Xu, Lili Qiu
Pronunciation assessment is a major challenge in computer-aided pronunciation training systems, especially at the word (phoneme) level.
no code implementations • 4 Apr 2023 • Yongxin Zhu, Zhen Liu, Yukang Liang, Xin Li, Hao Liu, Changcun Bao, Linli Xu
Different from conventional STVQA models, which take the linguistic semantics and visual semantics in scene text as two separate features, in this paper we propose a "Locate Then Generate" (LTG) paradigm, which explicitly unifies these two semantics with the spatial bounding box as a bridge connecting them.
1 code implementation • 19 Dec 2022 • Zhujin Gao, Junliang Guo, Xu Tan, Yongxin Zhu, Fang Zhang, Jiang Bian, Linli Xu
Diffusion models have achieved state-of-the-art synthesis quality on both visual and audio tasks, and recent works further adapt them to textual data by diffusing in the embedding space.
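The embedding-space diffusion referred to here follows the standard DDPM forward process; a minimal sketch, with an illustrative noise schedule, is:

```python
# Minimal sketch: the DDPM forward (noising) process applied to token
# embeddings. Schedule values are illustrative, not the paper's settings.
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

def q_sample(x0_embeddings, t):
    """Sample x_t ~ q(x_t | x_0) for (batch, seq_len, dim) embeddings,
    given a batch of integer timesteps t of shape (batch,)."""
    a = alphas_bar[t].view(-1, 1, 1)              # broadcast over seq and dim
    noise = torch.randn_like(x0_embeddings)
    return a.sqrt() * x0_embeddings + (1 - a).sqrt() * noise
```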
no code implementations • 5 Sep 2022 • Peining Zhang, Junliang Guo, Linli Xu, Mu You, Junming Yin
We consider a novel task of automatically generating text descriptions of music.
no code implementations • 22 May 2022 • Jiquan Li, Junliang Guo, Yongxin Zhu, Xin Sheng, Deqiang Jiang, Bo Ren, Linli Xu
The task of Grammatical Error Correction (GEC) has received remarkable attention with wide applications in Natural Language Processing (NLP) in recent years.
no code implementations • 16 Apr 2021 • Junliang Guo, Zhirui Zhang, Linlin Zhang, Linli Xu, Boxing Chen, Enhong Chen, Weihua Luo
In this way, our approach is able to more comprehensively find adversarial examples around the decision boundary and effectively conduct adversarial attacks.
1 code implementation • NeurIPS 2020 • Junliang Guo, Zhirui Zhang, Linli Xu, Hao-Ran Wei, Boxing Chen, Enhong Chen
Our framework is based on a parallel sequence decoding algorithm named Mask-Predict, chosen in light of the bidirectional and conditionally independent nature of BERT, and can easily be adapted to traditional autoregressive decoding.
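Mask-Predict itself (Ghazvininejad et al., 2019) starts from a fully masked target, predicts all positions in parallel, then iteratively re-masks and re-predicts the least confident tokens. A minimal sketch, with `model` as a stand-in conditional masked language model:

```python
# Minimal sketch of Mask-Predict decoding with a linear mask-decay schedule.
import torch

def mask_predict(model, src, tgt_len, mask_id, iterations=10):
    tokens = torch.full((1, tgt_len), mask_id)
    probs = torch.zeros(1, tgt_len)
    for i in range(iterations):
        logits = model(src, tokens)                   # (1, tgt_len, vocab)
        new_probs, new_tokens = logits.softmax(-1).max(-1)
        update = tokens.eq(mask_id)                   # only masked slots change
        tokens[update] = new_tokens[update]
        probs[update] = new_probs[update]
        n_mask = int(tgt_len * (iterations - 1 - i) / iterations)
        if n_mask == 0:
            break
        remask = probs.topk(n_mask, largest=False).indices
        tokens[0, remask] = mask_id                   # re-mask low confidence
    return tokens
```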
no code implementations • ACL 2020 • Junliang Guo, Linli Xu, Enhong Chen
In this work, we introduce a jointly masked sequence-to-sequence model and explore its application to non-autoregressive neural machine translation (NAT).
no code implementations • 11 Jun 2020 • Shuheng Shen, Yifei Cheng, Jingchang Liu, Linli Xu
Distributed parallel stochastic gradient descent algorithms are workhorses for large-scale machine learning tasks.
2 code implementations • 20 Nov 2019 • Junliang Guo, Xu Tan, Linli Xu, Tao Qin, Enhong Chen, Tie-Yan Liu
Non-autoregressive translation (NAT) models remove the dependence on previous target tokens and generate all target tokens in parallel, resulting in significant inference speedup but at the cost of inferior translation accuracy compared to autoregressive translation (AT) models.
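The AT/NAT contrast comes down to one forward pass per token versus one pass in total; a minimal sketch with stand-in models (`at_model`, `nat_model` are hypothetical placeholders, and EOS handling is omitted for brevity):

```python
# Minimal sketch: autoregressive vs. non-autoregressive decoding.
import torch

def decode_at(at_model, src, max_len, bos_id):
    tokens = [bos_id]
    for _ in range(max_len):                     # one forward pass per token
        logits = at_model(src, torch.tensor([tokens]))
        tokens.append(logits[0, -1].argmax().item())
    return tokens

def decode_nat(nat_model, src, tgt_len):
    logits = nat_model(src, tgt_len)             # one forward pass in total
    return logits.argmax(-1)                     # all tokens emitted at once
```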
no code implementations • 28 Jun 2019 • Shuheng Shen, Linli Xu, Jingchang Liu, Xianfeng Liang, Yifei Cheng
Nevertheless, although distributed stochastic gradient descent (SGD) algorithms can achieve a linear iteration speedup, they are significantly limited in practice by the communication cost, making it difficult to achieve a linear time speedup.
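One standard way to trade communication for computation, shown here purely for orientation (this is generic local SGD, not the algorithm of this paper), is to synchronize only every few local steps:

```python
# Minimal sketch: local SGD. Each worker takes several gradient steps
# between synchronizations instead of averaging every iteration.
import numpy as np

def local_sgd(grad_fns, x0, lr=0.1, rounds=50, local_steps=8):
    x = np.copy(x0)
    for _ in range(rounds):
        replicas = []
        for grad in grad_fns:                # each worker runs independently
            w = np.copy(x)
            for _ in range(local_steps):     # no communication here
                w -= lr * grad(w)
            replicas.append(w)
        x = np.mean(replicas, axis=0)        # one synchronization per round
    return x
```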
no code implementations • 23 Dec 2018 • Junliang Guo, Xu Tan, Di He, Tao Qin, Linli Xu, Tie-Yan Liu
Non-autoregressive translation (NAT) models, which remove the dependence on previous target tokens from the inputs of the decoder, achieve significant inference speedup but at the cost of inferior accuracy compared to autoregressive translation (AT) models.
no code implementations • 15 Nov 2018 • Shuheng Shen, Linli Xu, Jingchang Liu, Junliang Guo, Qing Ling
Composition optimization has drawn a lot of attention in a wide variety of machine learning domains, from risk management to reinforcement learning.
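For context, composition optimization conventionally refers to nested-expectation problems of the following form (a standard formulation, given for orientation rather than as this paper's exact setting):

```latex
\min_{x \in \mathbb{R}^d} \; F(x) \;=\; f\bigl(g(x)\bigr)
  \;=\; \mathbb{E}_v\!\left[ f_v\!\left( \mathbb{E}_w\!\left[ g_w(x) \right] \right) \right]
```

The inner expectation makes a plain stochastic gradient of F biased, which is what dedicated composition methods are designed to handle.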
no code implementations • 7 Oct 2018 • Jingchang Liu, Linli Xu
(Mini-batch) Stochastic Gradient Descent is a popular optimization method that has been applied to many machine learning problems.
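For reference, the basic loop is as follows (a minimal NumPy sketch; `grad_fn` is a hypothetical per-example gradient function):

```python
# Minimal sketch: mini-batch SGD. Each update averages gradients over a
# small random batch rather than the full dataset or a single example.
import numpy as np

def minibatch_sgd(grad_fn, data, x0, lr=0.01, batch_size=32, epochs=10):
    x, n = np.copy(x0), len(data)
    rng = np.random.default_rng(0)
    for _ in range(epochs):
        for batch in np.array_split(rng.permutation(n), max(1, n // batch_size)):
            grads = [grad_fn(x, data[i]) for i in batch]
            x -= lr * np.mean(grads, axis=0)   # average gradient over the batch
    return x
```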
no code implementations • 8 Mar 2018 • Linli Xu, Liang Jiang, Chuan Qin, Zhe Wang, Dongfang Du
Generating poetry from images is much more challenging than generating poetry from text, since images contain very rich visual information that cannot be described completely with only a few keywords, and a good poem should convey the image accurately.
2 code implementations • 11 Nov 2017 • Junliang Guo, Linli Xu, Xunpeng Huang, Enhong Chen
In this paper, we take a matrix factorization perspective of network embedding, and incorporate structure, content and label information of the network simultaneously.
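In the matrix-factorization view, node embeddings come from a low-rank decomposition of a proximity matrix; a minimal sketch follows (the second-order proximity construction is an illustrative assumption, not this paper's objective, which also folds in content and labels):

```python
# Minimal sketch: network embedding via matrix factorization of a
# proximity matrix built from the adjacency matrix.
import numpy as np

def embed(adjacency, dim):
    deg = adjacency.sum(1, keepdims=True).clip(min=1)
    prox = adjacency / deg                       # first-order transition matrix
    m = prox + prox @ prox                       # mix in second-order proximity
    u, s, _ = np.linalg.svd(m, full_matrices=False)
    return u[:, :dim] * np.sqrt(s[:dim])         # rank-dim node embeddings
```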
no code implementations • 21 May 2016 • Yitan Li, Linli Xu, Xiaowei Zhong, Qing Ling
Asynchronous parallel optimization algorithms for solving large-scale machine learning problems have recently drawn significant attention from both academia and industry.
no code implementations • NeurIPS 2012 • Junyuan Xie, Linli Xu, Enhong Chen
Our method achieves state-of-the-art performance in the image denoising task.
no code implementations • NeurIPS 2010 • Min Yang, Linli Xu, Martha White, Dale Schuurmans, Yao-Liang Yu
We present a generic procedure that can be applied to standard loss functions and demonstrate improved robustness in regression and classification problems.