no code implementations • Xintong Li, Lemao Liu, Guanlin Li, Max Meng, Shuming Shi
We find that although NMT models struggle to capture word alignment for CFT words, these words do not sacrifice translation quality significantly, which explains why NMT is more successful at translation yet worse at word alignment compared to statistical machine translation.
1 code implementation • 25 Sep 2024 • Guanlin Li, Ke Zhang, Ting Wang, Ming Li, Bin Zhao, Xuelong Li
Despite the impressive advances made by recent low-light image enhancement techniques, the scarcity of paired data has emerged as a significant obstacle to further progress.
1 code implementation • 23 Sep 2024 • Yachuan Li, Xavier Soria Pomab, Yongke Xi, Guanlin Li, Chaozhi Yang, Qian Xiao, Yun Bai, Zongmin Li
Our study shows that what really matters in current edge detection is high-quality features, and that the encoder-decoder based detector can be made great again even without complex training strategies or huge computational cost.
1 code implementation • 24 May 2024 • Guanlin Li, Kangjie Chen, Shudong Zhang, Jie Zhang, Tianwei Zhang
Additionally, we introduce three large-scale red-teaming datasets for studying the safety risks associated with text-to-image models.
1 code implementation • 28 Mar 2024 • Xiaokang Zhang, Jing Zhang, Zeyao Ma, Yang Li, Bohan Zhang, Guanlin Li, Zijun Yao, Kangli Xu, Jinchang Zhou, Daniel Zhang-li, Jifan Yu, Shu Zhao, Juanzi Li, Jie Tang
We introduce TableLLM, a robust large language model (LLM) with 13 billion parameters, purpose-built for handling tabular data manipulation tasks, whether the data are embedded in documents or spreadsheets, catering to real-world office scenarios.
1 code implementation • 2 Feb 2024 • Guanlin Li, Shuai Yang, Jie Zhang, Tianwei Zhang
With the development of generative models, the quality of generated content keeps improving.
no code implementations • 4 Dec 2023 • Guanlin Li, Han Qiu, Shangwei Guo, Jiwei Li, Tianwei Zhang
To the best of our knowledge, it is the first work to leverage observations of kernel dynamics to improve existing AT methods.
no code implementations • 4 Dec 2023 • Guanlin Li, Naishan Zheng, Man Zhou, Jie Zhang, Tianwei Zhang
However, these works lack an analysis of the adversarial information or perturbation itself, so they can neither demystify adversarial examples nor provide a proper interpretation of them.
no code implementations • 27 Sep 2023 • Guanlin Li, Yifei Chen, Jie Zhang, Jiwei Li, Shangwei Guo, Tianwei Zhang
We propose Warfare, a unified methodology to achieve both attacks in a holistic way.
1 code implementation • 14 Jul 2023 • Guanlin Li, Guowen Xu, Tianwei Zhang
This framework consists of two components: (1) a new training strategy, inspired by the effective number of samples, that guides the model to generate more balanced and informative AEs; and (2) a carefully constructed penalty function that enforces a more satisfactory feature space.
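The abstract does not spell out the weighting, but if "effective number" follows the class-balanced weighting of Cui et al. (an assumption on our part), the core idea reduces to a few lines; `beta` and the class counts below are illustrative:

```python
import numpy as np

def effective_number_weights(samples_per_class, beta=0.999):
    """Class weights from the 'effective number' of samples,
    E_n = (1 - beta^n) / (1 - beta); rarer classes get larger weights."""
    n = np.asarray(samples_per_class, dtype=np.float64)
    effective_num = (1.0 - np.power(beta, n)) / (1.0 - beta)
    weights = 1.0 / effective_num
    return weights * len(weights) / weights.sum()  # normalize to sum to #classes

# Example: a head class (10000 samples) vs. a tail class (10 samples).
print(effective_number_weights([10000, 500, 10]))
```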
1 code implementation • 14 Jul 2023 • Guanlin Li, Kangjie Chen, Yuan Xu, Han Qiu, Tianwei Zhang
We first introduce an oracle into the adversarial training process to help the model learn a correct data-label conditional distribution.
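The entry leaves the oracle unspecified; one plausible reading, sketched below purely as an assumption, is a fixed pretrained clean model whose soft predictions stand in for hard labels inside a standard PGD adversarial training step (`model`, `oracle`, and the PGD hyperparameters are all illustrative):

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Standard L-infinity PGD attack."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        grad = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)[0]
        x_adv = (x_adv + alpha * grad.sign()).detach()
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps).clamp(0, 1)
    return x_adv

def oracle_at_step(model, oracle, x, y, optimizer):
    """One training step: match the model's output on adversarial inputs
    to the oracle's conditional distribution p(y|x) on clean inputs."""
    x_adv = pgd_attack(model, x, y)
    with torch.no_grad():
        soft_labels = F.softmax(oracle(x), dim=1)
    loss = F.kl_div(F.log_softmax(model(x_adv), dim=1),
                    soft_labels, reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```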
1 code implementation • 24 Nov 2022 • Guanlin Li, Guowen Xu, Tianwei Zhang
In this paper, we consider the instance segmentation task on a long-tailed dataset that contains label noise, i.e., some of the annotations are incorrect.
1 code implementation • 5 Aug 2022 • Xuelong Li, Guanlin Li, Bin Zhao
The illumination enhancement branch is adopted to brighten the low-frequency component at reduced resolution.
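As a minimal sketch of the decomposition this entry implies (assuming the low-frequency component comes from simple average-pool downsampling; the actual network branches are omitted):

```python
import torch.nn.functional as F

def split_frequencies(img, scale=2):
    """Split a (B, C, H, W) image into a reduced-resolution low-frequency
    component and a full-resolution high-frequency residual."""
    low = F.avg_pool2d(img, kernel_size=scale)        # low frequency at H/scale
    low_up = F.interpolate(low, size=img.shape[-2:],
                           mode="bilinear", align_corners=False)
    high = img - low_up                               # high-frequency residual
    return low, high

# The illumination branch would brighten `low` cheaply at reduced resolution,
# while a separate branch restores detail from `high`.
```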
no code implementations • 7 Apr 2022 • Xiaoxuan Lou, Guowen Xu, Kangjie Chen, Guanlin Li, Jiwei Li, Tianwei Zhang
Multiplication-less neural networks significantly reduce the time and energy cost on the hardware platform, as the compute-intensive multiplications are replaced with lightweight bit-shift operations.
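A toy illustration of why shifts can replace multiplications: once a weight is quantized to a signed power of two, multiplying by it is a single bit-shift. The quantization below is illustrative, not the scheme analyzed in the paper:

```python
import math

def to_signed_power_of_two(w):
    """Quantize a weight to the nearest signed power of two: w ~ sign * 2^exp."""
    if w == 0:
        return 0, 0
    return (1 if w > 0 else -1), round(math.log2(abs(w)))

def shift_multiply(x, w):
    """Approximate x * w for integer x using only a shift and a sign flip."""
    sign, exp = to_signed_power_of_two(w)
    if sign == 0:
        return 0
    return sign * (x << exp if exp >= 0 else x >> -exp)

print(shift_multiply(100, 3.7))  # 400: 3.7 rounds to 2^2
print(100 * 3.7)                 # 370.0 exact
```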
no code implementations • 29 Sep 2021 • Guanlin Li, Guowen Xu, Han Qiu, Ruan He, Jiwei Li, Tianwei Zhang
Extensive evaluations indicate the integration of the two techniques provides much more robustness than existing defense solutions for 3D models.
no code implementations • 10 Aug 2021 • Ashild Kummen, Guanlin Li, Ali Hassan, Teodora Ganeva, Qianying Lu, Robert Shaw, Chenuka Ratwatte, Yang Zou, Lu Han, Emil Almazov, Sheena Visram, Andrew Taylor, Neil J Sebire, Lee Stott, Yvonne Rogers, Graham Roberts, Dean Mohamedally
We also introduce a series of bespoke gesture recognition classifications as DirectInput triggers, including gestures for idle states, auto calibration, depth capture from a 2D RGB webcam stream, and tracking of facial motions such as mouth movements, winking, and head direction with rotation.
no code implementations • 19 Jun 2021 • Guanlin Li, Guowen Xu, Han Qiu, Shangwei Guo, Run Wang, Jiwei Li, Tianwei Zhang, Rongxing Lu
Since the production of a commercial GAN requires substantial computational and human resources, the copyright protection of GANs is urgently needed.
no code implementations • 1 Jan 2021 • Guanlin Li, Lemao Liu, Taro Watanabe, Conghui Zhu, Tiejun Zhao
Unsupervised Neural Machine Translation (UNMT) has received great attention in recent years.
1 code implementation • 2 Aug 2020 • Guanlin Li, Chang Liu, Han Yu, Yanhong Fan, Libang Zhang, Zongyue Wang, Meiqin Wang
Information about system characteristics, such as power consumption, electromagnetic leaks, and sound, can be exploited by side-channel attacks to compromise a system.
1 code implementation • CVPR 2020 • Guanlin Li, Shuya Ding, Jun Luo, Chang Liu
Although adversarial training is employed as the main defence strategy against specific adversarial samples, it has limited generalization capability and incurs excessive time complexity.
no code implementations • ACL 2020 • Jierui Li, Lemao Liu, Huayang Li, Guanlin Li, Guoping Huang, Shuming Shi
Recently many efforts have been devoted to interpreting the black-box NMT models, but little progress has been made on metrics to evaluate explanation methods.
no code implementations • 5 Apr 2020 • Conghui Zhu, Guanlin Li, Lemao Liu, Tiejun Zhao, Shuming Shi
Despite the great success of NMT, a severe challenge remains: it is hard to interpret the internal dynamics of its training process.
no code implementations • 5 Apr 2020 • Guanlin Li, Lemao Liu, Conghui Zhu, Tiejun Zhao, Shuming Shi
Generalization to unseen instances is the eternal pursuit of all data-driven models.
no code implementations • IJCNLP 2019 • Guanlin Li, Lemao Liu, Guoping Huang, Conghui Zhu, Tiejun Zhao
Many Data Augmentation (DA) methods have been proposed for neural machine translation.
no code implementations • ACL 2019 • Xintong Li, Guanlin Li, Lemao Liu, Max Meng, Shuming Shi
Prior research suggests that neural machine translation (NMT) captures word alignment through its attention mechanism; however, this paper finds that attention may almost fail to capture word alignment for some NMT models.
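For context, the standard way to read an alignment off attention (and the kind of extraction such analyses evaluate) is a per-target-token argmax over source positions, scored against gold links with AER; a minimal sketch:

```python
import numpy as np

def alignment_from_attention(attn):
    """attn: (tgt_len, src_len) weights for one sentence pair.
    Align each target token to its highest-attention source token."""
    return [(t, int(attn[t].argmax())) for t in range(attn.shape[0])]

def alignment_error_rate(hyp, sure, possible):
    """AER = 1 - (|A & S| + |A & P|) / (|A| + |S|), with S a subset of P."""
    hyp, sure, possible = set(hyp), set(sure), set(possible)
    return 1.0 - (len(hyp & sure) + len(hyp & possible)) / (len(hyp) + len(sure))

attn = np.array([[0.7, 0.2, 0.1],
                 [0.1, 0.8, 0.1]])
print(alignment_from_attention(attn))  # [(0, 0), (1, 1)]
```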
no code implementations • NAACL 2019 • Guanlin Li, Lemao Liu, Xintong Li, Conghui Zhu, Tiejun Zhao, Shuming Shi
Multilayer architectures are currently the gold standard for large-scale neural machine translation.
no code implementations • 24 Aug 2018 • Wenhu Chen, Guanlin Li, Shujie Liu, Zhirui Zhang, Mu Li, Ming Zhou
Then, we interpret sequence-to-sequence learning as learning a transductive model to transform the source local latent distributions to match their corresponding target distributions.
no code implementations • NAACL 2018 • Wenhu Chen, Guanlin Li, Shuo Ren, Shujie Liu, Zhirui Zhang, Mu Li, Ming Zhou
In order to alleviate data sparsity and overfitting problems in maximum likelihood estimation (MLE) for sequence prediction tasks, we propose the Generative Bridging Network (GBN), in which a novel bridge module is introduced to assist the training of the sequence prediction model (the generator network).
no code implementations • 28 Jun 2017 • Wenhu Chen, Guanlin Li, Shuo Ren, Shujie Liu, Zhirui Zhang, Mu Li, Ming Zhou
In order to alleviate data sparsity and overfitting problems in maximum likelihood estimation (MLE) for sequence prediction tasks, we propose the Generative Bridging Network (GBN), in which a novel bridge module is introduced to assist the training of the sequence prediction model (the generator network).