no code implementations • Findings (EMNLP) 2021 • Peng Lu, Abbas Ghaddar, Ahmad Rashid, Mehdi Rezagholizadeh, Ali Ghodsi, Philippe Langlais
Knowledge Distillation (KD) is extensively used in Natural Language Processing to compress the pre-training and task-specific fine-tuning phases of large neural language models.
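For context, the standard KD objective (after Hinton et al.) matches temperature-softened teacher and student distributions via a KL divergence; a minimal illustrative sketch, not the specific method of any paper listed here:

```python
import math

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; larger T softens the distribution.
    exps = [math.exp(l / T) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kd_loss(teacher_logits, student_logits, T=2.0):
    # KL(teacher || student) on temperature-softened distributions,
    # scaled by T^2 as in the original formulation.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return (T ** 2) * kl
```

In practice this term is mixed with the ordinary cross-entropy on the hard labels.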
no code implementations • 21 May 2024 • Xiaohua Pan, Weifeng Wu, Peiran Liu, Zhen Li, Peng Lu, Peijian Cao, Jianfeng Zhang, Xianfei Qiu, Yangyang Wu
In addition, the SE-GAN model introduces a missing hint matrix to allow the discriminator to more effectively distinguish between known data and data filled by the generator.
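SE-GAN's exact construction is not reproduced here; the sketch below shows a generic GAIN-style hint matrix, where the `hint_rate` parameter (a hypothetical name) controls how many mask entries are revealed to the discriminator, with 0.5 marking "unknown":

```python
import random

def make_hint(mask, hint_rate=0.9, rng=random):
    # mask[i] is 1 if entry i is observed, 0 if it was filled by the
    # generator. Reveal each entry with probability hint_rate; the
    # rest get 0.5, i.e. no information for the discriminator.
    return [m if rng.random() < hint_rate else 0.5 for m in mask]
```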
no code implementations • 28 Mar 2024 • Zixi Wang, Yubo Huang, Changshuo Fan, Xin Lai, Peng Lu
To address these challenges, this research introduces a novel strategy, combining mathematical modeling, a hybrid genetic algorithm, and ARIMA time series forecasting.
1 code implementation • 6 Mar 2024 • Zhaoran Zhao, Peng Lu, Xujun Peng, Wenhao Guo
This shortfall makes the learning process for photographic image layouts suboptimal.
1 code implementation • 29 Feb 2024 • Suyuchen Wang, Ivan Kobyzev, Peng Lu, Mehdi Rezagholizadeh, Bang Liu
This paper addresses the challenge of train-short-test-long (TSTL) scenarios in Large Language Models (LLMs) equipped with Rotary Position Embedding (RoPE), where models pre-trained on shorter sequences face difficulty with out-of-distribution (OOD) token positions in longer sequences.
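As background, RoPE encodes position by rotating consecutive feature pairs, so attention scores depend only on relative offsets; positions beyond the training length produce rotation angles the model never saw, which is the OOD problem above. A minimal sketch of plain RoPE (illustrative, not this paper's extension method):

```python
import math

def rope_rotate(x, pos, theta=10000.0):
    # Apply rotary position embedding to vector x at position `pos`.
    # Each pair (x[i], x[i+1]) is rotated by pos * theta^(-i/d).
    d = len(x)
    out = []
    for i in range(0, d, 2):
        angle = pos * theta ** (-i / d)
        c, s = math.cos(angle), math.sin(angle)
        out.extend([x[i] * c - x[i + 1] * s,
                    x[i] * s + x[i + 1] * c])
    return out
```

The key property is that the dot product of a rotated query at position m and a rotated key at position n depends only on m - n.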
1 code implementation • 31 Jan 2024 • Shibiao Xu, Shunpeng Chen, Rongtao Xu, Changwei Wang, Peng Lu, Li Guo
This survey aims to provide a comprehensive overview of local feature matching methods.
1 code implementation • 19 Dec 2023 • Weipeng Guan, Peiyu Chen, Huibin Zhao, Yu Wang, Peng Lu
To the best of our knowledge, this is the first non-learning work to realize event-based dense mapping.
1 code implementation • 12 Dec 2023 • Peng Lu, Tao Jiang, Yining Li, Xiangtai Li, Kai Chen, Wenming Yang
Real-time multi-person pose estimation presents significant challenges in balancing speed and precision.
Ranked #1 on Multi-Person Pose Estimation on CrowdPose (using extra training data)
no code implementations • 1 Dec 2023 • Christophe Tribes, Sacha Benarroch-Lelong, Peng Lu, Ivan Kobyzev
The performance on downstream tasks of models fine-tuned with LoRA heavily relies on a set of hyperparameters including the rank of the decomposition.
no code implementations • 8 May 2023 • Peng Lu, Ahmad Rashid, Ivan Kobyzev, Mehdi Rezagholizadeh, Philippe Langlais
Label Smoothing (LS) is another simple, versatile, and efficient regularization technique that can be applied to a variety of supervised classification tasks.
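The standard LS transform replaces a one-hot target with a mixture of the one-hot vector and a uniform distribution over the K classes; a minimal sketch:

```python
def smooth_labels(one_hot, eps=0.1):
    # Move eps of the probability mass from the true class to a
    # uniform distribution over all K classes.
    k = len(one_hot)
    return [(1 - eps) * y + eps / k for y in one_hot]
```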
1 code implementation • 13 Mar 2023 • Tao Jiang, Peng Lu, Li Zhang, Ningsheng Ma, Rui Han, Chengqi Lyu, Yining Li, Kai Chen
Recent studies on 2D pose estimation have achieved excellent performance on public benchmarks, yet their industrial application still suffers from heavy model parameters and high latency.
Ranked #3 on Pose Estimation on OCHuman (using extra training data)
no code implementations • 12 Dec 2022 • Peng Lu, Ivan Kobyzev, Mehdi Rezagholizadeh, Ahmad Rashid, Ali Ghodsi, Philippe Langlais
Moreover, we observe that this simple optimization technique is able to outperform the state-of-the-art KD methods for compact models.
1 code implementation • 25 Sep 2022 • Weipeng Guan, Peiyu Chen, Yuhan Xie, Peng Lu
Compared with standard cameras, it can provide reliable visual perception during high-speed motion and in high-dynamic-range scenarios.
1 code implementation • Multimedia Tools and Applications 2022 • Yuan Xu, Yaqin Zhao, Peng Lu
Wildlife image noise reduction is a challenging problem, since the images are inevitably corrupted by mixed noise in complex field environments.
no code implementations • 25 May 2022 • Ivan Kobyzev, Aref Jafari, Mehdi Rezagholizadeh, Tianda Li, Alan Do-Omri, Peng Lu, Pascal Poupart, Ali Ghodsi
Knowledge Distillation (KD) is a prominent neural model compression technique that heavily relies on teacher network predictions to guide the training of a student model.
no code implementations • 29 Sep 2021 • Peng Lu, Ahmad Rashid, Ivan Kobyzev, Mehdi Rezagholizadeh, Philippe Langlais
Knowledge Distillation (KD) is an algorithm that transfers the knowledge of a trained, typically larger, neural network into another model under training.
no code implementations • 14 May 2021 • Han Chen, Peng Lu
The approach is able to avoid both static and dynamic obstacles within the same framework.
no code implementations • 15 Oct 2020 • Peng Lu, Jiahui Liu, Xujun Peng, Xiaojie Wang
To tackle this problem, a weakly supervised cropping framework is proposed, in which the distribution dissimilarity between high-quality images and cropped images guides the training of the coordinate predictor, so that ground-truth cropping windows are not required.
no code implementations • 9 Aug 2020 • Changhong Fu, Xiaoxiao Yang, Fan Li, Juntao Xu, Changjing Liu, Peng Lu
By minimizing the difference between the practical consistency map and the scheduled ideal one, the consistency level is constrained to maintain temporal smoothness, and the rich temporal information contained in response maps is introduced.
1 code implementation • 2 Aug 2020 • Yujie He, Changhong Fu, Fuling Lin, Yiming Li, Peng Lu
Object tracking has been broadly applied in unmanned aerial vehicle (UAV) tasks in recent years.
no code implementations • 23 Mar 2020 • Huawei Wei, Peng Lu, Yichen Wei
It remains unclear how important face alignment is for recognition, and how it should be performed.
1 code implementation • 11 Mar 2020 • Fan Li, Changhong Fu, Fuling Lin, Yiming Li, Peng Lu
After a new slot is established, weighted fusion of the previous samples generates one key-sample, reducing the number of samples to be scored.
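The paper defines its own fusion scheme; as an illustrative sketch (function and parameter names are hypothetical), a key-sample can be formed as a weighted average of the stored samples:

```python
def fuse_key_sample(samples, weights):
    # Collapse several stored samples (feature vectors) into a single
    # key-sample via weighted averaging, so only one sample per slot
    # needs to be scored later.
    total = sum(weights)
    dim = len(samples[0])
    return [sum(w * s[i] for w, s in zip(weights, samples)) / total
            for i in range(dim)]
```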
1 code implementation • 10 Aug 2019 • Changhong Fu, Ziyuan Huang, Yiming Li, Ran Duan, Peng Lu
Meanwhile, convolutional features are extracted to provide a more comprehensive representation of the object.
1 code implementation • ICCV 2019 • Ziyuan Huang, Changhong Fu, Yiming Li, Fuling Lin, Peng Lu
Traditional framework of discriminative correlation filters (DCF) is often subject to undesired boundary effects.
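As background, a DCF is trained in closed form in the Fourier domain (MOSSE/KCF style); the boundary effects come from the FFT implicitly treating the training patch as periodic. A minimal sketch of the standard formulation, not this paper's boundary-aware variant:

```python
import numpy as np

def train_dcf(x, y, lam=1e-4):
    # Closed-form filter in the Fourier domain:
    #   H = conj(X) * Y / (|X|^2 + lam)
    # where y is the desired (peaked) response on training patch x.
    # The FFT's circular shifts are the source of boundary effects.
    X, Y = np.fft.fft2(x), np.fft.fft2(y)
    return np.conj(X) * Y / (X * np.conj(X) + lam)

def respond(h_hat, z):
    # Correlation response of the learned filter on a new patch z.
    return np.real(np.fft.ifft2(h_hat * np.fft.fft2(z)))
```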
2 code implementations • 2 Jul 2019 • Peng Lu, Hao Zhang, Xujun Peng, Xiaofu Jin
In this paper, we primarily focus on improving the accuracy of automatic image cropping, and on further exploring its potential in public datasets with high efficiency.
no code implementations • NAACL 2019 • Peng Lu, Ting Bai, Philippe Langlais
Multi-task learning (MTL) has been studied recently for sequence labeling.
1 code implementation • 11 Dec 2018 • Peng Lu, Gao Huang, Hangyu Lin, Wenming Yang, Guodong Guo, Yanwei Fu
This paper proposes a novel approach for Sketch-Based Image Retrieval (SBIR), for which the key is to bridge the gap between sketches and photos in terms of the data representation.
no code implementations • 28 Nov 2018 • Peng Lu, Hangyu Lin, Yanwei Fu, Shaogang Gong, Yu-Gang Jiang, Xiangyang Xue
Additionally, to study the task of sketch-based hairstyle retrieval, this paper contributes a new instance-level photo-sketch dataset, the Hairstyle Photo-Sketch dataset, composed of 3600 sketches and photos, and 2400 sketch-photo pairs.
no code implementations • 26 Dec 2013 • Peng Lu, Xujun Peng, Xinshan Zhu, Xiaojie Wang
Effectively retrieving objects from a large corpus with high accuracy is a challenging task.