1 code implementation • 5 Jan 2024 • Yabin Zhu, Xiao Wang, Chenglong Li, Bo Jiang, Lin Zhu, Zhixiang Huang, Yonghong Tian, Jin Tang
In this work, we formally propose the task of object tracking using unaligned neuromorphic and visible cameras.
no code implementations • 3 Jan 2024 • Dengdi Sun, Yajie Pan, Andong Lu, Chenglong Li, Bin Luo
We introduce independent dynamic template tokens that interact with the search region and embed temporal information to handle appearance changes, while keeping the initial static template tokens involved in the joint feature extraction process; the static tokens preserve the original, reliable target appearance information and prevent the deviations from the target appearance that traditional temporal updates can cause.
Ranked #4 on Rgb-T Tracking on LasHeR
no code implementations • 27 Dec 2023 • Lixiang Xu, Qingzhe Cui, Richang Hong, Wei Xu, Enhong Chen, Xin Yuan, Chenglong Li, Yuanyan Tang
The large model GMViT achieves excellent 3D classification and retrieval results on the benchmark datasets ModelNet, ShapeNetCore55, and MCB.
1 code implementation • 25 Dec 2023 • Andong Lu, Jiacong Zhao, Chenglong Li, Jin Tang, Bin Luo
To address this challenge, we propose a novel invertible prompt learning approach, which integrates the content-preserving prompts into a well-trained tracking model to adapt to various modality-missing scenarios, for robust RGBT tracking.
no code implementations • 25 Dec 2023 • Andong Lu, Tianrui Zha, Chenglong Li, Jin Tang, XiaoFeng Wang, Bin Luo
To perform effective collaborative modeling between image relighting and person ReID tasks, we integrate the multilevel feature interactions in CENet.
1 code implementation • 22 Dec 2023 • Lei Liu, Mengya Zhang, Cheng Li, Chenglong Li, Jin Tang
Visual tracking often faces challenges such as invalid targets and decreased performance in low-light conditions when relying solely on RGB image sequences.
1 code implementation • 22 Dec 2023 • Lei Liu, Chenglong Li, Futian Wang, Longfeng Shen, Jin Tang
In particular, we design a multi-modal prototype to represent target information by multi-kind samples, including a fixed sample from the first frame and two representative samples from different modalities.
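A prototype built from multiple kinds of samples can be sketched as a small container holding one frozen first-frame feature plus one representative feature per modality. This is a hypothetical simplification, not the paper's implementation; the class name, the EMA update, and the cosine matching are illustrative choices.

```python
import numpy as np

class MultiModalPrototype:
    """Toy prototype: one fixed first-frame sample plus one
    representative sample per modality (e.g. RGB and thermal)."""

    def __init__(self, fixed_feat):
        self.fixed = fixed_feat   # frozen first-frame feature
        self.modal = {}           # modality name -> representative feature

    def update(self, name, feat, momentum=0.9):
        # exponential moving average keeps the representative sample stable
        if name in self.modal:
            self.modal[name] = momentum * self.modal[name] + (1 - momentum) * feat
        else:
            self.modal[name] = feat

    def match(self, query):
        # best cosine similarity of a query feature over all stored samples
        samples = [self.fixed] + list(self.modal.values())
        return max(float(s @ query /
                         (np.linalg.norm(s) * np.linalg.norm(query) + 1e-8))
                   for s in samples)

proto = MultiModalPrototype(np.ones(4))
proto.update("rgb", np.ones(4))
proto.update("thermal", -np.ones(4))
score = proto.match(np.ones(4))  # matches the fixed/rgb samples best
```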
2 code implementations • 17 Dec 2023 • Xiao Wang, Jiandong Jin, Chenglong Li, Jin Tang, Cheng Zhang, Wei Wang
In this paper, we formulate PAR as a vision-language fusion problem and fully exploit the relations between pedestrian images and attribute labels.
1 code implementation • 15 Dec 2023 • Xiao Wang, Wentao Wu, Chenglong Li, Zhicheng Zhao, Zhe Chen, Yukai Shi, Jin Tang
To address this issue, we propose a novel vehicle-centric pre-training framework called VehicleMAE, which incorporates the structural information including the spatial structure from vehicle profile information and the semantic structure from informative high-level natural language descriptions for effective masked vehicle appearance reconstruction.
no code implementations • 13 Dec 2023 • Qiaosi Tang, Ranjala Ratnayake, Gustavo Seabra, Zhe Jiang, Ruogu Fang, Lina Cui, Yousong Ding, Tamer Kahveci, Jiang Bian, Chenglong Li, Hendrik Luesch, Yanjun Li
Additionally, we illuminate the application of morphological profiling in phenotypic drug discovery and highlight potential challenges and opportunities in this field.
2 code implementations • 4 Dec 2023 • Jiandong Jin, Xiao Wang, Chenglong Li, Lili Huang, Jin Tang
Then, a Transformer decoder is proposed to generate the human attributes by incorporating the visual features and attribute query tokens.
no code implementations • 28 Nov 2023 • Kunpeng Wang, Chenglong Li, Zhengzheng Tu, Bin Luo
Existing single-modal and multi-modal salient object detection (SOD) methods focus on designing specific architectures tailored for their respective tasks.
1 code implementation • 31 Aug 2023 • Andong Lu, Zhang Zhang, Yan Huang, Yifan Zhang, Chenglong Li, Jin Tang, Liang Wang
The illumination enhancement branch first estimates an enhanced image from the nighttime image using a nonlinear curve mapping method and then extracts the enhanced features.
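The excerpt does not give the exact curve, but a common nonlinear curve mapping for low-light enhancement (popularized by Zero-DCE-style methods) is the quadratic curve LE(x) = x + α·x·(1−x), applied iteratively. The sketch below assumes that form; `alpha` and `iterations` are illustrative values.

```python
import numpy as np

def curve_enhance(img, alpha=0.6, iterations=4):
    """Quadratic curve mapping LE(x) = x + alpha * x * (1 - x),
    applied iteratively: brightens dark pixels more than bright
    ones while keeping values in [0, 1] (for 0 <= alpha <= 1)."""
    x = np.clip(img, 0.0, 1.0)
    for _ in range(iterations):
        x = x + alpha * x * (1.0 - x)
    return x

dark = np.array([0.05, 0.2, 0.5, 0.9])
bright = curve_enhance(dark)  # every pixel at least as bright, still in [0, 1]
```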
no code implementations • 3 Aug 2023 • Zhengzheng Tu, Qishun Wang, Hongshun Wang, Kunpeng Wang, Chenglong Li
Recently, many breakthroughs have been made in the field of Video Object Detection (VOD), but performance is still limited by the imaging limitations of RGB sensors under adverse illumination conditions.
no code implementations • 25 May 2023 • Aihua Zheng, Chaobin Zhang, Weijun Zhang, Chenglong Li, Jin Tang, Chang Tan, Ruoran Jia
Existing vehicle re-identification methods mainly rely on the single query, which has limited information for vehicle representation and thus significantly hinders the performance of vehicle Re-ID in complicated surveillance networks.
no code implementations • 25 May 2023 • Aihua Zheng, Ziling He, Zi Wang, Chenglong Li, Jin Tang
Many existing multi-modality studies are based on the assumption of modality integrity.
1 code implementation • 23 May 2023 • Aihua Zheng, Zhiqi Ma, Zi Wang, Chenglong Li
Finally, to evaluate the proposed FACENet in handling intense flare, we introduce a new multi-spectral vehicle re-ID dataset, called WMVEID863, with additional challenges such as motion blur, significant background changes, and particularly intense flare degradation.
no code implementations • 26 Mar 2023 • Yabin Zhu, Chenglong Li, Xiao Wang, Jin Tang, Zhixiang Huang
In addition, existing learning methods of RGBT trackers either fuse multimodal features into one for final classification, or exploit the relationship between unimodal branches and fused branch through a competitive learning strategy.
1 code implementation • 11 Oct 2022 • Zi Wang, Huaibo Huang, Aihua Zheng, Chenglong Li, Ran He
To alleviate these two issues, we propose a simple yet effective method with Parallel Augmentation and Dual Enhancement (PADE), which is robust on both occluded and non-occluded data and does not require any auxiliary clues.
no code implementations • 25 Sep 2022 • Chenglong Li, Emmeric Tanghe, Sofie Pollin, Wout Joseph
Then, we present a micro-benchmark of channel response-based direct positioning and tracking for both device-based and contact-free schemes.
no code implementations • 25 Sep 2022 • Chenglong Li, Qiwen Zhu, Tubiao Liu, Jin Tang, Yu Su
To address this issue, we design a multi-stage convolution-transformer network for step segmentation.
no code implementations • 23 Aug 2022 • Chenglong Li, Sibren De Bast, Yang Miao, Emmeric Tanghe, Sofie Pollin, Wout Joseph
To evade the complex association problem of distributed massive MIMO-based MTT, we propose to use a complex Bayesian compressive sensing (CBCS) algorithm to estimate the targets' locations based on the extracted target-of-interest CSI signal directly.
1 code implementation • 1 Aug 2022 • Aihua Zheng, Xianpeng Zhu, Zhiqi Ma, Chenglong Li, Jin Tang, Jixin Ma
In particular, we design a new cross-directional center loss that pulls the modality centers of each identity close to mitigate the cross-modality discrepancy, and pulls the sample centers of each identity close to alleviate the sample discrepancy.
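A loss with these two pulling directions can be sketched as a sum of (1) distances between the per-modality centers of the same identity and (2) distances from each sample to its identity center. This is a minimal unweighted sketch, not the paper's exact formulation; margins and balancing weights are omitted.

```python
import numpy as np

def cross_directional_center_loss(feats, ids, mods):
    """Toy two-term center loss:
    cross term  - pulls the modality centers of each identity together;
    sample term - pulls every sample toward its own identity center."""
    feats, ids, mods = map(np.asarray, (feats, ids, mods))
    cross, sample = 0.0, 0.0
    for i in np.unique(ids):
        sel = ids == i
        centers = [feats[sel & (mods == m)].mean(axis=0)
                   for m in np.unique(mods[sel])]
        # squared distance between modality centers of the same identity
        for a in range(len(centers)):
            for b in range(a + 1, len(centers)):
                cross += np.sum((centers[a] - centers[b]) ** 2)
        # squared distance of each sample to its identity center
        id_center = feats[sel].mean(axis=0)
        sample += np.sum((feats[sel] - id_center) ** 2)
    return cross + sample

# two identities whose RGB and infrared features already coincide -> zero loss
feats = np.array([[1., 0.], [1., 0.], [0., 1.], [0., 1.]])
ids   = np.array([0, 0, 1, 1])
mods  = np.array([0, 1, 0, 1])
loss = cross_directional_center_loss(feats, ids, mods)
```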
no code implementations • 2 Jun 2022 • Chenglong Li, Xiaobin Yang, Guohao Wang, Aihua Zheng, Chang Tan, Ruoran Jia, Jin Tang
License plate recognition plays a critical role in many practical applications, but license plates of large vehicles are difficult to recognize due to factors such as low resolution, contamination, low illumination, and occlusion, to name a few.
1 code implementation • 11 Feb 2022 • Yabin Zhu, Chenglong Li, Yao Liu, Xiao Wang, Jin Tang, Bin Luo, Zhixiang Huang
Tiny objects, which frequently appear in practical applications, have weak appearance and features, and are receiving increasing interest in many vision tasks, such as object detection and segmentation.
2 code implementations • AAAI 2022 • Yun Xiao, Mengmeng Yang, Chenglong Li, Lei Liu, Jin Tang
RGBT tracking usually suffers from various challenging factors of fast motion, scale variation, illumination variation, thermal crossover and occlusion, to name a few.
no code implementations • 8 Nov 2021 • Chenglong Li, Tianhao Zhu, Lei Liu, Xiaonan Si, Zilin Fan, Sulan Zhai
To promote the research and development of cross-modal object tracking, we propose a new algorithm, which learns the modality-aware target representation to mitigate the appearance gap between RGB and NIR modalities in the tracking process.
1 code implementation • 27 Sep 2021 • Chenglong Li, Emmeric Tanghe, Jaron Fontaine, Luc Martens, Jac Romme, Gaurav Singh, Eli de Poorter, Wout Joseph
Due to its high delay resolution, the ultra-wideband (UWB) technique has been widely adopted for fine-grained indoor localization.
1 code implementation • 27 Apr 2021 • Chenglong Li, Wanlin Xue, Yaqing Jia, Zhichen Qu, Bin Luo, Jin Tang, Dengdi Sun
RGBT tracking receives a surge of interest in the computer vision community, but this research field lacks a large-scale and high-diversity benchmark dataset, which is essential for both the training of deep RGBT trackers and the comprehensive evaluation of RGBT tracking methods.
no code implementations • 27 Mar 2021 • Chenglong Li, Sibren De Bast, Emmeric Tanghe, Sofie Pollin, Wout Joseph
On top of the available MPCs, we propose a generalized fingerprinting system based on different single-metric and hybrid-metric schemes.
1 code implementation • CVPR 2021 • Lingbo Liu, Jiaqi Chen, Hefeng Wu, Guanbin Li, Chenglong Li, Liang Lin
Extensive experiments conducted on the RGBT-CC benchmark demonstrate the effectiveness of our framework for RGBT crowd counting.
no code implementations • 26 Nov 2020 • Alicia Y. Tsai, Selim Gunay, Minjune Hwang, Pengyuan Zhai, Chenglong Li, Laurent El Ghaoui, Khalid M. Mosalam
Post-hazard reconnaissance for natural disasters (e.g., earthquakes) is important for understanding the performance of the built environment, speeding up the recovery, enhancing resilience and making informed decisions related to current and future hazards.
no code implementations • 18 Nov 2020 • Aihua Zheng, Xia Sun, Chenglong Li, Jin Tang
Comprehensive experiments against state-of-the-art methods on two multi-viewpoint benchmark datasets, VeRi and VeRi-Wild, validate the promising performance of the proposed method for unsupervised vehicle Re-ID, both with and without domain adaptation.
no code implementations • 14 Nov 2020 • Andong Lu, Chenglong Li, Yuqing Yan, Jin Tang, Bin Luo
Specifically, we use the modified VGG-M as the generality adapter to extract modality-shared target representations. To extract modality-specific features while reducing the computational complexity, we design a modality adapter, which adds a small block to the generality adapter in each layer and each modality in a parallel manner.
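The shared-plus-small-parallel-block idea can be sketched with plain matrices: a full-size shared layer for both modalities, plus a cheap low-rank block per modality added in parallel. This is an assumption-laden toy (dense layers instead of VGG-M convolutions, low-rank factors standing in for the "small block"), not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, W):
    """One generic fully connected layer with ReLU."""
    return np.maximum(x @ W, 0.0)

# generality adapter: a full-size layer shared by all modalities
W_shared = rng.normal(size=(8, 8))

# modality adapters: small low-rank blocks (8 -> 2 -> 8), one per modality
A_rgb, B_rgb = rng.normal(size=(8, 2)), rng.normal(size=(2, 8))
A_tir, B_tir = rng.normal(size=(8, 2)), rng.normal(size=(2, 8))

def forward(x, modality):
    shared = layer(x, W_shared)                # modality-shared features
    A, B = (A_rgb, B_rgb) if modality == "rgb" else (A_tir, B_tir)
    specific = np.maximum(x @ A @ B, 0.0)      # cheap modality-specific path
    return shared + specific                   # parallel addition per layer

x = rng.normal(size=(2, 8))
y_rgb, y_tir = forward(x, "rgb"), forward(x, "tir")
```

The low-rank factors carry 8×2 + 2×8 = 32 parameters versus 64 for the shared layer, which mirrors the goal of keeping the modality-specific path lightweight.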
no code implementations • 14 Nov 2020 • Andong Lu, Cun Qian, Chenglong Li, Jin Tang, Liang Wang
To deal with the tracking failure caused by sudden camera motion, which often occurs in RGBT tracking, we design a resampling strategy based on optical flow algorithms.
1 code implementation • 3 Aug 2020 • Qiao Liu, Xin Li, Zhenyu He, Chenglong Li, Jun Li, Zikun Zhou, Di Yuan, Jing Li, Kai Yang, Nana Fan, Feng Zheng
We evaluate and analyze more than 30 trackers on LSOTB-TIR to provide a series of baselines, and the results show that deep trackers achieve promising performance.
Thermal Infrared Object Tracking
no code implementations • ECCV 2020 • Chenglong Li, Lei Liu, Andong Lu, Qing Ji, Jin Tang
RGB and thermal source data suffer from both shared and specific challenges, and how to explore and exploit them plays a critical role to represent the target appearance in RGBT tracking.
2 code implementations • 7 Jul 2020 • Zhengzheng Tu, Yan Ma, Zhun Li, Chenglong Li, Jieming Xu, Yongtao Liu
Salient object detection in complex scenes and environments is a challenging research topic.
2 code implementations • 5 Jun 2020 • Zhengzheng Tu, Zhun Li, Chenglong Li, Yang Lang, Jin Tang
RGBT salient object detection (SOD) aims to segment the common prominent regions of visible and thermal infrared images.
2 code implementations • 5 May 2020 • Zhengzheng Tu, Zhun Li, Chenglong Li, Yang Lang, Jin Tang
Then, we design a novel dual-decoder to conduct the interactions of multi-level features, two modalities and global contexts.
no code implementations • 17 Mar 2020 • Zhengzheng Tu, Chun Lin, Chenglong Li, Jin Tang, Bin Luo
Classifying confusing samples during RGBT tracking is a quite challenging problem that has not yet been satisfactorily solved.
1 code implementation • 1 Dec 2019 • Yanjun Li, Mohammad A. Rezaei, Chenglong Li, Xiaolin Li, Dapeng Wu
The cornerstone of computational drug design is the calculation of binding affinity between two biological counterparts, especially a chemical compound, i.e., a ligand, and a protein.
no code implementations • 12 Aug 2019 • Rui Yang, Yabin Zhu, Xiao Wang, Chenglong Li, Jin Tang
RGB-Thermal object tracking attempts to locate the target object using complementary visual and thermal infrared data.
no code implementations • 7 Aug 2019 • Zhengzheng Tu, Yan Ma, Chenglong Li, Jin Tang, Bin Luo
To maintain the clear edge structure of salient objects, we propose a novel Edge-guided Non-local FCN (ENFNet) to perform edge guided feature learning for accurate salient object detection.
no code implementations • 5 Aug 2019 • Chenglong Li, Yan Huang, Liang Wang, Jin Tang, Liang Lin
Many state-of-the-art trackers usually resort to the pretrained convolutional neural network (CNN) model for correlation filtering, in which deep features could usually be redundant, noisy and less discriminative for some certain instances, and the tracking performance might thus be affected.
1 code implementation • 24 Jul 2019 • Chenglong Li, Wei Xia, Yan Yan, Bin Luo, Jin Tang
These advantages of thermal infrared cameras make it feasible to segment semantic objects in both day and night.
no code implementations • 24 Jul 2019 • Yabin Zhu, Chenglong Li, Bin Luo, Jin Tang, Xiao Wang
We propose to prune the densely aggregated features of all modalities in a collaborative way across different modalities.
no code implementations • 17 Jul 2019 • Chenglong Li, Andong Lu, Aihua Zheng, Zhengzheng Tu, Jin Tang
Specifically, the generality adapter extracts shared object representations, the modality adapter encodes modality-specific information to exploit their complementary advantages, and the instance adapter models the appearance properties and temporal variations of a specific object.
no code implementations • 22 May 2019 • Hongchao Li, Xianmin Lin, Aihua Zheng, Chenglong Li, Bin Luo, Ran He, Amir Hussain
In particular, our network is end-to-end trained and contains three subnetworks of deep features embedded by the corresponding attributes (i.e., camera view, vehicle type and vehicle color).
1 code implementation • 16 May 2019 • Zhengzheng Tu, Tian Xia, Chenglong Li, Xiaoxiao Wang, Yan Ma, Jin Tang
In this paper, we propose an effective approach for RGB-T image saliency detection.
no code implementations • 27 Nov 2018 • Xiao Wang, Tao Sun, Rui Yang, Chenglong Li, Bin Luo, Jin Tang
In this paper, we propose an efficient quality-aware deep neural network to model the weight of data from each domain using deep reinforcement learning (DRL).
no code implementations • 25 Nov 2018 • Xiao Wang, Chenglong Li, Rui Yang, Tianzhu Zhang, Jin Tang, Bin Luo
To refine the states of the target and re-track the target when it is back to view from heavy occlusion and out of view, we elaborately design a novel subnetwork to learn the target-driven visual attentions from the guidance of both visual and natural language cues.
no code implementations • 24 Nov 2018 • Yabin Zhu, Chenglong Li, Bin Luo, Jin Tang
This paper investigates how to perform robust visual tracking in adverse and challenging conditions using complementary visual and thermal infrared data (RGBT tracking).
1 code implementation • ECCV 2018 • Chenglong Li, Chengli Zhu, Yan Huang, Jin Tang, Liang Wang
To address this problem, this paper presents a novel approach to suppress background effects for RGB-T tracking.
no code implementations • CVPR 2018 • Xiao Wang, Chenglong Li, Bin Luo, Jin Tang
Based on the generated hard positive samples, we train a Siamese network for visual tracking and our experiments validate the effectiveness of the introduced algorithm.
no code implementations • 23 May 2018 • Chenglong Li, Xinyan Liang, Yijuan Lu, Nan Zhao, Jin Tang
RGB-Thermal (RGB-T) object tracking is receiving increasing attention due to the strongly complementary benefits of thermal information to visible data.
no code implementations • 4 Oct 2017 • Chenglong Li, Liang Lin, WangMeng Zuo, Jin Tang, Ming-Hsuan Yang
First, the graph is initialized by assigning binary weights of some image patches to indicate the object and background patches according to the predicted bounding box.
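That initialization step can be sketched directly: each image patch gets weight 1 if it lies inside the predicted bounding box (object) and 0 otherwise (background). The patch-center test below is an illustrative choice; the paper's exact patch/box assignment rule is not given in the excerpt.

```python
import numpy as np

def init_patch_weights(img_h, img_w, patch, box):
    """Binary patch weights for graph initialization: weight 1 if the
    patch center falls inside the predicted bounding box, else 0.
    box = (x0, y0, x1, y1) in pixel coordinates."""
    x0, y0, x1, y1 = box
    rows, cols = img_h // patch, img_w // patch
    weights = np.zeros((rows, cols), dtype=int)
    for r in range(rows):
        for c in range(cols):
            cy, cx = r * patch + patch / 2, c * patch + patch / 2
            if x0 <= cx <= x1 and y0 <= cy <= y1:
                weights[r, c] = 1   # object patch
    return weights

# 64x64 image, 16x16 patches, box covering the central region
w = init_patch_weights(64, 64, patch=16, box=(16, 16, 48, 48))
```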
1 code implementation • 11 Jan 2017 • Chenglong Li, Guizhao Wang, Yunpeng Ma, Aihua Zheng, Bin Luo, Jin Tang
In particular, we introduce a weight for each modality to describe the reliability, and integrate them into the graph-based manifold ranking algorithm to achieve adaptive fusion of different source data.
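Fusing per-modality affinity graphs with reliability weights before the standard manifold-ranking closed form can be sketched as below. The weighted-sum fusion and the fixed `alpha` are simplifying assumptions; the paper optimizes the weights jointly, which this sketch does not.

```python
import numpy as np

def weighted_manifold_ranking(affinities, weights, y, alpha=0.5):
    """Graph-based manifold ranking with per-modality reliability
    weights: affinity graphs are fused as a weighted sum, then the
    usual closed-form ranking f = (I - alpha * S)^(-1) y is applied."""
    W = sum(w * A for w, A in zip(weights, affinities))
    d = W.sum(axis=1)
    S = W / np.sqrt(np.outer(d, d))          # symmetric normalization
    n = W.shape[0]
    return np.linalg.solve(np.eye(n) - alpha * S, y)

# two modalities sharing a 3-node path graph; node 0 is the query seed
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
scores = weighted_manifold_ranking([A, A], weights=[0.7, 0.3],
                                   y=np.array([1., 0., 0.]))
```

With a moderate `alpha`, the seed node keeps the highest score and the ranking decays with graph distance from it.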
no code implementations • CVPR 2015 • Chenglong Li, Liang Lin, WangMeng Zuo, Shuicheng Yan, Jin Tang
In particular, the affinity matrix with the rank fixed can be decomposed into two sub-matrices of low rank, and then we iteratively optimize them with closed-form solutions.
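The alternating scheme with closed-form subproblems can be illustrated with plain least squares: fixing V, the optimal U of min ‖A − UVᵀ‖_F is A V (VᵀV)⁻¹, and symmetrically for V. This is a generic alternating-least-squares sketch of the idea, not the paper's full algorithm (which includes additional constraints on the affinity matrix).

```python
import numpy as np

def low_rank_factorize(A, rank, iters=30, seed=0):
    """Alternately solve min ||A - U V^T||_F over U and V;
    each subproblem is linear least squares with a closed-form
    solution, so every update is exact."""
    rng = np.random.default_rng(seed)
    n, m = A.shape
    V = rng.normal(size=(m, rank))
    for _ in range(iters):
        U = A @ V @ np.linalg.pinv(V.T @ V)    # closed-form update of U
        V = A.T @ U @ np.linalg.pinv(U.T @ U)  # closed-form update of V
    return U, V

# an exactly rank-2 matrix is recovered (almost) perfectly
A = np.outer([1., 2., 3.], [1., 0., 1.]) + np.outer([0., 1., 0.], [0., 1., 1.])
U, V = low_rank_factorize(A, rank=2)
err = np.linalg.norm(A - U @ V.T)
```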
no code implementations • CVPR 2013 • Keze Wang, Liang Lin, Jiangbo Lu, Chenglong Li, Keyang Shi
In this paper, we propose a unified framework called PISA, which stands for Pixelwise Image Saliency Aggregating various bottom-up cues and priors.