no code implementations • 19 Dec 2024 • Rui Ding, Liang Yong, Sihuan Zhao, Jing Nie, Lihui Chen, Haijun Liu, Xichuan Zhou
To this end, in this paper, we propose a Progressive Fine-to-Coarse Reconstruction (PFCR) method for accurate PTQ, which significantly improves the performance of low-bit quantized vision transformers.
no code implementations • 19 Oct 2024 • Yue Zhan, Zhihong Zeng, Haijun Liu, Xiaoheng Tan, Yinli Tian
A primary challenge of this issue is how to fuse the complementary information from RGB and depth effectively.
no code implementations • 6 Sep 2024 • Xi Su, Xiangfei Shen, Mingyang Wan, Jing Nie, Lihui Chen, Haijun Liu, Xichuan Zhou
In recent years, research on RGB SR has shown that models pre-trained on large-scale benchmark datasets can greatly improve performance on unseen data, which may also serve as a remedy for HSI super-resolution.
no code implementations • 29 Aug 2024 • Shiguang Wang, Tao Xie, Haijun Liu, Xingcheng Zhang, Jian Cheng
Channel pruning is one of the most widespread techniques for compressing deep neural networks while maintaining their performance.
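A minimal sketch of the generic idea behind channel pruning, using a magnitude (L1-norm) criterion to rank and keep the strongest output channels; this is a common baseline criterion for illustration, not necessarily the selection rule used in the paper above:

```python
import numpy as np

def prune_channels_l1(weights, keep_ratio=0.5):
    """Rank the output channels of a conv weight tensor of shape
    (out_channels, in_channels, kh, kw) by L1 norm and keep only the
    strongest fraction. Magnitude-based ranking is an illustrative
    assumption, not the paper's specific criterion."""
    out_channels = weights.shape[0]
    l1 = np.abs(weights).reshape(out_channels, -1).sum(axis=1)
    n_keep = max(1, int(out_channels * keep_ratio))
    # indices of the n_keep channels with the largest L1 norm, in order
    keep = np.sort(np.argsort(l1)[::-1][:n_keep])
    return weights[keep], keep

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 3, 3, 3))
pruned, kept = prune_channels_l1(w, keep_ratio=0.5)
print(pruned.shape)  # (4, 3, 3, 3)
```

In practice the corresponding input channels of the next layer must be pruned consistently, which is where most of the engineering effort in channel pruning lies.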
1 code implementation • journal 2023 • Chukwuemeka Clinton Atabansi, Jing Nie, Haijun Liu, Qianqian Song, Lingfeng Yan, Xichuan Zhou
Transformers have been widely used in many computer vision challenges and have shown the capability of producing better results than convolutional neural networks (CNNs).
no code implementations • 14 Jul 2023 • Haijun Liu, Xi Su, Xiangfei Shen, Lihui Chen, Xichuan Zhou
Our method introduces a separation training loss based on a latent binary mask to separately constrain the background and anomalies in the estimated image.
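A minimal sketch of a mask-based separation loss, assuming a per-pixel latent binary mask (1 = background, 0 = anomaly), a reconstruction term on background pixels, and a suppression term on anomaly reconstruction; the exact form and weighting of each term are illustrative assumptions, not the paper's definition:

```python
import numpy as np

def separation_loss(x, x_hat, mask, lam=0.1):
    """Sketch: constrain background and anomalies separately via a
    latent binary mask. Background pixels are pushed toward accurate
    reconstruction; reconstruction of anomalous pixels is suppressed
    so anomalies stand out as residuals."""
    bg = mask * (x - x_hat) ** 2       # reconstruct background well
    an = (1 - mask) * x_hat ** 2       # keep anomalies out of the estimate
    return bg.mean() + lam * an.mean()
```

Anomalies are then detectable from the reconstruction residual, since the model is discouraged from reproducing them.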
no code implementations • CVPR 2023 • Shiguang Wang, Tao Xie, Jian Cheng, Xingcheng Zhang, Haijun Liu
Technically, MDL-NAS constructs a coarse-to-fine search space, where the coarse search space offers various optimal architectures for different tasks while the fine search space provides fine-grained parameter sharing to tackle the inherent obstacles of multi-domain learning.
no code implementations • 7 Jul 2021 • Chengzhi Jiang, Yanzhou Su, Wen Wang, Haiwei Bai, Haijun Liu, Jian Cheng
This method, named IntraLoss, explicitly performs gradient enhancement in the anisotropic region so that the intra-class distribution continues to shrink, yielding an isotropic, more compact intra-class distribution and a larger margin between identities.
1 code implementation • 9 Dec 2020 • Haijun Liu, Yanxia Chai, Xiaoheng Tan, Dong Li, Xichuan Zhou
In this letter, we propose a conceptually simple and effective dual-granularity triplet loss for visible-thermal person re-identification (VT-ReID).
Ranked #2 on Cross-Modal Person Re-Identification on RegDB
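The idea of two granularities can be sketched as a coarse, center-level cross-modality triplet term on identity centers plus a fine, sample-level triplet term; the mining strategy and pairing below are simplifying assumptions for clarity, not the paper's exact formulation:

```python
import numpy as np

def hinge_triplet(d_ap, d_an, margin=0.3):
    """Standard hinge-form triplet: pull positives within margin of negatives."""
    return max(0.0, d_ap - d_an + margin)

def dual_granularity_loss(vis, thm, labels, margin=0.3):
    """Sketch of a dual-granularity triplet loss across visible (vis)
    and thermal (thm) features with shared identity labels."""
    classes = np.unique(labels)
    v_cen = {c: vis[labels == c].mean(0) for c in classes}
    t_cen = {c: thm[labels == c].mean(0) for c in classes}
    coarse = 0.0
    for c in classes:                       # coarse: identity centers
        for c2 in classes:
            if c2 == c:
                continue
            coarse += hinge_triplet(np.linalg.norm(v_cen[c] - t_cen[c]),
                                    np.linalg.norm(v_cen[c] - t_cen[c2]),
                                    margin)
    fine = 0.0
    for i, c in enumerate(labels):          # fine: individual samples
        pos = thm[labels == c]
        neg = thm[labels != c]
        d_ap = min(np.linalg.norm(vis[i] - p) for p in pos)
        d_an = min(np.linalg.norm(vis[i] - n) for n in neg)
        fine += hinge_triplet(d_ap, d_an, margin)
    return coarse + fine
```

When identities are already well separated across modalities, both terms vanish, which is the desired fixed point of training.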
1 code implementation • 14 Aug 2020 • Haijun Liu, Xiaoheng Tan, Xichuan Zhou
By carefully splitting the ResNet50 model to construct the modality-specific feature-extraction network and the modality-shared feature-embedding network, we experimentally demonstrate the effect of parameter sharing in the two-stream network for VT Re-ID.
Ranked #1 on Cross-Modal Person Re-Identification on RegDB
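The two-stream split described above can be sketched as two modality-specific projections feeding one shared embedding layer; the layer sizes and the single shared linear layer are illustrative assumptions standing in for the deeper backbone split:

```python
import numpy as np

rng = np.random.default_rng(0)

class TwoStreamEmbed:
    """Sketch of a two-stream design for VT Re-ID: each modality has
    its own feature-extraction weights, then both streams share one
    embedding layer (mirroring a backbone split into modality-specific
    and modality-shared parts)."""
    def __init__(self, dim_in=16, dim_mid=8, dim_out=4):
        self.w_vis = rng.normal(size=(dim_in, dim_mid))      # visible-specific
        self.w_thm = rng.normal(size=(dim_in, dim_mid))      # thermal-specific
        self.w_shared = rng.normal(size=(dim_mid, dim_out))  # shared embedding

    def forward(self, x, modality):
        w = self.w_vis if modality == "visible" else self.w_thm
        h = np.maximum(x @ w, 0.0)   # modality-specific ReLU features
        return h @ self.w_shared     # modality-shared embedding space
```

Moving the split point earlier or later in the backbone trades modality-specific capacity against shared capacity, which is exactly the parameter-sharing effect the paper studies.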
1 code implementation • 9 Jun 2020 • Xichuan Zhou, Kui Liu, Cong Shi, Haijun Liu, Ji Liu
Recent research on the information bottleneck sheds new light on ongoing attempts to open the black box of neural signal encoding.
no code implementations • 23 Jul 2019 • Haijun Liu, Jian Cheng
To address these two issues, we propose enhancing discriminative feature learning (EDFL) with two extremely simple means addressing two core aspects: (1) skip connections that incorporate mid-level features, making the person features more discriminative and robust, and (2) a dual-modality triplet loss that guides training by simultaneously considering the cross-modality discrepancy and intra-modality variations.
Cross-Modality Person Re-identification, Person Re-Identification
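The first ingredient, incorporating mid-level features through a skip connection, can be sketched as pooling mid- and high-level feature maps and fusing them into one descriptor; the fusion choice (concatenation after global average pooling) is an assumption for illustration:

```python
import numpy as np

def aggregate_with_skip(mid_feat, high_feat):
    """Sketch: fuse mid-level and high-level feature maps, each of
    shape (batch, h, w, channels), into a single person descriptor by
    global average pooling followed by concatenation, so the final
    feature keeps both levels of abstraction."""
    mid_vec = mid_feat.mean(axis=(1, 2))    # (batch, c_mid)
    high_vec = high_feat.mean(axis=(1, 2))  # (batch, c_high)
    return np.concatenate([mid_vec, high_vec], axis=-1)
```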
no code implementations • 30 May 2019 • Haijun Liu, Jian Cheng, Wen Wang, Yanzhou Su
Many loss functions based on pair distances have been presented in the literature to guide the training of deep metric learning.
no code implementations • 30 May 2019 • Haijun Liu, Jian Cheng, Shiguang Wang, Wen Wang
Unlike existing cross-domain Re-ID methods, which leverage auxiliary information from unlabeled target-domain data, we aim to enhance model generalization and adaptation through discriminative feature learning, directly applying a pre-trained model to new domains (datasets) without exploiting any information from the target domains.
no code implementations • 19 Oct 2018 • Wen Wang, Yongjian Wu, Haijun Liu, Shiguang Wang, Jian Cheng
Temporal action detection aims not only to recognize the action category but also to detect the start and end time of each action instance in an untrimmed video.
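The standard criterion for matching a detected segment against a ground-truth instance in this setting is temporal IoU over (start, end) intervals, which can be sketched as:

```python
def temporal_iou(seg_a, seg_b):
    """IoU between two temporal segments given as (start, end) pairs,
    the usual matching/evaluation criterion in temporal action detection."""
    start = max(seg_a[0], seg_b[0])
    end = min(seg_a[1], seg_b[1])
    inter = max(0.0, end - start)            # overlap length, 0 if disjoint
    union = (seg_a[1] - seg_a[0]) + (seg_b[1] - seg_b[0]) - inter
    return inter / union if union > 0 else 0.0
```

Detection metrics such as mAP@tIoU are then computed by counting a prediction as correct when its temporal IoU with an unmatched ground-truth instance exceeds a threshold.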
no code implementations • 12 Apr 2018 • Shiguang Wang, Jian Cheng, Haijun Liu, Ming Tang
To take advantage of the body parts and context information for pedestrian detection, we propose the part and context network (PCN) in this work.
10 code implementations • 17 Jan 2018 • Feng Wang, Weiyang Liu, Haijun Liu, Jian Cheng
In this work, we introduce a novel additive angular margin for the Softmax loss, which is intuitively appealing and more interpretable than existing approaches.
Ranked #2 on Face Identification on Trillion Pairs Dataset
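The core idea, applying an additive margin to the target-class cosine similarity before a scaled softmax, can be sketched as below; this uses the additive cosine-margin form s·(cos θ_y − m), and the scale/margin values are common defaults rather than necessarily the paper's:

```python
import numpy as np

def am_softmax_loss(feat, weight, label, s=30.0, m=0.35):
    """Sketch of an additive-margin softmax loss: L2-normalize features
    and class weights so logits are cosine similarities, subtract a
    margin m from the target-class cosine, scale by s, then apply
    standard cross-entropy."""
    f = feat / np.linalg.norm(feat, axis=1, keepdims=True)
    w = weight / np.linalg.norm(weight, axis=0, keepdims=True)
    cos = f @ w                                  # (batch, classes) cosines
    idx = np.arange(len(label))
    logits = s * cos
    logits[idx, label] = s * (cos[idx, label] - m)   # margin on target class
    # numerically stable cross-entropy
    logits -= logits.max(axis=1, keepdims=True)
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -logp[idx, label].mean()
```

Because the margin shrinks only the target-class logit, the loss with m > 0 upper-bounds the plain softmax loss, forcing a larger angular gap between classes at convergence.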