Search Results for author: Lichao Huang

Found 23 papers, 18 papers with code

WidthFormer: Toward Efficient Transformer-based BEV View Transformation

1 code implementation • 8 Jan 2024 • Chenhongyi Yang, Tianwei Lin, Lichao Huang, Elliot J. Crowley

We present WidthFormer, a novel transformer-based module to compute Bird's-Eye-View (BEV) representations from multi-view cameras for real-time autonomous-driving applications.

3D Object Detection • Autonomous Driving +4

EDA: Evolving and Distinct Anchors for Multimodal Motion Prediction

1 code implementation • 15 Dec 2023 • Longzhong Lin, Xuewu Lin, Tianwei Lin, Lichao Huang, Rong Xiong, Yue Wang

Motion prediction is a crucial task in autonomous driving, and one of its major challenges lies in the multimodality of future behaviors.

Autonomous Driving • Motion Prediction +1

Sparse4D v3: Advancing End-to-End 3D Detection and Tracking

1 code implementation • 20 Nov 2023 • Xuewu Lin, Zixiang Pei, Tianwei Lin, Lichao Huang, Zhizhong Su

We introduce two auxiliary training tasks (Temporal Instance Denoising and Quality Estimation) and propose decoupled attention to make structural improvements, leading to significant enhancements in detection performance.

Autonomous Driving • Denoising

Sparse4D v2: Recurrent Temporal Fusion with Sparse Model

1 code implementation • 23 May 2023 • Xuewu Lin, Tianwei Lin, Zixiang Pei, Lichao Huang, Zhizhong Su

Firstly, it reduces the computational complexity of temporal fusion from $O(T)$ to $O(1)$, resulting in significant improvements in inference speed and memory usage.
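The $O(T)$ vs. $O(1)$ distinction can be illustrated with a toy sketch (plain Python with assumed names and shapes, not the Sparse4D v2 implementation): stacked fusion re-aggregates all $T$ past frames at every step, while recurrent fusion carries forward a single propagated state.

```python
def stacked_fusion(frames):
    """Naive temporal fusion: every step re-aggregates all T past frames,
    so the per-step work grows as O(T)."""
    n = len(frames)
    return [sum(vals) / n for vals in zip(*frames)]

def recurrent_fusion(state, frame, alpha=0.5):
    """Recurrent fusion: only one propagated state is combined with the
    newest frame, so the per-step work is O(1) regardless of history length."""
    return [alpha * s + (1 - alpha) * f for s, f in zip(state, frame)]

# Toy 4-dim feature vectors over 8 timesteps.
frames = [[float(t + d) for d in range(4)] for t in range(8)]

stacked_out = stacked_fusion(frames)      # touches all 8 frames at once

state = frames[0]
for f in frames[1:]:                      # each step touches exactly 2 tensors
    state = recurrent_fusion(state, f)
```

The memory argument is the same: the stacked variant must keep all $T$ frame features resident, while the recurrent variant stores only the single fused state.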

Plug and Play Active Learning for Object Detection

1 code implementation • CVPR 2024 • Chenhongyi Yang, Lichao Huang, Elliot J. Crowley

To overcome this challenge, we introduce Plug and Play Active Learning (PPAL), a simple and effective AL strategy for object detection.

Active Learning • Diversity +4

Contrastive Object-level Pre-training with Spatial Noise Curriculum Learning

1 code implementation • 26 Nov 2021 • Chenhongyi Yang, Lichao Huang, Elliot J. Crowley

The goal of contrastive learning based pre-training is to leverage large quantities of unlabeled data to produce a model that can be readily adapted downstream.

Contrastive Learning • Instance Segmentation +2

Boundary-preserving Mask R-CNN

1 code implementation • ECCV 2020 • Tianheng Cheng, Xinggang Wang, Lichao Huang, Wenyu Liu

Moreover, it is unsurprising that BMask R-CNN obtains a more pronounced improvement when the evaluation criterion demands better localization (e.g., AP$_{75}$), as shown in Fig. 1.

Instance Segmentation • Object +1

Image Super-Resolution with Cross-Scale Non-Local Attention and Exhaustive Self-Exemplars Mining

3 code implementations • CVPR 2020 • Yiqun Mei, Yuchen Fan, Yuqian Zhou, Lichao Huang, Thomas S. Huang, Humphrey Shi

By combining the new CS-NL prior with local and in-scale non-local priors in a powerful recurrent fusion cell, we can find more cross-scale feature correlations within a single low-resolution (LR) image.

Feature Correlation • Image Super-Resolution

Multi-object Tracking via End-to-end Tracklet Searching and Ranking

no code implementations • 4 Mar 2020 • Tao Hu, Lichao Huang, Han Shen

Recent works in multiple object tracking use a sequence model to calculate the similarity score between detections and previous tracklets.

Multi-Object Tracking • Multiple Object Tracking

Learned Video Compression via Joint Spatial-Temporal Correlation Exploration

no code implementations • 13 Dec 2019 • Haojie Liu, Han Shen, Lichao Huang, Ming Lu, Tong Chen, Zhan Ma

Traditional video compression technologies have been developed over decades in pursuit of higher coding efficiency.

Optical Flow Estimation • Video Compression

RDSNet: A New Deep Architecture for Reciprocal Object Detection and Instance Segmentation

1 code implementation • 11 Dec 2019 • Shaoru Wang, Yongchao Gong, Junliang Xing, Lichao Huang, Chang Huang, Weiming Hu

To reciprocate these two tasks, we design a two-stream structure to learn features on both the object level (i.e., bounding boxes) and the pixel level (i.e., instance masks) jointly.

Instance Segmentation • Object +5

Real Time Visual Tracking using Spatial-Aware Temporal Aggregation Network

1 code implementation • 2 Aug 2019 • Tao Hu, Lichao Huang, Xian-Ming Liu, Han Shen

Our tracker achieves leading performance on OTB2013, OTB2015, VOT2015, VOT2016 and LaSOT, and operates at a real-time speed of 26 FPS, demonstrating that our method is both effective and practical.

Motion Estimation • Real-Time Visual Tracking

Object Detection in Video with Spatial-temporal Context Aggregation

no code implementations • 11 Jul 2019 • Hao Luo, Lichao Huang, Han Shen, Yuan Li, Chang Huang, Xinggang Wang

Without any bells and whistles, our method obtains 80.3% mAP on the ImageNet VID dataset, surpassing the previous state of the art.

Object • Object Detection +1

Mask Scoring R-CNN

3 code implementations • CVPR 2019 • Zhaojin Huang, Lichao Huang, Yongchao Gong, Chang Huang, Xinggang Wang

In this paper, we study this problem and propose Mask Scoring R-CNN which contains a network block to learn the quality of the predicted instance masks.
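The scoring idea can be sketched in toy form (illustrative code, not the released Mask Scoring R-CNN): the training target for the quality block is the IoU between a predicted and a ground-truth binary mask, and the final mask confidence is the classification score reweighted by the predicted mask IoU.

```python
def mask_iou(mask_a, mask_b):
    """IoU of two binary masks (flattened to 0/1 lists): the training
    target for a mask-quality block."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    union = sum(a or b for a, b in zip(mask_a, mask_b))
    return inter / union if union else 0.0

def mask_score(cls_score, predicted_mask_iou):
    """Calibrated mask confidence: classification score reweighted by the
    predicted mask quality."""
    return cls_score * predicted_mask_iou

# A confident classification paired with a poor mask gets a lower final score.
gt   = [1, 1, 1, 1, 0, 0]
pred = [1, 1, 0, 0, 0, 0]            # only half the object covered
iou  = mask_iou(pred, gt)            # 0.5
print(mask_score(0.9, iou))          # prints 0.45 instead of 0.9
```

This directly addresses the mismatch the paper studies: a high classification score alone does not guarantee a high-quality mask.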

General Classification • Instance Segmentation +2

CCNet: Criss-Cross Attention for Semantic Segmentation

4 code implementations • ICCV 2019 • Zilong Huang, Xinggang Wang, Yunchao Wei, Lichao Huang, Humphrey Shi, Wenyu Liu, Thomas S. Huang

Compared with the non-local block, the proposed recurrent criss-cross attention module requires 11x less GPU memory usage.
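The saving comes from the attention-map shape: a non-local block builds an $(HW) \times (HW)$ affinity map, while criss-cross attention only needs $(HW) \times (H+W-1)$, since each position attends to its own row and column. The arithmetic below uses an assumed 64x64 feature map for illustration; the quoted 11x figure is the measured GPU memory of the recurrent module, which also depends on resolution and implementation.

```python
def nonlocal_map_size(h, w):
    """Full non-local block: each of the H*W positions attends to all
    H*W positions, so the affinity map has (H*W)^2 entries."""
    return (h * w) * (h * w)

def crisscross_map_size(h, w):
    """Criss-cross attention: each position attends only to the H + W - 1
    positions in its own row and column."""
    return (h * w) * (h + w - 1)

# Assumed 64x64 feature map for illustration.
h = w = 64
ratio = nonlocal_map_size(h, w) / crisscross_map_size(h, w)  # ~32x smaller map
```

The ratio $HW / (H+W-1)$ grows with resolution, which is why the full non-local block becomes prohibitive on the large feature maps used in semantic segmentation.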

Ranked #7 on Semantic Segmentation on FoodSeg103 (using extra training data)

Computational Efficiency • Human Parsing +8

Tracklet Association Tracker: An End-to-End Learning-based Association Approach for Multi-Object Tracking

no code implementations • 5 Aug 2018 • Han Shen, Lichao Huang, Chang Huang, Wei Xu

Separating the task requires a hand-crafted training objective for the affinity-learning stage and a hand-crafted cost function for the data-association stage, which prevents the tracking objective from being learned directly from the features.

Multi-Object Tracking • Multiple Object Tracking +1

Parse Geometry from a Line: Monocular Depth Estimation with Partial Laser Observation

4 code implementations • 17 Oct 2016 • Yiyi Liao, Lichao Huang, Yue Wang, Sarath Kodagoda, Yinan Yu, Yong Liu

Many standard robotic platforms are equipped with at least a fixed 2D laser range finder and a monocular camera.

Depth Completion
