Search Results for author: Sanqing Qu

Found 13 papers, 9 papers with code

MAP: MAsk-Pruning for Source-Free Model Intellectual Property Protection

1 code implementation • 7 Mar 2024 • Boyang Peng, Sanqing Qu, Yong Wu, Tianpei Zou, Lianghua He, Alois Knoll, Guang Chen, Changjun Jiang

In this paper, we target a practical setting where only a well-trained source model is available and investigate how we can realize IP protection.

PCDepth: Pattern-based Complementary Learning for Monocular Depth Estimation by Best of Both Worlds

no code implementations • 29 Feb 2024 • Haotian Liu, Sanqing Qu, Fan Lu, Zongtao Bu, Florian Roehrbein, Alois Knoll, Guang Chen

Existing complementary learning approaches for monocular depth estimation (MDE) fuse intensity information from images with scene details from event data for better scene understanding.

Depth Prediction · Monocular Depth Estimation +2
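The PCDepth snippet above only states that intensity information from images is fused with scene details from event data; the paper's actual architecture is not reproduced here. The code below is a minimal, hypothetical sketch of that general fusion idea, not the PCDepth model: `NaiveImageEventFusion` and all channel sizes are assumptions for illustration.

```python
# Minimal sketch of image-event feature fusion for depth prediction.
# NOT the PCDepth architecture; it only illustrates combining intensity
# features (from images) with scene-detail features (from events).
import torch
import torch.nn as nn

class NaiveImageEventFusion(nn.Module):
    def __init__(self, img_channels=64, evt_channels=64, out_channels=64):
        super().__init__()
        # 1x1 convolution mixes the two concatenated feature streams
        self.fuse = nn.Conv2d(img_channels + evt_channels, out_channels, kernel_size=1)
        # simple per-pixel depth head
        self.head = nn.Conv2d(out_channels, 1, kernel_size=3, padding=1)

    def forward(self, img_feat, evt_feat):
        fused = self.fuse(torch.cat([img_feat, evt_feat], dim=1))
        return self.head(torch.relu(fused))

if __name__ == "__main__":
    img_feat = torch.randn(1, 64, 32, 32)   # hypothetical image feature map
    evt_feat = torch.randn(1, 64, 32, 32)   # hypothetical event feature map
    depth = NaiveImageEventFusion()(img_feat, evt_feat)
    print(depth.shape)  # torch.Size([1, 1, 32, 32])
```

Real complementary-learning methods typically fuse the modalities with more structured mechanisms (e.g. attention), but the concatenate-and-convolve baseline shows where the two feature streams meet.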

TMA: Temporal Motion Aggregation for Event-based Optical Flow

1 code implementation • ICCV 2023 • Haotian Liu, Guang Chen, Sanqing Qu, Yanping Zhang, Zhijun Li, Alois Knoll, Changjun Jiang

In this paper, we argue that temporal continuity is a vital element of event-based optical flow and propose a novel Temporal Motion Aggregation (TMA) approach to unlock its potential.

Event-based Optical Flow · Optical Flow Estimation
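The TMA entry argues that temporal continuity is central to event-based optical flow, but the snippet does not describe the module itself. The sketch below is only a hypothetical illustration of aggregating motion features across consecutive event slices with learned weights; `SimpleTemporalAggregation` is an assumed name and is not the paper's TMA module.

```python
# Illustrative sketch: softmax-weighted aggregation of motion features over
# time, exploiting temporal continuity. Not the TMA module from the paper.
import torch
import torch.nn as nn

class SimpleTemporalAggregation(nn.Module):
    def __init__(self, channels=128):
        super().__init__()
        # score each time step's motion features at every spatial location
        self.score = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, motion_feats):
        # motion_feats: (B, T, C, H, W) motion features from T consecutive event slices
        b, t, c, h, w = motion_feats.shape
        flat = motion_feats.view(b * t, c, h, w)
        weights = self.score(flat).view(b, t, 1, h, w).softmax(dim=1)
        return (motion_feats * weights).sum(dim=1)  # (B, C, H, W) aggregated motion

if __name__ == "__main__":
    feats = torch.randn(2, 5, 128, 24, 32)  # hypothetical motion features
    print(SimpleTemporalAggregation()(feats).shape)  # torch.Size([2, 128, 24, 32])
```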

Modality-Agnostic Debiasing for Single Domain Generalization

no code implementations • CVPR 2023 • Sanqing Qu, Yingwei Pan, Guang Chen, Ting Yao, Changjun Jiang, Tao Mei

We validate the superiority of our MAD in a variety of single-DG scenarios with different modalities, including recognition on 1D texts, 2D images, 3D point clouds, and semantic segmentation on 2D images.

Data Augmentation · Domain Generalization +1

Upcycling Models under Domain and Category Shift

3 code implementations • CVPR 2023 • Sanqing Qu, Tianpei Zou, Florian Roehrbein, Cewu Lu, Guang Chen, DaCheng Tao, Changjun Jiang

We examine the superiority of our GLC on multiple benchmarks with different category shift scenarios, including partial-set, open-set, and open-partial-set DA.

Clustering · Source-Free Domain Adaptation +2

BMD: A General Class-balanced Multicentric Dynamic Prototype Strategy for Source-free Domain Adaptation

1 code implementation • 6 Apr 2022 • Sanqing Qu, Guang Chen, Jing Zhang, Zhijun Li, Wei He, DaCheng Tao

Source-free Domain Adaptation (SFDA) aims to adapt a pre-trained source model to an unlabeled target domain without accessing the well-labeled source data, which is a much more practical setting due to data privacy, security, and transmission issues.

Clustering · Pseudo Label +1
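The BMD snippet describes the source-free setting rather than the prototype strategy itself. As a hedged illustration only, the function below shows the generic "multiple prototypes per class, pseudo-label by nearest prototype" idea; `multicentric_pseudo_labels` and the simple top-k confidence rule are assumptions for illustration, not the paper's class-balanced dynamic algorithm.

```python
# Minimal sketch of multicentric prototype pseudo-labeling for source-free
# domain adaptation. NOT the BMD algorithm: each class keeps its top-k most
# confident target features as prototypes, and pseudo-labels are assigned
# by cosine similarity to the nearest prototype.
import torch
import torch.nn.functional as F

def multicentric_pseudo_labels(feats, probs, k=3):
    # feats: (N, D) target features, probs: (N, C) source-model predictions
    n_classes = probs.shape[1]
    feats = F.normalize(feats, dim=1)
    protos, proto_cls = [], []
    for c in range(n_classes):
        topk = probs[:, c].topk(k).indices      # k most confident samples for class c
        protos.append(feats[topk])              # k prototypes per class (multicentric)
        proto_cls.append(torch.full((k,), c))
    protos = torch.cat(protos)                  # (C*k, D)
    proto_cls = torch.cat(proto_cls)            # (C*k,)
    sim = feats @ protos.t()                    # similarity to every prototype
    return proto_cls[sim.argmax(dim=1)]         # pseudo-label = class of nearest prototype

if __name__ == "__main__":
    feats = torch.randn(100, 32)                    # hypothetical target features
    probs = torch.randn(100, 5).softmax(dim=1)      # hypothetical source-model outputs
    print(multicentric_pseudo_labels(feats, probs).shape)  # torch.Size([100])
```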

HRegNet: A Hierarchical Network for Large-scale Outdoor LiDAR Point Cloud Registration

1 code implementation • ICCV 2021 • Fan Lu, Guang Chen, Yinlong Liu, Lijun Zhang, Sanqing Qu, Shu Liu, Rongqi Gu

Extensive experiments are conducted on two large-scale outdoor LiDAR point cloud datasets to demonstrate the high accuracy and efficiency of the proposed HRegNet.

Point Cloud Registration

PointINet: Point Cloud Frame Interpolation Network

1 code implementation • 18 Dec 2020 • Fan Lu, Guang Chen, Sanqing Qu, Zhijun Li, Yinlong Liu, Alois Knoll

Mechanical LiDAR sensors typically operate at frame rates of 10 to 20 Hz, much lower than those of other commonly used sensors such as cameras.

3D Point Cloud Interpolation
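The PointINet entry motivates frame interpolation with the low 10-20 Hz LiDAR frame rate. The code below is a deliberately naive interpolation baseline, not the PointINet network: it simply matches points across two sweeps by nearest neighbor and linearly blends positions at an assumed intermediate time `tau`.

```python
# Naive sketch of point cloud frame interpolation between two LiDAR sweeps.
# NOT PointINet: nearest-neighbor matching plus a linear motion assumption.
import torch

def naive_interpolate(frame_a, frame_b, tau=0.5):
    # frame_a: (N, 3), frame_b: (M, 3) point coordinates at times t and t+1
    dists = torch.cdist(frame_a, frame_b)         # (N, M) pairwise distances
    nn_idx = dists.argmin(dim=1)                  # nearest neighbor in frame_b per point
    matched = frame_b[nn_idx]                     # (N, 3) matched positions
    return (1.0 - tau) * frame_a + tau * matched  # linearly interpolated frame

if __name__ == "__main__":
    a = torch.randn(2048, 3)   # hypothetical sweep at time t (10-20 Hz LiDAR)
    b = torch.randn(2048, 3)   # hypothetical sweep at time t+1
    mid = naive_interpolate(a, b, tau=0.5)
    print(mid.shape)           # torch.Size([2048, 3])
```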

MoNet: Motion-based Point Cloud Prediction Network

no code implementations • 21 Nov 2020 • Fan Lu, Guang Chen, Yinlong Liu, Zhijun Li, Sanqing Qu, Tianpei Zou

3D point clouds accurately capture the 3D structure of the surrounding environment and are crucial for intelligent vehicles to perceive the scene.

Autonomous Driving

LAP-Net: Adaptive Features Sampling via Learning Action Progression for Online Action Detection

no code implementations • 16 Nov 2020 • Sanqing Qu, Guang Chen, Dan Xu, Jinhu Dong, Fan Lu, Alois Knoll

At each time step, this sampling strategy first estimates the current action progression and then decides which temporal ranges should be used to aggregate the optimal supplementary features.

Online Action Detection
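The LAP-Net snippet describes a two-step strategy: estimate the current action progression, then choose the temporal range over which to aggregate features. The module below is a hedged sketch of that idea under assumed names and shapes (`ProgressionGuidedSampler`, `max_range`); it is not the paper's actual sampling strategy.

```python
# Illustrative sketch: use an estimated action progression in [0, 1] to decide
# how many past frames to aggregate. Not the LAP-Net implementation.
import torch
import torch.nn as nn

class ProgressionGuidedSampler(nn.Module):
    def __init__(self, feat_dim=256, max_range=16):
        super().__init__()
        self.max_range = max_range
        # predicts action progression from the most recent frame feature
        self.progress_head = nn.Sequential(nn.Linear(feat_dim, 1), nn.Sigmoid())

    def forward(self, history):
        # history: (B, T, D) per-frame features up to the current time step
        progress = self.progress_head(history[:, -1])            # (B, 1) progression estimate
        span = (progress * self.max_range).long().clamp(min=1)   # temporal range per sample
        pooled = []
        for b in range(history.shape[0]):
            # aggregate only the last `span[b]` frames for this sample
            pooled.append(history[b, -span[b].item():].mean(dim=0))
        return torch.stack(pooled), progress

if __name__ == "__main__":
    history = torch.randn(4, 32, 256)   # hypothetical feature history
    feats, progress = ProgressionGuidedSampler()(history)
    print(feats.shape, progress.shape)  # torch.Size([4, 256]) torch.Size([4, 1])
```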
