Search Results for author: Shuxue Quan

Found 5 papers, 1 paper with code

Vision Backbone Enhancement via Multi-Stage Cross-Scale Attention

no code implementations • 10 Aug 2023 • Liang Shang, Yanli Liu, Zhengyang Lou, Shuxue Quan, Nagesh Adluru, Bochen Guan, William A. Sethares

Convolutional neural networks (CNNs) and vision transformers (ViTs) have achieved remarkable success in various vision tasks.

SMOF: Squeezing More Out of Filters Yields Hardware-Friendly CNN Pruning

no code implementations • 21 Oct 2021 • Yanli Liu, Bochen Guan, Qinwen Xu, Weiyi Li, Shuxue Quan

We develop a CNN pruning framework called SMOF, which Squeezes More Out of Filters by reducing both kernel size and the number of filter channels.
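The snippet below is a generic NumPy sketch of structured filter pruning along the two axes the SMOF abstract mentions: dropping whole filters and shrinking kernel size. It is not the paper's actual method — the L1-norm ranking criterion, center-crop kernel reduction, and the `prune_filters` helper are all illustrative assumptions.

```python
import numpy as np

def prune_filters(weight, keep_ratio=0.5, target_kernel=3):
    """Structurally prune a conv weight tensor of shape (out_ch, in_ch, kH, kW):
    (1) keep only the filters with the largest L1 norms, and
    (2) crop each kernel to a smaller central window.
    Illustrative sketch only, not the SMOF criterion."""
    out_ch, in_ch, kh, kw = weight.shape
    # Rank filters by L1 norm and keep the strongest fraction.
    norms = np.abs(weight).reshape(out_ch, -1).sum(axis=1)
    n_keep = max(1, int(out_ch * keep_ratio))
    keep = np.sort(np.argsort(norms)[-n_keep:])
    pruned = weight[keep]
    # Crop each kernel to its central target_kernel x target_kernel window.
    if target_kernel < kh:
        start = (kh - target_kernel) // 2
        pruned = pruned[:, :, start:start + target_kernel,
                        start:start + target_kernel]
    return pruned

# Example: 8 filters with 5x5 kernels -> 4 filters with 3x3 kernels.
w = np.random.randn(8, 16, 5, 5)
print(prune_filters(w).shape)  # (4, 16, 3, 3)
```

Both steps shrink the convolution's parameter count and compute in a hardware-friendly way, since the result is still a dense, regularly shaped weight tensor rather than a sparse mask.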

Network Pruning

RobustFusion: Robust Volumetric Performance Reconstruction under Human-object Interactions from Monocular RGBD Stream

no code implementations • 30 Apr 2021 • Zhuo Su, Lan Xu, Dawei Zhong, Zhong Li, Fan Deng, Shuxue Quan, Lu Fang

To fill this gap, we propose RobustFusion, a robust volumetric performance reconstruction system for human-object interaction scenarios that uses only a single RGBD sensor. It combines various data-driven visual and interaction cues to handle complex interaction patterns and severe occlusions.

4D reconstruction • Disentanglement • +5

PoP-Net: Pose over Parts Network for Multi-Person 3D Pose Estimation from a Depth Image

1 code implementation • 12 Dec 2020 • Yuliang Guo, Zhong Li, Zekun Li, Xiangyu Du, Shuxue Quan, Yi Xu

In this paper, a real-time method called PoP-Net is proposed to predict multi-person 3D poses from a depth image.

3D Pose Estimation • Data Augmentation

Object Detection in the Context of Mobile Augmented Reality

no code implementations • 15 Aug 2020 • Xiang Li, Yuan Tian, Fuyao Zhang, Shuxue Quan, Yi Xu

Ordinary object detection approaches process information from the images only; they are oblivious to the camera's pose relative to the environment and to the scale of the environment.

Object • object-detection • +1
