Search Results for author: Hao-Shu Fang

Found 28 papers, 19 papers with code

Target-Referenced Reactive Grasping for Dynamic Objects

no code implementations CVPR 2023 Jirong Liu, Ruo Zhang, Hao-Shu Fang, Minghao Gou, Hongjie Fang, Chenxi Wang, Sheng Xu, Hengxu Yan, Cewu Lu

Reactive grasping, which enables the robot to successfully grasp dynamic moving objects, is of great interest in robotics.

X-NeRF: Explicit Neural Radiance Field for Multi-Scene 360$^\circ$ Insufficient RGB-D Views

1 code implementation 11 Oct 2022 Haoyi Zhu, Hao-Shu Fang, Cewu Lu

In this paper, we focus on a rarely discussed but important setting: can we train one model that can represent multiple scenes, with 360$^\circ$ insufficient views and RGB-D images?

Novel View Synthesis

Unseen Object 6D Pose Estimation: A Benchmark and Baselines

no code implementations 23 Jun 2022 Minghao Gou, Haolin Pan, Hao-Shu Fang, Ziyuan Liu, Cewu Lu, Ping Tan

In this paper, we propose a new task that enables and facilitates algorithms to estimate the 6D pose of novel objects during testing.

6D Pose Estimation

TransCG: A Large-Scale Real-World Dataset for Transparent Object Depth Completion and a Grasping Baseline

1 code implementation 17 Feb 2022 Hongjie Fang, Hao-Shu Fang, Sheng Xu, Cewu Lu

However, most current grasping algorithms would fail in this case because they rely heavily on the depth image, while ordinary depth sensors usually cannot produce accurate depth for transparent objects owing to the reflection and refraction of light.

Depth Completion · Robotic Grasping +2
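For intuition about the depth completion task mentioned above, a naive classical baseline propagates valid depth into invalid (zero) pixels by neighbour averaging. This toy sketch only illustrates the task's input and output; it is not the learned method the TransCG paper proposes, and the function name and iteration count are assumptions.

```python
import numpy as np

def fill_invalid_depth(depth, iters=10):
    """Naive depth completion: repeatedly replace invalid (zero) pixels
    with the mean of their valid 4-neighbours. A classical baseline only,
    not the learned completion network from the TransCG paper."""
    d = depth.astype(float).copy()
    for _ in range(iters):
        invalid = d == 0
        if not invalid.any():
            break
        padded = np.pad(d, 1)  # zero padding keeps borders "invalid"
        neigh = np.stack([padded[:-2, 1:-1], padded[2:, 1:-1],
                          padded[1:-1, :-2], padded[1:-1, 2:]])
        valid = neigh > 0
        cnt = valid.sum(0)
        fill = np.where(cnt > 0, neigh.sum(0) / np.maximum(cnt, 1), 0)
        mask = invalid & (cnt > 0)
        d[mask] = fill[mask]
    return d
```

Transparent objects typically produce such zero-depth holes, which is why a learned completion model is needed in practice; this averaging scheme only recovers smooth surfaces.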

HAKE: A Knowledge Engine Foundation for Human Activity Understanding

3 code implementations 14 Feb 2022 Yong-Lu Li, Xinpeng Liu, Xiaoqian Wu, Yizhuo Li, Zuoyu Qiu, Liang Xu, Yue Xu, Hao-Shu Fang, Cewu Lu

Human activity understanding is of widespread interest in artificial intelligence and spans diverse applications like health care and behavior analysis.

Action Recognition · Human-Object Interaction Detection +2

Human Trajectory Prediction With Momentary Observation

no code implementations CVPR 2022 Jianhua Sun, YuXuan Li, Liang Chai, Hao-Shu Fang, Yong-Lu Li, Cewu Lu

The human trajectory prediction task aims to analyze humans' future movements given their past status, a crucial step for many autonomous systems such as self-driving cars and social robots.

Self-Driving Cars · Trajectory Prediction

SuctionNet-1Billion: A Large-Scale Benchmark for Suction Grasping

no code implementations 23 Mar 2021 Hanwen Cao, Hao-Shu Fang, Wenhai Liu, Cewu Lu

Meanwhile, we propose a method to predict numerous suction poses from an RGB-D image of a cluttered scene and demonstrate our superiority over several previous methods.

Robotic Grasping
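As a rough intuition for what a suction pose is, a classical (non-learned) baseline estimates the surface normal at a depth-map pixel and uses it as the suction approach axis. This sketch is not SuctionNet's method; the pinhole intrinsics `fx`, `fy` and the function name are placeholder assumptions for illustration.

```python
import numpy as np

def suction_normal(depth, u, v, fx=500.0, fy=500.0):
    """Approximate the surface normal at pixel (u, v) of a metric depth
    map by central differences; the normal gives a suction approach axis.
    Intrinsics fx, fy are placeholder values, not from the paper."""
    dzdx = (depth[v, u + 1] - depth[v, u - 1]) / 2.0
    dzdy = (depth[v + 1, u] - depth[v - 1, u]) / 2.0
    z = depth[v, u]
    # gradient of z w.r.t. metric x, y under a pinhole camera model
    n = np.array([-dzdx * fx / z, -dzdy * fy / z, 1.0])
    return n / np.linalg.norm(n)
```

A learned model like the one the paper proposes additionally scores seal quality and graspability per pixel, which this geometric baseline cannot do.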

RGB Matters: Learning 7-DoF Grasp Poses on Monocular RGBD Images

1 code implementation 3 Mar 2021 Minghao Gou, Hao-Shu Fang, Zhanda Zhu, Sheng Xu, Chenxi Wang, Cewu Lu

In the first stage, an encoder-decoder-style convolutional neural network, Angle-View Net (AVN), is proposed to predict the SO(3) orientation of the gripper at every location of the image.
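To illustrate the kind of discretized SO(3) output such a per-pixel head predicts, a minimal decoder can map a (view direction, in-plane angle bin) pair back to a rotation matrix. The binning scheme, bin count, and function below are illustrative assumptions, not AVN's exact parameterization.

```python
import numpy as np

def bin_to_rotation(view_dir, angle_idx, n_angles=12):
    """Compose a gripper orientation from a discretized prediction:
    align the approach axis (z) with `view_dir`, then apply an in-plane
    rotation chosen from `n_angles` bins. Illustrative sketch only."""
    v = np.asarray(view_dir, float)
    v = v / np.linalg.norm(v)
    z = np.array([0.0, 0.0, 1.0])
    # rotation taking z to v (Rodrigues' formula)
    axis = np.cross(z, v)
    s, c = np.linalg.norm(axis), z @ v
    if s < 1e-9:  # v parallel or antiparallel to z
        R_align = np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
    else:
        k = axis / s
        K = np.array([[0, -k[2], k[1]],
                      [k[2], 0, -k[0]],
                      [-k[1], k[0], 0]])
        R_align = np.eye(3) + s * K + (1 - c) * (K @ K)
    theta = 2 * np.pi * angle_idx / n_angles
    R_inplane = np.array([[np.cos(theta), -np.sin(theta), 0],
                          [np.sin(theta),  np.cos(theta), 0],
                          [0, 0, 1]])
    return R_align @ R_inplane
```

Discretizing SO(3) this way turns orientation regression into per-pixel classification over view and angle bins, which is easier for a CNN head to learn.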

Graspness Discovery in Clutters for Fast and Accurate Grasp Detection

1 code implementation ICCV 2021 Chenxi Wang, Hao-Shu Fang, Minghao Gou, Hongjie Fang, Jin Gao, Cewu Lu

To quickly detect graspness in practice, we develop a neural network named the graspness model to approximate the searching process.

Robotic Grasping

DIRV: Dense Interaction Region Voting for End-to-End Human-Object Interaction Detection

1 code implementation 2 Oct 2020 Hao-Shu Fang, Yichen Xie, Dian Shao, Cewu Lu

On the other hand, existing one-stage methods mainly focus on the union regions of interactions, which introduce unnecessary visual information as disturbances to HOI detection.

Human-Object Interaction Detection

GraspNet: A Large-Scale Clustered and Densely Annotated Dataset for Object Grasping

no code implementations 31 Dec 2019 Hao-Shu Fang, Chenxi Wang, Minghao Gou, Cewu Lu

Object grasping is critical for many applications, and it is also a challenging computer vision problem.

InstaBoost: Boosting Instance Segmentation via Probability Map Guided Copy-Pasting

3 code implementations ICCV 2019 Hao-Shu Fang, Jianhua Sun, Runzhong Wang, Minghao Gou, Yong-Lu Li, Cewu Lu

With the guidance of such map, we boost the performance of R101-Mask R-CNN on instance segmentation from 35.7 mAP to 37.9 mAP without modifying the backbone or network structure.

Data Augmentation · Instance Segmentation +3
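The map-guided copy-pasting idea can be illustrated with a toy augmentation: sample a small jitter for an instance's location and re-paste its pixels there. Everything in this sketch (the Gaussian jitter standing in for InstaBoost's appearance-consistency probability map, the function name) is an illustrative assumption, not the paper's implementation.

```python
import numpy as np

def copy_paste_augment(image, mask, rng=None, sigma=5.0):
    """Paste the masked instance at a location jittered around its
    original position. A toy stand-in for InstaBoost-style augmentation;
    the real method samples from a learned probability map."""
    rng = np.random.default_rng(rng)
    ys, xs = np.nonzero(mask)
    # offset drawn from an isotropic Gaussian "probability map"
    dy, dx = rng.normal(0.0, sigma, size=2).round().astype(int)
    h, w = mask.shape
    ty = np.clip(ys + dy, 0, h - 1)
    tx = np.clip(xs + dx, 0, w - 1)
    out = image.copy()
    out[ty, tx] = image[ys, xs]  # paste the instance pixels at the offset
    return out
```

The corresponding instance mask and box would be shifted by the same offset so the augmented annotation stays consistent with the image.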

Cross-Domain Adaptation for Animal Pose Estimation

no code implementations ICCV 2019 Jinkun Cao, Hongyang Tang, Hao-Shu Fang, Xiaoyong Shen, Cewu Lu, Yu-Wing Tai

Therefore, the easily available human pose dataset, which is of a much larger scale than our labeled animal dataset, provides important prior knowledge to boost up the performance on animal pose estimation.

Animal Pose Estimation · Domain Adaptation

HAKE: Human Activity Knowledge Engine

4 code implementations 13 Apr 2019 Yong-Lu Li, Liang Xu, Xinpeng Liu, Xijie Huang, Yue Xu, Mingyang Chen, Ze Ma, Shiyi Wang, Hao-Shu Fang, Cewu Lu

To address these issues and promote activity understanding, we build a large-scale Human Activity Knowledge Engine (HAKE) based on human body part states.

Ranked #2 on Human-Object Interaction Detection on HICO (using extra training data)

Action Detection · Human-Object Interaction Detection +1

CrowdPose: Efficient Crowded Scenes Pose Estimation and A New Benchmark

3 code implementations CVPR 2019 Jiefeng Li, Can Wang, Hao Zhu, Yihuan Mao, Hao-Shu Fang, Cewu Lu

In this paper, we propose a novel and efficient method to tackle the problem of pose estimation in the crowd and a new dataset to better evaluate algorithms.

Keypoint Detection · Multi-Person Pose Estimation

Transferable Interactiveness Knowledge for Human-Object Interaction Detection

3 code implementations CVPR 2019 Yong-Lu Li, Siyuan Zhou, Xijie Huang, Liang Xu, Ze Ma, Hao-Shu Fang, Yan-Feng Wang, Cewu Lu

On account of the generalization of interactiveness, the interactiveness network is a transferable knowledge learner and can be combined with any HOI detection model to achieve desirable results.

Human-Object Interaction Detection · Object

Weakly and Semi Supervised Human Body Part Parsing via Pose-Guided Knowledge Transfer

1 code implementation CVPR 2018 Hao-Shu Fang, Guansong Lu, Xiaolin Fang, Jianwen Xie, Yu-Wing Tai, Cewu Lu

In this paper, we present a novel method to generate synthetic human part segmentation data using easily-obtained human keypoint annotations.

Ranked #4 on Human Part Segmentation on PASCAL-Part (using extra training data)

Human Parsing · Human Part Segmentation +3

RMPE: Regional Multi-person Pose Estimation

14 code implementations ICCV 2017 Hao-Shu Fang, Shuqin Xie, Yu-Wing Tai, Cewu Lu

In this paper, we propose a novel regional multi-person pose estimation (RMPE) framework to facilitate pose estimation in the presence of inaccurate human bounding boxes.

2D Human Pose Estimation · Human Detection +2
