Search Results for author: Chongkun Xia

Found 8 papers, 5 papers with code

PaddingFlow: Improving Normalizing Flows with Padding-Dimensional Noise

1 code implementation · 13 Mar 2024 · Qinglong Meng, Chongkun Xia, Xueqian Wang

To implement PaddingFlow, only the dimension of normalizing flows needs to be modified.

 Ranked #1 on Density Estimation on MNIST (MMD-L2 metric)

Density Estimation
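The PaddingFlow abstract notes that only the dimensionality of the flow needs to change. A minimal sketch of the padding idea, assuming the data is augmented with extra dimensions filled with Gaussian noise before being passed to a flow (the function name, noise scale, and padding width here are illustrative, not the paper's exact recipe):

```python
import numpy as np

def pad_with_noise(x, pad_dim=1, sigma=0.1, rng=None):
    """Append pad_dim extra dimensions of Gaussian noise to each sample.

    The flow is then trained on the (dim + pad_dim)-dimensional data;
    the original samples are recovered by dropping the padded dimensions.
    """
    rng = np.random.default_rng() if rng is None else rng
    noise = sigma * rng.standard_normal((x.shape[0], pad_dim))
    return np.concatenate([x, noise], axis=1)

# 2-D toy data padded to 3 dimensions for the flow
x = np.random.default_rng(0).standard_normal((4, 2))
x_padded = pad_with_noise(x, pad_dim=1)
print(x_padded.shape)  # (4, 3)
```

The padded dimensions are pure noise, so discarding them after sampling from the flow returns points in the original data space.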

PPNet: A Two-Stage Neural Network for End-to-end Path Planning

1 code implementation · 18 Jan 2024 · Qinglong Meng, Chongkun Xia, Xueqian Wang, Songping Mai, Bin Liang

The results show that PPNet can find a near-optimal solution in 15.3 ms, much faster than state-of-the-art path planners.

Deep Reinforcement Learning Based on Local GNN for Goal-conditioned Deformable Object Rearranging

no code implementations · 21 Feb 2023 · Yuhong Deng, Chongkun Xia, Xueqian Wang, Lipeng Chen

Some research has attempted to design a general framework that provides more advanced manipulation capabilities for deformable rearranging tasks, with much of the progress achieved in simulation.

Graph-Transporter: A Graph-based Learning Method for Goal-Conditioned Deformable Object Rearranging Task

no code implementations · 21 Feb 2023 · Yuhong Deng, Chongkun Xia, Xueqian Wang, Lipeng Chen

Rearranging deformable objects is a long-standing challenge in robotic manipulation due to the high dimensionality of the configuration space and the complex dynamics of deformable objects.


Foldsformer: Learning Sequential Multi-Step Cloth Manipulation With Space-Time Attention

1 code implementation · 8 Jan 2023 · Kai Mo, Chongkun Xia, Xueqian Wang, Yuhong Deng, Xuehai Gao, Bin Liang

Foldsformer can complete multi-step cloth manipulation tasks even when configurations of the cloth (e.g., size and pose) vary from those in the general demonstrations.

Visual-tactile Fusion for Transparent Object Grasping in Complex Backgrounds

no code implementations · 30 Nov 2022 · Shoujie Li, Haixin Yu, Wenbo Ding, Houde Liu, Linqi Ye, Chongkun Xia, Xueqian Wang, Xiao-Ping Zhang

Here, a visual-tactile fusion framework for transparent object grasping under complex backgrounds and varying lighting conditions is proposed, comprising grasping position detection, tactile calibration, and visual-tactile fusion-based classification.

Classification, Position +1

Polarimetric Inverse Rendering for Transparent Shapes Reconstruction

1 code implementation · 25 Aug 2022 · Mingqi Shao, Chongkun Xia, Dongxu Duan, Xueqian Wang

We build a polarization dataset for multi-view transparent shapes reconstruction to verify our method.

Inverse Rendering, Transparent Objects

Transparent Shape from a Single View Polarization Image

1 code implementation · ICCV 2023 · Mingqi Shao, Chongkun Xia, Zhendong Yang, Junnan Huang, Xueqian Wang

To train and test our method, we construct a dataset for transparent shape from polarization with paired polarization images and ground-truth normal maps.
