Search Results for author: Peng Yun

Found 9 papers, 3 papers with code

Deep Metric Learning for Open World Semantic Segmentation

no code implementations · 10 Aug 2021 · Jun Cen, Peng Yun, Junhao Cai, Michael Yu Wang, Ming Liu

Incrementally learning these out-of-distribution (OOD) objects with few annotations is an ideal way to enlarge the knowledge base of deep learning models.

Tasks: Autonomous Driving, Few-Shot Learning, +2
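The open-world idea above can be illustrated with a generic metric-learning sketch (not the paper's actual method): embed each pixel, then flag it as OOD when its distance to the nearest known-class prototype exceeds a threshold. The function name and threshold value are illustrative assumptions.

```python
import numpy as np

def detect_ood(embeddings, prototypes, threshold=1.0):
    """Open-set detection via metric distance (generic sketch).

    embeddings: (N, D) feature vectors, one per pixel.
    prototypes: (K, D) mean embeddings of the K known classes.
    Returns (N,) predicted class indices, with -1 marking OOD.
    """
    # Pairwise Euclidean distances between embeddings and prototypes: (N, K)
    d = np.linalg.norm(embeddings[:, None, :] - prototypes[None, :, :], axis=2)
    nearest = d.min(axis=1)          # distance to the closest prototype
    pred = d.argmin(axis=1)          # index of that prototype
    return np.where(nearest > threshold, -1, pred)
```

In a trained metric-learning model, in-distribution pixels cluster tightly around their class prototype, so a single distance threshold separates known classes from novel objects.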

Smart-Inspect: Micro Scale Localization and Classification of Smartphone Glass Defects for Industrial Automation

no code implementations · 2 Oct 2020 · M Usman Maqbool Bhutta, Shoaib Aslam, Peng Yun, Jianhao Jiao, Ming Liu

We present a robust semi-supervised learning framework for intelligent micro-scale localization and classification of defects on a 16K-pixel image of smartphone glass.

Tasks: General Classification

MLOD: Awareness of Extrinsic Perturbation in Multi-LiDAR 3D Object Detection for Autonomous Driving

2 code implementations · 29 Sep 2020 · Jianhao Jiao, Peng Yun, Lei Tai, Ming Liu

To minimize the detrimental effect of extrinsic perturbation, we propagate an uncertainty prior on each point of input point clouds, and use this information to boost an approach for 3D geometric tasks.

Tasks: 3D Object Detection, Autonomous Driving
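The per-point uncertainty prior described above can be sketched as a simple Monte-Carlo estimate: sample noisy extrinsics and measure how much each point moves. The yaw-plus-translation noise model, the noise magnitudes, and the function name are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def point_uncertainty(points, trans_sigma=0.05, rot_sigma=np.radians(0.5),
                      n_samples=100, seed=0):
    """Estimate per-point positional uncertainty induced by a noisy
    LiDAR extrinsic (yaw + planar translation only, for brevity).

    points: (N, 3) array. Returns (N,) std of each point's 2D position
    under sampled extrinsic perturbations.
    """
    rng = np.random.default_rng(seed)
    samples = []
    for _ in range(n_samples):
        theta = rng.normal(0.0, rot_sigma)            # yaw perturbation
        t = rng.normal(0.0, trans_sigma, size=2)      # translation noise
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s], [s, c]])
        samples.append(points[:, :2] @ R.T + t)       # perturbed x, y
    samples = np.stack(samples)                       # (S, N, 2)
    return np.linalg.norm(samples.std(axis=0), axis=1)
```

Note how rotational noise grows with range, so distant points carry larger priors; this per-point weighting is what lets a detector discount unreliable regions of a merged multi-LiDAR cloud.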

Focal Loss in 3D Object Detection

no code implementations · 17 Sep 2018 · Peng Yun, Lei Tai, Yu-An Wang, Chengju Liu, Ming Liu

Inspired by the recent use of focal loss in image-based object detection, we extend this hard-mining improvement of binary cross entropy to point-cloud-based object detection and conduct experiments to show its performance based on two different 3D detectors: 3D-FCN and VoxelNet.

Tasks: 3D Object Detection, Autonomous Driving
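The hard-mining extension of binary cross entropy mentioned above is the focal loss of Lin et al.; a minimal NumPy sketch follows. The default `alpha` and `gamma` are the values commonly used for image detection and are not necessarily those chosen in this paper.

```python
import numpy as np

def binary_focal_loss(p, y, alpha=0.25, gamma=2.0, eps=1e-7):
    """Focal loss FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t).

    p: predicted foreground probability, y: binary label (1 = object).
    The (1 - p_t)^gamma factor down-weights easy, well-classified
    examples so training focuses on hard ones.
    """
    p = np.clip(p, eps, 1.0 - eps)
    p_t = np.where(y == 1, p, 1.0 - p)              # prob. of the true class
    alpha_t = np.where(y == 1, alpha, 1.0 - alpha)  # class-balance weight
    return -alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)
```

With `gamma = 0` and `alpha = 0.5` this reduces to a scaled binary cross entropy, which makes the hard-mining effect of the modulating factor easy to ablate.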

PointSeg: Real-Time Semantic Segmentation Based on 3D LiDAR Point Cloud

3 code implementations · 17 Jul 2018 · Yu-An Wang, Tianyue Shi, Peng Yun, Lei Tai, Ming Liu

We take the spherical image, transformed from the 3D LiDAR point cloud, as input to convolutional neural networks (CNNs) to predict the point-wise semantic map.

Tasks: 3D Object Detection, Autonomous Driving, +1
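The spherical projection described above can be sketched as follows: each 3D point maps to a range-image pixel via its azimuth (column) and elevation (row) angles. The image resolution and vertical field-of-view values below follow a typical 64-beam sensor and are illustrative assumptions, not necessarily the paper's setup.

```python
import numpy as np

def spherical_projection(points, h=64, w=512,
                         fov_up=np.radians(2.0), fov_down=np.radians(-24.8)):
    """Project (N, 3) LiDAR points onto an H x W range image.

    Column index comes from azimuth, row index from elevation; each
    pixel stores the point's range (0 where no point falls).
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)                               # azimuth in [-pi, pi]
    pitch = np.arcsin(np.clip(z / np.maximum(r, 1e-8), -1.0, 1.0))
    fov = fov_up - fov_down
    u = 0.5 * (1.0 - yaw / np.pi) * w                    # column from azimuth
    v = (1.0 - (pitch - fov_down) / fov) * h             # row from elevation
    u = np.clip(np.floor(u), 0, w - 1).astype(int)
    v = np.clip(np.floor(v), 0, h - 1).astype(int)
    img = np.zeros((h, w))
    img[v, u] = r                                        # range per pixel
    return img
```

In practice the channel dimension also carries x, y, z, and intensity alongside range, giving the CNN a dense multi-channel image in place of a sparse, unordered point cloud.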

VR-Goggles for Robots: Real-to-sim Domain Adaptation for Visual Control

no code implementations · 1 Feb 2018 · Jingwei Zhang, Lei Tai, Peng Yun, Yufeng Xiong, Ming Liu, Joschka Boedecker, Wolfram Burgard

In this paper, we address the reality gap from a novel perspective: transferring Deep Reinforcement Learning (DRL) policies learned in simulated environments to the real-world domain for visual control tasks.

Tasks: Domain Adaptation, Style Transfer
