Search Results for author: Jieyu Li

Found 6 papers, 4 papers with code

2D LiDAR and Camera Fusion Using Motion Cues for Indoor Layout Estimation

1 code implementation • 24 Apr 2022 • Jieyu Li, Robert Stevenson

A ground robot explores an indoor space with a single floor and vertical walls, and collects a sequence of intensity images and 2D LiDAR datasets.

Semantic Segmentation

Unsupervised Local Discrimination for Medical Images

1 code implementation • 21 Aug 2021 • Huai Chen, Renzhen Wang, Jieyu Li, Jianhao Bai, Qing Peng, Deyu Meng, Lisheng Wang

Based on the observation that images of the same body region share similar anatomical structures, and that pixels belonging to the same structure have similar semantic patterns, we design a neural network that constructs a local discriminative embedding space in which pixels with similar contexts are clustered and dissimilar pixels are dispersed.

Contrastive Learning • Representation Learning
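To make the idea in the abstract above concrete, here is a minimal PyTorch-style sketch of a pixel-level contrastive objective that clusters embeddings of pixels with similar context and disperses dissimilar ones. The pair construction, sampling sizes, and temperature are illustrative assumptions and do not reproduce the authors' released implementation.

import torch
import torch.nn.functional as F

def local_contrastive_loss(embeddings, pos_pairs, neg_pairs, temperature=0.1):
    # embeddings: (N, D) pixel embeddings sampled from a dense feature map
    # pos_pairs:  (P, 2) index pairs assumed to lie on the same structure
    # neg_pairs:  (P, K) indices assumed to lie on different structures
    z = F.normalize(embeddings, dim=1)                        # unit-length embeddings
    anchor, positive = z[pos_pairs[:, 0]], z[pos_pairs[:, 1]]
    negatives = z[neg_pairs]                                   # (P, K, D)
    pos_sim = (anchor * positive).sum(dim=1, keepdim=True)     # (P, 1)
    neg_sim = torch.einsum('pd,pkd->pk', anchor, negatives)    # (P, K)
    logits = torch.cat([pos_sim, neg_sim], dim=1) / temperature
    labels = torch.zeros(logits.size(0), dtype=torch.long)     # positive is class 0
    return F.cross_entropy(logits, labels)

# Toy usage with random tensors standing in for sampled pixel embeddings.
emb = torch.randn(64, 128)
pos = torch.randint(0, 64, (32, 2))
neg = torch.randint(0, 64, (32, 8))
print(local_contrastive_loss(emb, pos, neg))

Minimizing this InfoNCE-style loss pulls pixels that share context together in the embedding space while pushing pixels from other structures apart, which is the clustering and dispersion behavior the abstract describes.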

Unsupervised Learning of Local Discriminative Representation for Medical Images

1 code implementation • 17 Dec 2020 • Huai Chen, Jieyu Li, Renzhen Wang, YiJie Huang, Fanrui Meng, Deyu Meng, Qing Peng, Lisheng Wang

However, the commonly applied supervised representation learning methods require a large amount of annotated data, while unsupervised discriminative representation learning distinguishes different images only by a learned global feature; neither is well suited to localized medical image analysis tasks.

Representation Learning

Indoor Layout Estimation by 2D LiDAR and Camera Fusion

no code implementations • 15 Jan 2020 • Jieyu Li, Robert L. Stevenson

This paper presents an algorithm for indoor layout estimation and reconstruction through the fusion of a sequence of captured images and LiDAR data sets.

Depth Estimation • Pose Estimation +1

Semantic Parsing with Dual Learning

1 code implementation • ACL 2019 • Ruisheng Cao, Su Zhu, Chen Liu, Jieyu Li, Kai Yu

Semantic parsing converts natural language queries into structured logical forms.

Semantic Parsing
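As a toy illustration of the query-to-logical-form mapping described above, the snippet below hard-codes a single pattern; the example query, predicates, and output format are invented for illustration and are not taken from the paper or its datasets.

# Toy illustration of semantic parsing: a natural language query is mapped
# to a structured logical form. A learned parser replaces this hand-written rule.
def toy_parse(query):
    tokens = query.split()
    src = tokens[tokens.index("from") + 1]
    dst = tokens[tokens.index("to") + 1]
    return f"lambda x ( and ( flight x ) ( from x {src} ) ( to x {dst} ) )"

print(toy_parse("show me flights from boston to denver"))
# lambda x ( and ( flight x ) ( from x boston ) ( to x denver ) )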
