Search Results for author: YingLi Tian

Found 34 papers, 5 papers with code

Unambiguous Text Localization and Retrieval for Cluttered Scenes

no code implementations CVPR 2017 Xuejian Rong, Chucai Yi, YingLi Tian

Text instances, as one category of self-described objects, provide valuable information for understanding and describing cluttered scenes.

Retrieval, Text Retrieval

Self-Guiding Multimodal LSTM - when we do not have a perfect training dataset for image captioning

no code implementations15 Sep 2017 Yang Xian, YingLi Tian

Afterwards, during the training of sg-LSTM on the rest of the training data, this guiding information serves as additional input to the network along with the image representations and the ground-truth descriptions.

Image Captioning, Sentence
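
A minimal PyTorch-style sketch of the guiding idea described above: the guidance vector is simply concatenated with the image representation and the word embedding at every decoding step. The class, names, and dimensions are hypothetical illustrations, not the authors' sg-LSTM implementation.

```python
import torch
import torch.nn as nn

class GuidedCaptionDecoder(nn.Module):
    """Hypothetical sketch: an LSTM captioner whose per-step input concatenates
    the word embedding, the image representation, and a 'guiding' vector
    extracted beforehand from a cleaner subset of the training data."""
    def __init__(self, vocab_size=10000, embed_dim=256, img_dim=512,
                 guide_dim=128, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTMCell(embed_dim + img_dim + guide_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, words, img_feat, guide_feat):
        # words: (B, T) ground-truth tokens; img_feat: (B, img_dim);
        # guide_feat: (B, guide_dim) guiding information serving as extra input.
        B, T = words.shape
        h = img_feat.new_zeros(B, self.lstm.hidden_size)
        c = img_feat.new_zeros(B, self.lstm.hidden_size)
        logits = []
        for t in range(T):
            x = torch.cat([self.embed(words[:, t]), img_feat, guide_feat], dim=1)
            h, c = self.lstm(x, (h, c))
            logits.append(self.out(h))
        return torch.stack(logits, dim=1)  # (B, T, vocab_size)
```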

Self-Supervised Spatiotemporal Feature Learning via Video Rotation Prediction

no code implementations28 Nov 2018 Longlong Jing, Xiaodong Yang, Jingen Liu, YingLi Tian

The success of deep neural networks generally requires a vast amount of labeled training data, which is expensive and infeasible at scale, especially for video collections.

Self-Supervised Action Recognition, Temporal Action Localization +1
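
The pretext task named in the title can be illustrated by rotating each clip by one of four fixed angles and training a network to predict which rotation was applied; the label comes for free, so no human annotation is needed. A simplified sketch with a hypothetical toy backbone (not the authors' network):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def rotate_clip(clip, k):
    """Rotate a video clip (C, T, H, W) by k * 90 degrees in the spatial plane."""
    return torch.rot90(clip, k, dims=(2, 3))

class RotationPretext(nn.Module):
    """Hypothetical tiny 3D CNN that predicts which of four rotations was applied."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten())
        self.head = nn.Linear(16, 4)  # four rotation classes

    def forward(self, clip):
        return self.head(self.backbone(clip))

# Self-supervised step: the label is the rotation applied, no annotation needed.
model = RotationPretext()
clips = torch.randn(2, 3, 8, 32, 32)          # (B, C, T, H, W)
k = torch.randint(0, 4, (2,))                 # sampled rotation per clip
rotated = torch.stack([rotate_clip(c, int(ki)) for c, ki in zip(clips, k)])
loss = F.cross_entropy(model(rotated), k)
```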

Discovering Spatio-Temporal Action Tubes

no code implementations29 Nov 2018 Yuancheng Ye, Xiaodong Yang, YingLi Tian

In this paper, we address the challenging problem of spatial and temporal action detection in videos.

Action Detection

Incremental Scene Synthesis

no code implementations NeurIPS 2019 Benjamin Planche, Xuejian Rong, Ziyan Wu, Srikrishna Karanam, Harald Kosch, YingLi Tian, Jan Ernst, Andreas Hutter

We present a method to incrementally generate complete 2D or 3D scenes with the following properties: (a) it is globally consistent at each step according to a learned scene prior, (b) real observations of a scene can be incorporated while observing global consistency, (c) unobserved regions can be hallucinated locally, consistently with previous observations, hallucinations, and global priors, and (d) hallucinations are statistical in nature, i.e., different scenes can be generated from the same observations.

Autonomous Navigation, Hallucination

Coarse-to-fine Semantic Segmentation from Image-level Labels

no code implementations28 Dec 2018 Longlong Jing, Yu-cheng Chen, YingLi Tian

The enhanced coarse mask is fed to a fully convolutional neural network to be recursively refined.

Foreground Segmentation, Object +2
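
A minimal sketch of the recursive refinement described above, where the enhanced coarse mask is repeatedly fed back through a fully convolutional network together with the image. The network and the number of iterations are hypothetical:

```python
import torch
import torch.nn as nn

class MaskRefiner(nn.Module):
    """Hypothetical fully convolutional refiner: takes the image plus the
    current (coarse) mask and outputs an updated mask."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + 1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, image, mask):
        return self.net(torch.cat([image, mask], dim=1))

refiner = MaskRefiner()
image = torch.randn(1, 3, 64, 64)
mask = torch.rand(1, 1, 64, 64)      # enhanced coarse mask from image-level labels
for _ in range(3):                   # recursive refinement: feed the output back in
    mask = refiner(image, mask)
```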

LGAN: Lung Segmentation in CT Scans Using Generative Adversarial Network

1 code implementation11 Jan 2019 Jiaxing Tan, Longlong Jing, Yumei Huo, YingLi Tian, Oguz Akin

Lung segmentation in computerized tomography (CT) images is an important procedure in the diagnosis of various lung diseases.

Generative Adversarial Network, Segmentation

Self-supervised Visual Feature Learning with Deep Neural Networks: A Survey

no code implementations16 Feb 2019 Longlong Jing, YingLi Tian

This paper provides an extensive review of deep learning-based self-supervised general visual feature learning methods from images or videos.

Self-Supervised Image Classification, Self-Supervised Learning

Recognizing American Sign Language Manual Signs from RGB-D Videos

no code implementations7 Jun 2019 Longlong Jing, Elahe Vahdani, Matt Huenerfauth, YingLi Tian

In this paper, we propose a 3D Convolutional Neural Network (3DCNN) based multi-stream framework to recognize American Sign Language (ASL) manual signs (consisting of movements of the hands, as well as non-manual face movements in some cases) in real time from RGB-D videos, by fusing multimodal features including hand gestures, facial expressions, and body poses from multiple channels (RGB, depth, motion, and skeleton joints).
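
A simplified multi-stream fusion sketch in the spirit of the framework described above: each input channel gets its own small 3D CNN encoder, and the pooled features are concatenated before sign classification. The streams, dimensions, and layer choices are hypothetical, not the authors' 3DCNN.

```python
import torch
import torch.nn as nn

class StreamEncoder(nn.Module):
    """Hypothetical per-channel 3D CNN encoder (e.g. RGB, depth, motion)."""
    def __init__(self, in_channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_channels, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten())

    def forward(self, x):
        return self.net(x)

class MultiStreamASL(nn.Module):
    """Fuse per-stream features by concatenation before sign classification."""
    def __init__(self, num_signs=100):
        super().__init__()
        self.rgb = StreamEncoder(3)
        self.depth = StreamEncoder(1)
        self.flow = StreamEncoder(2)
        self.classifier = nn.Linear(8 * 3, num_signs)

    def forward(self, rgb, depth, flow):
        fused = torch.cat([self.rgb(rgb), self.depth(depth), self.flow(flow)], dim=1)
        return self.classifier(fused)
```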

3DFPN-HS$^2$: 3D Feature Pyramid Network Based High Sensitivity and Specificity Pulmonary Nodule Detection

no code implementations8 Jun 2019 Jingya Liu, Liangliang Cao, Oguz Akin, YingLi Tian

Accurate detection of pulmonary nodules with high sensitivity and specificity is essential for automatic lung cancer diagnosis from CT scans.

Lung Cancer Diagnosis, Specificity

Accurate and Robust Pulmonary Nodule Detection by 3D Feature Pyramid Network with Self-supervised Feature Learning

no code implementations25 Jul 2019 Jingya Liu, Liangliang Cao, Oguz Akin, YingLi Tian

Accurate detection of pulmonary nodules with high sensitivity and specificity is essential for automatic lung cancer diagnosis from CT scans.

Lung Cancer Diagnosis, Self-Supervised Learning +1

VideoSSL: Semi-Supervised Learning for Video Classification

no code implementations29 Feb 2020 Longlong Jing, Toufiq Parag, Zhe Wu, YingLi Tian, Hongcheng Wang

To minimize the dependence on a large annotated dataset, our proposed semi-supervised method trains from a small number of labeled examples and exploits two regulatory signals from unlabeled data.

Classification, General Classification +1
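
The snippet does not spell out the two regulatory signals, so the following is only a generic semi-supervised sketch: supervised cross-entropy on the few labeled clips, plus a confidence-thresholded pseudo-label term on unlabeled clips. It illustrates the training setup in general, not VideoSSL's actual losses.

```python
import torch
import torch.nn.functional as F

def semi_supervised_loss(model, labeled_clips, labels, unlabeled_clips,
                         threshold=0.95, weight=1.0):
    """Generic illustration (NOT the specific signals used in VideoSSL):
    supervised cross-entropy on labeled clips plus a pseudo-label term
    on unlabeled clips whose predictions are confident enough."""
    sup = F.cross_entropy(model(labeled_clips), labels)
    with torch.no_grad():
        probs = F.softmax(model(unlabeled_clips), dim=1)
        conf, pseudo = probs.max(dim=1)
        keep = conf > threshold          # only trust confident predictions
    unsup = torch.tensor(0.0)
    if keep.any():
        unsup = F.cross_entropy(model(unlabeled_clips[keep]), pseudo[keep])
    return sup + weight * unsup
```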

Self-supervised Feature Learning by Cross-modality and Cross-view Correspondences

no code implementations13 Apr 2020 Longlong Jing, Yu-cheng Chen, Ling Zhang, Mingyi He, YingLi Tian

Specifically, 2D image features of rendered images from different views are extracted by a 2D convolutional neural network, and 3D point cloud features are extracted by a graph convolutional neural network.

3D Part Segmentation, 3D Shape Classification +4
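
The correspondence between the two encoders described above can be illustrated with a contrastive-style alignment loss that matches a rendered-view embedding to the point-cloud embedding of the same object. This is a hedged sketch; the paper's exact objective may differ.

```python
import torch
import torch.nn.functional as F

def cross_modal_alignment_loss(feat_2d, feat_3d, temperature=0.07):
    """Align features of the same object across modalities: the i-th 2D view
    embedding should match the i-th point-cloud embedding (InfoNCE-style).
    Hypothetical sketch, not necessarily the loss used in the paper."""
    feat_2d = F.normalize(feat_2d, dim=1)   # (B, D) from a 2D CNN on rendered views
    feat_3d = F.normalize(feat_3d, dim=1)   # (B, D) from a graph conv net on points
    logits = feat_2d @ feat_3d.t() / temperature
    targets = torch.arange(feat_2d.size(0), device=feat_2d.device)
    return F.cross_entropy(logits, targets)
```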

Weakly Supervised Semantic Segmentation in 3D Graph-Structured Point Clouds of Wild Scenes

no code implementations26 Apr 2020 Hai-Yan Wang, Xuejian Rong, Liang Yang, Jinglun Feng, Jizhong Xiao, YingLi Tian

The deficiency of 3D segmentation labels is one of the main obstacles to effective point cloud segmentation, especially for scenes in the wild with a wide variety of objects.

3D Semantic Segmentation, Point Cloud Segmentation +4

An Isolated-Signing RGBD Dataset of 100 American Sign Language Signs Produced by Fluent ASL Signers

no code implementations LREC 2020 Saad Hassan, Larwan Berke, Elahe Vahdani, Longlong Jing, YingLi Tian, Matt Huenerfauth

We have collected a new dataset consisting of color and depth videos of fluent American Sign Language (ASL) signers performing sequences of 100 ASL signs, recorded with a Kinect v2 sensor.

Recognizing American Sign Language Nonmanual Signal Grammar Errors in Continuous Videos

no code implementations1 May 2020 Elahe Vahdani, Longlong Jing, YingLi Tian, Matt Huenerfauth

Our system is able to recognize grammatical elements on ASL-HW-RGBD from manual gestures, facial expressions, and head movements and successfully detect 8 ASL grammatical mistakes.

Self-supervised Modal and View Invariant Feature Learning

no code implementations28 May 2020 Longlong Jing, Yu-cheng Chen, Ling Zhang, Mingyi He, YingLi Tian

By exploring the inherent multi-modality attributes of 3D objects, in this paper, we propose to jointly learn modal-invariant and view-invariant features from different modalities including image, point cloud, and mesh with heterogeneous networks for 3D data.

Cross-Modal Retrieval, Retrieval

Monocular Human Pose Estimation: A Survey of Deep Learning-based Methods

no code implementations2 Jun 2020 Yu-cheng Chen, YingLi Tian, Mingyi He

Vision-based monocular human pose estimation, as one of the most fundamental and challenging problems in computer vision, aims to obtain the posture of the human body from input images or video sequences.

3D Human Pose Estimation

Cross-modal Center Loss

no code implementations8 Aug 2020 Longlong Jing, Elahe Vahdani, Jiaxing Tan, YingLi Tian

Cross-modal retrieval aims to learn discriminative and modal-invariant features for data from different modalities.

Cross-Modal Retrieval, Retrieval
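
A hedged sketch of the idea named in the title: one learnable center per class is shared by all modalities, and every feature, whichever modality it comes from, is pulled toward the center of its class. This is a simplification for illustration, not the paper's full formulation.

```python
import torch
import torch.nn as nn

class CrossModalCenterLoss(nn.Module):
    """Simplified sketch: one learnable center per class, shared by all
    modalities; every feature is pulled toward the center of its class."""
    def __init__(self, num_classes, feat_dim):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, features, labels):
        # features: (N, feat_dim) from any modality (image / point cloud / mesh)
        # labels:   (N,) class indices shared across modalities
        return ((features - self.centers[labels]) ** 2).sum(dim=1).mean()

# Usage: the same criterion is applied to features from each modality,
# so all modalities are drawn to common per-class centers.
criterion = CrossModalCenterLoss(num_classes=40, feat_dim=128)
img_feat, pc_feat = torch.randn(8, 128), torch.randn(8, 128)
labels = torch.randint(0, 40, (8,))
loss = criterion(img_feat, labels) + criterion(pc_feat, labels)
```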

FESTA: Flow Estimation via Spatial-Temporal Attention for Scene Point Clouds

1 code implementation CVPR 2021 HaiYan Wang, Jiahao Pang, Muhammad A. Lodhi, YingLi Tian, Dong Tian

Scene flow depicts the dynamics of a 3D scene, which is critical for various applications such as autonomous driving, robot navigation, and AR/VR.

Autonomous Driving, Robot Navigation +1

Cross-Modal Center Loss for 3D Cross-Modal Retrieval

no code implementations CVPR 2021 Longlong Jing, Elahe Vahdani, Jiaxing Tan, YingLi Tian

Cross-modal retrieval aims to learn discriminative and modal-invariant features for data from different modalities.

Cross-Modal Retrieval, Retrieval

Advancing Self-supervised Monocular Depth Learning with Sparse LiDAR

2 code implementations20 Sep 2021 Ziyue Feng, Longlong Jing, Peng Yin, YingLi Tian, Bing Li

Unlike existing methods, which use sparse LiDAR mainly through time-consuming iterative post-processing, our model fuses monocular image features and sparse LiDAR features to predict initial depth maps.

Depth Completion, Depth Prediction +3
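
A minimal sketch of the fusion described above: the RGB image and the sparse LiDAR depth map are encoded separately, the feature maps are concatenated, and an initial dense depth map is regressed. Layers and shapes are hypothetical; the actual architecture is far more elaborate.

```python
import torch
import torch.nn as nn

class ImageLidarFusion(nn.Module):
    """Hypothetical sketch: encode the RGB image and the sparse LiDAR depth map
    separately, concatenate the feature maps, and regress an initial dense depth."""
    def __init__(self):
        super().__init__()
        self.img_enc = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.lidar_enc = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1))   # initial depth map

    def forward(self, image, sparse_depth):
        # sparse_depth: (B, 1, H, W), zeros where no LiDAR return exists
        fused = torch.cat([self.img_enc(image), self.lidar_enc(sparse_depth)], dim=1)
        return self.decoder(fused)
```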

Self-Supervised Modality-Invariant and Modality-Specific Feature Learning for 3D Objects

no code implementations29 Sep 2021 Longlong Jing, Zhimin Chen, Bing Li, YingLi Tian

Our proposed novel self-supervised model learns two types of distinct features: modality-invariant features and modality-specific features.

3D Object Recognition, Cross-Modal Retrieval +1

Deep Learning-based Action Detection in Untrimmed Videos: A Survey

no code implementations30 Sep 2021 Elahe Vahdani, YingLi Tian

The task of temporal activity detection in untrimmed videos aims to localize the temporal boundary of actions and classify the action categories.

Action Detection, Action Recognition +1

Multimodal Semi-Supervised Learning for 3D Objects

1 code implementation22 Oct 2021 Zhimin Chen, Longlong Jing, Yang Liang, YingLi Tian, Bing Li

This paper explores how the coherence of different modalities of 3D data (e.g., point cloud, image, and mesh) can be used to improve data efficiency for both 3D classification and retrieval tasks.

3D Classification, Retrieval

The State of Aerial Surveillance: A Survey

no code implementations9 Jan 2022 Kien Nguyen, Clinton Fookes, Sridha Sridharan, YingLi Tian, Feng Liu, Xiaoming Liu, Arun Ross

The rapid emergence of airborne platforms and imaging sensors is enabling new forms of aerial surveillance due to their unprecedented advantages in scale, mobility, deployment, and covert observation capabilities.

Disentangling Object Motion and Occlusion for Unsupervised Multi-frame Monocular Depth

1 code implementation29 Mar 2022 Ziyue Feng, Liang Yang, Longlong Jing, HaiYan Wang, YingLi Tian, Bing Li

Conventional self-supervised monocular depth prediction methods are based on a static environment assumption, which leads to accuracy degradation in dynamic scenes due to the mismatch and occlusion problems introduced by object motions.

Depth Prediction, Disentanglement +4

Sequential Point Clouds: A Survey

no code implementations20 Apr 2022 HaiYan Wang, YingLi Tian

Point clouds have drawn increasing research attention as well as real-world applications.

Autonomous Driving, object-detection +2

POTLoc: Pseudo-Label Oriented Transformer for Point-Supervised Temporal Action Localization

no code implementations20 Oct 2023 Elahe Vahdani, YingLi Tian

This paper tackles the challenge of point-supervised temporal action detection, wherein only a single frame is annotated for each action instance in the training set.

Action Detection, Pseudo Label +1

ADM-Loc: Actionness Distribution Modeling for Point-supervised Temporal Action Localization

no code implementations27 Nov 2023 Elahe Vahdani, YingLi Tian

This paper addresses the challenge of point-supervised temporal action detection, in which only one frame per action instance is annotated in the training set.

Action Classification, Action Detection +2

Burst Denoising via Temporally Shifted Wavelet Transforms

no code implementations ECCV 2020 Xuejian Rong, Denis Demandolx, Kevin Matzen, Priyam Chatterjee, YingLi Tian

As a result, imaging pipelines often rely on computational photography to improve SNR by fusing multiple short exposures.

Denoising
