Search Results for author: Sheng Yang

Found 13 papers, 4 papers with code

Efficient and Systematic Partitioning of Large and Deep Neural Networks for Parallelization

1 code implementation • Lecture Notes in Computer Science, 2021 • Haoran Wang, Chong Li, Thibaut Tachon, Hongxing Wang, Sheng Yang, Sébastien Limet, Sophie Robert

We propose the Flex-Edge Recursive Graph and the Double Recursive Algorithm, which limit parallelization strategy generation to linear complexity while maintaining the quality of the resulting strategy.
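The abstract only names the Flex-Edge Recursive Graph and the Double Recursive Algorithm; the sketch below is a purely hypothetical illustration of linear-cost strategy generation, choosing one partition axis per layer in a single pass under an assumed communication-cost model. The layer description and `comm_cost` heuristic are not from the paper.

```python
# Hypothetical sketch: pick one partition axis per layer in a single pass,
# so strategy generation stays linear in the number of layers.
def comm_cost(layer, axis, devices):
    """Rough communication volume if `layer` is split along `axis` (illustrative)."""
    if axis == "data":
        return layer["weight_size"]           # assumed: gradient all-reduce dominates
    return layer["activation_size"] / devices  # assumed: activation all-gather dominates

def generate_strategy(layers, devices=8):
    """One linear scan over the layer list -> one axis choice per layer."""
    strategy = []
    for layer in layers:
        best = min(("data", "model"), key=lambda a: comm_cost(layer, a, devices))
        strategy.append((layer["name"], best))
    return strategy

layers = [
    {"name": "conv1", "weight_size": 9e3, "activation_size": 8e6},
    {"name": "fc1",   "weight_size": 4e7, "activation_size": 4e3},
]
print(generate_strategy(layers))
```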

Progressive Self-Guided Loss for Salient Object Detection

1 code implementation • 7 Jan 2021 • Sheng Yang, Weisi Lin, Guosheng Lin, Qiuping Jiang, Zichuan Liu

We present a simple yet effective progressive self-guided loss function to facilitate deep learning-based salient object detection (SOD) in images.

Object Detection • Salient Object Detection
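The snippet above does not spell out the loss itself; the sketch below shows one plausible form of a self-guided auxiliary target, built by dilating the intersection of the thresholded prediction with the ground truth (dilation approximated with max pooling) and weighting it progressively. Thresholds, kernel size, and the weighting schedule are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def self_guided_loss(pred, gt, step, total_steps, thresh=0.5):
    """Hypothetical progressive self-guided loss for SOD.

    pred, gt: (N, 1, H, W) tensors in [0, 1]. An auxiliary target is built from
    the model's confident foreground (thresholded prediction intersected with the
    ground truth), then dilated; its weight grows linearly over training.
    """
    with torch.no_grad():
        confident_fg = (pred > thresh).float() * gt
        # Morphological dilation approximated by max pooling.
        aux_target = F.max_pool2d(confident_fg, kernel_size=5, stride=1, padding=2)
    base = F.binary_cross_entropy(pred, gt)
    aux = F.binary_cross_entropy(pred, aux_target)
    alpha = step / total_steps          # progressive weighting, assumed linear
    return base + alpha * aux

pred = torch.rand(2, 1, 64, 64)
gt = (torch.rand(2, 1, 64, 64) > 0.7).float()
print(self_guided_loss(pred, gt, step=10, total_steps=100))
```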

A Method of Generating Measurable Panoramic Image for Indoor Mobile Measurement System

no code implementations • 27 Oct 2020 • Hao Ma, Jingbin Liu, Zhirong Hu, Hongyu Qiu, Dong Xu, Zemin Wang, Xiaodong Gong, Sheng Yang

This paper presents a pipeline for generating high-quality panoramic images with depth information, which involves two key research problems: fusion of LiDAR and image data, and image stitching.

Image Stitching
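Fusing LiDAR and image data, as described above, requires projecting LiDAR points into the camera image. The sketch below is the standard pinhole projection with an assumed intrinsic matrix K and LiDAR-to-camera extrinsics (R, t), not the calibration or fusion pipeline used in the paper.

```python
import numpy as np

def project_lidar_to_image(points_lidar, K, R, t):
    """Project Nx3 LiDAR points into pixel coordinates with depth.

    K: 3x3 camera intrinsics; R, t: LiDAR-to-camera rotation and translation.
    Returns (u, v) pixel coordinates and depths for points in front of the camera.
    """
    pts_cam = points_lidar @ R.T + t          # transform into the camera frame
    in_front = pts_cam[:, 2] > 0.1            # keep points with positive depth
    pts_cam = pts_cam[in_front]
    uv = (K @ pts_cam.T).T                    # apply intrinsics
    uv = uv[:, :2] / uv[:, 2:3]               # perspective division
    return uv, pts_cam[:, 2]

K = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])
R, t = np.eye(3), np.zeros(3)
pts = np.random.rand(100, 3) * 10
uv, depth = project_lidar_to_image(pts, K, R, t)
print(uv.shape, depth.shape)
```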

Learning Efficient Parameter Server Synchronization Policies for Distributed SGD

no code implementations • ICLR 2020 • Rong Zhu, Sheng Yang, Andreas Pfadler, Zhengping Qian, Jingren Zhou

We apply a reinforcement learning (RL) based approach to learning optimal synchronization policies used for Parameter Server-based distributed training of machine learning models with Stochastic Gradient Descent (SGD).

Q-Learning
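The abstract names an RL-based approach (tagged Q-Learning above). The sketch below is a generic tabular Q-learning loop on a toy synchronization decision (apply a possibly stale update now vs. wait for stragglers); the state discretization, reward, and simulated environment are hypothetical and not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: state = staleness bucket (0..4); action 0 = "apply now", 1 = "wait".
N_STATES, N_ACTIONS = 5, 2
Q = np.zeros((N_STATES, N_ACTIONS))
alpha, gamma, eps = 0.1, 0.9, 0.2

def step(state, action):
    """Hypothetical environment: waiting reduces staleness but costs time."""
    if action == 1:                                  # wait
        return max(state - 1, 0), -0.5               # time penalty
    next_state = min(state + int(rng.integers(0, 2)), N_STATES - 1)
    return next_state, 1.0 - 0.3 * state             # stale updates help less

state = 0
for _ in range(5000):
    action = int(rng.integers(N_ACTIONS)) if rng.random() < eps else int(Q[state].argmax())
    next_state, reward = step(state, action)
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

print(np.round(Q, 2))   # learned action values per staleness bucket
```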

ClusterVO: Clustering Moving Instances and Estimating Visual Odometry for Self and Surroundings

no code implementations • CVPR 2020 • Jiahui Huang, Sheng Yang, Tai-Jiang Mu, Shi-Min Hu

We present ClusterVO, a stereo visual odometry system that simultaneously clusters and estimates the motion of both the ego camera and surrounding rigid clusters/objects.

Autonomous Driving • Scene Understanding • +1
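Estimating the motion of a rigid cluster of tracked 3D points between two frames is a standard building block for this kind of system. The sketch below is the textbook SVD-based (Kabsch) alignment, shown as a generic illustration rather than ClusterVO's actual estimator, which the snippet above does not detail.

```python
import numpy as np

def rigid_motion(P, Q):
    """Least-squares rigid transform (R, t) mapping Nx3 points P onto Q (Kabsch)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # avoid reflections
    R = Vt.T @ np.diag([1, 1, d]) @ U.T
    t = cQ - R @ cP
    return R, t

# Synthetic check: recover a known rotation + translation.
rng = np.random.default_rng(1)
P = rng.random((50, 3))
angle = 0.3
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
Q = P @ R_true.T + np.array([0.5, -0.2, 1.0])
R, t = rigid_motion(P, Q)
print(np.allclose(R, R_true), np.round(t, 2))
```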

Morphing and Sampling Network for Dense Point Cloud Completion

2 code implementations • 30 Nov 2019 • Minghua Liu, Lu Sheng, Sheng Yang, Jing Shao, Shi-Min Hu

3D point cloud completion, the task of inferring the complete geometric shape from a partial point cloud, has been attracting attention in the community.

Point Cloud Completion
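Point cloud completion methods are usually trained and evaluated with set-level distances. The sketch below computes the symmetric Chamfer distance with numpy as a generic example; other set distances such as the Earth Mover's Distance (which requires a matching solver) are also common in this line of work and are not shown here.

```python
import numpy as np

def chamfer_distance(A, B):
    """Symmetric Chamfer distance between two point sets A (Nx3) and B (Mx3)."""
    # Pairwise squared Euclidean distances, shape (N, M).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

partial = np.random.rand(512, 3)     # e.g., a partial input cloud
complete = np.random.rand(2048, 3)   # e.g., a predicted or ground-truth complete cloud
print(chamfer_distance(partial, complete))
```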

High-Dimensional Stochastic Gradient Quantization for Communication-Efficient Edge Learning

no code implementations • 9 Oct 2019 • Yuqing Du, Sheng Yang, Kaibin Huang

First, the framework features a practical hierarchical architecture for decomposing the stochastic gradient into its norm and normalized block gradients, and efficiently quantizes them using a uniform quantizer and a low-dimensional codebook on a Grassmann manifold, respectively.

Federated Learning • Quantization
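A rough illustration of the decomposition described above: the gradient is split into blocks, each block's norm is quantized with a uniform quantizer, and each block's direction is snapped to the nearest codeword of a small random unit-norm codebook (a stand-in for the Grassmannian codebook in the paper). Block size, codebook size, quantizer range, and levels are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize_gradient(g, block=8, levels=16, codebook_size=32, g_max=1.0):
    """Sketch of hierarchical quantization: uniform-quantized norms + direction codebook."""
    g = g.reshape(-1, block)                          # split into blocks
    norms = np.linalg.norm(g, axis=1, keepdims=True)
    directions = g / np.maximum(norms, 1e-12)

    # Random unit-norm codebook (illustrative stand-in for a Grassmannian codebook).
    codebook = rng.standard_normal((codebook_size, block))
    codebook /= np.linalg.norm(codebook, axis=1, keepdims=True)

    # Nearest codeword up to sign; uniform quantization of each block norm.
    corr = directions @ codebook.T
    idx = np.abs(corr).argmax(axis=1)
    signs = np.sign(corr[np.arange(len(idx)), idx])
    q_norms = np.round(norms / g_max * (levels - 1)) * g_max / (levels - 1)

    return (q_norms * signs[:, None] * codebook[idx]).ravel()

g = rng.standard_normal(64) * 0.1
print(np.linalg.norm(g - quantize_gradient(g)))   # quantization error
```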

A Dilated Inception Network for Visual Saliency Prediction

1 code implementation • 7 Apr 2019 • Sheng Yang, Guosheng Lin, Qiuping Jiang, Weisi Lin

In this work, we propose an end-to-end dilated inception network (DINet) for visual saliency prediction.

Saliency Prediction
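A dilated inception block combines parallel convolutions with different dilation rates to enlarge the receptive field without losing resolution. The sketch below is a generic PyTorch version with assumed channel counts and dilation rates (1, 2, 3), not the exact module used in DINet.

```python
import torch
import torch.nn as nn

class DilatedInceptionBlock(nn.Module):
    """Parallel 3x3 convolutions with different dilation rates, concatenated and fused.

    Channel counts and dilation rates are illustrative assumptions.
    """
    def __init__(self, in_ch=64, branch_ch=32, dilations=(1, 2, 3)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, branch_ch, kernel_size=3, padding=d, dilation=d),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        ])
        # 1x1 convolution to fuse the concatenated branch outputs.
        self.fuse = nn.Conv2d(branch_ch * len(dilations), in_ch, kernel_size=1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

x = torch.rand(1, 64, 60, 80)
print(DilatedInceptionBlock()(x).shape)   # spatial size is preserved: (1, 64, 60, 80)
```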

Learning Markov Clustering Networks for Scene Text Detection

no code implementations • CVPR 2018 • Zichuan Liu, Guosheng Lin, Sheng Yang, Jiashi Feng, Weisi Lin, Wang Ling Goh

MCN predicts instance-level bounding boxes by first converting an image into a Stochastic Flow Graph (SFG) and then performing Markov Clustering on this graph.

Scene Text Detection
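Markov Clustering (MCL) itself is a standard graph algorithm: repeated expansion (matrix power) and inflation (element-wise power plus renormalization) on a column-stochastic matrix. The sketch below runs it on a small hand-built adjacency matrix as a generic illustration, not on MCN's learned Stochastic Flow Graph.

```python
import numpy as np

def markov_clustering(A, expansion=2, inflation=2.0, iters=50):
    """Standard MCL on an adjacency matrix A: alternate expansion and inflation."""
    M = A + np.eye(len(A))                 # add self-loops
    M = M / M.sum(axis=0, keepdims=True)   # make columns stochastic
    for _ in range(iters):
        M = np.linalg.matrix_power(M, expansion)   # expansion: flow spreads
        M = M ** inflation                         # inflation: strong flows strengthened
        M = M / M.sum(axis=0, keepdims=True)
    # Rows with remaining mass act as attractors; their supports are the clusters.
    clusters = {frozenset(np.flatnonzero(row > 1e-6)) for row in M if row.max() > 1e-6}
    return [sorted(c) for c in clusters]

# Two obvious groups: {0, 1, 2} and {3, 4}.
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 0, 0],
              [0, 0, 0, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
print(markov_clustering(A))
```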

Flint Water Crisis: Data-Driven Risk Assessment Via Residential Water Testing

no code implementations • 30 Sep 2016 • Jacob Abernethy, Cyrus Anderson, Chengyu Dai, Arya Farahi, Linh Nguyen, Adam Rauh, Eric Schwartz, Wenbo Shen, Guangsha Shi, Jonathan Stroud, Xinyu Tan, Jared Webb, Sheng Yang

In this analysis, we find that lead service lines are not the only factor that is predictive of the risk of lead contamination of water.
