Search Results for author: Yanyu Xu

Found 16 papers, 9 papers with code

Reliable Joint Segmentation of Retinal Edema Lesions in OCT Images

1 code implementation 1 Dec 2022 Meng Wang, Kai Yu, Chun-Mei Feng, Ke Zou, Yanyu Xu, Qingquan Meng, Rick Siow Mong Goh, Yong Liu, Huazhu Fu

Specifically, to improve the model's ability to learn the complex pathological features of retinal edema lesions in OCT images, we develop a novel segmentation backbone that integrates a wavelet-enhanced feature extractor network and our newly designed multi-scale transformer module.


ResNeRF: Geometry-Guided Residual Neural Radiance Field for Indoor Scene Novel View Synthesis

no code implementations26 Nov 2022 Yuting Xiao, Yiqun Zhao, Yanyu Xu, Shenghua Gao

In the first stage, we focus on geometry reconstruction based on an SDF representation, which yields a good geometric surface of the scene as well as a sharp density.
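A common way to obtain a sharp density from an SDF in neural rendering pipelines (a VolSDF-style formulation, used here as an illustrative assumption rather than this paper's exact scheme) is to pass the signed distance through a scaled Laplace CDF:

```python
import math

def sdf_to_density(sdf: float, beta: float = 0.1) -> float:
    """Map a signed distance to a volume density via the Laplace CDF.

    Points inside the surface (sdf < 0) approach the maximum density
    1/beta; points far outside decay to zero. A smaller beta gives a
    sharper density concentrated at the zero level set.
    """
    if sdf <= 0:
        cdf = 1.0 - 0.5 * math.exp(sdf / beta)
    else:
        cdf = 0.5 * math.exp(-sdf / beta)
    return cdf / beta

# Density peaks at the surface (sdf == 0) and decays away from it.
```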

Novel View Synthesis

GAMMA Challenge: Glaucoma grAding from Multi-Modality imAges

no code implementations14 Feb 2022 Junde Wu, Huihui Fang, Fei Li, Huazhu Fu, Fengbin Lin, Jiongcheng Li, Lexing Huang, Qinji Yu, Sifan Song, Xinxing Xu, Yanyu Xu, Wensai Wang, Lingxiao Wang, Shuai Lu, Huiqi Li, Shihua Huang, Zhichao Lu, Chubin Ou, Xifei Wei, Bingyuan Liu, Riadh Kobbi, Xiaoying Tang, Li Lin, Qiang Zhou, Qiang Hu, Hrvoje Bogunovic, José Ignacio Orlando, Xiulan Zhang, Yanwu Xu

However, although numerous computer-aided diagnosis algorithms have been proposed based on fundus images or OCT volumes, few methods leverage both modalities for glaucoma assessment.

Layout-Guided Novel View Synthesis from a Single Indoor Panorama

1 code implementation CVPR 2021 Jiale Xu, Jia Zheng, Yanyu Xu, Rui Tang, Shenghua Gao

Then, we leverage the room layout prior, a strong structural constraint of the indoor scene, to guide the generation of target views.

Novel View Synthesis

Crowd Counting With Partial Annotations in an Image

1 code implementation ICCV 2021 Yanyu Xu, Ziming Zhong, Dongze Lian, Jing Li, Zhengxin Li, Xinxing Xu, Shenghua Gao

To fully leverage data captured from different scenes with different view angles while reducing annotation cost, this paper studies a novel crowd counting setting, i.e., using only partial annotations in each image as training data.
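The core of training with partial annotations is restricting the supervision signal to the labeled region. A minimal sketch (the function name and masked-MSE formulation are illustrative assumptions, not the paper's exact loss):

```python
import numpy as np

def masked_count_loss(pred_density, gt_density, annotated_mask):
    """MSE on the density map restricted to the annotated region.

    Pixels outside `annotated_mask` contribute nothing, so the
    unlabeled parts of the image impose no (possibly wrong) supervision.
    """
    pred = np.asarray(pred_density, dtype=float)
    gt = np.asarray(gt_density, dtype=float)
    mask = np.asarray(annotated_mask, dtype=bool)
    if not mask.any():
        return 0.0
    diff = (pred - gt) ** 2
    return float(diff[mask].mean())
```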

Active Learning · Crowd Counting

Amodal Segmentation Based on Visible Region Segmentation and Shape Prior

1 code implementation10 Dec 2020 Yuting Xiao, Yanyu Xu, Ziming Zhong, Weixin Luo, Jiawei Li, Shenghua Gao

In this way, features corresponding to background and occlusion can be suppressed for amodal mask estimation.
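Taken literally, suppressing background and occluder features can be sketched as gating the feature map with the predicted visible-region mask before amodal estimation (a hypothetical simplification; the paper's actual mechanism also involves a shape prior):

```python
import numpy as np

def suppress_by_visible_mask(features, visible_mask):
    """Zero out feature responses outside the predicted visible region.

    `features` is (C, H, W); `visible_mask` is (H, W) with values in
    [0, 1]. Broadcasting the mask over channels suppresses background
    and occluder features before amodal mask estimation.
    """
    features = np.asarray(features, dtype=float)
    mask = np.asarray(visible_mask, dtype=float)
    return features * mask[None, :, :]
```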


SIRI: Spatial Relation Induced Network For Spatial Description Resolution

no code implementations NeurIPS 2020 Peiyao Wang, Weixin Luo, Yanyu Xu, Haojie Li, Shugong Xu, Jianyu Yang, Shenghua Gao

Spatial Description Resolution is proposed as a language-guided localization task: given a language description, the corresponding target location is identified in a panoramic street view.

Semantic Human Matting

2 code implementations5 Sep 2018 Quan Chen, Tiezheng Ge, Yanyu Xu, Zhiqiang Zhang, Xinxin Yang, Kun Gai

SHM is the first algorithm that learns to jointly fit both semantic information and high quality details with deep networks.

Image Matting

Saliency Detection in 360° Videos

no code implementations ECCV 2018 Ziheng Zhang, Yanyu Xu, Jingyi Yu, Shenghua Gao

Considering that 360° videos are usually stored as equirectangular panoramas, we propose to implement the spherical convolution on the panorama by stretching and rotating the kernel based on the location of the patch to be convolved.
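The kernel stretching can be illustrated with the equirectangular geometry alone: at latitude φ a fixed horizontal pixel step covers an arc scaled by cos(φ), so the kernel's horizontal footprint is stretched by 1/cos(φ) toward the poles. A minimal sketch (function name and row-to-latitude convention are assumptions, not the paper's implementation):

```python
import math

def stretched_kernel_offsets(row, height, radius=1):
    """Horizontal sampling offsets for a kernel row on an equirectangular panorama.

    Stretching the offsets by 1/cos(latitude) keeps the kernel's
    footprint on the sphere roughly constant across latitudes.
    """
    # Latitude of this row in (-pi/2, pi/2); row 0 is the top of the image.
    phi = (0.5 - (row + 0.5) / height) * math.pi
    stretch = 1.0 / max(math.cos(phi), 1e-6)
    return [dx * stretch for dx in range(-radius, radius + 1)]
```

Near the equator the offsets stay close to unit spacing; near the poles they grow large, matching the severe horizontal distortion of the projection there.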

Video Saliency Detection

Encoding Crowd Interaction With Deep Neural Network for Pedestrian Trajectory Prediction

1 code implementation CVPR 2018 Yanyu Xu, Zhixin Piao, Shenghua Gao

Specifically, motivated by the residual learning in deep learning, we propose to predict displacement between neighboring frames for each pedestrian sequentially.
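Predicting per-frame displacements rather than absolute coordinates means the trajectory is recovered by accumulating the predicted steps from the last observed position. A minimal sketch of that rollout (the function name is illustrative):

```python
def rollout_trajectory(start, displacements):
    """Reconstruct a trajectory from per-step predicted displacements.

    Instead of regressing absolute coordinates, the model predicts the
    displacement between neighboring frames; positions are recovered by
    cumulative summation from the last observed location `start`.
    """
    x, y = start
    trajectory = [(x, y)]
    for dx, dy in displacements:
        x, y = x + dx, y + dy
        trajectory.append((x, y))
    return trajectory
```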

Pedestrian Trajectory Prediction · Trajectory Prediction

Gaze Prediction in Dynamic 360° Immersive Videos

no code implementations CVPR 2018 Yanyu Xu, Yanbing Dong, Junru Wu, Zhengzhong Sun, Zhiru Shi, Jingyi Yu, Shenghua Gao

This paper explores gaze prediction in dynamic 360° immersive videos, i.e., based on the history scan path and VR contents, we predict where a viewer will look at an upcoming time.

Gaze Prediction

Personalized Saliency and its Prediction

1 code implementation9 Oct 2017 Yanyu Xu, Shenghua Gao, Junru Wu, Nianyi Li, Jingyi Yu

Specifically, we propose to decompose a personalized saliency map (referred to as PSM) into a universal saliency map (referred to as USM) predictable by existing saliency detection models and a new discrepancy map across users that characterizes personalized saliency.
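Under this decomposition, a personalized saliency map is the universal map plus a per-user discrepancy map. A minimal additive sketch (clipping to [0, 1] is an added assumption to keep the result a valid saliency map):

```python
import numpy as np

def personalized_saliency(usm, discrepancy):
    """Compose a personalized saliency map (PSM) from the universal
    saliency map (USM) and a per-user discrepancy map, clipping the
    sum to the valid [0, 1] saliency range."""
    psm = np.asarray(usm, dtype=float) + np.asarray(discrepancy, dtype=float)
    return np.clip(psm, 0.0, 1.0)
```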

Saliency Detection
