Search Results for author: Lei Ke

Found 9 papers, 7 papers with code

Mask Transfiner for High-Quality Instance Segmentation

1 code implementation 26 Nov 2021 Lei Ke, Martin Danelljan, Xia Li, Yu-Wing Tai, Chi-Keung Tang, Fisher Yu

Instead of operating on regular dense tensors, our Mask Transfiner decomposes and represents the image regions as a quadtree.
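The quadtree idea above can be illustrated with a minimal sketch (my own simplified version, not the Mask Transfiner implementation): homogeneous regions of a binary mask collapse into single leaves, while mixed regions near object boundaries are subdivided recursively.

```python
import numpy as np

def build_quadtree(mask, x=0, y=0, size=None, min_size=2):
    """Recursively split a square binary mask into quadtree leaves.

    Homogeneous regions (all 0 or all 1) become single leaves; mixed
    regions near object boundaries are subdivided until min_size.
    Returns a list of (x, y, size, value) tuples, where value is the
    region's label, or None for a mixed leaf at the minimum size.
    """
    if size is None:
        size = mask.shape[0]
    region = mask[y:y + size, x:x + size]
    if region.min() == region.max():      # homogeneous: stop splitting
        return [(x, y, size, int(region[0, 0]))]
    if size <= min_size:                  # mixed boundary cell: keep as-is
        return [(x, y, size, None)]
    half = size // 2
    leaves = []
    for dy in (0, half):
        for dx in (0, half):
            leaves += build_quadtree(mask, x + dx, y + dy, half, min_size)
    return leaves

# Usage: an 8x8 mask with a centered 4x4 object.
mask = np.zeros((8, 8), dtype=int)
mask[2:6, 2:6] = 1
leaves = build_quadtree(mask)
```

Most of the computation then concentrates on the small mixed leaves along the object boundary rather than on the full dense grid.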

Instance Segmentation · Semantic Segmentation

Occlusion-Aware Video Object Inpainting

no code implementations ICCV 2021 Lei Ke, Yu-Wing Tai, Chi-Keung Tang

To facilitate this new research, we construct the first large-scale video object inpainting benchmark YouTube-VOI to provide realistic occlusion scenarios with both occluded and visible object masks available.

Texture Synthesis · Video Inpainting

Deep Occlusion-Aware Instance Segmentation with Overlapping BiLayers

1 code implementation CVPR 2021 Lei Ke, Yu-Wing Tai, Chi-Keung Tang

Segmenting highly-overlapping objects is challenging, because typically no distinction is made between real object contours and occlusion boundaries.

Amodal Instance Segmentation · Boundary Detection +4

Cascaded deep monocular 3D human pose estimation with evolutionary training data

1 code implementation CVPR 2020 Shichao Li, Lei Ke, Kevin Pratama, Yu-Wing Tai, Chi-Keung Tang, Kwang-Ting Cheng

End-to-end deep representation learning has achieved remarkable accuracy for monocular 3D human pose estimation, yet these models may fail for unseen poses with limited and fixed training data.
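A simple way to picture "evolutionary training data" is generating new pose samples from existing ones via crossover and mutation. The sketch below is an illustrative toy with made-up operators and parameters, not the paper's actual evolution scheme:

```python
import numpy as np

rng = np.random.default_rng(1)

def evolve_poses(poses, n_children=4, sigma=0.05):
    """Toy evolutionary augmentation of 3D poses (illustrative operators).

    Each child is produced by a per-joint crossover of two random parent
    poses, followed by a small Gaussian mutation of joint positions.
    `poses` has shape (num_poses, num_joints, 3).
    """
    n, j, _ = poses.shape
    children = []
    for _ in range(n_children):
        a, b = rng.choice(n, size=2, replace=False)
        mask = rng.random(j) < 0.5                           # crossover mask
        child = np.where(mask[:, None], poses[a], poses[b])  # mix parents
        child += rng.normal(scale=sigma, size=child.shape)   # mutation
        children.append(child)
    return np.stack(children)

# Usage: evolve 4 new poses from 5 parents with 17 joints each.
parents = rng.normal(size=(5, 17, 3))
children = evolve_poses(parents)
```

The point is that the training distribution is no longer limited and fixed: new, plausible poses can be synthesized on demand for rare regions of pose space.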

Data Augmentation · Monocular 3D Human Pose Estimation +2

Reflective Decoding Network for Image Captioning

no code implementations ICCV 2019 Lei Ke, Wenjie Pei, Ruiyu Li, Xiaoyong Shen, Yu-Wing Tai

State-of-the-art image captioning methods mostly focus on improving visual features; less attention has been paid to exploiting the inherent properties of language to boost captioning performance.

Image Captioning

Memory-Attended Recurrent Network for Video Captioning

1 code implementation CVPR 2019 Wenjie Pei, Jiyuan Zhang, Xiangrong Wang, Lei Ke, Xiaoyong Shen, Yu-Wing Tai

Typical techniques for video captioning follow the encoder-decoder framework, which can attend only to the single source video being processed.
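The basic encoder-decoder flow the sentence refers to can be sketched minimally (toy dimensions and random weights, purely illustrative, not the paper's model): the encoder compresses per-frame features of one video into a context vector, and the decoder emits word ids conditioned only on that single video's context.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative, not the paper's).
n_frames, feat_dim, hidden, vocab = 10, 16, 8, 5

def encode(frames):
    """Encoder: pool per-frame features of ONE video into a context vector."""
    W = rng.normal(size=(feat_dim, hidden))
    return np.tanh(frames @ W).mean(axis=0)       # (hidden,)

def decode(context, max_len=4):
    """Decoder: greedily emit word ids, conditioned only on this context."""
    W_out = rng.normal(size=(hidden, vocab))
    h, words = context, []
    for _ in range(max_len):
        logits = h @ W_out
        words.append(int(np.argmax(logits)))      # greedy word choice
        h = np.tanh(h + context)                  # trivial recurrence
    return words

frames = rng.normal(size=(n_frames, feat_dim))
caption_ids = decode(encode(frames))
```

The Memory-Attended Recurrent Network's contribution is precisely to break this single-video limitation by attending to a memory built across the whole training corpus.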

Video Captioning
