1 code implementation • 25 Mar 2024 • Yuhang Ding, Liulei Li, Wenguan Wang, Yi Yang
This enables knowledge acquired from prior slices to assist in segmenting the current slice, efficiently bridging communication between distant slices using only 2D networks.
no code implementations • ICCV 2023 • Liulei Li, Wenguan Wang, Yi Yang
Current high-performance semantic segmentation models are purely data-driven sub-symbolic approaches and blind to the structured nature of the visual world.
no code implementations • ICCV 2023 • Lu Yang, Liulei Li, Xueshi Xin, Yifan Sun, Qing Song, Wenguan Wang
Unlike existing efforts devoted to localizing tourist photos captured by perspective cameras, this article focuses on devising person-positioning solutions using overhead fisheye cameras.
1 code implementation • 11 May 2023 • Yangming Cheng, Liulei Li, Yuanyou Xu, Xiaodi Li, Zongxin Yang, Wenguan Wang, Yi Yang
This report presents a framework called Segment And Track Anything (SAMTrack) that allows users to precisely and effectively segment and track any object in a video.
1 code implementation • CVPR 2023 • Yurong Zhang, Liulei Li, Wenguan Wang, Rong Xie, Li Song, Wenjun Zhang
Current top-leading solutions for video object segmentation (VOS) typically follow a matching-based regime: for each query frame, the segmentation mask is inferred from its correspondences to previously processed frames and the first, annotated frame.
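The matching-based regime described above can be sketched as label propagation through a feature affinity matrix: each query pixel attends over memory pixels (from past frames and the annotated first frame) and inherits their soft labels. All names, shapes, and the cosine-similarity/softmax choice below are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def propagate_mask(query_feat, memory_feats, memory_masks, temperature=0.05):
    """Hedged sketch of matching-based VOS inference.

    query_feat:   (HW, C)  per-pixel features of the query frame
    memory_feats: (M, C)   per-pixel features of memory frames
    memory_masks: (M, K)   one-hot object labels of the memory pixels
    Returns a (HW, K) soft mask for the query frame.
    """
    # Cosine-similarity affinity between query pixels and memory pixels.
    q = query_feat / np.linalg.norm(query_feat, axis=1, keepdims=True)
    m = memory_feats / np.linalg.norm(memory_feats, axis=1, keepdims=True)
    affinity = (q @ m.T) / temperature          # (HW, M)

    # Softmax over memory pixels, then transfer memory labels to the query.
    affinity -= affinity.max(axis=1, keepdims=True)   # numerical stability
    weights = np.exp(affinity)
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ memory_masks               # (HW, K) soft segmentation
```

Each row of the result is a convex combination of memory labels, so it sums to one; taking the argmax over K yields the hard mask.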
1 code implementation • CVPR 2023 • Liulei Li, Wenguan Wang, Tianfei Zhou, Jianwu Li, Yi Yang
The objective of this paper is self-supervised learning of video object segmentation.
3 code implementations • CVPR 2022 • Liulei Li, Tianfei Zhou, Wenguan Wang, Jianwu Li, Yi Yang
In this paper, we instead address hierarchical semantic segmentation (HSS), which aims at structured, pixel-wise description of visual observation in terms of a class hierarchy.
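One way to make pixel-wise predictions respect a class hierarchy, as in the HSS setting above, is to score internal nodes by aggregating the probabilities of their descendant leaves, so that a pixel labeled "head" also scores highly for "person". This is a minimal sketch of that consistency idea, not the paper's actual formulation; the class names and tree encoding are hypothetical.

```python
def aggregate_hierarchy(leaf_probs, parent_of):
    """Derive internal-node scores from leaf probabilities.

    leaf_probs: {leaf_class: probability} for one pixel
    parent_of:  {class: parent_class or None} encoding the class tree
    Returns scores for every class on the root-to-leaf paths.
    """
    scores = dict(leaf_probs)
    for leaf, p in leaf_probs.items():
        # Walk up the tree, adding this leaf's mass to every ancestor.
        node = parent_of.get(leaf)
        while node is not None:
            scores[node] = scores.get(node, 0.0) + p
            node = parent_of.get(node)
    return scores

# Toy hierarchy: head and torso are children of person.
tree = {"person": None, "head": "person", "torso": "person"}
scores = aggregate_hierarchy({"head": 0.3, "torso": 0.4}, tree)
# scores["person"] == 0.7, so ancestor scores never fall below descendants'.
```

By construction a parent's score is the sum of its children's, which guarantees the structured prediction is internally consistent with the hierarchy.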
1 code implementation • 27 Mar 2022 • Liulei Li, Tianfei Zhou, Wenguan Wang, Lu Yang, Jianwu Li, Yi Yang
Our target is to learn visual correspondence from unlabeled videos.
no code implementations • CVPR 2022 • Liulei Li, Tianfei Zhou, Wenguan Wang, Lu Yang, Jianwu Li, Yi Yang
Our target is to learn visual correspondence from unlabeled videos.
1 code implementation • journal 2021 • Tianfei Zhou, Liulei Li, Xueyi Li, Chun-Mei Feng, Jianwu Li, Ling Shao
The framework explicitly encodes semantic dependencies in a group of images to discover rich semantic context for estimating more reliable pseudo ground-truths, which are subsequently employed to train more effective segmentation models.
1 code implementation • 20 Jun 2021 • Tianfei Zhou, Liulei Li, Gustav Bredell, Jianwu Li, Ender Konukoglu
The proposed network has two appealing characteristics: 1) The memory-augmented network offers the ability to quickly encode past segmentation information, which will be retrieved for the segmentation of other slices; 2) The quality assessment module enables the model to directly estimate the qualities of segmentation predictions, which allows an active learning paradigm where users preferentially label the lowest-quality slice for multi-round refinement.
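The active-learning paradigm in point 2) above reduces, at each refinement round, to selecting the unlabeled slice whose predicted segmentation quality is lowest and asking the user to annotate it. The function below is a hedged sketch of that selection step only; the quality scores themselves would come from the paper's quality assessment module, which is not reproduced here.

```python
def pick_slice_to_label(quality_scores, already_labeled):
    """Return the index of the lowest-quality slice not yet labeled.

    quality_scores:  list of per-slice predicted segmentation qualities
    already_labeled: set of slice indices the user has annotated
    """
    candidates = [(q, i) for i, q in enumerate(quality_scores)
                  if i not in already_labeled]
    if not candidates:
        raise ValueError("all slices are already labeled")
    # min over (quality, index) tuples picks the worst-quality slice.
    return min(candidates)[1]
```

In a multi-round loop, the chosen slice's user-provided mask would be written into the memory, the remaining slices re-segmented, and the selection repeated.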