Search Results for author: Deshui Miao

Found 7 papers, 1 paper with code

Discriminative Spatial-Semantic VOS Solution: 1st Place Solution for 6th LSVOS

no code implementations • 29 Aug 2024 • Deshui Miao, Yameng Gu, Xin Li, Zhenyu He, YaoWei Wang, Ming-Hsuan Yang

Video object segmentation (VOS) is a crucial task in computer vision, but current VOS methods struggle with complex scenes and prolonged object motions.

Object • Object Recognition +3

Learning Spatial-Semantic Features for Robust Video Object Segmentation

no code implementations • 10 Jul 2024 • Xin Li, Deshui Miao, Zhenyu He, YaoWei Wang, Huchuan Lu, Ming-Hsuan Yang

Tracking and segmenting multiple similar objects with complex or separate parts in long-term videos is inherently challenging due to the ambiguity of target parts and identity confusion caused by occlusion, background clutter, and long-term variations.

Object • Semantic Segmentation +2

1st Place Solution for MOSE Track in CVPR 2024 PVUW Workshop: Complex Video Object Segmentation

no code implementations • 7 Jun 2024 • Deshui Miao, Xin Li, Zhenyu He, YaoWei Wang, Ming-Hsuan Yang

In this challenge, we propose a semantic embedding video object segmentation model and use the salient features of objects as query representations.

Object • Segmentation +3
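The snippet above mentions using the salient features of objects as query representations. Below is a minimal sketch of that general idea, assuming a mask-weighted pooling of reference features and a similarity-based mask decoder; the class name and layers are hypothetical, not the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SalientQueryVOS(nn.Module):
    """Hypothetical sketch: pool a salient object query from the reference mask,
    correlate it with current-frame features, and decode a mask."""
    def __init__(self, dim=256):
        super().__init__()
        self.proj = nn.Conv2d(dim, dim, kernel_size=1)
        self.mask_head = nn.Sequential(
            nn.Conv2d(dim + 1, dim, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(dim, 1, kernel_size=1),
        )

    def forward(self, ref_feat, ref_mask, cur_feat):
        # ref_feat, cur_feat: (B, C, H, W); ref_mask: (B, 1, H, W) in [0, 1]
        w = ref_mask.flatten(2)
        w = w / w.sum(dim=-1, keepdim=True).clamp(min=1e-6)
        # Salient object query: mask-weighted average of reference features.
        query = torch.bmm(w, ref_feat.flatten(2).transpose(1, 2))   # (B, 1, C)
        query = query.transpose(1, 2).unsqueeze(-1)                 # (B, C, 1, 1)

        # Per-pixel cosine similarity between the query and current-frame features.
        cur = self.proj(cur_feat)
        sim = (F.normalize(cur, dim=1) * F.normalize(query, dim=1)).sum(1, keepdim=True)
        return self.mask_head(torch.cat([cur_feat, sim], dim=1))    # (B, 1, H, W) logits

model = SalientQueryVOS()
out = model(torch.randn(2, 256, 32, 32), torch.rand(2, 1, 32, 32), torch.randn(2, 256, 32, 32))
print(out.shape)  # torch.Size([2, 1, 32, 32])
```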

Spatial-Temporal Multi-level Association for Video Object Segmentation

no code implementations9 Apr 2024 Deshui Miao, Xin Li, Zhenyu He, Huchuan Lu, Ming-Hsuan Yang

In addition, we propose a spatial-temporal memory to assist feature association and temporal ID assignment and correlation.

Object • Segmentation +3
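A minimal sketch of a spatial-temporal memory of the kind described in the snippet above, assuming an attention-based readout that propagates stored object IDs from past frames to the current frame; the class and method names are assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

class SpatialTemporalMemory:
    """Hypothetical sketch: store past-frame pixel features with their object IDs,
    then read the memory with dot-product attention to associate features and
    assign object IDs in a new frame."""
    def __init__(self):
        self.keys = []   # per-frame pixel features, each (C, H*W)
        self.ids = []    # per-frame one-hot object IDs, each (K, H*W)

    def write(self, feat, id_map, num_ids):
        # feat: (C, H, W); id_map: (H, W) integer object IDs
        self.keys.append(feat.flatten(1))
        self.ids.append(F.one_hot(id_map.flatten(), num_ids).float().T)

    def read(self, query):
        # query: (C, H, W) -> per-pixel ID logits (K, H, W)
        c, h, w = query.shape
        keys = torch.cat(self.keys, dim=1)                   # (C, T*H*W)
        ids = torch.cat(self.ids, dim=1)                     # (K, T*H*W)
        # Attention over memory locations, then propagate their IDs to each pixel.
        attn = torch.softmax(keys.T @ query.flatten(1) / c ** 0.5, dim=0)
        return (ids @ attn).reshape(-1, h, w)

memory = SpatialTemporalMemory()
memory.write(torch.randn(64, 24, 24), torch.randint(0, 3, (24, 24)), num_ids=3)
print(memory.read(torch.randn(64, 24, 24)).shape)  # torch.Size([3, 24, 24])
```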

Simple Contrastive Representation Adversarial Learning for NLP Tasks

no code implementations • 26 Nov 2021 • Deshui Miao, JiaQi Zhang, WenBo Xie, Jian Song, Xin Li, Lijuan Jia, Ning Guo

In this paper, adversarial training is performed over the NLP embedding space to generate challenging, harder-to-learn adversarial examples, which serve as learning pairs.

Contrastive Learning • Natural Language Understanding +4
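A minimal sketch of the recipe described in the snippet above, assuming an FGSM-style perturbation of token embeddings and an InfoNCE loss over (clean, adversarial) sentence pairs; the helper names and the proxy loss used to pick the perturbation direction are assumptions, not the paper's code.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(embeddings, loss, epsilon=1e-2):
    """FGSM-style step on the embedding space: move token embeddings in the
    direction that increases `loss`, producing a harder adversarial view."""
    grad = torch.autograd.grad(loss, embeddings, retain_graph=True)[0]
    return (embeddings + epsilon * grad.sign()).detach()

def info_nce(a, b, tau=0.05):
    """InfoNCE over a batch: a[i] should match b[i] against all other b[j]."""
    sim = F.cosine_similarity(a.unsqueeze(1), b.unsqueeze(0), dim=-1) / tau
    return F.cross_entropy(sim, torch.arange(sim.size(0), device=sim.device))

# Usage sketch: `encode` maps token embeddings to sentence vectors (assumed helper).
# emb = embedding_layer(input_ids).requires_grad_(True)
# clean = encode(emb)                               # clean sentence embeddings
# adv_emb = fgsm_perturb(emb, info_nce(clean, clean.detach()))
# loss = info_nce(clean, encode(adv_emb))           # (clean, adversarial) learning pairs
```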
