Search Results for author: Lihe Yang

Found 9 papers, 8 papers with code

Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data

3 code implementations • 19 Jan 2024 Lihe Yang, Bingyi Kang, Zilong Huang, Xiaogang Xu, Jiashi Feng, Hengshuang Zhao

To this end, we scale up the dataset by designing a data engine to collect and automatically annotate large-scale unlabeled data (~62M), which significantly enlarges the data coverage and thereby reduces the generalization error.

Ranked #3 on Monocular Depth Estimation on NYU-Depth V2 (using extra training data)

Data Augmentation • Monocular Depth Estimation • +1
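The data engine described in this entry amounts to pseudo-labeling at scale: a teacher depth model annotates unlabeled images, and the resulting pairs are added to the training set. Below is a minimal PyTorch sketch of that annotation loop; the `teacher` and `unlabeled_loader` objects are hypothetical placeholders, not the authors' released code.

```python
import torch

@torch.no_grad()
def annotate_unlabeled(teacher, unlabeled_loader, device="cuda"):
    """Frozen teacher annotates unlabeled images with pseudo depth maps."""
    teacher.eval().to(device)
    pseudo_pairs = []
    for images in unlabeled_loader:            # (B, 3, H, W) image batches
        images = images.to(device)
        pseudo_depth = teacher(images)         # (B, 1, H, W) predicted depth
        pseudo_pairs.extend(zip(images.cpu(), pseudo_depth.cpu()))
    return pseudo_pairs                        # pseudo-labeled (image, depth) pairs
```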

Diverse Cotraining Makes Strong Semi-Supervised Segmentor

1 code implementation • ICCV 2023 Yijiang Li, Xinjiang Wang, Lihe Yang, Litong Feng, Wayne Zhang, Ying Gao

Deep co-training has been introduced to semi-supervised segmentation and achieves impressive results, yet few studies have explored the working mechanism behind it.

Augmentation Matters: A Simple-yet-Effective Approach to Semi-supervised Semantic Segmentation

1 code implementation • CVPR 2023 Zhen Zhao, Lihe Yang, Sifan Long, Jimin Pi, Luping Zhou, Jingdong Wang

In contrast, in this work we follow a standard teacher-student framework and propose AugSeg, a simple and clean approach that focuses mainly on data perturbations to boost semi-supervised segmentation (SSS) performance.

Semi-Supervised Semantic Segmentation
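Read plainly, the approach in this entry has the teacher predict pseudo labels on the unperturbed image while the student learns to match them on a perturbed view. The step below is an illustrative sketch of that idea, not AugSeg's exact recipe; `strong_augment` stands in for its intensity-based perturbations.

```python
import torch
import torch.nn.functional as F

def unsupervised_step(teacher, student, image, strong_augment):
    """One consistency step: teacher labels the clean view, student fits the perturbed one."""
    with torch.no_grad():
        pseudo = teacher(image).argmax(dim=1)      # (B, H, W) hard pseudo labels
    perturbed = strong_augment(image)              # intensity-based data perturbation
    logits = student(perturbed)                    # (B, C, H, W) student predictions
    return F.cross_entropy(logits, pseudo)         # consistency loss on unlabeled data
```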

Revisiting Weak-to-Strong Consistency in Semi-Supervised Semantic Segmentation

1 code implementation • CVPR 2023 Lihe Yang, Lei Qi, Litong Feng, Wayne Zhang, Yinghuan Shi

In this work, we revisit the weak-to-strong consistency framework, popularized by FixMatch from semi-supervised classification, where the prediction of a weakly perturbed image serves as supervision for its strongly perturbed version.

Semi-supervised Change Detection • Semi-supervised Medical Image Segmentation • +1
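To make the weak-to-strong consistency in this entry concrete, here is a minimal FixMatch-style loss for dense prediction: the prediction on a weakly perturbed view supervises the strongly perturbed view, masked by confidence. The threshold value and the `weak_aug`/`strong_aug` callables are illustrative assumptions, not the paper's exact settings.

```python
import torch
import torch.nn.functional as F

def weak_to_strong_loss(model, image, weak_aug, strong_aug, threshold=0.95):
    """Weak-view prediction supervises the strong view, keeping confident pixels only."""
    with torch.no_grad():
        probs = model(weak_aug(image)).softmax(dim=1)   # (B, C, H, W) weak-view probabilities
        conf, pseudo = probs.max(dim=1)                 # per-pixel confidence and pseudo label
    logits = model(strong_aug(image))                   # predictions on the strong view
    loss = F.cross_entropy(logits, pseudo, reduction="none")
    mask = (conf >= threshold).float()                  # discard low-confidence pixels
    return (loss * mask).mean()
```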

ST++: Make Self-training Work Better for Semi-supervised Semantic Segmentation

1 code implementation • CVPR 2022 Lihe Yang, Wei Zhuo, Lei Qi, Yinghuan Shi, Yang Gao

In this work, we first construct a strong self-training baseline (namely ST) for semi-supervised semantic segmentation by injecting strong data augmentations (SDA) into unlabeled images, which alleviates overfitting to noisy labels and decouples similar predictions between the teacher and student.

Semi-Supervised Semantic Segmentation
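The plain self-training (ST) baseline in this entry is an offline loop: train a teacher on the labeled set, pseudo-label the unlabeled images, then retrain a student with strong data augmentations applied to those unlabeled inputs. The sketch below assumes hypothetical `train_supervised` and `strong_augment` helpers and restricts SDA to label-preserving (e.g. photometric) perturbations so the pseudo masks stay aligned with the augmented images.

```python
import torch

def self_training_round(teacher, student, labeled_set, unlabeled_images,
                        train_supervised, strong_augment):
    """One ST round: teacher -> pseudo labels -> student retrained with SDA."""
    train_supervised(teacher, labeled_set)                    # 1. fit teacher on labeled data
    with torch.no_grad():
        pseudo = [teacher(x.unsqueeze(0)).argmax(dim=1)[0]    # 2. pseudo-label each unlabeled image
                  for x in unlabeled_images]
    sda_pairs = [(strong_augment(x), y)                       # 3. strong (photometric) augmentation
                 for x, y in zip(unlabeled_images, pseudo)]
    train_supervised(student, list(labeled_set) + sda_pairs)  # 4. retrain student on both sets
    return student
```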
