Search Results for author: Cheng-Kun Yang

Found 4 papers, 2 papers with code

PartDistill: 3D Shape Part Segmentation by Vision-Language Model Distillation

1 code implementation • 7 Dec 2023 • Ardian Umam, Cheng-Kun Yang, Min-Hung Chen, Jen-Hui Chuang, Yen-Yu Lin

This paper proposes a cross-modal distillation framework, PartDistill, which transfers 2D knowledge from vision-language models (VLMs) to facilitate 3D shape part segmentation.

Tasks: 3D Part Segmentation, Language Modelling (+1 more)
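The listing gives only a one-sentence summary of PartDistill. As a rough illustration of the general idea of cross-modal distillation (not the paper's actual method or API), the sketch below trains per-point 3D "student" predictions to match per-point part probabilities assumed to be back-projected from a 2D vision-language "teacher"; all names and shapes are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def distillation_loss(student_logits, teacher_probs, eps=1e-9):
    """KL(teacher || student), averaged over points.

    student_logits: (N, C) raw part scores from a 3D network (hypothetical).
    teacher_probs:  (N, C) part probabilities assumed to come from 2D VLM
                    predictions projected onto the points (hypothetical).
    """
    student_probs = softmax(student_logits)
    kl = (teacher_probs * (np.log(teacher_probs + eps)
                           - np.log(student_probs + eps))).sum(axis=-1)
    return kl.mean()

# toy example: 4 points, 3 part classes
rng = np.random.default_rng(0)
teacher = softmax(rng.normal(size=(4, 3)))
loss_match = distillation_loss(np.log(teacher), teacher)  # student agrees with teacher
loss_rand = distillation_loss(rng.normal(size=(4, 3)), teacher)
```

A student that reproduces the teacher's distribution drives the KL term to zero, which is the sense in which 2D knowledge is "transferred" to 3D.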

2D-3D Interlaced Transformer for Point Cloud Segmentation with Scene-Level Supervision

no code implementations • ICCV 2023 • Cheng-Kun Yang, Min-Hung Chen, Yung-Yu Chuang, Yen-Yu Lin

Considering the high annotation cost of point clouds, effective 2D and 3D feature fusion based on weakly supervised learning is in great demand.

Tasks: Point Cloud Segmentation, Segmentation (+1 more)
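"Scene-level supervision" means training sees only which classes appear in a scene, not per-point labels. A generic way to exploit such tags (a minimal sketch, not this paper's method) is to max-pool per-point class scores into one scene score per class and supervise those with a multi-hot scene label; the variable names below are illustrative.

```python
import numpy as np

def scene_level_loss(point_logits, scene_labels):
    """Binary cross-entropy between pooled scene scores and scene tags.

    point_logits: (N, C) per-point class scores (hypothetical network output).
    scene_labels: (C,) multi-hot vector of classes present in the scene.
    Max-pooling over points yields one scene score per class, so only
    scene-level tags are needed for training.
    """
    scene_logits = point_logits.max(axis=0)          # (C,)
    probs = 1.0 / (1.0 + np.exp(-scene_logits))      # sigmoid
    bce = -(scene_labels * np.log(probs + 1e-9)
            + (1 - scene_labels) * np.log(1 - probs + 1e-9))
    return bce.mean()

# scene containing classes 0 and 2, but not class 1
labels = np.array([1.0, 0.0, 1.0])
good = np.array([[5.0, -5.0, -5.0],
                 [-5.0, -5.0, 5.0]])  # some point strongly predicts each present class
bad = np.array([[-5.0, 5.0, -5.0],
                [-5.0, 5.0, -5.0]])   # predicts only the absent class
```

Predictions consistent with the scene tags incur a much lower loss than predictions of an absent class, which is what makes scene tags a usable (if weak) training signal.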

An MIL-Derived Transformer for Weakly Supervised Point Cloud Segmentation

no code implementations • CVPR 2022 • Cheng-Kun Yang, Ji-Jia Wu, Kai-Syun Chen, Yung-Yu Chuang, Yen-Yu Lin

We address weakly supervised point cloud segmentation by proposing a new model, MIL-derived transformer, to mine additional supervisory signals.

Tasks: Model Optimization, Multiple Instance Learning (+1 more)
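In multiple instance learning (MIL), a point cloud can be treated as a bag of instances (points) with only a bag-level label. One standard MIL building block, shown here purely as background (not the paper's MIL-derived transformer), is attention pooling: per-instance attention scores both aggregate the bag and act as point-level signals. The weights `w` are a hypothetical learned parameter.

```python
import numpy as np

def attention_mil_pool(features, w):
    """Attention pooling over a bag of instances (here: points).

    features: (N, D) per-point features; w: (D,) attention parameters
    (hypothetical, normally learned). Returns a bag embedding plus
    per-point attention weights, which can serve as extra per-point
    supervisory signals.
    """
    scores = features @ w                        # (N,)
    a = np.exp(scores - scores.max())
    a = a / a.sum()                              # softmax attention
    bag = (a[:, None] * features).sum(axis=0)    # (D,)
    return bag, a

rng = np.random.default_rng(1)
feats = rng.normal(size=(6, 4))
w = rng.normal(size=4)
bag, attn = attention_mil_pool(feats, w)
```

The attention weights form a distribution over points, so highly weighted points can be read off as the instances most responsible for the bag-level prediction.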

Unsupervised Point Cloud Object Co-Segmentation by Co-Contrastive Learning and Mutual Attention Sampling

1 code implementation • ICCV 2021 • Cheng-Kun Yang, Yung-Yu Chuang, Yen-Yu Lin

We formulate this task as an object point sampling problem, and develop two techniques, the mutual attention module and co-contrastive learning, to enable it.

Tasks: Contrastive Learning, Object
