Search Results for author: Yunhan Yang

Found 9 papers, 6 papers with code

HoloPart: Generative 3D Part Amodal Segmentation

no code implementations • 10 Apr 2025 • Yunhan Yang, Yuan-Chen Guo, Yukun Huang, Zi-Xin Zou, Zhipeng Yu, Yangguang Li, Yan-Pei Cao, Xihui Liu

3D part amodal segmentation--decomposing a 3D shape into complete, semantically meaningful parts, even when occluded--is a challenging but crucial task for 3D content creation and understanding.

3D geometry • 3D Part Segmentation • +1

Beyond Outcomes: Transparent Assessment of LLM Reasoning in Games

1 code implementation • 18 Dec 2024 • Wenye Lin, Jonathan Roberts, Yunhan Yang, Samuel Albanie, Zongqing Lu, Kai Han

Furthermore, we develop a suite of rule-based algorithms to generate ground truth for these subproblems, enabling rigorous validation of the LLMs' intermediate reasoning steps.
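
As an illustration only, here is a minimal sketch of how such rule-based validation can work, not the paper's actual suite: a checker for one hypothetical subproblem ("does the player have an immediate winning move in tic-tac-toe?") whose output is compared against an LLM's stated intermediate answer. The choice of game and all function names are assumptions.

```python
# Sketch (not the paper's code): a rule-based checker for one hypothetical
# intermediate subproblem -- "does `player` have an immediate winning move?" --
# used as ground truth to validate an LLM's stated answer.
from itertools import product

# All winning lines on a 3x3 board: rows, columns, and both diagonals.
LINES = [[(r, c) for c in range(3)] for r in range(3)] + \
        [[(r, c) for r in range(3)] for c in range(3)] + \
        [[(i, i) for i in range(3)], [(i, 2 - i) for i in range(3)]]

def winning_moves(board, player):
    """Return every empty cell that completes a line for `player`.

    `board` is a 3x3 list of lists containing 'X', 'O', or None.
    """
    moves = set()
    for r, c in product(range(3), range(3)):
        if board[r][c] is not None:
            continue
        for line in LINES:
            if (r, c) in line and all(
                board[rr][cc] == player for rr, cc in line if (rr, cc) != (r, c)
            ):
                moves.add((r, c))
    return moves

def check_llm_step(board, player, llm_claimed_move):
    """Rule-based validation of the LLM's intermediate claim."""
    truth = winning_moves(board, player)
    return llm_claimed_move in truth, sorted(truth)

board = [['X', 'X', None],
         ['O', 'O', None],
         [None, None, None]]
print(check_llm_step(board, 'X', (0, 2)))   # (True, [(0, 2)])
```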

SAMPart3D: Segment Any Part in 3D Objects

2 code implementations • 11 Nov 2024 • Yunhan Yang, Yukun Huang, Yuan-Chen Guo, Liangjun Lu, Xiaoyang Wu, Edmund Y. Lam, Yan-Pei Cao, Xihui Liu

For flexibility, we distill scale-conditioned part-aware 3D features for 3D part segmentation at multiple granularities.

3D Generation • 3D Part Segmentation • +3
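
To illustrate the multi-granularity idea only, the following is a hedged sketch: per-point features are extracted at a chosen scale and clustered into parts, with coarser scales yielding fewer, larger parts. The feature extractor below is a random stand-in, not the distilled scale-conditioned backbone from SAMPart3D, and all names are assumptions.

```python
# Sketch only: multi-granularity part segmentation by clustering
# scale-conditioned per-point features. The "features" here are dummies.
import numpy as np
from sklearn.cluster import KMeans

def extract_scale_conditioned_features(points, scale, dim=32, seed=0):
    """Stand-in for a distilled, scale-conditioned 3D backbone.

    Real features would depend on geometry and `scale`; here we generate
    deterministic dummies so the sketch runs end to end.
    """
    rng = np.random.default_rng(seed + int(scale * 100))
    return rng.normal(size=(points.shape[0], dim))

def segment_parts(points, scale, n_clusters):
    """Cluster per-point features into part labels at one granularity."""
    feats = extract_scale_conditioned_features(points, scale)
    return KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(feats)

points = np.random.rand(2048, 3)                            # dummy point cloud
coarse = segment_parts(points, scale=1.0, n_clusters=4)     # few large parts
fine = segment_parts(points, scale=0.2, n_clusters=16)      # many small parts
print(coarse.shape, int(fine.max()) + 1)
```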

SAM3D: Segment Anything in 3D Scenes

1 code implementation • 6 Jun 2023 • Yunhan Yang, Xiaoyang Wu, Tong He, Hengshuang Zhao, Xihui Liu

In this work, we propose SAM3D, a novel framework that predicts masks in 3D point clouds by leveraging the Segment-Anything Model (SAM) on RGB images, without further training or finetuning.

Segmentation
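
A minimal sketch of the general mask-lifting idea (not the released SAM3D pipeline): project 3D points into an RGB frame with known intrinsics and extrinsics, then copy the SAM mask ID from the pixel each point lands on. It assumes a simple pinhole camera and skips occlusion handling and cross-frame mask merging, which a full pipeline would need to address; the function and variable names are assumptions.

```python
# Sketch: assign 2D segmentation mask IDs to 3D points by projecting the
# points into one camera frame (pinhole model, no occlusion reasoning).
import numpy as np

def lift_masks_to_points(points_world, mask_2d, K, T_world_to_cam):
    """Return a per-point mask ID (-1 if the point projects off-image).

    points_world: (N, 3) point cloud in world coordinates.
    mask_2d: (H, W) integer array of per-pixel mask IDs for one RGB frame.
    K: (3, 3) camera intrinsics; T_world_to_cam: (4, 4) extrinsics.
    """
    N = points_world.shape[0]
    homog = np.hstack([points_world, np.ones((N, 1))])
    cam = (T_world_to_cam @ homog.T).T[:, :3]          # points in camera frame
    valid = cam[:, 2] > 1e-6                           # keep points in front
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                        # perspective divide
    u, v = uv[:, 0].round().astype(int), uv[:, 1].round().astype(int)
    H, W = mask_2d.shape
    inside = valid & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    labels = np.full(N, -1, dtype=int)
    labels[inside] = mask_2d[v[inside], u[inside]]     # mask is indexed [row, col]
    return labels

K = np.array([[500., 0., 320.], [0., 500., 240.], [0., 0., 1.]])
labels = lift_masks_to_points(np.random.rand(1000, 3) + [0., 0., 2.],
                              np.zeros((480, 640), dtype=int), K, np.eye(4))
print((labels >= 0).sum(), "points received a mask ID")
```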

Data-Driven Network Neuroscience: On Data Collection and Benchmark

1 code implementation • NeurIPS 2023 • Jiaxing Xu, Yunhan Yang, David Tse Jung Huang, Sophi Shilpa Gururajapathy, Yiping Ke, Miao Qiao, Alan Wang, Haribalan Kumar, Josh McGeown, Eryn Kwon

This paper presents a comprehensive and quality collection of functional human brain network data for potential research in the intersection of neuroscience, machine learning, and graph analytics.

Functional Connectivity
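
For readers who want to try graph analytics on data of this kind, here is a small sketch assuming the common convention of thresholding a functional connectivity (correlation) matrix into a weighted graph; this is not necessarily the benchmark's own preprocessing, and the matrix below is synthetic.

```python
# Sketch: build a brain graph from a functional connectivity matrix by
# thresholding edge weights -- one common convention, not necessarily the
# preprocessing used in the benchmark itself.
import numpy as np
import networkx as nx

def connectivity_to_graph(conn, threshold=0.3):
    """Turn an (R, R) correlation matrix into a weighted undirected graph."""
    R = conn.shape[0]
    G = nx.Graph()
    G.add_nodes_from(range(R))
    # Keep each pair once (upper triangle) whose |correlation| clears the threshold.
    rows, cols = np.where(np.triu(np.abs(conn), k=1) >= threshold)
    G.add_weighted_edges_from((int(i), int(j), float(conn[i, j]))
                              for i, j in zip(rows, cols))
    return G

conn = np.corrcoef(np.random.rand(90, 200))   # 90 ROIs x 200 time points (dummy)
G = connectivity_to_graph(conn)
print(G.number_of_nodes(), "nodes,", G.number_of_edges(), "edges")
```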

CLIP2Point: Transfer CLIP to Point Cloud Classification with Image-Depth Pre-training

1 code implementation • ICCV 2023 • Tianyu Huang, Bowen Dong, Yunhan Yang, Xiaoshui Huang, Rynson W. H. Lau, Wanli Ouyang, WangMeng Zuo

To address this issue, we propose CLIP2Point, an image-depth pre-training method based on contrastive learning that transfers CLIP to the 3D domain and adapts it to point cloud classification.

Contrastive Learning • Few-Shot Learning • +5
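
As a rough illustration of the contrastive ingredient, the following is a sketch of a symmetric InfoNCE-style loss between paired image and depth embeddings. The random tensors stand in for CLIP image features and depth-encoder features; nothing here reproduces the paper's actual architecture or training recipe.

```python
# Sketch of an InfoNCE-style image-depth alignment loss, the kind of
# contrastive objective image-depth pre-training builds on. The encoders
# are tiny placeholders, not CLIP or a real depth encoder.
import torch
import torch.nn.functional as F

def contrastive_loss(img_emb, depth_emb, temperature=0.07):
    """Symmetric InfoNCE between paired image and depth embeddings."""
    img_emb = F.normalize(img_emb, dim=-1)
    depth_emb = F.normalize(depth_emb, dim=-1)
    logits = img_emb @ depth_emb.t() / temperature     # (B, B) similarities
    targets = torch.arange(img_emb.size(0))            # matched pairs on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Placeholder "encoder outputs": in practice these would come from a frozen
# CLIP image encoder and a trainable depth-map encoder with matching dims.
img_emb = torch.randn(8, 512)
depth_emb = torch.randn(8, 512)
print(contrastive_loss(img_emb, depth_emb).item())
```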
