Search Results for author: Xiaohan Mao

Found 5 papers, 4 papers with code

MMScan: A Multi-Modal 3D Scene Dataset with Hierarchical Grounded Language Annotations

1 code implementation • 13 Jun 2024 • Ruiyuan Lyu, Tai Wang, Jingli Lin, Shuai Yang, Xiaohan Mao, Yilun Chen, Runsen Xu, Haifeng Huang, Chenming Zhu, Dahua Lin, Jiangmiao Pang

With the emergence of LLMs and their integration with other data modalities, multi-modal 3D perception has attracted more attention due to its connection to the physical world, and is making rapid progress.

3D Visual Grounding • Attribute • +1

Beyond Object Recognition: A New Benchmark towards Object Concept Learning

no code implementations • ICCV 2023 • Yong-Lu Li, Yue Xu, Xinyu Xu, Xiaohan Mao, Yuan Yao, Siqi Liu, Cewu Lu

To support OCL, we build a densely annotated knowledge base including extensive labels for three levels of object concept (category, attribute, affordance) and the causal relations among the three levels.

Attribute • Object • +1

Learning Single/Multi-Attribute of Object with Symmetry and Group

1 code implementation • 9 Oct 2021 • Yong-Lu Li, Yue Xu, Xinyu Xu, Xiaohan Mao, Cewu Lu

To model the compositional nature of these concepts, it is a good choice to learn them as transformations, e.g., coupling and decoupling.

Attribute • Compositional Zero-Shot Learning

Symmetry and Group in Attribute-Object Compositions

1 code implementation • CVPR 2020 • Yong-Lu Li, Yue Xu, Xiaohan Mao, Cewu Lu

To model the compositional nature of these general concepts, it is a good choice to learn them through transformations, such as coupling and decoupling.

Ranked #1 on Compositional Zero-Shot Learning on MIT-States (Top-1 accuracy % metric)

Attribute • Compositional Zero-Shot Learning • +1
