1 code implementation • 13 Jun 2024 • Ruiyuan Lyu, Tai Wang, Jingli Lin, Shuai Yang, Xiaohan Mao, Yilun Chen, Runsen Xu, Haifeng Huang, Chenming Zhu, Dahua Lin, Jiangmiao Pang
With the emergence of LLMs and their integration with other data modalities, multi-modal 3D perception has attracted increasing attention due to its connection to the physical world, and is making rapid progress.
1 code implementation • CVPR 2024 • Tai Wang, Xiaohan Mao, Chenming Zhu, Runsen Xu, Ruiyuan Lyu, Peisen Li, Xiao Chen, Wenwei Zhang, Kai Chen, Tianfan Xue, Xihui Liu, Cewu Lu, Dahua Lin, Jiangmiao Pang
In the realm of computer vision and robotics, embodied agents are expected to explore their environment and carry out human instructions.
no code implementations • ICCV 2023 • Yong-Lu Li, Yue Xu, Xinyu Xu, Xiaohan Mao, Yuan YAO, SiQi Liu, Cewu Lu
To support OCL, we build a densely annotated knowledge base including extensive labels for three levels of object concept (category, attribute, affordance), and the causal relations of three levels.
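The entry above describes a knowledge base organized around three levels of object concept (category, attribute, affordance) plus causal relations between them. Below is a minimal Python sketch of how such a record could be structured; the field names and example values are illustrative assumptions, not the dataset's actual schema.

```python
# Illustrative sketch of a three-level object-concept record (category, attribute,
# affordance) with cross-level causal links. Names are assumptions, not the OCL schema.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class ObjectConceptRecord:
    category: str                                            # level 1: object category
    attributes: List[str] = field(default_factory=list)      # level 2: e.g. "metal", "sharp"
    affordances: List[str] = field(default_factory=list)     # level 3: e.g. "cut", "hold"
    # causal relations across levels, e.g. ("sharp", "cut"): the attribute enables the affordance
    causal_relations: List[Tuple[str, str]] = field(default_factory=list)

record = ObjectConceptRecord(
    category="knife",
    attributes=["metal", "sharp"],
    affordances=["cut", "hold"],
    causal_relations=[("sharp", "cut"), ("metal", "hold")],
)
print(record.category, record.causal_relations)
```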
1 code implementation • 9 Oct 2021 • Yong-Lu Li, Yue Xu, Xinyu Xu, Xiaohan Mao, Cewu Lu
To model the compositional nature of these concepts, a natural choice is to learn them as transformations, e.g., coupling and decoupling.
1 code implementation • CVPR 2020 • Yong-Lu Li, Yue Xu, Xiaohan Mao, Cewu Lu
To model the compositional nature of these general concepts, a natural choice is to learn them through transformations, such as coupling and decoupling.
Ranked #1 on Compositional Zero-Shot Learning on MIT-States (Top-1 accuracy % metric)
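The two entries above both frame attribute concepts as transformations that can be coupled to and decoupled from object representations. The following PyTorch sketch illustrates that general idea under stated assumptions; the layer shapes, module names, and cycle-consistency loss are illustrative choices, not the architecture or objective used in the papers.

```python
# Minimal sketch: a "coupling" network attaches an attribute to an object embedding,
# and a "decoupling" network removes it again. All sizes/names are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttributeTransform(nn.Module):
    def __init__(self, obj_dim: int, attr_dim: int, hidden: int = 256):
        super().__init__()
        # coupling: (object embedding, attribute embedding) -> attributed object embedding
        self.couple = nn.Sequential(
            nn.Linear(obj_dim + attr_dim, hidden), nn.ReLU(), nn.Linear(hidden, obj_dim)
        )
        # decoupling: strip the attribute, ideally recovering the original object embedding
        self.decouple = nn.Sequential(
            nn.Linear(obj_dim + attr_dim, hidden), nn.ReLU(), nn.Linear(hidden, obj_dim)
        )

    def forward(self, obj_emb: torch.Tensor, attr_emb: torch.Tensor):
        coupled = self.couple(torch.cat([obj_emb, attr_emb], dim=-1))
        recovered = self.decouple(torch.cat([coupled, attr_emb], dim=-1))
        return coupled, recovered

# One plausible training signal is cycle consistency: decoupling after coupling
# should return the original object embedding.
model = AttributeTransform(obj_dim=512, attr_dim=128)
obj, attr = torch.randn(4, 512), torch.randn(4, 128)
coupled, recovered = model(obj, attr)
cycle_loss = F.mse_loss(recovered, obj)
```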