1 code implementation • 27 Mar 2025 • Chung-En Sun, Ge Yan, Tsui-Wei Weng
Recent studies have shown that Large Language Models (LLMs) augmented with chain-of-thought (CoT) reasoning demonstrate impressive problem-solving abilities.
1 code implementation • 25 Mar 2025 • Akshay Kulkarni, Ge Yan, Chung-En Sun, Tuomas Oikarinen, Tsui-Wei Weng
Concept bottleneck models (CBMs) aim to produce inherently interpretable models that rely on human-understandable concepts for their predictions.
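The core structure is simple to sketch. Below is a minimal, illustrative concept-bottleneck model (not this paper's specific architecture): inputs are mapped to scores over named concepts, and the final prediction is a linear function of those scores, so each decision can be traced back to concepts. The concept names, dimensions, and layers are assumed for illustration.

```python
# Minimal concept-bottleneck sketch (illustrative only, not the paper's model).
# A backbone maps inputs to scores over human-named concepts; a linear head
# maps concept scores to class logits, so predictions can be read off concepts.
import torch
import torch.nn as nn

concepts = ["striped", "has wings", "metallic"]   # hypothetical concept names

class TinyCBM(nn.Module):
    def __init__(self, in_dim, num_concepts, num_classes):
        super().__init__()
        self.concept_layer = nn.Linear(in_dim, num_concepts)  # concept bottleneck
        self.classifier = nn.Linear(num_concepts, num_classes)

    def forward(self, x):
        c = torch.sigmoid(self.concept_layer(x))  # interpretable concept scores in [0, 1]
        return self.classifier(c), c

model = TinyCBM(in_dim=16, num_concepts=len(concepts), num_classes=4)
logits, concept_scores = model(torch.randn(2, 16))
print(concept_scores.shape, logits.shape)  # torch.Size([2, 3]) torch.Size([2, 4])
```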
no code implementations • 17 Mar 2025 • Ri-Zhao Qiu, Shiqi Yang, Xuxin Cheng, Chaitanya Chawla, Jialong Li, Tairan He, Ge Yan, David J. Yoon, Ryan Hoque, Lars Paulsen, Ge Yang, Jian Zhang, Sha Yi, Guanya Shi, Xiaolong Wang
The state-action space of HAT is unified for both humans and humanoid robots and can be differentiably retargeted to robot actions.
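As a loose illustration of differentiable retargeting (not HAT's actual mapping), the sketch below passes actions expressed in a unified, human-like space through a differentiable module that outputs robot joint targets, so a loss on robot actions back-propagates into the shared representation; the linear map and all shapes are assumptions.

```python
# Hand-wavy sketch of "differentiable retargeting": actions in a unified
# (human-like) space are mapped to robot joint targets by a differentiable
# module, so a policy loss on robot actions back-propagates into the shared
# representation.  The mapping and shapes are assumptions, not HAT's.
import torch
import torch.nn as nn

unified_dim, robot_dof = 48, 23
retarget = nn.Linear(unified_dim, robot_dof)          # differentiable retargeting map

unified_action = torch.randn(8, unified_dim, requires_grad=True)
robot_action = retarget(unified_action)               # robot joint targets
demo_robot_action = torch.randn(8, robot_dof)         # hypothetical demonstration targets

loss = ((robot_action - demo_robot_action) ** 2).mean()
loss.backward()                                       # gradients reach the unified space
print(unified_action.grad.shape)                      # torch.Size([8, 48])
```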
no code implementations • 28 Feb 2025 • Ge Yan, Lipeng Zhu, Rui Zhang
To address this issue, in this paper we propose a new approach that optimizes the MA positions based on the users' statistical CSI over a large timescale.
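A toy Monte-Carlo sketch of the general idea (not the paper's formulation): treat per-user path angles and average path powers as the known statistical CSI, score candidate antenna placements by the average zero-forcing sum rate over sampled channel realizations, and keep the best placement. The channel model, precoder, and parameters below are simplified assumptions.

```python
# Toy Monte-Carlo sketch of choosing movable-antenna (MA) positions from
# statistical CSI: path angles/powers per user are treated as known statistics,
# instantaneous path gains are random, and candidate placements are scored by
# average zero-forcing sum rate.  Everything here is a simplified assumption,
# not the paper's method.
import numpy as np

rng = np.random.default_rng(0)
wavelength = 1.0
num_antennas, num_users, num_paths, num_samples = 4, 2, 3, 200
# Statistical CSI: per-user path angles (rad) and average path powers.
angles = rng.uniform(-np.pi / 3, np.pi / 3, size=(num_users, num_paths))
powers = rng.dirichlet(np.ones(num_paths), size=num_users)

def steering(positions, theta):
    return np.exp(2j * np.pi * positions * np.sin(theta) / wavelength)

def avg_sum_rate(positions, snr=10.0):
    rates = []
    for _ in range(num_samples):
        H = np.zeros((num_users, num_antennas), dtype=complex)
        for k in range(num_users):
            gains = (rng.normal(size=num_paths) + 1j * rng.normal(size=num_paths)) \
                    * np.sqrt(powers[k] / 2)
            H[k] = sum(gains[l] * steering(positions, angles[k, l])
                       for l in range(num_paths))
        W = np.linalg.pinv(H)                           # zero-forcing precoder
        W /= np.linalg.norm(W, axis=0, keepdims=True)
        eff = np.abs(np.diag(H @ W)) ** 2
        rates.append(np.sum(np.log2(1 + snr * eff)))
    return np.mean(rates)

# Compare a few candidate placements within a 4-wavelength aperture.
candidates = [np.linspace(0, 1.5, num_antennas),
              np.linspace(0, 4.0, num_antennas),
              np.sort(rng.uniform(0, 4.0, num_antennas))]
best = max(candidates, key=avg_sum_rate)
print("best candidate positions:", np.round(best, 2))
```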
no code implementations • 30 Jan 2025 • Yuelei Li, Ge Yan, Annabella Macaluso, Mazeyu Ji, Xueyan Zou, Xiaolong Wang
To align high-level and low-level control for robot actions, language embeddings representing the high-level policy are jointly attended with the 3D feature field in the 3D transformer, enabling seamless integration.
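A rough sketch of jointly attending language tokens with features sampled from a 3D feature field in a single transformer; the token counts, feature dimension, and encoder configuration below are assumptions rather than the paper's architecture.

```python
# Illustrative sketch of attending language embeddings together with 3D features
# in one transformer; shapes and layer sizes are assumptions, not the paper's.
import torch
import torch.nn as nn

d_model = 256
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=d_model, nhead=8, batch_first=True),
    num_layers=2,
)

lang_tokens = torch.randn(1, 12, d_model)        # e.g. projected text embeddings
point_features = torch.randn(1, 1024, d_model)   # features sampled from a 3D feature field

# Concatenate the two token streams so self-attention mixes language and 3D context.
tokens = torch.cat([lang_tokens, point_features], dim=1)
fused = encoder(tokens)
print(fused.shape)  # torch.Size([1, 1036, 256])
```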
no code implementations • 20 Jul 2024 • Ge Yan, Lipeng Zhu, Rui Zhang
Prior works on IRS CSI acquisition mainly estimate IRS-cascaded channels based on the extra pilot signals received at the users/base station (BS) with time-varying IRS reflections, which, however, requires modifying the existing channel training/estimation protocols of wireless systems.
1 code implementation • 18 Jul 2024 • Divyansh Srivastava, Ge Yan, Tsui-Wei Weng
Concept Bottleneck Models (CBMs) provide interpretable predictions by introducing an intermediate Concept Bottleneck Layer (CBL), which encodes human-understandable concepts to explain the model's decisions.
1 code implementation • 30 Apr 2024 • Ge Yan, Yaniv Romano, Tsui-Wei Weng
To address these limitations, we first propose a novel framework called RSCP+ that provides a provable robustness guarantee in evaluation and fixes the issues in the original RSCP method.
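For context, the sketch below shows plain split conformal prediction, the kind of finite-sample coverage guarantee that RSCP-style methods build on; it is not the RSCP+ procedure itself, and the calibration data here are synthetic.

```python
# Minimal split-conformal sketch of the kind of coverage guarantee RSCP-style
# methods build on (plain split conformal prediction, not RSCP+ itself).
# Calibrate a score threshold on held-out data so that prediction sets contain
# the true label with probability >= 1 - alpha.
import numpy as np

rng = np.random.default_rng(0)
num_cal, num_classes, alpha = 500, 10, 0.1

# Synthetic calibration data: softmax scores and true labels.
probs = rng.dirichlet(np.ones(num_classes), size=num_cal)
labels = rng.integers(num_classes, size=num_cal)

# Nonconformity score: one minus the softmax probability of the true class.
scores = 1.0 - probs[np.arange(num_cal), labels]
q_level = np.ceil((num_cal + 1) * (1 - alpha)) / num_cal
qhat = np.quantile(scores, q_level, method="higher")

def prediction_set(test_probs):
    """All classes whose score does not exceed the calibrated threshold."""
    return np.where(1.0 - test_probs <= qhat)[0]

print(prediction_set(rng.dirichlet(np.ones(num_classes))))
```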
no code implementations • 7 Mar 2024 • Ge Yan, Yueh-Hua Wu, Xiaolong Wang
To learn a generalizable multi-task policy with few demonstrations, the pre-training phase of DNAct leverages neural rendering to distill 2D semantic features from foundation models such as Stable Diffusion into a 3D space, providing a comprehensive semantic understanding of the scene.
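One way to picture the distillation step (an illustrative sketch, not DNAct itself): project 3D points into an image, sample a 2D foundation-model feature map at those locations, and regress learnable 3D features toward the sampled targets. The projection, shapes, and loss below are assumptions.

```python
# Rough sketch of distilling 2D foundation-model features into a 3D feature
# volume: project 3D points into the image, sample the 2D feature map there,
# and regress the 3D features toward the sampled targets.  Shapes, the
# projection, and the loss are illustrative assumptions, not DNAct itself.
import torch
import torch.nn.functional as F

feat_dim = 64
feat_2d = torch.randn(1, feat_dim, 32, 32)           # e.g. features from a 2D foundation model
points_3d = torch.rand(1, 500, 3)                    # 3D points in the scene
feats_3d = torch.randn(1, 500, feat_dim, requires_grad=True)  # learnable 3D features

def project(points):
    # Hypothetical pinhole projection to normalized image coords in [-1, 1].
    xy = points[..., :2] / (points[..., 2:3] + 1.0)
    return xy.clamp(-1, 1)

uv = project(points_3d).unsqueeze(2)                 # (1, 500, 1, 2) for grid_sample
target = F.grid_sample(feat_2d, uv, align_corners=True)   # (1, C, 500, 1)
target = target.squeeze(-1).transpose(1, 2)          # (1, 500, C)

distill_loss = F.mse_loss(feats_3d, target.detach())
distill_loss.backward()
print(float(distill_loss))
```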
no code implementations • 17 Oct 2023 • Ge Yan, Lipeng Zhu, Rui Zhang
An intelligent reflecting surface (IRS) can significantly enhance the performance of wireless communication systems by reconfiguring wireless channels via passive signal reflection.
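The underlying channel model is standard and easy to sketch numerically: the effective channel is the direct link plus the cascaded BS-IRS-user link weighted by the passive reflection coefficients, and co-phasing the reflected terms with the direct link maximizes received power. The toy values below are for illustration only.

```python
# Toy sketch of the standard IRS-aided channel model: the effective channel is
# the direct link plus the cascaded BS-IRS-user link weighted by the passive
# reflection phases.  Aligning each reflected path with the direct link
# (co-phasing) maximizes the received signal power.  Toy values throughout.
import numpy as np

rng = np.random.default_rng(1)
N = 32                                                        # number of IRS elements
h_d = (rng.normal() + 1j * rng.normal()) / np.sqrt(2)         # direct BS-user link
g = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)  # BS-IRS link
r = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)  # IRS-user link

cascaded = g * r                                              # element-wise cascaded channel
theta_rand = np.exp(1j * rng.uniform(0, 2 * np.pi, N))        # random reflection phases
theta_opt = np.exp(1j * (np.angle(h_d) - np.angle(cascaded)))  # co-phasing with direct link

for name, theta in [("random phases", theta_rand), ("co-phased", theta_opt)]:
    h_eff = h_d + np.sum(theta * cascaded)
    print(f"{name}: |h_eff|^2 = {np.abs(h_eff) ** 2:.2f}")
```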
1 code implementation • 31 Aug 2023 • Yanjie Ze, Ge Yan, Yueh-Hua Wu, Annabella Macaluso, Yuying Ge, Jianglong Ye, Nicklas Hansen, Li Erran Li, Xiaolong Wang
To incorporate semantics in 3D, the reconstruction module utilizes a vision-language foundation model (e.g., Stable Diffusion) to distill rich semantic information into the deep 3D voxel.
no code implementations • 5 Mar 2021 • Ge Yan, Sharanjeet Kaur, Jeffery W. Banks, Jason E. Hicken
The results suggest that DGD and SBP solution errors are similar for the same number of degrees of freedom.
Numerical Analysis 65M60, 65M70, 65M12