Search Results for author: Yuanqi Yao

Found 5 papers, 3 papers with code

SpatialVLA: Exploring Spatial Representations for Visual-Language-Action Model

no code implementations • 27 Jan 2025 • Delin Qu, Haoming Song, Qizhi Chen, Yuanqi Yao, Xinyi Ye, Yan Ding, Zhigang Wang, Jiayuan Gu, Bin Zhao, Dong Wang, Xuelong Li

Specifically, we introduce Ego3D Position Encoding to inject 3D information into the input observations of the visual-language-action model, and propose Adaptive Action Grids to represent spatial robot movements with adaptively discretized action grids, facilitating the learning of generalizable and transferable spatial action knowledge for cross-robot control.

Robot Manipulation
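The adaptive discretization idea above can be sketched as follows. This is a minimal illustration assuming the grid boundaries are derived from the empirical distribution of training actions via quantile binning, so that frequently occurring action ranges get finer resolution; the paper's actual Adaptive Action Grids construction may differ, and all function names here are hypothetical.

```python
import numpy as np

def build_adaptive_grid(actions: np.ndarray, n_bins: int) -> np.ndarray:
    """Return n_bins + 1 boundaries placing equal probability mass per bin.

    Dense regions of the action distribution receive narrower bins,
    unlike a uniform grid with fixed-width bins.
    """
    quantiles = np.linspace(0.0, 1.0, n_bins + 1)
    return np.quantile(actions, quantiles)

def discretize(action: float, boundaries: np.ndarray) -> int:
    """Map a continuous action value to its grid index (action token)."""
    # Search only the interior boundaries, so out-of-range actions
    # fall into the first or last bin rather than an invalid index.
    return int(np.searchsorted(boundaries[1:-1], action, side="right"))

# Example: actions concentrated near zero get finer bins around zero.
rng = np.random.default_rng(0)
train_actions = rng.normal(loc=0.0, scale=0.1, size=10_000)
bounds = build_adaptive_grid(train_actions, n_bins=8)
token = discretize(0.05, bounds)  # index of the bin containing 0.05
```

A decoder would invert `discretize` by emitting, e.g., the midpoint of the selected bin, recovering a continuous control command from the predicted token.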

MEOW: MEMOry Supervised LLM Unlearning Via Inverted Facts

1 code implementation • 18 Sep 2024 • Tianle Gu, Kexin Huang, Ruilin Luo, Yuanqi Yao, Yujiu Yang, Yan Teng, Yingchun Wang

LLM Unlearning, a post-hoc approach that removes memorized information from trained LLMs, offers a promising way to mitigate such risks.

Memorization

MLLMGuard: A Multi-dimensional Safety Evaluation Suite for Multimodal Large Language Models

1 code implementation • 11 Jun 2024 • Tianle Gu, Zeyang Zhou, Kexin Huang, Dandan Liang, Yixu Wang, Haiquan Zhao, Yuanqi Yao, Xingge Qiao, Keqing Wang, Yujiu Yang, Yan Teng, Yu Qiao, Yingchun Wang

In this paper, we present MLLMGuard, a multidimensional safety evaluation suite for MLLMs, including a bilingual image-text evaluation dataset, inference utilities, and a lightweight evaluator.

Red Teaming
