SpatialVLA: Exploring Spatial Representations for Visual-Language-Action Model

In this paper, we claim that spatial understanding is the key to robot manipulation and propose SpatialVLA to explore effective spatial representations for robot foundation models. Specifically, we introduce Ego3D Position Encoding to inject 3D information into the input observations of the visual-language-action model, and propose Adaptive Action Grids to represent spatial robot movements with adaptively discretized action grids, facilitating the learning of generalizable and transferable spatial action knowledge for cross-robot control. SpatialVLA is first pre-trained on top of a vision-language model with 1.1 million real-world robot episodes to learn a generalist manipulation policy across multiple robot environments and tasks. After pre-training, SpatialVLA is applied directly to perform numerous tasks in a zero-shot manner. Superior results in both simulation and on real-world robots demonstrate its advantage in inferring complex robot motion trajectories and its strong in-domain multi-task generalization ability. We further show that the proposed Adaptive Action Grids offer a new and effective way to fine-tune the pre-trained SpatialVLA model for new simulation and real-world setups, where the pre-learned action grids are re-discretized to capture the robot-specific spatial action movements of the new setup. The superior results from extensive evaluations demonstrate exceptional in-distribution generalization and out-of-distribution adaptation capability, highlighting the crucial benefit of the proposed spatial-aware representations for generalist robot policy learning. All details and code will be open-sourced.
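To make the first idea concrete, here is a minimal sketch of what Ego3D Position Encoding could look like: back-project a depth map into egocentric 3D coordinates via the camera intrinsics, then turn those coordinates into sinusoidal positional features that can be added to the model's visual tokens. The function name, tensor shapes, and frequency schedule below are illustrative assumptions, not the paper's exact implementation.

```python
import torch

def ego3d_position_encoding(depth, intrinsics, embed_dim=128):
    """Sketch of an Ego3D-style position encoding (hypothetical helper).

    depth:      (H, W) float tensor, depth in meters.
    intrinsics: (3, 3) float tensor, camera matrix K.
    Returns (H*W, 6 * (embed_dim // 6)) positional features, one per patch.
    """
    H, W = depth.shape
    # Build the pixel grid in homogeneous coordinates.
    v, u = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    pix = torch.stack([u, v, torch.ones_like(u)], dim=-1).float().reshape(-1, 3)
    # Back-project: X = depth * K^{-1} [u, v, 1]^T (egocentric camera frame).
    rays = pix @ torch.linalg.inv(intrinsics).T
    xyz = rays * depth.reshape(-1, 1)  # (H*W, 3)
    # Sinusoidal encoding of x, y, z at geometrically spaced frequencies.
    n = embed_dim // 6                      # sin + cos per frequency per axis
    freqs = 2.0 ** torch.arange(n)          # (n,)
    ang = xyz.unsqueeze(-1) * freqs         # (H*W, 3, n)
    pe = torch.cat([ang.sin(), ang.cos()], dim=-1).reshape(H * W, -1)
    return pe  # add to the visual patch embeddings of the same spatial layout
```

Similarly, a hedged sketch of the Adaptive Action Grids idea: instead of uniform bins, bin boundaries are placed at empirical quantiles of the action distribution, so frequently visited regions of the action space get finer resolution, and re-fitting the edges on a new robot's data corresponds to the re-discretization step used for fine-tuning. The quantile-based scheme and function names are assumptions for illustration.

```python
import numpy as np

def fit_adaptive_grid(actions, num_bins=256):
    """Fit per-dimension adaptive bin edges from continuous actions (N, D).

    Hypothetical sketch: boundaries sit at empirical quantiles, so dense
    regions of the action distribution receive finer bins.
    Returns edges of shape (D, num_bins + 1).
    """
    qs = np.linspace(0.0, 1.0, num_bins + 1)
    return np.stack([np.quantile(actions[:, d], qs)
                     for d in range(actions.shape[1])])

def encode(actions, edges):
    """Map continuous actions (N, D) to discrete grid tokens (N, D)."""
    return np.stack([
        np.clip(np.searchsorted(edges[d, 1:-1], actions[:, d]),
                0, edges.shape[1] - 2)
        for d in range(edges.shape[0])
    ], axis=1)

def decode(tokens, edges):
    """Map tokens back to continuous actions via bin centers."""
    centers = 0.5 * (edges[:, :-1] + edges[:, 1:])  # (D, num_bins)
    return np.stack([centers[d, tokens[:, d]]
                     for d in range(edges.shape[0])], axis=1)

# Usage: fit on pre-training actions, then re-fit ("re-discretize")
# on a new robot's demonstrations when adapting to a new setup:
#   edges = fit_adaptive_grid(pretrain_actions)
#   new_edges = fit_adaptive_grid(new_robot_actions)
```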


Results from the Paper


Ranked #2 on Robot Manipulation on SimplerEnv-Widow X (using extra training data)

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
| --- | --- | --- | --- | --- | --- |
| Robot Manipulation | SimplerEnv-Google Robot | SpatialVLA | Visual Matching-Pick Coke Can | 0.810 | #2 |
| | | | Visual Matching-Move Near | 0.696 | #3 |
| | | | Visual Matching | 0.719 | #2 |
| | | | Visual Matching-Open/Close Drawer | 0.593 | #2 |
| | | | Variant Aggregation | 0.688 | #1 |
| | | | Variant Aggregation-Pick Coke Can | 0.895 | #2 |
| | | | Variant Aggregation-Move Near | 0.717 | #3 |
| | | | Variant Aggregation-Open/Close Drawer | 0.362 | #1 |
| Robot Manipulation | SimplerEnv-Widow X | SpatialVLA | Average | 0.344 | #2 |
| | | | Put Spoon on Towel | 0.208 | #3 |
| | | | Put Carrot on Plate | 0.208 | #3 |
| | | | Stack Green Block on Yellow Block | 0.250 | #2 |
