Test unseen
6 papers with code • 0 benchmarks • 0 datasets
Most implemented papers
Language and Visual Entity Relationship Graph for Agent Navigation
From both the textual and visual perspectives, we find that the relationships among the scene, its objects, and directional clues are essential for the agent to interpret complex instructions and correctly perceive the environment.
Deep Virtual Markers for Articulated 3D Shapes
We propose deep virtual markers, a framework for estimating dense and accurate positional information for various types of 3D data.
Learning to Act with Affordance-Aware Multimodal Neural SLAM
With the proposed Affordance-aware Multimodal Neural SLAM (AMSLAM) approach, we obtain more than 40% improvement over prior published work on the ALFRED benchmark and set a new state-of-the-art generalization performance at a success rate of 23.48% on the test unseen scenes.
Reducing Flipping Errors in Deep Neural Networks
Deep neural networks (DNNs) have been widely applied in various domains in artificial intelligence including computer vision and natural language processing.
Improving Generalized Zero-Shot Learning by Exploring the Diverse Semantics from External Class Names
This motivates us to study GZSL in the more practical setting where unseen classes can be either similar or dissimilar to seen classes.
MAGIC: Meta-Ability Guided Interactive Chain-of-Distillation for Effective-and-Efficient Vision-and-Language Navigation
Despite the remarkable developments of recent large models in Embodied Artificial Intelligence (E-AI), their integration into robotics is hampered by their excessive parameter sizes and computational demands.