Spatial Relation Recognition
4 papers with code • 1 benchmark • 4 datasets
Latest papers
Rel3D: A Minimally Contrastive Benchmark for Grounding Spatial Relations in 3D
The 3D scenes in our dataset come in minimally contrastive pairs: two scenes in a pair are almost identical, but a spatial relation holds in one and fails in the other.
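A minimally contrastive pair can be thought of as one relation triple linked to two near-identical scenes, one where the relation holds and one where it fails. The sketch below is purely illustrative; the field names are hypothetical and do not reflect the actual Rel3D data format.

```python
from dataclasses import dataclass

# Hypothetical structure for a minimally contrastive pair; the field
# names are illustrative, NOT the actual Rel3D schema.
@dataclass
class ContrastivePair:
    subject: str         # e.g. "mug"
    relation: str        # e.g. "on top of"
    obj: str             # e.g. "table"
    positive_scene: str  # scene id where the relation holds
    negative_scene: str  # near-identical scene where it fails

pair = ContrastivePair("mug", "on top of", "table",
                       "scene_0001a", "scene_0001b")
print(pair.relation)  # -> on top of
```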
A Linguistic Analysis of Visually Grounded Dialogues Based on Spatial Expressions
Recent models achieve promising results in visually grounded dialogues.
Learning Object Placements For Relational Instructions by Hallucinating Scene Representations
One particular requirement for such robots is that they understand spatial relations and can place objects in accordance with the spatial relations expressed by their users.
SpatialSense: An Adversarially Crowdsourced Benchmark for Spatial Relation Recognition
Understanding the spatial relations between objects in images is a surprisingly challenging task.
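Part of what makes the task challenging is that simple geometric heuristics only go so far. As a baseline intuition, a relation like "above" can be approximated from 2D bounding boxes alone; the following is an illustrative sketch, not a method from the SpatialSense paper.

```python
# Illustrative heuristic: decide "above" from 2D bounding boxes.
# Boxes are (x_min, y_min, x_max, y_max) in image coordinates,
# where y increases downward.

def is_above(box_a, box_b):
    """True if box_a lies entirely above box_b in image space."""
    return box_a[3] <= box_b[1]  # a's bottom edge at or above b's top edge

lamp = (40, 10, 80, 50)
desk = (20, 60, 120, 100)
print(is_above(lamp, desk))  # True
```

Heuristics like this fail on occlusion, perspective, and relations that depend on 3D depth, which is exactly why adversarially collected benchmarks such as SpatialSense are hard.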