no code implementations • 21 Mar 2024 • Nikolaos Tsagkas, Jack Rome, Subramanian Ramamoorthy, Oisin Mac Aodha, Chris Xiaoxuan Lu
Precise manipulation that is generalizable across scenes and objects remains a persistent challenge in robotics.
no code implementations • 21 May 2023 • Nikolaos Tsagkas, Oisin Mac Aodha, Chris Xiaoxuan Lu
We present Visual-Language Fields (VL-Fields), a neural implicit spatial representation that enables open-vocabulary semantic queries.
no code implementations • 7 Sep 2022 • Alfredo Nazabal, Nikolaos Tsagkas, Christopher K. I. Williams
In this paper, we specify a generative model for such data and derive a variational algorithm that infers both the transformation of each model object in a scene and the assignment of observed parts to those objects.
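The abstract pairs two coupled inference problems: the transformation of each object and the part-to-object assignments. As a much-simplified illustration (not the paper's variational algorithm), the noise-free, single-object, translation-only case can be solved in closed form, since the mean of the observed parts is invariant to the unknown permutation:

```python
import numpy as np

# Toy setup (hypothetical, for illustration only): one object template
# whose parts are 2-D points; the scene translates it and shuffles the parts.
template = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
true_t = np.array([2.0, -1.0])
rng = np.random.default_rng(0)
observed = rng.permutation(template + true_t)  # parts arrive unordered

# Transformation: the means match regardless of the unknown permutation,
# so the translation falls out directly.
t = observed.mean(axis=0) - template.mean(axis=0)

# Assignments: each observed part goes to the nearest transformed model part.
d2 = ((observed[:, None] - (template + t)[None]) ** 2).sum(-1)
assign = d2.argmin(axis=1)

print(t)       # recovers [2., -1.]
print(assign)  # a permutation of the template indices
```

With noise, multiple objects, or richer transformations, neither step has a closed form, which is why the paper resorts to variational inference over transformations and assignments jointly.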
2 code implementations • 11 Mar 2021 • Alfredo Nazabal, Nikolaos Tsagkas, Christopher K. I. Williams
Capsule networks (see, e.g., Hinton et al., 2018) aim to encode knowledge and reason about the relationship between an object and its parts.
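The core part-whole idea behind capsules can be sketched with a toy pose-agreement check (this is an illustrative assumption, not the routing algorithm of Hinton et al., 2018): each observed part predicts the whole's pose by composing its own pose with a fixed part-to-whole transform, and agreeing predictions constitute evidence for the object.

```python
import numpy as np

def se2(theta, tx, ty):
    """A 2-D rigid pose as a 3x3 homogeneous matrix."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, tx], [s, c, ty], [0.0, 0.0, 1.0]])

whole = se2(0.3, 2.0, 1.0)  # true (unobserved) object pose

# Hypothetical fixed part-in-whole relations (one per part).
rels = [se2(0.0, 1.0, 0.0), se2(0.5, 0.0, 1.0)]

# Observed part poses: each is the whole's pose composed with its relation.
parts = [whole @ r for r in rels]

# Each part "votes" for the whole by undoing its relation.
votes = [p @ np.linalg.inv(r) for p, r in zip(parts, rels)]

# Consistent parts produce agreeing votes, signalling the object's presence.
agree = np.allclose(votes[0], votes[1])
print(agree)  # True
```

Parts belonging to different objects, or clutter, would produce disagreeing votes, so agreement doubles as an assignment cue.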