Search Results for author: Nikolaos Tsagkas

Found 4 papers, 1 paper with code

Click to Grasp: Zero-Shot Precise Manipulation via Visual Diffusion Descriptors

No code implementations · 21 Mar 2024 · Nikolaos Tsagkas, Jack Rome, Subramanian Ramamoorthy, Oisin Mac Aodha, Chris Xiaoxuan Lu

Precise manipulation that is generalizable across scenes and objects remains a persistent challenge in robotics.

VL-Fields: Towards Language-Grounded Neural Implicit Spatial Representations

No code implementations · 21 May 2023 · Nikolaos Tsagkas, Oisin Mac Aodha, Chris Xiaoxuan Lu

We present Visual-Language Fields (VL-Fields), a neural implicit spatial representation that enables open-vocabulary semantic queries.

Tasks: Segmentation, Semantic Segmentation

Inference and Learning for Generative Capsule Models

No code implementations · 7 Sep 2022 · Alfredo Nazabal, Nikolaos Tsagkas, Christopher K. I. Williams

In this paper we specify a generative model for such data, and derive a variational algorithm for inferring the transformation of each model object in a scene and the assignment of observed parts to objects.

Tasks: Object

Inference for Generative Capsule Models

2 code implementations · 11 Mar 2021 · Alfredo Nazabal, Nikolaos Tsagkas, Christopher K. I. Williams

Capsule networks (see e.g. Hinton et al., 2018) aim to encode knowledge of, and reason about, the relationship between an object and its parts.

Tasks: Object
