Search Results for author: Georgios Tziafas

Found 6 papers, 3 papers with code

Language-guided Robot Grasping: CLIP-based Referring Grasp Synthesis in Clutter

1 code implementation • 9 Nov 2023 • Georgios Tziafas, Yucheng Xu, Arushi Goel, Mohammadreza Kasaei, Zhibin Li, Hamidreza Kasaei

To address these limitations, we develop a challenging benchmark based on cluttered indoor scenes from the OCID dataset, for which we generate referring expressions and pair them with 4-DoF grasp poses.

Object • Visual Grounding
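
The benchmark described above pairs free-form referring expressions with 4-DoF grasp poses over cluttered OCID scenes. Below is a minimal sketch of what one such sample might look like; the field names and units are assumptions for illustration, not the benchmark's actual schema.

```python
# Illustrative only: field names and units are assumptions, not the benchmark's schema.
# A 4-DoF grasp is taken here as (center x, center y, in-plane rotation, gripper width).
from dataclasses import dataclass

@dataclass
class ReferringGraspSample:
    image_path: str     # cluttered RGB scene (OCID-style)
    expression: str     # natural-language referring expression for the target object
    grasp_x: float      # grasp center, x (pixels)
    grasp_y: float      # grasp center, y (pixels)
    grasp_theta: float  # in-plane gripper rotation (radians)
    grasp_width: float  # gripper opening width (pixels)

sample = ReferringGraspSample(
    image_path="scenes/clutter_0001.png",
    expression="the red mug next to the cereal box",
    grasp_x=312.0, grasp_y=188.5, grasp_theta=0.42, grasp_width=55.0,
)
print(sample.expression, (sample.grasp_x, sample.grasp_y, sample.grasp_theta, sample.grasp_width))
```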

Enhancing Fine-Grained 3D Object Recognition using Hybrid Multi-Modal Vision Transformer-CNN Models

1 code implementation • 3 Oct 2022 • Songsong Xiong, Georgios Tziafas, Hamidreza Kasaei

Robots operating in human-centered environments, such as retail stores, restaurants, and households, are often required to distinguish between similar objects in different contexts with a high degree of accuracy.

3D Object Recognition • Fine-Grained Image Classification • +1

Enhancing Interpretability and Interactivity in Robot Manipulation: A Neurosymbolic Approach

1 code implementation • 3 Oct 2022 • Georgios Tziafas, Hamidreza Kasaei

Finally, we integrate our method with a robot framework and demonstrate how it can serve as an interpretable solution for an interactive object-picking task, both in simulation and with a real robot.

Referring Expression • Robot Manipulation • +4

Early or Late Fusion Matters: Efficient RGB-D Fusion in Vision Transformers for 3D Object Recognition

no code implementations • 3 Oct 2022 • Georgios Tziafas, Hamidreza Kasaei

We explore which depth representation is better in terms of resulting accuracy and compare early and late fusion techniques for aligning the RGB and depth modalities within the ViT architecture.

3D Object Recognition
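
To make the early vs. late fusion contrast concrete, here is a minimal PyTorch sketch (not the paper's implementation): early fusion concatenates the depth map as a fourth input channel before a single patch embedding and encoder, while late fusion runs separate RGB and depth encoders and concatenates their pooled features before the classifier head. The dimensions, the single-channel depth representation, and the omission of positional embeddings and a CLS token are simplifying assumptions.

```python
# Minimal sketch of early vs. late RGB-D fusion around a ViT-style encoder.
# Hyperparameters are illustrative; positional embeddings and CLS token omitted for brevity.
import torch
import torch.nn as nn

def vit_encoder(dim=192, depth=4, heads=3):
    layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                       dim_feedforward=dim * 4, batch_first=True)
    return nn.TransformerEncoder(layer, num_layers=depth)

class EarlyFusionViT(nn.Module):
    """Concatenate RGB and depth channels, then embed patches and run one encoder."""
    def __init__(self, num_classes=10, dim=192, patch=16):
        super().__init__()
        self.patch_embed = nn.Conv2d(4, dim, kernel_size=patch, stride=patch)  # 3 RGB + 1 depth
        self.encoder = vit_encoder(dim)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, rgb, depth):
        x = torch.cat([rgb, depth], dim=1)                      # B x 4 x H x W
        tokens = self.patch_embed(x).flatten(2).transpose(1, 2)  # B x N x dim
        return self.head(self.encoder(tokens).mean(dim=1))

class LateFusionViT(nn.Module):
    """Encode RGB and depth separately, fuse pooled features before the classifier head."""
    def __init__(self, num_classes=10, dim=192, patch=16):
        super().__init__()
        self.rgb_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        self.depth_embed = nn.Conv2d(1, dim, kernel_size=patch, stride=patch)
        self.rgb_encoder = vit_encoder(dim)
        self.depth_encoder = vit_encoder(dim)
        self.head = nn.Linear(2 * dim, num_classes)

    def forward(self, rgb, depth):
        r = self.rgb_embed(rgb).flatten(2).transpose(1, 2)
        d = self.depth_embed(depth).flatten(2).transpose(1, 2)
        r = self.rgb_encoder(r).mean(dim=1)
        d = self.depth_encoder(d).mean(dim=1)
        return self.head(torch.cat([r, d], dim=-1))

rgb, depth = torch.randn(2, 3, 224, 224), torch.randn(2, 1, 224, 224)
print(EarlyFusionViT()(rgb, depth).shape, LateFusionViT()(rgb, depth).shape)
```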

Sim-To-Real Transfer of Visual Grounding for Human-Aided Ambiguity Resolution

no code implementations • 24 May 2022 • Georgios Tziafas, Hamidreza Kasaei

Service robots should be able to interact naturally with non-expert human users, not only to help them with various tasks but also to receive guidance for resolving ambiguities in the instruction.

Domain Adaptation • Visual Grounding
