Search Results for author: Gi-Cheon Kang

Found 9 papers, 7 papers with code

Socratic Planner: Inquiry-Based Zero-Shot Planning for Embodied Instruction Following

no code implementations • 21 Apr 2024 • Suyeon Shin, Sujin Jeon, Junghyun Kim, Gi-Cheon Kang, Byoung-Tak Zhang

Embodied Instruction Following (EIF) is the task of executing natural language instructions by navigating and interacting with objects in 3D environments.

Continual Vision-and-Language Navigation

no code implementations • 22 Mar 2024 • Seongjun Jeong, Gi-Cheon Kang, SeongHo Choi, Joochan Kim, Byoung-Tak Zhang

For the training and evaluation of CVLN agents, we rearrange existing VLN datasets into two new datasets: CVLN-I, focused on navigation via initial-instruction interpretation, and CVLN-D, aimed at navigation through dialogue with other agents.

Continual Learning, Navigate, +1

PGA: Personalizing Grasping Agents with Single Human-Robot Interaction

1 code implementation • 19 Oct 2023 • Junghyun Kim, Gi-Cheon Kang, Jaein Kim, Seoyun Yang, Minjoon Jung, Byoung-Tak Zhang

Based on the acquired information, PGA pseudo-labels objects in the Reminiscence using our proposed label propagation algorithm (a rough sketch follows this entry).

Object, Robotic Grasping
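As a loose illustration of pseudo-labeling via label propagation, the minimal sketch below assigns each unlabeled object the label of its most similar labeled neighbor in embedding space. The embedding format, the nearest-neighbor rule, and the similarity threshold are assumptions for illustration, not PGA's actual algorithm.

```python
import numpy as np

def propagate_labels(feats, labels, sim_threshold=0.8):
    """Fill in pseudo-labels for unlabeled objects by nearest labeled neighbor.

    feats:  (N, D) array of L2-normalized object embeddings.
    labels: length-N list; a string label, or None if unlabeled.
    Returns a copy of labels with pseudo-labels filled in wherever the
    closest labeled embedding is at least sim_threshold similar.
    """
    labeled_idx = [i for i, y in enumerate(labels) if y is not None]
    out = list(labels)
    if not labeled_idx:
        return out  # nothing to propagate from
    for i, y in enumerate(labels):
        if y is not None:
            continue
        # Cosine similarity to every labeled embedding (unit-norm dot product).
        sims = feats[labeled_idx] @ feats[i]
        best = int(np.argmax(sims))
        if sims[best] >= sim_threshold:
            out[i] = labels[labeled_idx[best]]  # propagate the nearest label
    return out
```

The threshold keeps low-confidence matches unlabeled rather than forcing a noisy pseudo-label onto every object.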

GVCCI: Lifelong Learning of Visual Grounding for Language-Guided Robotic Manipulation

1 code implementation • 12 Jul 2023 • Junghyun Kim, Gi-Cheon Kang, Jaein Kim, Suyeon Shin, Byoung-Tak Zhang

Furthermore, the qualitative analysis shows that the unadapted VG model often fails to find the correct objects due to a strong bias learned from the pre-training data.

Object Detection, Visual Grounding

Dual Attention Networks for Visual Reference Resolution in Visual Dialog

2 code implementations • IJCNLP 2019 • Gi-Cheon Kang, Jaeseo Lim, Byoung-Tak Zhang

Specifically, the REFER module learns latent relationships between a given question and the dialog history by employing a self-attention mechanism (a rough sketch follows this entry).

Question Answering, Visual Dialog, +2
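As a loose sketch of how a question can attend over a dialog history, the toy function below computes single-head scaled dot-product attention from a question embedding to per-round history embeddings. The tensor shapes, function name, and single-head form are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def attend_history(question, history):
    """Toy question-to-history attention, loosely in the spirit of REFER.

    question: (B, D) question embedding.
    history:  (B, T, D) embeddings of the T previous dialog rounds.
    Returns a (B, D) history summary weighted by relevance to the question.
    """
    d = history.size(-1)
    q = question.unsqueeze(1)                        # (B, 1, D)
    scores = q @ history.transpose(1, 2) / d ** 0.5  # (B, 1, T) scaled dot products
    weights = F.softmax(scores, dim=-1)              # attention over dialog rounds
    return (weights @ history).squeeze(1)            # (B, D) attended history
```

The softmax weights indicate which previous rounds the question most likely refers to, which is the intuition behind visual reference resolution in dialog.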
