Search Results for author: Chung Min Kim

Found 4 papers, 2 papers with code

GARField: Group Anything with Radiance Fields

1 code implementation • 17 Jan 2024 • Chung Min Kim, Mingxuan Wu, Justin Kerr, Ken Goldberg, Matthew Tancik, Angjoo Kanazawa

We optimize this field from a set of 2D masks provided by Segment Anything (SAM) in a way that respects coarse-to-fine hierarchy, using scale to consistently fuse conflicting masks from different viewpoints.

Scene Understanding
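
As a rough illustration of the idea in the GARField abstract above, the sketch below shows a scale-conditioned grouping field: an MLP that maps a 3D point plus a physical scale to an embedding, trained with a contrastive pull/push objective over SAM mask pairs. This is a minimal toy, not GARField's actual implementation; the class and function names (`GroupingField`, `pull_push_loss`) and all hyperparameters are hypothetical, and the paper's real loss and architecture differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GroupingField(nn.Module):
    """Toy scale-conditioned affinity field (hypothetical, not GARField's code).

    Maps a 3D point plus a scale to a unit embedding; points whose embeddings
    are close at a given scale are treated as belonging to one group.
    """

    def __init__(self, hidden_dim: int = 256, embed_dim: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(4, hidden_dim),  # input: (x, y, z, scale)
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, embed_dim),
        )

    def forward(self, xyz: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
        feats = self.mlp(torch.cat([xyz, scale], dim=-1))
        return F.normalize(feats, dim=-1)  # embed on the unit sphere


def pull_push_loss(z_a: torch.Tensor, z_b: torch.Tensor,
                   same_group: torch.Tensor, margin: float = 1.0) -> torch.Tensor:
    """Simplified contrastive objective: pull embeddings of point pairs that
    fall in the same SAM mask together, push different-mask pairs apart."""
    dist = (z_a - z_b).norm(dim=-1)
    pull = same_group * dist.pow(2)
    push = (1 - same_group) * (margin - dist).clamp(min=0).pow(2)
    return (pull + push).mean()
```

Conditioning the field on scale is what lets conflicting masks coexist: a point pair can be "same group" at the scale of a whole object while being "different groups" at the scale of its parts.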

Language Embedded Radiance Fields for Zero-Shot Task-Oriented Grasping

no code implementations • 14 Sep 2023 • Adam Rashid, Satvik Sharma, Chung Min Kim, Justin Kerr, Lawrence Chen, Angjoo Kanazawa, Ken Goldberg

Instead, we propose LERF-TOGO, Language Embedded Radiance Fields for Task-Oriented Grasping of Objects, which uses vision-language models zero-shot to output a grasp distribution over an object given a natural language query.

Object
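
The LERF-TOGO abstract above describes turning a natural-language query into a grasp distribution. The sketch below shows one simplified reading of that pipeline: score each rendered 3D point by its similarity to a CLIP text embedding, then reweight geometrically-scored candidate grasps by the relevancy of the object part they contact. The function name and all tensor shapes are assumptions for illustration; the actual system composes object- and part-level queries and uses additional grouping machinery not shown here.

```python
import torch
import torch.nn.functional as F

def grasp_distribution(point_feats: torch.Tensor,
                       text_feat: torch.Tensor,
                       grasp_scores: torch.Tensor,
                       grasp_points: torch.Tensor) -> torch.Tensor:
    """Hypothetical sketch of language-conditioned grasp reweighting.

    point_feats:  (N, D) CLIP-style embeddings rendered from the field
    text_feat:    (D,)   CLIP embedding of the query, e.g. "mug handle"
    grasp_scores: (G,)   geometric quality score of each candidate grasp
    grasp_points: (G,)   long tensor: index of the point each grasp contacts
    """
    # Relevancy of every point to the language query.
    relevancy = F.cosine_similarity(point_feats, text_feat[None, :], dim=-1)
    # Suppress irrelevant regions, then reweight the candidate grasps.
    weighted = grasp_scores * relevancy[grasp_points].clamp(min=0)
    return weighted / weighted.sum().clamp(min=1e-8)  # normalized distribution
```

Sampling grasps from this distribution biases execution toward the queried part (e.g., the handle) without discarding the underlying geometric grasp quality.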

LERF: Language Embedded Radiance Fields

5 code implementations • ICCV 2023 • Justin Kerr, Chung Min Kim, Ken Goldberg, Angjoo Kanazawa, Matthew Tancik

Humans describe the physical world using natural language to refer to specific 3D locations based on a vast range of properties: visual appearance, semantics, abstract associations, or actionable affordances.
