Search Results for author: Brent Yi

Found 7 papers, 4 papers with code

General In-Hand Object Rotation with Vision and Touch

no code implementations • 18 Sep 2023 • Haozhi Qi, Brent Yi, Sudharshan Suresh, Mike Lambeta, Yi Ma, Roberto Calandra, Jitendra Malik

We introduce RotateIt, a system that enables fingertip-based object rotation along multiple axes by leveraging multimodal sensory inputs.

Object

Canonical Factors for Hybrid Neural Fields

no code implementations • ICCV 2023 • Brent Yi, Weijia Zeng, Sam Buchanan, Yi Ma

Factored feature volumes offer a simple way to build more compact, efficient, and interpretable neural fields, but also introduce biases that are not necessarily beneficial for real-world data.
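To make the idea of a factored feature volume concrete, here is a minimal sketch: a dense 3D feature grid is replaced by a rank-R, CP-style product of per-axis factor matrices that is evaluated per query point. This is an illustrative assumption in NumPy, not code from the paper; the names, shapes, and nearest-neighbor lookup are all made up for the example.

```python
import numpy as np

# Hypothetical rank-R factorization of an N^3 grid of F-dim features:
# V[i, j, k] = (A[i] * B[j] * C[k]) @ P, with A, B, C of shape (N, R).
R, F, N = 16, 8, 64
factors = [np.random.randn(N, R) for _ in range(3)]  # one factor matrix per axis
proj = np.random.randn(R, F)                         # rank -> feature projection

def query(xyz):
    """Look up features at continuous coordinates in [0, 1)^3 (nearest-neighbor)."""
    idx = np.clip((xyz * N).astype(int), 0, N - 1)   # (B, 3) integer grid indices
    # Elementwise product across axes implements the rank-R decomposition.
    prod = factors[0][idx[:, 0]] * factors[1][idx[:, 1]] * factors[2][idx[:, 2]]
    return prod @ proj                               # (B, F) features

feats = query(np.random.rand(5, 3))
print(feats.shape)  # (5, 8); storage is 3*N*R + R*F values instead of N**3 * F
```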

Nerfstudio: A Modular Framework for Neural Radiance Field Development

2 code implementations • 8 Feb 2023 • Matthew Tancik, Ethan Weber, Evonne Ng, Ruilong Li, Brent Yi, Justin Kerr, Terrance Wang, Alexander Kristoffersen, Jake Austin, Kamyar Salahi, Abhik Ahuja, David McAllister, Angjoo Kanazawa

Neural Radiance Fields (NeRF) are a rapidly growing area of research with wide-ranging applications in computer vision, graphics, robotics, and more.

Unsupervised Learning of Structured Representations via Closed-Loop Transcription

1 code implementation • 30 Oct 2022 • Shengbang Tong, Xili Dai, Yubei Chen, Mingyang Li, Zengyi Li, Brent Yi, Yann LeCun, Yi Ma

This paper proposes an unsupervised method for learning a unified representation that serves both discriminative and generative purposes.

Category-Independent Articulated Object Tracking with Factor Graphs

no code implementations • 7 May 2022 • Nick Heppert, Toki Migimatsu, Brent Yi, Claire Chen, Jeannette Bohg

Robots deployed in human-centric environments may need to manipulate a diverse range of articulated objects, such as doors, dishwashers, and cabinets.

Object • Object Tracking

Incremental Learning of Structured Memory via Closed-Loop Transcription

1 code implementation • 11 Feb 2022 • Shengbang Tong, Xili Dai, Ziyang Wu, Mingyang Li, Brent Yi, Yi Ma

Our method is simpler than existing approaches for incremental learning, and more efficient in terms of model size, storage, and computation: it requires only a single, fixed-capacity autoencoding network with a feature space that is used for both discriminative and generative purposes.

Incremental Learning
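As a rough illustration of the "single, fixed-capacity autoencoding network with a feature space used for both discriminative and generative purposes" described above, the sketch below shows one linear encoder/decoder pair whose latent space supports nearest-class-mean classification (discriminative use) and decoding latent codes back to input space (generative use). It is a hedged NumPy toy, not the authors' closed-loop transcription implementation; all names, dimensions, and the classification rule are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
D, Z, C = 32, 8, 5                               # input dim, latent dim, num classes
W_enc = rng.normal(size=(D, Z)) / np.sqrt(D)     # single fixed-capacity encoder
W_dec = rng.normal(size=(Z, D)) / np.sqrt(Z)     # matching decoder

def encode(x):
    return x @ W_enc

def decode(z):
    return z @ W_dec

# Discriminative use: classify by nearest class mean in the shared latent space.
x_train = rng.normal(size=(100, D))
y_train = rng.integers(0, C, size=100)
class_means = np.stack(
    [encode(x_train[y_train == c]).mean(axis=0) for c in range(C)]
)

def classify(x):
    z = encode(x)
    return np.argmin(((z[:, None, :] - class_means[None]) ** 2).sum(-1), axis=1)

# Generative use: decode latent codes (here, the class means) back to input space.
prototypes = decode(class_means)                 # one reconstruction per class
print(classify(x_train[:3]), prototypes.shape)
```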
