Search Results for author: Hong-Xing Yu

Found 29 papers, 9 papers with code

Reconstruction and Simulation of Elastic Objects with Spring-Mass 3D Gaussians

no code implementations · 14 Mar 2024 · Licheng Zhong, Hong-Xing Yu, Jiajun Wu, Yunzhu Li

In particular, we develop and integrate a 3D Spring-Mass model into 3D Gaussian kernels, enabling the reconstruction of the visual appearance, shape, and physical dynamics of the object.

Tasks: Future prediction, Object
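
The paper's coupling of a Spring-Mass model with 3D Gaussian kernels is not reproduced here, but the spring-mass dynamics it builds on are easy to illustrate. A minimal sketch, assuming scalar node mass, hypothetical stiffness and rest lengths, and symplectic Euler integration:

```python
import numpy as np

def spring_mass_step(x, v, springs, rest_len, k, mass, dt,
                     gravity=np.array([0.0, -9.8, 0.0])):
    """One explicit step of a spring-mass system.
    x: (N, 3) node positions, v: (N, 3) velocities, springs: (M, 2) index pairs,
    rest_len: (M,) rest lengths, k: stiffness, mass: scalar node mass (all hypothetical)."""
    f = np.tile(gravity * mass, (len(x), 1))                  # external force on every node
    i, j = springs[:, 0], springs[:, 1]
    d = x[j] - x[i]                                           # spring vectors i -> j
    length = np.linalg.norm(d, axis=1, keepdims=True)
    direction = d / np.maximum(length, 1e-8)
    f_spring = k * (length - rest_len[:, None]) * direction   # Hooke's law
    np.add.at(f, i,  f_spring)                                # stretched spring pulls i toward j
    np.add.at(f, j, -f_spring)                                # ... and j toward i
    v = v + dt * f / mass                                     # symplectic Euler: velocity first,
    x = x + dt * v                                            # then position with the new velocity
    return x, v
```

Symplectic Euler keeps this explicit update stable at moderate stiffness, which is why it is the usual default for spring-mass sketches like this one.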

Unsupervised Discovery of Object-Centric Neural Fields

no code implementations · 12 Feb 2024 · Rundong Luo, Hong-Xing Yu, Jiajun Wu

Extensive experiments show that uOCF enables unsupervised discovery of visually rich objects from a single real image, allowing applications such as 3D object segmentation and scene manipulation.

Tasks: Object, Object Discovery, +3 more

Fluid Simulation on Neural Flow Maps

no code implementations · 22 Dec 2023 · Yitong Deng, Hong-Xing Yu, Diyang Zhang, Jiajun Wu, Bo Zhu

We introduce Neural Flow Maps, a novel simulation method bridging the emerging paradigm of implicit neural representations with fluid simulation based on the theory of flow maps, to achieve state-of-the-art simulation of inviscid fluid phenomena.
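
The flow-map machinery referenced above can be sketched without the neural part: a backward flow map traces each sample point back through the velocity history, and advected quantities are pulled back along it. A minimal sketch, assuming a hypothetical velocity(x, t) callable and midpoint (RK2) backtracking; the paper's implicit neural representation and solver are not shown:

```python
import numpy as np

def backward_flow_map(x, t, n_steps, velocity):
    """Trace points x at time t back to time 0 through velocity(x, t) (midpoint / RK2)."""
    dt = t / n_steps
    for step in range(n_steps):
        tk = t - step * dt
        x_mid = x - 0.5 * dt * velocity(x, tk)          # half step backward
        x = x - dt * velocity(x_mid, tk - 0.5 * dt)     # full step using midpoint velocity
    return x                                            # approximate pre-image under the flow

def advect(q0, x, t, n_steps, velocity):
    """Pull an initial field back along the flow map: q(x, t) = q0(psi(x))."""
    return q0(backward_flow_map(x, t, n_steps, velocity))
```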

Inferring Hybrid Neural Fluid Fields from Videos

no code implementations · NeurIPS 2023 · Hong-Xing Yu, Yang Zheng, Yuan Gao, Yitong Deng, Bo Zhu, Jiajun Wu

Specifically, to deal with visual ambiguities of fluid velocity, we introduce a set of physics-based losses that enforce inferring a physically plausible velocity field, which is divergence-free and drives the transport of density.

Tasks: Dynamic Reconstruction, Future prediction
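
The divergence-free requirement mentioned in the abstract above is naturally written as a penalty on a velocity network. A minimal PyTorch sketch, assuming a hypothetical pointwise network v_net mapping (x, y, z, t) to a 3D velocity; this covers only the divergence term, not the paper's full set of physics-based losses:

```python
import torch

def divergence_loss(v_net, xyzt):
    """Penalize div(v) = dvx/dx + dvy/dy + dvz/dz at sampled space-time points xyzt: (N, 4)."""
    xyzt = xyzt.clone().requires_grad_(True)
    v = v_net(xyzt)                                     # (N, 3) predicted velocity
    div = 0.0
    for k in range(3):                                  # derivative of the k-th component
        grad_k = torch.autograd.grad(v[:, k].sum(), xyzt, create_graph=True)[0][:, k]
        div = div + grad_k                              # ... with respect to the k-th coordinate
    return (div ** 2).mean()
```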

ZeroNVS: Zero-Shot 360-Degree View Synthesis from a Single Real Image

no code implementations · 27 Oct 2023 · Kyle Sargent, Zizhang Li, Tanmay Shah, Charles Herrmann, Hong-Xing Yu, Yunzhi Zhang, Eric Ryan Chan, Dmitry Lagun, Li Fei-Fei, Deqing Sun, Jiajun Wu

Further, we observe that Score Distillation Sampling (SDS) tends to truncate the distribution of complex backgrounds during distillation of 360-degree scenes, and propose "SDS anchoring" to improve the diversity of synthesized novel views.

Tasks: Novel View Synthesis
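
For context, vanilla Score Distillation Sampling treats w(t) * (eps_pred - eps) as the gradient of a distillation loss with respect to the rendered image. A minimal sketch, assuming a hypothetical noise-prediction network eps_model and a noise-schedule value alpha_bar_t passed in as a tensor; the paper's "SDS anchoring" modification is not reproduced here:

```python
import torch

def sds_grad(eps_model, rendered, cond, alpha_bar_t, w_t):
    """Vanilla score distillation: return w(t) * (eps_pred - eps), treated as the
    gradient of the distillation loss with respect to the rendered image.
    eps_model is a hypothetical noise-prediction network."""
    eps = torch.randn_like(rendered)
    noisy = alpha_bar_t.sqrt() * rendered + (1.0 - alpha_bar_t).sqrt() * eps   # forward diffusion
    eps_pred = eps_model(noisy, cond)
    return w_t * (eps_pred - eps)
```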

Tree-Structured Shading Decomposition

no code implementations · ICCV 2023 · Chen Geng, Hong-Xing Yu, Sharon Zhang, Maneesh Agrawala, Jiajun Wu

The shade tree representation enables novice users who are unfamiliar with the physical shading process to edit object shading in an efficient and intuitive manner.

Tasks: Object
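
A shade tree composes simple base shading terms with interior operator nodes, so edits reduce to swapping subtrees or adjusting node parameters. A toy sketch with illustrative node types only, not the paper's node grammar or its decomposition algorithm:

```python
class Leaf:
    """Base shading term, e.g. a constant albedo or a highlight value."""
    def __init__(self, value):
        self.value = value

    def eval(self):
        return self.value

class Mix:
    """Interior node blending two subtrees with weight w."""
    def __init__(self, left, right, w):
        self.left, self.right, self.w = left, right, w

    def eval(self):
        return self.w * self.left.eval() + (1.0 - self.w) * self.right.eval()

# Editing the shading amounts to swapping subtrees or tweaking node parameters:
shading = Mix(Leaf(0.8), Mix(Leaf(0.2), Leaf(1.0), w=0.3), w=0.5)
print(shading.eval())   # 0.78
```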

Learning Vortex Dynamics for Fluid Inference and Prediction

no code implementations · 27 Jan 2023 · Yitong Deng, Hong-Xing Yu, Jiajun Wu, Bo Zhu

We propose a novel differentiable vortex particle (DVP) method to infer and predict fluid dynamics from a single video.

Tasks: Future prediction
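
A vortex particle method reconstructs the velocity field from a sparse set of particles carrying circulation. A minimal 2D Biot-Savart sketch with a small softening term; the differentiable, trainable particle representation proposed in the paper is not shown:

```python
import numpy as np

def velocity_from_vortices(query, particles, circulation, eps=1e-4):
    """2D Biot-Savart: u(x) = sum_i Gamma_i / (2*pi*|r_i|^2) * (-r_i_y, r_i_x).
    query: (Q, 2) points, particles: (M, 2) positions, circulation: (M,) strengths."""
    r = query[None, :, :] - particles[:, None, :]       # (M, Q, 2) offsets
    r2 = (r ** 2).sum(-1) + eps                         # softened squared distances
    coef = circulation[:, None] / (2.0 * np.pi * r2)    # (M, Q)
    u = (coef * -r[..., 1]).sum(axis=0)                 # x-component of induced velocity
    v = (coef *  r[..., 0]).sum(axis=0)                 # y-component
    return np.stack([u, v], axis=-1)                    # (Q, 2)
```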

Differentiable Physics Simulation of Dynamics-Augmented Neural Objects

no code implementations · 17 Oct 2022 · Simon Le Cleac'h, Hong-Xing Yu, Michelle Guo, Taylor A. Howell, Ruohan Gao, Jiajun Wu, Zachary Manchester, Mac Schwager

A robot can use this simulation to optimize grasps and manipulation trajectories of neural objects, or to improve the neural object models through gradient-based real-to-simulation transfer.

Tasks: Friction, Object
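
The gradient-based real-to-simulation transfer mentioned above amounts to fitting simulation parameters by backpropagating a trajectory loss through a differentiable simulator. A generic PyTorch sketch, assuming a hypothetical differentiable simulate(friction) function; this is not the paper's simulator or parameterization:

```python
import torch

def fit_friction(simulate, observed_traj, n_iters=200, lr=1e-2):
    """Fit a friction coefficient so simulated rollouts match an observed trajectory.
    simulate(friction) is a hypothetical differentiable simulator returning a trajectory
    tensor of the same shape as observed_traj."""
    friction = torch.tensor(0.5, requires_grad=True)
    opt = torch.optim.Adam([friction], lr=lr)
    for _ in range(n_iters):
        opt.zero_grad()
        loss = ((simulate(friction) - observed_traj) ** 2).mean()
        loss.backward()                                 # gradients flow through the simulator
        opt.step()
    return friction.detach()
```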

Unsupervised Discovery and Composition of Object Light Fields

no code implementations · 8 May 2022 · Cameron Smith, Hong-Xing Yu, Sergey Zakharov, Fredo Durand, Joshua B. Tenenbaum, Jiajun Wu, Vincent Sitzmann

Neural scene representations, both continuous and discrete, have recently emerged as a powerful new paradigm for 3D scene understanding.

Tasks: Novel View Synthesis, Object, +1 more

Rotationally Equivariant 3D Object Detection

no code implementations · CVPR 2022 · Hong-Xing Yu, Jiajun Wu, Li Yi

To incorporate object-level rotation equivariance into 3D object detectors, we need a mechanism to extract equivariant features with local object-level spatial support while being able to model cross-object context information.

Tasks: 3D Object Detection, Autonomous Driving, +2 more

Letter-level Online Writer Identification

no code implementations · 6 Dec 2021 · Zelin Chen, Hong-Xing Yu, AnCong Wu, Wei-Shi Zheng

To make the application of writer-id more practical (e.g., on mobile devices), we focus on a novel problem, letter-level online writer-id, which requires only a few trajectories of written letters as identification cues.

Unsupervised Discovery of Object Radiance Fields

1 code implementation · ICLR 2022 · Hong-Xing Yu, Leonidas J. Guibas, Jiajun Wu

We study the problem of inferring an object-centric scene representation from a single image, aiming to derive a representation that explains the image formation process, captures the scene's 3D nature, and is learned without supervision.

Tasks: Novel View Synthesis, Object, +1 more

OpenRooms: An Open Framework for Photorealistic Indoor Scene Datasets

no code implementations · CVPR 2021 · Zhengqin Li, Ting-Wei Yu, Shen Sang, Sarah Wang, Meng Song, YuHan Liu, Yu-Ying Yeh, Rui Zhu, Nitesh Gundavarapu, Jia Shi, Sai Bi, Hong-Xing Yu, Zexiang Xu, Kalyan Sunkavalli, Miloš Hašan, Ravi Ramamoorthi, Manmohan Chandraker

Finally, we demonstrate that our framework may also be integrated with physics engines, to create virtual robotics environments with unique ground truth such as friction coefficients and correspondence to real scenes.

Tasks: Friction, Inverse Rendering, +1 more

Neural Radiance Flow for 4D View Synthesis and Video Processing

1 code implementation · ICCV 2021 · Yilun Du, Yinan Zhang, Hong-Xing Yu, Joshua B. Tenenbaum, Jiajun Wu

We present a method, Neural Radiance Flow (NeRFlow), to learn a 4D spatial-temporal representation of a dynamic scene from a set of RGB images.

Tasks: Image Super-Resolution, Temporal View Synthesis
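
Schematically, a 4D spatial-temporal radiance representation can be queried at space-time points for density, color, and scene flow. A PyTorch sketch with hypothetical layer sizes, covering only the field's interface and none of NeRFlow's training losses or consistency terms:

```python
import torch
import torch.nn as nn

class RadianceFlowField(nn.Module):
    """Map a space-time point (x, y, z, t) to density, RGB, and 3D scene flow (schematic)."""
    def __init__(self, hidden=256):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(4, hidden), nn.ReLU(),
                                   nn.Linear(hidden, hidden), nn.ReLU())
        self.density = nn.Linear(hidden, 1)
        self.rgb = nn.Linear(hidden, 3)
        self.flow = nn.Linear(hidden, 3)                # motion of the point at time t

    def forward(self, xyzt):                            # xyzt: (N, 4)
        h = self.trunk(xyzt)
        return self.density(h), torch.sigmoid(self.rgb(h)), self.flow(h)
```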

OpenRooms: An End-to-End Open Framework for Photorealistic Indoor Scene Datasets

no code implementations · 25 Jul 2020 · Zhengqin Li, Ting-Wei Yu, Shen Sang, Sarah Wang, Meng Song, YuHan Liu, Yu-Ying Yeh, Rui Zhu, Nitesh Gundavarapu, Jia Shi, Sai Bi, Zexiang Xu, Hong-Xing Yu, Kalyan Sunkavalli, Miloš Hašan, Ravi Ramamoorthi, Manmohan Chandraker

Finally, we demonstrate that our framework may also be integrated with physics engines, to create virtual robotics environments with unique ground truth such as friction coefficients and correspondence to real scenes.

Tasks: Friction, Inverse Rendering, +2 more

DSRGAN: Explicitly Learning Disentangled Representation of Underlying Structure and Rendering for Image Generation without Tuple Supervision

no code implementations · 30 Sep 2019 · Guang-Yuan Hao, Hong-Xing Yu, Wei-Shi Zheng

We focus on explicitly learning disentangled representations for natural image generation, where the underlying spatial structure and the rendering on that structure can be controlled independently, without any tuple supervision.

Tasks: Image Generation

Unsupervised Person Re-identification by Deep Asymmetric Metric Embedding

1 code implementation · 29 Jan 2019 · Hong-Xing Yu, An-Cong Wu, Wei-Shi Zheng

In this way, DECAMEL jointly learns the feature representation and the unsupervised asymmetric metric.

Tasks: Clustering, Deep Clustering, +2 more
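
The asymmetric metric means samples from different camera views are compared only after view-specific projections. A minimal sketch of such a view-specific projection distance, with hypothetical projection matrices; in the paper these projections are learned jointly with clustering rather than drawn at random:

```python
import numpy as np

def asymmetric_distance(x_p, x_q, U_p, U_q):
    """Compare a feature from camera view p with one from view q after
    view-specific projections: d = || U_p^T x_p - U_q^T x_q ||_2."""
    return np.linalg.norm(U_p.T @ x_p - U_q.T @ x_q)

# Toy usage with hypothetical sizes: 128-d features projected to a 32-d shared space.
rng = np.random.default_rng(0)
U_p, U_q = rng.standard_normal((128, 32)), rng.standard_normal((128, 32))
x_p, x_q = rng.standard_normal(128), rng.standard_normal(128)
print(asymmetric_distance(x_p, x_q, U_p, U_q))
```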

MIXGAN: Learning Concepts from Different Domains for Mixture Generation

1 code implementation · 4 Jul 2018 · Guang-Yuan Hao, Hong-Xing Yu, Wei-Shi Zheng

In this work, we present an interesting attempt at mixture generation: absorbing different image concepts (e.g., content and style) from different domains and thus generating a new domain with the learned concepts.

Tasks: Generative Adversarial Network, Translation

Adversarial Attribute-Image Person Re-identification

no code implementations · 5 Dec 2017 · Zhou Yin, Wei-Shi Zheng, An-Cong Wu, Hong-Xing Yu, Hai Wan, Xiaowei Guo, Feiyue Huang, Jian-Huang Lai

While attributes have been widely used for person re-identification (Re-ID), which aims at matching images of the same person across disjoint camera views, they are used either as extra features or for multi-task learning to assist the image-image matching task.

Tasks: Attribute, Multi-Task Learning, +1 more

RGB-Infrared Cross-Modality Person Re-Identification

no code implementations · ICCV 2017 · Ancong Wu, Wei-Shi Zheng, Hong-Xing Yu, Shaogang Gong, Jian-Huang Lai

To that end, RGB images must be matched with infrared images, which are heterogeneous and have very different visual characteristics.

Ranked #4 on Cross-Modal Person Re-Identification on SYSU-MM01 (mAP, All-search & Single-shot)

Tasks: Cross-Modality Person Re-identification, Cross-Modal Person Re-Identification

Cross-view Asymmetric Metric Learning for Unsupervised Person Re-identification

1 code implementation · ICCV 2017 · Hong-Xing Yu, An-Cong Wu, Wei-Shi Zheng

While metric learning is important for person re-identification (RE-ID), a significant problem in visual surveillance concerning cross-view pedestrian matching, existing metric models for RE-ID are mostly based on supervised learning, which requires large quantities of labeled samples across all pairs of camera views for training.

Tasks: Clustering, Metric Learning, +1 more
