Search Results for author: Yuxi Xiao

Found 5 papers, 3 papers with code

SpatialTracker: Tracking Any 2D Pixels in 3D Space

no code implementations · 5 Apr 2024 · Yuxi Xiao, Qianqian Wang, Shangzhan Zhang, Nan Xue, Sida Peng, Yujun Shen, Xiaowei Zhou

Recovering dense and long-range pixel motion in videos is a challenging problem.

CoDeF: Content Deformation Fields for Temporally Consistent Video Processing

1 code implementation · 15 Aug 2023 · Hao Ouyang, Qiuyu Wang, Yuxi Xiao, Qingyan Bai, Juntao Zhang, Kecheng Zheng, Xiaowei Zhou, Qifeng Chen, Yujun Shen

We present the content deformation field (CoDeF) as a new type of video representation, which consists of a canonical content field aggregating the static contents of the entire video and a temporal deformation field recording the transformations from the canonical image (i.e., rendered from the canonical content field) to each individual frame along the time axis. Given a target video, these two fields are jointly optimized to reconstruct it through a carefully tailored rendering pipeline. We deliberately introduce regularizations into the optimization process, urging the canonical content field to inherit semantics (e.g., the object shape) from the video. With such a design, CoDeF naturally supports lifting image algorithms for video processing, in the sense that one can apply an image algorithm to the canonical image and effortlessly propagate the outcomes to the entire video with the aid of the temporal deformation field. We experimentally show that CoDeF is able to lift image-to-image translation to video-to-video translation and lift keypoint detection to keypoint tracking without any training. More importantly, because our lifting strategy deploys the algorithms on only one image, we achieve superior cross-frame consistency in processed videos compared to existing video-to-video translation approaches, and even manage to track non-rigid objects like water and smog. The project page can be found at https://qiuyu96.github.io/CoDeF/.
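The lifting idea described in the abstract can be sketched with a toy example: run an image algorithm once on the canonical image, then propagate the result to a frame by sampling through a per-pixel deformation field. This is only an illustrative sketch, not CoDeF's actual pipeline; the names, the identity deformation, and the nearest-neighbour sampling are all assumptions made for brevity.

```python
import numpy as np

def warp(canonical, deform):
    """Warp a canonical image to a frame via a per-pixel deformation field.

    deform[y, x] holds the (row, col) location in the canonical image that
    frame pixel (y, x) maps to (nearest-neighbour sampling for brevity).
    """
    ys = np.clip(np.round(deform[..., 0]).astype(int), 0, canonical.shape[0] - 1)
    xs = np.clip(np.round(deform[..., 1]).astype(int), 0, canonical.shape[1] - 1)
    return canonical[ys, xs]

# Toy 4x4 canonical image and an identity deformation field.
canonical = np.arange(16, dtype=float).reshape(4, 4)
identity = np.stack(
    np.meshgrid(np.arange(4), np.arange(4), indexing="ij"), axis=-1
).astype(float)

edited = canonical * 2.0                # "image algorithm" applied once, to the canonical image
frame_result = warp(edited, identity)   # outcome propagated to a frame via the deformation field
```

With a learned, non-identity deformation field per frame, the same one-shot edit would propagate consistently across the whole video.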

Tasks: Image-to-Image Translation, Keypoint Detection, +1

NEAT: Distilling 3D Wireframes from Neural Attraction Fields

1 code implementation · 14 Jul 2023 · Nan Xue, Bin Tan, Yuxi Xiao, Liang Dong, Gui-Song Xia, Tianfu Wu, Yujun Shen

Instead of leveraging matching-based solutions from 2D wireframes (or line segments) for 3D wireframe reconstruction, as done in prior art, we present NEAT, a rendering-distilling formulation that uses neural fields to represent 3D line segments with 2D observations and bipartite matching to perceive and distill a sparse set of 3D global junctions.

Tasks: 3D Wireframe Reconstruction, Novel View Synthesis

Level-S$^2$fM: Structure from Motion on Neural Level Set of Implicit Surfaces

1 code implementation · CVPR 2023 · Yuxi Xiao, Nan Xue, Tianfu Wu, Gui-Song Xia

This paper presents a neural incremental Structure-from-Motion (SfM) approach, Level-S$^2$fM, which estimates the camera poses and scene geometry from a set of uncalibrated images by learning coordinate MLPs for the implicit surfaces and the radiance fields from the established keypoint correspondences.
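The "coordinate MLPs for the implicit surfaces" mentioned above are networks that map a 3D coordinate directly to a signed-distance value. A minimal sketch of such a network, with hypothetical random weights and a single hidden layer (not the paper's architecture), looks like this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny coordinate MLP: 3D point -> signed-distance value.
W1, b1 = rng.normal(size=(3, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 1)), np.zeros(1)

def sdf(points):
    """Evaluate the coordinate MLP at an (N, 3) array of 3D query points."""
    h = np.maximum(points @ W1 + b1, 0.0)  # ReLU hidden layer
    return (h @ W2 + b2).squeeze(-1)       # one signed distance per point

values = sdf(rng.normal(size=(5, 3)))
```

In the paper's setting, the weights of such an MLP would be optimized jointly with camera poses so that its zero level set traces the scene surface.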

Tasks: 3D Reconstruction, Neural Rendering, +1

DeepMLE: A Robust Deep Maximum Likelihood Estimator for Two-view Structure from Motion

no code implementations · 11 Oct 2022 · Yuxi Xiao, Li Li, Xiaodi Li, Jian Yao

In addition, to increase the robustness of our framework, we formulate the likelihood of the correlations of 2D image matches as a Gaussian and Uniform mixture distribution, which accounts for the uncertainty caused by illumination changes, image noise, and moving objects.
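A Gaussian + Uniform mixture of the kind described treats each residual as either an inlier (Gaussian around zero) or an outlier (uniform over a fixed support). A minimal sketch of the resulting negative log-likelihood follows; the parameter values are illustrative assumptions, not those used in the paper:

```python
import numpy as np

def mixture_nll(residuals, alpha=0.9, sigma=1.0, support=10.0):
    """Negative log-likelihood under a Gaussian + Uniform mixture.

    alpha:   inlier (Gaussian) mixing weight
    sigma:   Gaussian standard deviation
    support: width of the uniform outlier component
    """
    gauss = alpha * np.exp(-0.5 * (residuals / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    uniform = (1.0 - alpha) / support  # constant density for outliers
    return -np.log(gauss + uniform).sum()
```

The uniform component puts a floor under the density, so large outlier residuals are penalized only boundedly instead of quadratically, which is what makes the estimator robust.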

Tasks: 3D Reconstruction
