Search Results for author: Sergey Prokudin

Found 14 papers, 10 papers with code

SplatFormer: Point Transformer for Robust 3D Gaussian Splatting

1 code implementation • 10 Nov 2024 • Yutong Chen, Marko Mihajlovic, Xiyi Chen, Yiming Wang, Sergey Prokudin, Siyu Tang

To our knowledge, this is the first successful application of point transformers directly on 3DGS sets, surpassing the limitations of previous multi-scene training methods, which could handle only a restricted number of input views during inference.

Novel View Synthesis

FreSh: Frequency Shifting for Accelerated Neural Representation Learning

1 code implementation • 7 Oct 2024 • Adam Kania, Marko Mihajlovic, Sergey Prokudin, Jacek Tabor, Przemysław Spurek

Implicit Neural Representations (INRs) have recently gained attention as a powerful approach for continuously representing signals such as images, videos, and 3D shapes using multilayer perceptrons (MLPs).

Representation Learning
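
The INR formulation referenced in the FreSh abstract boils down to a coordinate MLP: a network maps input coordinates to signal values and is fit by regressing the observed signal. Below is a minimal PyTorch sketch; the layer sizes and plain-ReLU architecture are illustrative assumptions, and FreSh's actual frequency-shifting of the input embedding is not shown.

```python
import torch
import torch.nn as nn

class CoordinateMLP(nn.Module):
    """Minimal implicit neural representation: maps (x, y) coordinates to RGB."""
    def __init__(self, in_dim=2, hidden=256, out_dim=3, layers=4):
        super().__init__()
        blocks, dim = [], in_dim
        for _ in range(layers):
            blocks += [nn.Linear(dim, hidden), nn.ReLU()]
            dim = hidden
        blocks.append(nn.Linear(dim, out_dim))
        self.net = nn.Sequential(*blocks)

    def forward(self, coords):           # coords: (N, 2) in [-1, 1]
        return self.net(coords)          # (N, 3) predicted pixel values

# Fit the MLP to one image by regressing pixel colors at sampled coordinates.
model = CoordinateMLP()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
coords = torch.rand(1024, 2) * 2 - 1     # random sample of pixel locations (placeholder)
target = torch.rand(1024, 3)             # their ground-truth colors (placeholder)
loss = ((model(coords) - target) ** 2).mean()
loss.backward()
optimizer.step()
```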

RISE-SDF: a Relightable Information-Shared Signed Distance Field for Glossy Object Inverse Rendering

no code implementations • 30 Sep 2024 • Deheng Zhang, Jingyu Wang, Shaofei Wang, Marko Mihajlovic, Sergey Prokudin, Hendrik P. A. Lensch, Siyu Tang

Our experiments demonstrate that our algorithm achieves state-of-the-art performance in inverse rendering and relighting, with particularly strong results in the reconstruction of highly reflective objects.

Inverse Rendering

SplatFields: Neural Gaussian Splats for Sparse 3D and 4D Reconstruction

1 code implementation • 17 Sep 2024 • Marko Mihajlovic, Sergey Prokudin, Siyu Tang, Robert Maier, Federica Bogo, Tony Tung, Edmond Boyer

Digitizing 3D static scenes and 4D dynamic events from multi-view images has long been a challenge in computer vision and graphics.

4D reconstruction

Degrees of Freedom Matter: Inferring Dynamics from Point Trajectories

no code implementations • CVPR 2024 • Yan Zhang, Sergey Prokudin, Marko Mihajlovic, Qianli Ma, Siyu Tang

By observing a set of point trajectories, we aim to learn an implicit motion field parameterized by a neural network to predict the movement of novel points within the same domain, without relying on any data-driven or scene-specific priors.
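
The setup described above, an implicit motion field fit to observed point trajectories, can be sketched as an MLP that maps a point position and a timestamp to a displacement. The architecture, time encoding, and loss below are illustrative assumptions, not the paper's actual model.

```python
import torch
import torch.nn as nn

class MotionField(nn.Module):
    """Implicit motion field: (point position, time) -> displacement."""
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, x, t):
        # x: (N, 3) positions, t: (N, 1) timestamps
        return self.net(torch.cat([x, t], dim=-1))

# Fit to observed trajectories: predict each point's next-frame displacement.
field = MotionField()
opt = torch.optim.Adam(field.parameters(), lr=1e-3)
traj = torch.randn(16, 10, 3)                    # 16 tracked points over 10 frames (placeholder data)
times = torch.linspace(0, 1, 10)
for step in range(100):
    t_idx = torch.randint(0, 9, (1,)).item()
    x = traj[:, t_idx]                           # positions at frame t
    gt_disp = traj[:, t_idx + 1] - x             # observed displacement to frame t+1
    t = times[t_idx].expand(x.shape[0], 1)
    loss = ((field(x, t) - gt_disp) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Once trained, the field can be queried at novel points within the same domain.
```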

Morphable Diffusion: 3D-Consistent Diffusion for Single-image Avatar Creation

1 code implementation • CVPR 2024 • Xiyi Chen, Marko Mihajlovic, Shaofei Wang, Sergey Prokudin, Siyu Tang

To the best of our knowledge, our proposed framework is the first diffusion model to enable the creation of fully 3D-consistent, animatable, and photorealistic human avatars from a single image of an unseen subject. Extensive quantitative and qualitative evaluations demonstrate the advantages of our approach over existing state-of-the-art avatar creation models on both novel view and novel expression synthesis tasks.

Novel View Synthesis

ResFields: Residual Neural Fields for Spatiotemporal Signals

1 code implementation • 6 Sep 2023 • Marko Mihajlovic, Sergey Prokudin, Marc Pollefeys, Siyu Tang

Neural fields, a category of neural networks trained to represent high-frequency signals, have gained significant attention in recent years due to their impressive performance in modeling complex 3D data, such as signed distance fields (SDFs) or radiance fields (NeRFs), via a single multi-layer perceptron (MLP).

4D Reconstruction • Neural Rendering
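
The abstract excerpt covers the neural-field background; the residual idea suggested by the title can be pictured as MLP layers whose weights receive a time-indexed, low-rank residual. The sketch below is a hypothetical illustration of that pattern and does not reproduce the paper's exact parameterization.

```python
import torch
import torch.nn as nn

class ResidualLinear(nn.Module):
    """Linear layer with a time-indexed, low-rank residual added to its weights."""
    def __init__(self, in_dim, out_dim, n_frames, rank=8):
        super().__init__()
        self.base = nn.Linear(in_dim, out_dim)
        # Low-rank residual: per-frame coefficients times a shared weight basis.
        self.coeff = nn.Parameter(torch.zeros(n_frames, rank))
        self.basis = nn.Parameter(torch.randn(rank, out_dim, in_dim) * 0.01)

    def forward(self, x, frame_id):
        # x: (N, in_dim), frame_id: integer index of the time step
        delta_w = torch.einsum('r,roi->oi', self.coeff[frame_id], self.basis)
        return torch.nn.functional.linear(x, self.base.weight + delta_w, self.base.bias)

layer = ResidualLinear(in_dim=3, out_dim=64, n_frames=100)
out = layer(torch.randn(32, 3), frame_id=5)   # (32, 64)
```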

Dynamic Point Fields

1 code implementation • ICCV 2023 • Sergey Prokudin, Qianli Ma, Maxime Raafat, Julien Valentin, Siyu Tang

In this work, we present a dynamic point field model that combines the representational benefits of explicit point-based graphics with implicit deformation networks to allow efficient modeling of non-rigid 3D surfaces.

Surface Reconstruction
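
The combination described above, an explicit point cloud warped by an implicit deformation network, can be sketched as an MLP that predicts per-point offsets for a canonical cloud. The network size, placeholder data, and simple correspondence loss below are assumptions made for illustration, not the paper's training setup.

```python
import torch
import torch.nn as nn

class DeformationNet(nn.Module):
    """Implicit deformation: canonical point -> offset toward a deformed pose."""
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, pts):               # pts: (N, 3) canonical surface points
        return pts + self.net(pts)        # deformed point positions

# Fit the deformation so the explicit canonical cloud matches a target frame.
net = DeformationNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
canonical = torch.randn(2048, 3)          # explicit source point cloud (placeholder)
target = canonical + 0.1                  # placeholder target with known correspondence
for _ in range(200):
    loss = ((net(canonical) - target) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```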

HARP: Personalized Hand Reconstruction from a Monocular RGB Video

no code implementations • CVPR 2023 • Korrawe Karunratanakul, Sergey Prokudin, Otmar Hilliges, Siyu Tang

We present HARP (HAnd Reconstruction and Personalization), a personalized hand avatar creation approach that takes a short monocular RGB video of a human hand as input and reconstructs a faithful hand avatar exhibiting a high-fidelity appearance and geometry.

3D Hand Pose Estimation

SMPLpix: Neural Avatars from 3D Human Models

1 code implementation • 16 Aug 2020 • Sergey Prokudin, Michael J. Black, Javier Romero

Recent advances in deep generative models have led to an unprecedented level of realism for synthetically generated images of humans.

3D geometry

Real Time Trajectory Prediction Using Deep Conditional Generative Models

1 code implementation • 9 Sep 2019 • Sebastian Gomez-Gonzalez, Sergey Prokudin, Bernhard Schölkopf, Jan Peters

Our method uses encoder and decoder deep networks that map complete or partial trajectories to a Gaussian-distributed latent space and back, allowing for fast inference of the future values of a trajectory given previous observations.

Decoder • Time Series • +2
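
The encoder/decoder with a Gaussian-distributed latent space described above is essentially a variational autoencoder over trajectories. The sketch below shows that structure in a minimal, hypothetical form; the trajectory length, dimensionality, and omission of the paper's handling of partial observations are simplifications.

```python
import torch
import torch.nn as nn

class TrajectoryVAE(nn.Module):
    """Encode a trajectory into a Gaussian latent, decode the full trajectory back."""
    def __init__(self, traj_len=30, dim=2, latent=16, hidden=128):
        super().__init__()
        flat = traj_len * dim
        self.encoder = nn.Sequential(nn.Linear(flat, hidden), nn.ReLU())
        self.to_mu = nn.Linear(hidden, latent)
        self.to_logvar = nn.Linear(hidden, latent)
        self.decoder = nn.Sequential(
            nn.Linear(latent, hidden), nn.ReLU(), nn.Linear(hidden, flat))
        self.traj_len, self.dim = traj_len, dim

    def forward(self, traj):                       # traj: (B, traj_len, dim)
        h = self.encoder(traj.flatten(1))
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization
        recon = self.decoder(z).view(-1, self.traj_len, self.dim)
        return recon, mu, logvar

model = TrajectoryVAE()
traj = torch.randn(8, 30, 2)                       # placeholder batch of 2D trajectories
recon, mu, logvar = model(traj)
kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).mean()
loss = ((recon - traj) ** 2).mean() + 1e-3 * kl    # reconstruction + KL regularizer
```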

Efficient Learning on Point Clouds with Basis Point Sets

1 code implementation • ICCV 2019 • Sergey Prokudin, Christoph Lassner, Javier Romero

The basis point set representation is a residual representation that can be computed efficiently and can be used with standard neural network architectures and other machine learning algorithms.

BIG-bench Machine Learning
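
The basis point set encoding described above amounts to fixing a set of basis points and recording, for each one, the distance to its nearest neighbor in the input cloud, yielding a fixed-length vector regardless of cloud size. A minimal NumPy/SciPy sketch follows; the basis size and sampling scheme are arbitrary choices for illustration.

```python
import numpy as np
from scipy.spatial import cKDTree

def bps_encode(points, basis):
    """Encode a point cloud as distances from fixed basis points to their
    nearest neighbors in the cloud (the residual BPS representation)."""
    tree = cKDTree(points)
    dists, _ = tree.query(basis, k=1)   # nearest-cloud-point distance per basis point
    return dists                        # fixed-length vector of len(basis)

rng = np.random.default_rng(0)
basis = rng.uniform(-1, 1, size=(512, 3))    # fixed random basis, shared across all clouds
cloud = rng.uniform(-1, 1, size=(5000, 3))   # an input point cloud (placeholder)
feature = bps_encode(cloud, basis)           # shape (512,), usable by any standard classifier or MLP
```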

Deep Directional Statistics: Pose Estimation with Uncertainty Quantification

1 code implementation • ECCV 2018 • Sergey Prokudin, Peter Gehler, Sebastian Nowozin

However, in challenging imaging conditions, such as low-resolution images or images corrupted by imaging artifacts, current systems degrade considerably in accuracy.

Deep Learning • Pose Estimation • +2
