Search Results for author: Simon Niklaus

Found 13 papers, 8 papers with code

Towards Domain-agnostic Depth Completion

1 code implementation 29 Jul 2022 Wei Yin, Jianming Zhang, Oliver Wang, Simon Niklaus, Simon Chen, Chunhua Shen

Our method leverages a data-driven prior in the form of a single-image depth prediction network trained on large-scale datasets, the output of which is used as an input to our model.

Depth Completion · Depth Estimation · +2

Many-to-many Splatting for Efficient Video Frame Interpolation

1 code implementation CVPR 2022 Ping Hu, Simon Niklaus, Stan Sclaroff, Kate Saenko

Motion-based video frame interpolation commonly relies on optical flow to warp pixels from the inputs to the desired interpolation instant.
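
The backward-warping step described here can be sketched in a few lines of numpy: each target pixel samples the source image at its own position displaced by the flow vector, with bilinear interpolation. This is a generic illustration of flow-based warping, not code from the paper, and the function name is mine.

```python
import numpy as np

def backward_warp(image, flow):
    """Backward-warp a grayscale image: each target pixel samples the source
    at its position plus the flow vector, using bilinear interpolation."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    sx = np.clip(xs + flow[..., 0], 0, w - 1)  # source x per target pixel
    sy = np.clip(ys + flow[..., 1], 0, h - 1)  # source y per target pixel
    x0, y0 = np.floor(sx).astype(int), np.floor(sy).astype(int)
    x1, y1 = np.minimum(x0 + 1, w - 1), np.minimum(y0 + 1, h - 1)
    wx, wy = sx - x0, sy - y0
    top = (1 - wx) * image[y0, x0] + wx * image[y0, x1]
    bot = (1 - wx) * image[y1, x0] + wx * image[y1, x1]
    return (1 - wy) * top + wy * bot
```

Backward warping is simple because every target pixel is defined, but it needs the flow from the interpolation instant to the inputs; the "many-to-many splatting" of this paper instead warps forward from the inputs.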

Motion Estimation · Optical Flow Estimation · +1

Splatting-based Synthesis for Video Frame Interpolation

no code implementations 25 Jan 2022 Simon Niklaus, Ping Hu, Jiawen Chen

Specifically, splatting can be used to warp the input images to an arbitrary temporal location based on an optical flow estimate.

Optical Flow Estimation · Video Frame Interpolation

Learning to Recover 3D Scene Shape from a Single Image

1 code implementation CVPR 2021 Wei Yin, Jianming Zhang, Oliver Wang, Simon Niklaus, Long Mai, Simon Chen, Chunhua Shen

Despite significant progress in monocular depth estimation in the wild, recent state-of-the-art methods cannot recover accurate 3D scene shape: the shift-invariant reconstruction losses used in mixed-data depth prediction training induce an unknown depth shift, and the camera focal length may also be unknown.

Ranked #1 on Monocular Depth Estimation on NYU-Depth V2 (absolute relative error metric, using extra training data)

3D Scene Reconstruction · Depth Prediction · +3

Neural Scene Flow Fields for Space-Time View Synthesis of Dynamic Scenes

3 code implementations CVPR 2021 Zhengqi Li, Simon Niklaus, Noah Snavely, Oliver Wang

We present a method to perform novel view and time synthesis of dynamic scenes, requiring only a monocular video with known camera poses as input.

Revisiting Adaptive Convolutions for Video Frame Interpolation

no code implementations 2 Nov 2020 Simon Niklaus, Long Mai, Oliver Wang

Video frame interpolation, the synthesis of novel views in time, is an increasingly popular research direction with many new papers further advancing the state of the art.

Image Denoising · Video Frame Interpolation · +1

Learned Dual-View Reflection Removal

no code implementations 1 Oct 2020 Simon Niklaus, Xuaner Cecilia Zhang, Jonathan T. Barron, Neal Wadhwa, Rahul Garg, Feng Liu, Tianfan Xue

Traditional reflection removal algorithms either use a single image as input, which suffers from intrinsic ambiguities, or use multiple images from a moving camera, which is inconvenient for users.

Reflection Removal

Softmax Splatting for Video Frame Interpolation

2 code implementations CVPR 2020 Simon Niklaus, Feng Liu

In contrast, how to perform forward warping has seen less attention, partly due to additional challenges such as resolving the conflict of mapping multiple pixels to the same target location in a differentiable way.
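
The core idea of softmax splatting is that when several source pixels forward-warp onto the same target location, they are blended with softmax weights derived from a per-pixel importance map (e.g. inverse depth), which keeps the conflict resolution differentiable. The sketch below uses nearest-neighbor splatting for brevity, whereas the paper splats bilinearly; names and shapes are illustrative, not taken from the paper's code.

```python
import numpy as np

def softmax_splat(image, flow, importance):
    """Forward-warp a grayscale image along a flow field; pixels that collide
    at the same target location are softmax-blended by their importance."""
    h, w = image.shape
    num = np.zeros((h, w), dtype=np.float64)
    den = np.zeros((h, w), dtype=np.float64)
    for y in range(h):
        for x in range(w):
            tx = int(round(x + flow[y, x, 0]))  # nearest-neighbor target
            ty = int(round(y + flow[y, x, 1]))
            if 0 <= tx < w and 0 <= ty < h:
                wgt = np.exp(importance[y, x])
                num[ty, tx] += wgt * image[y, x]
                den[ty, tx] += wgt
    return num / np.maximum(den, 1e-8)
```

With a large importance gap, the higher-importance pixel dominates a collision, which is how occlusions get resolved softly rather than by a hard z-buffer.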

Depth Estimation · Optical Flow Estimation · +1

3D Ken Burns Effect from a Single Image

4 code implementations 12 Sep 2019 Simon Niklaus, Long Mai, Jimei Yang, Feng Liu

According to this depth estimate, our framework then maps the input image to a point cloud and synthesizes the resulting video frames by rendering the point cloud from the corresponding camera positions.
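
Mapping a depth map to a point cloud is a per-pixel unprojection through the camera model. A hedged numpy sketch under a pinhole camera with the principal point assumed at the image center (the paper's actual camera parameters may differ):

```python
import numpy as np

def depth_to_points(depth, focal):
    """Unproject an H x W depth map into an H x W x 3 point cloud using a
    pinhole camera: X = (u - cx) * Z / f, Y = (v - cy) * Z / f."""
    h, w = depth.shape
    vs, us = np.mgrid[0:h, 0:w].astype(np.float64)
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0  # assumed principal point
    pts_x = (us - cx) * depth / focal
    pts_y = (vs - cy) * depth / focal
    return np.stack([pts_x, pts_y, depth], axis=-1)
```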

Depth Estimation · Depth Prediction

Context-aware Synthesis for Video Frame Interpolation

no code implementations CVPR 2018 Simon Niklaus, Feng Liu

Finally, unlike common approaches that blend the pre-warped frames, our method feeds them and their context maps to a video frame synthesis neural network to produce the interpolated frame in a context-aware fashion.

Optical Flow Estimation · Video Frame Interpolation

Video Frame Interpolation via Adaptive Separable Convolution

6 code implementations ICCV 2017 Simon Niklaus, Long Mai, Feng Liu

Our method develops a deep fully convolutional neural network that takes two input frames and estimates pairs of 1D kernels for all pixels simultaneously.
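
The separable trick is that an n×n 2D kernel is factored as the outer product of a vertical and a horizontal 1D kernel, so the network only predicts 2n values per pixel instead of n². A minimal numpy sketch for a single output pixel (names are mine, and this omits the network that estimates the kernels):

```python
import numpy as np

def separable_conv_pixel(patch, k_v, k_h):
    """Synthesize one output pixel from an n x n input patch: the effective
    2D kernel is the outer product of the 1D kernels k_v and k_h."""
    return float(np.sum(np.outer(k_v, k_h) * patch))
```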

Optical Flow Estimation · Video Frame Interpolation
