Search Results for author: Bastian Goldluecke

Found 14 papers, 4 papers with code

3D-MuPPET: 3D Multi-Pigeon Pose Estimation and Tracking

2 code implementations • 29 Aug 2023 • Urs Waldmann, Alex Hoi Hang Chan, Hemal Naik, Máté Nagy, Iain D. Couzin, Oliver Deussen, Bastian Goldluecke, Fumihiro Kano

To the best of our knowledge, we are the first to present a framework for 2D/3D animal posture and trajectory tracking that works in both indoor and outdoor environments for up to 10 individuals.

Pose Estimation

Towards Monocular Shape from Refraction

1 code implementation • 31 May 2023 • Antonin Sulc, Imari Sato, Bastian Goldluecke, Tali Treibitz

In contrast, we claim that a simple energy function based on Snell's law enables the reconstruction of an arbitrary refractive surface geometry using just a single image and known background texture and geometry.
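
For context, the constraint behind such an energy is Snell's law, which couples the incident and refracted ray directions at each surface point. A per-pixel energy of this flavour could be written as follows (an illustrative formulation with hypothetical notation, not necessarily the paper's exact energy):

```latex
% Snell's law at a surface point p with unit normal n(p):
%   n_1 \sin\theta_1 = n_2 \sin\theta_2
% Vector form, with unit viewing ray v(p) and unit ray t(p) towards the
% known background point observed at pixel p:
%   n_1 \, \big( n(p) \times v(p) \big) = n_2 \, \big( n(p) \times t(p) \big)
% Illustrative energy over the unknown surface S: sum of squared residuals
% of this constraint over all pixels.
E(S) = \sum_{p} \big\| \, n_1 \, \big( n(p) \times v(p) \big) - n_2 \, \big( n(p) \times t(p) \big) \, \big\|^2
```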

SupeRVol: Super-Resolution Shape and Reflectance Estimation in Inverse Volume Rendering

no code implementations • 9 Dec 2022 • Mohammed Brahimi, Bjoern Haefner, Tarun Yenamandra, Bastian Goldluecke, Daniel Cremers

We propose an end-to-end inverse rendering pipeline called SupeRVol that allows us to recover 3D shape and material parameters from a set of color images in a super-resolution manner.

Inverse Rendering • Super-Resolution
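
As background, inverse volume rendering pipelines of this kind typically differentiate through the standard volume rendering integral along a camera ray r(t) = o + t d, with volume density σ and view-dependent color c (standard formulation, not specific to SupeRVol):

```latex
C(\mathbf{r}) = \int_{t_n}^{t_f} T(t)\, \sigma(\mathbf{r}(t))\, \mathbf{c}(\mathbf{r}(t), \mathbf{d})\, \mathrm{d}t,
\qquad
T(t) = \exp\!\left( -\int_{t_n}^{t} \sigma(\mathbf{r}(s))\, \mathrm{d}s \right)
```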

Style Agnostic 3D Reconstruction via Adversarial Style Transfer

no code implementations • 20 Oct 2021 • Felix Petersen, Bastian Goldluecke, Oliver Deussen, Hilde Kuehne

Recently introduced differentiable renderers can be leveraged to learn the 3D geometry of objects from 2D images, but those approaches require additional supervision to enable the renderer to produce an output that can be compared to the input image.

3D Object Reconstruction • 3D Reconstruction • +3

Structure-from-Motion-Aware PatchMatch for Adaptive Optical Flow Estimation

no code implementations • ECCV 2018 • Daniel Maurer, Nico Marniok, Bastian Goldluecke, Andres Bruhn

To this end, we propose a novel structure-from-motion-aware PatchMatch approach that, in contrast to existing matching techniques, combines two hierarchical feature matching methods: a recent two-frame PatchMatch approach for optical flow estimation (general motion) and a specifically tailored three-frame PatchMatch approach for rigid scene reconstruction (SfM).

Optical Flow Estimation
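
For readers unfamiliar with PatchMatch-style matching, its core loop alternates spatial propagation of good neighbouring offsets with a randomized local search of shrinking radius. The single-scale NumPy sketch below illustrates the idea for 2D flow offsets; the `cost` callback is a hypothetical placeholder for any patch data term, and the sketch is not the authors' implementation:

```python
import numpy as np

def patchmatch_flow(cost, h, w, iters=5, search_radius=32, rng=None):
    """Minimal single-scale PatchMatch sketch for 2D flow offsets.

    cost(y, x, offset) -> float returns the patch matching cost of assigning
    the 2D `offset` to pixel (y, x); it stands in for any data term.
    """
    rng = rng or np.random.default_rng(0)
    # Random initialisation of the flow field.
    flow = rng.uniform(-search_radius, search_radius, size=(h, w, 2))

    for it in range(iters):
        # Alternate the scan direction so good offsets propagate both ways.
        step = 1 if it % 2 == 0 else -1
        ys = range(h) if step == 1 else range(h - 1, -1, -1)
        xs = range(w) if step == 1 else range(w - 1, -1, -1)
        for y in ys:
            for x in xs:
                best = cost(y, x, flow[y, x])
                # Propagation: try the offsets of already-visited neighbours.
                for dy, dx in ((-step, 0), (0, -step)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        c = cost(y, x, flow[ny, nx])
                        if c < best:
                            best, flow[y, x] = c, flow[ny, nx].copy()
                # Random search around the current offset with a
                # geometrically shrinking radius.
                radius = float(search_radius)
                while radius >= 1.0:
                    cand = flow[y, x] + rng.uniform(-radius, radius, size=2)
                    c = cost(y, x, cand)
                    if c < best:
                        best, flow[y, x] = c, cand
                    radius /= 2.0
    return flow
```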

Light Field Intrinsics With a Deep Encoder-Decoder Network

no code implementations • CVPR 2018 • Anna Alperovich, Ole Johannsen, Michael Strecke, Bastian Goldluecke

We present a fully convolutional autoencoder for light fields, which jointly encodes stacks of horizontal and vertical epipolar plane images through a deep network of residual layers.

Disparity Estimation • Lightfield
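
As a rough illustration of the encoder-decoder idea, a convolutional autoencoder over a stack of epipolar plane images might look like the PyTorch sketch below; the layer counts, channel widths, and the absence of residual layers are assumptions of this sketch, not the paper's architecture:

```python
import torch
import torch.nn as nn

class EPIAutoencoder(nn.Module):
    """Minimal convolutional autoencoder over a stack of epipolar plane images.

    Input shape: (batch, n_views * 3, height, width), i.e. the EPI stack is
    flattened into the channel dimension (an assumption of this sketch).
    """
    def __init__(self, in_channels=9 * 3, latent_channels=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, latent_channels, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(latent_channels, 32, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, in_channels, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, epi_stack):
        code = self.encoder(epi_stack)  # shared latent representation
        return self.decoder(code)       # reconstruction (or another decoder head)

# Usage sketch: two horizontal EPI stacks from a 9-view light field.
x = torch.randn(2, 9 * 3, 64, 128)
print(EPIAutoencoder()(x).shape)  # torch.Size([2, 27, 64, 128])
```

The actual network presumably decodes the shared code into disparity and the separate intrinsic components; the sketch shows only a single reconstruction head for brevity.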

Accurate Depth and Normal Maps From Occlusion-Aware Focal Stack Symmetry

no code implementations • CVPR 2017 • Michael Strecke, Anna Alperovich, Bastian Goldluecke

We introduce a novel approach to jointly estimate consistent depth and normal maps from 4D light fields, with two main contributions.

What Sparse Light Field Coding Reveals About Scene Structure

no code implementations • CVPR 2016 • Ole Johannsen, Antonin Sulc, Bastian Goldluecke

In this paper, we propose a novel method for depth estimation in light fields which employs a specifically designed sparse decomposition to leverage the depth-orientation relationship on its epipolar plane images.

Depth Estimation
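
The depth-orientation relationship referred to above is the standard property of epipolar plane images: a Lambertian scene point at depth Z traces a line in the EPI whose slope is its disparity, so orientation directly encodes depth. With focal length f and camera baseline b (and up to sign convention):

```latex
d \;=\; \frac{f\, b}{Z}, \qquad \Delta x \;=\; d \, \Delta s
```

Here \Delta s is the displacement of the viewpoint along the camera line and \Delta x the corresponding shift of the point's image coordinate, so points at constant depth appear as lines of constant orientation in the (x, s)-slice.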

Bayesian View Synthesis and Image-Based Rendering Principles

no code implementations • CVPR 2014 • Sergi Pujades, Frederic Devernay, Bastian Goldluecke

In this paper, we address the problem of synthesizing novel views from a set of input images.

The Variational Structure of Disparity and Regularization of 4D Light Fields

no code implementations • CVPR 2013 • Bastian Goldluecke, Sven Wanner

Unlike traditional images which do not offer information for different directions of incident light, a light field is defined on ray space, and implicitly encodes scene geometry data in a rich structure which becomes visible on its epipolar plane images.
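
Concretely, in the common two-plane parameterization the 4D light field assigns a radiance value to every ray identified by image coordinates (x, y) and view coordinates (s, t); an epipolar plane image is the 2D slice obtained by fixing one image coordinate and one view coordinate (standard notation, not specific to this paper):

```latex
L : (x, y, s, t) \;\longmapsto\; L(x, y, s, t) \in \mathbb{R},
\qquad
E_{y^*, t^*}(x, s) \;:=\; L(x, y^*, s, t^*)
```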
