1 code implementation • CVPR 2023 • Zhanghao Sun, Wei Ye, Jinhui Xiong, Gyeongmin Choe, Jialiang Wang, Shuochen Su, Rakesh Ranjan
We believe the methods and dataset will benefit a broad community, as dToF depth sensing is becoming mainstream on mobile devices.
1 code implementation • CVPR 2022 • Cho-Ying Wu, Jialiang Wang, Michael Hall, Ulrich Neumann, Shuochen Su
The majority of prior monocular depth estimation methods without ground-truth depth guidance focus on driving scenarios.
no code implementations • CVPR 2018 • Shuochen Su, Felix Heide, Gordon Wetzstein, Wolfgang Heidrich
We present an end-to-end image processing framework for time-of-flight (ToF) cameras.
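For context, a learned end-to-end ToF pipeline replaces the classic hand-designed depth decoding stage. Below is a minimal numpy sketch of the conventional four-phase continuous-wave depth computation that such pipelines supersede; the sign convention and variable names are illustrative, not taken from the paper.

```python
import numpy as np

C = 299_792_458.0  # speed of light (m/s)

def tof_depth_four_phase(c0, c1, c2, c3, f_mod=20e6):
    """Classic four-bucket continuous-wave ToF depth decoding.

    c0..c3: correlation images sampled at 0/90/180/270 degree
    phase offsets (sign conventions vary between sensors).
    """
    phase = np.arctan2(c3 - c1, c0 - c2)      # wrapped to (-pi, pi]
    phase = np.mod(phase, 2 * np.pi)          # remap to [0, 2*pi)
    depth = C * phase / (4 * np.pi * f_mod)   # one wrap = C/(2*f_mod) meters
    amplitude = 0.5 * np.sqrt((c3 - c1) ** 2 + (c0 - c2) ** 2)
    return depth, amplitude
```

At 20 MHz modulation the unambiguous range is C/(2*f_mod), roughly 7.5 m, which is one reason learned pipelines that reason about phase wrapping and multipath are attractive.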
1 code implementation • CVPR 2017 • Shuochen Su, Mauricio Delbracio, Jue Wang, Guillermo Sapiro, Wolfgang Heidrich, Oliver Wang
We show that features learned from this dataset generalize to deblurring motion blur caused by camera shake in a wide range of videos, and we compare the quality of our results against a number of baselines.
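The core idea in this line of work is to aggregate information from neighboring video frames. Below is a minimal PyTorch sketch of frame stacking for deblurring: several frames are concatenated channel-wise and a small CNN regresses the sharp center frame. This is only a schematic of the idea, far smaller than any published architecture.

```python
import torch
import torch.nn as nn

class TinyVideoDeblur(nn.Module):
    """Toy frame-stacking deblur net: n_frames RGB frames in, sharp center frame out."""
    def __init__(self, n_frames=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 * n_frames, 64, 5, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 5, padding=2),
        )

    def forward(self, frames):  # frames: (B, n_frames, 3, H, W)
        b, n, c, h, w = frames.shape
        x = frames.reshape(b, n * c, h, w)   # stack frames channel-wise
        residual = frames[:, n // 2]         # blurry center frame
        return residual + self.net(x)        # predict sharp center frame

# usage: out = TinyVideoDeblur()(torch.rand(1, 5, 3, 64, 64))
```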
no code implementations • CVPR 2016 • Shuochen Su, Felix Heide, Robin Swanson, Jonathan Klein, Clara Callenberg, Matthias Hullin, Wolfgang Heidrich
We propose a material classification method using raw time-of-flight (ToF) measurements.
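As a rough illustration of the setup (not the paper's method), one can treat each pixel's vector of raw correlation measurements as a feature and train a standard classifier on it. The sketch below uses synthetic stand-in data and scikit-learn's logistic regression; intensity normalization is included because raw correlation amplitude depends on reflectivity as well as material.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy stand-in data: each sample is a vector of raw ToF correlation
# measurements taken at several modulation settings; real experiments
# would use captures of known materials instead of random noise.
rng = np.random.default_rng(0)
n_samples, n_measurements, n_materials = 600, 32, 4
X = rng.normal(size=(n_samples, n_measurements))
y = rng.integers(0, n_materials, size=n_samples)

# Normalize out overall intensity, since raw amplitude reflects
# albedo and distance, not just material response.
X = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-8)

clf = LogisticRegression(max_iter=1000).fit(X, y)
print("train accuracy:", clf.score(X, y))
```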
no code implementations • CVPR 2015 • Shuochen Su, Wolfgang Heidrich
Although motion blur and rolling-shutter deformations are closely coupled artifacts in images taken with CMOS image sensors, the two phenomena have so far mostly been treated separately: deblurring algorithms cannot handle rolling-shutter wobble, and rolling-shutter correction algorithms cannot deal with motion blur.
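The coupling is easy to see in the image formation: every row of a rolling-shutter sensor starts exposing at a slightly different time, so a single camera trajectory produces both row-dependent skew and motion blur. A minimal numpy simulation for a horizontal pan, with illustrative parameter values, is sketched below.

```python
import numpy as np

def simulate_rs_blur(frame, velocity_px=2.0, exposure=8, line_delay=0.25):
    """Simulate coupled rolling-shutter skew and motion blur.

    frame: 2-D grayscale image (H, W). Row y starts exposing at time
    y * line_delay and integrates a scene translating horizontally at
    `velocity_px` pixels per time unit for `exposure` time steps.
    Skew (rolling shutter) and streaking (motion blur) both come from
    the same trajectory, which is why the artifacts are coupled.
    """
    h, w = frame.shape
    out = np.zeros_like(frame, dtype=np.float64)
    for y in range(h):
        t0 = y * line_delay
        for dt in range(exposure):
            shift = int(round(velocity_px * (t0 + dt)))
            out[y] += np.roll(frame[y], -shift)
        out[y] /= exposure
    return out
```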
no code implementations • CVPR 2013 • Chen Li, Shuochen Su, Yasuyuki Matsushita, Kun Zhou, Stephen Lin
We present a method that enhances the performance of depth-from-defocus (DFD) through the use of shading information.
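DFD's basic cue is the thin-lens relation between blur size and depth, which shading information then helps disambiguate and refine. A small sketch of that relation, with illustrative focal length and aperture values, is below.

```python
def blur_diameter(d, d_focus, f=0.05, aperture=0.025):
    """Thin-lens blur-circle diameter (meters on the sensor) for an
    object at distance d when the lens is focused at d_focus:

        b = A * f * |d - d_focus| / (d * (d_focus - f))

    DFD inverts this relation: measured blur size constrains depth,
    up to the front/back-of-focus ambiguity from the absolute value.
    """
    return aperture * f * abs(d - d_focus) / (d * (d_focus - f))

# example: lens focused at 1 m, object at 2 m
print(blur_diameter(2.0, 1.0))  # ~6.6e-4 m blur circle
```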