no code implementations • ECCV 2020 • Guan-Ying Chen, Michael Waechter, Boxin Shi, Kwan-Yee K. Wong, Yasuyuki Matsushita
Based on this insight, we propose a guided calibration network, named GCNet, that explicitly leverages object shape and shading information for improved lighting estimation.
no code implementations • ECCV 2020 • Daichi Iwata, Michael Waechter, Wen-Yan Lin, Yasuyuki Matsushita
This paper studies the problem of sparse residual regression, i.e., learning a linear model using a norm that favors solutions in which the residuals are sparsely distributed.
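As a point of reference, one common instantiation of this idea is least-absolute-deviations regression, where an $\ell_1$ norm on the residual vector favors fits in which only a few observations carry large errors while the rest are matched closely (a generic formulation, not necessarily the norm studied in the paper):

```latex
% Sparse-residual (least-absolute-deviations) linear regression:
\min_{\mathbf{w}} \; \|\mathbf{y} - X\mathbf{w}\|_{1}
  \;=\; \min_{\mathbf{w}} \sum_{i} \bigl|\, y_i - \mathbf{x}_i^{\top}\mathbf{w} \,\bigr| .
```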
no code implementations • ECCV 2020 • Hiroaki Santo, Michael Waechter, Yasuyuki Matsushita
This paper presents a near-light photometric stereo method for spatially varying reflectances.
1 code implementation • 15 Mar 2025 • William Louis Rothman, Yasuyuki Matsushita
Finding a balance between artistic beauty and machine-generated imagery is always a difficult task.
no code implementations • CVPR 2024 • Heng Guo, Jieji Ren, Feishi Wang, Boxin Shi, Mingjun Ren, Yasuyuki Matsushita
Photometric stereo faces challenges from non-Lambertian reflectance in real-world scenarios.
no code implementations • CVPR 2024 • Hiroaki Santo, Fumio Okura, Yasuyuki Matsushita
Multi-view photometric stereo (MVPS) recovers a high-fidelity 3D shape of a scene by benefiting from both multi-view stereo and photometric stereo.
1 code implementation • CVPR 2023 • Xu Cao, Hiroaki Santo, Fumio Okura, Yasuyuki Matsushita
We present a method for 3D reconstruction only using calibrated multi-view surface azimuth maps.
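For context, a surface azimuth map records only the in-image-plane orientation of the surface normal at each pixel; in generic notation (my choice of convention, not necessarily the paper's):

```latex
% Azimuth angle of a unit surface normal n = (n_x, n_y, n_z),
% i.e., the direction of its projection onto the image plane:
\phi = \operatorname{atan2}(n_y,\; n_x) \in (-\pi, \pi] .
```

Such maps can be obtained, for example, from polarization imaging, typically only up to a $\pi$ ambiguity, which is part of what makes reconstruction from azimuth alone nontrivial.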
no code implementations • 11 Jul 2022 • Heng Guo, Hiroaki Santo, Boxin Shi, Yasuyuki Matsushita
This paper presents a near-light photometric stereo method that faithfully preserves sharp depth edges in the 3D reconstruction.
no code implementations • CVPR 2021 • Junxuan Li, Hongdong Li, Yasuyuki Matsushita
We propose a method for estimating high-definition spatially-varying lighting, reflectance, and geometry of a scene from 360° stereo images.
1 code implementation • CVPR 2021 • Xu Cao, Boxin Shi, Fumio Okura, Yasuyuki Matsushita
Experimental results on analytically computed, synthetic, and real-world surfaces show that our method yields accurate and stable reconstruction for both orthographic and perspective normal maps.
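For background on what integrating "orthographic and perspective normal maps" involves, the textbook depth-normal relations are as follows (standard results, with image coordinates $(u, v)$ measured from the principal point and focal length $f$; not necessarily the paper's exact parameterization):

```latex
% Orthographic projection: the depth map z(u, v) satisfies
\frac{\partial z}{\partial u} = -\frac{n_x}{n_z}, \qquad
\frac{\partial z}{\partial v} = -\frac{n_y}{n_z}.

% Perspective projection: the log-depth \tilde{z} = \log z satisfies
\frac{\partial \tilde{z}}{\partial u} = \frac{-n_x}{u\,n_x + v\,n_y + f\,n_z}, \qquad
\frac{\partial \tilde{z}}{\partial v} = \frac{-n_y}{u\,n_x + v\,n_y + f\,n_z}.
```

Normal integration then amounts to solving these relations, typically in a least-squares sense, for $z$ or $\log z$ from the per-pixel normals $\mathbf{n} = (n_x, n_y, n_z)$.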
1 code implementation • CVPR 2021 • Heng Guo, Fumio Okura, Boxin Shi, Takuya Funatomi, Yasuhiro Mukaigawa, Yasuyuki Matsushita
To make the problem well-posed, existing MPS methods rely on restrictive assumptions, such as a shape prior or surfaces with a monochromatic, uniform albedo.
1 code implementation • IEEE Transactions on Pattern Analysis and Machine Intelligence 2021 • Wen-Yan Lin, Siying Liu, Changhao Ren, Ngai-Man Cheung, Hongdong Li, Yasuyuki Matsushita
The foundational assumption of machine learning is that the data under consideration is separable into classes; while intuitively reasonable, separability constraints have proven remarkably difficult to formulate mathematically.
Ranked #1 on Unsupervised Anomaly Detection with Specified Settings -- 10% anomaly on STL-10 (using extra training data)
no code implementations • 20 Apr 2021 • Junxuan Li, Hongdong Li, Yasuyuki Matsushita
We propose a method for estimating high-definition spatially-varying lighting, reflectance, and geometry of a scene from 360° stereo images.
no code implementations • ICCV 2021 • Feiran Li, Kent Fujiwara, Fumio Okura, Yasuyuki Matsushita
Therefore, in this work, we generalize the formulation of shuffled linear regression to a broader range of conditions where only part of the data should correspond.
no code implementations • ICCV 2021 • Feiran Li, Kent Fujiwara, Fumio Okura, Yasuyuki Matsushita
Recent progress in rotation-invariant point cloud analysis is mainly driven by converting point clouds into their respective canonical poses, and principal component analysis (PCA) is a practical tool to achieve this.
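A minimal NumPy sketch of the PCA-based canonicalization referred to here (the generic recipe under my own conventions; practical pipelines must also resolve the eigenvector sign and ordering ambiguities more carefully than this sketch does):

```python
import numpy as np

def pca_canonicalize(points: np.ndarray) -> np.ndarray:
    """Rotate a point cloud of shape (N, 3) into a PCA-based canonical pose."""
    centered = points - points.mean(axis=0)
    # 3x3 covariance of the point coordinates.
    cov = centered.T @ centered / len(points)
    # Eigenvectors sorted by descending eigenvalue form the canonical axes.
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]
    R = eigvecs[:, order]
    # Force a right-handed frame so R is a proper rotation.
    if np.linalg.det(R) < 0:
        R[:, -1] *= -1
    return centered @ R  # coordinates expressed in the canonical frame

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cloud = rng.normal(size=(1000, 3)) * np.array([3.0, 1.0, 0.3])
    print(pca_canonicalize(cloud).std(axis=0))  # axes ordered by spread
```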
no code implementations • 27 Nov 2020 • Takuma Doi, Fumio Okura, Toshiki Nagahara, Yasuyuki Matsushita, Yasushi Yagi
This paper proposes a multi-view extension of instance segmentation without relying on texture or shape descriptor matching.
1 code implementation • 26 Jul 2020 • Guan-Ying Chen, Kai Han, Boxin Shi, Yasuyuki Matsushita, Kwan-Yee K. Wong
To deal with the uncalibrated scenario where light directions are unknown, we introduce a new convolutional network, named LCNet, to estimate light directions from input images.
1 code implementation • CVPR 2019 • Guan-Ying Chen, Kai Han, Boxin Shi, Yasuyuki Matsushita, Kwan-Yee K. Wong
This paper proposes an uncalibrated photometric stereo method for non-Lambertian scenes based on deep learning.
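For contrast with the uncalibrated, non-Lambertian setting targeted here, the classical calibrated Lambertian baseline reduces to per-pixel linear least squares; a minimal NumPy sketch of that textbook method (not the paper's network):

```python
import numpy as np

def lambertian_photometric_stereo(I: np.ndarray, L: np.ndarray):
    """Classical calibrated photometric stereo under the Lambertian model.

    I : (f, p) observed intensities, f images of p pixels.
    L : (f, 3) known unit light directions (the calibration that
        uncalibrated methods must instead estimate).
    Returns per-pixel unit normals (p, 3) and albedo (p,).
    """
    # Solve L @ (albedo * n) = I for every pixel in one least-squares call.
    G, *_ = np.linalg.lstsq(L, I, rcond=None)   # (3, p) scaled normals
    albedo = np.linalg.norm(G, axis=0)
    normals = (G / np.clip(albedo, 1e-8, None)).T
    return normals, albedo
```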
no code implementations • 29 Nov 2018 • Yutaro Miyauchi, Yusuke Sugano, Yasuyuki Matsushita
Conditional image generation is effective for diverse tasks including training data synthesis for learning-based computer vision.
no code implementations • ECCV 2018 • Hiroaki Santo, Michael Waechter, Masaki Samejima, Yusuke Sugano, Yasuyuki Matsushita
We present a practical method for geometric point light source calibration.
no code implementations • CVPR 2018 • Zhipeng Mo, Boxin Shi, Feng Lu, Sai-Kit Yeung, Yasuyuki Matsushita
This paper presents a photometric stereo method that works with unknown natural illuminations without any calibration object.
no code implementations • CVPR 2018 • Takahiro Isokane, Fumio Okura, Ayaka Ide, Yasuyuki Matsushita, Yasushi Yagi
This paper describes a method for inferring three-dimensional (3D) plant branch structures that are hidden under leaves from multi-view observations.
1 code implementation • CVPR 2018 • Wen-Yan Lin, Siying Liu, Jian-Huang Lai, Yasuyuki Matsushita
Many high-dimensional vector distances tend to a constant.
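A quick numerical illustration of this concentration effect (a toy experiment of my own, not taken from the paper): as dimensionality grows, pairwise distances between i.i.d. Gaussian vectors cluster ever more tightly around a single value.

```python
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
for d in (2, 10, 100, 1000, 10000):
    x = rng.normal(size=(200, d))
    # Pairwise Euclidean distances, normalized by sqrt(d) for comparability.
    dists = pdist(x) / np.sqrt(d)
    print(f"d={d:6d}  mean={dists.mean():.3f}  "
          f"relative spread={dists.std() / dists.mean():.4f}")
```

The relative spread shrinks roughly like $1/\sqrt{d}$, so in high dimensions almost all pairwise distances look alike.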
no code implementations • CVPR 2017 • Zhipeng Mo, Boxin Shi, Sai-Kit Yeung, Yasuyuki Matsushita
Radiometrically calibrating the images from Internet photo collections brings photometric analysis from lab data to big image data in the wild, but conventional calibration methods cannot be directly applied to such image data.
1 code implementation • CVPR 2017 • Jia-Wang Bian, Wen-Yan Lin, Yasuyuki Matsushita, Sai-Kit Yeung, Tan-Dat Nguyen, Ming-Ming Cheng
Incorporating smoothness constraints into feature matching is known to enable ultra-robust matching.
no code implementations • CVPR 2017 • Kenichiro Tanaka, Yasuhiro Mukaigawa, Takuya Funatomi, Hiroyuki Kubo, Yasuyuki Matsushita, Yasushi Yagi
This paper presents a material classification method using an off-the-shelf Time-of-Flight (ToF) camera.
no code implementations • NeurIPS 2016 • Tae-Hyun Oh, Yasuyuki Matsushita, In So Kweon, David Wipf
Commonly used in many applications, robust PCA represents an algorithmic attempt to reduce the sensitivity of classical PCA to outliers.
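For reference, the convex formulation most often identified with robust PCA, principal component pursuit, splits the data matrix into low-rank and sparse parts (the standard textbook objective, not necessarily the formulation analyzed in this paper):

```latex
\min_{L,\,S}\; \|L\|_{*} + \lambda\,\|S\|_{1}
\quad \text{s.t.} \quad M = L + S ,
```

where $\|L\|_*$ is the nuclear norm (sum of singular values), $\|S\|_1$ the entrywise $\ell_1$ norm, and a common default is $\lambda = 1/\sqrt{\max(m, n)}$ for an $m \times n$ data matrix $M$.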
no code implementations • 1 Jun 2016 • Shaodi You, Yasuyuki Matsushita, Sudipta Sinha, Yusuke Bou, Katsushi Ikeuchi
Digitally unwrapping images of paper sheets is crucial for accurate document scanning and text recognition.
no code implementations • CVPR 2016 • Seonghyeon Nam, Youngbae Hwang, Yasuyuki Matsushita, Seon Joo Kim
Modeling and analyzing noise in images is a fundamental task in many computer vision systems.
no code implementations • CVPR 2016 • Kenichiro Tanaka, Yasuhiro Mukaigawa, Hiroyuki Kubo, Yasuyuki Matsushita, Yasushi Yagi
This paper presents a method for recovering shape and normal of a transparent object from a single viewpoint using a Time-of-Flight (ToF) camera.
2 code implementations • 28 Mar 2016 • Tatsunori Taniai, Yasuyuki Matsushita, Yoichi Sato, Takeshi Naemura
The local expansion moves extend traditional expansion moves in two ways: localization and spatial propagation.
1 code implementation • CVPR 2018 • Asako Kanezaki, Yasuyuki Matsushita, Yoshifumi Nishida
We propose a Convolutional Neural Network (CNN)-based model "RotationNet," which takes multi-view images of an object as input and jointly estimates its pose and object category.
no code implementations • 7 Dec 2015 • Tae-Hyun Oh, Yasuyuki Matsushita, In So Kweon, David Wipf
Commonly used in computer vision and other applications, robust PCA represents an algorithmic attempt to reduce the sensitivity of classical PCA to outliers.
no code implementations • ICCV 2015 • Jian Wang, Yasuyuki Matsushita, Boxin Shi, Aswin C. Sankaranarayanan
This paper studies the effect of small angular variations in illumination directions on photometric stereo.
no code implementations • 1 Sep 2015 • Tae-Hyun Oh, Yasuyuki Matsushita, Yu-Wing Tai, In So Kweon
The problems related to NNM, or WNNM, can be solved iteratively by applying a closed-form proximal operator, called Singular Value Thresholding (SVT), or Weighted SVT, but they suffer from the high computational cost of a Singular Value Decomposition (SVD) at each iteration.
no code implementations • CVPR 2015 • Kenichiro Tanaka, Yasuhiro Mukaigawa, Hiroyuki Kubo, Yasuyuki Matsushita, Yasushi Yagi
This paper describes a method for recovering appearance of inner slices of translucent objects.
no code implementations • CVPR 2015 • Tae-Hyun Oh, Yasuyuki Matsushita, Yu-Wing Tai, In So Kweon
The problems related to NNM (or WNNM) can be solved iteratively by applying a closed-form proximal operator, called Singular Value Thresholding (SVT) (or Weighted SVT), but they suffer from the high computational cost of computing a Singular Value Decomposition (SVD) at each iteration.
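A minimal NumPy sketch of the plain (unweighted) SVT proximal operator referred to here, which makes the per-iteration SVD cost explicit (an illustrative baseline, not the paper's fast approximation):

```python
import numpy as np

def singular_value_thresholding(X: np.ndarray, tau: float) -> np.ndarray:
    """Proximal operator of tau * ||X||_* (nuclear norm).

    Soft-thresholds the singular values; the full SVD below is exactly
    the per-iteration cost that fast SVT variants aim to avoid.
    """
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return (U * s_shrunk) @ Vt
```

The weighted variant replaces the scalar threshold `tau` with per-singular-value weights; in either case the SVD dominates the runtime, which is the bottleneck described above.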
no code implementations • CVPR 2015 • Tatsunori Taniai, Yasuyuki Matsushita, Takeshi Naemura
We then present our method as a generalization of SSP, which is further shown to generalize several state-of-the-art techniques for higher-order and pairwise non-submodular functions [Ayed13, Gorelick14, Tang14].
no code implementations • CVPR 2014 • Tatsunori Taniai, Yasuyuki Matsushita, Takeshi Naemura
We present an accurate and efficient stereo matching method using locally shared labels, a new labeling scheme that enables spatial propagation in MRF inference using graph cuts.
no code implementations • CVPR 2014 • Yusuke Sugano, Yasuyuki Matsushita, Yoichi Sato
Unlike existing appearance-based methods that assume person-specific training data, we use a large amount of cross-subject training data to train a 3D gaze estimator.
no code implementations • CVPR 2014 • Jaesik Park, Sudipta N. Sinha, Yasuyuki Matsushita, Yu-Wing Tai, In So Kweon
We show that a non-isotropic near point light source rigidly attached to a camera can be calibrated using multiple images of a weakly textured planar scene.
no code implementations • CVPR 2013 • Feng Lu, Yasuyuki Matsushita, Imari Sato, Takahiro Okabe, Yoichi Sato
We propose an uncalibrated photometric stereo method that works with general and unknown isotropic reflectances.
no code implementations • CVPR 2013 • Chen Li, Shuochen Su, Yasuyuki Matsushita, Kun Zhou, Stephen Lin
We present a method that enhances the performance of depth-from-defocus (DFD) through the use of shading information.