no code implementations • 18 Jul 2024 • Yifan Zhan, Zhuoxiao Li, Muyao Niu, Zhihang Zhong, Shohei Nobuhara, Ko Nishino, Yinqiang Zheng
To further enhance the performance of the observation MLP, we introduce regularization in the canonical space to facilitate the network's ability to learn warping for different frames.
no code implementations • 23 May 2024 • Xinran Nicole Han, Todd Zickler, Ko Nishino
We introduce a model that reconstructs a multimodal distribution of shapes from a single shading image, which aligns with the human experience of multistable perception.
no code implementations • 7 Dec 2023 • Kohei Yamashita, Vincent Lepetit, Ko Nishino
In this paper, we introduce correspondences of the third kind we call reflection correspondences and show that they can help estimate camera pose by just looking at objects without relying on the background.
no code implementations • CVPR 2024 • Yuto Enyo, Ko Nishino
In this paper, we introduce the first stochastic inverse rendering method, which recovers the attenuated frequency spectrum of an illumination jointly with the reflectance of an object of known geometry from a single image.
no code implementations • 7 Dec 2023 • Genki Kinoshita, Ko Nishino
In this paper, we introduce a novel training method for making any monocular depth network learn absolute scale and estimate metric road-scene depth just from regular training data, i.e., driving videos.
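For illustration only (this is not the paper's training method): one common way to anchor absolute scale in road scenes is to exploit the camera's known mounting height above the road plane. All names below are hypothetical:

```python
import numpy as np

def metric_scale_from_camera_height(ground_depths_rel, ground_rays, known_height_m):
    """Scale an up-to-scale depth estimate to metres using the camera's known height.

    ground_depths_rel : relative (up-to-scale) depths of pixels assumed to lie on the road
    ground_rays       : unit view rays for those pixels, shape (N, 3), y-axis pointing down
    known_height_m    : true camera height above the road plane in metres
    """
    # Back-project the road pixels into up-to-scale 3D points.
    pts = ground_depths_rel[:, None] * ground_rays          # (N, 3)
    # Robustly estimate the road plane's distance below the camera.
    est_height_rel = np.median(pts[:, 1])
    # The ratio of true to estimated height converts relative depth to metres.
    return known_height_m / est_height_rel

# Toy check: rays hitting a plane at y = 2 (relative units), true height 1.5 m.
rays = np.array([[0.0, 0.5, np.sqrt(0.75)],
                 [0.0, 0.8, 0.6]])
depths = 2.0 / rays[:, 1]            # depths chosen so each point has y = 2
s = metric_scale_from_camera_height(depths, rays, 1.5)
```

Multiplying the relative depth map by `s` would then yield metric depth, under the stated flat-road assumption.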
no code implementations • CVPR 2024 • Tomoki Ichikawa, Shohei Nobuhara, Ko Nishino
We introduce structured polarization for invisible depth and reflectance sensing (SPIDeRS), the first depth and reflectance sensing method using patterns of polarized light.
no code implementations • 26 Oct 2023 • Kohei Yamashita, Shohei Nobuhara, Ko Nishino
We introduce a novel deep reflectance map estimation network that recovers the camera-view reflectance maps from the surface normals of the current geometry estimate and the input multi-view images.
no code implementations • ICCV 2023 • Shu Nakamura, Yasutomo Kawanishi, Shohei Nobuhara, Ko Nishino
The first is the introduction of a first-of-its-kind large-scale dataset for pointing recognition and direction estimation, which we refer to as the DP Dataset.
no code implementations • CVPR 2024 • Zhuoxiao Li, Zhihang Zhong, Shohei Nobuhara, Ko Nishino, Yinqiang Zheng
If so, is it possible to realize these adversarial attacks in the physical world without being perceived by human eyes?
no code implementations • 23 Mar 2023 • Yuta Yoshitake, Mai Nishimura, Shohei Nobuhara, Ko Nishino
We propose a novel method for joint estimation of shape and pose of rigid objects from their sequentially observed RGB-D images.
no code implementations • 16 Mar 2023 • Mai Nishimura, Shohei Nobuhara, Ko Nishino
We introduce an on-ground Pedestrian World Model, a computational model that can predict how pedestrians move around an observer in the crowd on the ground plane, but from just the egocentric-views of the observer.
1 code implementation • ICCV 2023 • Yifan Zhan, Shohei Nobuhara, Ko Nishino, Yinqiang Zheng
We introduce NeRFrac to realize neural novel view synthesis of scenes captured through refractive surfaces, typically water surfaces.
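Since refraction at the water surface is central here, a minimal Snell's-law ray-bending helper (a generic sketch, not NeRFrac's actual ray model) might look like:

```python
import numpy as np

def refract(d, n, eta_ratio):
    """Refract unit direction d at a surface with unit normal n (Snell's law).

    eta_ratio = n1 / n2, e.g. 1.0 / 1.33 when going from air into water.
    Returns the refracted unit direction, or None on total internal reflection.
    """
    cos_i = -np.dot(n, d)                          # cosine of the incidence angle
    sin2_t = eta_ratio**2 * (1.0 - cos_i**2)
    if sin2_t > 1.0:
        return None                                # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta_ratio * d + (eta_ratio * cos_i - cos_t) * n

# A ray looking straight down at a flat water surface passes through unbent.
down = np.array([0.0, 0.0, -1.0])
up_normal = np.array([0.0, 0.0, 1.0])
bent = refract(down, up_normal, 1.0 / 1.33)
```

Per-camera-ray bending of this kind is what distinguishes rendering through a refractive surface from ordinary novel view synthesis.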
no code implementations • CVPR 2023 • Tomoki Ichikawa, Yoshiki Fukao, Shohei Nobuhara, Ko Nishino
Our key idea is to model the Fresnel reflection and transmission of the surface microgeometry with a collection of oriented mirror facets, both for body and surface reflections.
no code implementations • 12 Oct 2022 • Mai Nishimura, Shohei Nobuhara, Ko Nishino
We introduce a novel learning-based method for view birdification, the task of recovering ground-plane trajectories of pedestrians of a crowd and their observer in the same crowd just from the observed ego-centric video.
no code implementations • 25 Jul 2022 • Kohei Yamashita, Yuto Enyo, Shohei Nobuhara, Ko Nishino
Our key idea is to formulate MVS as an end-to-end learnable network, which we refer to as nLMVS-Net, that seamlessly integrates radiometric cues to leverage surface normals as view-independent surface features for learned cost volume construction and filtering.
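As a hedged illustration of why surface normals make good view-independent matching features (a generic sketch of one ingredient of such a cost volume, not nLMVS-Net itself):

```python
import numpy as np

def normal_consistency_cost(n_ref, n_src, R_src_to_ref):
    """Per-pixel matching cost between a reference-view normal map and a
    source-view normal map rotated into the reference frame.

    Unlike raw colours, surface normals are view-independent once expressed
    in a common coordinate frame, so they can be compared directly.
    n_ref, n_src : (N, 3) unit normals in each camera's frame
    R_src_to_ref : (3, 3) rotation from the source to the reference camera frame
    """
    n_src_in_ref = n_src @ R_src_to_ref.T
    # Cost = 1 - cosine similarity; 0 for a perfect match.
    return 1.0 - np.sum(n_ref * n_src_in_ref, axis=1)

# Same surface point seen from two cameras related by a 90-degree rotation about x.
n_ref = np.array([[0.0, 0.0, 1.0]])
n_src = np.array([[0.0, 1.0, 0.0]])
R = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0, 0.0]])
cost = normal_consistency_cost(n_ref, n_src, R)
```

In a plane-sweep setting, such costs would be evaluated per depth hypothesis and filtered, alongside photometric cues.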
no code implementations • 8 Jul 2022 • Taichi Fukuda, Kotaro Hasegawa, Shinya Ishizaki, Shohei Nobuhara, Ko Nishino
Next, we introduce BlindSpotNet (BSN), a simple network that leverages this dataset for fully automatic estimation of frame-wise blind spot probability maps for arbitrary driving videos.
1 code implementation • CVPR 2022 • Soma Nonaka, Shohei Nobuhara, Ko Nishino
We introduce a novel method and dataset for 3D gaze estimation of a freely moving person from a distance, typically in surveillance views.
1 code implementation • CVPR 2022 • Yupeng Liang, Ryosuke Wakaki, Shohei Nobuhara, Ko Nishino
We use semantic segmentation as a prior to "guide" this filter selection.
Ranked #7 on Semantic Segmentation on UPLight
no code implementations • 9 Nov 2021 • Mai Nishimura, Shohei Nobuhara, Ko Nishino
We introduce view birdification, the problem of recovering ground-plane movements of people in a crowd from an ego-centric video captured from an observer (e.g., a person or a vehicle) also moving in the crowd.
no code implementations • CVPR 2021 • Tomoki Ichikawa, Matthew Purri, Ryo Kawahara, Shohei Nobuhara, Kristin Dana, Ko Nishino
That is, we show that the unique polarization pattern encoded in the polarimetric appearance of an object captured under the sky can be decoded to reveal the surface normal at each pixel.
no code implementations • CVPR 2021 • Yoshiki Fukao, Ryo Kawahara, Shohei Nobuhara, Ko Nishino
Our key idea is to introduce a polarimetric cost volume of distance defined on the polarimetric observations and the polarization state computed from the surface normal.
1 code implementation • 22 Sep 2020 • Jia Xue, Hang Zhang, Ko Nishino, Kristin J. Dana
A key concept is differential angular imaging, where small angular variations in image capture enables angular-gradient features for an enhanced appearance representation that improves recognition.
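The angular-gradient idea can be caricatured in a few lines (my sketch, not the authors' network; the function name is hypothetical): capture two images at a small angular offset and use their finite difference as an extra feature channel.

```python
import numpy as np

def angular_gradient_features(img_a, img_b):
    """Stack a base image with its finite-difference angular gradient.

    img_a, img_b : images of the same surface captured at slightly different
                   viewing angles, shape (H, W, C), float values in [0, 1]
    Returns an (H, W, 2C) feature map: [base image, angular gradient].
    """
    grad = img_b - img_a          # finite-difference approximation of
                                  # d(appearance) / d(angle)
    return np.concatenate([img_a, grad], axis=-1)

# Toy usage: a flat patch whose appearance shifts uniformly with angle.
flat = np.full((4, 4, 3), 0.5)
shifted = np.clip(flat + 0.1, 0.0, 1.0)
feats = angular_gradient_features(flat, shifted)
```

The gradient channels capture how appearance changes with angle, which a single image cannot convey.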
no code implementations • 17 Aug 2020 • Yuzheng Xu, Yang Wu, Nur Sabrina binti Zuraimi, Shohei Nobuhara, Ko Nishino
Video analysis has been moving towards more detailed interpretation (e.g., segmentation) with encouraging progress.
1 code implementation • ECCV 2020 • Zhe Chen, Shohei Nobuhara, Ko Nishino
We introduce a novel neural network-based BRDF model and a Bayesian framework for object inverse rendering, i.e., joint estimation of reflectance and natural illumination from a single image of an object of known geometry.
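A toy version of such joint MAP estimation, reduced to a single scalar albedo and light intensity under a Lambertian model (my sketch with a hand-written prior, not the paper's neural BRDF or Bayesian framework):

```python
import numpy as np

def map_inverse_rendering(observed, n_dot_l, steps=2000, lr=0.1):
    """Toy MAP estimation of a scalar albedo and light intensity from the
    pixel intensities of a Lambertian object of known geometry.

    observed : observed pixel intensities, shape (N,)
    n_dot_l  : known cos(angle) between normal and light per pixel, shape (N,)
    """
    rho, light = 0.5, 0.5                        # initial guesses
    for _ in range(steps):
        resid = rho * light * n_dot_l - observed
        # Gradient of the Gaussian negative log-likelihood ...
        g_rho = np.mean(2.0 * resid * light * n_dot_l)
        g_light = np.mean(2.0 * resid * rho * n_dot_l)
        # ... plus a prior pulling the light toward unit intensity, which
        # resolves the albedo/illumination scale ambiguity.
        g_light += 2.0 * (light - 1.0)
        rho -= lr * g_rho
        light -= lr * g_light
    return rho, light

# Toy scene: true albedo 0.8 under unit-intensity light.
n_dot_l = np.linspace(0.2, 1.0, 50)
observed = 0.8 * 1.0 * n_dot_l
rho, light = map_inverse_rendering(observed, n_dot_l)
```

Without the prior, only the product of albedo and light intensity is constrained; the prior plays the role the paper assigns to learned reflectance and illumination models.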
no code implementations • 10 Dec 2019 • Kohei Yamashita, Shohei Nobuhara, Ko Nishino
In this paper, we introduce 3D-GMNet, a deep neural network for 3D object shape reconstruction from a single image.
no code implementations • 25 Jun 2019 • Ryo Kawahara, Meng-Yu Jennifer Kuo, Shohei Nobuhara, Ko Nishino
In other words, for the first time, we show that capturing a direct and water-reflected scene in a single exposure forms a self-calibrating HDR catadioptric stereo camera.
no code implementations • ECCV 2018 • Ko Nishino, Art Subpa-Asa, Yuta Asano, Mihoko Shimano, Imari Sato
We show that the path length of light captured in each of these observations is naturally lower-bounded by the ring light radius.
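As a simplified geometric sanity check (my own sketch, assuming a coaxial ring light of radius $r$, a camera at its center, and a scene point on the optical axis at depth $d \ge 0$):

```latex
\ell(d)
  = \underbrace{\sqrt{d^2 + r^2}}_{\text{ring to point}}
  + \underbrace{d}_{\text{point to camera}}
  \;\ge\; \sqrt{0 + r^2} \;=\; r ,
```

so the captured path length indeed never falls below the ring radius in this idealized on-axis case.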
no code implementations • 9 Jan 2018 • Gabriel Schwartz, Ko Nishino
We refer to such material properties as visual material attributes.
no code implementations • CVPR 2017 • Mihoko Shimano, Hiroki Okawa, Yuta Asano, Ryoma Bise, Ko Nishino, Imari Sato
We derive an analytical spectral appearance model of wet surfaces that expresses the characteristic spectral sharpening due to multiple scattering and absorption in the surface.
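As a toy caricature of that sharpening effect (not the paper's analytical model): repeated Beer-Lambert attenuation inside the liquid layer suppresses strongly absorbed wavelengths progressively more with each scattering event, so the reflectance peak narrows.

```python
import numpy as np

def wet_reflectance(dry_reflectance, absorption, n_events):
    """Toy spectral-sharpening model: each scattering event inside the water
    layer applies one more Beer-Lambert attenuation factor, so wavelengths
    with higher absorption are suppressed progressively more.

    dry_reflectance : per-wavelength reflectance of the dry surface
    absorption      : per-wavelength absorption coefficient of the liquid
    n_events        : number of scattering events (longer effective path)
    """
    return dry_reflectance * np.exp(-absorption * n_events)

# Three wavelength bands: high absorption off-peak, low absorption at the peak.
dry = np.ones(3)
absorption = np.array([0.5, 0.1, 0.5])
wet_shallow = wet_reflectance(dry, absorption, 1)   # little multiple scattering
wet_deep = wet_reflectance(dry, absorption, 3)      # more multiple scattering
```

The peak-to-flank contrast grows with the number of scattering events, which is the qualitative "sharpening" the abstract refers to.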
no code implementations • CVPR 2017 • Jia Xue, Hang Zhang, Kristin Dana, Ko Nishino
We realize this by developing a framework for differential angular imaging, where small angular variations in image capture provide an enhanced appearance representation and significant recognition improvement.
no code implementations • 28 Nov 2016 • Gabriel Schwartz, Ko Nishino
We achieve this by training a fully-convolutional material recognition network end-to-end with only material category supervision.
no code implementations • 5 Apr 2016 • Gabriel Schwartz, Ko Nishino
In this paper, we introduce a novel material category recognition network architecture to show that perceptual attributes can, in fact, be automatically discovered inside a local material recognition framework.
no code implementations • 5 Apr 2016 • Stephen Lombardi, Ko Nishino
Recovering the radiometric properties of a scene (i.e., the reflectance, illumination, and geometry) is a long-sought ability of computer vision that can provide invaluable information for a wide range of applications.
no code implementations • 25 Mar 2016 • Hang Zhang, Kristin Dana, Ko Nishino
In this work, we address the question of what reflectance can reveal about materials in an efficient manner.
no code implementations • CVPR 2015 • Gabriel Schwartz, Ko Nishino
We argue that it would be ideal to recognize materials without relying on object cues such as shape.
no code implementations • CVPR 2015 • Hang Zhang, Kristin Dana, Ko Nishino
Reflectance offers a unique signature of the material but is challenging to measure and use for recognizing materials due to its high-dimensionality.
no code implementations • CVPR 2014 • Geoffrey Oxholm, Ko Nishino
To this end, we derive a probabilistic geometry estimation method that fully exploits the rich signal embedded in complex appearance.