no code implementations • 18 Jul 2024 • Yifan Zhan, Zhuoxiao Li, Muyao Niu, Zhihang Zhong, Shohei Nobuhara, Ko Nishino, Yinqiang Zheng
To further enhance the performance of the observation MLP, we introduce regularization in the canonical space to facilitate the network's ability to learn warping for different frames.
no code implementations • CVPR 2024 • Tomoki Ichikawa, Shohei Nobuhara, Ko Nishino
We introduce structured polarization for invisible depth and reflectance sensing (SPIDeRS), the first depth and reflectance sensing method using patterns of polarized light.
no code implementations • 26 Oct 2023 • Kohei Yamashita, Shohei Nobuhara, Ko Nishino
We introduce a novel deep reflectance map estimation network that recovers the camera-view reflectance maps from the surface normals of the current geometry estimate and the input multi-view images.
no code implementations • ICCV 2023 • Shu Nakamura, Yasutomo Kawanishi, Shohei Nobuhara, Ko Nishino
The first is the introduction of a first-of-its-kind large-scale dataset for pointing recognition and direction estimation, which we refer to as the DP Dataset.
no code implementations • CVPR 2024 • Zhuoxiao Li, Zhihang Zhong, Shohei Nobuhara, Ko Nishino, Yinqiang Zheng
If so, is it possible to realize these adversarial attacks in the physical world without being perceived by human eyes?
no code implementations • 23 Mar 2023 • Yuta Yoshitake, Mai Nishimura, Shohei Nobuhara, Ko Nishino
We propose a novel method for joint estimation of shape and pose of rigid objects from their sequentially observed RGB-D images.
no code implementations • 16 Mar 2023 • Mai Nishimura, Shohei Nobuhara, Ko Nishino
We introduce an on-ground Pedestrian World Model, a computational model that can predict how pedestrians move around an observer in the crowd on the ground plane, from just the egocentric views of the observer.
1 code implementation • ICCV 2023 • Yifan Zhan, Shohei Nobuhara, Ko Nishino, Yinqiang Zheng
We introduce NeRFrac to realize neural novel view synthesis of scenes captured through refractive surfaces, typically water surfaces.
no code implementations • CVPR 2023 • Ryo Kawahara, Meng-Yu Jennifer Kuo, Shohei Nobuhara
The planar mirrors virtually define multiple viewpoints through multiple reflections, and the monocentric lens realizes high magnification with a less blurry, surround view even in close-up imaging.
no code implementations • CVPR 2023 • Tomoki Ichikawa, Yoshiki Fukao, Shohei Nobuhara, Ko Nishino
Our key idea is to model the Fresnel reflection and transmission of the surface microgeometry with a collection of oriented mirror facets, both for body and surface reflections.
no code implementations • 12 Oct 2022 • Mai Nishimura, Shohei Nobuhara, Ko Nishino
We introduce a novel learning-based method for view birdification, the task of recovering ground-plane trajectories of pedestrians of a crowd and their observer in the same crowd just from the observed ego-centric video.
no code implementations • 25 Jul 2022 • Kohei Yamashita, Yuto Enyo, Shohei Nobuhara, Ko Nishino
Our key idea is to formulate MVS as an end-to-end learnable network, which we refer to as nLMVS-Net, that seamlessly integrates radiometric cues to leverage surface normals as view-independent surface features for learned cost volume construction and filtering.
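A cost volume of the kind nLMVS-Net learns to construct and filter can be illustrated with a minimal plane-sweep sketch. This is a generic hypothetical example, not the paper's network: `build_cost_volume`, the correlation cost, and the pre-warped source features are all assumptions for illustration.

```python
import numpy as np

def build_cost_volume(ref_feat, src_feats_warped):
    """ref_feat: (C, H, W) reference-view features;
    src_feats_warped: (D, C, H, W), source features pre-warped
    to each of D fronto-parallel depth hypotheses."""
    # Per-depth matching cost: negative feature correlation,
    # so a better match yields a lower cost.
    cost = -(ref_feat[None] * src_feats_warped).sum(axis=1)  # (D, H, W)
    return cost

def winner_take_all(cost, depth_values):
    """Pick, per pixel, the depth hypothesis with the lowest cost."""
    return depth_values[np.argmin(cost, axis=0)]  # (H, W)

# Toy demo: only the second depth hypothesis matches the reference.
ref = np.ones((1, 2, 2))
src = np.zeros((3, 1, 2, 2))
src[1] = 1.0
cost = build_cost_volume(ref, src)
depth_map = winner_take_all(cost, np.array([1.0, 2.0, 3.0]))
```

In a learned MVS network such a volume would be regularized by 3D convolutions rather than a hard winner-take-all; the sketch only shows the volume's layout.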
no code implementations • 8 Jul 2022 • Taichi Fukuda, Kotaro Hasegawa, Shinya Ishizaki, Shohei Nobuhara, Ko Nishino
Next, we introduce BlindSpotNet (BSN), a simple network that fully leverages this dataset for automatic estimation of frame-wise blind-spot probability maps for arbitrary driving videos.
1 code implementation • CVPR 2022 • Soma Nonaka, Shohei Nobuhara, Ko Nishino
We introduce a novel method and dataset for 3D gaze estimation of a freely moving person from a distance, typically in surveillance views.
1 code implementation • CVPR 2022 • Yupeng Liang, Ryosuke Wakaki, Shohei Nobuhara, Ko Nishino
We use semantic segmentation as a prior to "guide" this filter selection.
Ranked #7 on Semantic Segmentation on UPLight
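Semantics-guided per-pixel filter selection can be sketched in a few lines. This is a hypothetical illustration, not the paper's method: the names `filtered`, `labels`, and `class_to_filter`, and the hard class-to-filter mapping, are all assumptions (a learned model would predict soft selection weights instead).

```python
import numpy as np

def select_by_semantics(filtered, labels, class_to_filter):
    """filtered: (K, H, W) outputs of K candidate filters;
    labels: (H, W) integer semantic class per pixel;
    class_to_filter: (num_classes,) array mapping class -> filter index."""
    idx = class_to_filter[labels]  # (H, W) chosen filter per pixel
    H, W = labels.shape
    # Gather, per pixel, the output of its selected filter.
    return filtered[idx, np.arange(H)[:, None], np.arange(W)[None, :]]

# Toy demo: two filters (all-zeros, all-ones), two semantic classes.
filtered = np.stack([np.zeros((2, 2)), np.ones((2, 2))])
labels = np.array([[0, 1], [1, 0]])
out = select_by_semantics(filtered, labels, np.array([0, 1]))
```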
no code implementations • 9 Nov 2021 • Mai Nishimura, Shohei Nobuhara, Ko Nishino
We introduce view birdification, the problem of recovering ground-plane movements of people in a crowd from an ego-centric video captured from an observer (e.g., a person or a vehicle) also moving in the crowd.
no code implementations • CVPR 2021 • Yoshiki Fukao, Ryo Kawahara, Shohei Nobuhara, Ko Nishino
Our key idea is to introduce a polarimetric cost volume of distance defined on the polarimetric observations and the polarization state computed from the surface normal.
no code implementations • CVPR 2021 • Tomoki Ichikawa, Matthew Purri, Ryo Kawahara, Shohei Nobuhara, Kristin Dana, Ko Nishino
That is, we show that the unique polarization pattern encoded in the polarimetric appearance of an object captured under the sky can be decoded to reveal the surface normal at each pixel.
no code implementations • 29 Mar 2021 • Kosuke Takahashi, Shohei Nobuhara
A kaleidoscopic imaging system can be regarded as a virtual multi-camera system and has the strong advantage that the virtual cameras are strictly synchronized and share the same intrinsic parameters.
no code implementations • 17 Aug 2020 • Yuzheng Xu, Yang Wu, Nur Sabrina binti Zuraimi, Shohei Nobuhara, Ko Nishino
Video analysis has been moving towards more detailed interpretation (e.g., segmentation) with encouraging progress.
1 code implementation • ECCV 2020 • Zhe Chen, Shohei Nobuhara, Ko Nishino
We introduce a novel neural network-based BRDF model and a Bayesian framework for object inverse rendering, i.e., joint estimation of reflectance and natural illumination from a single image of an object of known geometry.
no code implementations • 9 Mar 2020 • Weimin Wang, Shohei Nobuhara, Ryosuke Nakamura, Ken Sakurada
This paper presents a novel semantic-based online extrinsic calibration approach, SOIC (so, I see), for Light Detection and Ranging (LiDAR) and camera sensors.
no code implementations • 10 Dec 2019 • Kohei Yamashita, Shohei Nobuhara, Ko Nishino
In this paper, we introduce 3D-GMNet, a deep neural network for 3D object shape reconstruction from a single image.
no code implementations • 25 Jun 2019 • Ryo Kawahara, Meng-Yu Jennifer Kuo, Shohei Nobuhara, Ko Nishino
In other words, for the first time, we show that capturing a direct and water-reflected scene in a single exposure forms a self-calibrating HDR catadioptric stereo camera.
no code implementations • 15 Oct 2018 • Jinsong Zhang, Rodrigo Verschae, Shohei Nobuhara, Jean-François Lalonde
Our experiments reveal that the MLP network, already used similarly in previous work, achieves an RMSE skill score of 7% over the commonly used persistence baseline on the 1-minute-ahead photovoltaic power prediction task.
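The persistence baseline and the RMSE skill score used in such short-term forecasting evaluations can be sketched as follows. This is a minimal illustration assuming the common definition skill = 1 − RMSE_model / RMSE_persistence (so the paper's 7% corresponds to a skill of about 0.07); the variable names and toy data are not from the paper.

```python
import numpy as np

def rmse(pred, truth):
    return float(np.sqrt(np.mean((pred - truth) ** 2)))

def skill_score(model_pred, truth, persistence_pred):
    """Skill > 0 means the model beats the persistence baseline;
    skill = 1 means a perfect forecast."""
    return 1.0 - rmse(model_pred, truth) / rmse(persistence_pred, truth)

# Persistence baseline: the forecast for t+1 is simply the value at t.
power = np.array([5.0, 6.0, 7.0, 8.0])   # toy PV power series
persistence = power[:-1]                  # forecasts for t = 1..3
truth = power[1:]                         # realized values at t = 1..3

better_model = truth + 0.5                # a model with RMSE 0.5
skill = skill_score(better_model, truth, persistence)
```

Here the persistence baseline has an RMSE of 1.0 on this toy series, so a model with RMSE 0.5 attains a skill of 0.5.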
no code implementations • CVPR 2017 • Kosuke Takahashi, Akihiro Miyata, Shohei Nobuhara, Takashi Matsuyama
This paper proposes a new extrinsic calibration method for a kaleidoscopic imaging system that estimates the normals and distances of the mirrors.
1 code implementation • 9 Dec 2016 • Hanbyul Joo, Tomas Simon, Xulong Li, Hao Liu, Lei Tan, Lin Gui, Sean Banerjee, Timothy Godisart, Bart Nabbe, Iain Matthews, Takeo Kanade, Shohei Nobuhara, Yaser Sheikh
The core challenges in capturing social interactions are: (1) occlusion is functional and frequent; (2) subtle motion needs to be measured over a space large enough to host a social group; (3) human appearance and configuration variation is immense; and (4) attaching markers to the body may prime the nature of interactions.
no code implementations • ICCV 2015 • Hanbyul Joo, Hao Liu, Lei Tan, Lin Gui, Bart Nabbe, Iain Matthews, Takeo Kanade, Shohei Nobuhara, Yaser Sheikh
We present an approach to capture the 3D structure and motion of a group of people engaged in a social interaction.
no code implementations • ICCV 2015 • Mai Nishimura, Shohei Nobuhara, Takashi Matsuyama, Shinya Shimizu, Kensaku Fujii
This paper presents a new generalized (or ray-pixel, raxel) camera calibration algorithm for camera systems involving distortions by unknown refraction and reflection processes.