Search Results for author: Ko Nishino

Found 35 papers, 5 papers with code

SPIDeRS: Structured Polarization for Invisible Depth and Reflectance Sensing

no code implementations 7 Dec 2023 Tomoki Ichikawa, Shohei Nobuhara, Ko Nishino

We introduce structured polarization for invisible depth and reflectance sensing (SPIDeRS), the first depth and reflectance sensing method using patterns of polarized light.

Correspondences of the Third Kind: Camera Pose Estimation from Object Reflection

no code implementations 7 Dec 2023 Kohei Yamashita, Vincent Lepetit, Ko Nishino

In this paper, we introduce a third kind of correspondence, which we call reflection correspondences, and show that they can help estimate camera pose just by looking at objects, without relying on the background.

Motion Estimation · Object · +1

Diffusion Reflectance Map: Single-Image Stochastic Inverse Rendering of Illumination and Reflectance

no code implementations 7 Dec 2023 Yuto Enyo, Ko Nishino

In this paper, we introduce the first stochastic inverse rendering method, which recovers the attenuated frequency spectrum of an illumination jointly with the reflectance of an object of known geometry from a single image.

Inverse Rendering
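
The "attenuated frequency spectrum" intuition can be illustrated with a standard result that is not this paper's stochastic model: a Lambertian BRDF low-pass filters the illumination in the spherical-harmonic (SH) domain (Ramamoorthi and Hanrahan's analytic attenuation). A minimal sketch:

```python
import math

def lambertian_attenuation(l):
    """Per-band SH attenuation A_l of the Lambertian kernel."""
    if l == 0:
        return math.pi
    if l == 1:
        return 2.0 * math.pi / 3.0
    if l % 2 == 1:
        return 0.0  # odd bands above 1 vanish
    return (2.0 * math.pi * (-1) ** (l // 2 - 1) / ((l + 2) * (l - 1))
            * math.factorial(l) / (2 ** l * math.factorial(l // 2) ** 2))

# Reflected-radiance SH coefficients are the illumination coefficients
# scaled band by band, b_l = A_l * L_l, so high frequencies are
# strongly attenuated (the spectrum the method must recover).
illum = {0: 1.0, 1: 0.5, 2: 0.3, 3: 0.2, 4: 0.1}
reflected = {l: lambertian_attenuation(l) * c for l, c in illum.items()}
```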

Camera Height Doesn't Change: Unsupervised Training for Metric Monocular Road-Scene Depth Estimation

no code implementations 7 Dec 2023 Genki Kinoshita, Ko Nishino

In this paper, we introduce a novel training method for making any monocular depth network learn absolute scale and estimate metric road-scene depth just from regular training data, i.e., driving videos.

Monocular Depth Estimation
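
The general idea behind such scale recovery can be sketched as follows (illustrative only, not the authors' training method; all numbers and names are hypothetical): if the camera's true height above the road is known and fixed, the unknown scale of a monocular depth prediction can be fixed by comparing that height with the camera height implied by the predicted depth of ground pixels.

```python
def recover_metric_scale(estimated_camera_height, true_camera_height):
    """Scale factor mapping relative depth to metric depth."""
    return true_camera_height / estimated_camera_height

# Hypothetical numbers: the network places the ground plane 0.5
# (unitless) below the camera, but the real rig sits 1.5 m above
# the road, so all depths must be scaled by 3.
scale = recover_metric_scale(0.5, 1.5)              # -> 3.0
relative_depth = [2.0, 5.0, 10.0]                   # network output
metric_depth = [scale * d for d in relative_depth]  # metres
```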

DeepShaRM: Multi-View Shape and Reflectance Map Recovery Under Unknown Lighting

no code implementations 26 Oct 2023 Kohei Yamashita, Shohei Nobuhara, Ko Nishino

We introduce a novel deep reflectance map estimation network that recovers the camera-view reflectance maps from the surface normals of the current geometry estimate and the input multi-view images.

Inverse Rendering

DeePoint: Visual Pointing Recognition and Direction Estimation

no code implementations ICCV 2023 Shu Nakamura, Yasutomo Kawanishi, Shohei Nobuhara, Ko Nishino

The first is the introduction of a first-of-its-kind large-scale dataset for pointing recognition and direction estimation, which we refer to as the DP Dataset.

Fooling Polarization-based Vision using Locally Controllable Polarizing Projection

no code implementations 31 Mar 2023 Zhuoxiao Li, Zhihang Zhong, Shohei Nobuhara, Ko Nishino, Yinqiang Zheng

If so, is it possible to realize these adversarial attacks in the physical world without being perceived by human eyes?

Color Constancy · Reflection Removal · +1

TransPoser: Transformer as an Optimizer for Joint Object Shape and Pose Estimation

no code implementations 23 Mar 2023 Yuta Yoshitake, Mai Nishimura, Shohei Nobuhara, Ko Nishino

We propose a novel method for joint estimation of shape and pose of rigid objects from their sequentially observed RGB-D images.

Pose Estimation

InCrowdFormer: On-Ground Pedestrian World Model From Egocentric Views

no code implementations 16 Mar 2023 Mai Nishimura, Shohei Nobuhara, Ko Nishino

We introduce an on-ground Pedestrian World Model, a computational model that can predict how pedestrians move around an observer in the crowd on the ground plane, but from just the egocentric-views of the observer.

NeRFrac: Neural Radiance Fields through Refractive Surface

1 code implementation ICCV 2023 Yifan Zhan, Shohei Nobuhara, Ko Nishino, Yinqiang Zheng

We introduce NeRFrac to realize neural novel view synthesis of scenes captured through refractive surfaces, typically water surfaces.

Novel View Synthesis
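
Rays crossing a refractive surface bend according to Snell's law; a hedged sketch of its standard vector form (textbook optics, not NeRFrac's actual ray-bending code):

```python
import math

def refract(d, n, eta):
    """Refract unit direction d at a surface with unit normal n.

    eta = n_incident / n_transmitted (Snell's law, vector form).
    Returns None on total internal reflection.
    """
    cos_i = -sum(di * ni for di, ni in zip(d, n))
    sin2_t = eta * eta * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None  # total internal reflection
    cos_t = math.sqrt(1.0 - sin2_t)
    return tuple(eta * di + (eta * cos_i - cos_t) * ni
                 for di, ni in zip(d, n))

# Air-to-water ray (eta = 1/1.33): undeviated at normal incidence.
t = refract((0.0, 0.0, -1.0), (0.0, 0.0, 1.0), 1.0 / 1.33)
```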

Fresnel Microfacet BRDF: Unification of Polari-Radiometric Surface-Body Reflection

no code implementations CVPR 2023 Tomoki Ichikawa, Yoshiki Fukao, Shohei Nobuhara, Ko Nishino

Our key idea is to model the Fresnel reflection and transmission of the surface microgeometry with a collection of oriented mirror facets, both for body and surface reflections.
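
The Fresnel reflection and transmission the model builds on follow the classic Fresnel equations for a dielectric interface; an illustrative sketch (standard optics, not the paper's BRDF):

```python
import math

def fresnel_reflectance(cos_i, n1=1.0, n2=1.5):
    """Unpolarized Fresnel reflectance at a dielectric interface."""
    sin_t = (n1 / n2) * math.sqrt(max(0.0, 1.0 - cos_i * cos_i))
    if sin_t >= 1.0:
        return 1.0  # total internal reflection
    cos_t = math.sqrt(1.0 - sin_t * sin_t)
    r_s = (n1 * cos_i - n2 * cos_t) / (n1 * cos_i + n2 * cos_t)  # s-pol
    r_p = (n1 * cos_t - n2 * cos_i) / (n1 * cos_t + n2 * cos_i)  # p-pol
    return 0.5 * (r_s * r_s + r_p * r_p)

# Normal incidence reduces to ((n1 - n2) / (n1 + n2))**2 = 0.04 for
# a typical dielectric, and reflectance rises toward 1 at grazing.
R0 = fresnel_reflectance(1.0)
```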

ViewBirdiformer: Learning to recover ground-plane crowd trajectories and ego-motion from a single ego-centric view

no code implementations 12 Oct 2022 Mai Nishimura, Shohei Nobuhara, Ko Nishino

We introduce a novel learning-based method for view birdification, the task of recovering ground-plane trajectories of pedestrians of a crowd and their observer in the same crowd just from the observed ego-centric video.

Robot Navigation

nLMVS-Net: Deep Non-Lambertian Multi-View Stereo

no code implementations 25 Jul 2022 Kohei Yamashita, Yuto Enyo, Shohei Nobuhara, Ko Nishino

Our key idea is to formulate MVS as an end-to-end learnable network, which we refer to as nLMVS-Net, that seamlessly integrates radiometric cues to leverage surface normals as view-independent surface features for learned cost volume construction and filtering.

BlindSpotNet: Seeing Where We Cannot See

no code implementations 8 Jul 2022 Taichi Fukuda, Kotaro Hasegawa, Shinya Ishizaki, Shohei Nobuhara, Ko Nishino

Next, we introduce BlindSpotNet (BSN), a simple network that fully leverages this dataset for fully automatic estimation of frame-wise blind spot probability maps for arbitrary driving videos.

Monocular Depth Estimation · road scene understanding · +1

Dynamic 3D Gaze From Afar: Deep Gaze Estimation From Temporal Eye-Head-Body Coordination

1 code implementation CVPR 2022 Soma Nonaka, Shohei Nobuhara, Ko Nishino

We introduce a novel method and dataset for 3D gaze estimation of a freely moving person from a distance, typically in surveillance views.

Gaze Estimation

View Birdification in the Crowd: Ground-Plane Localization from Perceived Movements

no code implementations 9 Nov 2021 Mai Nishimura, Shohei Nobuhara, Ko Nishino

We introduce view birdification, the problem of recovering ground-plane movements of people in a crowd from an ego-centric video captured from an observer (e.g., a person or a vehicle) also moving in the crowd.

Shape From Sky: Polarimetric Normal Recovery Under the Sky

no code implementations CVPR 2021 Tomoki Ichikawa, Matthew Purri, Ryo Kawahara, Shohei Nobuhara, Kristin Dana, Ko Nishino

That is, we show that the unique polarization pattern encoded in the polarimetric appearance of an object captured under the sky can be decoded to reveal the surface normal at each pixel.

Navigate
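
The per-pixel polarization state such methods decode can be summarized by the degree and angle of linear polarization computed from Stokes parameters; a generic sketch (textbook polarimetry, not the paper's sky model):

```python
import math

def linear_stokes(i0, i45, i90, i135):
    """Linear Stokes parameters from four polarizer-angle images."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)  # total intensity
    s1 = i0 - i90
    s2 = i45 - i135
    return s0, s1, s2

def dolp(s0, s1, s2):
    """Degree of linear polarization."""
    return math.hypot(s1, s2) / s0

def aolp(s0, s1, s2):
    """Angle of linear polarization (radians)."""
    return 0.5 * math.atan2(s2, s1)

# Fully horizontally polarized light: Malus's law gives
# I(0)=1, I(45)=0.5, I(90)=0, I(135)=0.5.
s = linear_stokes(1.0, 0.5, 0.0, 0.5)
```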

Polarimetric Normal Stereo

no code implementations CVPR 2021 Yoshiki Fukao, Ryo Kawahara, Shohei Nobuhara, Ko Nishino

Our key idea is to introduce a polarimetric cost volume of distance defined on the polarimetric observations and the polarization state computed from the surface normal.

Denoising

Differential Viewpoints for Ground Terrain Material Recognition

1 code implementation 22 Sep 2020 Jia Xue, Hang Zhang, Ko Nishino, Kristin J. Dana

A key concept is differential angular imaging, where small angular variations in image capture enables angular-gradient features for an enhanced appearance representation that improves recognition.

Autonomous Driving · Material Recognition · +1
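
The angular-gradient feature can be sketched as a finite difference of two captures taken a small angle apart (illustrative toy code, not the released implementation):

```python
def angular_gradient(image_a, image_b, delta_angle):
    """Finite-difference approximation of dI/d(theta), per pixel."""
    return [[(b - a) / delta_angle for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(image_a, image_b)]

# Toy 2x2 "images" captured 0.1 rad apart; the gradient highlights
# pixels whose appearance changes with viewing angle.
i_a = [[0.2, 0.4], [0.6, 0.8]]
i_b = [[0.25, 0.4], [0.55, 0.9]]
grad = angular_gradient(i_a, i_b, 0.1)
```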

Video Region Annotation with Sparse Bounding Boxes

no code implementations 17 Aug 2020 Yuzheng Xu, Yang Wu, Nur Sabrina binti Zuraimi, Shohei Nobuhara, Ko Nishino

Video analysis has been moving toward more detailed interpretation (e.g., segmentation) with encouraging progress.

Invertible Neural BRDF for Object Inverse Rendering

1 code implementation ECCV 2020 Zhe Chen, Shohei Nobuhara, Ko Nishino

We introduce a novel neural network-based BRDF model and a Bayesian framework for object inverse rendering, i.e., joint estimation of reflectance and natural illumination from a single image of an object of known geometry.

Inverse Rendering · Object

3D-GMNet: Single-View 3D Shape Recovery as A Gaussian Mixture

no code implementations 10 Dec 2019 Kohei Yamashita, Shohei Nobuhara, Ko Nishino

In this paper, we introduce 3D-GMNet, a deep neural network for 3D object shape reconstruction from a single image.

3D Reconstruction · Density Estimation · +1

Appearance and Shape from Water Reflection

no code implementations 25 Jun 2019 Ryo Kawahara, Meng-Yu Jennifer Kuo, Shohei Nobuhara, Ko Nishino

In other words, for the first time, we show that capturing a direct and water-reflected scene in a single exposure forms a self-calibrating HDR catadioptric stereo camera.

3D Scene Reconstruction · Stereo Matching · +1

Variable Ring Light Imaging: Capturing Transient Subsurface Scattering with An Ordinary Camera

no code implementations ECCV 2018 Ko Nishino, Art Subpa-Asa, Yuta Asano, Mihoko Shimano, Imari Sato

We show that the path length of light captured in each of these observations is naturally lower-bounded by the ring light radius.

Wetness and Color From a Single Multispectral Image

no code implementations CVPR 2017 Mihoko Shimano, Hiroki Okawa, Yuta Asano, Ryoma Bise, Ko Nishino, Imari Sato

We derive an analytical spectral appearance model of wet surfaces that expresses the characteristic spectral sharpening due to multiple scattering and absorption in the surface.

Autonomous Vehicles

Differential Angular Imaging for Material Recognition

no code implementations CVPR 2017 Jia Xue, Hang Zhang, Kristin Dana, Ko Nishino

We realize this by developing a framework for differential angular imaging, where small angular variations in image capture provide an enhanced appearance representation and significant recognition improvement.

Material Recognition

Material Recognition from Local Appearance in Global Context

no code implementations 28 Nov 2016 Gabriel Schwartz, Ko Nishino

We achieve this by training a fully-convolutional material recognition network end-to-end with only material category supervision.

Material Recognition

Integrating Local Material Recognition with Large-Scale Perceptual Attribute Discovery

no code implementations 5 Apr 2016 Gabriel Schwartz, Ko Nishino

In this paper, we introduce a novel material category recognition network architecture to show that perceptual attributes can, in fact, be automatically discovered inside a local material recognition framework.

Attribute · Material Recognition

Radiometric Scene Decomposition: Scene Reflectance, Illumination, and Geometry from RGB-D Images

no code implementations 5 Apr 2016 Stephen Lombardi, Ko Nishino

Recovering the radiometric properties of a scene (i.e., the reflectance, illumination, and geometry) is a long-sought ability of computer vision that can provide invaluable information for a wide range of applications.

Scene Understanding
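
The simplest forward model that such a decomposition inverts is Lambertian shading under a single distant light, where observed intensity couples reflectance (albedo), illumination (light direction and strength), and geometry (surface normal); a hedged sketch, not the paper's full model:

```python
def lambertian_shading(albedo, normal, light_dir, light_intensity=1.0):
    """Observed intensity under a single distant light (Lambertian)."""
    n_dot_l = sum(n * l for n, l in zip(normal, light_dir))
    return albedo * light_intensity * max(0.0, n_dot_l)

# A surface facing the light reflects its full albedo-scaled share...
i_front = lambertian_shading(0.8, (0.0, 0.0, 1.0), (0.0, 0.0, 1.0))  # 0.8
# ...while a back-facing surface receives nothing.
i_back = lambertian_shading(0.8, (0.0, 0.0, 1.0), (0.0, 0.0, -1.0))  # 0.0
```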

Automatically Discovering Local Visual Material Attributes

no code implementations CVPR 2015 Gabriel Schwartz, Ko Nishino

We argue that it would be ideal to recognize materials without relying on object cues such as shape.

Object · Object Recognition

Reflectance Hashing for Material Recognition

no code implementations CVPR 2015 Hang Zhang, Kristin Dana, Ko Nishino

Reflectance offers a unique signature of the material but is challenging to measure and use for recognizing materials due to its high-dimensionality.

Dictionary Learning · Material Recognition

Multiview Shape and Reflectance from Natural Illumination

no code implementations CVPR 2014 Geoffrey Oxholm, Ko Nishino

To this end, we derive a probabilistic geometry estimation method that fully exploits the rich signal embedded in complex appearance.
