no code implementations • ECCV 2020 • Chi Xu, Yasushi Makihara, Xiang Li, Yasushi Yagi, Jianfeng Lu
Specifically, a phase estimation network is introduced for a single input image, and the gait cycle reconstruction network exploits the estimated phase to mitigate the dependence of the encoded feature on the phase of that image.
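The idea of conditioning the encoded feature on an estimated phase can be sketched in a few lines. This is a toy illustration, not the paper's network: `estimate_phase` and `encode` are hypothetical placeholders standing in for learned models, and the sin/cos conditioning is one common way to make phase explicit so the downstream reconstruction can factor it out of the identity feature.

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_phase(silhouette):
    """Toy stand-in for the phase estimation network: maps a single
    silhouette image to a gait phase value in [0, 1)."""
    # hypothetical linear "network"; the weights are random placeholders
    w = rng.normal(size=silhouette.size)
    return float(np.abs(np.tanh(silhouette.ravel() @ w)) % 1.0)

def encode(silhouette, phase):
    """Phase-conditioned encoder: appending (sin, cos) of the phase makes
    phase explicit, so a reconstruction network can learn a feature that
    is less dependent on which phase the single input image happens to show."""
    feat = silhouette.ravel()[:16]          # placeholder identity feature
    angle = 2.0 * np.pi * phase
    return np.concatenate([feat, [np.sin(angle), np.cos(angle)]])

silhouette = rng.random((32, 22))           # toy single silhouette image
phase = estimate_phase(silhouette)
feature = encode(silhouette, phase)
assert 0.0 <= phase < 1.0
assert feature.shape == (18,)
```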
no code implementations • 21 Mar 2023 • Jose Reinaldo da Cunha Santos Aroso Vieira da Silva Neto, Tomoya Nakamura, Yasushi Makihara, Yasushi Yagi
The design freedom of the coded masks used by mask-based lensless cameras is an advantage these systems have over lens-based ones.
no code implementations • 27 Nov 2020 • Takuma Doi, Fumio Okura, Toshiki Nagahara, Yasuyuki Matsushita, Yasushi Yagi
This paper proposes a multi-view extension of instance segmentation without relying on texture or shape descriptor matching.
no code implementations • 19 Oct 2020 • Bowen Wang, Liangzhi Li, Yuta Nakashima, Ryo Kawasaki, Hajime Nagahara, Yasushi Yagi
Semantic video segmentation is a key challenge for various applications.
no code implementations • CVPR 2018 • Takahiro Isokane, Fumio Okura, Ayaka Ide, Yasuyuki Matsushita, Yasushi Yagi
This paper describes a method for inferring three-dimensional (3D) plant branch structures that are hidden under leaves from multi-view observations.
no code implementations • CVPR 2017 • Kenichiro Tanaka, Yasuhiro Mukaigawa, Takuya Funatomi, Hiroyuki Kubo, Yasuyuki Matsushita, Yasushi Yagi
This paper presents a material classification method using an off-the-shelf Time-of-Flight (ToF) camera.
no code implementations • CVPR 2017 • Yasushi Makihara, Atsuyuki Suzuki, Daigo Muramatsu, Xiang Li, Yasushi Yagi
This paper describes a joint intensity metric learning method to improve the robustness of gait recognition with silhouette-based descriptors such as gait energy images.
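The gait energy image (GEI) mentioned above is a standard silhouette-based descriptor: the per-pixel temporal average of size-normalized binary gait silhouettes over one gait cycle. A minimal sketch, using random toy silhouettes in place of real segmented frames:

```python
import numpy as np

def gait_energy_image(silhouettes):
    """Gait energy image: the per-pixel mean of size-normalized binary
    gait silhouettes taken over one gait cycle. Bright pixels are body
    regions that stay foreground throughout the cycle; mid-gray pixels
    capture limb motion."""
    stack = np.asarray(silhouettes, dtype=float)
    return stack.mean(axis=0)

# toy gait cycle: 4 binary silhouette frames of 8x6 pixels
rng = np.random.default_rng(1)
cycle = (rng.random((4, 8, 6)) > 0.5).astype(float)
gei = gait_energy_image(cycle)
assert gei.shape == (8, 6)
assert gei.min() >= 0.0 and gei.max() <= 1.0
```

Because all intensity values land in [0, 1], intensity differences between GEIs are directly comparable, which is what makes a joint intensity metric learnable on top of this descriptor.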
no code implementations • CVPR 2016 • Kenichiro Tanaka, Yasuhiro Mukaigawa, Hiroyuki Kubo, Yasuyuki Matsushita, Yasushi Yagi
This paper presents a method for recovering shape and normal of a transparent object from a single viewpoint using a Time-of-Flight (ToF) camera.
no code implementations • CVPR 2015 • Kenichiro Tanaka, Yasuhiro Mukaigawa, Hiroyuki Kubo, Yasuyuki Matsushita, Yasushi Yagi
This paper describes a method for recovering the appearance of inner slices of translucent objects.
no code implementations • CVPR 2014 • Al Mansur, Yasushi Makihara, Rasyid Aqmar, Yasushi Yagi
Given an input image sequence of the speed-transited gait of a test subject, we estimate the mapping matrix of the test subject, as well as the phase and stride sequences, using an energy minimization framework that considers the following three points: (1) fitness of the synthesized images to the input image sequence as well as to an eigenspace constructed from exemplars of training subjects; (2) smoothness of the phase and stride sequences; and (3) fitness of the pitch and stride to the pitch-stride preference model.
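The three-term energy described above can be sketched as a weighted sum. This is a hedged toy version, not the paper's formulation: the residuals, the weights `w_smooth`/`w_pref`, and the `preferred_stride` function are illustrative placeholders standing in for the fitness terms, learned eigenspace, and pitch-stride preference model.

```python
import numpy as np

def total_energy(phases, strides, data_residuals, pitch, preferred_stride,
                 w_smooth=1.0, w_pref=1.0):
    """Toy three-term energy:
    (1) data fitness: squared residuals of synthesized vs. input images,
    (2) smoothness: squared temporal differences of phase and stride,
    (3) preference: deviation of stride from a pitch-stride preference model."""
    e_data = float(np.sum(np.square(data_residuals)))              # term (1)
    e_smooth = float(np.sum(np.diff(phases) ** 2)                  # term (2)
                     + np.sum(np.diff(strides) ** 2))
    e_pref = float(np.sum((strides - preferred_stride(pitch)) ** 2))  # term (3)
    return e_data + w_smooth * e_smooth + w_pref * e_pref

# toy usage: constant-stride walking matching the preference exactly
phases = np.linspace(0.0, 1.0, 5)
strides = np.full(5, 0.7)
residuals = np.zeros(5)
energy = total_energy(phases, strides, residuals,
                      pitch=2.0, preferred_stride=lambda p: 0.7)
assert energy >= 0.0
```

Minimizing such an energy over the phase and stride sequences trades off image fidelity against temporally plausible, preference-consistent gait parameters.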