Search Results for author: Yasunori Ishii

Found 11 papers, 1 paper with code

Hear-Your-Action: Human Action Recognition by Ultrasound Active Sensing

no code implementations • 15 Sep 2023 • Risako Tanigawa, Yasunori Ishii

Because action recognition from non-invasive ultrasound active sensing has not been well investigated, we create a new dataset for action recognition and compare features for classification.

Action Recognition, Privacy Preserving, +1

PALF: Pre-Annotation and Camera-LiDAR Late Fusion for the Easy Annotation of Point Clouds

no code implementations • 13 Apr 2023 • Yucheng Zhang, Masaki Fukuda, Yasunori Ishii, Kyoko Ohshima, Takayoshi Yamashita

Unlike 2D image labeling, annotating point cloud data is difficult because of its sparsity, irregularity, and low resolution; it requires more manual work, and annotation efficiency is much lower than for 2D images. We therefore propose an annotation algorithm for point cloud data that combines pre-annotation with camera-LiDAR late fusion to make annotation easy and accurate.

3D Object Detection, Autonomous Driving, +2
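
As a rough illustration of the camera-LiDAR association step that a late-fusion annotation pipeline like this relies on (not a reproduction of the paper's algorithm), the sketch below projects LiDAR points into the image plane and keeps those that fall inside a 2D detection box; the matrix names and box format are assumptions.

```python
# Illustrative sketch only: the generic camera-LiDAR association step that
# late-fusion annotation tools rely on, not the paper's pipeline.
# T_cam_lidar (extrinsics), K (intrinsics), and box_xyxy are assumed formats.
import numpy as np

def points_in_box(points_lidar, T_cam_lidar, K, box_xyxy):
    """Return the LiDAR points whose image projection lies inside box_xyxy."""
    # Homogeneous LiDAR points (N, 4) -> camera frame (N, 3)
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    in_front = pts_cam[:, 2] > 0.0          # keep points in front of the camera
    pts_cam = pts_cam[in_front]

    # Pinhole projection with intrinsics K (3, 3)
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]

    x1, y1, x2, y2 = box_xyxy
    inside = (uv[:, 0] >= x1) & (uv[:, 0] <= x2) & \
             (uv[:, 1] >= y1) & (uv[:, 1] <= y2)
    return points_lidar[in_front][inside]
```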

Masking and Mixing Adversarial Training

no code implementations • 16 Feb 2023 • Hiroki Adachi, Tsubasa Hirakawa, Takayoshi Yamashita, Hironobu Fujiyoshi, Yasunori Ishii, Kazuki Kozuka

Adversarial training is a popular and straightforward technique to defend against the threat of adversarial examples.
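
Since the abstract takes standard adversarial training as its starting point, here is a minimal PyTorch-style sketch of that baseline (FGSM perturbation, then a training step on the perturbed batch); it is not the paper's Masking and Mixing method, and `model`, `optimizer`, and `epsilon` are placeholders.

```python
# Minimal sketch of standard adversarial training (FGSM-style), the baseline
# the abstract refers to; NOT the paper's Masking and Mixing method.
import torch
import torch.nn.functional as F

def adversarial_training_step(model, x, y, optimizer, epsilon=8 / 255):
    # 1) Craft an adversarial example with the fast gradient sign method.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    x_adv = (x_adv + epsilon * grad.sign()).clamp(0, 1).detach()

    # 2) Train on the adversarial example instead of the clean input.
    optimizer.zero_grad()
    adv_loss = F.cross_entropy(model(x_adv), y)
    adv_loss.backward()
    optimizer.step()
    return adv_loss.item()
```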

Few-shot Adaptive Object Detection with Cross-Domain CutMix

no code implementations • 31 Aug 2022 • Yuzuru Nakamura, Yasunori Ishii, Yuki Maruyama, Takayoshi Yamashita

In object detection, the amount of data and the cost of collecting it are in a trade-off, and gathering a large amount of data in a specific domain is labor intensive.

Domain Adaptation, Object, +3
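
The title refers to a CutMix-style mixing of images across domains; below is a hedged sketch of that generic idea for image arrays only. The paper's detection-specific handling of boxes and labels is deliberately omitted, and all parameter choices are assumptions.

```python
# Generic CutMix-style mixing between a source-domain and a target-domain
# image, as a rough illustration of the idea in the title; the paper's
# handling of bounding boxes and labels is not reproduced here.
import numpy as np

def cross_domain_cutmix(src_img, tgt_img, rng=np.random.default_rng()):
    """Paste a random rectangle of tgt_img into a copy of src_img.

    Both images are (H, W, C) arrays of the same shape.
    """
    h, w = src_img.shape[:2]
    lam = rng.beta(1.0, 1.0)                      # mixing ratio
    cut_h, cut_w = int(h * np.sqrt(1 - lam)), int(w * np.sqrt(1 - lam))
    cy, cx = rng.integers(h), rng.integers(w)     # patch centre
    y1, y2 = np.clip([cy - cut_h // 2, cy + cut_h // 2], 0, h)
    x1, x2 = np.clip([cx - cut_w // 2, cx + cut_w // 2], 0, w)
    mixed = src_img.copy()
    mixed[y1:y2, x1:x2] = tgt_img[y1:y2, x1:x2]
    return mixed
```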

CutDepth: Edge-aware Data Augmentation in Depth Estimation

no code implementations • 16 Jul 2021 • Yasunori Ishii, Takayoshi Yamashita

It is difficult to collect data on a large scale in monocular depth estimation because the task requires the simultaneous acquisition of RGB images and depths.

Ranked #44 on Monocular Depth Estimation on NYU-Depth V2 (using extra training data)

Data Augmentation, Monocular Depth Estimation
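
A rough sketch of the cut-and-paste idea the title suggests: a rectangle of the input RGB image is replaced by the corresponding region of the ground-truth depth map as augmentation. The uniform region sampling here stands in for the paper's edge-aware strategy, and the probability and size ranges are assumptions.

```python
# Rough sketch of the cut-and-paste augmentation suggested by the title:
# replace a random rectangle of the RGB input with the corresponding region
# of the ground-truth depth map. The paper's edge-aware region sampling is
# not reproduced; probability and size ranges are assumptions.
import numpy as np

def cutdepth_like(rgb, depth, p=0.75, rng=np.random.default_rng()):
    """rgb: (H, W, 3) float in [0, 1]; depth: (H, W) float in [0, 1]."""
    if rng.random() > p:
        return rgb
    h, w = depth.shape
    cut_w, cut_h = int(w * rng.uniform(0.2, 0.6)), int(h * rng.uniform(0.2, 0.6))
    x1, y1 = rng.integers(w - cut_w), rng.integers(h - cut_h)
    out = rgb.copy()
    # Broadcast the single-channel depth across the three RGB channels.
    out[y1:y1 + cut_h, x1:x1 + cut_w] = depth[y1:y1 + cut_h, x1:x1 + cut_w, None]
    return out
```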

Explain to Fix: A Framework to Interpret and Correct DNN Object Detector Predictions

1 code implementation • 19 Nov 2018 • Denis Gudovskiy, Alec Hodgkinson, Takuya Yamaguchi, Yasunori Ishii, Sotaro Tsukizawa

We qualitatively and quantitatively show that the proposed explanation method can be used to find image features which cause failures in DNN object detection.

Computational Efficiency, Feature Importance, +2
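
As a generic illustration of locating image features that drive a particular detection's score (not the explanation method proposed in the paper), the sketch below computes a simple gradient saliency map with respect to one detection; the assumed model output shape is noted in the comments.

```python
# Generic gradient saliency for a single detection score, illustrating how
# influential image features for a failure case can be located; this is NOT
# the specific explanation method proposed in the paper.
import torch

def saliency_for_detection(model, image, det_index, class_index):
    """image: (1, 3, H, W) tensor; returns an (H, W) importance map."""
    image = image.clone().detach().requires_grad_(True)
    scores = model(image)                    # assumed shape: (num_dets, num_classes)
    scores[det_index, class_index].backward()
    # Max over channels of the absolute input gradient gives a pixel-wise map.
    return image.grad.abs().max(dim=1)[0].squeeze(0)
```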
